August 2017, 10(4): 895-907. doi: 10.3934/dcdss.2017045
Large solutions of parabolic logistic equation with spatial and temporal degeneracies
Andrey Shishkov 1,2,
Institute of Applied Mathematics and Mechanics, The National Academy of Sciences of Ukraine, Dobrovol'skogo str. 1, Slavyansk, Donetsk region, 84116, Ukraine
Peoples' Friendship University of Russia, Miklukho-Maklaya str. 6, Moscow, 117198, Russia
Received May 2016 Revised October 2016 Published April 2017
We study the asymptotic behavior, as $t\rightarrow T$, of arbitrary solutions of the equation
$P_0(u):=u_t-\Delta u=a(t,x)u-b(t,x)|u|^{p-1}u\ \ \ \text{ in } [0,T)\times\Omega,$
where $\Omega$ is a smooth bounded domain in $\mathbb{R}^N$, $0 < T < \infty$, $p>1$, $a(\cdot)$ is continuous, and $b(\cdot)$ is a continuous nonnegative function satisfying the condition
$b(t, x)\geqslant a_1(t)g_1(d(x)),\qquad d(x):=\textrm{dist}(x, \partial\Omega).$
Here $g_1(s)$ is an arbitrary nondecreasing function that is positive for all $s>0$, and $a_1(t)$ satisfies
$a_1(t)\geqslant c_0\exp(-\omega(T-t)(T-t)^{-1})\ \ \ \forall t<T,\ c_0=\textrm{const}>0,$
with some continuous nondecreasing function $\omega(\tau)\geqslant0$ for all $\tau>0$. Under the additional condition
$\omega(\tau)\rightarrow\omega_0=\textrm{const}>0\ \ \ \text{ as }\tau\rightarrow0,$
it is proved that there exists a constant $k$, $0 < k < \infty$, such that all solutions of the above equation (in particular, solutions satisfying the initial-boundary condition $u|_\Gamma=\infty$, $\Gamma=(0, T)\times\partial\Omega\cup\{0\}\times\Omega$) remain uniformly bounded in $\Omega_0:=\{x\in\Omega:d(x)>k\omega_0^{\frac12}\}$. The method of investigation is based on local energy estimates and is applicable to a wide class of equations. Accordingly, the paper also obtains similar sufficient conditions for the localization of the singularity set of solutions near the boundary of the domain for equations with principal part
$P_0(u)=(|u|^{\lambda-1}u)_t-\sum_{i=1}^N(|\nabla_xu|^{q-1}u_{x_i})_{x_i},\qquad 0 < \lambda\leqslant q < p.$
Keywords: Large solutions, parabolic logistic equation, temporal degeneracies, spatial degeneracies.
Citation: Andrey Shishkov. Large solutions of parabolic logistic equation with spatial and temporal degeneracies. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4) : 895-907. doi: 10.3934/dcdss.2017045
February 2020, 14(1): 127-136. doi: 10.3934/amc.2020010
Highly nonlinear (vectorial) Boolean functions that are symmetric under some permutations
Selçuk Kavut 1, and Seher Tutdere 2,
Department of Computer Engineering, Faculty of Engineering, Balıkesir University, 10145 Balıkesir, Turkey
Department of Mathematics, Faculty of Arts and Science, Balıkesir University, 10145 Balıkesir, Turkey
*Corresponding author: Selçuk Kavut
Received November 2018 Published August 2019
Fund Project: This work is supported financially by Balıkesir University under grant BAP 2015/23
We first give a brief survey of the results on highly nonlinear single-output Boolean functions and bijective S-boxes that are symmetric under some permutations. After that, we perform a heuristic search for the symmetric (and involution) S-boxes which are bijective in dimension 8 and identify corresponding permutations yielding rich classes in terms of cryptographically desirable properties.
Keywords: Boolean functions, covering radius, differential uniformity, heuristic search, nonlinearity.
Mathematics Subject Classification: 11T71, 94A60.
Citation: Selçuk Kavut, Seher Tutdere. Highly nonlinear (vectorial) Boolean functions that are symmetric under some permutations. Advances in Mathematics of Communications, 2020, 14 (1) : 127-136. doi: 10.3934/amc.2020010
M. Bartholomew-Biggs, Chapter 5: The steepest descent method, in Nonlinear Optimization with Financial Applications. Springer US, (2005), 51–64. Google Scholar
E. Biham and A. Shamir, Differential cryptanalysis of DES-like cryptosystems, Journal of Cryptology, 4 (1991), 3-72. doi: 10.1007/BF00630563. Google Scholar
K. A. Browning, J. F. Dillon, M. T. McQuistan and A. J. Wolfe, An APN permutation in dimension six, Contemporary Mathematics, 518 (2010), 33-42. doi: 10.1090/conm/518/10194. Google Scholar
C. Ding, G. Xiao and W. Shan, The Stability Theory of Stream Ciphers, Lecture Notes in Computer Science, 561. Springer-Verlag, Berlin Heidelberg, 1991. doi: 10.1007/3-540-54973-0. Google Scholar
H. Dobbertin, Construction of bent functions and balanced Boolean functions with high nonlinearity, Lecture Notes in Computer Science, 1008 (1994), 61-74. doi: 10.1007/3-540-60590-8_5. Google Scholar
E. Filiol and C. Fontaine, Highly nonlinear balanced Boolean functions with a good correlation-immunity, Lecture Notes in Computer Science, 1403 (1998), 475-488. doi: 10.1007/BFb0054147. Google Scholar
C. Fontaine, On some cosets of the first-order Reed-Muller code with high minimum weight, IEEE Transactions on Information Theory, 45 (1999), 1237-1243. doi: 10.1109/18.761276. Google Scholar
X.-D. Hou, On the norm and covering radius of first-order Reed-Muller codes, IEEE Transactions on Information Theory, 43 (1997), 1025-1027. doi: 10.1109/18.568715. Google Scholar
S. Kavut, Results on rotation-symmetric S-boxes, Information Sciences, 201 (2012), 93-113. doi: 10.1016/j.ins.2012.02.030. Google Scholar
S. Kavut and S. Baloǧlu, Results on symmetric S-boxes constructed by concatenation of RSSBs, Cryptography and Communications, 11 (2019), 641-660. doi: 10.1007/s12095-018-0318-1. Google Scholar
S. Kavut, S. Maitra, S. Sarkar and M. D. Yücel, Enumeration of 9-variable rotation symmetric Boolean functions having nonlinearity $>240$, Lecture Notes in Computer Science, 4329 (2006), 266-279. doi: 10.1007/11941378_19. Google Scholar
S. Kavut, S. Maitra and M. D. Yücel, Search for Boolean functions with excellent profiles in the rotation symmetric class, IEEE Transactions on Information Theory, 53 (2007), 1743-1751. doi: 10.1109/TIT.2007.894696. Google Scholar
S. Kavut and M. D. Yücel, 9-variable Boolean functions with nonlinearity 242 in the generalized rotation symmetric class, Information and Computation, 208 (2010), 341-350. doi: 10.1016/j.ic.2009.12.002. Google Scholar
S. Maitra, Balanced Boolean function on 13-variables having nonlinearity strictly greater than the bent concatenation bound, Boolean Functions in Cryptology and Information Security, 173–182, NATO Sci. Peace Secur. Ser. D Inf. Commun. Secur., 18, IOS, Amsterdam, 2008. Available from: https://eprint.iacr.org/2007/309.pdf. Google Scholar
S. Maitra, S. Kavut and M. D. Yücel, Balanced Boolean function on 13-variables having nonlinearity greater than the bent concatenation bound, Proceedings of Boolean Functions: Cryptography and Applications, (2008), 109–118. Google Scholar
M. Matsui, Linear cryptanalysis method for DES cipher, Lecture Notes in Computer Science, 765 (1993), 386-397. doi: 10.1007/3-540-48285-7_33. Google Scholar
K. Nyberg, Differentially uniform mappings for cryptography, Lecture Notes in Computer Science, 765 (1994), 55-64. doi: 10.1007/3-540-48285-7_6. Google Scholar
N. J. Patterson and D. H. Wiedemann, The covering radius of the $(2^{15}, 16)$ Reed-Muller code is at least 16276, IEEE Transactions on Information Theory, 29 (1983), 354-356. doi: 10.1109/TIT.1983.1056679. Google Scholar
V. Rijmen, P. S. L. M. Barreto and D. L. Gazzoni Filho, Rotation symmetry in algebraically generated cryptographic substitution tables, Information Processing Letters, 106 (2008), 246-250. doi: 10.1016/j.ipl.2007.09.012. Google Scholar
S. Sarkar and S. Maitra, Idempotents in the neighbourhood of Patterson-Wiedemann functions having Walsh spectra zeros, Designs, Codes and Cryptography, 49 (2008), 95-103. doi: 10.1007/s10623-008-9181-y. Google Scholar
P. Stǎnicǎ and S. Maitra, Rotation symmetric Boolean functions - count and cryptographic properties, Discrete Applied Mathematics, 156 (2008), 1567-1580. doi: 10.1016/j.dam.2007.04.029. Google Scholar
P. Stǎnicǎ, S. Maitra and J. Clark, Results on rotation symmetric bent and correlation immune Boolean functions, Lecture Notes in Computer Science, 3017 (2004), 161-177. Google Scholar
Table 1. A summary of the highest nonlinearities for odd $ n\ge 9 $

Number of variables ($ n $)                                               9      11     13     15
Bent concatenation bound ($ 2^{n-1}-2^\frac{n-1}{2} $)                    240    992    4032   16256
Upper bound ($ 2\left\lfloor 2^{n-2}-2^{\frac{n}{2}-2}\right\rfloor $)    244    1000   4050   16292
Unbalanced nonlinearities [18]                                            -      -      -      16276
Unbalanced nonlinearities [13]                                            242    996    4040   -
Balanced nonlinearities [15]                                              -      -      4036   -
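As a quick cross-check (not taken from the paper itself), the two bound rows above follow directly from the formulas in parentheses; a short Python sketch reproduces the tabulated values:

```python
import math

def bent_concatenation_bound(n):
    # 2^(n-1) - 2^((n-1)/2); exact for odd n
    return 2 ** (n - 1) - 2 ** ((n - 1) // 2)

def upper_bound(n):
    # 2 * floor(2^(n-2) - 2^(n/2 - 2))
    return 2 * math.floor(2 ** (n - 2) - 2 ** (n / 2 - 2))

for n in (9, 11, 13, 15):
    print(n, bent_concatenation_bound(n), upper_bound(n))
# 9 240 244 / 11 992 1000 / 13 4032 4050 / 15 16256 16292
```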
Table 2. Best achieved cryptographic properties [nonlinearity, differential uniformity, algebraic degree]
$ \# $   Representative permutation   Space size   Best result (for involution S-boxes)   Best result
1 $ (7,6,2,1,8,5,4,3) $ $ 2^{147.93} $ $ [84,44,7] $ $ [84,44,7] $
3 $ (6,7,5,8,4,3,1,2)^a $ $ 2^{208.29} $ $ \bf{[106,6,7]} $ $ \bf{[106,6,7]}, \bf{[108,8,6]} $
4 $ (4,3,2,5,8,1,7,6) $ $ 2^{227.35} $ $ [0, -, -] $ $ [0, -, -] $
5 $ (4,5,3,2,8,1,6,7) $ $ 2^{243.74} $ $ \bf {[106,6,7]} $ $ \bf {[106,6,7]} $
6 $ (8,3,4,6,7,1,5,2) $ $ 2^{277.78} $ $ [104,6,7] $ $ [104,6,7], {\bf{[106,8,7]}} $
7 $ (8,6,3,5,2,1,7,4) $ $ 2^{283.02} $ $ [104,10,7] $ $ \it {[104,8,7]} $
9 $ (2,6,3,4,5,8,1,7) $ $ 2^{358.65} $ $ [100,10,7] $ $ [100,10,7],\it{[104,20,7]} $
10 $ (7,3,6,1,8,2,4,5) $ $ 2^{359.22} $ $ [0, -, -] $ $ [0, -, -] $
11 $ (7,6,1,2,3,8,5,4)^b $ $ 2^{412.21} $ $ [104,6,7] $ $ [104,6,7], {\bf{[106,8,7]}} $
13 $ (6,4,8,2,1,7,5,3) $ $ 2^{440.19} $ $ [84,22,7] $ $ [84,22,7] $
16 $ (4,3,8,5,1,6,7,2) $ $ 2^{565.87} $ $ [104,6,7] $ $ [104,6,7] $
18 $ (7,6,5,8,3,2,1,4)^c $ $ 2^{824.73} $ $ [104,6,7] $ $ [104,6,7] $
21 $ (8,2,3,4,5,6,7,1) $ $ 2^{1076.16} $ $ [0, -, -] $ $ [0, -, -] $
22 $ (1,2,3,4,5,6,7,8)^d $ $ 2^{1684} $ $ [102,6,7] $ $ \it{[104,6,7]} $
$ ^a: $ Linear equivalent to RSSBs
$ ^b: $ Linear equivalent to 2-RSSBs
$ ^c: $ Linear equivalent to 4-RSSBs
$ ^d: $ The search space of all bijective S-boxes
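For context (this is not the authors' search code), the first two figures in each bracketed triple above — nonlinearity and differential uniformity — can be computed for any 8-bit bijective S-box, supplied here as a hypothetical list `sbox` of 256 output values, with a straightforward, unoptimised Python sketch:

```python
def nonlinearity(sbox, n=8):
    # NL(S) = 2^(n-1) - (1/2) * max |W_b(a)| over all input masks a and
    # nonzero output masks b, where W_b(a) is the Walsh-Hadamard
    # coefficient of the component function x -> <b,S(x)> xor <a,x>.
    max_walsh = 0
    for b in range(1, 1 << n):
        for a in range(1 << n):
            w = 0
            for x in range(1 << n):
                bit = bin((b & sbox[x]) ^ (a & x)).count("1") & 1
                w += 1 - 2 * bit        # (-1)^bit
            max_walsh = max(max_walsh, abs(w))
    return (1 << (n - 1)) - max_walsh // 2

def differential_uniformity(sbox, n=8):
    # Largest entry of the difference distribution table over nonzero
    # input differences (and any output difference).
    worst = 0
    for da in range(1, 1 << n):
        counts = [0] * (1 << n)
        for x in range(1 << n):
            counts[sbox[x] ^ sbox[x ^ da]] += 1
        worst = max(worst, max(counts))
    return worst

# Sanity check: the identity permutation is affine, so its nonlinearity
# is 0 and its differential uniformity is 2^n:
#   nonlinearity(list(range(256))) == 0
#   differential_uniformity(list(range(256))) == 256
```

A practical search would replace the innermost loop with a fast Walsh-Hadamard transform, since millions of candidate S-boxes have to be evaluated.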
Assessment of PIT tag retention, growth and post-tagging survival in juvenile lumpfish, Cyclopterus lumpus
Jack D'Arcy ORCID: orcid.org/0000-0003-4771-13201,
Suzanne Kelly1,
Tom McDermott1,
John Hyland2,
Dave Jackson1 &
Majbritt Bolton-Warberg2
Passive integrated transponder (PIT) tags are used to study the movement and behaviour in populations of a wide variety of fish species and for a number of different applications from fisheries to aquaculture. Before embarking on long-term studies, it is important to collect information on both short- and medium-term survival and tag retention for the species in question. In this study, 90 juvenile lumpfish (10–20 g, 30 fish per replicate tank) were implanted with 12.5-mm FDX PIT tags.
Tag retention, growth rates and survival were compared to those of fish subjected to handling only (90 fish, 30 per replicate tank). Overall survival was 100% during the 28-day monitoring period, and tag retention was 99%.
Results indicate that retention rates of 12.5-mm PIT tags in juvenile lumpfish are high, and there is no significant effect on growth rates or survival in a hatchery environment.
Passive integrated transponder (PIT) tags are a low-cost method for marking individuals for breeding applications and mark-recapture studies as well as offering a non-obtrusive method to observe progress, behaviour and movements of tagged individuals using antennae. PIT tags have been used on many different species of fish since the 1980s e.g. [1,2,3,4,5,6]. Earlier studies were primarily concerned with tag retention and effectiveness of the technology with a variety of body locations tested as insertion points (e.g. peritoneal cavity, opercular muscle and dorsal muscle) [2]. As the use of PIT tags grew more widespread, focus turned to the potential adverse effects of PIT tag implantation in fish [7, 8]. In addition, evaluation of the compatibility of PIT tag use in different species was considered. A common side-effect of PIT tagging is the encapsulation or rejection of tags over the long term [9], with one study resulting in the migration of the tag from the intraperitoneal cavity into the body cavity. This led the authors to conclude that intramuscular insertion was preferential [10]. In contrast, some studies have suggested that intramuscular insertion can lead to greater tag rejection (e.g. [11]), while others found no effect on survival, relative daily growth or tag retention between experimental groups (control, peritoneal cavity and dorsal musculature, [12]).
A suite of parameters has been used to assess the effects of tagging fish, including survival, growth, condition and cortisol level. Growth and condition are the least subjective parameters to measure for assessing tagging effects. Depressed growth has been reported following surgical implantation of PIT tags into the intraperitoneal cavity in several species (e.g. [13, 14]). An evaluation of survival, growth and condition in juvenile Atlantic salmon Salmo salar (L.) tagged with two tag sizes (23 mm vs 32 mm) found reduced growth and some tag rejection of the larger size [15]. Similarly, Smircich and Kelly [16] noted slower growth due to 'heavy' tags (tag 9.3% of body weight) initially, but compensatory growth occurred as the trial progressed. Skov et al. [5] studied mortality, condition, specific growth rates and tag expulsion and found no difference between test groups of roach Rutilus rutilus (Linnaeus, 1758) of average weights between 20.6 and 24.7 g using 23-mm PIT tags inserted into the body cavity. Lower et al. [17] found increased environmental cortisol levels in holding tanks post-tagging, which reverted to normal levels by 12 h post-tagging.
Understanding the effects of tagging on more subjective parameters (e.g. swimming ability, behaviour) may provide a more thorough picture when examined in addition to the parameters discussed above. However, species-specific reaction to tagging should be considered. For example, one study found that maximum burst swimming speeds were significantly lower in PIT-tagged fish compared to the control group [18]. In contrast, an experiment with rainbow trout Oncorhynchus mykiss (Walbaum, 1792) noted no significant effect on swimming performance between experimental groups [19]. In addition, no significant differences between control and PIT-tagged groups for either the latency to resume feeding or the amount of food eaten have been noted in several species [19].
Lumpfish Cyclopterus lumpus (L.) are highly effective at removing sea lice Lepeophtheirus salmonis (Krøyer, 1837) from farmed Atlantic salmon [20,21,22] and are being deployed in sea pens in large numbers [23,24,25]. Special feeds, refuges/shelters and husbandry techniques are required to maintain condition and facilitate effective sea lice removal. The ability to tag, observe and monitor individual fish provides a valuable insight into fish behaviour. For example, LeClerq et al. [26] used a passive-acoustic telemetry system to track individual cleaner fish in salmon pens. By tracking and visualising fish movements, the authors highlighted the critical role of refuges/shelters in cleaner fish husbandry and welfare. Future studies will benefit from the ability to track individual animals over time using PIT tags, and they will have the additional benefits of accounting for individual variation and the reduction in the number of test subjects required.
Before undertaking research using PIT tags, it is imperative that pilot studies be conducted to determine any potential behavioural or physiological consequences to the organism due to tag insertion. It is also necessary to ensure that the data generated are representative for untagged conspecifics and that the tagging itself does not impede or impair health and welfare of the fish [27]. Feasibility studies on tag acceptance are strongly encouraged when no detailed data are available on the species of interest, both for ethical considerations and validation of results [28].
Although many lumpfish studies state that individuals have been tagged, there are no empirical peer-reviewed data, to the authors' knowledge, on the effects of tagging on lumpfish growth and survival. Therefore, for ethical consideration and validation of results prior to the initiation of a large-scale tagging study, it was deemed necessary to undertake an evaluation of this species acceptance of PIT tags. This study was developed to consider the suitability of 10–20 g lumpfish for intraperitoneal tagging with 12.5 mm × 2.1-mm PIT tag by assessing: (1) survival of tagged fish up to 1-month post-tagging (2) growth and condition of tagged fish and (3) tag retention for juvenile lumpfish.
Lumpfish origins and rearing
Lumpfish eggs were sourced from an Icelandic hatchery and transported to Ireland. These were disinfected (Pyceze®, as directed by the manufacturer) on arrival at Carna Research Station (CRS) (National University of Ireland, Galway) and maintained in standard lumpfish egg incubation cones (recirculating system). On hatching, larvae were reared in 440 L square glass re-enforced plastic (GRP) tanks, in the same recirculating system as the egg cones, at 10.0 ± 0.6 °C. After 2 months, all fish were transferred to flow-through tanks (1200 L) for ongrowing. Feed (Otohime®) was administered using belt feeders (recirculating system) and Linn® automatic feeders (flow-through system). Feeders were set to distribute feed for approximately 14 h per day, while belt feeders released feed on a continuous basis; the Linn® feeders were set to dispense feed at regular intervals (every 15–20 min). A simulated photoperiod of 16 h light and 8 h dark was maintained using overhead lights on a timer. Fish were fed during light hours at various rates (1–10% of total biomass) depending on fish size. All tanks were cleaned regularly to prevent build-up of waste, and water quality data (temperature and oxygen) were taken twice daily using a hand-held Oxyguard® Probe. Water in the recirculating system was exchanged at a rate of 20–25% every other day to maintain water quality, a standard practice developed for this system in CRS.
Experimental tanks and set up
Lumpfish were housed in either rectangular tanks ca. 200 L or square tanks (1200 L). All tanks were fed with ambient flow-through sea water that had been filtered via both drum and UV filtration systems. Ambient water temperature ranged between 2.9 and 8.8 °C throughout the duration of the study. Fish were fed for the duration of the tagging study using Linn® automatic feeders as described above, with a photoperiod of 16:8 light/dark.
At 175 days post-hatching, 180 fish were randomly assigned to a treatment group and to one of six tanks (1200 L). Treatment groups were (a) anaesthetised and measured (handled treatment) or (b) anaesthetised, measured and PIT tagged (PIT-tagged treatment). A decision was made to maintain three groups of tagged and three groups of non-tagged/handled fish separately (i.e. 6 × 1200 L tanks) because it was unknown whether tags would be lost or not. Mixing both tagged and untagged fish in a tank would prevent researchers from determining tag retention, should any tags be lost. This approach was adapted from several studies [5, 10, 29]. Each tank contained 30 fish, with three replicate tanks giving a total of 90 fish per treatment. All fish were starved for at least 24 h prior to the start of the experiment (standard practice in CRS for finfish sampling to reduce stress) and feeding was initiated 24-h post-tagging [30].
The automatic feeder for one of the control tanks tripped, resulting in the tank being fed partially in the dark during the first week of the study. Only one tank of control fish was affected, and this was corrected in subsequent weeks.
Tagging method
As the lumpfish is a weak swimmer with a short body, preference was given to intraperitoneal tag insertion rather than dorsal muscle insertion so as not to further impede swimming ability. Additionally, as the body cavity was large enough to accommodate a 12.5-mm tag (ca. 0.1 g), it was preferable to the opercular muscle which was considered too small for 10–20 g fish. Juvenile lumpfish (10–20 g) were anaesthetised by immersion in a 100 mg/L solution of Tricaine® [31]. Tag mass was between 1 and 2% of fish mass. When the fish became non-responsive (approximately 50 s), they were removed from the anaesthetic, measured and weighed to the nearest 0.1 mm (total length) and 0.1 g (total weight). At this point, fish from the handled treatment were placed in an observation tank until 30 fish had been measured. Once all 30 fish had recovered, they were placed in their respective study tank. This was repeated for each replicate tank.
Fish in the PIT-tagged treatment were tagged with a 12.5 × 2.1-mm full duplex (FDX) PIT tag adapting methods developed by Biomark® (see Fig. 1a). In brief, each fish was held ventral side up with the tail pointing away from the operator. A preloaded, sterile needle was inserted (bevel down) posterior to the edge of the suction disc to the side of the mid-ventral line (Fig. 1a). This ensures that the tag is inserted away from the heart and other vital organs. The angle of the needle was approximately 10°–20° from the axis of the fish body. The depth of needle penetration was dependent on the size of the fish, with larger fish requiring a deeper insertion as their skin was thicker. Failure to pierce the skin fully results in a tag failing to insert fully (D'Arcy and Bolton-Warberg, personal observation). Each fish was placed in an observation tank similar to the handled treatment. Once 30 fish had been tagged and subsequently recovered, they were placed into a study tank. This was repeated until three PIT-tagged treatment tanks were filled.
a and b. Intraperitoneal tagging of juvenile cultured lumpfish (10–20 g) showing a position of fish during tagging, insertion point, and angle of needle and b final position of tag in a fish tagged at a very shallow depth
Short-term observation
A short-term observation (15 min) of behaviour in both handled and PIT-tagged fish was undertaken by monitoring each individual's recovery from anaesthesia in the recovery tank immediately after anaesthetising/tagging. A subjective baseline for 'normal' behaviour was determined by an operator with more than 10 years' husbandry experience of marine finfish (5 years with lumpfish). For this assessment, 'normal' behaviour was defined as behaviour identical to that before tagging/anaesthesia, i.e. recovery of an upright body position, the ability to stick to flat surfaces, and the short-burst swimming typical of lumpfish.
Medium-term assessment
Fish were reared for 28-day post-tagging (time frame also reported in e.g. [5, 16, 30]) in the study tanks with ambient seawater. Any mortalities were removed and recorded daily. During post-tagging on days 8, 14, 22 and 28, all fish were removed from their tanks, and measured for total length and weight without anaesthetic as is standard practice for lumpfish. All fish in the PIT-tagged groups were scanned for tags using a PIT Biomark® 601 hand-held reader, and wounds visually inspected externally. Insertion points with healed skin, healed muscle and the internal muscle appearing closed were deemed sufficiently healed to prevent tag loss, while wounds that remained open were still considered vulnerable to tag loss (e.g. [32]). Any bruising, blood or missing tags were noted, and a subsample was photographed.
All data analysis was carried out using Minitab® 17, with a significance level (α) of 0.05, unless otherwise stated.
Survival and tag retention
Survival (%) was calculated as:
$$S\left( \% \right) = 100 \times \left( {\text{final number of fish}} \right)/\left( {\text{initial number of fish}} \right).$$
A Chi-square test was used to compare survival among treatment groups.
Tag retention (%) was calculated for each tagged group as:
$${\text{TR}}\left( \% \right) = 100 \times \left( {\text{number of fish that retained their tags}} \right)/\left( {\text{number of fish tagged}} \right)$$
The proportion of fish in each tagged tank with wounds that were sufficiently healed, and assumed likely to retain their tag, was calculated. A score was adapted from Thorstad et al. [33] and is described as follows: wound at time of tag insertion as muscle is visible and open = 0% healed; deeper layers of skin and muscle sealed but outer layers of skin still scarred/open = 50% healed; no perceptible wound = 100% healed.
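These percentages, and the chi-square comparison of survival, are simple to script. The sketch below is illustrative only (the study's analysis was run in Minitab); the counts are taken from the study design of 90 fish per treatment, and SciPy supplies the test:

```python
from scipy.stats import chi2_contingency

# Counts from the study design: 90 fish per treatment, all surviving,
# and all tags retained at day 28.
initial_tagged, final_tagged = 90, 90
initial_handled, final_handled = 90, 90
fish_tagged, fish_with_tag = 90, 90

def percent(part, whole):
    return 100.0 * part / whole

survival_tagged = percent(final_tagged, initial_tagged)      # S(%)
survival_handled = percent(final_handled, initial_handled)   # S(%)
tag_retention = percent(fish_with_tag, fish_tagged)          # TR(%)

# Chi-square comparison of survival between treatments
# (rows = treatments, columns = survived / died). With 100% survival in
# both groups the "died" column is all zeros, so the test is degenerate
# here and is shown only to illustrate the comparison described above.
table = [[final_tagged, initial_tagged - final_tagged],
         [final_handled, initial_handled - final_handled]]
try:
    chi2, p, dof, expected = chi2_contingency(table)
except ValueError:
    pass  # zero expected frequencies: nothing to test
```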
Growth and condition
Nested analysis of variance (ANOVA) was used to test for potential differences in weight of fish between tagged and handled fish, where replicates were nested within treatments (e.g. as used in [34]) on the final day of the experiment. In addition, in order to evaluate differences between experimental groups over the duration of the experiment, a pairwise comparisons t test was undertaken on mean weights and specific growth rate (SGR).
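A sketch of such a nested model in Python with statsmodels is given below; the data frame holds synthetic stand-in data with the study's layout (2 treatments × 3 tanks × 30 fish), since the real measurements are not reproduced here and the study itself used Minitab:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in data with the study's layout (placeholder weights).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "treatment": np.repeat(["handled", "tagged"], 90),
    "tank": np.repeat([f"T{i}" for i in range(1, 7)], 30),
    "weight": rng.normal(17.0, 3.0, 180),
})

# Tank is nested within treatment; with a balanced design, sequential
# (Type I) sums of squares are appropriate.
model = ols("weight ~ C(treatment) + C(treatment):C(tank)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=1)

# Test the treatment effect against the among-tank (nested) mean square,
# not the within-tank residual.
ms_trt = anova.loc["C(treatment)", "sum_sq"] / anova.loc["C(treatment)", "df"]
ms_tank = (anova.loc["C(treatment):C(tank)", "sum_sq"]
           / anova.loc["C(treatment):C(tank)", "df"])
F_treatment = ms_trt / ms_tank
```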
Specific growth rate of each group (tank) was calculated according to the formula of Houde and Schekter [35]:
$${\text{SGR}} = 100 \times \left( {e^{g} - 1} \right)$$
where $g = (\ln(W_2) - \ln(W_1))/(t_2 - t_1)$, and $W_2$ and $W_1$ are the mean weights on days $t_2$ and $t_1$, respectively.
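A direct transcription of this formula (the numbers in the example are illustrative, not study data):

```python
import math

def specific_growth_rate(w1, w2, t1, t2):
    """SGR (% per day) from mean tank weights w1, w2 (g) on days t1, t2."""
    g = (math.log(w2) - math.log(w1)) / (t2 - t1)
    return 100.0 * (math.exp(g) - 1.0)

# e.g. a tank growing from a mean of 13.9 g on day 0 to 16.5 g on day 8
# has an SGR of roughly 2.2% per day:
print(specific_growth_rate(13.9, 16.5, 0, 8))
```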
Condition factor (K) of individual lumpfish (calculated at each weighing interval) is defined as:
$$K = 100 \times \left( {W/L^{3} } \right)$$
Final condition factors and SGR for all treatment groups were compared using t tests.
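Correspondingly, the condition factor and the final-day comparison could be scripted as below, assuming weight in grams and length in centimetres (the usual convention that puts Fulton's K near 1) and using SciPy's two-sample t test; the K values shown are placeholders, not the study data:

```python
from scipy.stats import ttest_ind

def condition_factor(weight_g, length_cm):
    """Fulton's condition factor K = 100 * W / L^3."""
    return 100.0 * weight_g / length_cm ** 3

# Placeholder per-fish K values for the two groups on day 28:
k_handled = [1.0, 0.9, 1.1, 1.0, 1.2]
k_tagged = [1.0, 1.1, 0.9, 1.0, 0.8]
t_stat, p_value = ttest_ind(k_handled, k_tagged)
```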
The short-term assessment of fish after anaesthesia and tagging showed that complete recovery (i.e. a return to the behaviour [swimming, suction and orientation] observed before anaesthesia and tag insertion, as judged by an experienced lumpfish producer/researcher) occurred within 2 min in both handled and PIT-tagged fish. During the medium-term assessment, needle insertion wounds were found to be at least 50% healed within one week and 90–100% healed after 2 weeks in most fish (Fig. 2). One tagged lumpfish had a tag protruding from the wound from day 22 onwards, but the tag had not been lost by day 28. Two tagged fish exhibited small bulges at the wound site, but the skin around the wound was healed.
Bar chart showing the percentage of lumpfish with wounds ranging from fully healed (day 14–28) to newly incised (day 0)
Survival of all lumpfish (both tagged and control groups) for all six tanks was 100% at 28 days of post-tagging.
Tag retention was 100% at 28-day post-tagging for all PIT-tagged individuals. All fish were checked for wound health on day 28 and found to vary from small bulges near the wound to fully healed skin (100% healed). Overall, 99% of fish had wounds that were classified as 'skin 100% healed' and no longer at a risk of tag loss. The remaining lumpfish (one individual) had the tag protruding and was considered to still be at risk of tag loss.
Initial weights for the experimental population ranged from 9.7 to 21.0 g, with an average of 13.9 ± 2.9 g. There was no significant difference in total weight between replicates (nested ANOVA, p > 0.05) or treatments (nested ANOVA, p > 0.05) at the beginning of the experiment (Fig. 3). Similarly, there was no significant difference between the final weights of the two treatments or replicates within treatments (nested ANOVA, p > 0.05). Pairwise comparison tests between handled and tagged fish (all replicate tanks combined) revealed no significant difference in mean body mass or specific growth rate for the duration of the experiment (t test, p > 0.05). In all tank populations, fish had the lowest growth rates in the period between day 8 and day 14 (Fig. 4) which was attributable to lower than normal temperatures experienced during this time (see Additional file 1).
Mean weight ± SD (g) of juvenile lumpfish in handled and PIT-tagged groups. Means calculated from 90 fish (30 per replicate tank)
Relationship between specific growth rate (%) and geomean (g) of juvenile lumpfish in handled and PIT-tagged groups. Each data point represents the data for one tank of fish (6 points/tanks per sampling period). Each colour/shape combination represents a different time period, unfilled shapes indicate control groups, and filled shapes indicate tagged groups (four different sampling periods)
Final condition factor varied between 0.8 and 1.3, with an overall mean of 1.0 ± 0.1. Over the duration of the experiment, there was no significant difference in condition between handled and tagged groups (all replicate tank data combined, t test, p > 0.05).
This study evaluated the suitability of intraperitoneal implantation of 12.5 mm × 2.1-mm PIT tags on small lumpfish (ca. 10–20 g). The objectives are deemed to have been met based on the results, namely survival, tag retention and growth which were comparable between tagged and control groups. Additionally, wounds healed well and recovery from anaesthesia occurred without any ill effects in all experimental groups. To facilitate comparisons between studies Additional file 2: Table S1 includes results from research using similar parameters to the present study to assess the suitability of PIT tags.
There are several considerations prior to the insertion of PIT tags in a finfish species for the first time. In the past, and in the absence of data, the weight of the tag relative to the weight of the fish (tag/bodymass) has generally been recommended to be no more than 2%. However, Jepsen et al. [28] concluded that the maximum useable tag size is driven by the specific study objectives, the tagging method and the species/life stage involved, although there is not a generally applicable rule relating to the tag/bodymass relationship. They also noted that the impact on behavioural effects, as well as the role of the environment and fish condition at the time of tagging, should be considered.
There is clear evidence that species-specific differences exist with tagging method, tag size and fish size all acting as contributing factors. In some species (e.g. [19, 32, 36]), a minimal effect on survival and health has been observed in tagged individuals. In others (e.g. [13, 37]), tagging resulted in high mortality. The recovery from anaesthesia of the test subjects in this study was assessed using their general behaviour including swimming ability. Their ability to swim was not used to assess the effects of inserting a 12.5-mm PIT tag, however, because lumpfish are relatively poor swimmers that typically spend most of their time adhering to a surface. Studies on other fish species have used swimming ability as an effective evaluation of tag effects on movement, for example [18, 19].
Various methodologies for measuring wound healing in tagging studies have been utilised. In the present study, wound healing was described as percentage healed, which was an easy method to employ. The time it took for the insertion wounds to heal for the lumpfish in this study was comparable to, or faster than, other fish species [7, 10, 11, 37]. However, it should be noted that in one study [37], smaller fish took significantly longer to heal than larger fish, and overall survival was poor (60%) despite the relatively quick healing times. A comparison of implantation methods found significantly reduced wound healing for fish tagged by incision compared to syringe [38]. Wound healing is not measured in all tagging studies [5, 13, 14, 30, 32, 36, 39,40,41,42] (Additional file 2) and may not always be indicative of ill effects of tagging. However, wound healing should be monitored in the interest of fish welfare and ensured that the wound has no significant impact on survival, tag retention, growth, condition and general health.
Overall, there was 100% survival of fish in this study throughout the study period comparable to other studies [5, 7, 10, 28, 30, 40]. The intraperitoneal tagging method employed in this study was adapted from best practice (i.e. bevel down, acute angle, to the side of the central line and away from the anterior); therefore, the probability of tag-induced mortality was reduced. In some longer studies, survival was marginally lower; > 95% in Fuller and McEntire [41] and 92% in Simard et al. [32] at 41–118-day post-tagging. In one study, both fish and tag sizes had a clear impact on mortality, with mortality only occurring in smaller individuals (fork length < 103 mm) tagged with the larger tags (32 mm vs 23 mm) [15]. These results highlight the importance of undertaking an evaluation of tag suitability prior to undertaking a large-scale experiment.
Tag retention was very high among the fish in this study, with similar results found in several other studies [5, 7, 32, 41]. For example, a study of juvenile Atlantic salmon (80 to 135-mm fork length) by Larsen et al. [15] found that retention rates of 23-mm PIT tags with and without suture closure were 100% and 97%, respectively, while retention of larger 32-mm PIT tags without suture closure was 69% primarily due to the large tag-to-body size ratio. Another study found a tag retention rate of > 80% [30], which was not quite as high as this study. Retention rates can be influenced by several factors, for example, the angle of insertion [9], tag size [15] and fish size [37] as cited in Grieve [42]. The high retention rates observed in this study are likely related to the thin needle used for tag insertion, the relatively small tag size (12.5 mm) and the low tag burden (1–2%).
The marginally reduced growth exhibited by one of the control groups during the first week is explained by a non-synchronised automatic feeder. The mean weight for this group was smaller than all the other experimental groups from this point, yet it had similar growth rates once the feeder had its settings corrected. Specific growth rates were reduced in all six groups during the second week of the trial, which coincided with the lowest water temperatures experienced throughout. No significant differences in growth and condition were observed between the tagged and control groups in this study. This compares with other tagging evaluations in which no negative impact was found regardless of tagging location, e.g. [7, 10, 14, 30, 41], while another assessment demonstrated reduced growth rates over the first 3 days of post-tagging but normal growth thereafter [13]. In contrast, two separate studies have found that growth of small fish was negatively affected by tagging [14, 37]. It is probable that tag burden effects were avoided in the present study by selecting larger-sized fish. Other studies have found that larger PIT tags can affect growth via tag burden [14, 27, 43]. Tag burden occurs when the tag significantly adds to the fish's mass. It is noted when the growth of fish is hindered due to an inability to move efficiently and added energy requirements to compensate for the tag's mass [44]. Tag burden in the present study was between 1 and 2%. Other trials have examined the effects of tag burden on fish behaviour and physiology and have concluded that many species can cope with tag-to-body weight ratios of up to 5% without being negatively affected [45,46,47,48]. If larger tags were required with juvenile lumpfish or similar sized tags were to be used on much smaller lumpfish, an additional evaluation study would be required.
When all aspects of this trial are considered, namely a relatively quick healing time, very high survival and tag retention, and no negative effect of growth and condition, it can be concluded that small lumpfish are suitable candidates for tagging with 12.5-mm PIT tags. These results are attributable to adherence to best practice for intraperitoneal tagging, low tag burden, small tag and needle size and candidate species. Prior to all manner of future studies in lumpfish condition/health, welfare, feeding behaviour and broodstock selection, the researcher, having followed the methods described herein, will be reassured that all efforts to tag and track an individual fish using 12.5-mm PIT tags will have minimal adverse effects on the physiology and behaviour of the lumpfish. The ultimate goal of the emerging lumpfish aquaculture industry is to produce juveniles that adapt well to deployment in salmon pens and are efficient at delousing farmed salmon while maintaining the health and welfare of both salmon and cleaner fish [49].
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
PIT:
Passive integrated transponder
FDX:
Full duplex
Glass re-enforced plastic
SGR:
Specific growth rate
Tukey's HSD test:
Honestly significant difference test
BIM:
Bord Iascaigh Mhara
KGS:
Knowledge gateway scheme
EMFF:
The European Maritime and Fisheries Fund
Prentice E, Flagg T, McClutcheon C, Brastow D. PIT-tag monitoring systems for hydroelectric dams and fish hatcheries. In: American fisheries society symposium. 1990. p. 323–34.
Prentice EF, Park DL. A study to determine the biological feasibility of a new fish tagging system. Annu Rep Res. 1984;1984:83–91.
Cucherousset J, Roussel J-M, Keeler R, Cunjak RA, Stump R. The use of two new portable 12-mm PIT tag detectors to track small fish in shallow streams. North Am J Fish Manag. 2005;25:270–4.
Mahapatra KD, Gjerde B, Reddy PVGK, Sahoo M, Jana RK, Saha JN, et al. Tagging: on the use of passive integrated transponder (PIT) tags for the identification of fish. Aquac Res. 2001;32:47–50.
Skov C, Brodersen J, Bronmark C, Hansson L-A, Hertonsson P, Nilsson PA. Evaluation of PIT-tagging in cyprinids. J Fish Biol. 2005;67:1195–201.
Prentice E, Park D, Flagg T, McCutcheon C. A study to determine the biological feasibility of a new fish tagging system. Report to Bonneville Power Administration, Project 83-319. 1986.
Burdick BD, Hamman RL. A study to evaluate several tagging and marking systems for Colorado squawfish, razorback sucker, and bonytail. Colorado: US Fish and Wildlife Service Grand Junction; 1993.
Stakėnas S, Copp G, Scott D. Tagging effects on three non-native fish species in England (Lepomis gibbosus, Pseudorasbora parva, Sander lucioperca) and of native Salmo trutta. Ecol Freshw Fish. 2009;18:167–76.
Gheorghiu C, Hanna J, Smith JW, Smith DS, Wilkie MP. Encapsulation and migration of PIT tags implanted in brown trout (Salmo trutta L.). Aquaculture. 2010;298:350–3.
Hopko M, Zakęś Z, Kowalska A, Partyka K. Impact of intraperitoneal and intramuscular PIT tags on survival, growth, and tag retention in juvenile pikeperch, Sander lucioperca (L.). Arch Pol Fish. 2010;18:85–92.
Navarro A, Oliva V, Zamorano M, Ginés R, Izquierdo M, Astorga N, et al. Evaluation of PIT system as a method to tag fingerlings of gilthead seabream (Sparus auratus L.): effects on growth, mortality and tag loss. Aquaculture. 2006;257:309–15.
Wagner CP, Jennings MJ, Kampa JM, Wahl DH. Survival, growth, and tag retention in age-0 muskellunge implanted with passive integrated transponders. North Am J Fish Manag. 2007;27:873–7.
Baras E, Westerloppe L, Mélard C, Philippart J-C, Bénech V. Evaluation of implantation procedures for PIT-tagging Juvenile nile tilapia. North Am J Aquac. 1999;61:246–51.
Clark SR. Effects of passive integrated transponder tags on the physiology and swimming performance of a small-bodied stream fish. Trans Am Fish Soc. 2016;145:1179–92.
Larsen MH, Thorn AN, Skov C, Aarestrup K. Effects of passive integrated transponder tags on survival and growth of juvenile Atlantic salmon Salmo salar. Anim Biotelemetry. 2013;1:19.
Smircich MG, Kelly JT. Extending the 2% rule: the effects of heavy internal tags on stress physiology, swimming performance, and growth in brook trout. Anim Biotelemetry. 2014;2:16.
Lower N, Moore A, Scott AP, Ellis T, James JD, Russell IC. A non-invasive method to assess the impact of electronic tag insertion on stress levels in fishes. J Fish Biol. 2005;67:1202–12.
Mueller RP, Moursund RA, Bleich MD. Tagging juvenile Pacific lamprey with passive integrated transponders: methodology, short-term mortality, and influence on swimming performance. North Am J Fish Manag. 2006;26:361–6.
Newby NC, Binder TR, Stevens ED. Passive integrated transponder (PIT) tagging did not negatively affect the short-term feeding behavior or swimming performance of juvenile rainbow trout. Trans Am Fish Soc. 2007;136:341–5.
Skiftesvik AB, Reidun BM, Durif CMF, et al. Delousing of Atlantic salmon (Salmo salar) by cultured vs. wild ballan wrasse (Labrus bergylta). Aquaculture. 2013;402:113–8.
Imsland A, Reynolds P, Eliassen G, Hangstad A, Foss A, Vikingstad E, et al. The use of lumpfish (Cyclopterus lumpus L.) to control sea lice (Lepeophtheirus salmonis Krøyer) infestations in intensively farmed Atlantic salmon (Salmo salar L.). Aquaculture. 2014;424–425:18–23.
Reynolds P, Eliassen G, Elvergård TA, Hangstad TA, Foss A, Vikingstad E, Imsland AK. Fish farming expert. 2015. p. 34–6.
Norwegian Directorate of Fisheries. Sale of farmed cleaner fish 2012–2016. http://www.fiskeridir.no. 2017. http://www.fiskeridir.no/English/Aquaculture/Statistics/Cleanerfish-Lumpfish-and-Wrasse. Accessed 20 Oct 2017.
Bolton-Warberg M. An overview of cleaner fish use in Ireland. J Fish Dis. 2017;41:935–9.
Bolton-Warberg M, Murphy O' Sullivan S, Power AM, Irwin Moore A, Wilson L, Sproll F, et al. Cleaner fish use in Ireland. In: Treasurer J, editor. Cleanerfish biological aquaculture application. 5M; 2018.
Leclercq E, Zerafa B, Brooker AJ, Davie A, Migaud H. Application of passive-acoustic telemetry to explore the behaviour of ballan wrasse (Labrus bergylta) and lumpfish (Cyclopterus lumpus) in commercial Scottish salmon sea-pens. Aquaculture. 2018;495:1–12.
Cooke SJ, Woodley CM, Brad Eppard M, Brown RS, Nielsen JL. Advancing the surgical implantation of electronic tags in fish: a gap analysis and research agenda based on a review of trends in intracoelomic tagging effects studies. Rev Fish Biol Fish. 2011;21:127–51.
Jepsen N, Schreck C, Clements S, Thorstad E. A brief discussion on the 2% tag/bodymass rule of thumb. Aquat Telem Adv Appl. 2005;255:9.
Ward DL, Persons WR, Young KL, Stone DM, Vanhaverbeke DR, Knight WK. A laboratory evaluation of tagging-related mortality and tag loss in juvenile Humpback Chub. North Am J Fish Manag. 2015;35:135–40.
Acolas ML, Roussel JM, Lebel JM, Baglinière JL. Laboratory experiment on survival, growth and tag retention following PIT injection into the body cavity of juvenile brown trout (Salmo trutta). Fish Res. 2007;86:280–4.
Skår MW, Haugland GT, Powell MD, Wergeland HI, Samuelsen OB. Development of anaesthetic protocols for lumpfish (Cyclopterus lumpus L.): effect of anaesthetic concentrations, sea water temperature and body weight. PLoS ONE. 2017;12:e0179344.
Simard LG, Sotola VA, Marsden JE, Miehls S. Assessment of PIT tag retention and post-tagging survival in metamorphosing juvenile sea lamprey. Anim Biotelemetry. 2017;5:18.
Thorstad EB, Økland F, Westerberg H, Aarestrup K, Metcalfe JD. Evaluation of surgical implantation of electronic tags in European eel and effects of different suture materials. Mar Freshw Res. 2013;64:324.
Imsland AKD, Danielsen M, Jonassen TM, Hangstad TA, Falk-Petersen I-B. Effect of incubation temperature on eggs and larvae of lumpfish (Cyclopterus lumpus). Aquaculture. 2019;498:217–22.
Houde ED. Growth rates, rations and cohort consumption of marine fish larvae in relation to prey concentrations. Rapp P-V Reun Cons Int Explor Mer. 1981;178:441–53.
O'Donnell MJ, Letcher BH. Implanting 8-mm passive integrated transponder tags into small Brook Trout: effects on growth and survival in the laboratory. North Am J Fish Manag. 2017;37:605–11.
Baras E, Malbrouck C, Houbart M, Kestemont P, Mélard C. The effect of PIT tags on growth and physiology of age-0 cultured Eurasian perch Perca fluviatilis of variable size. Aquaculture. 2000;185:159–73.
Cook KV, Brown RS, Daniel Deng Z, Klett RS, Li H, Seaburg AG, et al. A comparison of implantation methods for large PIT tags or injectable acoustic transmitters in juvenile Chinook salmon. Fish Res. 2014;154:213–23.
Allan H, Unmack P, Duncan R, Lintermans M. Potential impacts of PIT tagging on a critically endangered small-bodied fish: a trial on the surrogate mountain galaxias. Am Fish Soc. 2018;147:1078–84.
Bolland JD, Cowx IG, Lucas MC. Evaluation of VIE and PIT tagging methods for juvenile cyprinid fishes. J Appl Ichthyol. 2009;25:381–6.
Fuller SA, McEntire M. The effect of PIT tagging on survival, tag retention, and weight gain in fingerling white bass. J Appl Aquac. 2013;25:95–101.
Grieve B, Baumgartner LJ, Robinson W, Silva LG, Pomorin K, Thorncraft G, et al. Evaluating the placement of PIT tags in tropical river fishes: a case study involving two Mekong River species. Fish Res. 2018;200:43–8.
Gibbons JW, Andrews KM. PIT tagging: simple technology at its best. Bioscience. 2004;54:447–54.
Bridger CJ, Booth RK. The effects of biotelemetry transmitter presence and attachment procedures on fish physiology and behavior. Rev Fish Sci. 2003;11:13–34.
Winter J. Advances in underwater biotelemetry. In: Fisheries techniques. 1996. p. 555–90.
Brown RS, Cooke SJ, Anderson WG, McKinley RS. Evidence to challenge the "2% rule" for biotelemetry. North Am J Fish Manag. 1999;19:867–71.
Brown RS, Geist DR, Deters KA, Grassell A. Effects of surgically implanted acoustic transmitters > 2% of body mass on the swimming performance, survival and growth of juvenile sockeye and Chinook salmon. J Fish Biol. 2006;69:1626–38.
Brown RS, Harnish RA, Carter KM, Boyd JW, Deters KA, Eppard MB. An evaluation of the maximum tag burden for implantation of acoustic transmitters in juvenile chinook salmon. North Am J Fish Manag. 2010;30:499–505.
Powell A, Treasurer JW, Pooley CL, Keay AJ, Lloyd R, Imsland AK, et al. Use of lumpfish for sea-lice control in salmon farming: challenges and opportunities. Rev Aquac. 2017;1:1–20.
The authors would like to acknowledge that this work was part of the Lumpfish Broodstock and Breeding programme, which was funded by Bord Iascaigh Mhara (BIM), under the EMFF operational programme 2014–2020, under the Knowledge Gateway Scheme (KGS).
This study was co-funded by the Marine Institute and Carna Research Station, Ryan Institute, NUI Galway. This work was part of the lumpfish broodstock and breeding programme, funded by Bord Iascaigh Mhara (BIM) under the European Maritime and Fisheries Fund (EMFF) operational programme 2014–2020, under the Knowledge Gateway Scheme (KGS).
Marine Institute, Rinville, Oranmore, County Galway, Ireland: Jack D'Arcy, Suzanne Kelly, Tom McDermott & Dave Jackson
Carna Research Station, Ryan Institute, National University of Ireland Galway, Carna, County Galway, Ireland: Majbritt Bolton-Warberg
JD, SK, TM, JH and MBW made substantial contributions in data acquisition. JD, MBW and DJ made substantial contributions to conception and design. MBW undertook the analysis and interpretation of data. MBW and DJ sourced funding for this study. JD and MBW produced initial drafts and all authors contributed to edited versions of the manuscript. All authors read and approved the final manuscript.
Correspondence to Jack D'Arcy.
The welfare of the subjects was foremost in considerations from the design of this trial through anaesthesia and tag insertion. As such, the number of subjects chosen was minimal while maintaining statistical robustness. The severity of tag insertion under anaesthesia was deemed to cause low levels of discomfort. The Health Products Regulatory Authority (HPRA, the National Competent Authority) is responsible for following the provisions of the relevant legislation in the EU (Directive 2010/63/EU) and in the Republic of Ireland (SI No. 543 of 2012) on the protection of animals used for scientific purposes. Both the Marine Institute and The National University of Ireland Galway have Breeder/User/Supplier authorisations under HPRA of Ireland, and as such have licence to breed, use and/or supply animals for scientific or educational purposes. Additionally, all operators have completed training in fish tagging, are compliant with best practice, and hold individual authorisations from HPRA.
All authors have given final approval of the version to be published.
Additional file 1. Water temperature (°C) throughout the study period for lumpfish either handled or PIT tagged, and dotted lines indicate sampling dates. Note drop in temperature between days 8 and 14.
Additional file 2.
D'Arcy, J., Kelly, S., McDermott, T. et al. Assessment of PIT tag retention, growth and post-tagging survival in juvenile lumpfish, Cyclopterus lumpus. Anim Biotelemetry 8, 1 (2020) doi:10.1186/s40317-019-0190-6
PIT tag evaluation
Lumpsucker
Microstructure and Mechanical Properties of Nickel-Aluminum Bronze Coating on 17-4PH Stainless Steel by Laser Cladding
Lu Zhao1,2,
Baorui Du1,
Jun Yao1,
Haitao Chen3,
Ruochen Ding4 &
Kailun Li1
Chinese Journal of Mechanical Engineering volume 35, Article number: 140 (2022)
Bimetallic copper-steel composite could be an effective structural material to improve the performance of traditional nickel-aluminum bronze (NAB) ship propellers due to its high structural strength and corrosion resistance. In this work, defect-free NAB coatings have been successfully fabricated on a 17-4PH stainless steel substrate by the laser direct depositing technique. The phase constitution, microstructure characteristics and hardness were investigated in detail. The XRD results showed that the coatings mainly consisted of α-Cu, Fe and intermetallic κ phases, although the diffraction peaks shifted by more than 0.5° from the standard positions, which may be due to Ni, Fe and Al atoms dissolved into the Cu matrix. According to the SEM and EDS results, the microstructures of the coatings were affected significantly by the laser energy density. The top region of the coating was more undercooled during solidification, so the grains in this region were much finer than those in the bottom region, and a higher energy input led to coarser grains. Fe-rich dendrites and spherical particles were found in the Cu matrix, which could be a result of liquid separation. The hardness of the coating is in the range of 204 HV0.2–266 HV0.2, higher than that of traditional as-cast NAB. The uneven distribution of Fe-rich phases as well as the hard κ phases could be the main reasons for the fluctuations of the hardness values. Tensile fracture occurred on the bronze side rather than at the transition zone, which shows that laser cladding produces a good interfacial bond between the two metals.
Nickel-aluminum bronze (NAB) was developed for ship propellers and is widely used in valves and other marine industry parts due to its excellent mechanical properties and good corrosion resistance in sea water [1,2,3,4,5]. With the development of modern ships, traditional NAB propellers have to be designed larger and heavier to ensure sufficient strength, which increases their manufacturing difficulty. Many steels offer better strength and toughness than copper alloys, but the corrosion resistance of most stainless steels is lower. A novel composite structure consisting of a steel substrate with a copper-alloy cladded surface can reduce the difficulty of casting bronze parts in great thickness while retaining the original corrosion resistance and gaining the structural strength provided by the steel substrate.
Steel-copper composites have been widely used, but it is challenging to obtain high bonding strength in the bimetallic structure. Smelting casting, diffusion welding and explosive welding can be used to produce a copper layer on steel, but these techniques are limited by their low ability to form complex shapes [6,7,8]. A copper-silver alloy coating with a copper fraction of 59 ± 2 wt.% was produced on stainless steel by electroplating [9]. The cold-spray technique was employed to fabricate copper coatings on SS316L steel [10], where a nickel intermediate layer had a beneficial effect on the adhesion and deformation mechanisms at the interface. However, neither method can achieve good metallurgical bonding between the metals. Gas tungsten arc welding, laser melting and laser-arc hybrid welding have been adopted for stainless steel and copper dissimilar joints [11,12,13]. The quantity of fused copper should be restricted because excess copper melting may induce liquid separation and microcrack formation inside the fusion zone. A special weld-joint design is therefore the crucial step for traditional welds.
Laser direct depositing (LDD) is an effective additive manufacturing (AM) method for obtaining net or near-net-shaped steel-copper composite parts, and many researchers have used this technique to fabricate coatings [14,15,16,17,18]. The effect of LDD heat input on NAB was studied by Hyatt. The as-deposited material at the lowest heat input of 42.5 J/mm was composed entirely of martensitic phase; as the heat input increases, the fraction of α phase increases. In particular, the hardness of the deposited material reaches its maximum at a heat input of 64 J/mm, which may be due to secondary precipitation hardening [19]. Murray et al. successfully fabricated a near-dense NAB part via selective laser melting (SLM), with a lowest porosity of 0.05% ± 0.01%. Chemical composition analysis of the as-fabricated NAB indicated a slight loss of Fe and enrichment of Al, Ni and Mg [20]. Wire-arc additive manufacturing (WAAM) and heat treatment were adopted for NAB to tune the precipitation of κ phases, and in turn to adjust the mechanical properties and corrosion performance. The thermal history of WAAM is similar to casting due to the large heat input of the wire arc and the cyclic heating, so the as-deposited phases are similar to the as-cast ones except for a small amount of β phase [21].
Most former work on AM of NAB focused on the forming process of single NAB material, with few studies on the bimetallic steel-bronze structure and its mechanical properties. This paper aims to obtain a steel-bronze composite structure by depositing an NAB cladding layer on 17-4PH steel, and the microstructure together with the mechanical properties are studied in detail.
2.1 Materials
17-4PH stainless steel was chosen as the substrate material. The substrate, prepared in a size of 100 mm × 100 mm × 20 mm, was solution treated, mechanically ground and cleaned with acetone and anhydrous ethanol. Spherical NAB powder particles with diameters of 45–100 μm were selected in this experiment to ensure the uniformity and synchronicity of powder feeding. The morphology of the NAB powder is shown in Figure 1. The powder was dried at 60 ℃ for 2 h. The chemical compositions of the substrate and powder are listed in Table 1.
SEM image showing the morphology of NAB powder
Table 1 Chemical composition of alloys
2.2 Laser Processing
The LDD system consisted of an optical fiber laser (TRUMPF TruDiode 3006) operating in continuous mode with a maximum power of 3000 W and a spot diameter of 3.4 mm, an NC precision machine, a powder feeder and a gas protection device (Figure 2). During processing, the laser beam, powder feeder and NC program were started simultaneously, and the molten pool was protected from oxidation and contamination by a special nozzle blowing Ar gas directly onto the melt surface at a rate of 10 L/min.
Schematic diagram of experiment apparatus for depositing
To study the influence of different process parameters on deposition quality, the laser power was used as the processing variable, while the scanning speed and hatch distance were set to 1.2 m/min and 1.7 mm, respectively, and the track overlap rate was 50%. The laser power is converted to a volumetric energy density as follows:
$$ED_{V} = \frac{P}{vht},$$
where P is laser power, v is scanning speed, h is hatch distance and t is the thickness of the powder. The laser cladding parameters for each sample are presented in Table 2.
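For reference, the conversion in Eq. (1) can be scripted directly. The short sketch below uses the scanning speed and hatch distance stated above; the laser powers and the powder-layer thickness are placeholder assumptions (the actual pairings are listed in Table 2), although with a nominal 1 mm layer thickness these powers happen to reproduce the energy densities quoted later in the text.

```python
# Minimal sketch of the volumetric energy density of Eq. (1), ED_V = P / (v*h*t).
# Scanning speed and hatch distance follow the text; the powers and the
# powder-layer thickness below are hypothetical placeholders.

def energy_density(power_w, speed_mm_s, hatch_mm, thickness_mm):
    """Volumetric energy density in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * thickness_mm)

speed = 1200.0 / 60.0   # 1.2 m/min converted to mm/s
hatch = 1.7             # hatch distance, mm
thickness = 1.0         # assumed powder-layer thickness, mm
for power in (360.0, 420.0, 480.0):  # assumed laser powers, W
    ed = energy_density(power, speed, hatch, thickness)
    print(f"P = {power:.0f} W -> ED_V = {ed:.2f} J/mm^3")
```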
Table 2 Laser cladding parameters
2.3 Material Characterization
A penetrant test was performed to detect surface defects. The penetrant was sprayed on the cladding surface and drawn into any defects by capillary force. After cleaning, the developer was sprayed on the surface; if flaws existed, the penetrant at the defect would be drawn out and spread onto the surface.
To examine the microstructure of the NAB coating, samples were cut into cuboids of 10 mm × 10 mm × 7 mm. Metallographic preparation of the cross-section was as follows: grinding on 400, 800, 1200, 1500 and 2000# SiC paper in turn, polishing with diamond suspensions (3 μm and 1 μm), and etching with a metallographic etchant composed of 5 g FeCl3 + 20 mL HCl + 100 mL H2O. A Leica DM1 5000M optical microscope (OM) was used to observe the microstructure of the cladding cross-section. XRD analysis was performed on a Bruker D8 Advance X-ray diffractometer to determine the phases in the coatings. The diffractometer worked with Cu Kα radiation (λ = 0.1540598 nm) at 40 kV and 40 mA, and scans were performed over a 2θ range of 10°–100° with increments of 0.1° and a step time of 2 s.
The surface morphology of the cross-section was characterized using a TESCAN MIRA 3LMH scanning electron microscope (SEM) fitted with an Oxford energy dispersive spectrometer (EDS). Backscattered electron imaging (BSE) was carried out to distinguish the surface topography and surface chemistry.
The hardness was measured with a Vickers hardness tester (Everone MH-5L) equipped with a diamond regular-pyramid indenter. The hardness distribution over the cross-section was measured using a load of 200 gf (HV0.2) and a dwell time of 5 s.
The tensile testing samples were cut along the longitudinal direction of the steel-bronze composite with the interface between the steel and the NAB coating located at the center of the samples. The samples had a length of 10 mm and a thickness of 1.4 mm. The test was performed on a CMT 5305GL machine at room temperature with a loading speed of 0.1 mm/min.
3.1 Phase Composition of Laser Coating
The XRD profile of the NAB coatings shows peaks that are very different from those of the raw powder (Figure 3). The phases present in the NAB coatings are α-Cu, which could be a solid solution of Cu, Ni, Al, Fe and Cr, κ phases (intermetallic compounds such as FeAl, Fe3Al and AlNi), and an Fe phase. However, in contrast to the JCPDS (Joint Committee on Powder Diffraction Standards) data, almost all the diffraction peaks have shifted by more than 0.5° from their inherent angular positions. For example, the first two diffraction peaks of Cu are located at 43.41° and 50.56° in card 65-9743, but are actually measured at 42.90° and 49.82°.
XRD patterns of laser depositing coatings
3.2 Defects in NAB Coating
Figure 4 shows the top morphologies of the coatings before and after the penetrant inspection. This bimetallic structure is composed of two metals with different thermal expansion coefficients, so it is inclined to form flaws as a result of thermal stress. However, no dyeing trace was detected after the penetrant test, which illustrates that no major pores or cracks were generated during the cladding process. Almost all composite parts bonded well except S4 (Figure 5), in which a hole defect about 40 μm in size was found.
Top morphologies of cladding surfaces: a depositing surface and b HD-ST surface permeated by DPT-5 dyeing penetration
Section images of the interface area of a S1, b S2, c S3 and d S4
3.3 Microstructure
Figure 6 shows the OM image of the cross-section of S1. The coating consists of three areas: the bi-metal transition (zone 1), a coarse-grain zone combined with dark dendrites (zone 2) and an acicular-grain zone (zone 3).
OM images of a metallographic specimen of the laser coating
Figure 7 shows the typical microstructure of S1. The matrix consists of coarse grains accompanied by dark dendrites. The chemical compositions of the marked points in Figure 7 are listed in Table 3. It is obvious that the dendritic positions are rich in Fe and Cr but poor in Cu and Ni. Combined with the XRD results, the light matrix is the Cu-rich solid solution, and the dark dendrites are solidified Fe-rich phases in the coating (Figure 7a). As shown in Figure 7b, globular κII and lamellar κIII precipitate on the boundaries of α or β′ grains, and the other smaller dark particles can be designated as κIV according to Murray et al. [20]. Squares 1‒3# show the different morphologies of the Fe-rich dendrites.
Typical microstructure of a zone 1 and b zone 2 of S1
Table 3 Element contents of cladding micro-zone (mass fraction, %)
Figure 8 shows zone 1 and zone 3 of S1‒S4. It can be seen that the microstructures are compact and, in particular, that there is good metallurgical bonding between the coating and the substrate. The thickness of the transition zones is 5‒14 μm. Light coarse grains and dark dendritic structures are found in zone 1; the sizes of the light grains are basically the same in each sample, and the dark coarse dendrites are embedded in the light grains. At the highest energy density of 14.12 J/mm³, coarser grains and a lower fraction of needle-shaped dark phase (a, b) are formed compared with the lower energy density of 12.35 J/mm³ (c, d). All light grains are smaller and the fraction of secondary Widmanstatten structures is greater at 10.58 J/mm³ (e, f) compared with the former cases. At the lowest energy density, more spiculate microstructures formed in zone 3 and even in zone 1 (g, h).
BSE images of laser depositing coatings: a, b zone 1 and zone 3 of S1; c, d zone 1 and zone 3 of S2; e, f zone 1 and zone 3 of S3; g, h zone 1 and zone 3 of S4
The dendrite sizes range from a few microns to twenty microns (Figure 9). There are three types of spherical particles in the matrix, with diameters of approximately 3 μm, 1 μm and 0.1 μm, respectively, as pointed out by the arrows. The spherical particles are randomly distributed in the matrix; in particular, some finer light spherical structures precipitate inside the larger dark ones, as can be seen in 1#.
Typical dendritic structures in zone 2 of S2
3.4 Mechanical Property
Figure 10 shows the hardness values of different samples at different positions. The result shows a general decrease in hardness from the substrate to the coating, with a sudden drop at the transition zone. The hardness values of the coatings vary only slightly, fluctuating between 204 HV0.2 and 266 HV0.2. There is no obvious relationship between the hardness and the energy input. Hardness depends on several factors such as grain size, phase type and phase content ratio. There are some abnormal values in the hardness curve, which are most probably related to the uneven distribution of κ phases: where the κ phases are dense the hardness is relatively high, and where fewer κ phases are formed the hardness is relatively low.
Distribution of hardness value along vertical section direction
Figure 11 plots the tensile stress-strain curves for S5 and S6. S5 exhibits considerably longer elongation than S6. The ultimate tensile strength of S5 reached 767.80 MPa, and the specimen experienced ductile fracture. Since the fracture occurred in the NAB body, the interfacial strength is higher than that of the as-deposited NAB. Micrographs of the tensile fracture surfaces of the composite structure are shown in Figure 12. The overall morphology of S5 shows cup-like depressions, and a mass of dimples appears due to ductile fracture of the sample (Figure 12a, b). In contrast, the fracture surfaces of S6 show debonding at the overlap on layer boundaries (Figure 12c, d).
Tensile stress-strain curves for S5 and S6
Tensile specimen and fracture surfaces after tensile testing: a, b S5; c, d S6
4.1 Evolution of the Unfused Defect
The dynamics of the molten pool under laser irradiation can be regarded as a flow process dominated by gravity, surface tension and capillary force. There is a significant competition between melt droplet wetting (or spreading) and solidification. As shown in Figure 5, unfused defects only occurred in S4. The lower laser energy density results in a lower melt temperature and less stored energy. The surface tension of the melt at low temperature is higher, and the melt has higher viscosity and poorer spreadability and wettability. The low energy storage results in a faster solidification rate, causing the melt droplets to solidify before they are fully spread. The melt droplet tends to spontaneously form a sphere, causing balling on the laser scanning track, which generates unfused defects in the local area between adjacent molten tracks [22, 23].
4.2 Account for the Shifts of XRD Peaks
The considerable shifts of most of the peaks reveal extensive solid solution among the Cu, Ni, Fe and Al elements, such as the infinite solubility between Ni and Cu and the solid solubility of about 9% for Al in Cu. According to Bragg's law (Eq. (2)), the formation of a solid solution causes the peak position (θ) to shift.
$${2}d{\text{sin}}\theta = n\lambda .$$
When a solid solution is formed, atoms with dissimilar radii dissolve into the matrix, which leads to lattice distortion and enlarges the lattice parameter of the solution. If the crystal plane spacing d increases, the θ value decreases for a fixed λ, so the diffraction peak shifts to the left.
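As a concrete check of this argument, the measured shift of the first Cu peak can be converted into a change in plane spacing with Bragg's law, using the Cu Kα wavelength quoted in Section 2.3. The short sketch below is only a worked example of Eq. (2); the percentage expansion it prints is an estimate derived from the two peak positions reported above.

```python
# Worked example of Eq. (2) (Bragg's law) using the Cu K-alpha wavelength from
# Section 2.3 and the Cu peak positions quoted above (JCPDS 65-9743 vs. measured).
import math

LAMBDA_NM = 0.1540598  # Cu K-alpha wavelength, nm

def d_spacing(two_theta_deg, n=1):
    """Interplanar spacing d = n*lambda / (2*sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * LAMBDA_NM / (2.0 * math.sin(theta))

d_ref = d_spacing(43.41)   # standard Cu position from the JCPDS card
d_obs = d_spacing(42.90)   # measured position in the coating
print(f"d_ref = {d_ref:.4f} nm, d_obs = {d_obs:.4f} nm, "
      f"apparent lattice expansion = {100.0 * (d_obs - d_ref) / d_ref:.2f} %")
```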
4.3 Microstructure Evolution
4.3.1 The Origin of Fe-rich Phases
Interestingly, Fe-rich dendrites were found in the matrix of the coating. To further explore whether the Fe-rich dendrites in the coating were formed by a concentration of Fe from the original powder, a multi-layer cladding test was carried out. Figure 13(a) shows the metallographic images of the cross-section extracted from S5 with multiple layers. As seen in Figure 13(b), layer 1 consists of large light grains, dark acicular structures and gray branched dendrites. The gray dendrites are the Fe-rich phase, which has the same morphology as that seen in Figure 8. The Fe-rich dendrites are absent in layer 2 (Figure 13(c)), which demonstrates that they originate not from the powder but from the substrate.
OM images of S5 section: a multi-layers; b layer 1; c layer 2
The obvious spherical Fe-rich phase in the coating indicates the existence of liquid separation during solidification. Laser cladding is a rapid solidification process. When the laser energy density increases to 14.12 J/mm³, the cooling rate can exceed 10⁵ K/s due to the higher thermal conductivity of the coating compared with 316L steel [24]. Due to the intense convection and agitation in the molten pool, the Fe-based alloy floats through a buoyancy flow caused by the density difference: Fe-rich droplets move up because of their lower density (7.8×10³ kg/m³) compared with the copper-rich melt (8.5×10³ kg/m³). In the molten pool, the Fe-rich melt is wrapped by the copper-rich melt and contracts into secondary spherical droplets due to surface tension.
The solid solubility of Cu in iron is very low at room temperature (Figure 14), but the Cu content in the spherical particles is up to 13.9% (Table 3), which indicates that the spherical Fe-rich phase is a supersaturated solid solution containing Cu. The latent heat of crystallization released during solidification of the Cu-rich melt causes the solidified Fe-rich dendrites to be partially remelted, which raises the temperature of the liquid phase and reduces the separation undercooling of the Fe-rich liquid. During the cooling process after rapid solidification, secondary liquid phase separation occurs in the spherical Fe-rich particles, and a large number of fine Cu-rich grains precipitate inside them. The larger the degree of undercooling, the more obvious the liquid phase separation.
Cu-Fe binary phase diagram with metastable miscibility [25]
4.3.2 Microstructure Evolution in Local Regions
Since the thermal conductivity of NAB is higher than that of 17-4PH and the 17-4PH substrate is nearly ten times thicker than the NAB coating, the heat flux through the cladding is much greater than that through the substrate. At a given energy input, most of the heat is dissipated by convective heat transfer between the coating and the air, while at the interface most of the heat is stored in the steel substrate. The heat is transmitted gradually through the coating, producing a negative temperature gradient G between the substrate and the molten pool. Inclined epitaxial growth develops as the G/R value at the liquid-solid boundary decreases: the initial G is large, while the solidification speed R of the cladding is close to zero. Owing to the high liquidus temperature of Fe and the dominant Fe content at the fusion line, the Fe-rich melt solidifies first, and then the Cu-rich melt begins to solidify around the Fe-rich phase. Moving away from the molten pool boundary, the ratio G/R decreases further as G decreases and R increases. When an occasional bulge forms at the interface, it continues to grow rapidly forward into melt with a greater degree of undercooling, which provides a favorable condition for the growth of columnar dendrites.
In the middle region of the cladding, the Fe-rich dendrites nucleate and develop in the melt. The evolution is depicted by the marked squares 1‒3#, which represent different stages of growth (Figure 7). Meanwhile, heterogeneous nucleation and dendrite coarsening depend on the solidification time. The cooling rate and the degree of undercooling at the bottom of the coating are lower, so the long heat-storage time makes the α-Cu grains grow coarse.
In the upper region of the cladding, the large degree of undercooling caused by heat exchange between the melt and the air provides the driving force for the phase transition, and nucleation begins when the temperature decreases to the solidus line of the alloy. Since the interior melt is overheated and the heat conduction direction is vertically outward from the top surface, the front of the solid-liquid interface has a positive temperature gradient. The crystal grows in a dendritic manner: a bulge accidentally formed at the interface enters the undercooled liquid, achieves a high growth rate and branches continuously to form a dendritic skeleton.
4.3.3 Effect of Laser Energy on Microstructure
During the laser cladding process, the high-energy beam melts the powder and substrate simultaneously. The dilution of the substrate causes part of the steel to mix into the coating with the liquid flow. The final chemical composition of the molten zone is related to the dilution degree of the substrate.
The dimension of the solidification microstructure of deposited materials is determined by the cooling rate GR (the product of the temperature gradient G and the solidification rate R). The secondary dendrite arm spacing of the columnar and equiaxed dendrites reflects this dimension and can be predicted by the following equation [26]:
$$d=a{t}_{f}^{n}=b{({\varepsilon }_{C})}^{-n},$$
where tf is the local solidification time, εC is the cooling rate, and a, b and n are material-specific constants. In the solidification process, a slower cooling rate and a longer coarsening time develop coarser structures with larger dendrite arms. Therefore, a higher laser energy density stores more energy in the composite structure, which makes the structures coarsen.
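A minimal numerical illustration of Eq. (3) is given below. The constants b and n are hypothetical placeholders (the paper does not report fitted values for NAB), so the absolute spacings are indicative only; the point is the power-law decrease of the arm spacing with cooling rate.

```python
# Minimal sketch of Eq. (3): secondary dendrite arm spacing versus cooling rate,
# d = b * (eps_C)**(-n). The constants b and n are hypothetical placeholders.

def arm_spacing_um(cooling_rate_K_s, b=50.0, n=0.33):
    """Secondary dendrite arm spacing in micrometres (assumed constants)."""
    return b * cooling_rate_K_s ** (-n)

for rate in (1e3, 1e4, 1e5):  # cooling rates, K/s
    print(f"eps_C = {rate:.0e} K/s -> d = {arm_spacing_um(rate):.2f} um")
```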
4.4 Relationship of Mechanical Properties and Microstructure
Table 4 lists the chemical compositions of six types of NAB alloys. Compared with UNS C95500 alloy [22] and the UNS C95800 specimen cut from propeller blades [27], the hardness of the laser coatings is higher than that of NAB materials processed by traditional methods, although some abnormal values exist at points in Figure 10.
Table 4 Hardness of UNS ASTM NAB and as-deposited NAB
The mass transfer and composition mixing of the powder and substrate in the molten pool have an important effect on the hardness. In general, the hardness of the α phase is low, that of the β' phase is higher, and that of the κ phases, a series of Al(Fe, Ni) intermetallic compounds, is the highest. Three strengthening mechanisms, namely solid solution strengthening, precipitation strengthening and dispersion strengthening, together with one weakening mechanism, are closely related to the hardness of the copper alloy cladding layer. Solute elements such as Al and Ni are present in the α phase, and these solute atoms hinder dislocation motion, so solid solution strengthening operates. In addition, many spherical particles that did not have enough time to merge act as a reinforcement phase embedded in the matrix and prevent dislocation migration, increasing the hardness of the coating. Al, Ni and Fe form the hard κ phases, which are dispersed in the coating matrix and produce second-phase precipitation strengthening [28, 29]. An appropriate amount of Fe and Ni added to aluminum bronze can refine the grains, and this fine-grain strengthening improves the strength and hardness. Due to the dilution of the substrate, a large amount of Fe-Cr melt mixes into the molten pool and solidifies into dendritic structures. At points such as 1-1 and 2-2, the laser energy applied in these areas happens to correspond to the higher-power parameters, which leads to a higher degree of dilution and introduces more Fe-rich melt into the coating. The iron dendrite phases are dense and well developed in these regions, resulting in a low local hardness.
As can be seen, S5 has good tensile properties. However, the flat regions on S6 reveal the origin of the fracture: there is a possibility of unmelted powder with this laser parameter. Furthermore, the next layer is deposited on a bumpy surface containing low-lying areas left by the overlap, which results in positive defocus of the laser. This situation provides insufficient laser energy density on the powder and would enlarge the incompletely melted region.
This study presents the influence of four different laser conditions on the microstructure and mechanical properties of nickel-aluminum bronze coatings. The following conclusions can be drawn.
A bimetallic steel-bronze structure with good metallurgical bonding and without pores, cracks or other defects can be obtained by laser depositing.
The microstructure of the NAB cladding was affected significantly by the laser thermal history. The grain size of the cladding near the interface is larger, while the grains incline to a needle-like shape in the upper zone.
The hardness of the NAB cladding was strongly dependent on the microstructure, especially on the mass fraction of Fe components, but not on the cooling rate. The hardness of the cladding section was about 204 HV0.2–266 HV0.2. A large number of Fe-rich dendrites, hard κ phases and spherical Fe-rich particles produced by liquid separation led to the fluctuation of the hardness values.
Tensile testing of the composite structure showed that it fractured in the bronze body rather than at the transition zone, so the interface strength is higher than 767.80 MPa. The characteristics of ductile fracture reveal that a good bond is formed between the two alloys by laser cladding.
E A Culpan, G Rose. Microstructural characterization of cast nickel aluminium bronze. Journal of Materials Science, 1978, 13: 1647–1657.
A Jahanafrooz, F Hasan, G W Lorimer, et al. Microstructural development in complex nickel aluminum bronzes. Metallurgical Transactions A, 1983, 14: 1951–1956.
I Richardson. Guide to nickel aluminium bronze for engineers. USA: Copper Development Association, 2016.
R Z Tian, Z T Wang. Handbook of copper alloy and its processing. Changsha: Central South University Press, 2007.
M Hauer, F Gärtner, S Krebs, et al. Process selection for the fabrication of cavitation erosion-resistant bronze coatings by thermal and kinetic spraying in maritime applications. Journal of Thermal Spray Technology, 2021, 30: 1310–1328.
L Dong, W Chen, L Hou, et al. Metallurgical process analysis and microstructure characterization of the bonding interface of QAl9-4 aluminum bronze and 304 stainless steel composite materials. Journal of Materials Processing Technology, 2016, 238: 325–332.
S Sebastian, V Suyamburajan. Microstructural analysis of diffusion bonding on copper stainless steel. Materials Today: Proceedings, 2021, 37: 1706–1712.
H Zhang, K Jiao, J Zhang, et al. Microstructure and mechanical properties investigations of copper-steel composite fabricated by explosive welding. Materials Science & Engineering A, 2018, 731: 278–287.
N Ciacotich, R U Din, J J Sloth, et al. An electroplated copper–silver alloy as antibacterial coating on stainless steel. Surface & Coatings Technology, 2018, 345: 96–104.
S Singh, H Singh. Effect of electroplated interlayers on bonding mechanism of cold-sprayed copper on SS316L steel substrate. Vacuum, 2020, 172: 109092.
Y Poo-arporn, S Duangnil, D Bamrungkoh, et al. Gas tungsten arc welding of copper to stainless steel for ultra-high vacuum applications. Journal of Materials Processing Technology, 2020, 277: 116490.
S Chen, J Huang, J Xia, et al. Microstructural characteristics of a stainless steel/copper dissimilar joint made by laser welding. Metallurgical and Materials Transactions A, 2013, 44: 3690–3696.
Y Meng, X Li, M Gao, et al. Microstructures and mechanical properties of laser-arc hybrid welded dissimilar pure copper to stainless steel. Optics and Laser Technology, 2019, 111: 140–145.
Y Li, S Dong, P He, et al. Microstructure characteristics and mechanical properties of new-type FeNiCr laser cladding alloy coating on nodular cast iron. Journal of Materials Processing Technology, 2019, 269: 163–171.
J Liu, H Liu, X Tian, et al. Microstructural evolution and corrosion properties of Ni-based alloy coatings fabricated by multi-layer laser cladding on cast iron. Journal of Alloys and Compounds, 2020, 822: 153708.
S Singh, M Kumar, G P S Sodhi, et al. Development of thick copper claddings on SS316L steel for In-vessel components of fusion reactors and copper-cast iron canisters. Fusion Engineering and Design, 2018, 128: 126–137.
Y Zou, B Ma, H Cui, et al. Microstructure, wear, and oxidation resistance of nanostructured carbide-strengthened cobalt-based composite coatings on Invar alloys by laser cladding. Surface & Coatings Technology, 2020, 381: 125188.
L Zhu, Y Liu, Z Li, et al. Microstructure and properties of Cu-Ti-Ni composite coatings on gray cast iron fabricated by laser cladding. Optics and Laser Technology, 2020, 122: 105879.
C V Hyatt, K H Magee, T Betancourt. The effect of heat input on the microstructure and properties of nickel aluminum bronze laser clad with a consumable of composition Cu-9.0Al-4.6Ni-3.9Fe-1.2Mn. Metallurgical and Materials Transactions A, 1998, 29: 1677–1690.
T Murray, S Thomas, Y Wu, et al. Selective laser melting of nickel aluminium bronze. Additive Manufacturing, 2020, 33: 101122.
C Dharmendra, B S Amirkhiz, A Lloyd, et al. Wire-arc additive manufactured nickel aluminum bronze with enhanced mechanical properties using heat treatments cycles. Additive Manufacturing, 2020, 36: 101510.
ASTM. Standard specification for aluminum-bronze sand castings. USA: ASTM B148-2014, 2014.
X Zhou, X Liu, D Zhang, et al. Balling phenomena in selective laser melted tungsten. Journal of Materials Processing Technology, 2015, 222: 33–42.
M Ma, Z Wang, X Zeng. A comparison on metallurgical behaviors of 316L stainless steel by selective laser melting and laser cladding deposition. Materials Science & Engineering A, 2017, 685: 265–273.
H Okamoto. Supplemental literature review of binary phase diagrams: Au-Dy, Au-Sc, Au-Yb, C-Hf, C-Ta, Cu-Fe, Dy-Mn, Er-Mn, Ho-Mn, Mn-Tb, Mn-Tm, and Sb-Sn. Journal of Phase Equilibria and Diffusion, 2017, 38: 160–170.
S Kou. Welding metallurgy. New Jersey: John Wiley & Sons, Inc., 2003.
C H Tang, F T Cheng, H C Man. Improvement in cavitation erosion resistance of a copper-based propeller alloy by laser surface melting. Surface and Coatings Technology, 2004, 182: 300–307.
Y Li, Y Lian, Y Sun. Cavitation erosion behavior of friction stir processed nickel aluminum bronze. Journal of Alloys and Compounds, 2019, 795: 233–240.
Y Zeng, F Yang, Z Chen, et al. Enhancing mechanical properties and corrosion resistance of nickel-aluminum bronze via hot rolling process. Journal of Materials Science & Technology, 2021, 61: 186–196.
Institute of Engineering Thermophysics, Chinese Academy of Sciences, Beijing, 100190, China
Lu Zhao, Baorui Du, Jun Yao & Kailun Li
University of Chinese Academy of Sciences, Beijing, 100049, China
Lu Zhao
Shenyang Dalu Laser Advanced Manufacturing Technology Innovation Co. Ltd, Shenyang, 110000, China
Haitao Chen
Institute of Science and Technology, China Three Gorges Corporation, Beijing, 100038, China
Ruochen Ding
Baorui Du
Jun Yao
Kailun Li
LZ was in charge of the whole trial and wrote the manuscript; BD conceived the study; JY, HC, RD and KL analyzed data; Additionally, KL contributed to the writing revisions. All authors read and approved the final manuscript.
Lu Zhao, born in 1986, is currently a PhD candidate at the University of Chinese Academy of Sciences and is also affiliated with the Institute of Engineering Thermophysics, Chinese Academy of Sciences. She received her master degree from Harbin Engineering University, China, in 2012. Her research interests include laser additive manufacturing with metallic materials and its applications. Tel: +86-15942336608.
Baorui Du, born in 1970, is currently a researcher at Institute of Engineering Thermophysics, Chinese Academy of Sciences. He received his master degree from Beihang University, China, in 1999. His research interests include additive manufacturing with metallic material, numerical control machining technology and their applications in the field of aviation.
Jun Yao, born in 1988, is currently an engineer at Institute of Engineering Thermophysics, Chinese Academy of Sciences. He received his doctoral degree on mechanical manufacture and automation from Beihang University, China, in 2018.
Haitao Chen, born in 1975, is currently a senior engineer at Shenyang Dalu Laser Advanced Manufacturing Technology Innovation Co. Ltd, China. His research interests include laser additive manufacturing with metallic material and its application.
Ruochen Ding, born in 1992, is an engineer at Institute of Science and Technology, China Three Gorges Corporation. She received her doctoral degree on engineering thermophysics from University of Chinese Academy of Sciences, in 2021. E-mail: [email protected].
Kailun Li, born in 1993, is currently an engineer at Institute of Engineering Thermophysics, Chinese Academy of Sciences. He received his doctoral degree on materials science and engineering from Tsinghua University, China, in 2020.
Correspondence to Kailun Li.
The authors declare no competing financial interests.
Zhao, L., Du, B., Yao, J. et al. Microstructure and Mechanical Properties of Nickel-Aluminum Bronze Coating on 17-4PH Stainless Steel by Laser Cladding. Chin. J. Mech. Eng. 35, 140 (2022). https://doi.org/10.1186/s10033-022-00807-z
Laser direct depositing
Nickel-aluminum bronze
Microstructure
Liquid separation
Smart Material | CommonCrawl |
A predictable smoothing evolution model for computer-controlled polishing
Jing Hou1,
Pengli Lei1,
Shiwei Liu (ORCID: orcid.org/0000-0003-1939-5738)2,
Xianhua Chen1,
Jian Wang1,
Wenhui Deng1 &
Bo Zhong1
Quantitative prediction of the smoothing of mid-spatial frequency errors (MSFE) is urgently needed to realize process guidance for computer controlled optical surfacing (CCOS), rather than a merely qualitative analysis of the processing results. Consequently, a predictable time-dependent model combining process parameters and an error decreasing factor (EDF) is presented in this paper. The basic smoothing theory, the solution method and the modification of this model are expounded separately and verified by experiments. The experimental results show that the theoretically predicted curve agrees well with the actual smoothing effect. The smoothing evolution model provides theoretical support and guidance for the quantitative prediction and parameter selection of MSFE smoothing.
In the past few decades, computer controlled optical surfacing (CCOS) has been widely and successfully applied to the manufacture of optical components [1,2,3], providing a deterministic material removal technology for optical devices [4] such as small-sized optical lenses, large astronomical telescopes and high-power laser systems. Different processing methods are commonly used, covering CNC polishing, gasbag polishing, magnetorheological polishing, ion beam polishing, etc. [5,6,7] In some extreme optical systems, such as large-aperture telescope systems or nanoscale lithography systems, the surface errors of the optical components play a critical role in the imaging and operation quality of the entire system. Consequently, studying the formation mechanism and suppression methods of surface errors is of great significance for processing and manufacturing.
The surface errors of optical components can be classified into low-spatial frequency errors, mid-spatial frequency errors (MSFE) and high-spatial frequency errors according to the spatial frequency. The low-spatial frequency error is a shape error, which can introduce various aberrations and lead to image distortion of the optical system; the mid-spatial frequency error represents the ripples of the component surface, which result in small-angle scattering of light and affect the imaging contrast; the high-spatial frequency error represents the roughness of the surface, which causes large-angle scattering of light and reduces specular reflectance. Therefore, sub-band analysis and surface error control play a key role in the processing and evaluation of optical systems.
In recent years, much research has been devoted to the smoothing of surface errors. In 1981, Brown and Parks quantitatively explained the smoothing effect of elastically supported flexible abrasive belts [8]. Mehta and Reid first proposed flexible pads in 1990 and built bridge models based on elastic theory [9, 10]. After 2010, Kim did a great deal of research on RC pads, and a parametric mathematical model was proposed based on the bridge model to describe the polishing effect and efficiency of various polishing processes [11, 12]. Later, Y. Shu pointed out that Kim's model gave a flat, uniform smoothing factor (SF) by ignoring the time-varying characteristics. When the smoothing time is considered in the polishing process, the evolution of the surface errors is revealed and an exponentially decreasing curve is obtained [13]. By comparing different polishing pad motions, it was pointed out that the smoothing curve under double planetary motion drops faster and the smoothing limit value is smaller, indicating that a double planetary motion polishing pad has a better smoothing effect. At the same time, progress has also been made by adding random polishing paths to the smoothing process [14, 15]. Nie analyzed the smoothing effect of irregular ripples with the finite element method [16].
Using some of the theoretical models mentioned above, the effects of different process parameters on the smoothing process have also been studied. Zhang compared smoothing experiments with a pitch (asphalt) disk and a polyurethane pad at the same speed, finding that the pitch pad had a better smoothing effect than the polyurethane pad and that a higher speed resulted in higher smoothing efficiency [17]. With the parametric model, Kim compared the smoothing factors of different polishing tools, finding that elastic materials with harder surfaces have a better smoothing effect [18]. Nie found that the polishing pad groove does affect the material removal and, by changing the slotting method of the polishing pad groove, that the radial slotting method has a better smoothing effect [19].
In this paper, the smoothing mechanism of mid-spatial frequency errors in computer-controlled polishing with pitch pads is studied. Based on the existing theoretical models, the smoothing theory is further extended and derived. Considering the actual polishing process, a predictable smoothing evolution model is established. This model is expected to provide more precise guidance and prediction for the actual smoothing of surface ripple errors.
Smoothing theory
It is very important to establish a reasonable and effective mathematical model of the smoothing effect in the computer-controlled polishing process. As mentioned above, several studies on the smoothing effect of elastic tools already exist.
As shown in Fig. 1(a), the ripple errors on the surface cause uneven contact between the polishing pad and the workpiece, resulting in an inhomogeneous pressure distribution. The peak of the ripple bears an additional pressure Padd compared with the trough. According to the Preston equation, the material removed during polishing is proportional to the polishing pressure; therefore, more material is removed at the peak of the ripple than at the trough. As a result, the workpiece becomes smoother, thereby achieving the smoothing of the surface errors. The Preston equation shows that the material removal of the workpiece satisfies:
$$ \Delta \varepsilon =K\cdot {P}_{add}\cdot v\cdot \Delta t $$
a A pitch polishing pad in contact with a workpiece carrying ripple errors. b Experiment platform
where Δε is the change in the amplitude of the ripple errors after a single smoothing run, K is a constant parameter, v is the relative speed between the polishing pad and the workpiece, and Δt is the duration of a single smoothing run on the existing surface errors.
For visco-elastic polishing tools, such as pitch pads or RC pads, a parametric smoothing model [11] has been established:
$$ SF=\frac{\Delta \varepsilon }{\Delta Z}=k\cdot \left({\varepsilon}_{ini}-{\varepsilon}_0\right) $$
where SF is the smoothing factor, defined as the ratio of Δε (the change in the amplitude of the ripple errors after a single smoothing run) to ΔZ (the depth of material removed from the workpiece surface after a single smoothing run under the premise that the contact pressure at the peak and the trough of the ripple errors is the same). To a certain extent, ΔZ can be understood as the depth of material removed from a workpiece surface without ripple errors under the same processing conditions. εini is the initial amplitude of the ripple errors before smoothing and ε0 is the final amplitude after multiple smoothing runs, which indicates the limit of the smoothing process: once the ripple amplitude decreases to ε0, it no longer changes as the smoothing time increases. The magnitude of SF characterizes the ability of the polishing process to smooth the ripple errors. It can be seen that the smoothing factor SF is linearly correlated with the ripple error amplitude of the workpiece surface, with proportionality coefficient k. In the parametric smoothing model:
$$ k=\frac{\kappa_{total}}{P} $$
$$ \frac{1}{\kappa_{total}}=\frac{1}{\kappa_{elastic}}+\frac{1}{\kappa_{others}} $$
where P is the pressure between the polishing pad and the workpiece, κtotal is the material coefficient of the polishing pad, which is related to the elastic material coefficient κelastic and the overall material coefficient κothers of other structures.
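For illustration, Eqs. (2)–(4) can be evaluated directly: κtotal is the series (reciprocal-sum) combination of the two coefficients, and SF decreases linearly as the ripple amplitude approaches ε0. The sketch below uses assumed coefficients, pressure and amplitudes purely to show the linear relationship.

```python
# Minimal sketch of the parametric model, Eqs. (2)-(4). All numbers are
# illustrative assumptions, not measured values.

def kappa_total(kappa_elastic, kappa_others):
    """Series combination of Eq. (4)."""
    return 1.0 / (1.0 / kappa_elastic + 1.0 / kappa_others)

def smoothing_factor(eps, eps_0, kappa_tot, pressure):
    """SF of Eq. (2) with slope k = kappa_total / P from Eq. (3)."""
    return (kappa_tot / pressure) * (eps - eps_0)

kt = kappa_total(kappa_elastic=0.5, kappa_others=0.4)   # assumed pad coefficients
for eps in (0.30, 0.20, 0.10):                          # ripple amplitude, um (assumed)
    sf = smoothing_factor(eps, eps_0=0.05, kappa_tot=kt, pressure=0.01)
    print(f"eps = {eps:.2f} um -> SF = {sf:.2f}")
```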
The parametric smoothing model indicates that the smoothing factor is related to the material parameters and the polishing pressure. However, it is difficult for this model to reflect the relationship between the smoothing parameters and the rest of the process, and it is not straightforward to infer the material removal rate of the actual polishing process, because no factor is included to describe the evolution of the surface error with time. Therefore, based on the parametric model, a time-dependent smoothing evolution model has also been proposed and applied [13].
Using the mathematical expression of the smoothing factor in the parametric smoothing model, the following equation can be obtained:
$$ \frac{d\varepsilon}{d Z}=\frac{d\left(\varepsilon -{\varepsilon}_0\right)}{d Z}=-k\cdot \left(\varepsilon -{\varepsilon}_0\right) $$
$$ \varepsilon =\left({\varepsilon}_{ini}-{\varepsilon}_0\right)\cdot {e}^{-k\cdot Z}+{\varepsilon}_0 $$
where ε is the amplitude of the ripple errors on the workpiece surface after several smoothing runs, and Z is the total material removal depth of a workpiece surface without ripple errors under the same smoothing time t and processing conditions.
With Z expressed through the Preston equation, similarly to Eq. 1, and combined with Eq. 3, the above equation yields:
$$ \varepsilon =\left({\varepsilon}_{ini}-{\varepsilon}_0\right)\cdot {e}^{-{\kappa}_{total}\cdot K\cdot v\cdot t}+{\varepsilon}_0 $$
The smoothing model represented by Eq. 7 reveals that the surface ripple errors converge exponentially with time during the smoothing process. In general applications of the model, the data points of the polishing process are fitted by an exponential function, and the smoothing efficiency is measured by the obtained fitting parameters. However, how to calculate a more accurate predicted smoothing curve from the various process parameters of the model is still an unsolved problem. The theoretical model established next seeks the relationship between the practical polishing parameters and the smoothing efficiency by analyzing the actual smoothing parameters in computer-controlled polishing.
Predictable evolution smoothing model
Starting from Eq. 6, the material removal depth Z under different processing conditions can be obtained by relatively accurate modeling and simulation of the theoretical tool influence function (TIF). In computer-controlled polishing, the polishing pad usually moves in a specific mode; the most common mode for a pitch pad is double planetary motion, as shown in Fig. 2.
The double planetary motion of the polishing pad, where R is the radius of the polishing pad. The polishing pad has a revolution ω1 around a certain circle O at a certain eccentricity e, and a rotation with angular velocity ω2. The distance between the polishing point A and the center O is r. Since the polishing pad motion has both revolution and rotation, with the corresponding linear velocities being v1 and v2, respectively, the velocity v of the polishing pad relative to the workpiece is the vector sum of v1 and v2. The angle between v1 and v2 is β. The angle between O, A and O′ is α
The polishing velocity v of any contact point (r, α) on the polishing pad surface varies with the locations, which can be expressed as:
$$ v\left(r,\alpha \right)={\omega}_1\sqrt{r^2{\left(1+n\right)}^2+{e}^2{n}^2-2 ren\left(1+n\right)\cos \alpha } $$
$$ n=\frac{\omega_2}{\omega_1} $$
Under a uniform pressure distribution of the polishing layer, the TIF of the double planetary motion pad (the average removal within a period T) satisfies:
$$ {\displaystyle \begin{array}{c} TIF=K\cdot P\cdot v\\ {}=K\cdot P\cdot \frac{\underset{-\theta }{\overset{\theta }{\int }}v\left(r,\alpha \right) d\alpha}{T}\\ {}=K\cdot P\cdot \frac{\omega_1}{2\pi}\underset{-\theta }{\overset{\theta }{\int }}v\left(r,\alpha \right) d\alpha \end{array}} $$
where the integration interval θ satisfies the following condition:
$$ \theta =\left\{\begin{array}{c}2\pi \\ {}2\operatorname{arccos}\left(\frac{r^2+{e}^2-{R}^2}{2 re}\right)\\ {}0\end{array}\right.{\displaystyle \begin{array}{c}\\ {}\\ {}\end{array}}{\displaystyle \begin{array}{c}r\le R-e\\ {}R-e<r\le R+e\\ {}r>R+e\end{array}} $$
The TIF is related to the position r as can be seen from the above equation. Meanwhile, in the polishing process, the area of a 2-D TIF image contains all the points at which the polishing pad can produce material removal. Therefore, an overall analysis of the TIF area is carried out to establish a comprehensive average effect of material removal. Then the total volume removal rate (VRR) in the area of the tool influence function satisfies:
$$ {\displaystyle \begin{array}{c} VRR=\iint TIF\left(x,y\right) dxdy\\ {}=K\cdot P\cdot \frac{\omega_1}{2\pi}\underset{0}{\overset{2\pi }{\int }} d\phi \underset{0}{\overset{R}{\int }} rV(r) dr\\ {}=K\cdot P\cdot {\omega}_1\underset{0}{\overset{R}{\int }} rV(r) dr\end{array}} $$
$$ V(r)=\frac{1}{\omega_1}\underset{-\theta }{\overset{\theta }{\int }}v\left(r,\alpha \right) d\alpha =\underset{-\theta }{\overset{\theta }{\int }}\sqrt{r^2{\left(1+n\right)}^2+{e}^2{n}^2-2 ren\left(1+n\right)\cos \alpha } d\alpha $$
During the actual polishing process, due to the non-uniformity of the velocity distribution generated by the polishing pad movement mode, the material removal depth at different positions in a specific dwell time is different. Hence, the average removal depth of each polishing point is taken as the total material removal depth Z of the workpiece to achieve an objective consideration of the smoothing effect. Z satisfies the following equation:
$$ Z=\frac{VRR}{\pi {R}^2}\cdot t=K\cdot P\cdot \frac{\omega_1}{\pi {R}^2}\underset{0}{\overset{R}{\int }} rV(r) dr\cdot t $$
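The integrals in Eqs. (11)–(14) have no convenient closed form, but they are easy to evaluate numerically. The sketch below transcribes them directly; the pad radius, eccentricity, revolution speed, rotation ratio and the lumped Preston constant K·P are illustrative assumptions, so the printed numbers only demonstrate the procedure.

```python
# Numerical sketch of Eqs. (8)-(14): relative-speed integral V(r), volume
# removal rate VRR and average removal depth Z for a double planetary pad.
# Geometry, kinematics and K*P below are assumed, illustrative values.
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule, kept local to avoid NumPy version differences."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def V_of_r(r, R, e, n_ratio, n_alpha=2001):
    """Eq. (13): integral of the relative-speed term over the contact arc [-theta, theta]."""
    if r > R + e:
        return 0.0
    if r <= R - e:
        theta = 2.0 * np.pi
    else:
        theta = 2.0 * np.arccos(np.clip((r**2 + e**2 - R**2) / (2.0 * r * e), -1.0, 1.0))
    alpha = np.linspace(-theta, theta, n_alpha)
    integrand = np.sqrt(r**2 * (1.0 + n_ratio)**2 + e**2 * n_ratio**2
                        - 2.0 * r * e * n_ratio * (1.0 + n_ratio) * np.cos(alpha))
    return trapezoid(integrand, alpha)

def average_removal_depth(KP, omega1, R, e, n_ratio, t, n_r=401):
    """Eqs. (12) and (14): volume removal rate and average removal depth Z."""
    r = np.linspace(1e-6, R, n_r)
    Vr = np.array([V_of_r(ri, R, e, n_ratio) for ri in r])
    vrr = KP * omega1 * trapezoid(r * Vr, r)
    return vrr, vrr / (np.pi * R**2) * t

# Assumed values: pad radius 20 mm, eccentricity 5 mm, revolution 60 rpm
# (omega1 = 2*pi rad/s), rotation/revolution ratio n = 2, lumped K*P constant.
VRR, Z = average_removal_depth(KP=1.0e-3, omega1=2.0*np.pi, R=20.0, e=5.0, n_ratio=2.0, t=60.0)
print(f"VRR = {VRR:.4g}   average removal depth Z = {Z:.4g}")
```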
Substituting Eq.3 and Eq.14 into Eq.6, a complete multi-parametric smoothing model can be obtained as follows:
$$ {\displaystyle \begin{array}{c}\varepsilon =\left({\varepsilon}_{ini}-{\varepsilon}_0\right){e}^{-{\kappa}_{total}\cdot \frac{VRR}{P\cdot \pi {R}^2}\cdot t}+{\varepsilon}_0\\ {}=\left({\varepsilon}_{ini}-{\varepsilon}_0\right){e}^{-{\kappa}_{total}\cdot K\cdot \frac{\omega_1}{\pi {R}^2}\underset{0}{\overset{R}{\int }} rV(r) dr\cdot t}+{\varepsilon}_0\end{array}} $$
An error decreasing factor (EDF) is defined to characterize the efficiency of the exponential convergence over time of the surface ripple errors of the workpiece during the smoothing process, and its equation satisfies:
$$ EDF={\kappa}_{total}\cdot \frac{VRR}{P\cdot \pi {R}^2}={\kappa}_{total}\cdot K\cdot \frac{\omega_1}{\pi {R}^2}\underset{0}{\overset{R}{\int }} rV(r) dr $$
In this way, the predictable smoothing evolution model Eq.15 with complete parameters of the entire polishing process is simplified as:
$$ \varepsilon =\left({\varepsilon}_{ini}-{\varepsilon}_0\right){e}^{- EDF\cdot t}+{\varepsilon}_0 $$
In this model, the surface ripple errors converge exponentially with an efficiency that depends on the magnitude of the EDF: a larger EDF implies higher efficiency. The convergence curve of the whole smoothing process, and hence the required volume removal rate, can be theoretically predicted from the given process parameters. However, due to the instability of the pitch layer and the inhomogeneity of the pressure distribution, the actual removal rate might deviate from the theoretical prediction. Therefore, it is necessary to use the volume removal rate measured from an actual polishing spot when calculating the EDF from Eq. 16.
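Once a volume removal rate has been obtained, either from the integration above or from a measured polishing spot, Eqs. (16) and (17) give the predicted convergence curve directly. The following sketch uses assumed values for the material coefficient, removal rate, pressure, pad radius and ripple amplitudes.

```python
# Minimal sketch of Eqs. (16)-(17): error decreasing factor and predicted
# ripple amplitude versus smoothing time. All values are assumed placeholders.
import math

def error_decreasing_factor(kappa_total, vrr, pressure, pad_radius):
    """EDF of Eq. (16)."""
    return kappa_total * vrr / (pressure * math.pi * pad_radius**2)

def ripple_amplitude(t, eps_ini, eps_0, edf):
    """Exponential convergence of Eq. (17)."""
    return (eps_ini - eps_0) * math.exp(-edf * t) + eps_0

EDF = error_decreasing_factor(kappa_total=0.2, vrr=1.0, pressure=0.005, pad_radius=20.0)
for minute in (0, 10, 20, 40, 80):
    eps = ripple_amplitude(minute, eps_ini=30.0, eps_0=5.0, edf=EDF)
    print(f"t = {minute:3d} min -> predicted ripple amplitude = {eps:.1f} nm")
```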
Correction of EDF solution process
According to the parameterized smoothing theory, κtotal is composed of κelastic and κothers. The elastic coefficient κelastic of the pitch layer is related to the spatial frequency f of the workpiece surface ripple errors [18], while κothers is possibly affected by the geometry of the polishing tool itself, the material, the polishing slurry, and also the spatial frequency f of the ripple errors. Therefore, a parameter C, called the slope correction factor, is used instead of κothers [11].
$$ {\kappa}_{total}=\frac{1}{\frac{1}{\kappa_{elastic}(f)}+\frac{1}{C(f)}} $$
The influence of the spatial frequency f of the ripple errors on the factor EDF will be discussed in the experimental part.
In the parametrized smoothing model, according to Eq. 2 and Eq. 3, fitting a series of continuous experimental data of the smoothing factor SF and the surface ripple errors, as shown in Fig. 3(a), yields a straight line with fitting slope k, as shown in Fig. 3(b). Then κtotal can be calculated from k and the polishing pressure P using Eq. 3. At the same time, the convergence curve of the ripple errors over the whole smoothing process can be inferred from the smoothing model. However, comparing with the experimental results, there is a certain difference between the κtotal calculated from the slope k and the κtotal obtained by back-calculation from the experiments, which leads to a deviation between the predicted curve and the actual smoothing curve. Hence, it is of vital importance to quantitatively analyze and modify the solution process of the EDF based on the parameterized model in combination with the experimental observations.
a The convergence curve of ripple errors during the actual smoothing process. b The corresponding data points in the parametric model
Several short pre-processing runs are usually carried out to predict the smoothing effect, and the actual smoothing factor SF is then calculated from these data. As shown in Fig. 3(a), during the pre-polishing process the surface ripple errors converge exponentially; the actual experimental data are only a series of points on the curve at equal time intervals, denoted data 1, data 2 and data 3. The linear fit corresponding to these data points is shown in Fig. 3(b). According to the definition of the smoothing factor SF in Eq. 2 and Eq. 17, for data 1 the following relationship between the smoothing factor SF and the error decreasing factor EDF can be obtained:
$$ {SF}_1=\frac{\Delta \varepsilon }{\Delta Z}=\frac{\varepsilon_1-{\varepsilon}_2}{\frac{VRR}{\pi {R}^2}\left({t}_2-{t}_1\right)}=\frac{Ae^{- EDF\cdot {t}_1}-{Ae}^{- EDF\cdot {t}_2}}{\frac{VRR}{\pi {R}^2}\left({t}_2-{t}_1\right)}=\frac{Ae^{- EDF\cdot {t}_1}\left(1-{e}^{- EDF\cdot \Delta t}\right)}{\frac{VRR}{\pi {R}^2}\Delta t} $$
Similarly, at the point of data 2, SF satisfies:
$$ {SF}_2=\frac{Ae^{- EDF\cdot {t}_2}\left(1-{e}^{- EDF\cdot \Delta t}\right)}{\frac{VRR}{\pi {R}^2}\Delta t}=\frac{Ae^{- EDF\cdot {t}_1}{e}^{- EDF\cdot \Delta t}\left(1-{e}^{- EDF\cdot \Delta t}\right)}{\frac{VRR}{\pi {R}^2}\Delta t} $$
Then the slope of the line shown in Fig. 3 (b) satisfies:
$$ {k}_0=\frac{SF_1-{SF}_2}{\varepsilon_1-{\varepsilon}_2}=\frac{Ae^{- EDF\cdot {t}_1}{\left(1-{e}^{- EDF\cdot \Delta t}\right)}^2}{\frac{VRR}{\pi {R}^2}{Ae}^{- EDF\cdot {t}_1}\left(1-{e}^{- EDF\cdot \Delta t}\right)}=\frac{1-{e}^{- EDF\cdot \Delta t}}{\frac{VRR}{\pi {R}^2}\Delta t} $$
Substituting the expression of EDF in Eq.16 into the above equation gives:
$$ {k}_0=\frac{1-{e}^{-\frac{\kappa_{total}}{P}\cdot \frac{VRR}{\pi {R}^2}\Delta t}}{\frac{VRR}{\pi {R}^2}\Delta t} $$
It can be seen that, with the process parameters kept the same, the fitting slope k0 changes with the single pre-processing time Δt. As a consequence, the κtotal obtained from Eq.3 by combining the fitting slope k0 with the polishing pressure P also becomes a variable that depends on Δt, which contradicts the physical meaning of the process: κtotal is defined as the material coefficient of the polishing pad and is usually constant unless the structure or material of the pad is changed. The abnormal experimental phenomena described above can therefore be reasonably explained by Eq.22. During the actual polishing process, the relationship between the true value of κtotal and the linear slope k0 obtained by fitting the parametric smoothing model no longer satisfies Eq.3; this does not mean that Eq.3 is unsuitable for the whole smoothing process. The slope k in Eq.3 is calculated by fitting multiple sets of data points obtained from the complete smoothing process (the ripple errors of the workpiece reach the smoothing limit from the initial amplitude through multiple smoothing runs). Several short-term smoothing runs during pre-processing, however, yield a fitted slope k0, calculated from fewer data points, that differs to some degree from the fitting slope k of the complete smoothing process. As a result, the κtotal (denoted κtotal_(unfixed) in the experimental part) and EDF (denoted EDF_(unfixed) in the experimental part) calculated by Eq.3 cannot match the actual values obtained from the smoothing process.
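To make this Δt dependence concrete, the short Python sketch below (not the authors' code) evaluates the slope k0 predicted by Eq.22 for several single pre-processing times. The numerical values are borrowed from the prediction experiment reported later in this paper (55 N on a pad of 25 mm radius, VRR = 1.445 × 10−1 mm3/min spread over a π × 30² mm² area, κtotal = 77.357 Pa·nm−1); the explicit unit conversions are assumptions made for this illustration.

```python
# Sketch of Eq. 22: how the fitted slope k0 varies with the single
# pre-processing time dt, even though kappa_total itself is fixed.
import math

kappa_total = 77.357                                   # Pa/nm (assumed, from Table 1)
pressure = 55.0 / (math.pi * 0.025 ** 2)               # Pa: 55 N over a 25 mm radius pad
depth_rate = 1.445e-1 / (math.pi * 30.0 ** 2) * 1e6    # nm/min: VRR / (pi * R^2), mm -> nm
edf = kappa_total / pressure * depth_rate              # Eq. 16, roughly 0.14 per minute

def k0_eq22(dt):
    """Fitted slope k0 predicted by Eq. 22 for a sampling interval dt in minutes."""
    return (1.0 - math.exp(-edf * dt)) / (depth_rate * dt)

for dt in (0.5, 1.0, 3.0, 10.0):
    print(f"dt = {dt:4.1f} min -> k0 = {k0_eq22(dt):.3e} nm^-1")
print(f"dt -> 0 limit (kappa_total / P) = {kappa_total / pressure:.3e} nm^-1")
```

The printed slope decreases as Δt grows and approaches κtotal/P only in the limit of a vanishing sampling interval, which is exactly the behavior discussed next.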
In fact, taking the limit of Eq.22 as the single pre-processing time Δt approaches zero gives:
$$ \underset{\Delta t\to 0}{\lim}\frac{1-{e}^{-\frac{\kappa_{total}}{P}\cdot \frac{VRR}{\pi {R}^2}\Delta t}}{\frac{VRR}{\pi {R}^2}\Delta t}=\frac{\kappa_{total}}{P} $$
Obviously, since the sampling interval Δt between two data points is not zero in actual polishing, the value of κtotal calculated from the fitted slope k and Eq.3 is smaller than its actual value. With multiple samplings and a sufficiently small sampling interval Δt, the above equation can be satisfied approximately, but this is hard to implement in the pre-processing stage. Calculating κtotal through Eq.22 instead avoids the influence of the sampling interval Δt on the solution process and yields the true material coefficient, normalized with respect to the sampling interval, together with the error decreasing factor of the complete smoothing process. Because different polishing pads differ in structure and characteristics, the material properties need to be re-tested after a polishing pad is replaced for smoothing processing; that is, several short-time pre-processing runs are required. The real material properties of the current polishing pad can be obtained by substituting the calculated smoothing factor SF and the fitting slope k0 into Eq.22, and the convergence curve of the ripple errors over the whole smoothing process can then be simulated to predict and guide the smoothing processing. At this point, the modified smoothing model is complete.
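In practice this amounts to inverting Eq.22 for κtotal once k0 has been measured. A minimal numerical sketch of that inversion is given below; it is assumed code, not the authors' implementation, and the bisection bounds and the round-trip example values are illustrative.

```python
# Solve Eq. 22 for kappa_total given a measured fitted slope k0.
# k0 is monotonically increasing in kappa_total, so plain bisection suffices.
import math

def kappa_from_k0(k0_meas, pressure, depth_rate, dt, kappa_hi=1e4):
    """Return kappa_total (Pa/nm) such that Eq. 22 reproduces the measured slope."""
    def k0_model(kappa):
        edf = kappa / pressure * depth_rate
        return (1.0 - math.exp(-edf * dt)) / (depth_rate * dt)
    lo, hi = 1e-9, kappa_hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if k0_model(mid) < k0_meas:
            lo = mid            # need a larger kappa to reach the measured slope
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check with the illustrative numbers used above (dt = 3 min):
pressure = 55.0 / (math.pi * 0.025 ** 2)
depth_rate = 1.445e-1 / (math.pi * 30.0 ** 2) * 1e6
print(kappa_from_k0(k0_meas=2.251e-3, pressure=pressure, depth_rate=depth_rate, dt=3.0))
# prints roughly 77.3, i.e. the kappa_total that generated that slope
```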
Results and discussions
Smoothing contrast experiment of ripple errors with different spatial periods
In order to explore the relationship between the error decreasing factor (EDF) and the spatial frequency of the ripple errors, a set of smoothing contrast experiments was carried out on three pieces of fused silica (size 100 mm × 100 mm) with initial surface ripple errors of 3 mm, 5 mm and 7 mm spatial period, respectively, obtained through magnetorheological finishing pretreatment. These three fused silica components were subjected to a continuous polishing experiment on the same polishing platform (see Fig. 1(b)).
The experiment was performed with a pitch pad 35 mm in diameter, with the eccentricity set to 3 mm, the angular velocity of revolution set to 200 rpm and that of rotation to 20 rpm, the rotation direction being opposite to the revolution direction. On the surface of the component, the polishing pad travels along a raster path at a speed of 200 mm/min with a line spacing of 3 mm. The surfaces of the three fused silica components were examined initially and after each polishing run with an interferometer. For each fused silica component, the polishing experiment was stopped when the smoothing effect reached its limit.
Figure 4 shows the surface maps measured in the experiment for the ripple error with a 7 mm spatial period. Considering the edge effect in the polishing process and the uniformity of the material removal after superposition of the TIF, the data analyzed cover only the central area of the original fused silica element, 40 mm in diameter, which is about the same size as the TIF.
Fig. 4 Surface data of the workpiece with a spatial period of 7 mm, at the initial time and after the 3rd, 6th, 9th, 12th, 15th and the final polishing run
From the experiment, it is clear that the surface ripple errors gradually decrease as the smoothing process goes on. The RMS (root mean square) is reduced from an initial 32.41 nm to a final 1.57 nm, at which point the smoothing limit is reached and the mid-spatial frequency errors of the fused silica have been removed. The experimental images of the other two components, with ripple errors of 5 mm and 3 mm spatial period respectively, are similar to the image above. The data collected from the experiments on the three components were analyzed and plotted as shown in Fig. 5, with the RMS value as the ordinate and the number of smoothing runs as the abscissa, and an exponential fit was performed.
Fig. 5 Data points and the fitting curves of the three components with different spatial period ripple errors in the smoothing experiment
The fitting results for the data with 3 mm, 5 mm, and 7 mm spatial periods are as follows:
$$ {\displaystyle \begin{array}{l}{RMS}_{3 mm}=27.274{e}^{-0.630N}+0.853\ \left(\mathrm{nm}\right)\\ {}{RMS}_{5 mm}=28.859{e}^{-0.471N}+1.343\ \left(\mathrm{nm}\right)\\ {}{RMS}_{7 mm}=31.672{e}^{-0.183N}+0.738\ \left(\mathrm{nm}\right)\end{array}} $$
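For reference, exponential fits of this form can be reproduced with a few lines of Python. The sketch below is assumed code, not the authors' script: the `rms_7mm` array is a hypothetical stand-in generated from the 7 mm fit itself plus noise, included only to show the fitting call.

```python
# Fit RMS(N) = A * exp(-b * N) + C to measured RMS values per smoothing pass.
import numpy as np
from scipy.optimize import curve_fit

def rms_model(N, A, b, C):
    return A * np.exp(-b * N) + C

N_pass = np.arange(0, 16)                                  # smoothing pass index
rms_7mm = 31.672 * np.exp(-0.183 * N_pass) + 0.738         # placeholder "measurements"
rms_7mm += np.random.default_rng(0).normal(0.0, 0.3, N_pass.size)

(A_fit, b_fit, C_fit), _ = curve_fit(rms_model, N_pass, rms_7mm, p0=(30.0, 0.2, 1.0))
print(f"A = {A_fit:.2f} nm, decay per pass b = {b_fit:.3f}, smoothing limit C = {C_fit:.2f} nm")
# Dividing b by the average dwell time per pass (about 2 min here) gives an EDF per minute.
```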
It was found through the experiments that the workpiece with the 3 mm spatial period took the least time to reach the smoothing limit, while the workpiece with the 7 mm spatial period took the longest. In the experiment, the single smoothing time for each workpiece was set to 17 min, so the average smoothing time of each point in the experimental area was:
$$ \Delta t=\frac{\pi \times {20.5}^2}{100\times 100}\times 17\approx 2\min $$
Therefore, the fitted EDF values for the three curves are:
$$ {\displaystyle \begin{array}{l} EDF{(fitted)}_{3 mm}=0.310/\min \\ {} EDF{(fitted)}_{5 mm}=0.236/\min \\ {} EDF{(fitted)}_{7 mm}=0.092/\min \end{array}} $$
In fact, in the three sets of experiments all polishing process parameters are the same except for the initial spatial frequency of the surface ripple errors; that is to say, all terms of the EDF in the theoretical model are equal across the three experiments except for κtotal. According to Eq.17, the ripple frequency affects the elastic coefficient κelastic(f) of the pitch layer and the slope correction factor C. Comparing the EDF values of the three experiments, the higher the ripple spatial frequency, the larger the material coefficient κtotal and hence the larger the EDF. A larger EDF corresponds to a higher smoothing efficiency, so the convergence curve of the smoothing experiment falls faster.
Experimental verification of the correction effect of the parametric smoothing model
Before the experiment mentioned above, a set of polishing spots was obtained under the same polishing process parameters, as shown in Fig. 6. Due to the inhomogeneous pressure distribution of the actual polishing pad and the instability of the pitch layer, the TIF deviates somewhat from theory. In the calculation of the smoothing model, it is therefore the volume removal rate of the actual polishing spot that should be used, which was 2.06 × 10−2 mm3/min.
Fig. 6 Polishing spot from the experiment and the normalized TIF
According to the parametric smoothing model, a series of data points (PV) of the surface ripple errors on the experimental workpieces and the smoothing factor (SF) were plotted and fitted linearly, as shown in Fig. 7.
Fig. 7 Linear fitting of the smoothing factor against the surface ripple errors. In the experiment, the smoothing time of each workpiece was 17 min
The pressure loading during the experiment was always 5 N. With the fitted straight-line slope k0 shown in Table 1, the corresponding κtotal value in the experiment can be obtained using Eq.22. Substituting it into the smoothing model Eq.15, together with the parameters of the polishing pad used in the experiment, gives the calculated EDF values corresponding to each spatial frequency. In order to verify the correction effect of the parametric smoothing model, the original κtotal_(unfixed) calculated from Eq.3 and the corrected κtotal_(fixed) calculated according to Eq.22 are listed below. Finally, the error decreasing factors EDF_(fitted) fitted to the experimental data were compared with them, as shown in Table 1.
Table 1 Calculated parameter results
EDF_(unfixed) and EDF_(fixed) are the EDF values calculated from κtotal_(unfixed) and κtotal_(fixed), respectively. From the table, it can be seen that the corrected EDF_(fixed) is closer to the experimental results (with deviations within 4.5%), while the uncorrected EDF_(unfixed) deviates more from the actual curve convergence factor (with a maximum deviation of 38.1%). Therefore, it can be objectively concluded that κtotal_(fixed) calculated from the modified relationship Eq.22 is closer to the actual data, thereby confirming the accuracy of Eq.22 and the necessity of the correction to the parametric model.
Prediction of smoothing experiment
In order to further examine how well the theoretical smoothing model describes the actual smoothing process, another polishing experiment was performed on a round workpiece (Φ150 mm) with the initial surface ripple spatial interval set to 5 mm. A polishing pad with a diameter of 50 mm and an eccentricity of 5 mm was used in this process. Different from the previous experiments, the volume removal rate measured from the polishing spot before the smoothing experiment was VRR = 1.445 × 10−1 mm3/min, and the pressure applied during polishing was 55 N. The experimental result is shown in Fig. 8.
Fig. 8 The comparison of predicted results and experimental results
In Fig. 8, the red curve is the predicted curve calculated by the theoretical parameterized model. Since the material of the polishing pad selected in this experiment was the same as in the previous experiment, and the spatial interval of the surface ripple errors is 5 mm, the value of κtotal used in the theoretical prediction is 77.357 Pa·nm−1, as listed in Table 1. According to the parametric smoothing model, the theoretical error decreasing factor EDF2 calculated by Eq.15 satisfies:
$$ {EDF}_2=\frac{77.357 pa\cdot {nm}^{-1}}{\frac{55N}{\pi \times {25}^2{mm}^2}}\times \frac{1.445\times {10}^{-1}{mm}^3\cdot {\min}^{-1}}{\pi \times {30}^2{mm}^2}=0.141{\min}^{-1} $$
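The arithmetic in this EDF2 expression can be checked with a short script; the code below is an assumed illustration with the unit conversions written out explicitly (Pa·nm−1 for κtotal, Pa for the pressure, nm/min for the removal depth rate).

```python
# Numerical check of the EDF2 value above.
import math

kappa_total = 77.357                                  # Pa/nm, from Table 1 (5 mm period)
pressure = 55.0 / (math.pi * 0.025 ** 2)              # Pa: 55 N on a 25 mm radius pad
depth_rate = 1.445e-1 / (math.pi * 30.0 ** 2) * 1e6   # nm/min: VRR over pi * (30 mm)^2

edf2 = kappa_total / pressure * depth_rate
print(round(edf2, 3), "per minute")                   # ~0.141, matching the value above
```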
Similarly, the actual smoothing time of each point in the polishing area is 3 min, so the actual EDF2(fitted) obtained from the fitting curve of the experimental data is:
$$ {EDF}_2(fitted)=0.433\div 3=0.144{\min}^{-1} $$
This shows that the smoothing convergence curve obtained by theoretical calculation matches the experimental curve well within a certain uncertainty. In this experiment, the deviation of each data point from the theoretical prediction does not exceed 2%.
The uncertainties come from the fitted slope k0 and the instability of the experimental conditions: the slope k0 obtained from the linear fit of the experimental data carries some error, which propagates into the calculation of κtotal. At the same time, there is still a certain difference between the polishing pads of the two groups due to the instability of the pitch layer and the change in thickness, which can affect the actual material coefficient. In general, the theoretical model predicts the experimental results well and provides a predicted curve accurate enough to guide actual processing, including the selection and optimization of polishing spots for smoothing surface ripple errors.
Based on the urgent need for quantitative prediction of the smoothing effect, a new predictable smoothing evolution model for computer-controlled polishing was established. The main focus of this study was its basic smoothing theory, solution method and modification. Firstly, by combining the existing qualitative characterization of the convergence process of the ripple errors, a parametric smoothing model containing time variables and suitable for computer-controlled polishing was proposed. An error decreasing factor, EDF, was defined as an integrated factor that contains the various process parameters and characterizes the smoothing efficiency under a specific set of polishing conditions. From the few data points and process parameters obtained by short-time pre-processing before smoothing, the EDF and the theoretical error convergence curve of the current smoothing process can be calculated.
In view of the observed experimental phenomenon that the theoretically predicted curve deviates from the actual curve, the solution process of the parameterized smoothing model was revised and verified by experiments. After correction, the maximum deviation between the EDF obtained by theoretical calculation and the EDF obtained by experimental fitting is reduced from 38.1% to 4.5%. On this basis, a prediction experiment for an actual complete smoothing process was carried out using the parametric model, which showed that the predicted curve is in good agreement with the actual smoothing curve, with the deviation between theoretical data points and actual values no more than 2%.
The error decreasing factor EDF specifically includes the polishing pad geometry, rotational speed, material, and physicochemical environmental factors during the polishing process, and the spatial frequency of the ripple errors. It can be concluded from the smoothing experiments that the spatial frequency of the surface ripple error does have an effect on the smoothing efficiency. In the same smoothing environment, the smoothing efficiency of the ripple errors with a larger spatial frequency is higher, and the convergence curve falls faster.
The predictable smoothing evolution model proposed in this paper shows high accuracy and good universality, so that the research on the smoothing effect of ripple errors in computer-controlled polishing is no longer limited to qualitative analysis but can achieve quantitative prediction. In addition, it also provides a certain degree of theoretical support and guidance for the adjustment of the process conditions of smoothing the ripple errors.
Data will be shared after publication.
MSFE:
Mid-spatial frequency error
CCOS:
Computer-controlled optical surfacing
EDF :
Error decreasing factor
SF :
Smoothing factor
TIF :
Tool influence function
VRR :
Volume removal rate
Jones, R.A.: Optimization of computer-controlled polishing [J]. Appl. Opt. 16(1), 218–224 (1977)
Nelson, J., Sanders, G.H.: The status of the thirty meter telescope project [J]. Ground-based. Airborne Telescopes II. 7012, 70121A (2008)
Johns, M., McCarthy, P., Raybould, K.: Giant Magellan telescope: overview [J]. Ground-based. Airborne Telescopes IV. 8444, 84441H (2012)
Wagner, R.E., Shannon, R.R.: Fabrication of aspherics using a mathematical model for material removal [J]. Appl. Opt. 13(7), 1683–1689 (1974)
Wang, Y., Ni, Y., Yu, J.: Computer-controlled polishing technology for small aspheric lens [J]. Opt. Precis. Eng. 15(10), 1527–1533 (2007)
Wang, C., Wang, Z., Yang, X., et al.: Modeling of the static tool influence function of bonnet polishing based on FEA [J]. Int. J. Adv. Manuf. Technol. 74(1–4), 341–349 (2014)
Dong, Z., Cheng, H., Tam, H.: Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate [J]. Appl. Opt. 53(11), 2455–2464 (2014)
Brown, N.J., Baker, P.C., Parks, R.E.: Polishing-to-figuring transition in turned optics [J]. Contemporary Methods of Optical Fabrication. 306, 58–66 (1982)
Mehta, P.K., Reid, P.B.: Mathematical model for optical smoothing prediction of high-spatial-frequency surface errors [J]. Optomechanical Engineering and Vibration Control. 3786, 447–460 (1999)
Mehta, P.K., Hufnagel, R.E.: Pressure distribution under flexible polishing tools: I. Conventional aspheric optics [J]. Adv. Opt. Struct. Syst. 1303, 178–188 (1990)
Kim, D.W., Park, W.H., An, H.K., Burge, J.H.: Parametric smoothing model for visco-elastic polishing tools [J]. Opt. Express. 18(21), 22515–22526 (2010)
Kim, D.W., Martin, H., Burge, J.H.: Control of mid-spatial-frequency errors for large steep aspheric surfaces [J]. Optical Fabrication and Testing. OM4D–OM41D (2012)
Shu, Y., Nie, X., Shi, F., Li, S.: Smoothing evolution model for computer controlled optical surfacing [J]. J. Opt. Technol. 81(3), 164–167 (2014)
Dunn, C.R., Walker, D.D.: Pseudo-random tool paths for CNC sub-aperture polishing and other applications [J]. Opt. Express. 16(23), 18942–18949 (2008)
Tam, H.Y., Cheng, H., Dong, Z.: Peano-like paths for subaperture polishing of optical aspherical surfaces [J]. Appl. Opt. 52(15), 3624–3636 (2013)
Nie, X., Li, S., Shi, F., Hu, H.: Generalized numerical pressure distribution model for smoothing polishing of irregular mid-spatial frequency errors [J]. Appl. Opt. 53(6), 1020–1027 (2014)
Zhang, Y., Wei, C., Shao, J., et al.: Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS [J]. Optical Manufacturing and Testing XI. 9575, 95750D (2015)
Kim, D.W., Burge, J.H.: Rigid conformal polishing tool using non-linear visco-elastic effect [J]. Opt. Express. 18(3), 2242–2257 (2010)
Nie, X., Li, S., Hu, H., Li, Q.: Control of mid-spatial frequency errors considering the pad groove feature in smoothing polishing process [J]. Appl. Opt. 53(28), 6332–6339 (2014)
This work was financially supported by the Science Challenge Project (grant numbers: No.TZ2016006–0502-01) and High-grade CNC Machine Tool and Basic Manufacturing Equipment Project (grant numbers: No. 2017ZX04022001–101).
Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang, 621900, China
Jing Hou, Pengli Lei, Xianhua Chen, Jian Wang, Wenhui Deng & Bo Zhong
School of Mechatronics & Engineering, Harbin Institute of Technology, Harbin, 150001, China
Shiwei Liu
Jing Hou and Pengli Lei put forward the predictable evolution smoothing model, Shiwei Liu modified the solution of κtotal, and Xianhua Chen and others assisted in the verification of the smoothing model and the verification of the smoothing effect of ripples of different periods. The author(s) read and approved the final manuscript.
Correspondence to Shiwei Liu.
Hou, J., Lei, P., Liu, S. et al. A predictable smoothing evolution model for computer-controlled polishing. J. Eur. Opt. Soc.-Rapid Publ. 16, 23 (2020). https://doi.org/10.1186/s41476-020-00145-4
Keywords: Mid-spatial frequency errors; Computer-controlled polishing; Quantitative prediction
January 23, 2019 | cdsinclair | Administrative, Math, PTR
Post-tenure Review Statement
I study the distribution of algebraic numbers, mathematical statistical physics and roots/eigenvalues of random polynomials/matrices.
1. The distribution of values of the non-archimedean absolute Vandermonde determinant and the non-archimedean Selberg integral (with Jeff Vaaler). The Mellin transform of the distribution function of the non-archimedean absolute Vandermonde (on the ring of integers of a local field) is related to a non-archimedean analog of the Selberg/Mehta integral. A recursion for this integral allows us to find an analytic continuation to a rational function on a cylindrical Riemann surface. Information about the poles of this rational function allows us to draw conclusions about the range of values of the non-archimedean absolute Vandermonde.
2. Non-archimedean electrostatics. The study of charged particles in a non-archimedean local field whose interaction energy is proportional to the log of the distance between particles, at fixed coldness $\beta$. The microcanonical, canonical and grand canonical ensembles are considered, and the partition function is related to the non-archimedean Selberg integral considered in 1. Probabilities of cylinder sets are explicitly computable in both the canonical and grand canonical ensembles.
3. Adèlic electrostatics and global zeta functions (with Joe Webster). The non-archimedean Selberg integral/canonical partition function are examples of Igusa zeta functions, and as such local Euler factors in a global zeta function. This global zeta function (the exact definition of which is yet to be determined) is also the partition function for a canonical electrostatic ensemble defined on the adèles of a number field. The archimedean local factors relate to the ordinary Selberg integral, the Mehta integral, and the partition function for the complex asymmetric $\beta$ ensemble. The dream would be a functional equation for the global zeta function via Fourier analysis on the idèles, though any analytic continuation would tell us something about the distribution of energies in the adèlic ensemble.
4. Pair correlation in circular ensembles when $\beta$ is an even square integer (with Nate Wells and Elisha Hulbert). This can be expressed in terms of a form in a grading of an exterior algebra, the coefficients of which are products of Vandermonde determinants in integers. Hopefully an understanding of the asymptotics of these coefficients will lead to scaling limits for the pair correlation function for an infinite family of coldnesses via hyperpfaffian/Berezin integral techniques. This would partially generalize the Pfaffian point process arising in COE and CSE. There is a lot of work to do, but there is hope.
5. Martingales in the Weil height Banach space (with Nathan Hunter). Allcock and Vaaler produce a Banach space in which $\overline{\mathbb Q}^{\times}/\mathrm{Tor}$ embeds densely in a co-dimension 1 subspace, the (Banach space) norm of which extends the logarithmic Weil height. Field extensions of the maximal abelian extension of $\mathbb Q$ correspond to $\sigma$-algebras, and towers of fields to filtrations. Elements in the Banach space (including those from $\overline{\mathbb Q}^{\times}/\mathrm{Tor}$) represent random variables, and the setup is ready for someone to come along and use martingale techniques—including the optional stopping time theorem—to tell us something about algebraic numbers.
I have three current PhD students and one current departmental Honors student. I have supervised two completed PhDs and six completed honors theses. You can find a list of current and completed PhD and honors students on my CV.
My teaching load has been reduced for the last five years (or so) due to an FTE release for serving on the Executive Council of United Academics. As President of United Academics and Immediate Past President of the University Senate, I am not teaching during the 2018 academic year. In AY2019, I am scheduled to teach a two-quarter sequence on mathematical statistical physics.
I take my teaching seriously. I prepare detailed lecture notes for most courses (exceptions being introductory courses, where my notes are better characterized as well-organized outlines). When practical and appropriate I use active learning techniques, mostly through supervised group work. I am a tough, but fair grader.
Service encompasses pretty much everything that an academic does outside of teaching and research. This includes advising, serving on university and departmental committees, reviewing papers, writing letters of recommendation, organizing seminars and conferences, serving on professional boards, etc. The impossibility of doing it all allows academics to decide what types of service they are going to specialize in, based on their interests and abilities.
I have spent the last three years heavily engaged in university level service. I currently serve as the president of United Academics of the University of Oregon, and I am the immediate-past president of the University Senate. Before that I was the Vice President of the Senate and the chair of the Committee on Committees. All of these roles are difficult and require a large investment of thought and energy. The reward for this hard work is a good understanding of how the university works, who to go to when issues need resolution, and who can be safely ignored.
I know what academic initiatives are underway, being involved in several of them. I am spearheading, with the new Core Education Council, the reform of general education at UO. I am working on the New Faculty Success Program—an onboarding program for new faculty—with the Office of the Provost and United Academics. I am currently on the Faculty Salary Equity Committee and its Executive Committee. I have been a bit player in many other projects and initiatives including student evaluation reform, the re-envisioning of the undergraduate multicultural requirement, and the creation of an expedited tenure process to allow the institution alacrity when recruiting eminent scholars. This list is incomplete.
Next year, with high probability, I will be the chair of the bargaining committee for the next collective bargaining agreement between United Academics and the University of Oregon (this assumes I am elected UA president). I will also be working with the Core Ed Council to potentially redefine the BA/BS distinction, with a personal focus on ensuring quantitative/data/information literacy is distributed throughout our undergraduate curriculum. I will also be working to help pilot (and hopefully scale) the Core Ed "Runways" (themed, cohorted clusters of gen ed courses) with the aspirational goal of having 100% of traditional undergraduates in a high-support, high-engagement, uniquely-Oregon first-year experience within the next 3-5 years.
As important as the service I am doing, is the service I am not doing. I do little to no departmental service (though part of this derives from the CAS dean's interpretation of the CBA) and I avoid non-required departmental functions (for reasons). I do routinely serve on academic committees for graduate/honors students, etc. I decline most requests to referee papers/grants applications, and serve on no editorial boards. The national organizations for which I am an officer are not mathematical organizations, but rather organizations dedicated to shared governance.
The two principles which drive my professional work are truth and fairness.
I remember after a particularly troubling departmental vote, a senior colleague attempted to assuage my anger at the department by explaining that "the world is not fair." I hate this argument because it removes responsibility from those participating in such decisions, and places blame instead on a stochastic universe. And, while there is stochasticity in the universe, we should be working toward ameliorating inequities caused by chance, and in instances where we have agency, making decisions which do not compound them.
I do not think the department does a very good job of recognizing or ameliorating inequities. Indeed, there are individuals, policies and procedures that negatively impact diversity. See my recent post Women & Men in Mathematics for examples.
My work on diversity and equity issues has been primarily through the University Senate and United Academics. As Vice-president of the UO Senate, I sat on the committee which vetted the Diversity Action Plans of academic units. I also worked on, or presided over several motions put forth by the University Senate which address equity, diversity and inclusion. Obviously, the work of the Senate involves many people, and in many instances I played only a bit part, but nonetheless I am proud to have supported/negotiated/presided over the following motions which have addressed diversity and equity issues on campus:
Implementing A System for the Continuous Improvement and Evaluation of Teaching
Proposed Changes to Multicultural Requirement
Resolution denouncing White Supremacy & Hate Speech on Campus
Proposed Change to Admissions Policies Requiring Disclosure of Criminal and Disciplinary Hearing
A Resolution in support of LGBTQAI Student Rights
Declaring UO a Sanctuary Campus
Reaffirming our Shared Values of Respect for Diversity, Equity, and Inclusion
Student Sexual and gender-Based Harassment and Violence Complaint and Response Policy
Besides my work with the Senate, I have also participated in diversity activities through my role(s) with United Academics of the University of Oregon. United Academics supports both a Faculty of Color and LGBTQ* Caucus which help identify barriers and propose solutions to problems affecting those communities on campus. United Academics bargained a tenure-track faculty equity study, and I am currently serving on a university committee identifying salary inequities based on protected class and proposing remedies for them.
I have attended innumerable rallies supporting social justice, and marched in countless marches. I flew to Washington D.C. to attend the March for Science. I've participated in workshops and trainings on diversity provided by the American Federation of Teachers and the American Association of University Professors.
I recognize that I am not perfect. I cannot represent all communities nor emulate the diversity of thought on campus. I have occasionally used outmoded words and am generally terrible at using preferred pronouns (though I try). I recognize my shortcomings and continually work to address them.
There are different tactics for turning advocacy into action, and individuals may disagree on their appropriateness and if/when escalation is called for. My general outlook is to work within a system to address inequities until it becomes clear that change is impossible from within. In such instances, if the moral imperative for change is sufficient then I work for change from without. This is my current strategy when tackling departmental diversity issues; I work with administrative units, the Senate and the union to put forth/support policies which minimize bias, discrimination and caprice in departmental decisions. I ensure that appropriate administrators know when I feel the department has fallen down on our institutional commitment to diversity, and I report incidents of bias, discrimination and harassment to the appropriate institutional offices (subject to the policy on Student Directed Reporters).
Fairness is as important to me as truth, and I look forward to the day where I can focus more of my time uncovering the latter instead of continually battling for the former.
1: Statistical Reasoning
[ "article:topic", "showtoc:no", "license:ccbyncsa", "authorname:pkaslik" ]
Contributed by Peter Kaslik
Professor (Mathematics) at Pierce College Fort Steilacoom
Adult Literacy Prize Story
1.5 Data, Parameters, and Statistics
Statistical Reasoning Process
Demonstration of an elementary hypothesis test
Take a moment to visualize humanity in its entirety, from the earliest humans to the present. How would you characterize the well-being of humanity? Think beyond the latest stories in the news. To help clarify, think about medical treatment, housing, transportation, education, and our knowledge. While there is no denying that we have some problems that did not exist in earlier generations, we also have considerably more knowledge.
The progress humanity has made in learning about ourselves, our world and our universe has been fueled by the desire of people to solve problems or gain an understanding. It has been financed through both public and private monies. It has been achieved through a continual process of people proposing theories and others attempting to refute the theories using evidence. Theories that are not refuted become part of our collective knowledge. No single person has accomplished this, it has been a collective effort of humankind.
As much as we know and have accomplished, there is a lot that we don't know and have not yet accomplished. There are many different organizations and institutions that contribute to humanity's gains in knowledge, however one organization stands out for challenging humanity to achieve even more. This organization is XPrize.1 On their webpage they explain that they are "an innovation engine. A facilitator of exponential change. A catalyst for the benefit of humanity." This organization challenges humanity to solve bold problems by hosting competitions and providing a monetary prize to the winning team. Examples of some of their competitions include:
2004: Ansari XPrize ($10 million) – Private Space Travel – build a reliable, reusable, privately financed, manned spaceship capable of carrying three people to 100 kilometers above the Earth's surface twice within two weeks.
Current: The Barbara Bush Foundation Adult Literacy XPrize ($7 million) - "challenging teams to develop mobile applications for existing smart devices that result in the greatest increase in literacy skills among participating adult learners in just 12 months."
There are an estimated 36 million American adults with a reading level below third grade level. They have difficulty reading bedtime stories, reading prescriptions, and completing job applications, among other things. Developing a good app could have huge benefits for a lot of people, which would also provide benefits for the country.
The following fictional story will introduce you to the way data and statistics are used to test theories and make decisions. The goal is for you to see that the thought processes are not algebraic and that it is necessary to develop new ways of thinking so we can validate our theories or make evidence- based decisions.
Imagine being part of a team competing for the Adult Literacy Xprize. During the early stages of development, a goal of your team is to create an app that is engaging for the user so that they will use it frequently. You tested your first version (Version 1) of the app on some adults who lacked basic literacy and found it was used an average of 6 hours during the first month. Your team decided this was not very impressive and that you could do better, so you developed a completely new version of the software designated as Version 2. When it was time to test the software, the 10 members of your team each gave it to 8 different people with low literacy skills. This group of 80 individuals that received the software is a small subset, or sample, of all those who have low literacy skills. The objective was to determine if Version 2 is used more than an average of 6 hours per month.
While the data will ultimately be pooled together, your teammates decide to compete against each other to determine whose group of 8 does better. The results are shown in the table below. The column on the right is the mean (average) of the data in the row. The mean is found by adding the numbers in the row and dividing that sum by 8.
Team Member Version 2 Data (hours of use in 1 month) Mean
You, The reader 4.4 3.8 4.4 6.7 1.1 5.7 0.8 2.5 3.675
Betty 11 8.4 8.4 2.7 4.4 8.4 5.7 4.4 6.675
Joy 1.6 2.2 12.5 5.7 2.2 6.6 0.8 0.3 3.9875
Kerissa 16.1 11.1 8.7 9.1 1.4 9.1 1.2 14.4 8.8875
Crystal 0 2.1 0 3.2 0.2 1.8 9.1 3.3 2.4625
Marcin 2.2 6.3 1.3 8.8 0.8 2.7 0.9 0.8 2.975
Tisa 8.8 5.8 9.7 2.8 3.2 0.9 0.1 16.1 5.925
Tyler 11 0.9 11.3 6.6 0.3 5.9 1.7 1.9 4.95
Patrick 0.9 1.8 6.3 3.1 6.1 6.3 3.2 6.7 4.3
One way to make sense of the data is to graph it. The graph to the right is called a histogram. It shows the distribution of the amount of time the software was used by each participant. To interpret this graph, notice the scale on the horizontal (x) axis counts by 2. These numbers represent hours of use. The height of each bar shows how many usage times fall between the x values. For example, 26 people used the app between 0 and 2 hours while 2 people used the app between 16 and 18 hours.
The second graph is a histogram of the mean (average) for each of the 10 groups. This is a graph of the column in the table that is shaded. A histogram of means is called a sampling distribution. The distribution to the right shows that 4 of the means are between 2 and 4 hours while only one mean was between 8 and 10 hours. Notice how the means are grouped closer together than the original data.
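For readers who would like to check the numbers themselves, the short Python sketch below (assumed code, not part of the original text) reproduces the group means and the overall mean behind these histograms. Only two of the rows of the Version 2 table are typed out; the remaining rows would be entered the same way.

```python
# Group means (one per team member) and the overall mean of all usage hours.
import numpy as np

version2 = {
    "You, the reader": [4.4, 3.8, 4.4, 6.7, 1.1, 5.7, 0.8, 2.5],
    "Betty": [11, 8.4, 8.4, 2.7, 4.4, 8.4, 5.7, 4.4],
    # ... the remaining team members' rows go here ...
}

group_means = {name: float(np.mean(hours)) for name, hours in version2.items()}
print(group_means)                        # {'You, the reader': 3.675, 'Betty': 6.675}
all_hours = np.concatenate([np.asarray(h) for h in version2.values()])
print(float(np.mean(all_hours)))          # overall mean (4.88 once all 80 values are in)
```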
The overall mean for the 80 data values is 4.88 hours. Our task is to use the graphs and the overall mean to decide if Version 2 is used more than the Version 1 was used (6 hours per month). What is your conclusion? Answer this question before continuing your reading.
Yes, Version 2 is better than Version 1
No, Version 2 is not better than Version 1
Which of the following had the biggest influence on your decision?
______ 54 of the 80 data values were below 6
______ The mean of the data is 4.88, which is below 6
______ 8 of the 10 sample means are below 6.
Version 3 was a total redesign of the software. A similar testing strategy was employed as with the prior version. When you received the data from the 8 users you gave the software to, you found that the average length of usage was 10.25 hours. Based on your results, do you feel that this version is better than version 1?
Team Member Version 3 Data (hours of use in 1 month) Mean
You, The reader 14 13 8 4 8 21 3 11 10.25
Your colleague Keer looked at her data, which is shown in the table below. What conclusion would Keer arrive at, based on her data?
Keer 0 3 2 3 5 4 8 11 4.5
If your interpretation of your data and Keer's data are typical, then you would have concluded that Version 3 was better than Version 1 based on your data and Version 3 was not better based on Keer's data. This illustrates how different samples can lead to different conclusions. Clearly, the conclusion based on your data and the conclusion based on Keer's data cannot both be correct. To help appreciate who might be in error, let's look at all the data for the 80 people who tested Version 3 of the software.
Team Member Version 3 Data (hours of use in 1 month) Mean
You, The reader 14 13 8 4 8 21 3 11 10.25
Keer 0 3 2 3 5 4 8 11 4.5
Betty 8 5 5 4 5 0 1 16 5.5
Joy 7 5 8 4 7 13 7 6 7.125
Kerissa 8 6 14 3 11 2 5 8 7.125
Crystal 6 7 4 7 6 3 7 5 5.625
Marcin 7 7 6 1 2 7 5 5 5
Tisa 3 3 5 4 14 13 3 2 5.875
Tyler 0 7 2 7 4 2 5 2 3.625
Patrick 8 3 1 14 2 6 7 2 5.375
The histogram on the right is of the data from individual users. This shows that about half the data (42 out of 80) are below 6 and the rest are above 6.
The histogram on the right is of the mean of the 8 users for each member of the team. This sampling distribution shows that 7 of the 10 sample means are below 6.
The mean of all the individual data values is 6.0. Consequently, if you concluded that Version 3 was better than Version 1 because the mean of your 8 users was 10.25 hours, you would have come to the wrong conclusion. You would have been misled by data that was selected by pure chance.
None of the first 3 versions was particularly successful but your team is not discouraged. They already have new ideas and are putting together another version of their literacy program.
When Version 4 is complete, each member of the team randomly selects 8 people with low literacy levels, just as was done for the prior versions. The data that is recorded is the amount of time the app is used during the month. Your data is shown below.
Team Member Version 4 Data (hours of use in 1 month) Mean
You, The reader 60 44 37 32 62 32 88 32 48.375
Based on your results, do you feel that this version is better than version 1?
The results of all 80 participants are shown in the table below.
Team Member Version 4 Data (hours of use in 1 month) Mean
You, The reader 60 44 37 32 62 32 88 32 48.375
Keer 48 37 24 20 82 76 67 67 52.625
Betty 88 39 67 24 71 85 81 24 59.875
Joy 23 58 21 88 81 75 84 81 63.875
Kerissa 88 24 58 53 81 57 88 24 59.125
Crystal 47 85 76 24 39 67 40 77 56.875
Marcin 61 45 75 58 87 51 37 73 60.875
Tisa 76 77 58 84 20 55 81 82 66.625
Tyler 82 47 48 60 88 21 50 24 52.5
Patrick 20 40 52 24 55 33 33 84 42.625
The histogram on the right is of the data from individual users. Notice that all these values are higher than 20.
The histogram on the right is of the mean of the 8 users for each member of the team. Notice that all the sample means are significantly higher than 6.
Based on the results of Version 4, all the data is much higher than 6 hours per month. The average is 56.3 hours per month which is almost 2 hours per day. This is significantly more usage of the app than the early versions and consequently will be the app that is used in the XPrize competition.
Making decisions using statistics
There were several objectives of the story you just read.
To give you an appreciation of the variation that can exist in sample data.
To introduce you to a type of data graph called a histogram, which is a good way for looking at the distribution of data.
To introduce you to the concept of a sampling distribution, which is a distribution of means of a sample, rather than of the original data.
To illustrate the various results that can occur when we try to answer questions using data. These results are summarized below in answer to the question of whether the new version is better than the first version.
a. Version 2: This was not better. In fact, it appeared to be worse.
b. Version 3: At first it looked better, but ultimately it was the same.
c. Version 4: This was much better.
Because data sometimes provide clarity about a decision that should be made (Versions 2 and 4), but other times is not clear (Version 3), a more formal, statistical reasoning process will be explained in this chapter with the details being developed throughout the rest of the book.
Before beginning with this process, it is necessary to be clear about the role of statistics in helping us understand our world. There are two primary ways in which we establish confidence in our knowledge of the world, by providing analytical evidence or empirical evidence.
Analytical evidence makes use of definitions or mathematical rules. A mathematical proof is an analytical method for using established facts to prove something new. Analytical evidence is useful for proving things that are deterministic. Deterministic means that the same outcome will be achieved each time (if errors aren't made). Algebra and Calculus are examples of deterministic math and they can be used to provide analytical evidence.
In contrast, empirical evidence is based on observations. More specifically, someone will propose a theory and then research can be conducted to determine the validity of that theory. Most of the ideas we believe with confidence have resulted because of the rejection of theories we previously had and our current knowledge consists of those ideas we have not been able to reject with empirical evidence. Empirical evidence is gained through rigorous research. This contrasts with anecdotal evidence, which is also gained through observation, but not in a rigorous manner. Anecdotal evidence can be misleading.
The role of statistics is to objectively evaluate the evidence so a decision can be made about whether to reject, or not reject, a theory. It is particularly useful in those situations where the evidence is the result of a sample taken from a much larger population. In contrast to deterministic relationships, stochastic populations are ones in which there is randomness; when the evidence is gained through random sampling, the evidence we see is partly the result of chance.
The scientific method that is used throughout the research community to increase our understanding of the world is based on proposing and then testing theories using empirical methods. Statistics plays a vital role in helping researchers understand the data they produce. The scientific method contains the following components.
Propose a hypothesis about the answer to the question
Design research (Chapter 2)
Collect data (Chapter 2)
Develop an understanding of the data using graphs and statistics (Chapter 3)
Use the data to determine if it supports, or contradicts the hypothesis (Chapters 5,7,8)
Draw a conclusion.
Before exploring the statistical tools used in the scientific method, it is helpful to understand the challenges we face with stochastic populations and the statistical reasoning process we use to draw conclusions.
When a theory is proposed about a population, it is based on every person or element of the population. A population is the entire set of people or things of interest.
Because the population contains too many people or elements from which to get information, we make a hypothesis about what the information would be, if we could get all of it.
Evidence is collected by taking a sample from the population.
The evidence is used to determine if the hypothesis should be rejected or not rejected.
These four components of the statistical reasoning process will now be developed more fully. The challenge is to determine if there is sufficient support for the hypothesis, based on partial evidence, when it is known that partial evidence varies, depending upon the sample that was selected. By analogy, it is like trying to find the right person to marry, by getting partial evidence from dating or to find the right person to hire, by getting partial evidence from interviews.
1. Theories about populations.
When someone has a theory, that theory applies to a population that should be clearly defined. For example, a population might be everyone in the country, or all senior citizens, or everyone in a political party, or everyone who is athletic, or everyone who is bilingual, etc. Populations can also be any part of the natural world including animals, plants, chemicals, water, etc. Theories that might be valid for one population are not necessarily valid for another. Examples of theories being applied to a population include the following.
The team working on the literacy app theorizes that one version of their app will be used regularly by the entire population of adults with low literacy skills who have access to it.
A teacher theorizes that her teaching pedagogy will lead to the greatest level of success for the entire population of all the students she will teach.
A pharmaceutical company theorizes that a new medicine will be effective in treating the entire population of people suffering from a disease who use the medicine.
A water resource scientist theorizes that the level of contamination in an entire body of water is at an unsafe level.
Before discussing hypotheses, it is necessary to talk about data, parameters and statistics.
On the largest level, there are two types of data, categorical and quantitative. Categorical data is data that can be put into categories. Examples include yes/no responses, or categories such as color, religion, nationality, pass/fail, win/lose, etc. Quantitative data is data that consists of numbers resulting from counts or measurements. Examples include height, weight, time, amount of money, number of crimes, heart rate, etc.
The ways in which we understand the data, graphs and statistics, are dependent upon the type of data. Statistics are numbers used to summarize the data. For the moment, there are two statistics that will be important, proportions and means. Later in the book, other statistics will be introduced.
A proportion is the part divided by the whole. It is similar to percent, but it is not multiplied by 100. The part is the number of data values in a category. The whole is the number of data values that were collected. Thus, if 800 people were asked if they had ever visited a foreign country and 200 said they had, then the proportion of people who had visited a foreign country would be:
\(\dfrac{\text{part}}{\text{whole}} = \dfrac{x}{n} = \dfrac{200}{800} = 0.25\)
The part is represented by the variable x and the whole by the variable n.
A mean, often known as an average, is the sum of the quantitative data divided by the number of data values. If we refer back to the literacy app, version 3, the data for Marcin was:
The mean is \(\dfrac{7 + 7 + 6 + 1 + 2 + 7 + 5 + 5}{8} = \dfrac{40}{8} = 5\)
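A tiny code sketch (assumed, not part of the text) of these two summary statistics may help; it simply recomputes the proportion and mean examples just given.

```python
# Sample proportion p-hat = part / whole, and sample mean x-bar.
def proportion(part, whole):
    return part / whole

def mean(values):
    return sum(values) / len(values)

print(proportion(200, 800))               # 0.25: visited a foreign country
print(mean([7, 7, 6, 1, 2, 7, 5, 5]))     # 5.0: Marcin's Version 3 hours
```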
While statistics are numbers that are used to summarize sample data, parameters are numbers used to summarize all the data in the population. To find a parameter, however, requires getting data from every person or element in the population. This is called a census. Generally, it is too expensive, takes too much time, or is simply impossible to conduct a census. However, because our theory is about the population, then we have to distinguish between parameters and statistics. To do this, we use different variables.
Data Type Summary Population Sample
Categorical Proportion p \(\hat{p}\) (p-hat)
Quantitative Mean \(\mu\) \(\bar{x}\) (x-bar)
To elaborate, when the data is categorical, the proportion of the entire population is represented with the variable p, while the proportion of the sample is represented with the variable \(\hat{p}\). When the data is quantitative, the mean of the entire population is represented with the Greek letter \(\mu\), while the mean of the sample is represented with the variable \(\bar{x}\).
In a typical situation, we will not know either p or \(\mu\) and so we would make a hypothesis about them. From the data we collect we will find \(\hat{p}\) or \(\bar{x}\) and use that to determine if we should reject our hypothesis.
2. Hypotheses
Hypotheses are written about parameters before data is collected (a priori). Hypotheses are written in pairs that contain a null hypothesis (\(H_0\)) and an alternative hypothesis (\(H_1\)).
Suppose someone had a theory that the proportion of people who have attended a live sporting event in the last year was greater than 0.2. In such a case, they would write their hypotheses as:
\(H_0\) : \(p = 0.2\)
\(H_1\) : \(p > 0.2\)
If someone had a theory that the mean hours of watching sporting events on the TV was less than 15 hours per week, then they would write their hypotheses as:
\(H_0\) : \(\mu\) = 15
\(H_1\) : \(\mu\) < 15
The rules that are used to write hypotheses are:
There are always two hypotheses, the null and the alternative.
Both hypotheses are about the same parameter.
The null hypothesis always contains the equal sign (=).
The alternative contains an inequality sign (<, >, ≠).
The number will be the same for both hypotheses.
When hypotheses are used for decision making, they should be selected in such a way that if the evidence supports the null hypothesis, one decision should be made, while evidence supporting the alternative hypothesis should lead to a different decision.
The hypothesis that researchers desire is often the alternative hypothesis. The hypothesis that will be tested is the null hypothesis. If the null hypothesis is rejected because of the evidence, then the
alternative hypothesis is accepted. If the evidence does not lead to a rejection of the null hypothesis, we cannot conclude the null is true, only that it was not rejected. We will use the term "supported" in this text. Thus either the null hypothesis is supported by the data or the alternative hypothesis is supported. Being supported by the data does not mean the hypothesis is true, but rather that the decision we make should be based on the hypothesis that is supported.
Two of the situations you will encounter in this text are when there is a theory about the proportion or mean for one population or when there is a theory about how the proportion or mean compares between two populations. These are summarized in the table below.
Hypothesis about one population
Hypothesis about 2 populations
The proportion is greater than 0.2
\(H_0\) : \(p = 0.2\)
\(H_1\) : \(p > 0.2\)
The proportion of population A is greater than the proportion of population B
\(H_0\) : \(p_A = p_B\)
\(H_1\) : \(p_A > p_B\)
The proportion is less than 0.2
\(H_0\) : \(p = 0.2\)
\(H_1\) : \(p < 0.2\)
The proportion of population A is less than the proportion of population B
\(H_0\) : \(p_A = p_B\)
\(H_1\) : \(p_A < p_B\)
The proportion is not equal to 0.2
\(H_0\) : \(p = 0.2\)
\(H_1\) : \(p \ne 0.2\)
The proportion of population A is different than the proportion of population B
\(H_0\) : \(p_A = p_B\)
\(H_1\) : \(p_A \ne p_B\)
The mean is greater than 15
\(H_0\) : \(\mu = 15\)
\(H_1\) : \(\mu > 15\)
The mean of population A is greater than the mean of population B
\(H_0\) : \(\mu_A = \mu_B\)
\(H_1\) : \(\mu_A > \mu_B\)
The mean is less than 15
\(H_0\) : \(\mu = 15\)
\(H_1\) : \(\mu < 15\)
The mean of population A is less than the mean of population B
\(H_0\) : \(\mu_A = \mu_B\)
\(H_1\) : \(\mu_A < \mu_B\)
The mean does not equal 15
\(H_0\) : \(\mu = 15\)
\(H_1\) : \(\mu \ne 15\)
The mean of population A is different than the mean of population B
\(H_0\) : \(\mu_A = \mu_B\)
\(H_1\) : \(\mu_A \ne \mu_B\)
3. Using evidence to determine which hypothesis is more likely correct.
From the Literacy App story, you should have seen that sometimes the evidence clearly supports one conclusion (e.g. version 2 is worse than version 1), sometimes it clearly supports the other conclusion (version 4 is better than version 1), and sometimes it is too difficult to tell (version 3). Before discussing a more formal way of testing hypotheses, let's develop some intuition about the hypotheses and the evidence.
Suppose the hypotheses are
\(H_0\): p = 0.4
\(H_0\): p < 0.4
If the evidence from the sample is \(\hat{p} = 0.45\), would this evidence support the null or alternative? Decide before continuing.
The hypotheses contain an equal sign and a less than sign but not a greater than sign, so when the evidence is greater than, what conclusion should be drawn? Since the sample proportion does not support the alternative hypothesis because it is not less than 0.4, then we will conclude 0.45 supports the null hypothesis.
If the evidence from the sample is \(\hat{p}\) = 0.12, would this evidence support the null or alternative? Decide before continuing.
In this case, 0.12 is considerably less than 0.4, therefore it supports the alternative.
If the evidence from the sample is \(\hat{p}\) = 0.38, would this evidence support the null or alternative? Decide before continuing.
This is a situation that is more difficult to determine. While you might have decided that 0.38 is less than 0.4 and therefore supports the alternative, it is more likely that it supports the null hypothesis.
How can that be?
In arithmetic, 0.38 is always less than 0.4. However, in statistics, it is not necessarily the case. The reason is that the hypothesis is about a parameter, it is about the entire population. On the other hand, the evidence is from the sample. Different samples yield different results. A direct comparison of the statistic (0.38) to the hypothesized parameter (0.4) is not appropriate. Rather, we need a different way of making that determination. Before elaborating on the different way, let's try another one.
If the evidence from the sample is \(\bar{x}\) = 80, which hypothesis is supported? Null Alternative
If the evidence is \(\bar{x}\) = 80, the alternative would be supported. If the evidence is \(\bar{x}\) = 26, the null would be supported. If the evidence is \(\bar{x}\) = 32, at first glance, it appears to support the alternative, but it is close to the hypothesis, so we will conclude that we are not sure which it supports.
It might be disconcerting to you to be unable to draw a clear conclusion from the evidence. After all, how can people make a decision? What follows is an explanation of the statistical reasoning strategy that is used.
The reasoning process for deciding which hypothesis the data supports is the same for any parameter (p or μ).
1. Assume the null hypothesis is true.
2. Gather data and calculate the statistic.
3. Determine the likelihood of selecting the data that produced the statistic or could produce a more extreme statistic, assuming the null hypothesis is true.
4. If the data are likely, they support the null hypothesis. However, if they are unlikely, they support the alternative hypothesis.
To illustrate this, we will use a different research question: "What proportion of American adults believe we should transition to a society that no longer uses fossil fuels (coal, oil, natural gas)?" Let's assume a researcher has a theory that the proportion of American adults who believe we should make this transition is greater than 0.6. The hypotheses that would be used for this are:
\(H_0\) : p = 0.6
\(H_1\) : p > 0.6
We could visualize this situation if we used a bag of marbles. Since the first step in the statistical reasoning process is to assume the null hypothesis is true, then our bag of marbles might contain 6 green marbles that represent the adults who want to stop using fossil fuels, and 4 white marbles to represent those who want to keep using fossil fuels. Sampling will be done with replacement, which means that after a marble is picked, the color is recorded and the marble is placed back in the bag.
If 100 marbles are selected from the bag (with replacement), do you expect exactly 60 of them (60%) to be green? Would this happen every time?
The results of a computer simulation of this sampling process are shown below. The simulation is of 100 marbles being selected, with the process being repeated 20 times.
0.62 0.57 0.58 0.64 0.64 0.53 0.73 0.55 0.58 0.55
0.61 0.66 0.6 0.54 0.54 0.5 0.62 0.55 0.61 0.61
Notice that sometimes the sample proportion is greater than 0.6, sometimes it is less than 0.6, and only once does it actually equal 0.6. From this we can infer that even though the null hypothesis really was true, there are sample proportions that might make us think the alternative is true (which could lead us to making an error).
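The marble simulation just described can be reproduced with a short computer simulation. The sketch below is not from the text; it assumes Python with the NumPy library, and the random seed and variable names are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # fixed seed only so the run is reproducible

n_draws = 100    # marbles drawn (with replacement) in each sample
n_repeats = 20   # number of times the sampling process is repeated
p_null = 0.6     # proportion of green marbles in the bag, assuming H0 is true

# Each sample proportion is the number of green marbles drawn divided by 100.
sample_proportions = rng.binomial(n_draws, p_null, size=n_repeats) / n_draws
print(np.round(sample_proportions, 2))
```

Each run produces a different set of 20 sample proportions, but the pattern is the same as in the list above: some are greater than 0.6, some are less, and few, if any, equal 0.6 exactly.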
There are three items in the statistical reasoning process that need to be clarified. The first is to determine what values are likely or unlikely to occur while the second is to determine the division point between likely and unlikely. The third point of clarification is the direction of the extreme.
Likely and Unlikely values
When the evidence is gathered by taking a random sample from the population, the random sample that is actually selected is only one of many, many, many possible samples that could have been taken instead. Each random sample would produce different statistics. If you could see all the statistics, you would be able to determine if the sample you took was likely or unlikely. A graph of statistics, such as sample proportions or sample means, is called a sampling distribution.
While it does not make sense to take lots of different samples to find all possible statistics, a few demonstrations of what happens when someone does do that can give you some confidence that similar results would occur in other situations as well. The data used in the graphs below were generated using computer simulations.
The histogram at the right is a sampling distribution of sample proportions. 100 different samples that each contained 200 data values were selected from a population in which 40% favored replacing fossil fuels (green marbles). The proportion in favor of replacing fossil fuels (green marbles) was found for each sample and graphed. There are two things you should notice in the graph. The first is that most of the sample proportions are grouped together in the middle, and the second is that the middle is approximately 0.40, which is the proportion of green marbles in the container.
That may, of course, have been a coincidence. So let's look at a different simulation. In this one, the original population was 60% green marbles, representing those in favor of replacing fossil fuels. The sample size was 500 and the process was repeated 100 times.
Once again we see most of the sample proportions grouped in the middle and the middle is around the value of 0.60, which is the proportion of green marbles in the original population.
We will look at one more example. In this example, the proportion in favor of replacing fossil fuels is 0.80 while the proportion of those opposed is 0.20. The sample size will be 1000 and there will be 100 samples of that size. Where do you expect the center of this distribution to fall?
As you can see, the center of this distribution is near 0.80 with more values near the middle than at the edges.
One issue that has not been addressed is the effect of the sample size. Sample sizes are represented with the variable n. These three graphs all had different sample sizes. The first sample had n=200, the second had n=500 and the third had n=1000. To see the effect of these different sample sizes, all three sets of sample proportions have been graphed on the same histogram.
What this graph illustrates is that the smaller the sample size, the more variation that exists in the sample proportions. This is evident because they are spread out more. Conversely, the larger the sample size, the less variation that exists. What this means is the larger the sample size, the closer the sample result will be to the parameter. Does this seem reasonable? If there were 10,000 people in a population and you got the opinion of 9,999 of them, do you think all your possible sample proportions would be closer to the parameter (population proportion) than if you only asked 20 people?
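The three sampling distributions described above can be simulated with the sketch below (again assuming Python with NumPy and matplotlib, which are not part of the text). The population proportions, sample sizes, and the 100 repetitions are taken from the preceding paragraphs.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=2)
settings = [(0.40, 200), (0.60, 500), (0.80, 1000)]  # (population proportion, sample size n)

for p, n in settings:
    # 100 samples of size n; each value is one sample proportion
    props = rng.binomial(n, p, size=100) / n
    plt.hist(props, bins=15, alpha=0.5, label=f"p = {p}, n = {n}")
    print(f"p = {p}, n = {n}: standard deviation of the sample proportions = {props.std():.3f}")

plt.xlabel("sample proportion")
plt.ylabel("count")
plt.legend()
plt.show()
```

The printed standard deviations shrink as n grows, which is the "less variation for larger sample sizes" pattern described above.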
We will return to sampling distributions in a short time, but first we need to learn about directions of extremes and probability.
Direction of Extreme
The direction of extreme is the direction (left or right) on a number line that would make you think the alternative hypothesis is true. Greater than symbols have a direction of extreme to the right, less than symbols indicate the direction is to the left and not-equal signs indicate a two-sided direction of extreme.
Notation for one population, notation for two populations, and the corresponding direction of extreme:
\(H_1\) : \(p > 0.2\), or \(H_0\) : \(p_A = p_B\) with \(H_1\) : \(p_A > p_B\): direction of extreme is to the right.
\(H_1\) : \(p < 0.2\), or \(H_0\) : \(p_A = p_B\) with \(H_1\) : \(p_A < p_B\): direction of extreme is to the left.
\(H_1\) : \(p \ne 0.2\), or \(H_0\) : \(p_A = p_B\) with \(H_1\) : \(p_A \ne p_B\): direction of extreme is two-sided.
\(H_1\) : \(\mu > 15\), or \(H_0\) : \(\mu_A = \mu_B\) with \(H_1\) : \(\mu_A > \mu_B\): direction of extreme is to the right.
\(H_1\) : \(\mu < 15\), or \(H_0\) : \(\mu_A = \mu_B\) with \(H_1\) : \(\mu_A < \mu_B\): direction of extreme is to the left.
\(H_1\) : \(\mu \ne 15\), or \(H_0\) : \(\mu_A = \mu_B\) with \(H_1\) : \(\mu_A \ne \mu_B\): direction of extreme is two-sided.
At this time it is necessary to have a brief discussion about probability. A more detailed discussion will occur in Chapter 4. When theories are tested empirically by sampling from a population, the sample that is obtained is determined by chance. When a sample is selected through a random process and the statistic is calculated, it is possible to determine the probability of obtaining that statistic, or more extreme statistics, if we know the sampling distribution.
By definition, probability is the number of favorable outcomes divided by the number of possible outcomes.
\[P(A) = \dfrac{Number\ of\ Favorable\ Outcomes}{Number\ of\ Possible\ Outcomes}\]
This formula assumes that all outcomes are equally likely, as is theoretically the case in a random selection process. It reflects the proportion of times that a result would be obtained if an experiment were done a very large number of times. Because you cannot have a negative number of outcomes or more favorable outcomes than possible outcomes, probability is always a fraction or a decimal between 0 and 1. This is shown generically as \(0 \le P(A) \le 1\), where P(A) represents the probability of event A.
Using Sampling Distributions to Test Hypotheses
Remember our research question: "What proportion of American adults believe we should transition to a society that no longer uses fossil fuels (coal, oil, natural gas)?" The researcher's theory is that the proportion of American adults who believe we should make this transition is greater than 0.6. The hypotheses that would be used for this are:
\(H_0 : p = 0.6\)
\(H_1 : p > 0.6\)
To test this hypothesis, we need two things. First, we need the sampling distribution for the null hypothesis, since we will assume that is true, as stated first in the list for the reasoning process used for testing a hypothesis. The second thing we need is data. Because this is instructional, at this point, several sample proportions will be provided so you can compare and contrast the results.
A small change has been made to the sampling distribution that was shown previously. At the top of each bar is a proportion. On the x-axis there are also proportions. The difference between these proportions is that the ones on the x-axis indicate the sample proportions while the proportions at the top of the bars indicate the proportion of sample proportions that were between the two boundary values. Thus, out of 100 sample proportions, 0.38 (or 38%) of them were between 0.60 and 0.62. The proportions at the top of the bars can also be interpreted as probabilities.
It is with this sampling distribution from the null hypothesis that we can find the likelihood, or probability, of getting our data or more extreme data. We will call this probability a p-value.
As a reminder, for the hypothesis we are testing, the direction of extreme is to the right.
Suppose the sample proportion we got for our data was \(\hat{p}\) = 0.64. What is the probability we would have gotten that sample proportion, or a more extreme one, from this distribution? That probability is 0.01; consequently, the p-value is 0.01. This number is found at the top of the right-most bar.
Suppose the sample proportion we got from our data was \(\hat{p}\) = 0.62. What is the probability we would have gotten that sample proportion, or a more extreme one, from this distribution? That probability is 0.11. This was calculated by adding the proportions on the top of the two right-most bars. The p-value is 0.11.
You try it. Suppose the sample proportion we got from our data was \(\hat{p}\) = 0.60. What is the probability we would have gotten that sample proportion, or a more extreme one, from this distribution?
Now, suppose the sample proportion we got from our data was \(\hat{p}\) = 0.68. What is the probability we would have gotten that sample proportion, or a more extreme one, from this distribution? In this case, none of the sample proportions in the distribution are equal to 0.68 or higher, so the probability, or p-value, is 0.
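When the null sampling distribution is built by simulation, a right-tailed p-value is simply the fraction of simulated sample proportions that are at least as large as the one observed. The sketch below (Python with NumPy assumed; not part of the original text) uses a sample size of 500, matching the distribution described earlier, but many more than 100 repetitions, so its p-values will not reproduce the 0.01, 0.11, and 0 read off the histogram; it only illustrates the procedure.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n, p_null = 500, 0.6

# Simulated null sampling distribution of p-hat (many repetitions for a smoother estimate)
null_props = rng.binomial(n, p_null, size=100_000) / n

def right_tail_p_value(observed, null_distribution):
    """Fraction of simulated sample proportions that are >= the observed proportion."""
    return np.mean(null_distribution >= observed)

for p_hat in (0.64, 0.62, 0.60, 0.68):
    print(p_hat, right_tail_p_value(p_hat, null_props))
```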
Testing the hypothesis
We will now try to determine which hypothesis is supported by the data. We will use the p=0.8 distribution to represent the alternative hypothesis. Both the null and alternative distributions are shown on the same graph.
If the data that is selected had a statistic of \(\hat{p}\) = 0.58, what is the p-value? Which of the two distributions do you think the data came from? Which hypothesis is supported?
The p-value is 0.81 (0.32+0.38+0.10+0.01). This data came from the null distribution (p=0.6). This evidence supports the null hypothesis.
If the data that is selected was \(\hat{p}\) = 0.78, what is the p-value? Which of the two distributions do you think the data came from? Which hypothesis is supported?
The p-value is 0 because there are no values in the p=0.6 distribution that are 0.78 or higher. The data came from the alternative (p=0.8) distribution. The alternative hypothesis is supported.
In the prior examples, there was a clear distinction between the null and alternative distributions. In the next example, the distinction is not as clear. The alternative distribution will be represented with a proportion of 0.65.
If the data that is selected was \(\hat{p}\) = 0.62, from which of the two distributions do you think the data came? Which hypothesis is supported?
Notice that in this case, because the distributions overlap, a sample proportion of 0.62 or more extreme could have come from either distribution. It isn't clear which one it came from. Because of this lack of clarity, we could possibly make an error. We might think it came from the null distribution whereas it really came from the alternative distribution. Or perhaps we thought it came from the alternative distribution, but it really came from the null distribution. How do we decide?
Before explaining the way we decide, we need to discuss errors, as they are part of the decision-making process.
There are two types of errors we can make as a result of the sampling process. They are known as sampling errors. These errors are named Type I and Type II errors. A type I error occurs when we think the data supports the alternative hypothesis but in reality, the null hypothesis is correct. A type II error occurs when we think the data supports the null hypothesis, but in reality the alternative hypothesis is correct. In all cases of testing hypotheses, there is the possibility of making either a type I or type II error.
The probability of making either a Type I or a Type II error is important in the decision-making process. We represent the probability of making a Type I error with the Greek letter alpha, \(\alpha\). It is also called the level of significance. The probability of making a Type II error is represented with the Greek letter beta, \(\beta\). The probability of the data supporting the alternative hypothesis, when the alternative is true, is called power. Power is not an error. The errors are summarized below.
If the null hypothesis \(H_0\) is true and the data support \(H_0\): no error.
If \(H_0\) is true and the data support \(H_1\): Type I error (probability: \(\alpha\)).
If the alternative hypothesis \(H_1\) is true and the data support \(H_0\): Type II error (probability: \(\beta\)).
If \(H_1\) is true and the data support \(H_1\): no error (probability: power).
The reasoning process for deciding which hypothesis the data supports is reprinted here, now including the new vocabulary.
1. Assume the null hypothesis is true.
2. Gather data and calculate the statistic.
3. Determine the likelihood of selecting the data that produced the statistic or could produce a more extreme statistic, assuming the null hypothesis is true. This is called the p-value.
4. If the data are likely, they support the null hypothesis. However, if they are unlikely, they support the alternative hypothesis.
The determination of whether data are likely or not is based on a comparison between the p-value and \(\alpha\). Both alpha and p-values are probabilities, so they must always be values between 0 and 1, inclusive. If the p-value is less than or equal to \(\alpha\), the data support the alternative hypothesis. If the p-value is greater than \(\alpha\), the data support the null hypothesis. When the data support the alternative hypothesis, the data are said to be significant. When the data support the null hypothesis, the data are not significant. Reread this paragraph at least 3 times, as it defines the decision-making rule used throughout statistics and it is critical to understand.
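The decision rule in the paragraph above can be written as a small helper function. This is only a sketch; the wording of the returned messages is an assumption, not something taken from the text.

```python
def decide(p_value, alpha):
    """Compare a p-value to the level of significance and report the decision."""
    if not (0 <= p_value <= 1 and 0 <= alpha <= 1):
        raise ValueError("p-values and alpha must be between 0 and 1")
    if p_value <= alpha:
        return "significant: the data support the alternative hypothesis (H1)"
    return "not significant: the data support the null hypothesis (H0)"

print(decide(0.04, 0.05))   # significant
print(decide(0.20, 0.05))   # not significant
```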
Because some values clearly support the null hypothesis, others clearly support the alternative hypothesis, and some do not clearly support either, a decision has to be made, before data are ever collected (a priori), about the probability of making a type I error that is acceptable to the researcher. The most common values for \(\alpha\) are 0.05, 0.01, and 0.10. There is no specific reason for these choices, but there is considerable historical precedence for them and they will be used routinely in this book. The choice of a level of significance should be based on several factors.
If the power of the test is low because of small sample sizes or weak experimental design, a larger level of significance should be used.
Keep in mind the ultimate objective of research: "to understand which hypotheses about the universe are correct. Ultimately these are yes and no decisions." (Scheiner, S. M., and J. Gurevitch, Design and Analysis of Ecological Experiments, Oxford University Press, 2001.) Statistical tests should lead to one of three results. One result is that the hypothesis is almost certainly correct. The second result is that the hypothesis is almost certainly incorrect. The third result is that further research is justified. P-values within the interval (0.01, 0.10) may warrant continued research, although these values are as arbitrary as the commonly used levels of significance.
If we are attempting to build a theory, we should use more liberal (higher) values of α, whereas if we are attempting to validate a theory, we should use more conservative (lower) values of \(\alpha\).
Now, you have all the parts for deciding which hypothesis is supported by the evidence (the data). The problem will be restated here: the hypotheses are \(H_0: p = 0.6\) and \(H_1: p > 0.6\), the level of significance is \(\alpha\) = 0.01, and the sample proportion obtained from the data is \(\hat{p}\) = 0.62. What is the p-value? Which hypothesis is supported?
A vertical line was drawn on the graph so that a proportion of only 0.01 was to the right of the line in the null distribution. This is called a decision line because it is the line that determines how we will decide if the statistic supports the null or alternative hypothesis. The number at the bottom of the decision line is called the critical value.
To answer these questions, first find the p-value. The p-value is 0.11 (0.10 + 0.01).
Next, compare the p-value to \(\alpha\). Since 0.11 > 0.01, this evidence supports the null hypothesis.
Because showing both distributions on the same graph can make the graph a little difficult to read, this graph will be split into two graphs. The decision line is shown at the same critical value on both graphs (0.64). The level of significance, α, is shown on the null distribution. It points in the direction of the extreme. β and power are shown on the alternative distribution. Power is on the same side of the distribution as the direction of extreme while β is on the opposite side. The p-value is also shown on the null distribution, pointing in the direction of the extreme.
Another example will be demonstrated next.
Question: What is the proportion of people who have visited a different country?
Theory: The proportion is less than 0.40
Hypotheses: \(H_0: p = 0.40\)
\(H_1: p < 0.40\)
The distribution on the left is the null distribution, that is, it is the distribution that was obtained by sampling from a population in which the proportion of people who have visited a different country is really 0.40. The distribution on the right represents the alternative hypothesis.
The objective is to identify the portion of each graph associated with α, β, and power. Once the data has been provided, you will also be able to show the part of the graph that indicates the p-value.
The reasoning process for labeling the distributions is as follows.
1. Determine the direction of the extreme. This is done by looking at the inequality sign in the alternative hypothesis. If the sign is <, then the direction of the extreme is to the left. If the sign is >, then the direction of the extreme is to the right. If the sign is \(\ne\), then the direction of extreme is to the left and right, which is called two-sided. Notice that the inequality sign points towards the direction of extreme. To keep these concepts a little easier as you are learning them, we will not do two-sided alternative hypotheses until later in the text.
In this problem the direction of extreme is to the left because smaller sample proportions support the alternative hypothesis.
2. Draw the Decision line. The direction of extreme along with α are used to determine the placement of the decision line. Alpha is the probability of making a Type I error. A Type I error can only occur if the null hypothesis is true, therefore, we always place alpha on the null distribution. Starting on the side of the direction of extreme, add the proportions at the top of the bars until they equal alpha. Draw the decision line between bars separating those that could lead to a Type I error from the rest of the distribution.
Notice the x-axis value at the bottom of the decision line. This value is called the critical value. Identify the critical value on the alternative distribution and place another decision line there.
In this problem, the direction of extreme is to the left and \(\alpha\) = 4% (0.04) so the decision line is placed so that the proportion of sample proportions to the left is 0.04. The critical value is 0.36 so the other decision line is placed at 0.36 on the alternative distribution.
3. Labeling \(\alpha\), \(\beta\), and power. \(\alpha\) is always placed on the null distribution on the side of the decision line that is in the direction of extreme. \(\beta\) is always placed on the alternative distribution on the side of the decision line that is opposite of the direction of extreme. Power is always placed on the alternative distribution on the side of the decision line that is in the direction of extreme.
4. Identify the probabilities for \(\alpha\), \(\beta\), and power. This is done by adding the proportions at the top of the bars.
In this example, the probability for \(\alpha\) is 0.04. The probability for \(\beta\) is 0.30 (0.02 + 0.06 + 0.22). The probability for power is 0.70 (0.02 + 0.03 + 0.29 + 0.36).
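The same quantities can be estimated by simulation rather than read off histograms. In the sketch below (Python with NumPy assumed), the null proportion 0.40, the level of significance 0.04, and the left direction of extreme come from the example; the sample size and the alternative proportion are assumptions, since the text does not state which values were used to build its histograms, so the resulting numbers will not match 0.04, 0.30, and 0.70 exactly.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
n = 200                      # assumed sample size
p_null, p_alt = 0.40, 0.32   # p_alt is an assumed alternative proportion
alpha = 0.04
reps = 100_000

null_props = rng.binomial(n, p_null, size=reps) / n
alt_props = rng.binomial(n, p_alt, size=reps) / n

# Direction of extreme is to the left, so the critical value is the point with
# a proportion alpha of the null distribution at or below it.
critical_value = np.quantile(null_props, alpha)

power = np.mean(alt_props <= critical_value)
beta = 1 - power

print(f"critical value = {critical_value:.3f}, beta = {beta:.3f}, power = {power:.3f}")
```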
5. Find the p-value. Data is needed to test the hypothesis, so here is the data: In a sample of 200 people, 72 have visited another country. The sample proportion is \(\hat{p} = \dfrac{72}{200} = 0.36\). The p-value, which is the probability of getting the data, or more extreme values, assuming the null hypothesis is true, is always placed on the null distribution and always points in the direction of the extreme.
In this example, the p-value has been indicated on the null distribution.
6. Make a decision. The probability for the p-value is 0.04. To determine which hypothesis is supported by the data, we compare the p-value to alpha. If the p-value is less than or equal to alpha, the evidence supports the alternative hypothesis. In this case, the p-value of 0.04 equals alpha which is also 0.04, so this evidence supports the alternative hypothesis leading to the conclusion that the proportion of people who have visited another country is less than 40%.
7. Errors and their consequence. While this problem is not serious enough to have consequences that matter, we will, nevertheless, explore the consequences of the various errors that could be made.
Because the evidence supported the alternative hypothesis, we have the possibility of making a type I error. If we did make a type I error it would mean that we think fewer than 40% of Americans have visited another country, when in fact 40% have done so.
In contrast to this, if our data had been 0.38 so that our p-value was 0.20, then our results would have supported the null hypothesis and we could be making a Type II error. This error means that we would think 40% of Americans had visited another country when, in fact, the true proportion would be less than that.
8. Reporting results. Statistical results are reported in a sentence that indicates whether the data are significant, the alternative hypothesis, and the supporting evidence, in parentheses, which at this point include the p-value and the sample size (n).
For the example in which \(\hat{p}\) = 0.36, we would write: "The proportion of Americans who have visited other countries is significantly less than 0.40 (p = 0.04, n = 200)."
For the example in which \(\hat{p}\) = 0.38, we would write: "The proportion of Americans who have visited other countries is not significantly less than 0.40 (p = 0.20, n = 200)."
At this point, a brief explanation is needed about the letter p. In the study of statistics there are several words that start with the letter p and use p as a variable. The list of words includes parameters, population, proportion, sample proportion, probability, and p-value. The words parameter and population are never represented with a p. Probability is represented with notation that is similar to function notation you learned in algebra, f(x), which is read f of x. For probability, we write P(A) which is read the probability of event A. To distinguish between the use of p for proportion and p for p-value, pay attention to the location of the p. When p is used in hypotheses, such as \(H_0: p = 0.6\), \(H_1: p > 0.6\), it means the proportion of the population. When p is used in the conclusion, such as the proportion is significantly greater than 0.6 (p = 0.01, n = 200), then the p in p = 0.01 is interpreted as a p-value. If the sample proportion is given, it is represented as \(\hat{p}\) = 0.64.
We will conclude this chapter with a final thought about why we are formal in the testing of hypotheses. According to Colquhoun (1971), as quoted in Green (1979), "Most people need all the help they can get to prevent them from making fools of themselves by claiming that their favorite theory is substantiated by observations that do nothing of the sort. And the main function of that section of statistics that deals with tests of significance is to prevent people making fools of themselves."
1. Identify each of the following as a parameter or statistic.
A. \(p\) is a _________________
B. \(\bar{x}\) is a _________________
C. \(\hat{p}\) is a _________________
D. \(\mu\) is a _________________
2. Are hypotheses written about parameters or statistics? _________________
3. A sampling distribution is a histogram of which of the following?
______ original data
______ possible statistics that could be obtained when sampling from a population
4. Write the hypotheses using the appropriate notation for each of the following statements. Use meaningful subscripts when comparing two population parameters. For example, when comparing men to women, you might use subscripts of m and w, for instance \(p_m = p_w\).
4a. The mean is greater than 20. \(H_0\): \(H_1\):
4b. The proportion is less than 0.75. \(H_0\): \(H_1\):
4c. The mean for Americans is different than the mean for Canadians. \(H_0\): \(H_1\):
4d. The proportion for Mexicans is greater than the proportion for Americans. \(H_0\): \(H_1\):
4e. The proportion is different than 0.45. \(H_0\): \(H_1\):
4f. The mean is less than 3000. \(H_0\): \(H_1\):
5. If the p-value is less than \(\alpha\),
5a. which hypothesis is supported?
5b. are the data significant?
5c. what type error could be made?
6. For each row of the table below you are given a p-value and a level of significance (\(\alpha\)). Determine which hypothesis is supported, whether the data are significant, and which type of error could be made. If a given p-value is not a valid p-value (because it is greater than 1), put an x in each box in the row.
p-value   \(\alpha\)   Hypothesis (\(H_0\) or \(H_1\))   Significant or Not Significant   Error (Type I or Type II)
0.0035   0.01
\(5.6 \times 10^{-6}\)   0.05
7. For each set of information that is provided, write the concluding sentence in the form used by researchers.
7a. \(H_1: p > 0.5\), n = 350, p-value = 0.022, \(\alpha = 0.05\)
7b. \(H_1: p < 0.25\), n = 1400, p-value = 0.048, \(\alpha = 0.01\)
7c. \(H_1: \mu > 20\), n = 32, p-value = \(5.6 \times 10^{-5}\), \(\alpha = 0.05\)
7d. \(H_1: \mu \ne 20\), n = 32, p-value = \(5.6 \times 10^{-5}\), \(\alpha = 0.05\)
8. Test the hypotheses:
\(H_0: p = 0.5\)
\(H_1: p < 0.5\)
Use a 2% level of significance.
8a. What is the direction of the extreme?
8b. Label each distribution with a decision rule line. Identify \(\alpha\), \(\beta\), and power on the appropriate distribution.
8c. What is the critical value?
8d. What is the value of \(\alpha\)?
8e. What is the value of \(\beta\)?
8f. What is the value of Power?
The Data: The sample size is 80. The sample proportion is 0.45.
8g. Show the p-value on the appropriate distribution.
8h. What is the value of the p-value?
8i. Which hypothesis is supported by the data?
8j. Are the data significant?
8k. What type error could have been made?
8l. Write the concluding sentence.
9. Test the hypotheses:
\(H_0: \mu = 300\)
\(H_1: \mu > 300\)
Use a 3.5% level of significance.
The Data: The sample size is 10. The sample mean is 360.
10. Question: Is the five-year cancer survival rate for all races improving?
Briefing 1.1.
5-Year Cancer Survival Rate. According to the American Cancer Society, in 1974-1976 the five-year survival rate for all races was 50%. This means that 50% of the people who were diagnosed with cancer were still alive 5 years later. These people could still be undergoing treatment, could be in remission, or could be disease-free. (http://www.cancer.org/acs/groups/con...securedpdf.pdf Viewed 5-29-13)
Study Design: To determine if the survival rates are improving, data will be gathered from people who were diagnosed with cancer at least 5 years before the start of this study. The data that will be collected is whether the people are still alive 5 years after their diagnosis. The data will be categorical, that is, the people will be put into one of two categories: survived or did not survive. Suppose the medical records of 100 people diagnosed with cancer are examined. Use a level of significance of 0.02.
10a. Write the hypotheses that would be used to show that the proportion of people who survive cancer for at least five years after diagnosis is greater than 0.5. Use the appropriate parameter.
\(H_0:\)
\(H_1:\)
10b. What is the direction of the extreme?
10c. Label the null and alternate sampling distributions below with the decision rule line, \(\alpha\), \(\beta\), power.
10d. What is the critical value?
10e. What is the value of \(\alpha\)?
10f. What is the value of \(\beta\)?
10g. What is the value of Power?
The data: The 5-year survival rate is 65%.
10h. What is the p-value for the data?
10i. Write your conclusion in the appropriate format.
10j. What Type Error is possible?
10k. In English, explain the conclusion that can be drawn about the question.
Why Statistical Reasoning Is Important for a Business Student and Professional
Developed in Collaboration with Tom Phelps, Professor of Economics, Mathematics, and Statistics. This topic is discussed in ECON 201, Microeconomics.
Generally speaking, as the price of an item increases, there are fewer units of the item purchased. In economics terms, there is less "quantity demanded". The ratio of the percent change in quantity demanded to the percent change in price is called price elasticity of demand. The formula is \(e_d = \dfrac{\%\Delta Q_d}{\%\Delta P}\). For example, if a 1% price increase resulted in a 1.5% decrease in the quantity demanded, the price elasticity is \(e_d = \dfrac{-1.5\%}{1\%}\) = −1.5. It is common for economists to use the absolute value of \(e_d\) since almost all \(e_d\) values are negative. Elasticity is a unit-less number called an elasticity coefficient.
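As a quick illustration of the formula, the following sketch (Python assumed; the function name is invented) computes the elasticity for the numbers given in the paragraph above.

```python
def price_elasticity_of_demand(pct_change_quantity, pct_change_price):
    """e_d = %ΔQd / %ΔP; economists often report the absolute value."""
    return pct_change_quantity / pct_change_price

e_d = price_elasticity_of_demand(-1.5, 1.0)   # 1% price increase, 1.5% drop in quantity demanded
print(e_d, abs(e_d))   # -1.5 and 1.5
```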
Food is an item that is essential, so demand will always exist; however, eating out, which is more expensive than eating in, is not as essential. The average price elasticity of demand for food at home is 0.51. This means that a 1% price increase results in a 0.51% decrease in quantity demanded. Because eating at home is less expensive than eating in restaurants, it would not be unreasonable to assume that as prices increase, people would eat out less often. If this is the case, we would expect the price elasticity of demand for eating out to be greater than for eating at home. Test the hypothesis that the mean elasticity for food away from home is higher than for food at home, meaning that changing prices have a greater impact on eating out. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2804646/) (http://www.ncbi.nlm.nih.gov/pmc/arti...46/table/tbl1/)
11a. Write the hypotheses that would be used to show that the mean elasticity for food away from home is greater than 0.51. Use a level of significance of 7%.
11b. Label each distribution with the decision rule line. Identify \(\alpha\), \(\beta\), and power on the appropriate distribution.
11c. What is the direction of the extreme?
11d. What is the value of \(\alpha\)?
11e. What is the value of \(\beta\)?
11f. What is the value of Power?
The Data: A sample of 13 restaurants had a mean elasticity of 0.80.
11g. Show the p-value on the appropriate distribution.
11h. What is the value of the p-value?
11i. Which hypothesis is supported by the data?
11j. Are the data significant?
11k. What type error could have been made?
11l. Write the concluding sentence.
October 2015, 35(10): 5153-5169. doi: 10.3934/dcds.2015.35.5153
On the Cauchy problem for a four-component Camassa-Holm type system
Zeng Zhang 1, and Zhaoyang Yin 2,
Department of Mathematics, Sun Yat-sen University, Guangzhou, 510275, China
Department of Mathematics, Zhongshan University, Guangzhou, 510275
Received December 2014 Revised January 2015 Published April 2015
This paper is concerned with a four-component Camassa-Holm type system proposed in [37], where its bi-Hamiltonian structure and infinitely many conserved quantities were constructed. In the paper, we first establish the local well-posedness for the system. Then we present several global existence and blow-up results for two integrable two-component subsystems.
Keywords: A four-component Camassa-Holm system, local well-posedness, global existence, blow-up.
Mathematics Subject Classification: Primary: 35G25; Secondary: 35L0.
Citation: Zeng Zhang, Zhaoyang Yin. On the Cauchy problem for a four-component Camassa-Holm type system. Discrete & Continuous Dynamical Systems, 2015, 35 (10) : 5153-5169. doi: 10.3934/dcds.2015.35.5153
M. S. Alber, R. Camassa, D. D. Holm and J. E. Marsden, The geometry of peaked solitons and billiard solutions of a class of integrable PDEs, Lett. Math. Phys., 32 (1994), 137-151. doi: 10.1007/BF00739423. Google Scholar
H. Bahouri, J.-Y. Chemin and R. Danchin, Fourier Analysis and Nonlinear Partial Differential Equations, Grundlehren der Mathematischen Wissenschaften, Vol. 343, Springer, Berlin-Heidelberg-New York, 2011. doi: 10.1007/978-3-642-16830-7. Google Scholar
A. Bressan and A. Constantin, Global conservative solutions of the Camassa-Holm equation, Arch. Ration. Mech. Anal., 183 (2007), 215-239. doi: 10.1007/s00205-006-0010-z. Google Scholar
R. Camassa and D. D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett., 71 (1993), 1661-1664. doi: 10.1103/PhysRevLett.71.1661. Google Scholar
C. Cao, D. D. Holm and E. S. Titi, Traveling wave solutions for a class of one-dimensional nonlinear shallow water wave models, J. Dynam. Differential Equations, 16 (2004), 167-178. doi: 10.1023/B:JODY.0000041284.26400.d0. Google Scholar
G. M. Coclite and K. H. Karlsen, On the well-posedness of the Degasperis-Procesi equation, J. Funct. Anal., 233 (2006), 60-91. doi: 10.1016/j.jfa.2005.07.008. Google Scholar
A. Constantin, Existence of permanent and breaking waves for a shallow water equation: A geometric approach, Ann. Inst. Fourier (Grenoble), 50 (2000), 321-362. doi: 10.5802/aif.1757. Google Scholar
A. Constantin, On the scattering problem for the Camassa-Holm equation, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci., 457 (2001), 953-970. doi: 10.1098/rspa.2000.0701. Google Scholar
A. Constantin, The Hamiltonian structure of the Camassa-Holm equation, Exposition. Math., 15 (1997), 53-85. Google Scholar
A. Constantin, The trajectories of particles in Stokes waves, Invent. Math., 166 (2006), 523-535. doi: 10.1007/s00222-006-0002-5. Google Scholar
A. Constantin and J. Escher, Analyticity of periodic traveling free surface water waves with vorticity, Ann. of Math., 173 (2011), 559-568. doi: 10.4007/annals.2011.173.1.12. Google Scholar
A. Constantin and J. Escher, Global existence and blow-up for a shallow water equation, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 26 (1998), 303-328. Google Scholar
A. Constantin and J. Escher, Global weak solutions for a shallow water equation, Indiana Univ. Math. J., 47 (1998), 1527-1545. Google Scholar
A. Constantin and J. Escher, Particle trajectories in solitary water waves, Bull. Amer. Math. Soc., 44 (2007), 423-431. doi: 10.1090/S0273-0979-07-01159-7. Google Scholar
A. Constantin and J. Escher, Wave breaking for nonlinear nonlocal shallow water equations, Acta Math., 181 (1998), 229-243. doi: 10.1007/BF02392586. Google Scholar
A. Constantin and J. Escher, Well-posedness, global existence, and blowup phenomena for a periodic quasi-linear hyperbolic equation, Comm. Pure Appl. Math., 51 (1998), 475-504. doi: 10.1002/(SICI)1097-0312(199805)51:5<475::AID-CPA2>3.0.CO;2-5. Google Scholar
A. Constantin, R. I. Ivanov and J. Lenells, Inverse scattering transform for the Degasperis-Procesi equation, Nonlinearity, 23 (2010), 2559-2575. doi: 10.1088/0951-7715/23/10/012. Google Scholar
A. Constantin and D. Lannes, The hydrodynamical relevance of the Camassa-Holm and Degasperis-Procesi equations, Arch. Ration. Mech. Anal., 192 (2009), 165-186. doi: 10.1007/s00205-008-0128-2. Google Scholar
A. Constantin and H. P. McKean, A shallow water equation on the circle, Comm. Pure Appl. Math., 52 (1999), 949-982. doi: 10.1002/(SICI)1097-0312(199908)52:8<949::AID-CPA3>3.0.CO;2-D. Google Scholar
A. Constantin and L. Molinet, Global weak solutions for a shallow water equation, Comm. Math. Phys., 211 (2000), 45-61. doi: 10.1007/s002200050801. Google Scholar
A. Constantin and W. A. Strauss, Stability of peakons, Comm. Pure Appl. Math., 53 (2000), 603-610. doi: 10.1002/(SICI)1097-0312(200005)53:5<603::AID-CPA3>3.0.CO;2-L. Google Scholar
A. Constantin and W. A. Strauss, Stability of the Camassa-Holm solitons, J. Nonlinear Sci., 12 (2002), 415-422. doi: 10.1007/s00332-002-0517-x. Google Scholar
R. Danchin, A few remarks on the Camassa-Holm equation, Differential Integral Equations, 14 (2001), 953-988. Google Scholar
A. Degasperis, D. D. Kholm and A. N. I. Khon, A new integrable equation with peakon solutions, Teoret. Mat. Fiz., 133 (2002), 170-183. doi: 10.1023/A:1021186408422. Google Scholar
A. Degasperis and M. Procesi, Asymptotic integrability, in Symmetry and Perturbation Theory (Rome, 1998), World Sci. Publ., River Edge, NJ, 1999, 23-37. Google Scholar
J. Escher, Y. Liu and Z. Yin, Shock waves and blow-up phenomena for the periodic Degasperis-Procesi equation, Indiana Univ. Math. J., 56 (2007), 87-117. doi: 10.1512/iumj.2007.56.3040. Google Scholar
J. Escher and Z. Yin, Initial boundary value problems for nonlinear dispersive wave equations, J. Funct. Anal., 256 (2009), 479-508. doi: 10.1016/j.jfa.2008.07.010. Google Scholar
J. Escher and Z. Yin, Initial boundary value problems of the Degasperis-Procesi equation, Polish Acad. Sci. Inst. Math., Warsaw, 81 (2008), 157-174. doi: 10.4064/bc81-0-10. Google Scholar
A. S. Fokas, On a class of physically important integrable equations, Phys. D, 87 (1995), 145-150. doi: 10.1016/0167-2789(95)00133-O. Google Scholar
B. Fuchssteiner, Some tricks from the symmetry-toolbox for nonlinear equations: Generalizations of the Camassa-Holm equation, Phys. D, 95 (1996), 229-243. doi: 10.1016/0167-2789(96)00048-6. Google Scholar
B. Fuchssteiner and A. S. Fokas, Symplectic structures, their Bäcklund transformations and hereditary symmetries, Phys. D, 4 (1981), 47-66. doi: 10.1016/0167-2789(81)90004-X. Google Scholar
G. Gui, Y. Liu, P. J. Olver and C. Qu, Wave-breaking and peakons for a modified Camassa-Holm equation, Comm. Math. Phys., 319 (2013), 731-759. doi: 10.1007/s00220-012-1566-0. Google Scholar
A. A. Himonas and C. Holliman, The Cauchy problem for the Novikov equation, Nonlinearity, 25 (2012), 449-479. doi: 10.1088/0951-7715/25/2/449. Google Scholar
D. D. Holm and R. I. Ivanov, Multi-component generalizations of the CH equation: Geometrical aspects, peakons and numerical examples, J. Phys. A, 43 (2010), 492001, 20pp. doi: 10.1088/1751-8113/43/49/492001. Google Scholar
A. N. W. Hone and J. P. Wang, Integrable peakon equations with cubic nonlinearity, J. Phys. A, 41 (2008), 372002, 10pp. doi: 10.1088/1751-8113/41/37/372002. Google Scholar
H. Li, Y. Li and Y. Chen, Bi-Hamiltonian structure of multi-component Novikov equation, J. Nonlinear Math. Phys., 21 (2014), 509-520. doi: 10.1080/14029251.2014.975522. Google Scholar
N. Li, Q. P. Liu and Z. Popowicz, A four-component Camassa-Holm type hierarchy, J. Geom. Phys., 85 (2014), 29-39. doi: 10.1016/j.geomphys.2014.05.026. Google Scholar
Y. A. Li and P. J. Olver, Well-posedness and blow-up solutions for an integrable nonlinearly dispersive model wave equation, J. Differential Equations, 162 (2000), 27-63. doi: 10.1006/jdeq.1999.3683. Google Scholar
X. Liu, Z. Qiao and Z. Yin, On the Cauchy problem for a generalized Camassa-Holm equation with both quadratic and cubic nonlinearity, Commun. Pure Appl. Anal., 13 (2014), 1283-1304. doi: 10.3934/cpaa.2014.13.1283. Google Scholar
Y. Liu and Z. Yin, Global existence and blow-up phenomena for the Degasperis-Procesi equation, Comm. Math. Phys., 267 (2006), 801-820. doi: 10.1007/s00220-006-0082-5. Google Scholar
H. Lundmark, Formation and dynamics of shock waves in the Degasperis-Procesi equation, J. Nonlinear Sci., 17 (2007), 169-198. doi: 10.1007/s00332-006-0803-3. Google Scholar
V. Novikov, Generalizations of the Camassa-Holm equation, J. Phys. A, 42 (2009), 342002, 14pp. doi: 10.1088/1751-8113/42/34/342002. Google Scholar
P. J. Olver and P. Rosenau, Tri-Hamiltonian duality between solitons and solitary-wave solutions having compact support, Phys. Rev. E (3), 53 (1996), 1900-1906. doi: 10.1103/PhysRevE.53.1900. Google Scholar
Z. Qiao, A new integrable equation with cuspons and W/M-shape-peaks solitons, J. Math. Phys., 47 (2006), 112701, 9pp. doi: 10.1063/1.2365758. Google Scholar
Z. Qiao and B. Xia, Integrable peakon systems with weak kink and kink-peakon interactional solutions, Front. Math. China, 8 (2013), 1185-1196. doi: 10.1007/s11464-013-0314-x. Google Scholar
C. Qu, J. Song and R. Yao, Multi-component integrable systems and invariant curve flows in certain geometries, SIGMA Symmetry Integrability Geom. Methods Appl., 9 (2013), Paper 001, 19pp. Google Scholar
G. Rodríguez-Blanco, On the Cauchy problem for the Camassa-Holm equation, Nonlinear Anal., 46 (2001), 309-327. doi: 10.1016/S0362-546X(01)00791-X. Google Scholar
J. Song, C. Qu and Z. Qiao, A new integrable two-component system with cubic nonlinearity, J. Math. Phys., 52 (2011), 013503, 9pp. doi: 10.1063/1.3530865. Google Scholar
J. F. Toland, Stokes waves, Topol. Methods Nonlinear Anal., 7 (1996), 1-48. Google Scholar
X. Wu and Z. Yin, Global weak solutions for the Novikov equation, J. Phys. A, 44 (2011), 055202, 17pp. doi: 10.1088/1751-8113/44/5/055202. Google Scholar
X. Wu and Z. Yin, Well-posedness and global existence for the Novikov equation, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 11 (2012), 707-727. Google Scholar
B. Xia and Z. Qiao, A new two-component integrable system with peakon and weak kink solutions, preprint. Google Scholar
B. Xia and Z. Qiao, Integrable multi-component Camassa-Holm system, preprint. Google Scholar
B. Xia, Z. Qiao and R. Zhou, A synthetical integrable two-component model with peakon solutions, preprint. Google Scholar
Z. Xin and P. Zhang, On the weak solutions to a shallow water equation, Comm. Pure Appl. Math., 53 (2000), 1411-1433. doi: 10.1002/1097-0312(200011)53:11<1411::AID-CPA4>3.0.CO;2-5. Google Scholar
K. Yan, Z. Qiao and Z. Yin, Qualitative analysis for a new integrable two-component Camassa-Holm system with peakon and weak kink solutions, Comm. Math. Phys., 336 (2015), 581-617. doi: 10.1007/s00220-014-2236-1. Google Scholar
Z. Yin, Global weak solutions for a new periodic integrable equation with peakon solutions, J. Funct. Anal., 212 (2004), 182-194. doi: 10.1016/j.jfa.2003.07.010. Google Scholar
Z. Yin, On the Cauchy problem for an integrable equation with peakon solutions, Illinois J. Math., 47 (2003), 649-666. Google Scholar
Z. Zhang and Z. Yin, Well-posedness, global existence and blow-up phenomena for an integrable multi-component Camassa-Holm system, preprint. Google Scholar
Coevolutionary search for optimal materials in the space of all possible compounds
Zahed Allahyari (ORCID: 0000-0001-8249-7185) and Artem R. Oganov (ORCID: 0000-0001-7082-9728)
npj Computational Materials, volume 6, Article number: 55 (2020)
Subjects: Computational methods; Magnetic properties and materials
An Author Correction to this article was published on 20 July 2020.
Over the past decade, evolutionary algorithms, data mining, and other methods showed great success in solving the main problem of theoretical crystallography: finding the stable structure for a given chemical composition. Here, we develop a method that addresses the central problem of computational materials science: the prediction of material(s), among all possible combinations of all elements, that possess the best combination of target properties. This nonempirical method combines our new coevolutionary approach with the carefully restructured "Mendelevian" chemical space, energy filtering, and Pareto optimization to ensure that the predicted materials have optimal properties and a high chance to be synthesizable. The first calculations, presented here, illustrate the power of this approach. In particular, we find that diamond (and its polytypes, including lonsdaleite) are the hardest possible materials and that bcc-Fe has the highest zero-temperature magnetization among all possible compounds.
Finding materials with optimal properties (the highest hardness, the lowest dielectric permittivity, etc.) or a combination of properties (e.g., the highest hardness and fracture toughness) is the central problem of materials science. Until recently, only experimental materials discovery was possible, with all the limitations and expense of the trial-and-error approach, but the ongoing revolution in theoretical/computational materials science (see refs. 1,2) begins to change the situation. Using quantum-mechanical calculations, it is now routine to predict many properties when the crystal structure is known. In 2003, Curtarolo demonstrated the data mining method for materials discovery (ref. 3) by screening crystal structure databases (which can include known or hypothetical structures) via ab initio calculations. At the same time, major progress in fully nonempirical crystal structure prediction took place. Metadynamics (ref. 4) and evolutionary algorithms (refs. 5,6,7) have convinced the community that crystal structures are predictable.

Despite the success of these and other methods, a major problem remains unsolved: the prediction of a material with optimal properties among all possible compounds. In total, 4950 binary systems, 161,700 ternary systems, 3,921,225 quaternary systems, and an exponentially growing number of higher-complexity systems can be created from the 100 best-studied elements in the Periodic Table. In each system, a very large number of compounds and, technically, an infinite number of crystal structures can be constructed computationally, and an exhaustive screening of such a search space is impractical. Only about 72% of binary, 16% of ternary, 0.6% of quaternary, and less than 0.05% of more complex systems have ever been studied experimentally (ref. 8), and even in those systems that have been studied, new compounds are being discovered continually (refs. 9,10,11). Studying all these systems, one by one, using global optimization methods is unrealistic. Data mining is a more practical approach, but the statistics show that the existing databases are significantly incomplete even for binary systems, and much more so for ternary and more complex systems. Besides, data mining cannot find fundamentally new crystal structures. When searching for materials optimal in more than one property, these limitations of both approaches become even greater. We present a new method implemented in our code, MendS (Mendelevian Search), and show its application to the discovery of (super)hard and magnetic materials.
Mendelevian space
Global optimization methods are effective only when applied to property landscapes that have an overall organization, e.g., a landscape with a small number of funnels, where all or most of the good solutions (e.g., low-energy structures) are clustered. Discovering materials with optimal properties, i.e., performing a complex global optimization in the chemical and structural space, requires a rational organization of the chemical space that puts compounds with similar properties close to each other. If this space is created by ordering the elements by their atomic numbers, we observe a periodic patchy pattern (Fig. 1a), unsuitable for global optimization.
Fig. 1: Pettifor maps showing the distribution of hardness in binary systems, using different sequences of the elements.
a Atomic numbers, b Villars' Periodic number, c Pettifor's MN, and d MN obtained in this work. Noble gases were excluded because of their almost complete inability to form stable compounds at normal conditions. Rare earths and elements heavier than Pu were excluded because of the problems of the DFT calculations. In total, we consider 74 elements that can be combined into 2775 possible binary systems. Each pixel is a binary system, the color encodes the highest hardness in each system.
In 1984, Pettifor suggested a new quantity, the so-called "chemical scale," that arranges the elements in a sequence such that similar elements are placed near each other, and compounds of these elements also display similar properties (ref. 12). This way, structure maps (ref. 13) with well-defined regions of similar crystal structures or properties can be drawn. In a thus ordered chemical space, evolutionary algorithms should be extremely effective: they can zoom in on the promising regions at the expense of unpromising ones.
What is the nature of the chemical scale or the Mendeleev number (MN), which is an integer showing the position of an element in the sequence on the chemical scale? Pettifor derived these quantities empirically, while we redefined them in a more universal, nonempirical way that clarifies their physical meaning (the method for computing MN is explained in the Supplementary Information). Goldschmidt's law of crystal chemistry states that the crystal structure is determined by stoichiometry, atomic size, polarizability, and electronegativity of atoms/ions (refs. 14,15), while polarizability and electronegativity are strongly correlated (ref. 16). Villars et al. (ref. 17) introduced another enumeration of the elements, emphasizing the role of valence electrons, which they called the "Periodic number" (PN). They also showed that atomic size and electronegativity can be derived from AN and PN (ref. 17). In redefining the chemical scale and MN, we used the most important chemical properties of the atom, its size R and electronegativity χ (Pauling electronegativity), the combination of which can be used as a single parameter succinctly characterizing the chemistry of the element. However, we need to emphasize that the chemical scale and MN are only used in this method for visualizing the results (the choice of MN for plotting such a Pettifor map is up to the user), while in our global coevolutionary algorithm each atom is represented by both its size R and electronegativity χ to increase the accuracy. In this work, the atomic radius R is defined as half the shortest interatomic distance in the relaxed (for most elements hypothetical) simple cubic structure of an element; see Supplementary Table 1.
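A minimal sketch of how a Mendeleev-number-like ordering could be produced from atomic radius and electronegativity is given below. It is not the MendS code: the element data are rough illustrative values, and the standardization of the two descriptors before fitting the regression line, as well as all variable names, are assumptions made only for illustration.

```python
import numpy as np

# Toy data: (element, atomic radius in angstroms, Pauling electronegativity); illustrative values only
elements = [("Na", 1.86, 0.93), ("Cl", 1.02, 3.16), ("Fe", 1.26, 1.83),
            ("C", 0.77, 2.55), ("W", 1.37, 2.36), ("O", 0.66, 3.44)]

names = [e[0] for e in elements]
R = np.array([e[1] for e in elements])
chi = np.array([e[2] for e in elements])

# Standardize the two (crudely anti-correlated) descriptors, fit a regression line
# through them, and project every element onto that line.
x = (R - R.mean()) / R.std()
y = (chi - chi.mean()) / chi.std()
slope = np.polyfit(x, y, 1)[0]
direction = np.array([1.0, slope]) / np.hypot(1.0, slope)
projection = np.c_[x, y] @ direction

# Sorting the elements by their projected coordinate gives a Mendeleev-number-like sequence.
order = np.argsort(projection)
mendeleev_like = {names[i]: rank + 1 for rank, i in enumerate(order)}
print(mendeleev_like)
```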
Figure 2 shows the overall linear correlation between the MNs redefined in this work and those proposed by Pettifor. Carefully chosen MNs should lead to strong clustering in the chemical space, where neighboring systems have similar properties. The results of our searches for hard binary compounds using the PN, the MNs suggested by Pettifor, and our redefined MNs are shown on Pettifor maps (Fig. 1b–d). Satisfyingly, our redefined MNs result in a better-organized chemical space with a clearer separation of regions containing binary systems with similar hardness. In fact, if our MNs (which are the sequences of projected elements on their regression line in the space of crudely correlated atomic radius and electronegativity) generate a good 2D map, with clear grouping of similar chemical systems (e.g., the Na–Cl, K–Cl, Ca–Cl, and Na–Br systems are located nearby), then a much better grouping is expected in the space of the initial two parameters R and χ, and it is in this space that the variation operators of our method are defined (Fig. 3a, b). It is also worth mentioning that sizes and electronegativities of the atoms change under pressure, so using standard definitions of the MN (such as AN, PN, or Pettifor's MN) will not work well. Our recipe, however, is universal and only requires atomic sizes and electronegativities at the pressure of interest. In this paper, we illustrate our method on binary systems, although more complex, at least ternary, systems are also tractable. In a nutshell (see the Methods section for details), our method performs evolution of a population of variable-composition chemical systems, each of which is tackled by an evolutionary optimization, that is, an evolution over evolutions. Individual chemical systems are allowed to evolve and improve, then are compared and ranked, and the fittest ones get a chance to produce new chemical systems (which partially inherit structural and chemical information from their parents). Evolving the population of such chemical systems, one efficiently finds the globally optimal solution, as well as numerous high-quality suboptimal solutions.
Fig. 2: Correlation between the Mendeleev numbers defined in this work and those proposed by Pettifor.
These MNs show an overall correlation, but for some elements (e.g., the noble gases) there are large differences.
Fig. 3: MendS algorithm.
a Scheme showing how the chemical heredity and b chemical mutation create new compositions. The probability, displayed in shades of gray, is given to each possible daughter system according to its distance from the fitter parent (dark green point). c Flowchart of the coevolutionary algorithm used in MendS (EA—evolutionary algorithm, MO—multi-objective).
Search for hard and superhard binary systems
Pareto optimization18 of hardness and stability was performed over all possible structures (with up to 12 atoms in the primitive cell) and compositions limited to the binary compounds of 74 elements (i.e., all elements excluding the noble gases, rare earth elements, and elements heavier than Pu). In this work, 600 systems have been computed in 20 MendS generations from a total of 2775 unary and binary systems that can be made of 74 elements, i.e., only about one fifth of all possible systems were sampled.
Figure 4 shows the efficiency of this method in finding optimal materials. In this fast calculation, numerous stable and metastable hard and superhard materials were detected in a single run. Carbon (diamond and other allotropes) and boron, known to be the only superhard elements, were both found. In addition, both new and numerous known hard and superhard binary systems, as well as potentially hard systems, were found in the same calculation, among them BxCy19, CxNy20,21, BxNy22,23, BxOy19,24,25, RexBy26,27, WxBy28, SixCy29,30,31,32, WxCy30,31,32, AlxOy30,31,32, TixCy32, SixNy32, TixNy32, BexOy32, RuxOy33,34, OsxOy35, RhxBy36, IrxBy36, OsxBy37,38,39, and RuxBy37,38,39. We reported some of the results of our search in a separate paper on the Cr–B, Cr–C, and Cr–N systems40, and our study of the W–B system41 was inspired by the present finding of promising properties in the Mo–B system (also published in42). The list of all systems studied during the calculation is available in Supplementary Information.
Fig. 4: Results of the simultaneous optimization of the hardness and stability in the space of all unary and binary compounds.
a 1st MendS generation, b 10th MendS generation, c 20th MendS generation. The first five Pareto fronts are shown, green points representing all sampled structures. The instability of each compound is defined using Maxwell's convex hull construction. Diamond, the hardest material, is indicated by a star.
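The instability used as the second objective, i.e., the energy above the Maxwell convex hull, can be computed for a binary A–B system as in the minimal sketch below. The phase energies in the example are hypothetical values chosen only for illustration; this is not the authors' implementation.

```python
import numpy as np

def energy_above_hull(x, e_form, x_q, e_q):
    """Energy (per atom) of a phase above the lower convex hull of a binary
    A-B system. x: fractions of B for the known phases, e_form: their formation
    energies per atom; the pure elements enter with e_form = 0."""
    pts = list(zip(x, e_form)) + [(0.0, 0.0), (1.0, 0.0)]
    e_hull = np.inf
    for x1, e1 in pts:
        if abs(x1 - x_q) < 1e-12:        # a known phase exactly at this composition
            e_hull = min(e_hull, e1)
        for x2, e2 in pts:
            if x1 < x_q < x2:            # segment spanning the query composition
                e_hull = min(e_hull, e1 + (e2 - e1) * (x_q - x1) / (x2 - x1))
    return e_q - e_hull

# Hypothetical Mo-B example: two phases assumed on/near the hull, plus a query
# composition MoB3 (x_B = 0.75) whose energy is -0.30 eV/atom.
x_known = [0.5, 2 / 3]                   # MoB, MoB2
e_known = [-0.40, -0.45]                 # eV/atom, illustrative numbers
print(energy_above_hull(x_known, e_known, 0.75, -0.30))   # eV/atom above the hull
```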
Because of the huge compositional space (2775 systems, each with 102 possible compositions, each of which has a very large number of possible structures), it was necessary to shorten the time of calculations by reducing the number of generations and/or the population size. Therefore, the structures and compositions found may be approximate and may need to be refined for the most interesting systems by a precise evolutionary calculation for each system. The results are shown in Table 1. Of these, several transition metal borides are predicted to be hard; some were already reported as hard materials (e.g., MoxBy43,44 and MnxBy45) or discussed as potentially hard (e.g., TcxBy46, FexBy47, and VxBy48). Interestingly, a number of previously unknown hard structures more stable than those reported so far were predicted in these systems. Our calculations also revealed completely new hard systems, SxBy and BxPy, and, quite unexpectedly, the MnxHy system was discovered to contain very hard phases (Table 1).
Table 1 The predicted Vickers hardness (Hv), fracture toughness (K1C) and enthalpy above the convex hull of selected materials found using MendS.
For the MoxBy system, several simultaneously hard and low-energy structures were detected in our calculations. Of these, only the stable R\(\bar 3\)m structure of MoB2 was studied before, and the reported hardness for this structure (24.2 GPa obtained experimentally49 and 33.1 GPa found theoretically44) is in close agreement with the value calculated in this work (28.5 GPa). MoB3 and MoB4 were widely studied before43,44, and a few low-energy and in some cases hard structures were reported for these systems (namely, R\(\bar 3\)m-MoB3, 31.8 GPa43; P63/mmc-MoB3, 37.3 GPa44; and the much softer P63/mmc-MoB4, 8.2 GPa44). In this work, new low-energy structures with high hardness were discovered in these systems (Table 1).
For the MnxBy system, we propose several new compounds that are simultaneously hard and low in energy (Table 1). In a previous study50, P21/c-MnB4 was reported to be stable and to have a very high hardness (computed to be 40.1 GPa50, experimentally measured 34.6–37.4 GPa51), while C2/m-MnB4 was claimed to be the second lowest-energy structure, with an energy difference of 18 meV/atom. Our study confirms the stability of P21/c-MnB4. However, we discovered another MnB4 structure, with the Pnnm space group, the energy of which lies between the energies of the two aforementioned structures of MnB4 (Table 1). In this work, we found that the ferromagnetic phase of Pnnm-MnB4 is more stable than the nonmagnetic one, and a hardness of 40.7 GPa was computed for this magnetic structure.
Because of the radioactivity of technetium, the TcxBy system has not been studied experimentally, while computational studies of this system started recently46,52,53,54. In 2015, P\(\bar 3\)m1-TcB was predicted to be energetically more favorable than the previously discussed Cmcm and WC-type structures55. The reported hardness for this structure, 30.3 GPa55, is very close to the value predicted in this study (31 GPa). Because of the prediction of other stable compounds (e.g., Tc3B5) in our work, this structure became metastable (by 13 meV/atom). In this work, P\(\bar 6\)m2-TcB3 with the computed hardness of 27.2 GPa was predicted as a stable structure at zero pressure. Other works53,54, conducted in parallel to ours, also detected this structure and claimed that it is synthesizable at pressures above 4 GPa56. Another low-energy (3 meV above the convex hull) hard structure (33.1 GPa) with the P\(\bar 3\)m1 space group for TcB3 was also predicted in our study. P\(\bar 6\)m2-Tc3B5, a compound having a hardness of 30.6 GPa and stable at zero pressure, is predicted in our work for the first time. Several other simultaneously hard (in the range of 30 to 36 GPa) and low-energy metastable phases of TcxBy predicted in this work are shown in Table 1.
In recent years, many efforts were focused on searching for low-energy phases of VxBy and studying their electrical and mechanical properties. As a result, several low-energy hard and superhard phases were predicted48,55. Nevertheless, the experimental data exist only for the well-known hexagonal VB2 (AlB2-type) with the P6/mmm space group57. In addition to some previously studied structures58 (e.g., Cmcm-VB, Immm-V3B4, and P6/mmm-VB2), which were also found in our calculations, a few boron-rich phases possessing simultaneously low energy and very high hardness were discovered (Table 1). The calculated hardness for these boron-rich phases is very close to or above 40 GPa (VB7: 39.7 GPa, VB5: 40 GPa, and VB12: 44.5 GPa). A new extremely hard P\(\bar 4\)m2-V3B4 phase is predicted here, with the energy 6 meV lower than the previously proposed Immm structure.
Most of the studies of the FexBy system were dedicated to the FeB2 and FeB4 phases47,59,60. Several works studying different FexBy compounds61,62 reported Fe2B, FeB, and FeB2 as stable phases. In this work, we detected another stable phase, FeB3, with the P21/m space group and the hardness of 30.7 GPa. To the best of our knowledge, FeB3 was never reported, neither theoretically nor experimentally. The orthorhombic Pnnm-FeB4, with the energy of 2 meV above the convex hull (Table 1), was synthesized at pressures above 8 GPa, and its hardness was reported to be 62(5) GPa59, which encouraged many computational studies of this structure. However, none of them confirmed such a high value of hardness, while the Vickers hardness reported in several independent works varies in the range of 24–29 GPa47,60,62,63. We calculated its hardness to be 28.6 GPa.
In the BxPy system, the cubic boron phosphide BP with the zincblende structure is a well-known compound with a hardness reported to be roughly the same as that of SiC64. In our calculations, the hardness of SiC and BP was found to be 33 GPa and 37 GPa, respectively. Moreover, B6P was discovered as another stable compound in this system and predicted to be superhard, with a computed hardness exceeding 41 GPa. In the SixCy system, in addition to the known diamond-type β-SiC, another similar structure (actually, a polytype of β-SiC) with the R3m space group and nearly the same hardness was found. The energy of this structure is just 1 meV/atom higher than that of β-SiC.
The MnxHy system is unexpected in the list of hard systems, but several very hard phases were indeed found in it (Table 1). All of these phases are nonmagnetic, highly symmetric, and energetically favorable (lying either on the convex hull or close to it), with the hardness of up to 30 GPa. In this system, two thermodynamically stable compounds (Mn2H and MnH) were predicted, with the space groups P\(\bar 3\)m1 and P63/mmc, and computed hardness of 21.5 and 29.5 GPa, respectively (in Table 1, only structures with the hardness above 26 GPa are shown for this system).
Generally, the BxSy system is not hard, but metastable boron sulfides turn out to be potentially hard. We found a low-energy metastable phase of this system, Cmcm-B4S3, with a hardness unexpectedly exceeding 30 GPa. This may stimulate future studies of this system.
For a better insight, some of the prominent structures seen in our simulations are shown in Fig. 5a. More details on all phases presented in Table 1 are given in Supplementary Information.
Fig. 5: Results of our Mendelevian search for hard and superhard materials.
a (1) F\(\bar 4\)3m-BN, (2) R\(\bar 3\)m-MoB2, (3) P\(\bar 3\)m1-MoB3, (4) P\(\bar 6\)m2-MoB5, (5) Cmcm-VB, (6) P6/mmm-VB2, (7) Immm-V3B4, (8) P\(\bar 4\)m2-V3B4, (9) P\(\bar 6\)m2-VB5, (10) I4/mmm-VB12, (11) Pnnm-MnB4, (12) Pm-MnB13, (13) Cmcm-B4S3, (14) P63/mmc-MnH, (15) P21/m-FeB3, and (16) R\(\bar 3\)m-B6P. b "Ashby plot" of the Vickers hardness vs. fracture toughness. Stable hard compounds from the previous works40,74 are shown as suns; stable and metastable compounds found in this work are represented by circles and triangles, respectively.
In our calculations, some boron hydrides were predicted to be superhard, but they had high energy and were not included in Table 1. However, it may be possible to stabilize these hard phases under pressure, or by chemical modification.
Figure 5b shows the studied materials in the "hardness—fracture toughness" space. Diamond, lonsdaleite, and cubic BN possess the best properties, but are metastable at normal conditions. Among the stable phases, borides of transition metals (especially from groups VB, VIB, and VIIB) stand out: we note VB2, V3B4, MoB2, CrB4, WB5, and MnB4 in particular. These and related materials (see ref. 65) are of high technological interest.
The fact that all known binary superhard systems were found in a short coevolutionary run demonstrates the power of the method, which is ready to be applied to the other types of materials.
Search for magnetic binary systems
In addition to the Mendelevian search for stable/metastable hard and superhard materials, we performed another Mendelevian search for materials with maximum magnetization and stability to examine the power and efficiency of the method in fast and accurate determination of materials with target properties. We performed this calculation over all possible structures (with up to 12 atoms in the primitive cell) and compositions limited to the binary compounds of 74 elements (i.e., all elements excluding the noble gases, rare earth elements, and elements heavier than Pu). In this calculation, well-known ferromagnets iron, cobalt, nickel, and several magnetic materials made from the combination of these elements with other elements were detected before the sixth generation. Here, for each structure we performed spin-polarized calculations using the GGA-PBE functional66 as implemented in the VASP code67,68. More details on structure relaxation and input parameters can be found in Supplementary Information. The chemical landscape of magnetization and evolution of its sampling in the Mendelevian search for magnetic materials are shown in Fig. 6d–f; this was formed after calculating 450 binary systems over 15 generations. In this plot, materials with high magnetization are clearly clustered together. Figure 6d, f shows how the (co)evolutionary optimization discovered all the promising regions at the expense of the unpromising ones. This calculation has found that among all substances, bcc-Fe has the highest magnetization at zero Kelvin.
Fig. 6: Sampling of the chemical space.
Systems produced (a, d) randomly in the 1st generation, and using all variation operators in the (b, e) 5th and (c, f) 10th generations in searching for hard (a–c) and magnetic (d–f) materials. Randomly generated systems are shown as violet circles.
We have developed a method for predicting materials having one or more optimal target properties. The method, called Mendelevian search (MendS), based on the suitably defined chemical space, powerful coevolutionary algorithm, and multi-objective Pareto optimization technique, was applied to searching for low-energy hard and superhard materials. Note that due to the property of evolutionary and coevolutionary algorithms to enhance sampling of the most promising regions of the search space (where the optimal, as well as all or most of the high-quality suboptimal solutions are clustered together), each MendS search discovers a large number of materials with excellent properties at a low computational cost. Well-known superhard systems—diamond, boron allotropes, and the B–N system—were found in a single calculation together with other notable hard systems (Si–C, B–C, Cr–N, W–C, metal borides, etc.). The Mn–H system was discovered to be unexpectedly hard, and several new hard and superhard phases were revealed in the previously studied systems (V–B, Tc–B, Mn–B, etc.). The method successfully found almost all known hard systems in a single run, and a comprehensive chemical map of hard materials was produced. A similar chemical map was plotted for magnetic materials; well-known magnetic systems such as Ni, Co, Fe were found within just a few generations. These examples show the power and efficiency of our method, which can be used to search for optimal materials with any combination of properties at arbitrary conditions. As the first step in prediction of novel materials possessing desired properties, the method to a large extent solves, in a fully nonempirical way, the central problem of computational materials science.
The whole process can be described as a joint evolution (or coevolution) of evolutionary runs, each of which deals with an individual variable-composition system. Having defined the chemical space, we initialize the calculation by randomly selecting a small number of systems from the entire chemical space for the first MendS generation. These systems are then optimized by the evolutionary algorithm USPEX5,6,7 in its variable-composition mode69, searching for compounds and structures with optimal properties (e.g., here we simultaneously maximized hardness and stability), after which MendS jointly analyses the results from all these systems. Removing identical structures using the fingerprint method70, jointly evaluating all systems, refining and preparing the dataset, and discarding the structures that are unstable by more than 1.0 eV/atom, MendS ranks all systems of the current generation and selects the fittest (in present calculations, fittest 60% were selected) variable-composition systems as potential parents for new systems. Applying chemical variation operators, such as mutation and heredity, to these parent systems yields offspring systems for the next coevolutionary generation. In addition, some systems are generated randomly to preserve the chemical diversity of the population. This process is continued until the number of coevolutionary generations reaches the maximum predefined by the user (Fig. 3c). The underlying ab initio structure relaxations and energy calculations were performed using density functional theory with the projector augmented wave method (PAW) as implemented in the VASP code67,68. Further details on the input parameters of MendS, USPEX, and VASP are given in Supplementary Information.
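A schematic, runnable sketch of this outer coevolutionary loop is given below, assuming a toy chemical space of element pairs. The inner USPEX + DFT step is replaced by a dummy evaluator, the joint ranking is simplified to a scalar criterion (the real method uses the Pareto rank described in the next section), and all names are placeholders rather than the actual MendS interface.

```python
import random

def inner_evolutionary_run(system, seed_structures):
    """Placeholder for a variable-composition USPEX run followed by DFT;
    returns a dummy (hardness, instability) pair for the system's best phase."""
    rng = random.Random(hash(system))
    return (rng.uniform(0.0, 40.0), rng.uniform(0.0, 0.5))

def mendelevian_search(all_systems, n_gen=20, pop_size=30,
                       parent_frac=0.6, random_frac=0.2):
    population = random.sample(all_systems, pop_size)    # random first generation
    seeds, history = {}, {}                              # inherited structures, results
    for gen in range(n_gen):
        # Each chemical system is optimized by its own (inner) evolutionary run.
        for system in population:
            history[system] = inner_evolutionary_run(system, seeds.get(system, []))
        # Joint evaluation; here systems are simply ranked by the hardness of
        # their best low-instability phase (the real method uses Pareto ranks).
        scored = sorted(population,
                        key=lambda s: (history[s][1] > 0.2, -history[s][0]))
        parents = scored[: max(2, int(parent_frac * len(scored)))]
        # Children produced by chemical heredity/mutation (acting on chi and R)
        # are imitated here by recombining the parents' elements.
        children = []
        while len(children) < int((1 - random_frac) * pop_size):
            (a1, b1), (a2, b2) = random.sample(parents, 2)
            child = tuple(sorted(random.sample([a1, b1, a2, b2], 2)))
            if child[0] != child[1]:
                children.append(child)
                seeds[child] = []   # would carry over the parents' best structures
        while len(children) < pop_size:                   # keep chemical diversity
            children.append(random.choice(all_systems))
        population = children
    return sorted(history.items(), key=lambda kv: -kv[1][0])

# Toy chemical space: binary systems built from a handful of elements.
elements = ["B", "C", "N", "Si", "V", "Mo", "W"]
systems = [tuple(sorted((a, b))) for i, a in enumerate(elements)
           for b in elements[i + 1:]]
print(mendelevian_search(systems, n_gen=5, pop_size=6)[:3])
```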
Defining fitness: multi-objective (Pareto) optimization
Many scientific and engineering problems involve optimization of multiple conflicting objectives, for example, predicting novel materials that improve upon all critical properties of the known ones. The multi-objective evolutionary algorithm (MOEA) enables searching simultaneously for materials with multiple optimal properties, such as the enthalpy, hardness, density, dielectric permittivity, magnetization, etc. Here we performed searches optimizing simultaneously (1) stability, measured as the distance above the convex hull (chances of a compound to be synthesizable are higher if the compound is stable or low-energy metastable, i.e., is on the convex hull or close to it), and (2) hardness, computed using the Lyakhov–Oganov model71.
Hardness is a complicated property of materials which cannot be evaluated directly and rigorously from the crystal structure, because it usually includes many nonlinear and mesoscopic effects. However, there are a number of empirical models that make it possible to estimate hardness from atomic-scale properties. The Chen–Niu empirical model72 is based on the relation between the elastic moduli and hardness. Although this model is reliable, calculating the elastic constants of materials on a large scale is computationally expensive. A similar model based on the elastic moduli was recently proposed by Mazhnik and Oganov73; unlike Chen's model, it does not overestimate the hardness of materials with low or negative Poisson's ratio, while for other materials it gives similar results. The Lyakhov–Oganov model71, which computes the hardness from bond hardnesses, is more convenient for high-throughput searches: it is numerically stable, usually reliable, and can be used in calculations without significant cost, taking the crystal structure and chemical composition as input. For a better understanding of the reliability of the mentioned models, a comparison of the computed values and experimental results for the hardness of various materials is presented in ref. 65.
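For concreteness, the Chen–Niu estimate (used in this work for the final hardness values, while the searches themselves used the Lyakhov–Oganov model) can be sketched as below, assuming the commonly quoted form Hv = 2(k^2 G)^0.585 − 3 with Pugh's ratio k = G/B; the moduli in the example are typical literature values for diamond and are only illustrative.

```python
def vickers_hardness_chen(G, B):
    """Chen-Niu estimate of Vickers hardness (GPa) from the polycrystalline
    shear modulus G and bulk modulus B (GPa), using Pugh's ratio k = G/B.
    Commonly quoted form: Hv = 2 * (k**2 * G)**0.585 - 3."""
    k = G / B
    return 2.0 * (k ** 2 * G) ** 0.585 - 3.0

# Illustrative check with diamond-like moduli (G ~ 535 GPa, B ~ 443 GPa):
print(round(vickers_hardness_chen(G=535.0, B=443.0), 1))   # about 95 GPa
```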
The result of the multi-objective optimization is, in general, not a single material, but a set of materials with a trade-off between their properties, and these optimal materials form the so-called first Pareto front. Similarly, 2nd, 3rd, … nth Pareto fronts can be defined (Fig. 4). In our method, the Pareto rank18 is used as a fitness.
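A minimal sketch of such Pareto ranking for the two objectives used here (hardness to be maximized, energy above the convex hull to be minimized) might look as follows; the data are toy values and the function is not the MendS implementation.

```python
import numpy as np

def pareto_ranks(hardness, instability):
    """Rank 1 = non-dominated set (1st Pareto front), rank 2 = non-dominated
    after removing the 1st front, and so on. A candidate dominates another if
    it is at least as hard AND at least as stable, and strictly better in at
    least one of the two objectives."""
    h = np.asarray(hardness, float)
    d = np.asarray(instability, float)       # energy above the convex hull
    ranks = np.zeros(len(h), int)
    remaining, front = set(range(len(h))), 1
    while remaining:
        current = {i for i in remaining
                   if not any((h[j] >= h[i]) and (d[j] <= d[i]) and
                              ((h[j] > h[i]) or (d[j] < d[i]))
                              for j in remaining if j != i)}
        for i in current:
            ranks[i] = front
        remaining -= current
        front += 1
    return ranks

# Toy data: (hardness in GPa, eV/atom above the hull).
print(pareto_ranks([90, 35, 30, 20, 45], [0.15, 0.00, 0.02, 0.00, 0.30]))
# -> [1 1 2 2 2]: the very hard metastable phase and the hard stable phase
#    share the first front.
```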
Variation operators acting in the chemical space are of central importance for an efficient sampling of that space using the previously sampled compositions and structures. These operators ensure that different populations not only compete, but also exchange information, i.e., learn from each other. An efficient algorithm could be constructed where the chemical space is defined by just one number for each element—the MN (or chemical scale); we use this for plotting the Pettifor maps, but within the algorithm itself we resort to an even better option where each element is described by two numbers—electronegativity χ and atomic radius R, rescaled to be between 0 and 1—and it is in this space that the variation operators act. Three variation operators are defined in the chemical space: chemical heredity, reactive heredity, and chemical mutation.
Chemical heredity replaces elements in parent systems with new elements such that their electronegativities and atomic radii lie in between those of their parents (Fig. 3a). In doing so, we explore the regions of the chemical space between the parents:
$$AB + CD \to XY,$$
where A, B, C, D, X, and Y are different elements; X lies between A and C or between A and D (the pair is chosen randomly), and Y lies between the other two elements (B and D, or B and C, respectively).
Reactive heredity creates offspring by taking combinations of the elements from parents. For example, if the parents are A–B and C–D, their child is one of the A–C, A–D, B–C, and B–D systems.
Chemical mutation randomly chooses one of the elements of a parent and substitutes it with an element in its vicinity in the space of χ and R (Fig. 3b).
In both chemical mutation and chemical heredity, all elements are assigned the probability
$$P_i = \frac{e^{-\alpha x_i}}{\sum_j e^{-\alpha x_j}},\quad i = 1, 2, \ldots,$$
to be selected, where xi is the distance of element i from the parent element (in the case of chemical heredity, this formula is used to give a higher weight to the fitter parent, shown by a dark green point in Fig. 3a), and α is a constant (α = 1.5 is used here). The result of applying these chemical variation operators is shown in Fig. 6: the promising regions of the chemical space are sampled more thoroughly at the expense of the unpromising regions. When a new system is produced from parent system(s), it inherits from them a set of optimal crystal structures, which are added to the first generation of its own evolutionary run, greatly enhancing the learning power of the method.
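The sketch below illustrates how the per-element step of these operators and the selection probability above can be realized in the rescaled (χ, R) plane. The descriptor values, the fallback behavior, and the random-number handling are assumptions for illustration, not the MendS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 1.5

# Illustrative rescaled (chi, R) descriptors in [0, 1]; the real method rescales
# the Pauling electronegativity and the simple-cubic atomic radius of each element.
descriptors = {
    "B":  (0.45, 0.20), "C":  (0.60, 0.15), "N":  (0.75, 0.10),
    "Si": (0.40, 0.45), "P":  (0.50, 0.40), "S":  (0.58, 0.38),
    "V":  (0.30, 0.55), "Mo": (0.35, 0.60), "W":  (0.38, 0.62),
}
names = list(descriptors)
XY = np.array([descriptors[e] for e in names])

def selection_probabilities(anchor):
    """P_i = exp(-alpha*x_i) / sum_j exp(-alpha*x_j), where x_i is the distance
    of element i from the anchor point in the (chi, R) plane."""
    dist = np.linalg.norm(XY - np.asarray(anchor), axis=1)
    w = np.exp(-ALPHA * dist)
    return w / w.sum()

def chemical_mutation(parent_element):
    """Replace an element with one that is (probabilistically) nearby."""
    return rng.choice(names, p=selection_probabilities(descriptors[parent_element]))

def chemical_heredity(fitter_parent, other_parent):
    """Pick a child element whose chi and R lie between those of two parent
    elements, weighting the choice toward the fitter parent."""
    a, b = np.array(descriptors[fitter_parent]), np.array(descriptors[other_parent])
    lo, hi = np.minimum(a, b), np.maximum(a, b)
    between = [e for e in names
               if np.all(lo <= descriptors[e]) and np.all(descriptors[e] <= hi)]
    if not between:                                  # fall back to a mutation
        return chemical_mutation(fitter_parent)
    p = selection_probabilities(a)
    p = np.array([p[names.index(e)] for e in between])
    return rng.choice(between, p=p / p.sum())

print(chemical_mutation("C"), chemical_heredity("B", "Si"))
```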
After finishing the coevolutionary simulation, we took the most promising systems identified in it and performed longer evolutionary runs for each of them, calculating the final hardness using the Chen–Niu model72 and the fracture toughness using the Niu–Niu–Oganov model74.
The raw data required to reproduce most of the findings are available to download from [https://data.mendeley.com/datasets/jbp7rs29cc/draft?a=adad25b3-f101-4cfe-864f-6979ab6800f7]. The raw data required to reproduce the results for the Mn-H system cannot be shared at this time because they form a part of an ongoing study.
At the moment, the MendS code is not available for public use. The USPEX team will announce the availability of the code as soon as it is released.
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Oganov, A. R., Saleh, G. & Kvashnin, A. G. Computational Materials Discovery. R. Soc. Chem. https://doi.org/10.1039/9781788010122 (2018).
Oganov, A. R., Pickard, C. J., Zhu, Q. & Needs, R. J. Structure prediction drives materials discovery. Nat. Rev. Mater. 4, 331–348 (2019).
Curtarolo, S., Morgan, D., Persson, K., Rodgers, J. & Ceder, G. Predicting crystal structures with data mining of quantum calculations. Phys. Rev. Lett. 91, 135503 (2003).
Martoňák, R., Laio, A. & Parrinello, M. Predicting crystal structures: the Parrinello-Rahman method revisited. Phys. Rev. Lett. 90, 075503 (2003).
Oganov, A. R. & Glass, C. W. Crystal structure prediction using ab initio evolutionary techniques: principles and applications. J. Chem. Phys. 124, 244704 (2006).
Oganov, A. R., Lyakhov, A. O. & Valle, M. How evolutionary crystal structure prediction works—and why. Acc. Chem. Res. 44, 227–237 (2011).
Lyakhov, A. O., Oganov, A. R., Stokes, H. T. & Zhu, Q. New developments in evolutionary structure prediction algorithm USPEX. Comput. Phys. Commun. 184, 1172–1182 (2013).
Villars, P. & Iwata, S. Pauling File verifies/reveals 12 principles in materials science supporting four cornerstones given by Nature. Chem. Metals Alloys 6, 81–108 (2013).
Zhang, W. et al. Unexpected stable stoichiometries of sodium chlorides. Science 342, 1502–1505 (2013).
Zhu, Q., Oganov, A. R. & Lyakhov, A. O. Novel stable compounds in the Mg–O system under high pressure. Phys. Chem. Chem. Phys. 15, 7696 (2013).
Zhu, Q., Oganov, A. R., Salvadó, M. A., Pertierra, P. & Lyakhov, A. O. Denser than diamond: Ab initio search for superdense carbon allotropes. Phys. Rev. B 83, 193410 (2011).
Pettifor, D. G. A chemical scale for crystal-structure maps. Solid State Commun. 51, 31–34 (1984).
Pettifor, D. G. The structures of binary compounds. I. Phenomenological structure maps. J. Phys. C Solid State Phys. 19, 285–313 (1986).
Goldschmidt, V. M. Crystal structure and chemical constitution. Trans. Faraday Soc. 25, 253 (1929).
Ringwood, A. E. The principles governing trace element distribution during magmatic crystallization Part I: the influence of electronegativity. Geochim. Cosmochim. Acta 7, 189–202 (1955).
Nagle, J. K. Atomic polarizability and electronegativity. J. Am. Chem. Soc. 112, 4741–4747 (1990).
Villars, P., Daams, J., Shikata, Y., Rajan, K. & Iwata, S. A new approach to describe elemental-property parameters. Chem. Metals Alloys 1, 1–23 (2008).
Allahyari, Z. & Oganov, A. R. Multi-objective optimization as a tool for material design. in Handbook of Materials Modeling 1–15, https://doi.org/10.1007/978-3-319-50257-1_71-1 (Springer International Publishing, 2019).
Haines, J., Léger, J. & Bocquillon, G. Synthesis and design of superhard materials. Annu. Rev. Mater. Res. 31, 1–23 (2001).
Liu, A. Y. & Cohen, M. L. Prediction of new low compressibility solids. Science 245, 841–843 (1989).
Teter, D. M. & Hemley, R. J. Low-compressibility carbon nitrides. Science 271, 53–55 (1996).
He, C. et al. Z-BN: a novel superhard boron nitride phase. Phys. Chem. Chem. Phys. 14, 10967 (2012).
Li, Y., Hao, J., Liu, H., Lu, S. & Tse, J. S. High-energy density and superhard nitrogen-rich B-N compounds. Phys. Rev. Lett. 115, 105502 (2015).
Sasaki, T., Akaishi, M., Yamaoka, S., Fujiki, Y. & Oikawa, T. Simultaneous crystallization of diamond and cubic boron nitride from the graphite relative boron carbide nitride (BC2N) under high pressure/high temperature conditions. Chem. Mater. 5, 695–699 (1993).
Hubert, H. et al. High-pressure, high-temperature synthesis and characterization of boron suboxide (B6O). https://doi.org/10.1021/CM970433+ (1998).
Chung, H.-Y. et al. Synthesis of ultra-incompressible superhard rhenium diboride at ambient pressure. Science 316, 436–439 (2007).
Latini, A. et al. Superhard rhenium diboride films: preparation and characterization. Chem. Mater. 20, 4507–4511 (2008).
Gu, Q., Krauss, G. & Steurer, W. Transition metal borides: superhard versus ultra-incompressible. Adv. Mater. 20, 3620–3626 (2008).
Gao, F. Theoretical model of intrinsic hardness. Phys. Rev. B 73, 132104 (2006).
Gao, F. et al. Hardness of covalent crystals. Phys. Rev. Lett. 91, 015502 (2003).
Šimůnek, A. & Vackář, J. Hardness of covalent and ionic crystals: first-principle calculations. Phys. Rev. Lett. 96, 085501 (2006).
Sung, C.-M. & Sung, M. Carbon nitride and other speculative superhard materials. Mater. Chem. Phys. 43, 1–18 (1996).
Leger, J. M., Haines, J. & Blanzat, B. Materials potentially harder than diamond: quenchable high-pressure phases of transition metal dioxides. J. Mater. Sci. Lett. 13, 1688–1690 (1994).
Haines, J. & Léger, J. M. Phase transitions in ruthenium dioxide up to 40 GPa: mechanism for the rutile-to-fluorite phase transformation and a model for the high-pressure behavior of stishovite SiO2. Phys. Rev. B 48, 13344–13350 (1993).
Lundin, U. et al. Transition-metal dioxides with a bulk modulus comparable to diamond. Phys. Rev. B 57, 4979–4982 (1998).
Rau, J. V. & Latini, A. New hard and superhard materials: RhB1.1 and IrB1.35. Chem. Mater. 21, 1407–1409 (2009).
Chung, H.-Y., Weinberger, M. B., Yang, J.-M., Tolbert, S. H. & Kaner, R. B. Correlation between hardness and elastic moduli of the ultraincompressible transition metal diborides RuB2, OsB2, and ReB2. Appl. Phys. Lett. 92, 261904 (2008).
Cumberland, R. W. et al. Osmium diboride, an ultra-incompressible, hard material. J. Am. Chem. Soc. 127, 7264–7265 (2005).
Hebbache, M., Stuparević, L. & Živković, D. A new superhard material: osmium diboride OsB2. Solid State Commun. 139, 227–231 (2006).
Kvashnin, A. G., Oganov, A. R., Samtsevich, A. I. & Allahyari, Z. Computational search for novel hard chromium-based materials. J. Phys. Chem. Lett. 8, 755–764 (2017).
Kvashnin, A. G. et al. New tungsten borides, their stability and outstanding mechanical properties. J. Phys. Chem. Lett. 9, 3470–3477 (2018).
Rybkovskiy, D. V., Kvashnin, A. G., Kvashnina, Y. A. & Oganov, A. R. Structure, stability, and mechanical properties of boron-rich Mo-B phases: a computational study. J. Phys. Chem. Lett. 11, 2393–2401 (2020).
Zhang, M., Wang, H., Wang, H., Cui, T. & Ma, Y. Structural modifications and mechanical properties of molybdenum borides from first principles. J. Phys. Chem. C 114, 6722–6725 (2010).
Liang, Y., Yuan, X., Fu, Z., Li, Y. & Zhong, Z. An unusual variation of stability and hardness in molybdenum borides. Appl. Phys. Lett. 101, 1–6 (2012).
Xu, C. et al. A first-principles investigation of a new hard multi-layered MnB2 structure. RSC Adv. 7, 10559–10563 (2017).
Wu, J. H. & Yang, G. Phase stability and physical properties of technetium borides: a first-principles study. Comput. Mater. Sci. 82, 86–91 (2014).
Gou, Y., Fu, Z., Liang, Y., Zhong, Z. & Wang, S. Electronic structures and mechanical properties of iron borides from first principles. Solid State Commun. 187, 28–32 (2014).
Wu, L. et al. Unraveling stable vanadium tetraboride and triboride by first-principles computations. J. Phys. Chem. C 119, 21649–21657 (2015).
Okada, S., Atoda, T., Higashi, I. & Takahashi, Y. Preparation of single crystals of MoB2 by the aluminium-flux technique and some of their properties. J. Mater. Sci. 22, 2993–2999 (1987).
Niu, H. et al. Variable-composition structural optimization and experimental verification of MnB3 and MnB4. Phys. Chem. Chem. Phys. 16, 15866–15873 (2014).
Gou, H. et al. Peierls distortion, magnetism, and high hardness of manganese tetraboride. Phys. Rev. B 89, 064108 (2014).
He, C. & Zhong, J. X. Structures, stability, mechanical and electronic properties of α-boron and α*-boron. AIP Adv. 3, 042138 (2013).
Veprek, S., Zhang, R. F. & Argon, A. S. Mechanical properties and hardness of boron and boron-rich solids. J. Superhard Mater. 33, 409–420 (2011).
Zhang, M. et al. Hardness of FeB4: density functional theory investigation. J. Chem. Phys. 140, 174505 (2014).
Zhang, G.-T., Bai, T.-T., Yan, H.-Y. & Zhao, Y.-R. New crystal structure and physical properties of TcB from first-principles calculations. Chin. Phys. B 24, 106104 (2015).
Miao, X., Xing, W., Meng, F. & Yu, R. Prediction on technetium triboride from first-principles calculations. Solid State Commun. 252, 40–45 (2017).
Wang, P. et al. Vanadium diboride (VB2) synthesized at high pressure: elastic, mechanical, electronic, and magnetic properties and thermal stability. Inorg. Chem. 57, 1096–1105 (2018).
Pan, Y., Lin, Y. H., Guo, J. M. & Wen, M. Correlation between hardness and bond orientation of vanadium borides. RSC Adv. 4, 47377–47382 (2014).
Gou, H. et al. Discovery of a superhard iron tetraboride superconductor. Phys. Rev. Lett. 111, 1–5 (2013).
Ying, C., Liu, T., Lin, L., Zhao, E. & Hou, Q. New predicted ground state and high pressure phases of TcB3 and TcB4: first-principles. Comput. Mater. Sci. 144, 154–160 (2018).
Harran, I., Wang, H., Chen, Y., Jia, M. & Wu, N. Exploring high-pressure FeB2: structural and electronic properties predictions. J. Alloy. Compd. 678, 109–112 (2016).
Li, B., Sun, H. & Chen, C. First-principles calculation of the indentation strength of FeB4. Phys. Rev. B 90, 014106 (2014).
Kolmogorov, A. N. et al. New superconducting and semiconducting Fe-B compounds predicted with an ab initio evolutionary search. Phys. Rev. Lett. 105, 217003 (2010).
Woo, K., Lee, K. & Kovnir, K. BP: synthesis and properties of boron phosphide. Mater. Res. Express 3, 074003 (2016).
Kvashnin, A. G., Allahyari, Z. & Oganov, A. R. Computational discovery of hard and superhard materials. J. Appl. Phys. 126, 040901 (2019).
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996).
Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758–1775 (1999).
Oganov, A. R., Ma, Y., Lyakhov, A. O., Valle, M. & Gatti, C. Evolutionary crystal structure prediction as a method for the discovery of minerals and materials. Rev. Mineral. Geochem. 71, 271–298 (2010).
Valle, M. & Oganov, A. R. Crystal fingerprint space—a novel paradigm for studying crystal-structure sets. Acta Crystallogr. Sect. A Found. Crystallogr. 66, 507–517 (2010).
Lyakhov, A. O. & Oganov, A. R. Evolutionary search for superhard materials: methodology and applications to forms of carbon and TiO2. Phys. Rev. B 84, 092103 (2011).
Chen, X.-Q., Niu, H., Li, D. & Li, Y. Modeling hardness of polycrystalline materials and bulk metallic glasses. Intermetallics 19, 1275–1281 (2011).
Mazhnik, E. & Oganov, A. R. A model of hardness and fracture toughness of solids. J. Appl. Phys. 126, 125109 (2019).
Niu, H., Niu, S. & Oganov, A. R. Simple and accurate model of fracture toughness of solids. J. Appl. Phys. 125, 065105 (2019).
We thank the Russian Science Foundation (grant 19-72-30043) for financial support. All the calculations were done using the supercomputer Rurik at the Moscow Institute of Physics and Technology.
Skolkovo Institute of Science and Technology, Skolkovo Innovation Center, 3 Nobel Street, Moscow, 143026, Russia
Zahed Allahyari & Artem R. Oganov
Moscow Institute of Physics and Technology, 9 Institutsky Lane, Dolgoprudny, 141700, Russia
International Center for Materials Discovery, Northwestern Polytechnical University, Xi'an, 710072, China
Artem R. Oganov
Zahed Allahyari
A.R.O. created the ideas behind the MendS method and the new Mendeleev numbers. Z.A. implemented the MendS algorithm and Pareto optimization, and performed the calculations. Z.A. and A.R.O. wrote the paper.
Correspondence to Zahed Allahyari or Artem R. Oganov.
The authors declare no competing interests.
Allahyari, Z., Oganov, A.R. Coevolutionary search for optimal materials in the space of all possible compounds. npj Comput Mater 6, 55 (2020). https://doi.org/10.1038/s41524-020-0322-9
Journal of NeuroEngineering and Rehabilitation
Improving internal model strength and performance of prosthetic hands using augmented feedback
Ahmed W. Shehata ORCID: orcid.org/0000-0001-8442-9901 1,2,3,
Leonard F. Engels4,
Marco Controzzi4,
Christian Cipriani4,
Erik J. Scheme1,2 &
Jonathon W. Sensinger1,2
Journal of NeuroEngineering and Rehabilitation volume 15, Article number: 70 (2018) Cite this article
The loss of an arm presents a substantial challenge for upper limb amputees when performing activities of daily living. Myoelectric prosthetic devices partially replace lost hand functions; however, the lack of sensory feedback and of a strong understanding of the myoelectric control system prevents prosthesis users from interacting with their environment effectively. Although most research in augmented sensory feedback has focused on real-time regulation, sensory feedback is also essential for enabling the development and correction of internal models, which in turn are used for planning movements and reacting to control variability faster than otherwise possible in the presence of sensory delays.
Our recent work has demonstrated that audio-augmented feedback can improve both performance and internal model strength for an abstract target acquisition task. Here we use this concept in controlling a robotic hand, which has inherent dynamics and variability, and apply it to a more functional grasp-and-lift task. We assessed internal model strength using psychophysical tests and used an instrumented Virtual Egg to assess performance.
Results obtained from 14 able-bodied subjects show that a classifier-based controller augmented with audio feedback enabled a stronger internal model (p = 0.018) and better performance (p = 0.028) than a controller without this feedback.
We extended our previous work and accomplished the first steps on a path towards bridging the gap between research and clinical usability of a hand prosthesis. The main goal was to assess whether the ability to decouple internal model strength and motion variability using the continuous audio-augmented feedback extended to real-world use, where the inherent mechanical variability and dynamics in the mechanisms may contribute to a more complicated interplay between internal model formation and motion variability. We concluded that benefits of using audio-augmented feedback for improving internal model strength of myoelectric controllers extend beyond a virtual target acquisition task to include control of a prosthetic hand.
The seemingly simple and seamless way adult humans use their hands to grasp and manipulate objects is in fact the result of years of training during childhood, and of a sophisticated blend of feedforward and feedback control mechanisms [1]. The function of such an elegant system may be corrupted when neurological injuries interrupt the connections between the central nervous system (CNS) and the periphery, as in the case of upper limb amputation. In this case, myoelectric prostheses provide a solution to restore hand function by partially restoring the feedforward control mechanism [2]. This mechanism is influenced by two key factors. The first factor is the way the user intentions are decoded, which affects the robustness of control signals driving the prosthesis' motors. The second factor is the human understanding of the system, which is modeled by the CNS and is known as the internal model [3]. The ability to accurately estimate the current state of the musculoskeletal system and properly integrate information from various sensory feedback forms to predict the future state is determined by the strength of the internal model developed [4]. For prosthesis users, this model is mismatched since their prosthetic device properties and control are very different from that of a normal limb, and therefore the need to develop a new internal model or adjust the current one is presumed.
For a representative motor task, such as grasp-and-lift, the brain refines and updates the internal model using multi-modal sensory feedback (tactile, visual, and auditory) during and after the movement [5]. Unlike unimpaired individuals, myoelectric prosthesis users have to rely more on visual feedback, which has been found to negatively affect performance, as users spend more time monitoring their prosthesis or the objects being manipulated [6]. This increased dependency on visual feedback is due to the lack of adequate sensory feedback from the prosthetic devices [7]. This deficiency contributes to an inability of users to fully adjust their internal model and is known to affect overall performance [8].
To address this deficiency, researchers have investigated ways of providing augmented sensory information using invasive and non-invasive methods [9, 10]. Several of the invasive methods show promise, including Targeted Sensory Reinnervation and stimulation of sensory peripheral nerves [11,12,13,14,15]. However, many prosthesis users prefer non-invasive methods that do not require surgical intervention [16, 17].
Researchers have correspondingly evaluated non-invasive sensory substitution methods to provide sensory information either through an alternate sensory channel or through the natural channel but in a different modality [9]. Vibro-tactile [18, 19], mechano-tactile [20], electrotactile [21,22,23], skin stretch [24], and auditory cues [25] are just some of the techniques that have been developed and assessed to provide prosthesis users with supplementary feedback. Although some studies have shown that augmented sensory feedback had little to no effect on performance [26], others have demonstrated the efficacy of augmented sensory feedback in enhancing motor control even for the same experimental procedure [9]. This conflict may arise in part because it is unclear how this augmented feedback affects internal model development and, ultimately, the performance.
One hypothesis is that feedback improves performance through the integration of feedback in a real-time manner during a movement, known as real-time regulation [27,28,29]. Many studies showed promising improvements in performance [30, 31], sense of embodiment [32], and prosthesis incorporation [33] when using feedback for real-time regulation; however, limitations of the feedback methods used, such as their resolution and latency, introduce a new challenge [34]. To overcome this challenge, Dosen et al. [35] proposed providing electromyography (EMG) biofeedback to the user through visual feedback. Their results showed that users were able to exploit the augmented visual biofeedback to improve their performance in a grasping task. In a follow-up study, Schweisfurth and colleagues [36] implemented the EMG biofeedback using a multichannel electrotactile interface to transmit discrete levels of myoelectric signals to users. They compared this feedback approach to classic force feedback and found that the electrotactile biofeedback allowed for more predictable control and improved performance. However, it is unclear whether this improvement is driven by the use of this feedback for real-time regulation or by the adjustments made to the internal model.
Our group has recently suggested a framework that demonstrates that the strength of the internal model is indeed affected by feedback [37]. In the field of myoelectric prosthesis control, we used this framework to assess the strength of the internal model developed for able-bodied subjects when using different myoelectric control strategies. A series of tests were conducted to extract parameters that are used in this framework to compute uncertainties in the developed internal model. One test quantified the ability of subjects to use feedback to adapt and modify their control signals. Other tests quantified variability in control signals for a given controller and variability in the provided feedback. These parameters were used in this framework to determine a weighted factor of the feedback that is assumed to be combined with the internal model based on the uncertainty of the feedback.
In a previous study [38], we noted that various types of control strategies, in the very act of filtering biological signals (i.e., movement classification and activation thresholds), provide inherently different levels of visual biofeedback to the user. For instance, classification-based control provides no visual feedback about any class except the one it deems to be the correct class, thus denying the user of any knowledge about partial activations of other classes [39]. Whereas most research has focused on the impact of those filters on the control (motor) performance of the prosthesis (see reviews [40, 41]), we demonstrated that it also affects the ability of the person to form an internal model. In that study, we assessed the internal model strength and performance when using two common myoelectric control strategies [39, 42] that differed in the inherent feedback provided to the user, namely: (a) regression-based control or (b) classification-based control.
For a two DOF task, a regression-based control provides users with proportional feedback about activations of both DOF while a classification-based control provides users with feedback about only one dominant DOF at a time. We showed that the inclusion of information about the smaller modulations in the secondary DOF in regression controllers (unfiltered control signals) provided valuable and rich information to improve the internal model, even though it resulted in worse short-term performance as measured using task accuracy and path efficiency. In contrast, the inherent filter in classification-based control, which limited the control signal variability and thus improved the smoothness of movements, also prevented the formation of a strong internal model. In other words, continuous feedback-rich control strategies may be used to improve internal model strength, but classification-based controllers enable better immediate performance. Intrigued by this outcome and attempting to incorporate the benefits of both control strategies, in our next study we combined a classification-based control with a regression-based audio-augmented sensory feedback in a virtual target acquisition task [43]. Our outcomes demonstrated that this combination enabled both the development of a stronger internal model than the regression-based controller and better performance than the classification-based controller.
In the present study, we extended our previous work by investigating the benefits of using audio-augmented feedback when controlling a prosthetic hand. The main goal was to assess whether the ability to decouple internal model strength and motion variability, using the continuous audio-augmented feedback, extended to real-world use, where the inherent mechanical variability and dynamics in the mechanisms as well as the user-socket interfaces may contribute to a more complicated interplay between internal model formation and motion variability. To accomplish this goal, we compared internal model strength and performance of a classifier-based myoelectric controller with and without audio-augmented feedback during a grasp-and-lift task using a multi-degree of freedom (DOF) research prosthetic hand [44]. We assessed the internal model strength using psychophysical tests and used an instrumented Virtual Egg to assess the performance [38, 45]. Our results from 14 able-bodied subjects show that audio-augmented feedback may indeed be used to improve internal model strength and performance of a myoelectric prosthesis. These improvements may increase reliability and promote acceptance of prosthetic devices by powered prosthesis users.
Classifier-based myoelectric control is considered as one of the more advanced strategies of myocontrol [42] and may be implemented using various pattern recognition algorithms [46, 47]. In recent studies, we used a Support Vector Regression (SVR) algorithm, which has been proven to enable better performance than other algorithms, to implement a classifier-based myoelectric control strategy [38, 43]. This algorithm provided regression-based control signals that simultaneously activated more than 1 DOF at a time, which were subsequently gated to only allow the activation of 1 DOF at a time. In this work, we used these same gated, i.e., classifier-based control, signals to activate either hand open/close or thumb adduction/abduction of a prosthetic hand. Building on the classifier-based control, we implemented a novel control strategy, namely Audio-augmented Feedback control, which is able to effectively decouple internal model formation from control variability. We relayed information in the regression-based control signals through continuous auditory cues to augment the feedback from the classifier-based myoelectric control (Fig. 1). The amplitude of the audio feedback was directly proportional to the amplitude of the control signals. For each DOF, two distinct frequencies were assigned: open/close hand had 500/400 Hz assigned and thumb adduction/abduction had 900/800 Hz assigned.
Closing the control loop using audio to augment the visual feedback. Dark blue lines represent the classifier-based control signals, red lines represent the regression-based control signals, and purple lines represent the audio feedback
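A minimal sketch of this audio mapping is given below; the carrier frequencies follow the text, while the sample rate, buffer length, sign conventions (which direction of each DOF is taken as positive), and normalization are assumptions rather than the authors' implementation.

```python
import numpy as np

FS = 44100                        # audio sample rate in Hz (assumed)
FREQ = {("hand", +1): 500.0,      # hand open  (frequencies in Hz, from the text)
        ("hand", -1): 400.0,      # hand close
        ("thumb", +1): 900.0,     # thumb adduction
        ("thumb", -1): 800.0}     # thumb abduction

def feedback_buffer(hand_signal, thumb_signal, duration=0.05):
    """One short audio buffer whose tone amplitudes are proportional to the
    regression-based control signals (each assumed to lie in [-1, 1])."""
    t = np.arange(int(FS * duration)) / FS
    out = np.zeros_like(t)
    for dof, s in (("hand", hand_signal), ("thumb", thumb_signal)):
        if abs(s) < 1e-3:
            continue
        f = FREQ[(dof, int(np.sign(s)))]
        out += abs(s) * np.sin(2 * np.pi * f * t)      # amplitude proportional to |signal|
    return 0.5 * out / max(1.0, np.max(np.abs(out)))   # leave headroom, avoid clipping

# Example: a strong hand-close command with a small thumb co-activation.
buf = feedback_buffer(hand_signal=-0.8, thumb_signal=-0.2)
```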
Fourteen healthy subjects (8 male and 6 female; age 25 ± 4.5 years, mean ± SD) participated in this study. All participants had either normal or corrected-to-normal vision, were right-handed, and none had prior experience with myoelectric pattern recognition control. Written informed consent according to the University of New Brunswick Research and Ethics Board and to the Scuola Superiore Sant'Anna Ethical Committee was obtained from subjects before conducting the experiment (UNB REB 2014–019 and SSSA 02/2017).
The experimental platform consisted of a robotic hand, an array of myoelectric sensors, a PC implementing the control strategy, headphones that conveyed audio feedback, and a test object instrumented with force sensors (Fig. 2). The robotic hand was a right-handed version of the IH2 Azzurra Hand (Prensilia, IT) [44]. It consists of four fingers and a thumb actuated by five motors. In the present work, movements were limited to allow only flexion/extension of the thumb-index-middle digits and the abduction/adduction of the thumb. The hand included encoders on the motors, which were under position control based on commands sent over a serial bus from the PC. Subjects controlled the robotic hand using isometric muscle contractions sensed by an array of eight low power multi-channel operation electrodes (30 × 20 × 10 mm/electrode) placed around their forearm [48]. Seven subjects tested the classifier-based control without augmented feedback (NF) and then retested with the audio-augmented feedback (AF). The remaining subjects tested the classifier-based control without augmented feedback (NF) twice to test for learning effects.
Subject controlling a prosthetic hand to grasp-and-lift an instrumented virtual egg without breaking it. The prosthetic hand is controlled using the subject's myoelectric signals sensed by an electrode array placed on their forearm
The test object was an instrumented Virtual Egg (iVE). The iVE is a rigid plastic test object (57 × 57 × 57 mm³; approximately 180 g) equipped with two strain gauge-based force sensors (Strain Measurement Devices, UK, model S215–53.3 N; each located at one of two parallel grasping sides), able to measure the grip force exerted on the object. The iVE was programmed to virtually break whenever the grip force exceeded a preset threshold (approximately 3.1 N); this event was signaled to the subject through a colored light on the iVE [45].
Participants were instructed to repeatedly grip, lift, replace, and release the iVE at a self-selected routine grasping speed. Specifically, their task consisted of (1) moving their right arm to reach the iVE with the robotic hand mounted on a bypass splint (Fig. 1), (2) contracting their own muscles to control the robotic hand so that it grasped the object, (3) lifting the iVE a few centimeters above the table, (4) putting the iVE back on the table, and, finally, (5) releasing the object by opening the hand.
During the experiment, subjects sat comfortably in front of a computer screen and wore a set of 1000 mW headphones (MDRZX100, Sony, JP) with the volume set to a maximum of 52.5 ± 3 dB (they could remove them during scheduled breaks between testing blocks). Subjects used each feedback method to complete a series of test blocks in a specific order after accomplishing a training and familiarization block. Before the start of each test block, subjects were given a two-minute break, in which they could stand up, remove the headset, unstrap the splint, and stretch if needed. The electrode array, however, was not removed for the duration of the experiment.
The training and familiarization block consisted of 40 grasp-and-lift trials. Subjects were given verbal instructions to complete the task without breaking the iVE in less than seven seconds after which a "Time out" text appeared on the computer screen and the artificial hand returned to a predetermined starting pose (Fig. 3). The training and familiarization starting pose was with the hand fully opened and thumb adducted (Fig. 3a). While in the first 25 trials, subjects were shown the feedback when the iVE broke (fragile mode), they were not given this feedback in the last 15 trials (rigid mode). This was done to keep subjects engaged with the task and not lose interest during the training block. Subjects were allowed to proceed to test blocks when they achieved at least 75% successful grasp-and-lift trials of the iVE in the training block.
Hand starting pose. a Starting pose for the training and familiarization, adaptation, and JND blocks. Subjects had to only activate the thumb and fingers flexion to grasp the object carefully without breaking it. b Starting pose for the performance test: fingers and thumb are extended, and the thumb is abducted. Subjects had to adduct the thumb and then close the hand to grasp the object and transfer it from one side of a barrier to the other
The first test block was used to test adaptation to self-generated errors. In this block, subjects were asked to complete 40 grasp-and-lift trials in less than five seconds per trial. The adaptation rate was computed as how much subjects adjusted their grasp trajectory from one trial to the next, based on the error observed between their actual trajectory and the optimal trajectory, i.e., activating only the hand close/open DOF [49].
The second test block was used to measure the subject's perception threshold, i.e., a psychometric measure of the sensory threshold for perception of a stimulus. Subjects performed a series of two lift trials (fragile mode). In one of the two trials, a specific stimulus was added, causing the hand to behave differently. The subjects were then asked to identify the changed trial by pressing the "1" or "2" key (for trial one or two) on a keyboard placed in front of them with their other hand. The added stimulus was a rotation of the control space (in degrees; Fig. 3 in [38]), and its magnitude was adjusted using an adaptive staircase with the target probability set to 0.84 [50, 51]. For instance, if a subject was generating control signals for thumb abduction, a 90-degree rotation of the control signal would switch activations from thumb abduction to hand close. Each trial lasted four seconds, and subjects were encouraged to take breaks between trials whenever they needed. The final noticeable stimulus reached was recorded when the number of reversals for this staircase reached 23 [38]. The starting pose of the prosthetic hand for the first and second blocks was similar to that of the training and familiarization block, where subjects had to activate only the hand close/open DOF to achieve the task efficiently.
The third and last test block was used to measure performance. Subjects were given 20 trials to move the iVE (fragile mode) from one side of a barrier (H: 14.5 cm x W: 25 cm) to the other in less than 10 s per trial, similar to the Box and Blocks test [52]. The starting pose of the hand was adjusted to evaluate the subject's performance for a 2-DOF task in which subjects had to activate the thumb adduction/abduction DOF to lower the thumb and then activate the hand close/open DOF to grasp the iVE properly to lift it to the other side of the barrier (Fig. 3b). Table 1 summarizes the experimental protocol used in this study.
Table 1 Summary of the experimental protocol
Internal model parameters
Similar to our previous research [38, 43], we assessed human understanding of the myoelectric control strategy for a grasp-and-lift task using the following psychometric measures:
Adaptation rate (−β1) is a measure of feedforward modification of the control signal from one trial to the next [37]. For each trial in the adaptation rate test, control signal activations in both DOFs, i.e., flex/extend thumb-index-middle digits and adduct/abduct thumb, were recorded. To capture the subject's feedforward intent, the first 500 ms of the recorded activations for each trial were analyzed. The target control signal was the activation of the closing of the prosthetic hand only (i.e., flex/extend thumb-index-middle digits). Other activations were considered as self-generated errors, which subjects were instructed to minimize. The following equation was used to compute this adaptation rate.
$$\mathrm{error}_{n+1} - \mathrm{error}_n = \beta_1 \times \mathrm{error}_n + \beta_0$$
where error is the angle between the hand-closing activation trajectory (the target) and the actual activation trajectory; n is the trial number; β0 is the linear regression constant; and −β1 is the adaptation rate. A value of unity indicates perfect adaptation, i.e., an internal model that is modified to fully compensate for the observed errors.
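A minimal sketch of this estimate, assuming the per-trial angular errors are already available, is given below; the synthetic data are for illustration only.

```python
import numpy as np

def adaptation_rate(errors):
    """Fit error[n+1] - error[n] = beta1 * error[n] + beta0 over trials and
    return the adaptation rate -beta1 (1.0 = errors fully corrected each trial)."""
    e = np.asarray(errors, float)
    beta1, beta0 = np.polyfit(e[:-1], np.diff(e), 1)
    return -beta1

# Synthetic example: angular errors (deg) that are ~70% corrected per trial.
rng = np.random.default_rng(1)
e = [20.0]
for _ in range(39):
    e.append(0.3 * e[-1] + rng.normal(0, 1.5))
print(round(adaptation_rate(e), 2))   # close to 0.7
```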
Just-noticeable-difference (JND) is a measure of the minimum perceivable stimulus in degrees identified by the subject when using each feedback method [50]. A lower threshold indicated better user ability to perceive small perturbations in the control strategy used. This parameter was extracted from the Perception threshold test block as the final noticeable stimulus when the number of reversals for an adaptive staircase reached 23.
Internal model uncertainty (Pparam) is a measure of the confidence of a user in the internal model they developed for a control strategy with a certain feedback method. This parameter was computed using outcomes from both the first and second test blocks [38].
Completion Rate (CR) is the percentage of the successful transfers of the iVE from one side of the barrier to the other without breaking it (fragile mode). This parameter was extracted from the third test block.
Mean Completion Time (MCT) is defined as the time taken to successfully transfer the iVE from one side of the barrier to the other without breaking it (fragile mode). This parameter was also extracted from the third test block.
Trial submovements (TS) is the number of submovements per trial. This parameter is calculated as the number of zero-crossing pairs of the third derivative of the grasp force profile per trial [53]. The number of submovements served as an indicator of use of feedback for real-time regulation of the grasping force [54,55,56]. The higher this number, the greater the use of feedback in real-time regulation. This parameter was extracted from the adaptation test.
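A minimal sketch of this computation is given below; it assumes a uniformly sampled, already low-pass-filtered grasp-force signal and is not the study's actual analysis code.

import numpy as np

def count_submovements(force, fs):
    """Count submovements in one trial as zero-crossing pairs of the third
    derivative of the grasp-force profile, sampled at fs Hz."""
    third_deriv = np.diff(force, n=3) * fs**3      # finite-difference estimate
    signs = np.sign(third_deriv)
    crossings = int(np.sum(signs[:-1] * signs[1:] < 0))
    return crossings // 2                          # pairs of zero-crossings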
The Statistical Package for the Social Sciences software (SPSS v25.0, IBM, US) was used to run Levene's test on JND, adaptation rate, internal model uncertainty, and performance measure results to investigate homogeneity of variances in the data. If data variances were found to be homogeneous, we ran paired-samples t-tests to assess differences between these outcome measures for the two feedback conditions at a significance criterion of α = 0.05. If data variances were found to be nonhomogeneous, a Wilcoxon signed-rank test was conducted instead. For the group of subjects who tested and retested the NF controller, repeated-measures ANOVA was used to compute the intraclass correlation coefficient (ICC) for internal model parameters and performance parameters, using a two-way mixed-effects model with absolute agreement at a 95% confidence interval, to investigate the effect of prolonged exposure to a control strategy. The confidence interval was calculated using the standard deviation (95% CI = mean ± 1.97 × SD). Unless denoted otherwise, all numbers in the text refer to mean ± SD.
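The decision logic of this analysis can be sketched as follows (illustrative Python using SciPy rather than SPSS; not the actual analysis script):

from scipy import stats

def compare_conditions(nf, af, alpha=0.05):
    """Paired comparison of one outcome measure under the NF and AF
    conditions: Levene's test first, then a paired-samples t-test if
    variances are homogeneous, otherwise a Wilcoxon signed-rank test."""
    _, p_levene = stats.levene(nf, af)
    if p_levene >= alpha:
        stat, p = stats.ttest_rel(nf, af)
        return "paired t-test", stat, p
    stat, p = stats.wilcoxon(nf, af)
    return "Wilcoxon signed-rank", stat, p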
To confirm that the benefits of using audio-augmented feedback for improving internal model strength of myoelectric controllers extend beyond a virtual target acquisition task [43], we assessed the internal model developed when using this audio-augmented controller and the no-augmented feedback controller to control a prosthetic hand for a grasp-and-lift task. In addition, short-term performance when using both controllers was evaluated.
Internal model assessment
Two psychophysical experiments were employed to evaluate parameters that are used to assess internal model strength [38]. The first experiment tested the trial-by-trial adaptation to self-generated errors. The outcome from that test indicated how much the internal model was modified from one trial to the next based on error feedback.
Results from the adaptation test (first test block) revealed a statistically significant difference between subjects using the NF and AF control strategies (paired-samples t-test, t(6) = −4.6, p = 0.004). In particular, the AF control strategy promoted a significantly higher adaptation rate (1.2 ± 0.25) than the NF control strategy (0.75 ± 0.15) (Fig. 4a).
Psychophysical test results. a Adaptation rate results showing the audio-augmented feedback control strategy enabling higher adaptation to self-generated error than the no-augmented feedback control strategy. b Perception threshold test results showing a low JND value when using the audio-augmented controller. c Internal model uncertainty (Pparam) results showing a significant reduction in internal model uncertainty when using the audio-augmented feedback control strategy. Horizontal bars indicate statistically significant differences. NF: No-augmented Feedback. AF: Audio-augmented Feedback
The outcomes from the perception threshold test were consistent with those from the adaptation test. The audio-augmented feedback control strategy enabled a significantly lower perception threshold (44.6 ± 10 degrees) than the NF controller (58.5 ± 12.5 degrees) (paired-samples t-test, t(6) = 3.4, p = 0.014) (Fig. 4b).
The adaptation rate and the JND were used to compute the internal model uncertainty developed for each of the tested feedback conditions. Again, the AF control strategy promoted a lower internal model uncertainty (0.22 ± 0.11) than the NF control strategy (1.8 ± 0.6) (related-samples Wilcoxon signed-rank test, p = 0.018) (Fig. 4c).
Test-retest of NF controller: Results for internal model assessment parameters showed no significant within-subject effect of retesting NF controller with good reliability (ICC > 0.65). Table 2 summarizes the statistical analysis for these results.
Table 2 Summary of test-retest results for the NF controller
All in all, these results align with previous studies [43] and confirm that audio-augmented feedback promotes: (1) a high adaptation rate, (2) a lower sensory perception threshold and, in turn, (3) a strong internal model for a grasp-and-lift task using a prosthetic hand.
The completion rate (in the last test block) was significantly higher when using the AF control strategy (65 ± 12%) than when using the NF control strategy (37.34 ± 19%) (paired-samples t-test, t(6) = −2.87, p = 0.028) (Fig. 5). Notably, the mean completion time did not exhibit a significant difference (MCT for AF = 8.3 ± 0.74 s; MCT for NF = 8.4 ± 0.65 s) (Fig. 6).
Successful transfer rate of the instrumented virtual egg from one side of a barrier to the other without breaking it. Subjects had 1.74 times higher successful transfers when using the audio-augmented feedback control strategy than when using the no-augmented feedback control strategy. NF: No-augmented Feedback. AF: Audio-augmented Feedback
Completion time for successful transfers. Subjects using the no-augmented feedback controller had similar completion time to subjects using the audio-augmented controller. NF: No-augmented Feedback. AF: Audio-augmented Feedback
Test-retest of NF controller: Similar to the internal model assessment parameters results, results for performance parameters showed no significant within-subject effect of retesting the NF controller with very good reliability (ICC > 0.9, CR) and good reliability (ICC = 0.55, MCT) (Table 2).
Submovements analysis was performed on data recorded from only five subjects, as the iVE failed to record data for the other two subjects due to a communication error. When using the NF control strategy, subjects changed their grasping force during the grasp-and-lift task, though not as much as when using the AF control strategy (Fig. 7). Results show that subjects using the AF control strategy had a significantly higher number of submovements (3.94 ± 0.12) than subjects using the NF control strategy (3.26 ± 0.17), as determined by an independent-samples t-test (t(90) = −3.17, p = 0.002) (Fig. 8). These results suggest that audio-augmented feedback supports better short-term performance by enabling the development of a stronger internal model.
Progression of grasp-and-lift trials ranging from the beginning of the task (light gray) to the end of the task (dark gray). Representative data from a single subject during adaptation rate test using (a) the no-augmented feedback control strategy (moderate grasp force changes per trial) and (b) the audio-augmented feedback control strategy (high grasp force changes per trial). The red line in both plots shows the preset breaking force
Submovements computed from the grasp forces of successful trials from the adaptation rate test for a sample of five subjects. NF: No-augmented Feedback. AF: Audio-augmented Feedback
Many studies have focused on improving performance of myoelectric prosthesis control by providing feedback to the user, but only a few have investigated the effect of this feedback on the internal model, which is key to improving long-term performance [57]. Due to an inability to assess internal model strength, this effect remained unquantified. For the first time, we used a recently developed psychophysical framework to assess the strength of the internal model developed when using different myoelectric prosthesis controllers [38]. In earlier work, we demonstrated that audio-augmented feedback improves internal model strength and the performance of myoelectric prosthesis control in a virtual target acquisition task [43]. We argued that these improvements may extend beyond a virtual target acquisition task. In this study, we tested the classifier-based control with and without audio-augmented feedback for a grasp-and-lift task when using a prosthetic hand. Our results confirm the hypothesis that audio-augmented feedback enables the development of a strong internal model and better short-term performance when controlling a prosthetic hand for a grasp-and-lift task.
Even when using different controllers, humans are able to incorporate previous knowledge and experience to accomplish tasks [38]. To minimize such carry-over of learning between conditions, all subjects in this study tested the no-augmented feedback controller first, followed by the audio-augmented feedback controller. It is possible that the reduction in internal model uncertainty for the audio-augmented controller was due, in part, to prolonged exposure to the control strategy and the experiment. This possibility, however, was addressed in this work by asking subjects to test and retest the same control strategy (no-augmented feedback); no improvement in adaptation rate, JND, internal model strength, or performance was found due to repetition of the test. Consequently, we argue that any improvement in those parameters is due to the control strategy used and not to prolonged exposure.
To ensure that the continuous audio feedback was not a distraction to the user and, in turn, did not compromise short-term performance, we assessed short-term performance by computing the completion rate (without breaking the object). Outcomes, in fact, showed significantly better performance when subjects used the AF controller compared to the NF controller, although both controllers had similar completion times. The submovements analysis revealed that subjects adjusted their grasping forces more frequently when using the AF controller than when using the NF controller. This finding suggests that augmented audio feedback is not only used for developing internal models, but that subjects' high confidence in the feedback led them to use it for real-time regulation as well. Hence, regression-based augmented audio feedback improves both short-term performance, through real-time regulation, and long-term performance, through the development of strong internal models.
Although we did not measure the cognitive load of using audio feedback in this work, other researchers have found that audio feedback may be used to alleviate the cognitive burden when combined with visual feedback [25]. Internal model assessment results from this study may be used to further explain how audio feedback reduces the cognitive load. To further support our findings, future work may include utilizing visual attention measures developed in [6] to quantitatively determine the effect of using controllers with and without feedback on visual attention.
Although the results found in this study provide compelling evidence that internal models can indeed be improved using augmented feedback, they must still be confirmed in the target population. Although we tested only able-bodied subjects, we suspect that similar internal model results may be found when testing amputees, since internal model assessment parameters are measures of human behavior and understanding, not physical ability [39]. That said, the performance results found here may scale differently when testing amputees due to differences in prosthesis attachment, i.e., bypass vs. socket, and placement of the surface electrode array. One might argue that the control strategy (model) trained and used by able-bodied subjects in this study will be very different from the one trained and used by an amputee. In fact, the control model trained for every individual and for every session is unique and tuned to that individual, regardless of the chosen location of the electrode array or muscle mass. This trained model is driven by how individuals contract their muscles for a given DOF during model training. The length of the residual limb available for electrode placement and the integration of sensory feedback in the socket are indeed challenges that are not faced when testing able-bodied subjects and must be addressed when testing amputees. It should be noted that the use of the audio feedback modality in this work reduces the challenges associated with integrating sensory feedback mechanisms within the socket.
In this study, we conducted psychophysical tests on one DOF, i.e., closing the hand to grasp-and-lift an object, to avoid fatigue and loss of motivation. However, we designed the performance test for a two-DOF task where subjects had to activate both DOFs, i.e., digits flexion/extension and thumb adduction/abduction, to ensure that they were able to fully control the device to achieve the task and to collect performance results that could be compared to previous studies [38, 42, 43]. During the performance test, we noticed that lifting the weight of the prosthetic hand affected users' ability to open the hand after grasping the object, which affected the performance for both control strategies tested equally. This weight effect could be avoided in future experiments by using a tool balancer [58].
Furthermore, some subjects reported that continuous audio feedback may be a distraction; however, our results show that, although subjects may not purposely focus on integrating this feedback, they unconsciously integrate it into their internal models. With this in mind, a new question arises: would a task specific discrete audio feedback, i.e., discrete beeps on contact and release of an object akin to the Discrete Event-driven Sensory feedback Control (DESC) principle [1, 59], be less irritating while potentially enabling similar integration? This question will be addressed in future research.
Although audio-augmented feedback showed promising results, the minimum quantity of feedback that is useful for developing strong internal models must still be identified, along with what quality is required for real-time regulation. Future work informed by this study includes: investigating the benefits of using audio feedback for limb-different individuals, exploring a combination of other augmented feedback that might enable an even stronger internal model, exploring the effect of augmenting other feedback modalities on the internal model strength, investigating the effect of audio-augmented feedback control strategy for a more complex task on the internal model strength and the performance, and, finally, investigating the retention of internal models developed while using the audio-augmented feedback control strategy.
We extended our previous work to investigate the benefits of using audio-augmented feedback by testing a classifier-based control with and without this feedback for a grasp-and-lift task when using a prosthetic hand. Results from psychophysical and performance tests showed that audio-augmented feedback enables the development of a strong internal model and better short-term performance. In addition, we concluded that audio feedback may be used in real-time regulation of grasping forces during a grasp-and-lift task.
AF:
Audio-augmented Feedback
DESC:
Discrete Event-driven Sensory feedback Control
DOF:
Degree of Freedom
EMG:
Electromyography
iVE:
instrumented Virtual Egg
JND:
Just-Noticeable-Difference
MCT:
Mean Completion Time
NF:
No-augmented Feedback
SVR:
Support Vector Regression
TS:
Trial Submovements
Johansson RS, Cole KJ. Sensory-motor coordination during grasping and manipulative actions. Curr Opin Neurobiol. 1992;2:815–23.
Dromerick AW, Schabowsky CN, Holley RJ, Monroe B. Feedforward control strategies of subjects with transradial amputation in planar reaching. J Rehabil Res Dev. 2010;47(3):201.
Kawato M. Internal models for motor control and trajectory planning. Curr Opin Neurobiol. 1999;9:718–27.
Wolpert DM, Ghahramani Z, Jordan MI. An internal model for sensorimotor integration. Science. 1995;269:1880–2. https://doi.org/10.1126/science.7569931.
Imamizu H, Miyauchi S, Tamada T, Sasaki Y, Takino R, PuÈtz B, Yoshioka T, Kawato M. Human cerebellar activity reflecting an acquired internal model of a new tool. Nature. 2000;403(6766):192.
Parr JV, Vine SJ, Harrison NR, Wood G. Examining the spatiotemporal disruption to gaze when using a myoelectric prosthetic hand. J Mot Behav. 2018;50(4):416-25.
Atkins DJ, Heard DC, Donovan WH. Epidemiologic overview of individuals with upper-limb loss and their reported research priorities. J Prosthetics and Orthotics. 1996;8(1):2-11.
Lum PS, Black I, Holley RJ, Barth J, Dromerick AW. Internal models of upper limb prosthesis users when grasping and lifting a fragile object with their prosthetic limb. Exp Brain Res. 2014;232:3785–95.
Antfolk C, D'Alonzo M, Rosén B, Lundborg G, Sebelius F, Cipriani C. Sensory feedback in upper limb prosthetics. Expert Rev Med Devices. 2013;10:45–54. https://doi.org/10.1586/erd.12.68.
Childress DS. Closed-loop control in prosthetic systems: historical perspective. Ann Biomed Eng. 1980;8:293–303.
Kuiken TA, Dumanian GA, Lipschutz RD, Miller LA, Stubblefield KA. The use of targeted muscle reinnervation for improved myoelectric prosthesis control in a bilateral shoulder disarticulation amputee. Prosthetics Orthot Int. 2004;28:245–53.
Hebert JS, Olson JL, Morhart MJ, Dawson MR, Marasco PD, Kuiken TA, et al. Novel targeted sensory Reinnervation technique to restore functional hand sensation after Transhumeral amputation. IEEE Trans neural Syst Rehabil Eng. 2014;22:765–73.
Tan DW, Schiefer MA, Keith MW, Anderson JR, Tyler J, Tyler DJ. A neural interface provides long-term stable natural touch perception. Sci Transl Med. 2014;6(257):257ra138-.
Davis TS, Wark HA, Hutchinson DT, Warren DJ, O'Neill K, Scheinblum T, Clark GA, Normann RA, Greger B. Restoring motor control and sensory feedback in people with upper extremity amputations using arrays of 96 microelectrodes implanted in the median and ulnar nerves. J Neural Eng. 2016;13(3):036001.
Delgado-Martínez I, Righi M, Santos D, Cutrone A, Bossi S, D'Amico S, Del Valle J, Micera S, Navarro X. Fascicular nerve stimulation and recording using a novel double-aisle regenerative electrode. J Neural Eng. 2017;14(4):046003.
Cordella F, Ciancio AL, Sacchetti R, Davalli A, Cutti AG, Guglielmelli E, et al. Literature Review on Needs of Upper Limb Prosthesis Users. Front Neurosci. 2016;10:1–14.
Engdahl SM, Christie BP, Kelly B, Davis A, Chestek CA, Gates DH. Surveying the interest of individuals with upper limb loss in novel prosthetic control techniques. J Neuroeng Rehabil. 2015;12(1):53.
Rombokas E, Stepp CE, Chang C, Malhotra M, Matsuoka Y. Vibrotactile sensory substitution for electromyographic control of object manipulation. IEEE Trans Biomed Eng. 2013;60:2226–32.
D'Alonzo M, Cipriani C. Vibrotactile sensory substitution elicits feeling of ownership of an alien hand. PLoS One. 2012;7
Antfolk C, D'Alonzo M, Controzzi M, Lundborg G, Rosen B, Sebelius F, et al. Artificial redirection of sensation from prosthetic fingers to the phantom hand map on transradial amputees: Vibrotactile versus mechanotactile sensory feedback. IEEE Trans Neural Syst Rehabil Eng. 2013;21:112–20.
Kaczmarek KA, Webster JG, Bach-y-Rita P, Tompkins WJ. Electrotactile and vibrotactile displays for sensory substitution systems. IEEE Trans Biomed Eng. 1991;38:1–16.
Green AM, Chapman CE, Kalaska JF, Lepore F. Sensory feedback for upper limb prostheses. Enhancing Perform Action Percept Multisensory Integr Neuroplast Neuroprosthetics. 2011;69
Gonzalez-Vargas J, Dosen S, Amsuess S, Yu W, Farina D. Human-machine interface for the control of multi-function systems based on electrocutaneous menu: application to multi-grasp prosthetic hands. PLoS One. 2015;10:e0127528.
Wheeler J, Bark K, Savall J, Cutkosky M. Investigation of rotational skin stretch for proprioceptive feedback with application to myoelectric systems. IEEE Trans Neural Syst Rehabil Eng. 2010;18:58–66.
Gonzalez J, Soma H, Sekine M, Yu W. Psycho-physiological assessment of a prosthetic hand sensory feedback system based on an auditory display: a preliminary study. J Neuroeng Rehabil. 2012;9:33. https://doi.org/10.1186/1743-0003-9-33.
Cipriani C, Zaccone F, Micera S, Carrozza MC. On the shared control of an EMG-controlled prosthetic Hand : analysis of user – prosthesis interaction. IEEE Trans Robot. 2008;24:170–84.
Chatterjee A, Chaubey P, Martin J, Thakor N. Testing a prosthetic haptic feedback simulator with an interactive force matching task. JPO J Prosthetics Orthot. 2008;20:27–34.
Ninu A, Dosen S, Muceli S, Rattay F, Dietl H, et al. Closed-loop control of grasping with a myoelectric hand prosthesis: which are the relevant feedback variables for force control? 2014:1041–52.
Raspopovic S, Capogrosso M, Petrini FM, Bonizzato M, Rigosa J, Di Pino G, et al. Restoring natural sensory feedback in real-time bidirectional hand prostheses. Sci Transl Med. 2014;6:222ra19.
Dosen S, Markovic M, Strbac M, Perovic M, Kojic V, Bijelic G, et al. Multichannel Electrotactile feedback with spatial and mixed coding for closed-loop control of grasping force in hand prostheses. IEEE Trans Neural Syst Rehabil Eng. 2016;4320:1–1. https://doi.org/10.1109/TNSRE.2016.2550864.
Markovic M, Karnal H, Graimann B, Farina D, Dosen S. GLIMPSE: Google Glass interface for sensory feedback in myoelectric hand prostheses. J Neural Eng. 2017;14(3):036007.
Marasco PD, Kim K, Colgate JE, Peshkin MA, Kuiken TA. Robotic touch shifts perception of embodiment to a prosthesis in targeted reinnervation amputees. Brain. 2011;134:747–58.
Sengul A, Shokur S, Bleuler H. Brain incorporation of artificial limbs and role of haptic feedback. In: Rodić A, Pisla D, Bleuler H, editors. New trends in medical and service robots: challenges and solutions. Cham: Springer International Publishing; 2014. p. 257–68.
Zafar M, Van Doren CL. Effectiveness of supplemental grasp-force feedback in the presence of vision. Med Biol Eng Comput. 2000;38:267–74.
Dosen S, Markovic M, Somer K, Graimann B, Farina D. EMG biofeedback for online predictive control of grasping force in a myoelectric prosthesis. J Neuroeng Rehabil. 2015;12:55.
Schweisfurth MA, Markovic M, Dosen S, Teich F, Graimann B, Farina D. Electrotactile EMG feedback improves the control of prosthesis grasping force. J Neural Eng. 2016;13:56010.
Johnson RE, Kording KP, Hargrove LJ, Sensinger JW. Adaptation to random and systematic errors : Comparison of amputee and non-amputee control interfaces with varying levels of process noise. PLoS One. 2017:1–19.
Shehata AW, Scheme EJ, Sensinger JW. Evaluating internal model strength and performance of myoelectric prosthesis control strategies. IEEE Trans Neural Syst Rehabil Eng. 2018;26:1046–55.
Shehata AW, Scheme EJ, Sensinger JW. The effect of myoelectric prosthesis control strategies and feedback level on adaptation rate for a target acquisition task. In: Rehabilitation Robotics (ICORR), 2017 International Conference on. IEEE; 2017. p. 200–4.
Huang Y, Englehart KB, Hudgins B, Chan ADC. A Gaussian mixture model based classification scheme for myoelectric control of powered upper limb prostheses. IEEE Trans Biomed Eng. 2005;52:1801–11.
Scheme E, Englehart K. Electromyogram pattern recognition for control of powered upper-limb prostheses: state of the art and challenges for clinical use. J Rehabil Res Dev. 2011;48:643–59.
Hahne JM, Markovic M, Farina D. User adaptation in myoelectric man-machine interfaces. Scientific reports. 2017;7(1):4437.
Shehata AW, Scheme EJ, Sensinger JW. Audible Feedback Improves Internal Model Strength and Performance of Myoelectric Prosthesis Control. Scientific reports. 2018;8(1):8541.
Cipriani C, Controzzi M, Carrozza MC. The SmartHand transradial prosthesis. J Neuroeng Rehabil. 2011;8(1):29.
Controzzi M, Clemente F, Pierotti N, Bacchereti M, Cipriani C. Evaluation of hand function trasporting fragile objects: the virtual eggs test. In: Myoelectric Control Symposium. 2017.
Hargrove L, Englehart K, Hudgins B. A comparison of surface and intramuscular myoelectric signal classification. IEEE Trans Biomed Eng. 2007;54:847–53.
Tenore F, Ramos A, Fahmy A, Acharya S, Etienne-Cummings R, Thakor NV. Towards the control of individual fingers of a prosthetic hand using surface EMG signals. Conf Proc IEEE Eng Med Biol Soc. 2007;2007:6146–9. https://doi.org/10.1109/IEMBS.2007.4353752.
Wilson AW, Losier YG, Parker PA, Lovely DF. A bus-based smart myoelectric electrode/amplifier — system requirements. IEEE Trans Instrum Meas. 2011;60:1–10.
Bastian AJ. Understanding sensorimotor adaptation and learning for rehabilitation. Curr Opin Neurol. 2008;21:628–33. https://doi.org/10.1097/WCO.0b013e328315a293.Understanding.
Faes L, Nollo G, Ravelli F, Ricci L, Vescovi M, Turatto M, et al. Small-sample characterization of stochastic approximation staircases in forced-choice adaptive threshold estimation. Percept Psychophys. 2007;69:254–62.
Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–33. https://doi.org/10.1038/415429a.
Mathiowetz V, Volland G, Kashman N, Weber K. Adult norms for the box and block test of manual dexterity. Am J Occup Ther. 1985;39:386–91.
Fishbach A, Roy SA, Bastianen C, Miller LE, Houk JC. Deciding when and how to correct a movement : discrete submovements as a decision making process. Exp Brain Res. 2007;177:45–63.
Doeringer JA, Hogan N. Intermittency in preplanned elbow movements persists in the absence of visual feedback. J Neurophysiol. 1998;80:1787–99.
Kositsky M, Barto AG. The emergence of multiple movement units in the presence of noise and feedback delay. Adv neural Inf process Syst 14, NIPS 2001. Proc. 2001;14:1–8.
Dipietro L, Krebs HI, Fasoli SE, Volpe BT, Hogan N. Submovement changes characterize generalization of motor recovery after stroke. Cortex. 2009;45:318–24.
Strbac M, Isakovic M, Belic M, Popovic I, Simanic I, Farina D, et al. Short- and Long-Term Learning of Feedforward Control of a Myoelectric Prosthesis with Sensory Feedback by Amputees. IEEE Trans Neural Syst Rehabil Eng. 2017:4320 c.
Wilson AW, Blustein DH, Sensinger JW. A third arm – design of a bypass prosthesis enabling incorporation: The International Conference on Rehabilitation Robotics; 2017. p. 1381–6
Clemente F, D'Alonzo M, Controzzi M, Edin BB, Cipriani C. Non-invasive, temporally discrete feedback of object contact and release improves grasp control of closed-loop myoelectric transradial prostheses. IEEE Trans Neural Syst Rehabil Eng. 2016;24:1314–22.
The authors thank Adam Wilson for feedback on training procedures. Many thanks to Francesco Clemente and Michele Bacchereti for their maintenance of the IH2 Azzurra Hand and the instrumented virtual egg.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the New Brunswick Health Research Foundation (NBHRF), and the European Commission (DeTOP - #687905). The work by C.C. was also funded by the European Research Council (MYKI - #679820).
Data is available upon request – email corresponding author.
Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
Ahmed W. Shehata, Erik J. Scheme & Jonathon W. Sensinger
Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, AB, T6G 2E1, Canada
Ahmed W. Shehata
Scuola Superiore Sant'Anna, The BioRobotics Institute, V.le R. Piaggio 34, 56025, Pontedera, PI, Italy
Leonard F. Engels, Marco Controzzi & Christian Cipriani
Leonard F. Engels
Marco Controzzi
Christian Cipriani
Erik J. Scheme
Jonathon W. Sensinger
AS, CC, ES, and JS planned the experiments. AS, LE, and MC prepared the experiments. AS and LE conducted the experiments. All authors analyzed the results, reviewed the manuscript, and approved the submitted version.
Correspondence to Ahmed W. Shehata.
Informed consent according to the University of New Brunswick Research and Ethics Board and to Scuola Superiore Sant'Anna Ethical Committee was obtained from subjects before conducting the experiment (UNB REB 2014–019 and SSSA 02/2017).
Shehata, A.W., Engels, L.F., Controzzi, M. et al. Improving internal model strength and performance of prosthetic hands using augmented feedback. J NeuroEngineering Rehabil 15, 70 (2018). https://doi.org/10.1186/s12984-018-0417-4
Support vector machines
Internal model
Real-time systems
Augmented feedback
Sensory feedback
| CommonCrawl
Discrete Gaussian Sampling role in Lattice-Based Crypto?
I'm reading up on how post-quantum cryptography works, and stumbled upon the notion of discrete Gaussian sampling. However, I can't understand where it fits in the bigger picture - currently it feels to me like a solution to a problem nobody posed.
Where exactly in a SVP problem (or any other commonly used lattice problem) would Discrete Gaussian Sampling provide a benefit?
I'm still new to PQ so pardon the highly likely banality of the question
randomness post-quantum-cryptography lattice-crypto
Daniel B
$\begingroup$ As an example, I suggest reading up on LWE. The error introduced is typically sampled from a discrete distribution that approximates a Gaussian. This error is what makes the problem difficult without knowledge of the key, and allows use as a cryptographic primitive. You could probably use a different distribution, but this would almost certainly be less efficient. $\endgroup$ – bkjvbx Jun 9 '18 at 15:56
A Gaussian distribution satisfies the following desirable properties:
It can be implemented coordinate-wise: If $x_1, x_2, \ldots , x_n$ are each sampled independently from a one-variable Gaussian distribution, then $(x_1,x_2,\ldots,x_n)$ is sampled from a multivariable Gaussian distribution.
It approximates a uniform error distribution modulo a lattice exponentially well, regardless of what the lattice is.
The first property makes implementation easy, and the second property makes security proofs easy, since many security proofs in lattice-based crypto involve switching around the lattice lots of times until you get what you want.
To show what I mean by the second property, let's consider a one-dimensional lattice $\mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\} \subset \mathbb{R}^1$. Take a normal distribution with standard deviation $1/2$. In other words, just your average normal distribution:
Suppose that I sample real numbers from this distribution and take the fractional part of the resulting real numbers. (Taking fractional parts corresponds to taking error vectors with respect to the lattice $\mathbb{Z}$.) The resulting distribution amounts to taking the original normal distribution, chopping it up into unit intervals $\ldots, [-2,-1], [-1,0], [0,1], [1,2], \ldots$, and adding them up. When we do that, we get something quite magical:
Notice how close this distribution is to uniform! The "radius" or standard deviation of this distribution is only $1/2$, which is not much larger than the size of the unit interval; in fact it's smaller. Even with such a small radius, we get a ridiculously good approximation to the uniform distribution. You can prove (and you should prove, as an exercise) that the quality of the approximation is independent of the choice of where the original normal distribution is centered.
Suppose we take a slightly larger normal distribution, say with standard deviation $2/3$:
If we graph the fractional part of this distribution, we get:
That's really really good! You can't even tell that it deviates from uniform. In mathematical terms, we say that the distribution is exponentially close to uniform. Even a small increase in the width of the distribution (from $1/2$ to $2/3$) improves the quality of the approximation dramatically.
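If you want to check this numerically, a few lines of Python make the point (the center 0.37 is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.5, 2/3):
    # wrap a Gaussian of width sigma around the unit interval
    frac = rng.normal(loc=0.37, scale=sigma, size=1_000_000) % 1.0
    hist, _ = np.histogram(frac, bins=20, range=(0.0, 1.0), density=True)
    # a perfectly uniform density would be 1.0 everywhere
    print(f"sigma = {sigma:.3f}: max deviation from uniform = {abs(hist - 1.0).max():.4f}")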
You might say, what's the big deal? We can easily get a uniform distribution on any interval. But that's not the point. In lattice-based cryptography, you often don't know what the lattice is. (It's part of someone's secret key.) Suppose as an exercise that we didn't know what this lattice is, and we tried to sample error vectors by taking them uniformly from an interval $[0,n]$ for some $n$. We can't just take the perfect choice of $n=1$. That's cheating, since we're assuming we don't know what the lattice is. In this case, any choice of a small number $n$ will cause the distribution to be horribly wrong; for example, if we chose $n=2/3$ in this scenario, then all of our error vectors would lie in $[0,2/3]$, which is far from uniform in $[0,1]$. Even a slightly larger $n$ is no good; for example if $n=3/2$ then random real numbers sampled uniformly from $[0,3/2]$ will be much more likely to have fractional part (i.e. error vector) lying in $[0,1/2]$ than in $[1/2,1]$. Of course, a very large $n$ (say $n \approx 10^9$) would do the job, but that's exactly the problem: since our distribution on $[0,n]$ doesn't converge to the uniform distribution on $[0,1]$ exponentially fast (unless we cheat by taking $n \in \mathbb{Z}$, which is not allowed), we end up needing to take very large values of $n$, which is not only hard to implement, but a nightmare in security proofs where theoretical analysis is required.
What you need is a way to approximate uniform error vectors exponentially well, without relying on prior knowledge of what the lattice is. Gaussian distributions do that job.
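For concreteness, here is a toy way to sample from a discrete Gaussian over $\mathbb{Z}$ by rejection against a uniform proposal on a truncated range. It is only a sketch: real implementations must be constant-time and use carefully analyzed tail bounds, so do not use this for actual cryptography.

import math
import random

def sample_discrete_gaussian(sigma, center=0.0, tail=12):
    """Rejection-sample an integer z with probability proportional to
    exp(-(z - center)^2 / (2 * sigma^2)), truncated at +/- tail*sigma."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)                        # uniform proposal
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:                         # accept with probability rho <= 1
            return z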
djao
Uniform vs discrete Gaussian sampling in Ring learning with errors
Faster discrete Gaussian sampling
Post-Quantum Primitives' Object Sizes
Effect of tail cutting and precision of discrete Gaussian sampling on LWE / Ring-LWE security
Rejection Sampling reasoning for Lattice Based Signatures | CommonCrawl |
Jounce, Crackle and Pop
March 06, 2018 / Matt Hall
I saw this T-shirt recently, and didn't get it. (The joke or the T-shirt.)
It turns out that the third derivative of displacement \(x\) with respect to time \(t\) — that is, the derivative of acceleration \(\mathbf{a}\) — is called 'jerk' (or sometimes, boringly, jolt, surge, or lurch) and is measured in units of m/s³.
So far, so hilarious, but is it useful? It turns out that it is. Since the force \(\mathbf{F}\) on a mass \(m\) is given by \(\mathbf{F} = m\mathbf{a}\), you can think of jerk as being equivalent to a change in force. The lurch you feel at the onset of a car's acceleration — that's jerk. The designers of transport systems and rollercoasters manage it daily.
$$ \mathrm{jerk,}\ \mathbf{j} = \frac{\mathrm{d}^3 x}{\mathrm{d}t^3}$$
Here's a visualization of velocity (green line) of a Tesla Model S driving in a parking lot. The coloured stripes show the acceleration (upper plot) and the jerk (lower plot). Notice that the peaks in jerk correspond to changes in acceleration.
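If you want to reproduce something like this from raw samples, the derivatives can be estimated with finite differences. Here's a minimal sketch assuming a uniformly sampled velocity trace (the sampling interval dt is up to you):

import numpy as np

def accel_and_jerk(velocity, dt):
    """Estimate acceleration (m/s^2) and jerk (m/s^3) from a uniformly
    sampled velocity trace (m/s) using central differences."""
    a = np.gradient(velocity, dt)    # dv/dt
    j = np.gradient(a, dt)           # da/dt, i.e. d3x/dt3
    return a, j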
The snap you feel at the start of the lurch? That's jounce — the fourth derivative of displacement and the derivative of jerk. Eager et al (2016) wrote up a nice analysis of these quantities for the examples of a trampolinist and roller coaster passenger. Jounce is sometimes called snap... and the next two derivatives are called crackle and pop.
What about momentum?
If the momentum \(\mathbf{p}\) of a mass \(m\) moving at a velocity \(\mathbf{v}\) is \(m\mathbf{v}\), and \(\mathbf{F} = m\mathbf{a}\), what is mass times jerk? According to the physicist Philip Gibbs, who investigated the matter in 1996, it's called yank:
"Momentum equals mass times velocity.
Force equals mass times acceleration.
Yank equals mass times jerk.
Tug equals mass times snap.
Snatch equals mass times crackle.
Shake equals mass times pop."
There are jokes in there, help yourself.
What about integrating?
Clearly the integral of jerk is acceleration, and that of acceleration is velocity, the integral of which is displacement. But what is the integral of displacement with respect to time? It's called absement, and it's a pretty peculiar quantity to think about. In the same way that an object with linearly increasing displacement has constant velocity and zero acceleration, an object with linearly increasing absement has constant displacement and zero velocity. (Constant absement at zero displacement gives rise to the name 'absement': an absence of displacement.)
Integrating displacement over time might be useful: the area under the displacement curve for a throttle lever could conceivably be proportional to fuel consumption for example. So absement seems to be a potentially useful quantity, measured in metre-seconds.
Integrate absement and you get absity (a play on 'velocity'). Keep going and you get abseleration, abserk, and absounce. Are these useful quantities? I don't think so. A quick look at them all — for the same Tesla S dataset I used before — shows that the loss of detail from multiple cumulative summations makes for rather uninformative transformations:
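For completeness, here's a minimal sketch of how those repeated integrals can be computed from a sampled displacement trace (simple rectangle-rule accumulation; the notebook may do it differently):

import numpy as np

def repeated_integrals(displacement, dt, n=4):
    """Cumulatively integrate a displacement trace n times, yielding
    absement, absity, abseleration, abserk, ..."""
    out, series = [], np.asarray(displacement, dtype=float)
    for _ in range(n):
        series = np.cumsum(series) * dt   # rectangle-rule integration
        out.append(series)
    return out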
You can reproduce the figures in this article with the Jupyter Notebook Jerk_jounce_etc.ipynb. Or you can launch a Binder right here in your browser and play with it there, without installing a thing!
David Eager et al (2016). Beyond velocity and acceleration: jerk, snap and higher derivatives. Eur. J. Phys. 37 065008. DOI: 10.1088/0143-0807/37/6/065008
Amarashiki (2012). Derivatives of position. The Spectrum of Riemannium blog, retrieved on 4 Mar 2018.
The dataset is from Jerry Jongerius's blog post, The Tesla (Elon Musk) and
New York Times (John Broder) Feud. I have no interest in the 'feud', I just wanted a dataset.
The T-shirt is from Chummy Tees; the image is their copyright and used here under Fair Use terms.
The vintage Snap, Crackle and Pop logo is copyright of Kellogg's and used here under Fair Use terms.
March 06, 2018 / Matt Hall/ 2 Comments
Fun, Science
mathematics, velocity, calculation, physics, mechanics
Where is the ground?
December 08, 2016 / Evan Bianco
This is the upper portion of a land seismic profile in Alaska. Can you pick a horizon where the ground surface is? Have a go at pickthis.io.
Pick the Ground surface at the top of the seismic section at pickthis.io.
Picking the ground surface on land-based seismic data is not straightforward. Picking the seafloor reflection on marine data, on the other hand, is usually a piece of cake, a warm-up pick. You can often auto-track the whole thing with a few seeds.
Seafloor reflection on Penobscot 3D survey, offshore Nova Scotia. from Matt's tutorial in the April 2016 The Leading Edge, The function of interpolation.
Why aren't interpreters more nervous that we don't know exactly where the surface of the earth is? I'm sure I'm not the only one that would like to have this information while interpreting. Wouldn't it be great if land seismic were more like marine?
Treacherously Jagged TopographY or Near-Surface processing ArtifactS?
If you're new to land-based seismic data, you might notice that there isn't a nice pickable event across the top of the section like we find in marine seismic data. Shot noise at the surface has been muted (deleted) in processing, and the low fold produces an unclean, jagged look at the top of the section. Additionally, the top of the section, time-zero — the seismic reference datum — usually floats somewhere above the land surface — and we can't know where that is unless it can be found in the file header, or looked up in the processing report.
The seismic reference datum, at a two-way time of zero seconds on seismic data, is typically set at mean sea level for offshore data. For land data, it is usually chosen to 'float' above the land surface.
Reframing the question
This challenge is a bit of a trick question. It begs the viewer to recognize that the seemingly simple task of mapping the ground level on a land seismic section is actually a rudimentary velocity modeling or depth conversion exercise in itself. Wouldn't it be nice to have the ground surface expressed as pickable seismic event? Shouldn't we have it always in our images? Baked into our data, so to speak, such that we've always got an unambiguous pick? In the next post, I'll illustrate what I mean and show what's involved in putting it in.
In the meantime, I challenge you to pick where you think the (currently absent) ground surface is on this profile, so in the next post we can see how well you did.
December 08, 2016 / Evan Bianco/ 3 Comments
Science, Workflows
seismic, near surface, interpretation, processing, velocity, pick this | CommonCrawl |
Teaching evolution in U.S. public middle schools: results of the first national survey
Glenn Branch ORCID: orcid.org/0000-0002-4931-39351,
Ann Reid ORCID: orcid.org/0000-0002-8899-52461 &
Eric Plutzer ORCID: orcid.org/0000-0002-5456-72272
Evolution: Education and Outreach volume 14, Article number: 8 (2021) Cite this article
Despite substantial research on the teaching of evolution in the public high schools of the United States, we know very little about evolution teaching in the middle grades. In this paper, we rely on a 2019 nationally representative sample of 678 middle school science teachers to investigate how much time they report devoting to evolution and the key messages they report conveying about it, using this information to assess the state of middle school evolution education today. Throughout these analyses, we provide comparative data from high school biology teachers to serve as a baseline.
We find that, compared to high school biology teachers, middle school science teachers report themselves as less well-equipped to teach evolution, devoting less class time to evolution, and more likely to avoid taking a stand on the scientific standing of evolution and creationism. We show that middle school science teachers with extensive pre-service coursework in evolution and in states that have adopted the Next Generation Science Standards are more likely to report devoting more class time to evolution. Similarly, we show that middle school teachers in states that have adopted the Next Generation Science Standards and who are newer to the profession are more likely to report themselves as presenting evolution as settled science.
Our findings suggest avenues for the improvement of middle school evolution education through teacher preparation and public policy; in addition, a degree of improvement through retirement and replacement is likely to occur naturally in the coming years. More generally, our results highlight the need for further research on middle school education. Our broad statistical portrait provides an overview that merits elaboration with more detailed research on specific topics.
The teaching of evolution in the public schools of the United States has been subject not only to media scrutiny, attention-grabbing court cases, and dramatizations such as Inherit the Wind, but also to intense academic research. In articles and books too numerous to list comprehensively, scholars have scrutinized textbooks (e.g., Skoog 1984), state content standards (e.g., Lerner 2000; Mead and Mates 2009), teacher practices (e.g., Griffith and Brem 2004; Moore and Kraemer 2005), pre-service and in-service teachers' education (e.g., Friederichsen et al. 2016) and the effects of social and community forces that influence each of these (e.g., Berkman and Plutzer 2010). Yet for all of this research, we know very little about evolution teaching in the middle grades.Footnote 1
The middle school years are important for introducing students to many of the evolutionary concepts that they will need to master the topic in high school. (In the United States, middle schools generally serve grades 6 through 8, with students usually 11 to 14 years old, while junior high schools generally serve grades 7 through 9, with students usually 12 to 15 years old; for brevity, we will use "middle school" to abbreviate "middle or junior high school.") Indeed, the foundations for understanding evolution are woven into the Next Generation Science Standards beginning in kindergarten. Therefore, attention to how evolution is taught in the middle grades is long overdue. As we show below, middle school science teachers report devoting considerable time to evolution, so it is imperative that the quality of that instruction and the challenges faced by middle school science teachers be understood and appreciated. In this paper, we aim to take a first and important step in spurring research on evolution education in the middle grades that is comparable to that at the high school level.
To that end, the paper proceeds as follows. First, we document and summarize the meager body of relevant research that we could identify. Second, we introduce our methodology and data set, a 2019 nationally representative sample of 678 middle school science teachers. Third, we present a series of results on how much time these teachers report devoting to evolution and the key messages they report conveying to students, using this information to assess the state of middle school evolution education today. Throughout these analyses, we provide comparative data from high school biology teachers. Fourth, we offer some discussion, providing an overview of the results, a discussion of the limitations of the study and possible directions for future research, and recommendations for teacher preparation and public policy.
What do we know about evolution teaching in the middle grades?
After an extensive search, we identified only a handful of scholarly reports on evolution teaching in middle school. Nadelson and Nadelson's (2010) study of teacher attitudes toward teaching evolution included a subsample of 13 middle school teachers, but the article never breaks out this group for comparison—probably due to the unreliability of statistical comparisons with such a small sample. Fowler and Meisels (2010) report results from 85 middle school teachers, part of a convenience sample of Florida NSTA members, finding that 67% of them agreed that evolution is a central principle in biology, while 40% felt that one "does not need to understand evolution in order to understand biology." Based on their review of the literature, Glaze and Goldston (2015) conclude that "elementary and middle school teachers demonstrated greater misgivings" about teaching evolution, as well as less acceptance of evolution, "than their secondary counterparts." This is a very plausible conclusion, but we were unable to find firm empirical evidence of this in the work they cite. Finally, as part of her recent dissertation, Klahn (2020) conducted structured interviews with ten middle school science teachers about evolution, finding that they favored emphasizing microevolution over macroevolution as a teaching approach, few of them discussed human evolution, and they were concerned about pressures from inside and outside the school.
Taken together, the existing literature suggests that many of the same themes that arise in research on high school biology teachers are present among middle school science teachers, but it is impossible to say much more than that. Indeed, if all these studies provided precise statistical comparisons, the combined sample size would be under 110, so it is not possible to assess whether the observed teachers are representative of all middle school teachers or even of teachers in their particular locale. Thus, there is an acute need for research that uses standard questions and methods and representative samples to understand whether middle school evolution teaching differs from evolution teaching in high schools, and—if so—by how much. In the next section, we describe a study that does precisely this.
Fielded between February and May of 2019, the 2019 Survey of American Science Teachers included both a high school and a middle school sample. Results from the high school responses can be found in Plutzer, Branch, and Reid (2020), and the methods for the present study are described in detail in that report. We repeat the description of the methods here for the convenience of readers. The sample was drawn, based on investigator specifications, from a national teacher file maintained by MDR (Market Data Retrieval, a Dun and Bradstreet direct mail firm that maintains the largest mailing list of educators in the US). To ensure national coverage, national lists of 30,847 high school biology teachers and 55,001 middle school science teachers were first stratified by state and urban/suburban/other location. With the District of Columbia serving as a single stratum, this produced 151 segments. Within each segment, we selected a random sample with a high school sampling probability of 0.081 and a middle school probability of 0.046, yielding an initial set of 2503 high school biology teacher and 2511 middle school teacher names and addresses.
Replicating precisely the survey protocol used by Berkman and Plutzer's 2007 survey of high school biology teachers (Berkman et al. 2008), and consistent with best practices for mail surveys (Dillman et al. 2014), we then sent each teacher an advance prenotification letter explaining the survey and telling them that a large survey packet would arrive in a few days. The packet included a cover letter, a token pre-incentive (a $2 bill), a 12-page survey booklet, and a postage-paid return envelope. One week later a reminder postcard was sent, and a complete replacement packet (though without an incentive) two weeks after that. In the week after the replacement packet was mailed, we emailed reminders to the roughly 85% of non-responding teachers for whom we had valid emails. Two email reminders and one final postcard—saying that the study was about to close—followed.
The overall response rate was 40% for the high school sample and 34% for the middle school sample (using AAPOR response rate formula #4). To place this in context, sample surveys of teachers vary considerably in their overall response rate, ranging from the low single digits (Puhl et al. 2016; Troia and Graham 2016; Davis et al. 2017; Dragowski et al. 2016) and the mid-teens (Lang et al. 2017; Hart et al. 2017) to Department of Education survey programs that approach 70% (National Center for Education Statistics n.d., 2019; Centers for Disease Control and Prevention 2015). In that light, our response rate is at the high end of results achieved outside of government-sponsored studies. However, survey scientists have sought to discourage a heavy reliance on response rates as indicators or data quality. Indeed, scores of studies show that there is no simple relationship between response rates and Total Survey Error or response bias (e.g., Keeter et al. 2000; Groves and Peytcheva 2008; Keeter 2018), leading to a greater focus on direct measures of a sample's representativeness. To this end, we conducted a detailed non-response audit, and found that the responding teachers were broadly representative of the target population. Details are provided in the Appendix (see Tables 10, 13, and 14, and the accompanying text).
We augmented the design weights with a non-response adjustment, and we report weighted estimates throughout this report, although the unweighted results are almost always similar. Full details on the methods of contact, the non-response audit, and methods of weight calculation are provided in the Appendix.
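As a rough illustration of this step (a sketch only; the full weighting procedure is described in the Appendix), the analysis weight for each respondent is the product of the design weight and a non-response adjustment, and weighted percentages are computed as:

import numpy as np

def weighted_percentage(indicator, design_weight, nonresponse_adj):
    """Weighted percentage of teachers giving a particular answer, where
    indicator is 1 if the teacher gave that answer and 0 otherwise."""
    w = np.asarray(design_weight) * np.asarray(nonresponse_adj)
    y = np.asarray(indicator, dtype=float)
    return 100.0 * np.sum(w * y) / np.sum(w)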
The full pencil-and-paper questionnaire was a twelve-page booklet. In addition to the items discussed in this report, the questionnaire also included sections on the teaching of climate change, textbook selection, and additional questions about how teachers manage controversy in their classrooms—topics that are beyond the scope of this paper.
Pre-service coursework
Middle school science teachers are expected to be generalists, with most middle school science classes covering a mixture of earth and space sciences, life sciences, chemistry, and physics. Some may also include technology or computer applications. Thus, the likelihood that any middle school teacher has extensive coursework in any one area is low. As a consequence, we expect middle school teachers to have less pre-service coursework on evolution and less in-service continuing education on evolution, as compared to high school biology teachers. (The National Survey of Science & Mathematics Education found this to be true with regard to pre-service coursework in 2012 and 2018: see Smith 2020, p. 10, Table 2.8.) And that expectation is borne out by our survey, as shown in Table 1. Because the distribution of coursework for middle school teachers is quite skewed, we combine answers to two different questions and reduce to four levels of coursework (detailed breakdowns are provided in Supplementary Materials, Additional file 1: Table S1). Table 1 shows that 42% of middle school teachers reported that they had not taken even a single college-level course with any evolution content. An additional 23% reported taking just one course; for four in five of this group, this was not a course primarily focused on evolution but another science course that devoted one or more class sessions to evolution.
Table 1 Reported pre-service and continuing education coursework covering evolution (percentages within column)
Understanding of the scientific consensus
Given their less extensive coursework on evolution, we expected fewer middle school teachers to be aware of the scientific consensus on evolution. That is indeed the case. We asked all teachers, "To the best of your knowledge, what proportion of scientists think that humans and other living things have evolved over time?" The Pew Research Center's (2015) survey of AAAS members estimates the correct answer as 98%. As shown in Table 2, only 55% of middle school teachers correctly answered that 81–100% of scientists think that humans evolved, in contrast to 71% in the high school sample.
Table 2 Teachers' perceptions of consensus among scientists on human evolution (percentages within column)
In addition, we expected that the middle school science teachers would be less likely report accepting evolution themselves. The teachers were asked about their view on human origins, using a question frequently used in general population public opinion polls. Table 3 presents the results, along with the results from a 2019 Gallup poll of the general public for reference (Brenan 2019). These results show while middle school teachers were more likely than high school teachers to choose the creationist response, they were significantly (p < 0.001) less likely to do so than members of the general public.Footnote 2
Table 3 Personal views on human evolution (percentages within column)
Teaching evolution: time devoted to the topic and messages conveyed to students
Evolution is expected to be taught in middle school science classes in most states: Vazquez (2017) notes that the state science standards in all but two states mention natural selection and 37 mention evolution. Nevertheless, because middle school science classes are more general and tend to be more multidisciplinary than high school biology, and because middle school students are less capable of understanding complicated concepts like those involved in evolution than high school students, we expected to find that middle school science teachers devote less time to both general evolution and human evolution than their high school counterparts.
We asked each teacher to select his or her primary class (that with the largest enrollment) and tell us about the allocation of time devoted to various topics in that class. The question began, "Thinking about how you lay out your class for the year, please indicate how many class hours (40–50 min) you typically spend on each of the following broad topic areas." After cell biology, ecology, and human health and disease, the teachers were asked about human evolution, general evolutionary processes, and (later in the sequence) "intelligent design or creationism."Footnote 3
We first report on the two evolution topics. As Table 4 shows, only 45% of middle school science teachers reported devoting any class time to human evolution and only 60% to general evolutionary processes; this is in comparison to 78 and 91% of high school biology teachers, respectively. Combining both answers, the mean number of class hours reportedly devoted to evolution (totaling human and general) is nearly double in high school compared to middle school (17.2 class hours versus 9.1).
Table 4 Reported number of class hours devoted to evolution (percentages within row)
If we restrict comparison to those teachers who reported devoting at least one class hour to evolution, however, the difference narrows (18.6 class hours versus 14.6). That is, when middle school science teachers cover evolution at all, they spend only 22% less time on the subject than high school biology teachers do. To put it another way, assuming a 5-day week, high school biology teachers spend about 3.7 weeks on evolution on average, while middle school science teachers spend about 2.9 weeks.
But focusing on the class hours reportedly devoted to the topic can obscure important differences in how the science of evolution is characterized. To assess the messages that teachers convey to students, we presented a series of prompts about the themes that teachers emphasize in the overall course organization and the messages they convey to students.
Perhaps most important is the prompt, "When I teach about the origins of biological diversity (including answering student questions) … I emphasize the broad scientific consensus that evolution is fact."Footnote 4 Teachers could agree, disagree, or choose "not applicable." The results, reported in the upper panel of Table 5, show that middle school science teachers are half as likely as high school biology teachers to strongly agree.
But this result is deceptive because of the large number of middle school teachers who chose "not applicable."Footnote 5 If we restrict our comparison to those who reported devoting at least one class hour to evolution and exclude those who chose "not applicable," the picture is rather different. As shown in the lower panel of Table 5, the gap in emphasis is much narrower than it appeared initially. Indeed, combining "agree" with "strongly agree" shows that (among those teaching about evolution) 82% of middle school science teachers reported emphasizing the scientific consensus that evolution is a fact, which is comparable to the 86% of high school biology teachers who reported conveying this message.
Table 5 Teachers' reported emphasis on evolution as a fact
Creationism in the middle school science classroom
As any student of science education in the United States knows, the teaching of evolution is only half the story. For more than a century, a number of secondary science educators—such as Roger DeHart, John Freshwater, and Rodney LeVake, to name a few recent high-profile instances—have actively promoted or given credence to non-scientific alternatives to evolution in their public school classrooms. We therefore asked how many middle school science teachers currently discuss creationism in their classes. At first glance (upper panel of Table 6), middle school science teachers look a lot like high school biology teachers in terms of time devoted to creationism, including intelligent design. Overall, 7% of middle school science teachers reported spending 1–2 class hours and 8% reported spending three class hours or more on creationism, while the corresponding percentages for high school biology teachers are 9 and 4%, respectively.
Table 6 Class hours reportedly devoted to creationism or intelligent design
Yet this ignores how few middle school science teachers reported covering evolution. So we also compared high school and middle school teachers who reported devoting at least one class period to evolution. Those results, in the lower panel of Table 6, show that among this subset of teachers, slightly more middle school science teachers reported introducing creationist ideas into the classroom than their high school counterparts (19 versus 14%). But these data alone don't reveal whether these teachers are discussing creationism in order to advocate for it or to criticize it.
To better understand the content of this instruction, we posed two prompts to teachers: "I emphasize that intelligent design is a valid, scientific alternative to Darwinian explanations for the origin of species" and "I emphasize that many reputable scientists view creationism or intelligent design as valid alternatives to Darwinian theory." (These ask about the teacher making the assertion without and with appeals to scientific authority.)
We report the responses to these prompts two ways: first providing the overall distribution (upper panels of Table 7) and next restricting analysis to those who did not choose "not applicable" (lower panels of Table 7). Looking at this more restricted sample, we see substantial differences. About 80% of high school teachers disagree with each statement, which suggests that when creationism arises, whether through student questions or raised by the teachers themselves, they use the occasion to counter the idea that creationism is scientifically credible. While about 18% of high school teachers agree with the first statement, the second, or both, about 36% of middle school teachers agree with the first and about 31% agree with the second. Thus, if they cover these topics, middle school teachers are far more likely to discuss creationist ideas in ways that give them the legitimacy of science.
Table 7 Reported teacher emphasis on creationism as science
A clearer picture emerges when we combine the three questions about teaching emphasis into a teaching typology, along the lines of that developed by Plutzer et al. (2020). In this typology, there are four groups of teachers: those who send the message that evolution is settled science by emphasizing the broad scientific consensus on evolution while not emphasizing the scientific credibility of creationism; those who send mixed messages by emphasizing the broad scientific consensus on evolution while also emphasizing the scientific credibility of creationism; those who avoid the issue by emphasizing neither; and those who send a pro-creationist message by emphasizing the scientific credibility of creationism but not the broad scientific consensus on evolution. The top panel of Table 8 shows that many more high school biology teachers report teaching evolution as settled science than do middle school science teachers. In contrast, nearly twice as many middle school science teachers (10.5% compared to 5.8%) report sending exclusively pro-creationist messages, and more than 50% more (21.4% compared to 13.8%) report sending mixed messages.
Table 8 Reported teacher emphasis when teaching evolution, by level (column percentages)
For comparison, the bottom panel of Table 8 restricts analysis to those teachers who report devoting at least one class hour to either evolution or creationism, thereby reducing the number of avoiders; the results nevertheless raise concerns. Even among middle school science teachers who devote formal class time to evolution or creationism, more than one in five (21%) do not comment on the scientific standing of those views, nearly one in five (18%) convey mixed messages by endorsing both evolution and creationism, and nearly one in ten (9%) endorse creationism alone.
Which factors promote more and better teaching of evolution in the middle grades?
The comparison with high school biology teachers provides a general characterization but also reveals considerable diversity among middle school science teachers. Judging from their reports, some teach far more evolution than others; and some convey messages consistent with the scientific consensus while others do not. In this section, we first seek to explain the variation in hours devoted to evolution and then to explain the variation in the messages teachers convey.
We focus on a suite of variables previously identified in the literature as potentially important predictors of the teaching of evolution. These include a key policy variable: whether the teacher works in a state that has adopted the Next Generation Science Standards (NGSS: NGSS Lead States 2013), which treat evolution as a disciplinary core idea of the life sciences. Plutzer et al. (2020) concluded that the treatment of evolution in the NGSS helped to produce a significant change in the emphasis on evolution in public high school biology classrooms between 2007 and 2019. So we looked to see whether public middle school teachers in NGSS states report allocating their time differently and sending different messages than their colleagues in states with non-NGSS standards, whether or not those standards are based on the Framework (National Research Council 2012) that underlies the NGSS.
We also examine two different measures of teachers' formal preparation: whether they hold a degree (undergraduate or graduate) in a scientific discipline, and the weighted sum of the number of semester-length (or quarter-length) classes they completed that focused primarily on evolution and the number of courses they completed that devoted at least one full class session to evolution (focused classes are weighted double). (The two coursework measures, if treated separately, would be highly correlated, making it difficult to disentangle their effects; the weighted sum treats them together.)
Finally, we also look at teacher seniority, which can affect teaching in a number of ways. Most critically, the most senior teachers were teaching many years before the NGSS were released and may have developed teaching approaches that they are reluctant to change to correspond to the demands of newer standards.
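To make the construction of the weighted coursework measure described above concrete, here is a minimal sketch; the function name and inputs are illustrative and are not drawn from the authors' codebook.

```python
# Hypothetical illustration of the weighted coursework measure: courses focused
# primarily on evolution count double, courses that merely devoted at least one
# class session to evolution count once.
def coursework_index(focused_courses: int, courses_with_some_evolution: int) -> int:
    return 2 * focused_courses + courses_with_some_evolution

# Example: two focused courses plus three courses with some evolution content
# yield an index of 2 * 2 + 3 = 7.
print(coursework_index(2, 3))  # 7
```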
To examine the impact of all these factors on teaching, we estimate two multivariate models. The first regresses the total number of class hours devoted to evolution on these independent variables. The resulting regression slope estimates and 90% confidence intervals are reported in Fig. 1 (in which the baseline effects of omitted comparison groups are included as a convenience for interpretation).
Effects of policy and formal preparation on the number of class hours devoted to evolution by middle school science teachers. Ordinary least squares regression estimates and 90% confidence intervals (accounting for sample weights and design effects, N = 596). Contrast (baseline) categories included for reference have no confidence intervals
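As an illustration of the kind of model behind Fig. 1, the sketch below fits a weighted least squares regression of class hours on the predictors discussed in the text. The data file, column names, and weight variable are assumptions, and the published analysis additionally accounts for design effects, which this simplified example only approximates with heteroskedasticity-robust standard errors.

```python
# Minimal sketch of a survey-weighted regression of evolution class hours on
# the predictors discussed in the text (NGSS adoption, coursework, degree,
# seniority). File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("middle_school_teachers.csv")

model = smf.wls(
    "evolution_hours ~ C(ngss_state) + C(coursework_level) "
    "+ science_degree + C(seniority)",
    data=df,
    weights=df["final_weight"],
)
result = model.fit(cov_type="HC1")   # robust SEs as a rough stand-in for design effects
print(result.params)                 # slope estimates, in class hours
print(result.conf_int(alpha=0.10))   # 90% confidence intervals, as in Fig. 1
```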
The first group of estimated effects concerns state adoption of the Next Generation Science Standards. The results show that middle school teachers in NGSS states report devoting 2.5 more class hours to evolution than teachers in states where the standards are more loosely based on the Framework or not based on the Framework at all. This is a substantial, and statistically significant, effect.
Teachers' college-level coursework in evolution also has a statistically significant effect. The coefficients show a strong increasing trend, with even one or two prior courses having a substantial impact (increasing reported coverage by 1.9 and 4.2 class hours, respectively). For middle school science teachers, even small exposure to college-level evolutionary science seems to matter greatly. After accounting for their more extensive coursework, middle school science teachers holding a degree in science report providing more coverage as well (1.7 additional class hours), but this estimate is not statistically significant. Finally, the plot shows that teachers with more than twenty years of experience devote fewer class hours to evolution, but the estimate is far short of statistical significance.
Overall, then, it appears that state adoption of the NGSS has an important impact on the number of class hours devoted to evolution that a typical middle school student will experience. Middle school students are likely to have additional focused instruction on evolution if their teachers majored in science and if their teachers completed college coursework with even minimal evolution content.
We next turn to model the emphasis given to evolution and creationism (as assessed by our typology). Because the typology is a non-ordered nominal variable, the appropriate model is a multinomial logistic regression model. The results of that model are reported in Supplementary Materials, Additional file 1: Fig. S1. Because the results for three of the typology outcomes were similar, we report a simpler model in Fig. 2, in which the dependent variable is coded 1 if the teacher is classified as teaching evolution as settled science and 0 otherwise. The sample here, as in the lower panel of Table 8, includes only teachers who reported devoting class time to either evolution or creationism.
Effects of policy and formal preparation on odds of reporting teaching evolution as settled science. Binary logistic regression estimates and 90% confidence intervals (accounting for sample weights and design effects, N = 402). Contrast (baseline) categories included for reference have no confidence intervals
This coefficient plot shows the odds ratio (sometimes reported as a relative risk ratio) of teaching evolution as settled science relative to all other alternatives. Markers showing ratios less than one (to the left of the red reference line) mean that the variable reduces the odds of teaching evolution as settled science; ratios over one represent positive effects. The graph also includes 90% confidence intervals around the estimates.
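A hedged sketch of how such odds ratios are typically obtained from a binary logistic regression follows; variable names are again assumptions, and sample weights and design effects are omitted here for brevity.

```python
# Sketch of a binary logistic regression for "teaches evolution as settled
# science" and the conversion of its coefficients to odds ratios. Column
# names are hypothetical; weights and design effects are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("middle_school_teachers.csv")
covered = df[df["hours_evolution_or_creationism"] > 0]   # restriction used for Fig. 2

fit = smf.logit(
    "settled_science ~ C(ngss_state) + C(coursework_level) "
    "+ science_degree + C(seniority)",
    data=covered,
).fit()

odds_ratios = np.exp(fit.params)            # > 1 raises, < 1 lowers the odds
ci_90 = np.exp(fit.conf_int(alpha=0.10))    # 90% CIs on the odds-ratio scale
print(pd.concat([odds_ratios, ci_90], axis=1))
```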
A notable effect revealed by this analysis concerns teacher seniority (which was not a statistically significant factor in the previous model of class hours devoted to evolution). Teachers with more than twenty years of experience are less likely to teach evolution as settled science (their odds of doing so are 49% lower than those with 10–19 years of experience). As shown in the more detailed model reported in Additional file 1: Fig. S1, this effect is primarily driven by more experienced teachers adopting avoidance strategies to navigate instruction in evolution. These teachers are slightly more likely to convey mixed messages but especially likely to convey no messages at all to students regarding the scientific standing of evolution.
Other than seniority, the patterns of effects here are similar to those predicting time devoted to evolution, though the smaller sample size means there is more uncertainty around each estimate. Most notably, the odds that middle school science teachers in NGSS states will report teaching evolution as settled science are more than 1.5 times greater than those in non-NGSS states. Formal course preparation has positive effects as well, but with sizable impacts requiring three or more courses covering evolution. In contrast to the previous model of class hours devoted to evolution, having a degree in science does not have a statistically significant effect (beyond the accompanying increase associated with the coursework required to earn the degree).
Middle school can and should play a major role in promoting scientific literacy in general, and in laying the groundwork for understanding evolution, the foundational framework for modern biology, in particular. Indeed, the Next Generation Science Standards promote the introduction of basic evolutionary science in the middle grades to serve as the first stage of secondary science instruction. And yet, no previous research had ever sought to measure the extent of evolution teaching and the emphasis given to evolution and creationism in US middle schools. This paper takes a first step toward filling that void.
We find that evolution is less frequently covered as a formal class topic in middle school science classes than in general biology classes typically taken by students in the ninth or tenth grade. This is not surprising, given that middle school science classes are more general and tend to be more multidisciplinary than high school biology. However, those teachers who do cover evolution devote only slightly less time to it than do high school teachers.
Middle school science teachers are more likely than their high school counterparts to report that they promote creationism, send mixed messages about the scientific standing of evolution, or simply avoid endorsing evolution's status as settled science. Consequently, fewer than 40% of middle school science classes are led by a teacher who emphasizes that evolution is a well-established scientific fact.Footnote 6
We find that forthright teaching of evolution in the public school middle school science classroom is more likely to occur when teachers themselves have strong science preparation in the form of multiple college courses that covered evolution and hold a degree in science rather than a more general education degree. We also find that instruction is both more extensive and more robust in NGSS states.
Limitations of the present study and suggested directions for future research
A limitation of the study is that the questions used in the survey were not systematically assessed for validity and reliability. This may be of especial concern with regard to those that are prima facie susceptible to multiple interpretations, such as the question assessing personal views on human evolution and the question about "the broad scientific consensus that evolution is fact" (discussed above in notes 2 and 4, respectively). Developed for Berkman and Plutzer's 2007 survey of high school biology teachers (Berkman et al. 2008), these questions were retained for purposes of comparison (as in Plutzer et al. 2020), but it would be desirable to assess their validity and reliability before using them in the future.
A further limitation of the study is that the survey did not probe as deeply as it could have in certain areas of teacher understanding—although obviously no survey can ask about everything that might be relevant. In particular, the survey did not attempt to investigate the degree to which the teachers understand evolution using standard instruments such as MATE (Rutledge and Warden 2000), as, e.g., Glaze and Goldston (2019) did with high school teachers. Similarly, the survey did not attempt to investigate the degree to which the teachers understand the nature of science—a factor correlated with understanding and acceptance of evolution (see, e.g., Lombrozo et al. 2008)—as, e.g., Nehm and Schonfeld (2007) did with high school teachers. Such investigations would be worth conducting in future research.
Similarly, the survey did not probe as deeply as it could have in certain areas relevant to teacher preparation. In particular, no data were collected about licensure.Footnote 7 Different states license middle school teachers in different grade bands. For example, Alabama licenses middle school teachers to teach grades 4 through 8, North Carolina licenses them to teach grades 6 through 9, and Wisconsin offers both an elementary and middle school license for kindergarten through grade 9 and a middle and high school license for grades 4 through 12. Different types of licenses often involve different requirements about teachers' content knowledge. It would therefore be interesting to investigate the connections between licensing and classroom practice with regard to evolution, although, because the effect of different licensing regimes on classroom practice is largely mediated by different approaches to pre-service teacher preparation, we expect that such a study would tend to confirm the present results.
Implications for teacher preparation and public policy
We found that the most senior middle school science teachers are those who are most likely to avoid addressing evolution's scientific status, which suggests that a degree of improvement through retirement and replacement is likely to occur naturally in the coming years. But what positive steps can be taken to improve middle school evolution education?
We found that middle school science teachers were more likely to devote more class hours to evolution and more likely to present evolution as settled science when they themselves have strong science preparation. In light of this finding, it is clear that improving evolution education at the middle school level depends on middle school science teachers acquiring a solid, scientific understanding of evolutionary biology. It is beyond the scope of the present study to address the vexed and complex question of how to do so, but we suggest that a reasonable and achievable goal would be for middle school science teachers to achieve parity, with respect to both time devoted to and emphasis on the settled status of evolution, with their high school counterparts.
It would be helpful for there to be strong incentives for teacher preparation programs to ensure that middle school science teachers learn about evolution properly. Because we found that middle school science teachers were more likely to devote more class hours to evolution and more likely to present evolution as settled science in states that adopted the NGSS, the recommendation for public policy with regard to state science standards is clear: to improve evolution education at the middle school level, adopt the NGSS or standards with a comparable treatment of evolution. Doing so will provide teacher preparation programs with the incentive to ensure that newly minted teachers are able to meet the demands of the standards. In addition, since only 27 states require that middle school teachers pass a subject-specific licensing test (National Council on Teacher Quality 2020), it seems plausible that reforms to licensure that required teachers charged with teaching evolution to demonstrate their mastery of the field of biology would similarly provide incentive to teacher preparation programs to ensure that pre-service teachers learn about evolution properly.
Middle school science teachers play a key role in the science education of U.S. students. Our broad statistical portrait provides an overview that merits elaboration with more detailed research on specific topics such as middle school lesson plans, professional development for middle school science teachers, teacher education curricula, and more. These topics, and many others, have been studied extensively at the high school level. It is time to pay comparable attention to the middle grades, with an eye not only to understanding but also to alleviating the challenges to the teaching of evolution. We hope that this initial study will spur further research on middle school evolution education.
Replication data set, codebook, and code to replicate all tables and figures will be made available at https://dataverse.harvard.edu/dataverse/2019_Science_Teachers/ no later than 12 months after publication. Investigators may request access to the data sooner.
We can only speculate about the reasons for the neglect. It may be due to the fact that the most visible political and legal conflicts over the teaching of evolution involved teachers—such as John Scopes, Susan Epperson, and Donald Aguillard—who taught at the high school level. It may be owing to the expectation that evolution is taught less extensively in lower grades and studying its teaching there is therefore less important. And it may be that middle school science teachers, as generalists, are not expected to have the training, inclination, or opportunities to teach evolution in much depth.
For simplicity, we describe "God created human beings" as the creationist response, taking the other responses to signal acceptance of human evolution. We acknowledge that the question is a crude instrument, which fails both to reflect the complexity of the conceptual geography and to accommodate ambivalence and uncertainty (see Branch 2017 for discussion). But the fact that it is frequently used makes it helpful for purposes of comparison.
We recognize that intelligent design is simply a strategy for promoting creationism, but we maintained this wording to maximize the comparability of our survey results with those from Berkman and Plutzer's 2007 survey of high school biology teachers (Berkman et al. 2008).
No definition of "fact" was provided in the survey, and it is possible that the ambiguity of the term (discussed in the context of evolution by Jean and Lu 2018) affected the results; further investigation with differently worded questions is indicated. The question originated in Berkman and Plutzer's 2007 survey of high school biology teachers (Berkman, Pachecho, and Plutzer 2008) and was retained to ensure comparability.
Eighty percent of middle school teachers selecting "not applicable" also reported spending zero hours on human evolution and general evolution. However, their avoidance of evolution is not due to evolution being completely irrelevant to their classes. Of the middle school teachers who selected "not applicable," 64% reported devoting class time to cell biology, ecology, or biodiversity—topics to which evolution is clearly relevant.
That said, evolution is presented in a variety of courses offered in multiple grades in the middle schools, judging from the courses described by our sample of teachers. With the average middle school science class devoting a bit over nine class hours to evolution, students completing three middle school science classes will have the opportunity to learn about evolution in a cumulative, if not necessarily structured or intensive, way.
We are grateful to one of the anonymous reviewers for emphasizing the relevance of licensure.
Extrapolating, had we taken the time to track down the gender of all 548 teachers through web searches, LinkedIn, etc., we would have gotten an additional 37 completed surveys.
American Association for Public Opinion Research. Standard definitions: final dispositions of case codes and outcome rates for surveys. 9th edition. AAPOR. 2006. https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf.
Berkman MB, Pacheco JS, Plutzer E. Evolution and creationism in America's classrooms: a national portrait. PLoS Biol. 2008;6(5):e124.
Berkman M, Plutzer E. Evolution, creationism, and the battle to control America's classrooms. Cambridge: Cambridge University Press; 2010.
Branch G. Understanding Gallup's latest poll on evolution. Skeptical Inquirer. 2017;41(5):5–6.
Brenan M. 40% of Americans believe in creationism. Gallup. 2019. https://news.gallup.com/poll/261680/americans-believe-creationism.aspx.
Centers for Disease Control and Prevention. Results from the School Health Policies and Practices Study 2014. 2015. https://www.cdc.gov/healthyyouth/data/shpps/pdf/SHPPS-508-final_101315.pdf.
Davis JD, Choppin J, McDuffie AR, Drake C. Middle school mathematics teachers' perceptions of the common core state standards for mathematics and its impact on the instructional environment. School Sci Math. 2017;117:239–49.
Dillman DA, Smyth JD, Christian LM. Internet, phone, mail, and mixed-mode surveys: the tailored design method. New York: Wiley; 2014.
Dragowski EA, McCabe PC, Rubinson F. Educators' reports on incidence of harassment and advocacy toward LGBTQ students. Psychol Schools. 2016;53:127–42.
Fowler SR, Meisels GG. Florida teachers' attitudes about teaching evolution. Am Biol Teach. 2010;72(2):96–9.
Friedrichsen PJ, Linke N, Barnett E. Biology teachers' professional development needs for teaching evolution. Sci Educ. 2016;25:51–61.
Glaze AL, Goldston MJ. US science teaching and learning of evolution: a critical review of the literature 2000–2014. Sci Educ. 2015;99:501–18.
Glaze A, Goldston J. Acceptance, understanding & experience: exploring obstacles to evolution education among Advanced Placement teachers. Am Biol Teach. 2019;81(2):71–6.
Griffith J, Brem S. Teaching evolutionary biology: pressures, stress, and coping. J Res Sci Teach. 2004;41:791–809.
Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin Q. 2008;72:167–89.
Hart KC, Fabiano GA, Evans SW, Manos MJ, Hannah JN, Vujnovic RK. Elementary and middle school teachers' self-reported use of positive behavioral supports for children with ADHD: a national survey. J Emot Behav Disord. 2017;25:246–56.
Jean J, Lu Y. Evolution as fact? A discourse analysis. Soc Stud Sci. 2018;48(4):615–32.
Keeter S, Miller C, Kohut A, Groves RM, Presser S. Consequences of reducing nonresponse in a national telephone survey. Public Opin Q. 2000;64:125–48.
Keeter S. Evidence about the accuracy of surveys in the face of declining response rates. In: Vannette DL, Krosnick JA, editors. Palgrave handbook of survey research. London: Palgrave Macmillan; 2018. p. 19–22.
Klahn VL. The stories of middle school science teachers' teaching evolution: A narrative inquiry [dissertation]. Portland (Oregon): Concordia University–Portland; 2020. 156 pp. https://digitalcommons.csp.edu/cgi/viewcontent.cgi?article=1459&context=cup_commons_grad_edd.
Lang SN, Mouzourou C, Jeon L, Buettner CK, Hur E. Preschool teachers' professional training, observational feedback, child-centered beliefs and motivation: direct and indirect associations with social and emotional responsiveness. Child Youth Care Forum. 2017;46:69–90.
Lerner LS. Good science, bad science: teaching evolution in the states. Washington DC: Thomas B. Fordham Foundation; 2000. https://fordhaminstitute.org/national/research/good-science-bad-science-teaching-evolution-states.
Lombrozo T, Thanukos A, Weisberg M. The importance of understanding the nature of science for accepting evolution. Evo Edu Outreach. 2008;1:290–8.
Mead LS, Mates A. Why state standards are important to a strong science curriculum and how states measure up. Evo Edu Outreach. 2009;2:359–81.
Moore R, Kraemer K. The teaching of evolution and creationism in Minnesota. Am Biol Teach. 2005;67:457–66.
Nadelson LS, Nadelson S. K–8 educators perceptions and preparedness for teaching evolution topics. J Sci Teach Edu. 2010;21(7):843–58.
National Center for Education Statistics. Table 203.10: Enrollment in public elementary and secondary schools, by level and grade: Selected years, fall 1980 through fall 2028. Digest of Education Statistics (2018 ed.). 2019. https://nces.ed.gov/programs/digest/d18/tables/dt18_203.10.asp.
National Center for Education Statistics. School and staffing survey methodology report. n.d. https://nces.ed.gov/surveys/sass/methods1112.asp.
National Council on Teacher Quality. Middle school content knowledge national results. 2020. https://www.nctq.org/yearbook/national/Middle-School-Content-Knowledge-91.
National Research Council. A framework for K–12 science education: practices, crosscutting concepts, and core ideas. Washington DC: The National Academies Press; 2012.
Nehm RH, Schonfeld IS. Does increasing biology teacher knowledge of evolution and the nature of science lead to greater preference for the teaching of evolution in schools? J Sci Teacher Educ. 2007;18:699–723.
NGSS Lead States. Next generation science standards: for states, by states. Washington DC: The National Academies Press; 2013.
Pew Research Center. Public and scientists' views on science and society. 2015. https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2015/01/PI_ScienceandSociety_Report_012915.pdf.
Plutzer E, Branch G, Reid A. Teaching evolution in U.S. public schools: a continuing challenge. Evo Edu Outreach. 2020;13(1):1–15.
Puhl RM, Neumark-Sztainer D, Austin SB, Suh Y, Wakefield DB. Policy actions to address weight-based bullying and eating disorders in schools: views of teachers and school administrators. J Sch Health. 2016;86:507–15.
Rutledge ML, Warden MA. Evolutionary theory, the nature of science and high school biology teachers: critical relationships. Am Biol Teach. 2000;62:23–31.
Skoog G. The coverage of evolution in high school biology textbooks published in the 1980s. Sci Edu. 1984;68(2):117–28.
Smith PS. 2018 NSSME+: Trends in U.S. science education from 2012 to 2018. Horizon Research. 2020. http://horizon-research.com/horizonresearchwp/wp-content/uploads/2020/04/Science-Trend-Report.pdf.
Troia GA, Graham S. Common core writing and language standards and aligned state assessments: a national survey of teacher beliefs and attitudes. Read Writ. 2016;29:1719–43.
Vazquez B. A state-by-state comparison of middle school science standards on evolution in the United States. Evol Educ Outreach. 2017;10(1):5.
We thank Kate Carter and Brad Hoge for advice about the questionnaire content, Seth B. Warner for assistance with data management, and two anonymous reviewers for their helpful suggestions.
The funding of the data collection and analysis was provided by the authors' home institutions.
National Center for Science Education, Oakland, CA, USA
Glenn Branch & Ann Reid
Penn State University, State College, PA, USA
Eric Plutzer
Glenn Branch
Ann Reid
EP was responsible for all fieldwork, data collection, and data analysis, contributed to questionnaire content, and collaborated in the writing of the manuscript. GB and AR contributed to questionnaire content and collaborated in the writing of the manuscript.
Correspondence to Eric Plutzer.
This project involved voluntary participation in a survey. The procedures and recruitment materials were reviewed by the IRB at Penn State University (study #00011249) and declared exempt.
Additional file 1: Table S1. Reported pre-service and continuing education coursework on evolution (percentages within row). Figure S1. Effects of policy and formal preparation on teacher emphasis as measured by typology class. Multinomial logistic regression estimates and 90% confidence intervals (accounting for sample weights and design effects).
2019 Survey of American Science Teachers: materials and methods
The 2019 Survey of American Science Teachers is the third in a series of three scientific surveys of science teachers. The first, the 2007 National Survey of High School Biology Teachers, was funded by the National Science Foundation and focused on high school biology teachers and their approach to the teaching of evolutionary biology. The second, the 2014–2015 National Survey of American Science Teachers, was conducted by Penn State with the National Center for Science Education and focused on the teaching of climate change. This second study added a sample of middle school teachers and sampled high school teachers of all four core subjects: earth science, biology, chemistry, and physics. The 2019 Survey of American Science Teachers, the third study in the series, retains a focus on high school biology teachers (from the 2007 survey) and middle school science teachers (from the 2014–2015 survey).
In order to allow valid comparisons to prior surveys, the most recent effort replicated many of the questions and adhered closely to the study design from previous waves. As a result, when examining the data from identical questions, it is possible to compare this wave's middle school sample to the middle school sample from 2014–2015, and to compare the high school biology sample to the 2007 survey and to the biology subgroup within the 2014–2015 high school sample.
The 2019 Survey of American Science Teachers employs two stratified probability samples of science educators. The first represents the population of all science teachers in public middle or junior high schools in the United States. The second represents all biology or life science teachers in public high schools in the United States.
There is no comprehensive list of such educators. However, a direct mail marketing company, Market Data Retrieval (MDR, a division of Dun and Bradstreet) maintains and updates a database of 3.9 million K–12 educators.
MDR selected probability samples conforming to our specifications. Specifically, MDR first identified eligible schools (public middle and junior high schools, and public high schools) and then selected all middle school teachers with the job title "science teacher" and all high school teachers with the job title "biology teacher" or "life science teacher."
The middle school universe contained 55,001 teachers with full name, school name and school address. From these, teachers were selected with probability 0.0455 independently from each of 151 strata defined by urbanism (city, suburb, all others) and state, with the District of Columbia being its own stratum. This resulted in a sample of 2511 middle school science teachers.
The high school biology universe contained 30,847 teachers with full name, school name and school address. From these, teachers were selected with probability 0.0810 independently from each of 151 strata defined by urbanism (city, suburb, all others) and state, with the District of Columbia being its own stratum. This resulted in a sample of 2503 high school biology teachers.
Of the 5014 elements in the two samples, MDR provided current email addresses for 4150, or 82.8%.
The questionnaires for this survey included questions employed in the 2007 National Survey of High School Biology Teachers (which focused on the teaching of evolution), and the 2014–2015 National Survey of American Science Teachers (which focused on the teaching of climate change). A few new questions were developed to measure teachers' perceptions of local public opinion.
The survey was initially written for pencil/paper administration and—when finalized—programmed so it could be administered on the Qualtrics online survey platform.
The survey design was a "push to mail" strategy in which all 5014 sampled teachers received an advance pre-notification letter, a survey packet with an incentive ($2 in cash) and a postage-paid return envelope, two reminder postcards, and a replacement survey packet. Non-respondents for whom we had an email address then received an email invitation to complete the survey online.
This included 3161 non-respondents with emails supplied by MDR, and an additional 352 collected during the non-response audit.
Non-respondents then received two additional email reminders. Field dates are summarized in Table 9.
Non-response audit
Beginning on April 11, 2019, after most paper surveys had been received and logged, we identified a subsample of 700 non-respondents, and launched a detailed non-response audit on this group. The primary goal was to confirm or disconfirm their eligibility. From the time we began the audit of non-respondents, we received questionnaires from 62 of these teachers. They were removed from the audit, leaving 638 audited non-respondents.
For each person, we first searched for their school, and sought to locate a current school staff directory. If no directory was found, we searched all classroom web sites at the school, and searched the school web site for the teacher's full name and last name. If we found a match for the teacher anywhere on the school web site, that non-respondent was confirmed as eligible.
In some cases, we found a teacher in the same subject with the same first name but a different last name. If we were able to confirm absolutely that the teacher had recently changed names (e.g., their email matched the name in our list), that teacher was confirmed as eligible.
If we did not find the teacher, we did two broader web searches. The first was a search for the teacher's full name and the keyword "science." In some instances, this brought up results indicating that the teacher had changed jobs or retired (e.g., information on the former teacher's LinkedIn page). These teachers were confirmed as ineligible. We recorded the following outcomes:
Teacher confirmed as eligible—listed on school website.
Teacher confirmed as eligible—classroom web pages identified.
Teacher confirmed as eligible—other (e.g., listed in recent news story).
Confirmed ineligible—school has current staff directory, and teacher not listed.
Confirmed ineligible—other (e.g., teacher identified as instructing in a different subject).
Unable to determine—school does not have a staff directory.
Unable to determine—school does not have functional web site.
The final results of the audit are summarized in Table 10.
Thus, of all non-respondents (and assuming ¼ of the unknowns are ineligible) we estimate that 72% are eligible. This is the basis for calculating the "e" component in the response rate (American Association for Public Opinion Research 2006).
Dispositions and response rates
Every individual on the initial mailing list of 5014 names and addresses was assigned a disposition code.
A survey was considered complete if the respondent answered questions from at least two of the following three question groups: Question #1, which asked teachers how many class hours they devoted to each of nine topics (appearing on the second page of the paper questionnaire); a group of attitude questions appearing on pages 7–8 of the written questionnaire; and a group of demographic and background variables on pages 9 and 11 of the paper questionnaire.
A survey was considered partially complete if the respondent answered at least how many class hours they devoted to each of nine topics (appearing on the second page of the paper questionnaire). A summary of the dispositions appears in Table 11.
Response rates
We utilize the response rate definitions published by the American Association for Public Opinion Research (2006). These require an estimate of the percentage of all non-respondents who are eligible or non-eligible (e.g., due to retirement) to complete the survey. This quantity, referred to as e, was estimated from a detailed audit of 638 non-respondents. Based on these dispositions we calculate the response rate (AAPOR response rate formula #4) to be 37%. This is interpreted as the percentage of all eligible respondents who submitted a usable questionnaire (complete or partially complete). Sampled teachers who returned questionnaires that were blank or failed to qualify as partial were considered non-respondents. The details of the response rate calculation are reported in Table 12.
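For readers unfamiliar with the formula, the sketch below implements AAPOR Response Rate 4; the disposition counts in the example are placeholders rather than the study's actual counts, which appear in Tables 11 and 12.

```python
# AAPOR Response Rate 4: usable returns (complete + partial) divided by all
# known-eligible cases plus the estimated eligible share (e) of cases of
# unknown eligibility. Counts below are placeholders, not the study's.
def aapor_rr4(complete, partial, refusals, non_contacts, other,
              unknown_eligibility, e):
    returns = complete + partial
    known_eligible = returns + refusals + non_contacts + other
    return returns / (known_eligible + e * unknown_eligibility)

# Placeholder example using the eligibility estimate e = 0.72 reported above.
print(round(aapor_rr4(complete=1000, partial=100, refusals=300,
                      non_contacts=1500, other=100,
                      unknown_eligibility=800, e=0.72), 2))
```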
Response rates by teacher and school characteristics
Response rates can be broken down and estimated for different groups, provided that there are data for non-respondents as well as respondents. This means we cannot test for differences based on questionnaire items (we lack information on seniority, degrees earned, religiosity, and so on for non-respondents).
We can, however, utilize "frame" variables and those provided by the direct mail vendor MDR. Table 13 reports on eight such comparisons.
Teacher characteristics. The response rate was somewhat lower for middle school teachers (34%) compared to high school biology teachers (40%). Using the salutations (Mr., Ms., Miss, etc.) provided in the direct mail file, we classified teachers as female, male, or gender unknown. The latter group included a small number of teachers with salutations of "Dr." or "Coach." However, the large majority had gender-ambiguous first names such as Tracy, Jamie, Kim or Chris. Men (39%) and women (38%) did not differ significantly, but we had a lower return among those whose communications could not be personalized (Dear Kim Smith rather than Dear Mr. Smith, for example).Footnote 8
The value of conducting an email follow-up to the pencil/paper survey is evident in the 39% response rate for those teachers with a valid email supplied by the vendor (those lacking an email had a 30% response rate). Note that some of these additional returns were paper surveys returned only after teachers received an email announcing the availability of a web survey.
School type. We had a somewhat lower response rate from teachers at public charter schools (31%). Note, however, that because charters still represent a tiny slice of the public school market, raising their response rate to the overall average would have only increased the number of surveys completed by charter school teachers by three or four.
School demographics. As in previous surveys, we find lower response rates from teachers working in schools with medium or large minority populations. Schools whose student bodies are more than 15% African American or more than 15% Hispanic, or more than 50% free lunch eligible, all had response rates between 30 and 33%.
Urbanism. Finally, response rates did not differ substantially by urbanism except for schools in central cities with populations exceeding 250,000. Teachers in these large school systems responded at a 30% rate.
Overall, we uncovered systematic differences. By and large these are modest in magnitude and do not introduce major distortions in the data. For example, teachers in large central city school systems constituted 12% of the teachers we recruited, and 10% of the final data set. However, since these individual differences might be additive (e.g., central city schools with many minority and school lunch-eligible students), we estimated a propensity model to assess the total impact of all factors simultaneously.
Table 14 reports a logistic regression model in which the dependent variable is the submission of a usable survey (scored 1, all other dispositions scored 0, with confirmed ineligible respondents dropped from the analysis).
This confirms most of the observational differences reported in Table 13. The odds ratio column is more intuitive and shows that the odds of returning a usable survey were 26% higher in the high school sample, 30% higher for teachers with a valid email on file, and about 26% higher when we used a gender-based salutation. Teachers at schools with a sizable Black and Hispanic presence in the student body are also underrepresented (odds ratios below 1). However, after controlling for student body composition, the effects of school lunch eligibility and urbanism are diminished.
Propensity scores. We use this model to calculate the probability of responding for all original members of the sample, which allows us to assign a response propensity to every respondent. Those whose characteristics make them unlikely to respond must, therefore, speak on behalf of more non-respondents. We use the inverse of the propensity as a second-stage weighting adjustment.
Analysis weights were constructed in a two-stage process. A base weight adjusts for possible under-coverage by the sample supplier and the non-response adjustment balances the sample based on characteristics that are predictive of non-response (e.g., student body composition).
Base weight. MDR claims to have contact information for approximately 85% of all K–12 teachers, but that coverage rate can vary by grade, subject, and state.
We assume that science teachers comprise the same percentage of all middle school teachers in each state, and we assume that biology teachers constitute the same share of high school faculty in each state. It follows that the distribution across states in the MDR data base should be proportional to the number of teachers in each state. If not, adjustment is necessary to make the sample fully representative.
We therefore constructed the following two ratios:
$$ \frac{\text{Number of middle school teachers as counted by the National Center for Education Statistics}}{\text{Number of middle school teachers in the MDR direct mail database}} $$
$$ \frac{\text{Number of high school teachers as counted by the National Center for Education Statistics}}{\text{Number of biology teachers in the MDR direct mail database}} $$
These were each standardized to have a mean of 1.0 so that ratios above 1 indicate relative under-coverage by MDR.
Non-response calibration. The second stage weight is based on the logistic regression model reported in Table 14. From this model, we calculated the probability of completing the survey (defined as completing a usable survey, classified as "complete" or "partial" in Table 11).
The second stage non-response adjustment is simply the inverse of the response propensity, 1/π.
Analysis weight (designated as final_weight in the data set) is the product of the first stage coverage adjustment and the second stage non-response adjustment, standardized so it has a mean of 1. The weights range from 0.24 to 3.23, with a standard deviation of 0.35. Ninety percent of the cases have weights between 0.55 and 1.60, indicating that weighting will have only a small impact on statistical results in comparison to unweighted analyses.
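The two-stage weighting just described can be sketched as follows; the input file, column names, and model specification are assumptions for illustration, not the authors' actual code.

```python
# Sketch of the two-stage analysis weight: a state-level coverage adjustment
# (NCES count / MDR count, scaled to mean 1) times the inverse of the
# response propensity, with the product restandardized to mean 1.
import pandas as pd
import statsmodels.formula.api as smf

frame = pd.read_csv("sample_frame.csv")   # hypothetical file with all 5014 sampled teachers

# Stage 1: coverage (base) weight.
frame["coverage_ratio"] = frame["nces_state_count"] / frame["mdr_state_count"]
frame["base_weight"] = frame["coverage_ratio"] / frame["coverage_ratio"].mean()

# Stage 2: response propensity from a logistic model of returning a usable survey.
propensity_fit = smf.logit(
    "usable_survey ~ high_school_sample + has_email + gendered_salutation "
    "+ pct_black + pct_hispanic + pct_free_lunch + large_city",
    data=frame,
).fit()
frame["propensity"] = propensity_fit.predict(frame)

# Final analysis weight for respondents only.
resp = frame[frame["usable_survey"] == 1].copy()
resp["final_weight"] = resp["base_weight"] / resp["propensity"]
resp["final_weight"] /= resp["final_weight"].mean()
```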
Branch, G., Reid, A. & Plutzer, E. Teaching evolution in U.S. public middle schools: results of the first national survey. Evo Edu Outreach 14, 8 (2021). https://doi.org/10.1186/s12052-021-00145-z | CommonCrawl |
HIV virological non-suppression and its associated factors in children on antiretroviral therapy at a major treatment centre in Southern Ghana: a cross-sectional study
Adwoa K. A. Afrane1,
Bamenla Q. Goka1,2,
Lorna Renner1,2,
Alfred E. Yawson3,
Yakubu Alhassan4,
Seth N. Owiafe1,
Seth Agyeman5,
Kwamena W. C. Sagoe6 &
Awewura Kwara7
Children living with human immunodeficiency virus (HIV) infection require lifelong effective antiretroviral therapy (ART). The goal of ART in HIV-infected persons is sustained viral suppression. There is limited information on virological non-suppression or failure and its associated factors in children in resource limited countries, particularly Ghana.
A cross-sectional study of 250 children aged 8 months to 15 years who had been on ART for at least 6 months attending the Paediatric HIV clinic at Korle Bu Teaching Hospital in Ghana was performed. Socio-demographic, clinical, laboratory and ART adherence-related data were collected using questionnaires as well as medical records review. Blood samples were obtained for viral load and CD4+ count determination. A viral load level > 1000 copies/ml on ART was considered virological non-suppression. Logistic regression was used to identify factors associated with virological non-suppression.
The mean (±SD) age of the study participants was 11.4 ± 2.4 years and the proportion of males was 53.2%. Of the 250 study participants, 96 (38.4%) had virological non-suppression. After adjustment for significant variables, the factors associated with non-suppressed viral load were female gender (AOR 2.51 [95% CI 1.04–6.07], p = 0.041), having a previous history of treatment of tuberculosis (AOR 4.95 [95% CI 1.58–15.5], p = 0.006), severe CD4 immune suppression status at study recruitment (AOR 24.93 [95% CI 4.92–126.31], p < 0.001) and being on a nevirapine (NVP) based regimen (AOR 7.93 [95% CI 1.58–1.15], p = 0.005).
The prevalence of virological non-suppression was high. Virological non-suppression was associated with a previous history of TB treatment, female gender, severe CD4 immune suppression status at study recruitment and being on a NVP-based regimen. Early initiation of ART and phasing out NVP-based regimens might improve viral load suppression in children. In addition, children with a history of TB may need focused measures to maximize virological suppression.
Human immunodeficiency virus (HIV) infection continues to be one of the most relevant infectious diseases [1, 2]. Antiretroviral therapy (ART) is a critical component of the overall management plan for HIV infection. The primary goal of ART is to suppress viral replication, which ultimately results in restoration of the immune system, reduction in HIV transmission and a general improvement in the quality of life of people infected with HIV [3, 4]. The World Health Organization (WHO) in 2013 recommended HIV viral load (HIV VL) monitoring as the gold standard for monitoring ART effectiveness in resource-limited settings [5]. This recommendation was adopted by Ghana in 2016. According to Ghana's National AIDS Control Programme (NACP) guidelines, viral load testing is recommended 6 months after initiating ART and thereafter annually for people who have achieved virological suppression [6]. However, people with HIV VL levels > 1000 copies/ml are required to undergo intensified adherence support, after which the viral load is repeated 3 months later in order to differentiate poor adherence from treatment failure [6].
Virological non-suppression could be due to poor adherence to ART, resistance to ART (transmitted or acquired) or pharmacokinetic issues (poor absorption, under-dosing and drug interactions) [7, 8]. ART regimen-related factors could result in the development of drug resistance [9, 10]. In addition, the delay in introducing newer, more potent antiretrovirals with a high barrier to drug resistance, such as second-generation integrase strand transfer inhibitors (INSTIs), in resource-limited settings is a key contributing factor to virological non-suppression.
In previous studies, a wide range of factors have been associated with virological non-suppression in children, including socio-demographic factors such as younger age (less than 3 years) [11], male gender [12, 13], advanced WHO HIV stage [14, 15], co-infection with tuberculosis (TB) [16, 17], nevirapine (NVP)-based therapy [18, 19], and poor adherence to treatment [20, 21]. Adherence to ART has always been a challenge for the paediatric population because drug formulations are often less tolerable and may require dose adjustment as the child grows [22, 23]. Lack of a consistent caregiver in younger children and disclosure issues in adolescents also pose a challenge to medication adherence [22, 23]. These unique issues in children and adolescents can result in virological non-suppression, even without the presence of drug resistance mutations [8].
High rates of virological non-suppression are common in children in low- and middle-income countries (LMICs) and could be due to poor medication adherence or treatment failure [8]. Identifying patients with virological non-suppression is important for intensified adherence counselling and increased frequency of follow up, but it is also an important sign of treatment failure, especially in persons with good adherence [24]. The aim of this study was to determine the prevalence of virological non-suppression and its associated factors among children living with HIV (CLWH) attending the Paediatric HIV clinic at a teaching hospital in Ghana. This knowledge would help to target interventions for improving virological suppression, reduce ART failure and ultimately improve the clinical care of CLWH.
Study design and setting
This study used a cross-sectional design to recruit paediatric HIV positive patients attending the outpatient clinic from October 2017 to July 2018 at the Korle Bu Teaching Hospital (KBTH), Accra, Ghana. KBTH has 2500 beds and is currently the largest hospital in West Africa and the third largest on the African continent [25]. The hospital is the main tertiary referral centre in Accra and serves the majority of the Southern part of Ghana. The Paediatric HIV clinic at KBTH has been providing comprehensive HIV/AIDS care and management services since 2004. Patients are referred from primary and secondary health facilities as well as from other departments within the hospital. An average of 40 patients are seen per clinic day. The National AIDS Control Program (NACP) provides ART medication free of charge. HIV VL and CD4+ counts are also paid for by the program. The clinic uses national treatment guidelines that are in line with current WHO recommendations for ART [26]. Treatment is initiated for all patients irrespective of the CD4+ count. Antiretroviral drugs available and in use in various combinations at the clinic at the time of the study were zidovudine (AZT), lamivudine (3TC), abacavir (ABC), efavirenz (EFV), nevirapine (NVP), tenofovir (TDF), and ritonavir boosted lopinavir (LPV/r).
Study participants
Study participants were CLWH aged between 8 months and 15 years who had been on ART for at least 6 months. Children who had been transferred from referral points and did not have their previous notes available, and children with HIV-2 mono-infection, were excluded from the study because there are currently no FDA-approved assays for quantification of HIV-2 RNA. Voluntary informed consent was obtained from parents and guardians of study participants, and assent from children aged 10–15 years. Before seeking informed consent/assent, study participants were screened to determine whether they were eligible. A questionnaire was administered to study participants and caregivers. Information was also collected retrospectively from their medical records.
Sample size determination
Cochran's sample size formula [27] was used to calculate the sample size needed to determine the prevalence of non-suppressed viremia. A minimum sample size of 250 study participants was determined using a confidence level of 95% and an error margin of 5%. Consecutive cases of children with HIV who met the eligibility criteria were enrolled into the study until the sample size was reached.
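For reference, Cochran's formula for estimating a proportion is n = z²p(1 − p)/e². The short sketch below is illustrative only: the expected prevalence p used by the authors is not stated in the text, so the value of 20% here is a hypothetical assumption chosen to show how a figure close to 250 can arise.

```python
import math

def cochran_sample_size(p, margin=0.05, z=1.96):
    """Cochran's formula for a proportion: n0 = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Hypothetical expected prevalence of 20% (not stated in the paper):
print(cochran_sample_size(0.20))  # -> 246
```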
Participants had their drug doses checked for appropriateness of their current drug regimen. The dosage (frequency and dose/kg or dose/m2) was cross-checked with the recommended dosage and appropriateness was documented as 'yes' or 'no.' ART adherence was assessed by a pharmacist on the day of recruitment of the study participant using pill count, which is usually done at every appointment visit.
Pill count (the number of pills taken) was calculated based on the number of unused pills that the caregiver brought back when refilling their ART medication on the day of study recruitment [28]. Total refill was the expected number of pills to have been taken since the last visit.
Pill count was calculated as: Total refill − Pills left.
Percentage adherence was calculated as
$$ \frac{\mathrm{Pill}\ \mathrm{count}}{\mathrm{Total}\ \mathrm{Refill}}\times 100\% $$
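A minimal sketch of this calculation (illustrative values only; the function and variable names are not from the paper):

```python
def percentage_adherence(total_refill, pills_left):
    """Adherence (%) = (total refill - pills left) / total refill * 100."""
    pill_count = total_refill - pills_left
    return 100.0 * pill_count / total_refill

# e.g., 60 pills dispensed, 4 returned at the refill visit:
print(round(percentage_adherence(total_refill=60, pills_left=4), 1))  # 93.3
```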
Laboratory investigations done on the day of recruitment were CD4+ count and HIV VL. CD4+ absolute cell count and cell percentage were quantified by a dual-platform flow cytometry technology using a FACS Count system (Becton-Dickinson, Franklin Lakes, NJ, USA) according to manufacturer's instructions at the Fevers Unit Laboratory of KBTH. The HIV RNA VL testing was performed at the Central Laboratory of KBTH using the COBAS® AMPLICOR Monitor test (Roche Diagnostic Systems, Branchburg, NJ, USA), with a lower limit of detection of 20 copies/ml. The laboratory at the Fevers Unit and the Central Laboratory of KBTH are certified by the South African Public Health Reference Laboratory and participate in its external quality assurance testing programme.
Operational definitions
In accordance with WHO guidelines, study participants were categorized as having virological non-suppression if the HIV VL level was > 1000 copies/ml on the day of recruitment, after at least 6 months of using ART. Drug adherence was determined by caregiver report and categorized according to WHO guidelines as follows: Good: ≥ 95%; Fair: 85–94%; Poor: < 85% [29].
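The WHO cut-offs quoted above translate directly into a simple categorisation rule; the sketch below only illustrates that rule and is not code from the study:

```python
def who_adherence_category(adherence_pct):
    """Categorise adherence (%) using the WHO cut-offs cited in the study."""
    if adherence_pct >= 95:
        return "Good"
    if adherence_pct >= 85:
        return "Fair"
    return "Poor"

print(who_adherence_category(93.3))  # Fair
```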
The dependent variable was virological non-suppression (VL > 1000 copies/ml). The independent variables were the sociodemographic factors, clinical factors and ART adherence factors. Pearson's chi-square test of association was used to determine the strength of association between the independent categorical variables and the outcome variable (virological non-suppression). A logistic regression model was used to determine the factors influencing HIV VL suppression among study participants, with statistical significance set at p < 0.05. With the exception of age of the child, which was included in the binary logistic regression model because of known associations in the literature [8, 11], all other variables with p-values < 0.10 from the Pearson's chi-square test were included. The crude odds ratio (OR) and adjusted odds ratio (AOR) were determined and their respective 95% confidence intervals were calculated.
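As an illustration of this kind of analysis (not the authors' code), a binary logistic regression reporting adjusted odds ratios with 95% confidence intervals can be fitted as sketched below. The data frame, variable names and coefficients are entirely hypothetical; in practice the data frame would hold the study variables described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical, randomly generated data (NOT the study data set).
rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "age_years": rng.integers(1, 16, n),
    "female": rng.integers(0, 2, n),
    "prior_tb": rng.integers(0, 2, n),
    "nvp_regimen": rng.integers(0, 2, n),
    "severe_cd4": rng.integers(0, 2, n),
})
logit = -1.0 + 0.9 * df["female"] + 1.6 * df["prior_tb"] + 2.0 * df["nvp_regimen"]
df["non_suppressed"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit the logistic model and express results as adjusted odds ratios.
X = sm.add_constant(df[["age_years", "female", "prior_tb", "nvp_regimen", "severe_cd4"]])
res = sm.Logit(df["non_suppressed"], X).fit(disp=0)

aor_table = pd.DataFrame({
    "AOR": np.exp(res.params),
    "CI 2.5%": np.exp(res.conf_int()[0]),
    "CI 97.5%": np.exp(res.conf_int()[1]),
    "p-value": res.pvalues,
})
print(aor_table.round(3))
```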
Ethical considerations
Ethical approval (KBTH-IRB/ 00060/2017) was obtained from the Institutional Review Board of Korle Bu Teaching Hospital, Accra, Ghana. Informed consent was obtained from parents or legal guardians for each minor participant prior to enrolment to participate in the study.
The baseline characteristics of the study participants are shown in Table 1. Of the 250 participants, 59.2% were within the age range of 10 to 15 years and 53.2% were males. Overall, 46.0% of primary caregivers were mothers, 44% were guardians (grandmother, grandfather, aunt, uncle, foster caregiver) and only 10% were fathers. The educational and occupational status of parents is described in Table 1. Overall, the mothers of 157 (62.8%) participants were HIV positive while 93 (37.2%) of mothers had unknown HIV status. The unknown status included parents who were dead. The fathers of 62 (24.8%) study participants were HIV positive whilst 105 (42.0%) were HIV negative and 83 (33.2%) had unknown HIV status.
Table 1 Baseline characteristics of patients according to virological suppression status
Clinical and laboratory details of study participants
Of all participants, 71 (28.4%) had a history of previous TB treatment and 97 (38.8%) had WHO clinical stage 1 disease at initiation of ART.
Baseline VL and CD4+ count of study participants
Out of the 250 study participants, only 74 (30%) had baseline VL documented. Of the 74 study participants that had records of baseline VL, the median (range) baseline VL was 4.8 (1.7–6.9) log10 copies/ml. The breakdown of baseline VL levels is shown in Table 1. Of the 74 participants with baseline VL, 30 (40.5%) had baseline VL < 10,000 copies/ml, and 10 (13.5%) had viral load levels ≥ 500,000 copies/ml. Overall, of the 225 study participants that had records of their baseline CD4+ counts, 35 (15.6%) had severe CD4 immune suppression status (CD4 < 15% / CD4+ count < 200 cells/mm3), 25 (11.1%) had advanced CD4 immune suppression status (CD4%: 15–19% / CD4 count: 200–349 cells/mm3), 18 (8%) had mild CD4 immune suppression status (CD4%: 20–25% / CD4+ count: 350–499 cells/mm3), with the majority of study participants, 147 (65.3%), having normal CD4+ status (CD4 > 25% / CD4 count > 500 cells/mm3).
CD4+ count at study recruitment
The CD4+ results for 20 of the study participants were not available because of technical issues that occurred with their blood samples at the laboratory. Of the 230 that had CD4+ results on the day of study recruitment, 10 (4.3%) were children less than 5 years and 220 (95.7%) were children between 5 and 15 years of age. For the overall CD4+ immune suppression status, 24 (10.4%) had severe immune suppression status (CD4 < 15% / CD4 count < 200 cells/mm3), 18 (7.8%) had advanced immune suppression status (CD4%: 15–19% / CD4+ count: 200–349 cells/mm3), 23 (10%) had mild immune suppression status (CD4%: 20–25% / CD4 count: 350–499 cells/mm3) and 165 (71.7%) had normal CD4+ immune status (CD4%: > 25% / CD4+ count > 500 cells/mm3).
Proportion of patients with virological non-suppression
At study enrolment, the mean (± SD) duration on ART was 64 ± 3.0 months. Overall, 96 (38.4%) of the 250 participants had virological non-suppression (VL > 1000 copies/ml). Of the 96 participants with virological non-suppression, 20 (20.8%) had VL > 100,000 copies/ml. A further 96 (38.4%) of the 250 participants had VL levels < 20 copies/ml and 58 (23.2%) had low-level viraemia (VL levels of 20–1000 copies/ml). For virological non-suppression within the various age groups, 14 (46.7%) were within the age group < 5 years, 28 (38.9%) were within the age group 5–9 years and 54 (36.5%) were within the age group 10–15 years.
Factors associated with virological non-suppression
Bivariate analysis of factors associated with virological non-suppression is shown in Table 1. Females were more likely to have virological non-suppression in comparison to males (54.2% vs 32.2%, p = 0.035). The overall CD4+ immune suppression status at study recruitment showed a significant association with the subject's VL (Fisher's exact test, p < 0.001). The adherence rate measured by pill count percentage also showed a significant association with the subject's VL (χ2 = 7.99, p = 0.018).
There were no significant differences for the following variables: age of subject, primary caregiver's relationship to subject, educational status of father, educational status of mother, occupational status of father, occupational status of mother, history of previous TB treatment, WHO stage at initiation of ART, baseline VL and baseline CD4+ count values (overall), duration on ART, current ART regimen and person responsible for child's medication (Table 1).
Factors associated with virological non-suppression in multivariate analysis
In multivariate analysis, females were 2.5 times more likely to have virological non-suppression when compared with male study participants (AOR 2.51 [95% CI 1.04–6.07], p = 0.041). Additionally, study participants with severe CD4+ immune suppression status at study recruitment were 25 times more likely to have virological non-suppression when compared to children with normal CD4+ / no immune suppression (AOR 24.93 [95% CI 4.92–126.31], p < 0.001). Participants with a prior history of TB treatment were 4.95 times more likely to have virological non-suppression compared to participants without a prior history of TB treatment (AOR 4.95 [95% CI 1.58–15.5], p < 0.006). Participants on an NVP-based regimen were 7.93 times more likely to have virological non-suppression (AOR 7.93 [95% CI 1.58–15.5], p = 0.005). Multivariate analysis of adherence pill count did not yield any significant results (Table 2).
Table 2 Univariate and multivariate analysis of factors associated with virological non-suppression in CLWH on ART for at least 6 months (N = 250)
In this cross-sectional study, a relatively large proportion of the 250 CLWH (38%) had virological non-suppression after being on ART for a mean period of 64 months. This translates to a virological suppression rate of 62% and is similar to the estimate made by Ghana with respect to its achievement of the third 90 of the UNAIDS 90–90–90 targets. The high rate of non-suppression suggests that intensified efforts to improve HIV treatment in CLWH are needed to achieve the current 95–95–95 targets proposed by UNAIDS with the purpose of ending AIDS by 2030 [30]. Female gender, a previous history of TB treatment, severe CD4+ immunodeficiency status at study recruitment and an NVP-based regimen were associated with virological non-suppression. Some factors identified in other studies, such as adherence to ART, clinical stages 3 and 4, parents' educational level and their employment status, were not significant in this current study.
As antiretroviral access continues to expand in resource-limited countries like Ghana, monitoring response to ART by the use of VL measurements is critical in determining the effectiveness of ART in the population [31]. The Sub-Saharan African region's prevalence of virological non-suppression (> 1000 copies/ml) in children who have been on ART for at least 6 months ranges from 13 to 44% [8, 24, 32]. A virological non-suppression rate of 38% found in this study is concerning, given the risk of the emergence of ART resistance and subsequent failure of the ART regimen, necessitating a switch to second-line, or later third-line, treatment [8]. This would ultimately result in an increase in morbidity and mortality of CLWH and cause the spread of resistant viruses [8]. Given these consequences, VL monitoring per national guidelines should be routinely done in all children on ART, and those identified as being virologically non-suppressed should have adherence counselling and then a repeat viral load level to confirm whether they have virological failure. Once virological failure has been confirmed, the regimen switch should occur according to the Standard Treatment Guidelines outlined by the NACP [6].
Routine VL monitoring is also important for early detection of treatment failure due to pre-treatment drug resistance (PDR), which is known to compromise the efficacy of ART at an individual and population level [33]. Bavaro et al., using a large database created for surveillance of HIV-1 drug resistance in Italy, confirmed that having more than one PDR mutation is an important predictor of virological failure [33]. This phenomenon is on the increase and has also been reported in sub-Saharan Africa [34, 35], Latin America [36] and Asia [37]. While PDR can be detected through baseline genotypic resistance testing (GRT) prior to initiation of ART, it is expensive and not performed at entry into care in low- and middle-income countries [9]. Currently, VL monitoring has been a programmatic challenge at our Paediatric HIV clinic due to frequent interruptions in the availability of resources required by the laboratory, resulting in erratic provision of services. Adequate funding and improved logistics management to ensure uninterrupted VL testing in the laboratory would boost the implementation of the VL monitoring protocol that exists in the clinic.
The rate of non-suppression in children could be due to a number of reasons, depending on the setting. We observed that females were 2.5 times more likely to have virological non-suppression as compared to males. This is similar to the study by Muri et al. [38] in Tanzanian children, in which females were also 2.5 times more likely to have virological non-suppression. On the contrary, some studies have reported that males had increased odds of virological non-suppression [13, 39], whilst other studies found no association between sex and virological non-suppression [40]. The role of gender in virological suppression could be biologic according to authors such as Njom Nlend et al. [41]. The relationship between virological suppression and gender is therefore inconclusive and requires further studies.
A third of our study population had been previously treated for TB. We found that having a history of previous TB treatment increased the odds of having virological non-suppression by as much as five times. These findings are in agreement with studies reported by Ahoua et al. [42] and by Rajian et al. [43]. On the other hand, it has been recently reported that children who had a history of TB co-infection had better virological outcomes [13]. Reasons for this could be the close monitoring, frequent clinic visits and adherence support adopted as part of TB treatment offered at the sample sites. The association of a previous history of TB and virological non-suppression in this current study could be due to the increased pill burden and drug-drug interactions between the medications for TB and HIV therapies, especially the NNRTIs or PIs in the setting of rifampin-containing TB treatment. The significance of TB comorbidity on the occurrence of virological non-suppression buttresses the need for the prioritization of frequent VL monitoring and adherence support in TB/HIV co-infected patients as well as patients who have a history of previous TB. While we were not able to examine the effect of other opportunistic infections (OIs) in this study, it is important to note that non-TB opportunistic infections are now less common than in the past because of early HIV diagnosis and initiation of effective ART. As a result, there is a reduction of OI-related morbidity and mortality in persons with HIV [44].
The odds of virological non-suppression were almost eight times higher in study participants whose current drug regimen was NVP-based as compared to study participants on an EFV-based regimen. The findings of this study are consistent with current literature showing that patients on NVP-based regimens experience more virological failure than patients on EFV-based regimens [18, 19]. The use of regimens containing NVP is associated with a low genetic barrier to drug resistance and a higher risk of baseline resistance in cases where NVP was used as prophylaxis in babies for Prevention of Mother To Child Transmission (PMTCT). This current study, however, did not evaluate prior NVP exposure. Our findings support the current ART guidelines being used at the HIV clinic at KBTH, which are to phase out NVP-based regimens and replace them with LPV/r or EFV regimens for children less than 20 kg. Current guidelines recommend a dolutegravir (DTG) based regimen as the preferred first-line for children weighing at least 20 kg. Hopefully, with the introduction of DTG and its scale up, non-suppression due to certain antiretroviral drugs such as NVP will be reduced.
We observed that study participants found to have severe CD4+ immune suppression status at the time of study recruitment were 25 times more likely to have virological non-suppression. These findings are in congruence with studies reported by Jobanputra et al [40], among children in Swaziland and by Izudi et al [45], among children in Northwestern Uganda where it was found that patients with low CD4+ count values at study recruitment were more likely to have virological non-suppression. This finding is expected and supports the knowledge that viral suppression leads to immune recovery and could reflect the fact that those study participants who were virologically suppressed had a chance to reconstitute their immune systems for their CD4+ counts to increase [46]. This finding also supports early initiation of ART in children and according to the current ART guideline in Ghana, all children confirmed to have HIV diagnosis after birth are started on ART regardless of the CD4+ count.
There was no association between parental educational level, parental employment status and virological non-suppression in this study. This is in agreement with a Danish HIV cohort study reported by Legarth et al. [47], which also showed no association between education level and virological non-suppression. This finding is, however, in contrast to a study reported by Mensah et al. [48] in Ghana, in which a child with an unemployed caregiver was five times more likely to have virological non-suppression. There is substantial evidence on socioeconomic inequalities in the treatment outcomes of chronic diseases like HIV. This current study could not confirm an association between employment status and virological non-suppression. This observation could be due to the fact that for almost a third of study participants the educational and employment status of parents was not known.
Studies on the relationship between self-reported adherence to ART and virological non-suppression have shown inconsistent results [20, 49]. In this study, adherence level measured by pill count was not associated with virological non-suppression. On the other hand, in a clinical trial reported by Intasan et al. [50] in Cambodian children, non-adherence was associated with virological non-suppression. The measure of adherence used in the study by Intasan et al. [50] was, however, the 3-day self-report by caregivers. In more recent research in adolescents by Natukunda et al. [24], reported in 2019, more than 70% of adolescents who experienced virological non-suppression were sufficiently adherent as measured by pill count (adherence > 95%). On the other hand, there are also studies in which poorly adherent patients maintained undetectable VL [49, 51].
Low-level viraemia (LLV), with detectable viral loads above 20 copies/ml but less than 1000 copies/ml, was common (23%) among children in this study. These children are at risk of continuing on a failing regimen for a considerable time, especially given that VL testing is routinely done once a year. There is therefore the need to design algorithms for patients with LLV to have more frequent VL monitoring, as the literature has shown the emergence of high-level resistance in this group of individuals.
The strength of this current study is that it not only determined the prevalence of virological non-suppression but also explored factors associated with the phenomenon in children. In the absence of drug resistance testing information, close monitoring of VL levels, multidisciplinary support and prompt clinical judgment are key in ensuring children who have failed treatment are appropriately transitioned to second-line therapy. Ultimately, 'an ounce of prevention is better than a pound of cure.'
A limitation of this study is the reliance on self-reported data as a measure of adherence, which may be affected by recall and social desirability bias. In addition, baseline VL and CD4+ count values were not available for some of the subjects, and hence analyses involving baseline VL and CD4+ could not be done for those patients. This is because VL monitoring was started in April 2011 (as a national policy) and hence children above 8 years of age did not have the opportunity of having a baseline VL level done.
The high rate of virological non-suppression is consistent with findings in other countries in the Sub-Saharan setting, emphasizes the challenge of successfully suppressing HIV in paediatric patients, and reinforces the need for regular monitoring of viral load levels in children. Patients on ART with active TB and those with a history of previous TB should be prioritized for more regular VL monitoring (6 monthly), as the national guidelines advocate yearly monitoring of viral load for patients whose viral load levels are suppressed. Further research should focus on determining resistance patterns in this study population.
The data sets used and analysed during this study are available from the corresponding author on request.
ABC:
Abacavir
AOR:
Adjusted Odds Ratio
CD4%:
CD4 percentage
CLWH:
Children Living with HIV
DTG:
Dolutegravir
EFV:
Efavirenz
FDA:
Food and Drug Administration
HIV VL:
HIV viral load
KBTH:
Korle Bu Teaching Hospital
LMIC:
Low- and middle-income countries
LLV:
Low level viraemia
LPV/r:
Ritonavir boosted lopinavir
NACP:
National AIDS Control Programme
NVP:
Nevirapine
PDR:
Pre-treatment Drug Resistance
PI:
Protease Inhibitor
SPSS:
Statistical Package for Social Sciences
TEN:
TB:
Tuberculosis
VL:
HIV-1 RNA Viral Load
U.O.R:
Unadjusted Odds Ratio
ZDV:
Zidovudine
3TC:
Lamivudine
History of HIV and AIDS overview | Avert. https://www.avert.org/professionals/history-hiv-aids/overview (Accessed 26 Dec 2019).
HIV/AIDS Facts, Prevention, Signs, Symptoms & Medications. eMedicineHealth. https://www.emedicinehealth.com/hivaids/article_em.htm (Accessed 26 Dec 2019).
Reddi A, Leeper SC, Grobler AC, Geddes R, France KH, Dorse GL, et al. Preliminary outcomes of a paediatric highly active antiretroviral therapy cohort from KwaZulu-Natal, South Africa. BMC Pediatr. 2007;7(1):13. https://doi.org/10.1186/1471-2431-7-13.
Aids C on P, Health S on IC. Increasing antiretroviral drug access for children with HIV infection. Pediatrics. 2007;119(4):838–45. https://doi.org/10.1542/peds.2007-0273.
WHO | Consolidated guidelines on HIV prevention, diagnosis, treatment and care for key populations. WHO. http://www.who.int/hiv/pub/guidelines/keypopulations-2016/en/ (Accessed 20 Sep 2016).
National AIDS/STI Control Programme, Ghana Health Service. 2016 HIV Sentinel Survey Report 2016.
Maphosa T. HIV antiretroviral resistance. https://www.slideshare.net/talentmaphosa1/6-hiv-antiretroviral-resistance (Accessed 26 Dec 2019).
Bulage L, Ssewanyana I, Nankabirwa V, Nsubuga F, Kihembo C, Pande G, et al. Factors associated with virological non-suppression among HIV-positive patients on antiretroviral therapy in Uganda, august 2014–July 2015. BMC Infect Dis. 2017;17(1):326. https://doi.org/10.1186/s12879-017-2428-3.
WHO | Global action plan for HIV drug resistance 2016-2021. https://www.who.int/hiv/pub/drugresistance/hiv-drug-resistance-brief-2016/en/ (Accessed 13 Feb 2019).
Hamers RL, de Wit TFR, Holmes CB. HIV drug resistance in low-income and middle-income countries. Lancet HIV. 2018;5(10):e588–96. https://doi.org/10.1016/S2352-3018(18)30173-5.
Bacha T, Tilahun B, Worku A. Predictors of treatment failure and time to detection and switching in HIV-infected Ethiopian children receiving first line anti-retroviral therapy. BMC Infect Dis. 2012;12(1):197. https://doi.org/10.1186/1471-2334-12-197.
Janssens B, Raleigh B, Soeung S, Akao K, Te V, Gupta J, et al. Effectiveness of highly active antiretroviral therapy in HIV-positive children: evaluation at 12 months in a routine program in Cambodia. Pediatrics. 2007;120(5):e1134–40. https://doi.org/10.1542/peds.2006-3503.
Kadima J, Patterson E, Mburu M, Blat C, Nyanduko M, Bukusi EA, et al. Adoption of routine virologic testing and predictors of virologic failure among HIV-infected children on antiretroviral treatment in western Kenya. PLoS One. 2018;13(11):e0200242. https://doi.org/10.1371/journal.pone.0200242.
Rupérez M, Pou C, Maculuve S, Cedeño S, Luis L, Rodríguez J, et al. Determinants of virological failure and antiretroviral drug resistance in Mozambique. J Antimicrob Chemother. 2015;70(9):2639–47. https://doi.org/10.1093/jac/dkv143.
Yassin S, Gebretekle GB. Magnitude and predictors of antiretroviral treatment failure among HIV-infected children in fiche and Kuyu hospitals, Oromia region, Ethiopia: a retrospective cohort study. Pharmacol Res Perspect. 2017;5(1):e00296. https://doi.org/10.1002/prp2.296.
Komati S, Shaw PA, Stubbs N, Mathibedi MJ, Malan L, Sangweni P, et al. Tuberculosis risk factors and mortality for HIV infected persons receiving antiretroviral therapy in South Africa. AIDS Lond Engl. 2010;24(12):1849–55. https://doi.org/10.1097/QAD.0b013e32833a2507.
Costenaro P, Penazzato M, Lundin R, Rossi G, Massavon W, Patel D, et al. Predictors of treatment failure in HIV-positive children receiving combination antiretroviral therapy: cohort data from Mozambique and Uganda. J Pediatr Infect Dis Soc. 2015;4(1):39–48. https://doi.org/10.1093/jpids/piu032.
Pillay P, Ford N, Shubber Z, Ferrand RA. Outcomes for Efavirenz versus Nevirapine-containing regimens for treatment of HIV-1 infection: a systematic review and meta-analysis. PLoS One. 2013;8(7):e68995. https://doi.org/10.1371/journal.pone.0068995.
Mgelea EM, Kisenge R, Aboud S. Detecting virological failure in HIV-infected Tanzanian children. SAMJ South Afr Med J. 2014;104(10):696–9. https://doi.org/10.7196/SAMJ.7807.
Nieuwkerk PT, Oort FJ. Self-reported adherence to antiretroviral therapy for HIV-1 infection and virologic treatment response: a meta-analysis. J Acquir Immune Defic Syndr 1999. 2005;38:445–8.
Gross R, Bilker WB, Friedman HM, Strom BL. Effect of adherence to newly initiated antiretroviral therapy on plasma viral load. AIDS Lond Engl. 2001;15(16):2109–17. https://doi.org/10.1097/00002030-200111090-00006.
Marhefka SL, Koenig LJ, Allison S, Bachanas P, Bulterys M, Bettica L, et al. Family experiences with pediatric antiretroviral therapy: responsibilities, barriers, and strategies for remembering medications. AIDS Patient Care STDs. 2008;22(8):637–47. https://doi.org/10.1089/apc.2007.0110.
Haberer JE, Kiwanuka J, Nansera D, Ragland K, Mellins C, Bangsberg DR. Multiple measures reveal antiretroviral adherence successes and challenges in HIV-infected Ugandan children. PLoS One. 2012;7(5):e36737. https://doi.org/10.1371/journal.pone.0036737.
Natukunda J, Kirabira P, Ong KIC, Shibanuma A, Jimba M. Virologic failure in HIV-positive adolescents with perfect adherence in Uganda: a cross-sectional study. Trop Med Health. 2019;47(1):8. https://doi.org/10.1186/s41182-019-0135-z.
About us – Brief History. http://kbth.gov.gh/brief-history/. Accessed 3 Jan 2019.
Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing Hiv Infection: Recommendations for a Public Health Approach. 2016 http://www.deslibris.ca/ID/10089566 (Accessed 11 Apr 2019).
Cochran WG. Sampling techniques. 2nd ed. New York, London: John Wiley and Sons; 1963.
Nichols JS, Kyriakides TC, Antwi S, Renner L, Lartey M, Seaneke OA, et al. High prevalence of non-adherence to antiretroviral therapy among undisclosed HIV-infected children in Ghana. AIDS Care. 2019;31(1):25–34. Available from: https://www.tandfonline.com/doi/full/10.1080/09540121.2018.1524113. Accessed 5 Feb 2019.
WHO | Antiretroviral therapy of HIV infection in infants and children: towards universal access. WHO. http://www.who.int/hiv/pub/guidelines/art/en/ (Accessed 23 May 2017).
Ending AIDS: progress towards the 90–90–90 targets | UNAIDS. http://www.unaids.org/en/resources/documents/2017/20170720_Global_AIDS_update_2017 (Accessed 14 Feb 2019).
Tucker JD, Bien CH, Easterbrook PJ, Doherty MC, Penazzato M, Vitoria M, et al. Optimal strategies for monitoring response to antiretroviral therapy in HIV-infected adults, adolescents, children and pregnant women: a systematic review. AIDS Lond Engl. 2014;28(Suppl 2):S151–60.
Makadzange AT, Higgins-Biddle M, Chimukangara B, Birri R, Gordon M, Mahlanza T, et al. Clinical, virologic, immunologic outcomes and emerging HIV drug resistance patterns in children and adolescents in public ART Care in Zimbabwe. PLoS One. 2015;10(12):e0144057. https://doi.org/10.1371/journal.pone.0144057.
Bavaro DF, Di Carlo D, Rossetti B, Bruzzone B, Vicenti I, Pontali E, et al. Pretreatment HIV drug resistance and treatment failure in non-Italian HIV-1-infected patients enrolled in ARCA. Antivir Ther. 2020;25(2):61–71. https://doi.org/10.3851/IMP3349.
Chimukangara B, Kharsany ABM, Lessells RJ, Naidoo K, Rhee S-Y, Manasa J, et al. Moderate-to-high levels of pretreatment HIV drug resistance in KwaZulu-Natal Province, South Africa. AIDS Res Hum Retrovir. 2019;35(2):129–38. https://doi.org/10.1089/aid.2018.0202.
McCluskey SM, Lee GQ, Kamelian K, Kembabazi A, Musinguzi N, Bwana MB, et al. Increasing prevalence of HIV pretreatment drug resistance in women but not men in rural Uganda during 2005–2013. AIDS Patient Care STDs. 2018;32(7):257–64. https://doi.org/10.1089/apc.2018.0020.
Aulicino PC, Zapiola I, Kademian S, Valle MM, Fernandez Giuliano S, Toro R, et al. Pre-treatment drug resistance and HIV-1 subtypes in infants from Argentina with and without exposure to antiretroviral drugs for prevention of mother-to-child transmission. J Antimicrob Chemother. 2019;74(3):722–30. https://doi.org/10.1093/jac/dky486.
Gupta RK, Gregson J, Parkin N, Haile-Selassie H, Tanuri A, Forero LA, et al. HIV-1 drug resistance before initiation or re-initiation of first-line antiretroviral therapy in low-income and middle-income countries: a systematic review and meta-regression analysis. Lancet Infect Dis. 2018;18(3):346–55. https://doi.org/10.1016/S1473-3099(17)30702-8.
Muri L, Gamell A, Ntamatungiro AJ, Glass TR, Luwanda LB, Battegay M, et al. Development of HIV drug resistance and therapeutic failure in children and adolescents in rural Tanzania: an emerging public health concern. AIDS Lond Engl. 2017;31(1):61–70. https://doi.org/10.1097/QAD.0000000000001273.
Kamya MR, Mayanja-Kizza H, Kambugu A, Bakeera-Kitaka S, Semitala F, Mwebaze-Songa P, et al. Predictors of long-term viral failure among ugandan children and adults treated with antiretroviral therapy. JAIDS J Acquir Immune Defic Syndr. 2007;46:187–93.
Jobanputra K, Parker LA, Azih C, Okello V, Maphalala G, Kershberger B, et al. Factors associated with Virological failure and suppression after enhanced adherence Counselling, in children, adolescents and adults on antiretroviral therapy for HIV in Swaziland. PLOS ONE. 2015;10:e0116144.
Njom Nlend AE, Motaze AN, Ndiang ST, Fokam J. Predictors of Virologic failure on first-line antiretroviral therapy among children in a referral pediatric center in Cameroon. Pediatr Infect Dis J. 2017;36(11):1067–72. https://doi.org/10.1097/INF.0000000000001672.
Ahoua L, Guenther G, Pinoges L, Anguzu P, Chaix M-L, Le Tiec C, et al. Risk factors for virological failure and subtherapeutic antiretroviral drug concentrations in HIV-positive adults treated in rural northwestern Uganda. BMC Infect Dis. 2009;9(1):81. https://doi.org/10.1186/1471-2334-9-81.
Rajian M, Gill PS, Chaudhary U. Effect of tuberculosis co infection on virological failure in HIV patients on first line of highly active antiretroviral therapy. Int J Curr Microbiol Appl Sci. 2017;6(1):78–81. https://doi.org/10.20546/ijcmas.2017.601.010.
Goldschmidt RH, Chu C, Dong BJ. Initial Management of Patients with HIV infection. Am Fam Physician. 2016;94(9):708–16.
Izudi J, Alioni S, Kerukadho E, Ndungutse D. Virological failure reduced with HIV-serostatus disclosure, extra baseline weight and rising CD4 cells among HIV-positive adults in northwestern Uganda. BMC Infect Dis. 2016;16(1):614. https://doi.org/10.1186/s12879-016-1952-x.
Bayu B, Tariku A, Bulti AB, Habitu YA, Derso T, Teshome DF. Determinants of virological failure among patients on highly active antiretroviral therapy in University of Gondar Referral Hospital, Northwest Ethiopia: a case-control study. HIVAIDS - Res Palliat Care. 2017. https://doi.org/10.2147/HIV.S139516.
Legarth R, Omland LH, Kronborg G, Larsen CS, Pedersen C, Gerstoft J, et al. Educational attainment and risk of HIV infection, response to antiretroviral treatment, and mortality in HIV-infected patients. AIDS Lond Engl. 2014;28(3):387–96. https://doi.org/10.1097/QAD.0000000000000032.
Mensah E. Predictors of Virological failure among children infected with HIV-1 on Haart at KATH. 2017. http://ir.knust.edu.gh:8080/handle/123456789/10001 (Accessed 4 Feb 2019).
Measurement Issues in Using Pharmacy Records to Calculate Adherence to Antiretroviral Drugs: HIV Clinical Trials: Vol 14, No 2. https://www.tandfonline.com/doi/abs/10.1310/hct1402-68 (Accessed 2 Feb 2019).
Intasan J, Bunupuradah T, Vonthanak S, Kosalaraksa P, Hansudewechakul R, Kanjanavanit S, et al. Comparison of adherence monitoring tools and correlation to virologic failure in a pediatric HIV clinical trial. AIDS Patient Care STDs. 2014;28(6):296–302. https://doi.org/10.1089/apc.2013.0276.
Use of a prescription-based measure of antiretroviral therapy adherence to predict viral rebound in HIV-infected individuals with viral suppression - Cambiano - 2010 - HIV Medicine - Wiley Online Library. https://onlinelibrary.wiley.com/doi/full/10.1111/j.1468-1293.2009.00771.x. Accessed 2 Feb 2019.
The following people have contributed in diverse ways to the study, Prof. Yaw Afrane, Rev. (Prof.) John Appiah-Poku, Prof Margaret Lartey, Dr. Nyonuku Akosua Baddoo, Dr. Timothy Archampong, Dr. Emilia Udofia, Dr. Frank Owusu Sekyere, Dr. Bola Ozoya, Dr. Jocelyn Dame, Dr. Claire Keane, Dr. Abena Takyi, Mr. Isaac Boamah, Miss Christabel Siaw-Akugbey, Mr. Derrick Tetteh, Mrs. Obedia Seneake, Miss Sarah Brew and Mr. Shittu Dhikrullahi. I also acknowledge the UG-UF D43 training grant that supported me to take a course in research proposal development for this study.
AKAA and AK received support from University of Florida-University of Ghana Training Program in Tuberculosis and HIV Research in Ghana funded by Fogarty International Center at the National Institutes of Health (grant number D43 TW010055) for training.
Department of Child Health, Korle Bu Teaching Hospital, Accra, Ghana
Adwoa K. A. Afrane, Bamenla Q. Goka, Lorna Renner & Seth N. Owiafe
Department of Child Health, University of Ghana Medical School, Accra, Ghana
Bamenla Q. Goka & Lorna Renner
Department of Community Health, University of Ghana Medical School, Legon, Accra, Ghana
Alfred E. Yawson
Department of Health Policy Planning and Management, School of Public Health, University of Ghana, Legon, Accra, Ghana
Yakubu Alhassan
Department of Immunology, Korle Bu Teaching Hospital, Accra, Ghana
Seth Agyeman
Department of Medical Microbiology, University of Ghana Medical School, Accra, Ghana
Kwamena W. C. Sagoe
Department of Medicine, University of Florida, College of Medicine, Gainesville, Florida, USA
Awewura Kwara
Adwoa K. A. Afrane
Bamenla Q. Goka
Lorna Renner
Seth N. Owiafe
AKAA contributed to conception and study design, acquisition of data, analysis and interpretation of data and drafting of the manuscript. AK, BG, LR contributed to study design, interpretation of data and substantively revised it. SO contributed to acquisition of data. SA contributed to laboratory testing. YA contributed to analysis of data and interpretation of data. AEY contributed to analysis of data, interpretation of data and substantively revised it. KWCS contributed to conception and study design, interpretation of data and substantively revised it. All authors read and approved the final manuscript.
Correspondence to Adwoa K. A. Afrane.
Ethical approval (KBTH-IRB/ 00060/2017) was obtained from the Institutional Review Board of Korle Bu Teaching Hospital, Accra, Ghana. An informed consent was obtained from parents or legal guardians for each minor participant prior to enrolment to participate in the study. All procedures performed involving study participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
All authors declare that there are no competing interests.
Afrane, A.K.A., Goka, B.Q., Renner, L. et al. HIV virological non-suppression and its associated factors in children on antiretroviral therapy at a major treatment centre in Southern Ghana: a cross-sectional study. BMC Infect Dis 21, 731 (2021). https://doi.org/10.1186/s12879-021-06459-z
Paediatric HIV
Virological non-suppression | CommonCrawl |
Examples of the Mathematical Red Herring principle
I read the Mathematical Red Herring principle the other day on SE and wondered what some other good examples of this are? Also anyone know who came up with this term?
The mathematical red herring principle is the principle that in mathematics, a "red herring" need not, in general, be either red or a herring.
Frequently, in fact, it is conversely true that all herrings are red herrings. This often leads to mathematicians speaking of "non-red herrings," and sometimes even to a redefinition of "herring" to include both the red and non-red versions.
The only one I could think of is a manifold with boundary, which is not a manifold in the usual definition.
soft-question
Michaela Light
$\begingroup$ The OP's link has a link to ncatlab.org/nlab/show/red%20herring%20principle , which has a number of examples (including manifold with boundary). $\endgroup$ – Barry Cipra Jul 20 '15 at 23:09
All differential equations are stochastic differential equations,
but most stochastic differential equations are not differential equations.
Michael Hardy
$\begingroup$ Nor must they be stochastic. $\endgroup$ – PyRulez Feb 8 '19 at 18:24
My understanding of this principle is that sometimes, adjectives widen the scope of nouns (or modify their scope in other, more complicated ways) and this can be confusing. Examples:
partial functions aren't necessarily functions
non-unital rings aren't necessarily rings
non-associative algebras aren't necessarily algebras (under my preferred definition)
Another funny one is:
a partially ordered set isn't necessarily an ordered set.
In this case, an adverb (partially) is widening the scope of an adjective (ordered).
There's a related phenomenon whereby we give a black-box meaning to phrases of the form [adjective]-[noun], and that meaning isn't a compound of the meanings of these two words individually. E.g.
Topological spaces aren't "spaces" because the term "space" lacks a technical meaning
Lawvere theories aren't "theories" because the term "theory" lacks a technical meaning
goblin GONE
A "set of measure zero" is often defined without saying what measure is used, or what value it takes on the set. Thus, neither the "measure" nor the "zero" are defined/true on their own.
Tim Poston
The Division Algorithm is not an algorithm, it's a theorem.
Gerry Myerson
$\begingroup$ the only herrings i've ever seen were blue & white $\endgroup$ – DanielWainfleet Aug 18 '15 at 5:22
Russell's Paradox and the Banach-Tarski Paradox are not paradoxes, they are theorems. Russell showed that the assumption of the existence of a set with certain properties leads to a contradiction, hence no such set exists. Banach-Tarski is a highly counter-intuitive property of 3-D Cartesian space, which may seem to contradict Lebesgue measure theory, but it uses non-measurable sets. Anyone have some more "paradoxes"?..... Russell's Paradox: If X is a widget which dapples every widget that does not dapple itself, and does not dapple any widget that dapples itself, then X dapples itself if and only if it doesn't.
DanielWainfleet
$\begingroup$ The Euler-Cramer paradox is also not a paradox,but it was a good question. $\endgroup$ – DanielWainfleet Aug 18 '15 at 5:24
For almost any mathematical noun $N$, a nonstandard $N$ by definition cannot be an $N$.
That is, for any model $M$, we say that something is a $N$ within the model if it satisfies $N$'s definition within the model. For example, a non-standard Turing machine is an element of $M$ that satisfies this definition, but using the model's interpretation of it instead of the standard one. This only makes sense if $M$ can interpret $N$'s definition, of course. The main things that could be interpreted differently are what the terms set, finite, and transition function mean. For example, a model could interpret finite to mean being "a thing whose size is a hyperinteger" instead of "a thing whose size is an integer". (Of course, we don't want to change too much, or we can not apply meta-mathematical techniques as easily. For example a model that defines finite as "contains a field" would not be a good model. Technically, we would call it a structure, not a model, at that point.)
Anyways, for most theories there is something called the "standard model", or the intended interpretation. For example, the standard model of Peano arithmetic is $\mathbb N$. Usually if we talk about $N$, without specifying what model we are working in, we assume we are talking about a $N$ in the standard model. There is no standard model of group theory, however, because the axioms of group theory do not have an intended interpretation. Each group has its own interpretation, and none of these are more intended than any other. With set theory, it gets kind of ambiguous whether or not there is a standard model (well, there definitely is not one in the traditional sense, since the elements of a model are contained in a set, and there is no set of all sets). Models can also be submodels of other models.
So, what is a nonstandard $N$? For a theory $T$ with a standard model and a model $M$ of $T$ with the standard model as a submodel, a nonstandard $N$ in $M$ is an element of $M$ that satisfies $N$'s definition inside $M$, but is not a $N$ in the standard model. Since talking about $N$ unqualified usually means $N$ in the standard model, we can say that nonstandard $N$ are not $N$, for almost any mathematical noun $N$.
Of course, some of you may be asking "why would you want something like this"? To prevent you from offending model theorists, I'll answer it prematurely, with some examples.
There are nonstandard real numbers that are between $0$ and every positive real number (in certain models). Moreover, this can be done in a model that satisfies the same first order statements as the standard model (in fact, first order formulas even have the same standard solutions). Since theorems are statements, and some are first order, this means we already know a ton of stuff about the nonstandard real numbers. This lets us do Calculus in terms of infinitesimals instead of in terms of limits. This is called nonstandard analysis. Anything true in analysis is true in nonstandard analysis, and anything false in analysis is false in nonstandard analysis, so this is just an extension of regular analysis, and is therefore compatible with it. Although some things are defined differently, they end up being equivalent. (For example, instead of the epsilon delta definition of a limit, an equivalent one is given in terms of infinitesimals.) There are even two entire textbooks for introductory calculus courses using nonstandard analysis instead of standard analysis.
You can also use nonstandard models in graph theory. Any nonstandard graph can be turned into a graph, but a finite nonstandard graph might get turned into an infinite graph. In fact, there's a nonstandard model of set theory in which every standard graph (infinite or otherwise) is a subgraph of a finite nonstandard graph. Therefore, you can show that the four color theorem on finite graphs in the nonstandard model implies the four color theorem on all graphs in the standard model.
PyRulez
Are mathematical articles on Wikipedia reliable?
List objects that are not what they are called
What is it called when the definition of "<adjective> <thing>" does not imply that it is a special case of "<thing>"?
"Definite" property : does that mean something "alone" or must be precedeed by "positive"
What do modern-day analysts actually do?
The 'abelian group' custom
$\epsilon, \delta$…So what?
Why are mathematical results discovered by multiple people independently?
Which are the mathematical problems in non-standard analysis? (If any)
Good examples of Everyday Isomorphisms | CommonCrawl |
EURASIP Journal on Advances in Signal Processing
A novel approach to extracting useful information from noisy TFDs using 2D local entropy measures
Ana Vranković1,
Jonatan Lerga1,2 &
Nicoletta Saulig3
EURASIP Journal on Advances in Signal Processing volume 2020, Article number: 18 (2020)
The paper proposes a novel approach for extraction of useful information and blind source separation of signal components from noisy data in the time-frequency domain. The method is based on the local Rényi entropy calculated inside adaptive, data-driven 2D regions, the sizes of which are calculated utilizing the improved, relative intersection of confidence intervals (RICI) algorithm. One of the advantages of the proposed technique is that it does not require any prior knowledge on the signal, its components, or noise, but rather the processing is performed on the noisy signal mixtures. Also, it is shown that the method is robust to the selection of time-frequency distributions (TFDs). It has been tested for different signal-to-noise-ratios (SNRs), both for synthetic and real-life data. When compared to fixed TFD thresholding, adaptive TFD thresholding based on RICI rule and the 1D entropy-based approach, the proposed adaptive method significantly increases classification accuracy (by up to 11.53%) and F1 score (by up to 7.91%). Hence, this adaptive, data-driven, entropy-based technique is an efficient tool for extracting useful information from noisy data in the time-frequency domain.
Various real-life phenomena produce signals that contain information on the systems of their origin. When analyzing the underlying dynamics of these signals, most of them are non-stationary, meaning that their spectra are time-varying and exhibit dynamical spectral behavior (e.g., bio-medical signals, signals from radars, sonars, seismic activity, audio). In addition, many real-life signals are also multicomponent and may be decomposed into multiple amplitude- and/or frequency-modulated components.
When dealing with signal interpretation, signals are commonly represented in one of two domains, namely time domain or frequency domain. In classical representations, the variables representing time and frequency are mutually exclusive. The time-frequency distribution (TFD) of the signal, when the signal has time-varying frequency content and dynamical spectral behavior, allows us to represent the signal jointly in time and frequency domain and to detect frequency components at each time instant [1]. TFDs are used in various fields, such as nautical studies [2], medicine [3, 4], electrical engineering [5, 6], and image processing [7, 8].
One of the simplest TFDs is the short-time Fourier transform (STFT) proposed by Gabor in 1946, which introduces a moving window and applies the Fourier transform (FT) to the signal inside the window [9]. However, the performance of the STFT is highly dependent on the window size and, according to the Heisenberg uncertainty principle, there exists a compromise between time and frequency resolution (increasing window size increases frequency resolution and reduces time resolution and vice versa). This has motivated the development of numerous other high-resolution TFDs, many of which are quadratic. The main shortcoming of the quadratic class of TFDs is the inevitable appearance of cross-terms or interferences caused by the TFD quadratic nature (this has led to the development of a wide range of reduced-interference quadratic TFDs).
In nonstationary signal analysis in the time-frequency domain, one of the fundamental problems is measuring the signal information content, both globally and locally (e.g., complexity and the number of signal components). Knowing the information content allows efficient pre-processing and dynamic memory allocation prior to signal features extraction (e.g., instantaneous frequency and amplitude estimation) in blind source separation, machine learning, automatic classification systems, etc.
A challenging problem in signal analysis is blind source separation, i.e., separating signal components from a noisy mixture without any a-priori knowledge about the signal. Some of the algorithms that are considered standard in solving this problem are the greedy approach [10, 11], the relaxation approach [12, 13], the smoothed approach [14], and component analysis methods [15–17]. A time-frequency approach has been proposed in [18]. There is a variety of different methods, and several new approaches have been studied in the last few years [19–22]. Methods exploring the use of entropy measures in separating the source signal have also been investigated in many studies.
Flandrin et al. [23] in their paper from 1994 gave a detailed discussion on the Rényi information measure for deterministic and random signals. In this study, the authors have indicated the general utility of the Rényi entropy measure as a complexity indicator of signals in the time-frequency plane. Extensive research has shown that the most suitable entropy measure for the TFD of a signal is the Rényi entropy [24].
In [25, 26] and later on in [27, 28], the authors present and analyze a method based on the Rényi entropy for blind source separation, as well as an extensive comparison of the proposed method with several different methods. The authors state that the method based on Rényi's entropy should be preferred over other methods. The methods in the mentioned papers are not related to the signal's TFD.
A modification of sparse component analysis based on the time-frequency domain was given in [29]. The blind source separation problem in the time-frequency domain has also been investigated in [30], as well as in [31]: the mixed signals were transformed from the time domain to the time-frequency domain. Both the effectiveness and superiority of the proposed algorithm were verified, but under the assumption that there are several sensors and that there are single-source points. Both methods are dependent on the number of sensors. Other methods dependent on the number of sensors for blind source separation based on the mixing matrix are presented in [32, 33].
A method of combining wavelet transform with time-frequency blind source separation based on the smooth pseudo-Wigner-Ville distribution is investigated in [34] to extract electroencephalogram characteristic waves, and the result is used to construct the support vector machine. In the paper written by Saulig et al. [35], the authors propose an automatic adaptive method for identification and separation of the useful information contained in TFDs. The main idea behind the method is based on the K-means clustering algorithm that performs a 1D partitioning of the data set. Instead of hard thresholding, authors use blind separation of useful information from background noise with the local Rényi entropy. The advantage of this approach is that there is no need for any prior knowledge of the signal. The results show that this method acts as a near-to-optimal automatic hard-threshold selector.
Combining a data-driven method for adaptive Rényi entropy calculation with the relative intersection of confidence intervals (RICI) method could allow the user to extract useful content without the need of any information about the signal source. The method could automatically adapt to the data obtained from the signal TFD. In this paper, we present a method for blind source separation based on the local 2D windowed Rényi entropy of the signal's TFD. The method is self-adaptive in terms of choosing the appropriate window for the entropy calculation. It has been tested on the spectrogram and reduced interference distribution (RID) based on the Bessel function. Results are obtained for multicomponent signals. The results are compared to both fixed TFD thresholding and RICI based selection of fixed TFD thresholds without entropy calculations. In addition, comparison to the recently introduced [35] entropy-based method is performed. The method is adaptive and no prior knowledge of the signal is required. It can be applied to various multicomponent frequency-modulated signals both in noisy and noise-free environments. This blind-source separation method could potentially be applied to different real-life problems, such as biomedical signals (EEG, ECG, etc.) and seismology (earthquake seismographs). The method performance remains stable when considering different TFDs.
The rest of the paper is structured as follows. Section 2.1 provides a brief overview of time-frequency signal representations starting from the spectrogram and focusing on the RID with Bessel function. Entropy measures, in particular, the Rényi entropy, is defined in Section 2.2. Next, the proposed method is described in Section 2.3, followed by the RICI based adaptive thresholding procedure given in Section 2.4. Section 3 elaborates in detail numerical results achieved by the proposed technique. Finally, conclusions are found in Section 4. Nomenclature used in the paper is given in Table 1.
Table 1 Nomenclature
Time-frequency distributions
The majority of real-life signals are non-stationary signals, meaning that their frequency content changes with time. The classic time or frequency representation does not display the dependencies between the two.
TFDs are used for the representation of the signal's frequency contents w.r.t. time, allowing the analyst to see the start and end time of each signal component in the time-frequency domain. Unlike in classical representations, TFD can show whether the signal is monocomponent or multicomponent which can be hard to achieve with spectral analysis.
Two different distributions were used for the algorithm validation, namely the spectrogram and the reduced interference distribution (RID) based on the Bessel function.
The spectrogram
Computation of the spectrogram from the signal's time domain essentially corresponds to the squared magnitude of the STFT of the signal [1, 36, 37].
$$ \begin{aligned} S_{x}(t,f)=\mid STFT_{x}(t,f)\mid^{2}\\ = \left|\int_{-\infty}^{\infty}x(\tau)\omega(t-\tau)e^{-j2\pi f\tau} d\tau\right|^{2} \end{aligned} $$
x is the analyzed signal and ω is the smoothing window. The spectrogram introduces nonlinearity in the time-frequency representation. The spectrogram of the sum of two signals does not correspond to the sole sum of the spectrograms of the two signals but presents a third term if the two components share time-frequency supports. Also, the representation is dependent on the window function ω(t). A smaller window produces better time resolution, while a wider window gives a better frequency resolution. In other words, the observation window ω(t) allows localization of the spectrum in time but also smears the spectrum in frequency.
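As an illustration of the spectrogram definition above, the spectrogram of a synthetic two-component test signal can be computed with SciPy as sketched below; the test signal, sampling frequency and window parameters are arbitrary choices for the example and are not taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1024                                   # sampling frequency (Hz)
t = np.arange(0, 1, 1 / fs)
# two-component test signal: a linear FM chirp plus a pure tone, with additive noise
x = np.cos(2 * np.pi * (100 * t + 80 * t**2)) + np.cos(2 * np.pi * 300 * t)
x += 0.5 * np.random.randn(t.size)

# squared magnitude of the STFT with a Hann analysis window
f, tt, Sxx = spectrogram(x, fs=fs, window="hann", nperseg=128, noverlap=96)
print(Sxx.shape)                            # (frequency bins, time frames)
```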
The reduced interference distribution (RID) based on Bessel function
The RID is a quadratic TFD in which the cross-terms are constricted w.r.t. the auto-terms. In this paper, the Bessel function of the first kind has been used [38]. The distribution is defined as
$$ RIDB_{x}(t,f)=\int_{-\infty}^{+\infty} h(\tau)R_{x}(t,\tau)e^{-j2\pi f \tau}d\tau, $$
where h is the frequency smoothing window and Rx represents the kernel
$$ \begin{aligned} R_{x}(t,\tau)=\int_{t-|\tau|}^{t+|\tau|}\frac{2g(\upsilon)}{\pi|\tau|}\sqrt{1-\left(\frac{\upsilon-t}{\tau}\right)^{2}}x\\ \cdot \left(\upsilon+\frac{\tau}{2}\right)x^{*}\left(\upsilon-\frac{\tau}{2}\right) d\upsilon, \end{aligned} $$
g is the time smoothing window and x∗ denotes the complex conjugate of x. The paper provides the comparison of the results for the simple spectrogram and high-resolution RID. Note, however, that other quadratic, high-resolution TFDs can also be used with similar performances.
The Rényi entropy
Entropy measures are most commonly used in the analysis of medical signals such as EEG, heart-rate variability, blood pressure, and similar.
The entropy estimation is a calculation of the time density of the average information in a stochastic process.
Shannon in [39] presents the concept of information of a discrete source without memory as a function that quantifies the uncertainty of a random variable at each discrete time. The average of that information is known as Shannon entropy. The Shannon entropy is restricted to random variables taking discrete values. A discrete random variable s, which can take a finite number M of possible values si∈{s1,…,sM} with corresponding probabilities pi∈{p1,...,pM}, has the Shannon entropy defined as
$$ H(s)=-\sum_{i=1}^{M}p_{i}log_{2}(p_{i}). $$
From the Shannon entropy, many other entropy measures have emerged. One of the extensions of the Shannon entropy has been presented by Rényi [40].
The Rényi entropy of order α, where α≥0 and α≠1 [23], is defined as
$$ H(s)=\frac{1}{1-\alpha}log_{2}\sum_{i=1}^{M}p^{\alpha}_{i}. $$
Depending on the chosen α, different entropy measures are defined. For α=0, the obtained entropy is known as the Hartley entropy. In the limit \(\alpha \xrightarrow {} 1\), H(s) becomes the Shannon entropy, while α=2 gives the collision entropy used in quantum information theory, which bounds the collision probability of the distribution.
When \(\alpha \xrightarrow {}+\infty \), the obtained entropy is known as the min-entropy.
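As an illustration of these special cases, a minimal Python sketch of the discrete Rényi entropy (with the Shannon entropy as the α→1 limit) is given below; the example distribution is arbitrary.

```python
import numpy as np

def renyi_entropy(p, alpha):
    # Rényi entropy (in bits) of a discrete distribution p; p is assumed
    # to be non-negative and to sum to one.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # zero-probability outcomes do not contribute
    if np.isclose(alpha, 1.0):        # limiting case: Shannon entropy
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
print(renyi_entropy(p, 0))   # Hartley entropy: log2(4) = 2 bits
print(renyi_entropy(p, 1))   # Shannon entropy: 1.75 bits
print(renyi_entropy(p, 2))   # collision entropy
```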
When the TFD entropy is calculated, odd integer values are suggested for the parameter α as the contribution of cross-terms oscillatory structures cancels under the integration with odd powers [24, 40].
The definition of Rényi entropy can be extended to continuous random variables by
$$ H(s)=\frac{1}{1-\alpha}log_{2}\int_{-\infty}^{+\infty}p^{\alpha}(x) dx. $$
When it is applied to a normalized TFD, the Rényi entropy is defined as
$$ H_{\alpha,(t,f)}=\frac{1}{1-\alpha}log_{2}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}{C^{\alpha}}(t, f) dt df, $$
where Cα(t,f) is TFD of the signal.
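In discrete form, this global measure can be sketched as follows; the normalization to unit sum and the default α=3 are assumptions consistent with the text, not the authors' code.

```python
import numpy as np

def tfd_renyi_entropy(C, alpha=3):
    # C is a 2-D array C(t, f); it is normalized before the entropy is evaluated.
    # Odd alpha is used, as suggested above for quadratic TFDs whose oscillatory
    # cross-terms may take negative values.
    C = np.asarray(C, dtype=float)
    C = C / np.sum(C)
    return np.log2(np.sum(C ** alpha)) / (1.0 - alpha)
```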
The proposed method
The proposed method, aimed at extracting useful information from noisy signals, relies on the hypothesis that a two-dimensional entropy map could provide a more suitable substrate for a sensitive extraction procedure, compared to the classical extraction procedures from TFDs. After obtaining the TFD of the signal, for each point in the distribution, the local entropy is calculated over square window sizes ranging from one to the one tenth of the signal size as
$$\begin{array}{@{}rcl@{}} H_{\rho(t,f)}^{\Delta}=\frac{1}{1-\alpha}log_{2}\int_{t-\Delta/2}^{t+\Delta/2}\int_{f-\Delta/2}^{f+\Delta/2}{C^{\alpha}}(t, f) dt df. \end{array} $$
The different window sizes are defined as
$$ \Delta=\{\Delta_{1},\Delta_{2},...\Delta_{n}\}, $$
$$\Delta_{1}=2 \times 2$$
$$\Delta_{n}=\frac{\text{signal length}}{10}\times \frac{\text{signal length}}{10}.$$
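A minimal sketch of the local entropy computation for a single time-frequency point is given below; the border handling (clipping of windows at the edges of the distribution) and the toy map are assumptions of the sketch.

```python
import numpy as np

def local_renyi_entropies(C, t, f, deltas, alpha=3):
    # Local Rényi entropies H^Delta at point (t, f) for square windows of side
    # length Delta_1 ... Delta_n; windows are clipped at the borders of C.
    H = []
    for d in deltas:
        t0, t1 = max(0, t - d // 2), min(C.shape[0], t + d // 2 + 1)
        f0, f1 = max(0, f - d // 2), min(C.shape[1], f + d // 2 + 1)
        patch = C[t0:t1, f0:f1]
        H.append(np.log2(np.sum(patch ** alpha)) / (1.0 - alpha))
    return np.array(H)

# toy usage: local entropies at the centre of a random 64 x 64 "TFD"
C = np.abs(np.random.randn(64, 64)); C /= C.sum()
deltas = np.arange(2, 64 // 10 + 1)          # window sides from 2 up to N/10
H_delta = local_renyi_entropies(C, 32, 32, deltas)
```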
The entropy values \(H_{\rho (t,f)}^{\Delta }(t,f)\) for each window size are given as input to the RICI algorithm to determine the window size for the given point based on the entropy changes. The window chosen by the RICI algorithm, in this case, corresponds to the first inflection point when entropy values are modeled as a curve, suggesting that a change in entropy behavior has occurred. In this case, the change in entropy behavior is an indicator of the point where noise starts to influence the entropy measure.
For every t and f the Rényi entropy \(H_{\rho (t,f)}^{\text {RICI}}(t,f)\) is calculated so that
$$ H_{\rho(t,f)}^{\text{RICI}}(t,f)=\text{RICI}\Big\{ H_{\rho(t,f)}^{\Delta}(t,f)\Big\}, $$
$$ H_{\rho(t,f)}^{\Delta}=\Big\{ H_{\rho(t,f)}^{\Delta_{1}},H_{\rho(t,f)}^{\Delta_{2}},...,H_{\rho(t,f)}^{\Delta_{n}}\Big\}. $$
HΔ represents the entropy calculation at the desired point for a specified window size.
The algorithm results are produced by observing the intersection of confidence intervals of the signal entropy for the given window size in comparison with the confidence intervals of the other proposed window sizes. The aim of applying the RICI rule to \(H_{\rho (t,f)}^{\Delta }(t,f)\) is to track the interval in which the change in the growth of the entropy occurs.
After the calculation is performed for every pair of t and f, the optimal entropy picture is obtained
$$ M= H_{\rho(t,f)}^{\text{RICI}}(t,f), t=1..N, f=1..M, $$
where N represents the number of time samples and M the number of frequency bins. The RICI algorithm selects the desired window size for entropy calculation by tracking the existence and estimating the amount of the intersection of confidence intervals.
In the RICI algorithm, the number of overlapping confidence intervals is calculated to reduce the estimation bias. The method calculates N confidence intervals for each M(n). To produce the function M(n) with a noticeable difference between the signal and the noise entropy, the overlapping of confidence intervals is calculated and the interval Δ+(n) defines the ideal interval. Δ+(n) presents the last index that has the lowest estimation error [41]. The estimation error is calculated as the pointwise mean squared error (MSE) as
$$ \text{MSE}(n,\Delta)=(\sigma(n,\Delta))^{2}+(\omega(n,\Delta))^{2}, $$
where σ(n,Δ) represents the estimation variance and ω(n,Δ) is the estimation bias.
In [42–44], the asymptotic estimation error is shown to demonstrate the following properties, where β is a constant and it is not signal-dependent
$$ \frac{|\omega(n,\Delta^{+})|}{\sigma(n,\Delta^{+})}=\beta. $$
When Δ>Δ+, β is defined as β>1 and β<1 if Δ<Δ+. The ideal window size Δ+ is the one providing the optimal bias-to-variance trade-off resulting in the best estimate M(n,Δ+).
Every confidence interval is defined by its lower and upper limits
$$\begin{array}{@{}rcl@{}} D(n,\Delta)=[L(n,\Delta),U(n,\Delta)]. \end{array} $$
The lower confidence interval L(n,Δ) limit is defined as
$$\begin{array}{@{}rcl@{}} L(n,\Delta)=M(n,\Delta)-\Gamma \times \sigma(n,\Delta), \end{array} $$
and upper confidence interval limit U(n,Δ) is defined as
$$\begin{array}{@{}rcl@{}} U(n,\Delta)=M(n,\Delta)+\Gamma \times \sigma(n,\Delta), \end{array} $$
where Γ is the threshold parameter of the confidence intervals.
The RICI rule, when compared to the original intersection of confidence interval (ICI) rule, introduces additional tracking of the amount of overlapping of confidence intervals, defined as
$$\begin{array}{@{}rcl@{}} O(n,\Delta) &=& \underline{U}(n,\Delta) - \overline{L}(n,\Delta), \end{array} $$
Δ=1,2,⋯,L. In order to obtain the value belonging to the finite interval [0,1], O(n,Δ) is divided by the size of the confidence interval D(n,Δ) resulting in R(n,Δ) defined as
$$\begin{array}{@{}rcl@{}} R(n,\Delta) &=& \frac{\underline{U}(n,\Delta) - \overline{L}(n,\Delta)}{U(n,\Delta)-L(n,\Delta)}. \end{array} $$
For the optimal window width selection by the RICI rule, the previously described procedure can be expressed as
$$\begin{array}{@{}rcl@{}} R(n,\Delta) &\geq& R_{c}, \end{array} $$
where Rc is a chosen threshold [41, 45, 46]. The window width Δ+ obtained by the RICI rule is defined as
$$\begin{array}{@{}rcl@{}} \Delta^{+}=\max \left\{ \Delta : R(n,\Delta) \geq R_{c} \right\}. \end{array} $$
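A compact sketch of this selection rule is given below; the values of Γ (gamma) and Rc, as well as the way the standard deviations σ(n,Δ) are supplied, are assumptions of the sketch rather than the authors' settings.

```python
import numpy as np

def rici_select(M, sigma, gamma=1.5, r_c=0.85):
    # M[k]: estimates for increasing window sizes; sigma[k]: their (strictly
    # positive) standard deviations. Returns the index of the RICI-selected window.
    M, sigma = np.asarray(M, float), np.asarray(sigma, float)
    L = M - gamma * sigma                  # lower confidence limits L(n, Delta)
    U = M + gamma * sigma                  # upper confidence limits U(n, Delta)
    L_bar = np.maximum.accumulate(L)       # largest lower limit so far
    U_und = np.minimum.accumulate(U)       # smallest upper limit so far
    R = (U_und - L_bar) / (U - L)          # relative overlap R(n, Delta)
    ok = np.where(R >= r_c)[0]
    return ok[-1] if ok.size else 0        # Delta+ = max{Delta : R >= R_c}
```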
This results in an image of the signal entropy. The flowchart of the algorithm is reported in Fig. 1.
Fig. 1 Flowchart of the proposed algorithm
Next, the mask for the original signal is extracted from the previously obtained time-frequency image again by using the RICI thresholding method.
The RICI thresholding method
To extract a mask from the optimal entropy map, the RICI method is used once again. Namely, the threshold is defined as
$$ \tau =\{0.01\times\max(M),0.02\times \max(M),\dots,0.99\times \max(M)\}. $$
For every τ, E(Mρ(t,f,τ)) is calculated and it represents the signal energy when a threshold is applied on the entropy map. E(Mρ(t,f,τ)) is the energy of the distribution for the chosen threshold τ. The energy calculation for every threshold is given as input to the RICI algorithm
$$ \tau^{+}=\text{RICI}\left\{ E(M_{\rho}(t,f,\tau))\right\}. $$
With that, the entropy mask is extracted
$$ \chi=M_{\rho}(t,f,\tau^{+}). $$
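The whole mask-extraction step can be sketched as follows, re-using the rici_select function sketched above; the definition of the retained "energy" and the spread estimate fed to the RICI rule are assumptions of this illustration.

```python
import numpy as np

def rici_threshold_mask(M_map, fractions=np.arange(0.01, 1.00, 0.01)):
    taus = fractions * M_map.max()                  # candidate thresholds tau
    # energy retained above each threshold (assumed here as the sum of kept values)
    E = np.array([M_map[M_map >= tau].sum() for tau in taus])
    # crude, assumed spread of the energy curve, required by the RICI rule
    sigma = np.full_like(E, E.std(ddof=1) / np.sqrt(E.size))
    k = rici_select(E, sigma)                       # index of tau+
    return M_map >= taus[k]                         # chi: points exceeding tau+
```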
The next section estimates the performances of the proposed approach.
Experimental setup
The method has been tested on four different types of signals, two of which were synthetic signals. The resulting error shows the difference between the non-zero elements when the mask of the noise-free signal is subtracted from the mask obtained by the tested method. A correct extraction yields all zeros in the resulting error map, where 1 marks a false negative and −1 a false positive. Two measures were used to evaluate the performance of the proposed method. The first one is accuracy, calculated as the proportion of points for which the obtained result matches the correct result. In this case, the points where the signal and noise were correctly classified are the 0 elements in the subtraction mask. In the metric calculations they constitute TruePositives(TP)+TrueNegatives(TN). TruePositives (TP) are correctly classified signal points and TrueNegatives (TN) are correctly classified points where the signal is not present. FalseNegatives (FN) are points where the signal is present but the mask obtained from this method discarded them as noise; the value of those points is 1 in the subtraction matrix. FalsePositives (FP) are points where the method misclassified noise as signal and are defined as −1 values in the subtraction matrix. A description of the used points is given in Table 2. In that case, accuracy is calculated as follows
$$ \text{accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}} $$
Table 2 Explanation of points in the map used for validation
As can be seen from the expression above, accuracy is not suitable for unbalanced data sets. In mask extraction, the useful signal takes only a portion of the whole set. F1 score is more suitable in cases when there is an uneven class distribution; in this specific case, it is more suitable as the useful signal takes only a smaller portion of the signal TFD. F1 score considers both precision and recall of the result. It is a harmonic mean between the two
$$ F1= 2\times \frac{\text{precision}\times \text{recall}}{\text{precision}+\text{recall}}, $$
$$ \text{ precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}, $$
$$ \text{recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}. $$
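Given the extracted mask and the mask of the noise-free signal as boolean arrays, these metrics can be computed with a short sketch such as the one below (function and variable names are illustrative).

```python
import numpy as np

def mask_metrics(mask_pred, mask_true):
    # Accuracy and F1 score of an extracted mask against the noise-free mask;
    # True marks time-frequency points classified as signal.
    tp = np.sum(mask_pred & mask_true)
    tn = np.sum(~mask_pred & ~mask_true)
    fp = np.sum(mask_pred & ~mask_true)
    fn = np.sum(~mask_pred & mask_true)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```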
Accuracy and F1 score are usually used as metrics in machine learning for evaluating classification models. In this case, they have been used to determine how well the obtained mask fits the given noise-free signal. FP and FN are the classification counterparts of type 1 and type 2 statistical errors. These metrics are used in several papers that deal with image [47, 48] and signal processing [49], such as EEG signals [50, 51]. In addition to the numerical results, images of the obtained signal masks are shown in Figs. 3, 4, 5, 6, 7, and 8, where the obtained masks are emphasized in yellow.
Fig. 2 TFD of the first noise-free signal (a), TFD of the second noise-free signal (b), RIDB distribution
Fig. 3 Results for the first tested signal with SNR=-3dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 15% on signal spectrogram (f)
Fig. 4 Results for the first tested signal with SNR=3 dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 5% on signal spectrogram (f)
Fig. 5 Results for the second tested signal with SNR=-3 dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), mask from applying fixed threshold of 15% on signal spectrogram (f)
Fig. 6 Results for the second tested signal with SNR=3 dB, TFD of the noisy signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), mask from applying fixed threshold of 10% on signal spectrogram (f)
Fig. 7 Results for the first real signal of the dolphin sound, TFD of the original signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 10% on signal spectrogram (f)
Fig. 8 Results for the second real seismology signal, TFD of the original signal (a), obtained optimal entropy map from spectrogram (b), obtained optimal entropy map from RIDB (c), mask obtained from applying RICI threshold on spectrogram (d), mask obtained from applying RICI threshold on RIDB (e), and mask from applying fixed threshold of 10% on signal spectrogram (f)
The first signal to be tested was the combination of three atoms as shown in Fig. 2. Noise was added with different signal-to-noise ratios (SNRs) and the extracted useful information content from signals, for SNR's −3 dB and 3 dB are shown in Figs. 3 and 4.
Results are shown in Tables 3 and 4 for the spectrogram distribution and in the Tables 5 and 6 for the RIDB distribution. Methods are compared by means of accuracy and F1 score from −3 dB to 10 dB SNR.
Table 3 Comparison of the accuracy measure applied on the spectrogram
Table 4 Comparison of the F1 measure applied on the spectrogram
Table 5 Comparison of the accuracy measure applied on the RIDB
Table 6 Comparison of the F1 measure applied on the RIDB
The proposed method was compared to the state-of-the-art algorithm based on local entropy in one dimension described in [35] as well as to the RICI thresholding of the TFD and the fixed thresholding of the signal TFD. The RICI TFD thresholding is performed similarly to the described procedure in Section 2.4 with the only difference that the input to the RICI operator in Eq. 23 is not the energy calculation for different τ of the optimal entropy map, but the energy calculation for different τ of the signal TFD
$$ \tau^{+}=\text{RICI}\left\{ E(\rho(t,f,\tau))\right\} $$
The extracted mask is then
$$ \chi=\rho(t,f,\tau^{+}). $$
A comparison of the obtained results for the signal spectrogram shows that the proposed method outperforms the fixed TFD thresholding, local entropy-based approach, and the RICI TFD threshold method in most cases.
Figure 3 shows the results obtained for the first synthetic signal with SNR=-3 dB. Figure 3a shows the spectrogram of the noisy signal. Figure 3b and c represent the optimal entropy maps for the spectrogram and RIDB, respectively. Results for the RICI thresholding are in Fig. 3d for the spectrogram, and in Fig. 3e for the RIDB. The result of fixed thresholding is in Fig. 3f.
Comparison of the methods' metrics for the spectrogram distribution is reported in Tables 3 and 4. The fixed thresholding has the highest error, while the proposed method gives similar results to the RICI TFD thresholding. The local entropy-based algorithm does not perform as well as the proposed method. While the proposed method has an accuracy higher by 0.001, the RICI TFD threshold has a slightly higher F1 score. The local entropy-based method seems to perform worse than both the fixed threshold and the proposed method when applied to the spectrogram in this case of SNR= −3 dB.
The proposed method performs far better on the RIDB distribution, as shown in Tables 5 and 6. The local entropy-based algorithm does not appear to be suitable for the RIDB distribution. When compared to the proposed method and the RICI TFD threshold, the differences between the methods' measurements are much greater than in the case of the spectrogram. The proposed method has accuracy higher by 0.102 and an F1 score higher by 0.018 when compared to the RICI TFD thresholding.
The fixed threshold method has a lower score in comparison to both the proposed method and the RICI TFD threshold in the case of both spectrogram and RIDB distribution. The local entropy-based algorithm does not perform as well as the proposed method or the RICI TFD thresholding for low SNR values.
The representation of obtained results for SNR=3 dB can be seen in Fig. 4. Figure 4a reports the spectrogram of the noisy signal. The optimal entropy map for the spectrogram and RIDB are in Fig. 4b and c. The results for the RICI thresholding are in Fig. 4d, for the spectrogram, and in Fig. 4e for the RIDB. The result of fixed thresholding is in Fig. 4f.
For the spectrogram (Tables 3 and 4), the RICI TFD threshold seems to have the best result, with an F1 score higher than that of the proposed method by 0.066 in the case of the first signal spectrogram. The local entropy-based method has accuracy lower than the proposed method by 0.015, but its F1 score is better by 0.011.
The proposed method still gives better results when applied to the RIDB distribution. It outperforms the RICI TFD threshold by 0.055 in accuracy and by 0.054 in the F1 score and fixed threshold by 0.094 in accuracy and by 0.051 in the F1 score. The entropy-based method has lower accuracy by 0.041 and F1 score by 0.073.
As can be seen, the proposed method produces results similar to those of the RICI TFD thresholding. Namely, it presents slightly better performance in all cases, except for the first signal when SNR=3 dB (in this case, the F1 score is highest for the RICI thresholding, while the accuracy measure is still higher for the proposed method). Differences in accuracy for the first signal range from 0.001, in the case of the spectrogram, to 0.102 in the case of the RIDB distribution.
The results for the second multi-component synthetic signal with added noise with SNR= −3 dB are reported in Fig. 5.
Figure 5a reports the spectrogram of the noisy signal. Figure 5b shows the obtained optimal entropy map from the spectrogram, and Fig. 5c represents the obtained optimal entropy map from the RIDB distribution. The map obtained from the RICI threshold is in Fig. 5d for the spectrogram, and in Fig. 5e for the RIDB. The result of fixed thresholding is in Fig. 5f.
The results for the spectrogram are presented in Tables 3 and 4. The accuracy measure is larger for the proposed method by 0.008 when compared to the RICI TFD threshold, and by 0.019 when compared to the best fixed threshold. The F1 measure of the proposed method is 0.001 higher than that of the RICI TFD threshold and 0.005 higher than the highest F1 score of the fixed threshold. The local entropy-based method, in this case, has the highest accuracy value but the lowest F1 score when compared to the proposed method and RICI TFD threshold.
From Tables 5 and 6, it is visible that, just as in the case of the first signal, the proposed method outperforms the other three. The differences in accuracy are 0.058 and 0.091 with respect to the RICI TFD threshold and the fixed threshold, respectively. Even though the accuracy is higher for the proposed method, the F1 score is in favor of the RICI TFD threshold by 0.081. The local entropy-based method has a slightly higher accuracy measure than the proposed method, but it also has a very low F1 score. In terms of the measures, the proposed method, in this case, has higher accuracy than the RICI TFD threshold and a higher F1 score than the entropy-based method.
The results for SNR=3 dB are in Fig. 6. Figure 6a represents the spectrogram of the noisy signal. The optimal entropy map is in Fig. 6b for the spectrogram and in Fig. 6c for the RID. Figure 6d and e report the RICI TFD threshold result for the spectrogram and RID while in Fig. 6f the results of the fixed threshold are reported.
The difference in accuracy between the proposed method and the RICI TFD threshold for the spectrogram is 0.002 (Table 3), while between the proposed method and fixed threshold, it is 0.011. The F1 score (Table 4) is higher for the proposed method, compared to all other methods.
When the methods are applied to the RID distribution, accuracy (Table 5) is higher by 0.058 when the proposed method is compared to the RICI TFD threshold and by 0.012 when compared to the entropy-based method.
The considerably larger improvement obtained by the proposed method can be observed for the RID distribution. In the case of the RID distribution, the proposed method exceeds all other approaches. Specifically, the proposed optimal entropy map increases the accuracy for different SNRs, when compared to the RICI thresholding, from 0.055 to 0.1 in case of the first signal, and from 0.017 to 0.058 in the case of the second signal, i.e., improvement from 5.86 to 11.53% in the case of the first signal and from 1.83 to 6.79% in the case of the second signal, and when compared to the local entropy-based algorithm, from 0.041 to 0.062 in case of the first signal, and for up to 0.026 in the case of the second signal, i.e., improvement from 4.03 to 6.65% in case of the first signal and up to 2.77% in the case of the second signal.
Differences between the results obtained by the spectrogram and the RID distribution are substantial. The RICI thresholding has considerably better performance on the signal spectrogram, regardless of the tested signal. The proposed method in the case of the first signal performs better on the RID, while in case of the second signal, the results are finer for the spectrogram.
The optimal entropy map provides similar results to the local entropy-based method and RICI TFD threshold when applied to the spectrogram, but it outperforms them when applied to the RID distribution. All three methods are preferred to the fixed thresholding.
Real-life examples
The proposed method has been applied to real-life signals, i.e., a dolphin sound and a seismology signal.
In Fig. 7, results obtained by the methods for the first real-life signal are displayed. In Fig. 7a, the RID distribution of the original signal is shown. The optimal entropy maps are in Fig. 7b and c for the spectrogram and RID distribution. Figure 7d and e present the maps obtained on the same distributions but by means of the RICI TFD threshold. The results for the fixed threshold of 30% is in Fig. 7f.
In Fig. 8, the extracted maps for a seismic signal are reported. Figure 8a shows the original signal's TFD. Figure 8b shows the optimal entropy map extracted from the spectrogram and Fig. 8c shows the optimal entropy map extracted from the RID distribution. Figure 8d and e present the maps obtained from the RICI TFD threshold for the same distributions. The result for the fixed threshold of 5% is reported in Fig. 8f.
It is difficult to draw conclusions in the case of the real-life signal as we can not obtain numerical results. Visually, the results are similar in the case of the dolphin sound signal analysis for all tested methods.
For the seismic signal, the largest difference between the obtained signal maps for the different approaches seems to be in the case of the spectrogram, where the optimal map preserves more of the signal. The RID distribution, unlike in the case of the dolphin sound, seems to preserve more of the signal.
Here, we introduced a method for blind source separation of signal components and extraction of useful information from noisy TFDs based on a 2D local Rényi entropy. The method uses adaptive windows, the size of which is calculated utilizing the RICI rule. One of the advantages of the approach is that it does not require any specific knowledge of the signal or noise. Also, the proposed technique performs well for different TFDs, as shown in the paper for different SNRs. The method has been applied to both synthetic and real-world signals. When compared to fixed TFD thresholding, the adaptive approach in which the RICI rule is applied directly to TFD thresholding, and the current 1D local entropy-based method, the proposed adaptive 2D Rényi entropy approach is shown to significantly increase classification accuracy and F1 score. Hence, the method can be used as an efficient tool for extracting useful information from noisy data in the time-frequency domain. Future work will explore combining the proposed approach with machine learning techniques to yield additional classification improvements.
Please contact the authors for data requests.
FT:
Fourier transform
ICI:
Intersection of confidence interval
MSE:
Mean squared error
RID:
Reduced interference distribution
RICI:
Relative intersection of confidence intervals
STFT:
Short-time Fourier transform
SNR:
Signal-to-noise-ratio
TFD:
Time-frequency distribution
B. Boashash, Time-frequency Signal Analysis and Processing: a Comprehensive Reference (Elsevier Academic Press, Australia, 2016).
Z. Hong, W. Qing-ping, P. Yu-jian, T. Ning, Y. Nai-chang, in 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). A sea corner-reflector jamming identification method based on time-frequency feature (Ningbo, 2015).
P. A. Karthick, G. Venugopal, S. Ramakrishnan, Analysis of surface emg signals under fatigue and non-fatigue conditions using b-distribution based quadratic time frequency distribution. J. Mech. Med. Biol.15(2) (2015).
M. A. Colominas, M. E. S. H. Jomaa, N. Jrad, A. Humeau-Heurtier, P. Van Bogaert, Time-varying time–frequency complexity measures for epileptic eeg data analysis. IEEE Trans. Biomed. Eng.65(8), 1681–8 (2018).
M. Noor Muhammad Hamdi, A. Z. Sha'ameri, Time-frequency representation of radar signals using doppler-lag block searching wigner-ville distribution. Adv Electr Electron Eng. 16: (2018).
Z. Wang, Y. Wang, L. Xu, in Communications, Signal Processing, and Systems. CSPS 2017. Lecture Notes in Electrical Engineering. Time-frequency ridge-based parameter estimation for sinusoidal frequency modulation signals (SpringerSingapore, 2019).
A. Mjahad, A. Rosado-Muñoz, J. F. Guerrero-Martínez, M. Bataller-Mompeán, J. V. Francés-Villora, M. K. Dutta, Detection of ventricular fibrillation using the image from time-frequency representation and combined classifiers without feature extraction. Appl. Sci.8(11) (2018).
Y. Zhao, S. Han, J. Yang, L. Zhang, H. Xu, J. Wang, A novel approach of slope detection combined with Lv's distribution for airborne SAR imagery of fast moving targets. Remote Sens.10:, 764 (2018).
D. Gabor, Theory of communication. J. Inst. Electr. Eng. Part III Radio Commun.93:, 429–457 (1946).
S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Sig. Process.41(12), 3397–3415 (1993).
J. A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory. 50(10), 2231–2242 (2004).
S. Chen, D. Donoho, M. Saunders, Atomic decomposition by basis pursuit. SIAM Rev.43(1), 129–159 (2001).
I. F. Gorodnitsky, B. D. Rao, Sparse signal reconstruction from limited data using focuss: a re-weighted minimum norm algorithm. IEEE Trans. Sig. Process.45(3), 600–616 (1997).
H. Mohimani, M. Babaie-Zadeh, C. Jutten, A fast approach for overcomplete sparse decomposition based on smoothed ℓ0norm. IEEE Trans. Sig. Process.57(1), 289–301 (2009).
J. Wen, H. Liu, S. Zhang, M. Xiao, A new fuzzy K-EVD orthogonal complement space clustering method. Neural Comput. Appl.24(1), 147–154 (2014).
E. Eqlimi, B. Makkiabadi, in 2015 23rd European Signal Processing Conference (EUSIPCO). An efficient K-SCA based underdetermined channel identification algorithm for online applications, (2015), pp. 2661–2665.
P. Addabbo, C. Clemente, S. L. Ullo, in 2017 IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace). Fourier independent component analysis of radar micro-doppler features, (2017), pp. 45–49.
A. Belouchrani, M. Amin, Blind source separation based on time-frequency signal representations. IEEE Trans. Sig. Process.46(11), 2888–2897 (1998).
F. Feng, M. Kowalski, Underdetermined reverberant blind source separation: sparse approaches for multiplicative and convolutive narrowband approximation. IEEE/ACM Tran. Audio Speech. Lang. Process.27(2), 442–456 (2019).
T. -H. Yi, X. -J. Yao, C. -X. Qu, H. -N. Li, Clustering number determination for sparse component analysis during output-only modal identification. J. Eng. Mech.145:, 04018122 (2019).
P. Zhou, Y. Yang, S. Chen, Z. Peng, K. Noman, W. Zhang, Parameterized model based blind intrinsic chirp source separation. Digit Sig. Process.83:, 73–82 (2018).
S. Senay, Time-frequency bss of biosignals. Healthcare Technol. Lett.5(6), 242–246 (2018).
P. Flandrin, R. G. Baraniuk, O. Michel, in Proc. IEEE Int. Conf. Acoustics Speech and Signal Processing ICASSP'94. Time-frequency complexity and information, (1994), pp. 329–332.
R. G. Baraniuk, P. Flandrin, A. J. E. M. Janssen, O. J. J. Michel, Measuring time-frequency information content using the Renyi entropies. IEEE Trans. Inf. Theory. 47(4), 1391–1409 (2001).
K. E. Hild, D. Erdogmus, J. Príncipe, Blind source separation using Renyi's mutual information. IEEE Sig. Process. Lett.8(6), 174–176 (2001).
D. Erdogmus, K. E. Hild Ii, J. C. Principe, Blind source separation using Renyi's α-marginal entropies. Neurocomputing. 49(1–4), 25–38 (2002).
K. E. Hild, D. Pinto, D. Erdogmus, J. C. Principe, Convolutive blind source separation by minimizing mutual information between segments of signals. IEEE Trans. Circ. Syst. I Regular Papers. 52(10), 2188–2196 (2005).
K. E. Hild II, D. Erdogmus, J. C. Principe, An analysis of entropy estimators for blind source separation. Sig. Process.86(1), 182–194 (2006).
X. Yao, T. Yi, C. Qu, H. Li, Blind modal identification using limited sensors through modified sparse component analysis by time–frequency method. Comput-Aided Civil Infrastruct Eng. 33: (2018).
F. Ye, J. Chen, L. Gao, W. Nie, Q. Sun, A mixing matrix estimation algorithm for the time-delayed mixing model of the underdetermined blind source separation problem. Circ. Syst. Sig. Process., 1–18 (2018).
Q. Guo, G. Ruan, L. Qi, A complex-valued mixing matrix estimation algorithm for underdetermined blind source separation. Circ. Syst. Sig. Process.37(8), 3206–3226 (2018).
Q. Guo, C. Li, R. Guoqing, Mixing matrix estimation of underdetermined blind source separation based on data field and improved fcm clustering. Symmetry. 10:, 21 (2018).
X. -Y. Zhang, W. -R. Wang, C. -Y. Shen, Y. Sun, L. -X. Huang, in Advances in intelligent information hiding and multimedia signal processing, ed. by J. -S. Pan, P. -W. Tsai, J. Watada, and L. C. Jain. Extraction of EEG components based on time - frequency blind source separation (SpringerCham, 2018), pp. 3–10.
N. Saulig, Z. Milanovic, C. Ioana, A local entropy-based algorithm for information content extraction from time-frequency distributions of noisy signals. Digit. Sig. Process.70: (2017).
F. Hlawatsch, G. F. Boudreaux-Bartels, Linear and quadratic time-frequency signal representations. IEEE Sig. Process. Mag.9(2), 21–67 (1992).
L. Cohen, Time-frequency distributions-a review. Proc. IEEE. 77(7), 941–981 (1989).
Z. Guo, L.-G. Durand, H. C. Lee, The time-frequency distributions of nonstationary signals based on a Bessel kernel. IEEE Trans. Sig. Process.42(7), 1700–1707 (1994).
C. E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J.27(3), 379–423 (1948).
A. Rényi, in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. On measures of entropy and information (University of California PressBerkeley, 1961), pp. 547–561.
J. Lerga, M. Vrankic, V. Sucic, A signal denoising method based on the improved ICI rule. IEEE Sig. Process. Lett.15:, 601–604 (2008).
A. Goldenshluger, A. Nemirovski, On spatial adaptive estimation of nonparametric regression. Math. Methods Stat.6: (1997).
V. Katkovnik, A new method for varying adaptive bandwidth selection. IEEE Trans. Sig. Process.47:, 2567–2571 (1999).
K. Egiazarian, V. Katkovnik, L. Astola, in 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), 3. Adaptive window size image denoising based on ICI rule, (2001), pp. 1869–18723.
G. Segon, J. Lerga, V. Sucic, Improved LPA-ICI-based estimators embedded in a signal denoising virtual instrument. Sig. Image Video Process.11: (2016).
J. Lerga, M. Franušić, V. Sucic, Parameters analysis for the time-varying automatically adjusted LPA based estimators. J. Autom. Control Eng.2:, 203–208 (2014).
G. Blanco, A. J. M. Traina, C. Traina Jr., P. M. Azevedo-Marques, A. E. S. Jorge, D. de Oliveira, M. V. N. Bedo, A superpixel-driven deep learning approach for the analysis of dermatological wounds. Comput. Methods Prog. Biomed.183:, 105079 (2020).
H. Li, H. Li, J. Kang, Y. Feng, J. Xu, Automatic detection of parapapillary atrophy and its association with children myopia. Comput. Methods Prog. Biomed.183:, 105090 (2020).
F. M. Bayer, A. J. Kozakevicius, R. J. Cintra, An iterative wavelet threshold for signal denoising. Sig. Process.162:, 10–20 (2019).
M. Sharma, S. Singh, A. Kumar, R. S. Tan, U. R. Acharya, Automated detection of shockable and non-shockable arrhythmia using novel wavelet-based ECG features. Comput. Biol. Med.115:, 103446 (2019).
J. S. Lee, S. J. Lee, M. Choi, M. Seo, S. W. Kim, QRS detection method based on fully convolutional networks for capacitive electrocardiogram. Expert Syst. Appl.134:, 66–78 (2019).
This work was fully supported by the Croatian Science Foundation under the projects IP-2018-01-3739 and IP-2020-02-4358, Center for Artificial Intelligence and Cybersecurity - University of Rijeka, University of Rijeka under the projects uniri-tehnic-18-17 and uniri-tehnic-18-15, and European Cooperation in Science and Technology (COST) under the project CA17137.
University of Rijeka, Faculty of Engineering, Department of Computer Engineering, Vukovarska 58, Rijeka, 51000, Croatia
Ana Vranković & Jonatan Lerga
University of Rijeka, Center for Artificial Intelligence and Cybersecurity, Radmile Matejcic 2, Rijeka, 51000, Croatia
Jonatan Lerga
Juraj Dobrila University of Pula, Department of Technical Studies, Zagrebacka 30, Pula, 52100, Croatia
Nicoletta Saulig
Ana Vranković
Conceptualization, A.V, J.L., and N.S.; methodology, A.V. and J. L.; software, A.V..; validation, A.V., and N.S.; investigation, A.V., and J.L.; resources, A.V. and J.L.; data curation, J.L.; writing—original draft preparation, A.V.; writing—review and editing, J.L. and N.S.; supervision, J.L.; project administration, J.L.; funding acquisition, N.S. The author(s) read and approved the final manuscript.
Correspondence to Jonatan Lerga.
This research does not contain any individual person's data in any form (including individual details, images, or videos).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Vranković, A., Lerga, J. & Saulig, N. A novel approach to extracting useful information from noisy TFDs using 2D local entropy measures. EURASIP J. Adv. Signal Process. 2020, 18 (2020). https://doi.org/10.1186/s13634-020-00679-2
Rényi entropy
Adaptive thresholding | CommonCrawl |
How to explain molecular geometry without the help of VSEPR, valence bond, or hybridization theories?
I was taught, at the high school level, how to rationalise molecular geometries with the help of VSEPR, valence bond, and hybridization theories.
However, I have recently also come to know that these theories are outdated and have many limitations. So, how can I rationalise these geometries without the help of these theories? Can I use molecular orbital theory for that, and if so, how?
molecular-orbital-theory molecular-structure hybridization valence-bond-theory vsepr-theory
Hisab
$\begingroup$ see chemistry.stackexchange.com/questions/33879/… $\endgroup$ – Mithoron Jul 1 '17 at 22:43
Short Answer: No, you can't. Reason being that so far this is the only way devised, which works most of the time.
Long Answer: The Molecular Orbital Theory is, in practice, a complement to the Valence Bond Theory, Hybridization and VSEPR.
There are successes as well as shortcomings of both the Molecular Orbital Theory, and the other three.
In general, the VSEPR theory is used to predict shapes of molecules and ions, not the Molecular Orbital Theory. It can be extended to a large number of covalent compounds, but it has a few limitations as far as ionic compounds and multi-centered bonds are concerned. The MOT tells us that when atoms combine, they form molecular orbitals, which are lower in energy than the atomic orbitals. It is generalizable, and can be extended to even semiconductors and superconductors. It is very useful when it comes to finding the actual energies associated with every single orbital.
The Valence Bond Theory, along with Hybridization and VSEPR, is the one that is generally used to find the shapes of molecules. The reason is simple: The Molecular Orbital Theory tells us the arrangements of delocalized electrons in Molecular Orbitals, whereas, the other theories tell us the possible positions of the combined atoms after considering that the electrons are localized and repel each other.
However, there is the case of multi-center bonds, such as the three-center two-electron (3c-2e) bonds often called banana bonds, which can be explained only by the Molecular Orbital Theory. A famous example would be diborane, $\ce{B2H6}$.
Thus, both the Valence Bond approach as well as the Molecular Orbital approach are often required for an accurate prediction of the shape of a molecule.
I quote an article from Chemistry LibreTexts:
Both the MO and VB theories are used to help determine the structure of a molecule. Unlike the VB theory, which is largely based off of valence electrons, the MO theory describes structure more in depth by taking into consideration, for example, the overlap and energies of the bonding and anti-bonding electrons residing in a particular molecular orbital. While MO theory is more involved and difficult, it results in a more complete picture of the structure of a chosen molecule. Despite various shortcomings, complete disregard of one theory and not the other would hinder our ability to describe the bonding in molecules.
Thus, although the VB, VSEPR, and Hybridization theories are outdated and have many limitations, they, along with the Molecular Orbital Theory provide us with a toolbox for almost any molecule possible.
(If there is any place where I have erred, please correct me)
AbhigyanC
$\begingroup$ This is somewhat true, but maybe a bit too simplistic an answer. MOT offers many ways of predicting structure of molecules (or perhaps more often, rationalising structures of molecules once they have been determined experimentally). In terms of being able to explain structures, it doesn't matter whether the model used is localised or delocalised. $\endgroup$ – orthocresol♦ Jul 1 '17 at 4:00
$\begingroup$ @orthocresol True that.... I was trying to provide a simplistic answer for the person asking, as he sounded like a high school student. In fact, I am not aware about the undermining details of the MOT $\endgroup$ – AbhigyanC Jul 1 '17 at 4:16
$\begingroup$ //the MO theory describes structure more in depth by taking into consideration, for example, the overlap and energies of the bonding and anti-bonding electrons residing in a particular molecular orbital. While MO theory is more involved and difficult, it results in a more complete picture of the structure of a chosen molecule. // How? How can MOT tell us that for example, methane is tetrahedral? How can MOT give us a more complete 'picture' of the structure of a molecule? $\endgroup$ – Hisab Jul 1 '17 at 4:34
$\begingroup$ Fair enough. For methane you may wish to refer to Albright, Burdett, Whangbo "Orbital Interactions in Chemistry" 2nd ed. p 193 onwards. The correlation diagram indicates that a tetrahedral $T_\mathrm{d}$ geometry is favoured over a square planar $D_\mathrm{4h}$ one (for example). Essentially this is linked to better overlap between C and H orbitals in $T_\mathrm{d}$ geometry. More sophisticated calculations will allow one to find the geometry with the minimum energy - this is routinely done with computers nowadays. (Not always with MOT, though.) $\endgroup$ – orthocresol♦ Jul 1 '17 at 4:44
$\begingroup$ @orthocresol I request you to kindly add that and more information to a more complete answer, and post it. I really want to know more. I am willing to delete my answer in favor of a more complete one. $\endgroup$ – AbhigyanC Jul 1 '17 at 4:57
New keratinolytic bacteria in valorization of chicken feather waste
Wojciech Łaba ORCID: orcid.org/0000-0002-2068-36411,
Barbara Żarowska1,
Dorota Chorążyk2,
Anna Pudło2,
Michał Piegza1,
Anna Kancelista1 &
Wiesław Kopeć2
AMB Express volume 8, Article number: 9 (2018) Cite this article
There is an increasing demand for cost-effective and ecologically-friendly methods for valorization of poultry feather waste, in which keratinolytic bacteria present a great potential. Feather-degrading bacteria were isolated from living poultry and a single strain, identified as Kocuria rhizophila p3-3, exhibited significant keratinolytic properties. The bacterial strain effectively degraded up to 52% of chicken feathers during 4 days of culture at 25 °C. Zymographic analysis revealed the presence of two dominating proteolytic enzymes in the culture fluid. Culture conditions were optimized in order to maximize the liberation of soluble proteins and free amino acids. A two-step procedure was used, comprising a Plackett–Burman screening design, followed by a Box–Behnken design. Concentration of feather substrate, MgSO4 and KH2PO4 were the most influential parameters for the accumulation of soluble proteins in culture K. rhizophila p3-3, while feathers and MgSO4 also affected the concentration of amino acids. The resultant raw hydrolysate supernatant, prior to and after additional treatments, was rich in phenylalanine, histidine, arginine and aspartic acid. Additionally the hydrolysate exhibited radical-scavenging activity and ferric reducing power.
Intense development of human economic activity, including agricultural and animal production as well as leather processing industries, is associated with the discharge of by-products into the environment. Despite the fact that the amount of waste animal tissues from the poultry industry is relatively low compared with the processing of other animal products, waste management of poorly degradable keratin, mainly feathers, poses significant difficulties (Kopec et al. 2014). The annual global waste of chicken feathers amounts to 8.5 million tons (Fellahi et al. 2014). Feathers are composed of 95–98% protein, predominantly β-keratin. The dominating amino acids in its structure comprise cysteine, glutamine, proline, as well as serine, the most abundant amino acid (Tiwary and Gupta 2012). Keratins are insoluble in water and exhibit high resistance to physical and chemical treatments, as well as to typical proteolytic enzymes. The degradation of these proteins is possible with the participation of specific microbial proteolytic enzymes, i.e., keratinases, frequently supported by chemical or enzymatic reducing agents (Lange et al. 2016).
Typical techniques for keratin waste processing into feed ingredients include mechanical, hydrothermal and thermo-chemical treatments that facilitate protein digestion and assimilability. However, these modifications are usually costly and energy-consuming, and the resulting products are in large part characterized by low nutritional value, variability of the amino acid composition, as well as deficiency in basic amino acids (Coward-Kelly et al. 2006; Staron et al. 2010). Additional treatments with concentrated alkalies (KOH, NaOH, Ca(OH)2) or reducing compounds (Na2SO3, Na2S), despite increased efficiency of keratin hydrolysis, lead to the formation of other troublesome effluents, loss of methionine, lysine and tryptophan, and formation of non-protein amino acids, lanthionine and lysinoalanine (Gupta et al. 2013).
Since severe legal restrictions were imposed in 2000 in the European Union on the use of processed animal tissues for feeding livestock, the demand for keratin meals has undergone a significant decline (Korniłłowicz-Kowalska and Bohacz 2011). This is the reason for the increasing interest in novel routes for the management of the increasing stream of keratinous waste.
As biotechnological methods are considered as cost-effective and environment-friendly, an interesting alternative to these techniques is microbial degradation, due to the lower cost, mild process conditions, lack of the ecological hazard and the output of potentially relevant products. Microorganisms break down keratin to peptides and amino acids, that accumulate in culture medium, and are partially metabolized as basic building elements—carbon and nitrogen (Vasileva-Tonkova et al. 2009). The interest in microbiologically obtained keratin hydrolysates is driven by a variety of their prospective applications. Another route for bioconversion of keratin waste is hydrolysis with cell-free keratinase extracts or purified keratinases. This approach allows for more controlled hydrolysis. Moreover, when combined with thermal or thermo-chemical pretreatment, it becomes applicable in production of hydrolysates with advantageous amino acid balance, at high efficiency.
Keratinases and the follow-on keratin hydrolysates may also be applied in obtaining cheap, useful products, such as nitrogen-rich fertilizers, compostable films, biodegradable materials and reinforced fabrics (Singh and Kushwaha 2015). Keratinases could be effective as components of detergents, in the manufacturing of personal care products and in the modification of fibers, such as wool or silk. Their prospective applications also include their use in medicine for the treatment of psoriasis and acne, as an adjunct in the treatment of nail diseases, as well as in prion protein degradation (Gupta and Ramnani 2006; Selvam and Vishnupriya 2012). Moreover, keratin hydrolysis products may be considered as a potential source of bioactive peptides (Choinska et al. 2011). Recently, peptides of various biological activities, obtained through microbial fermentation of chicken feathers, have been described. Among them, peptides of anti-oxidative potential deserve special attention, due to the growing interest in applicable natural antioxidants (Fakhfakh et al. 2011; Fontoura et al. 2014).
Nevertheless, other applications of keratinases should be denoted as exceptionally promising in industrial circumstances. One of the target areas is the leather industry, where keratinases support or carry out the dehairing process, allowing lime-sulfide treatment to be at least partially replaced. Also, the application of keratin hydrolysates allowed for the reduction of chromium effluents from the process of tanning (Balaji et al. 2008). Another vital area is the introduction of keratinolytic microorganisms into the initial biodegradation stage, preceding the bioconversion of keratin hydrolysates into biogas (Patinvoh et al. 2016).
Numerous bacteria, actinomycetes and filamentous fungi, including dermatophytic species, have been described as keratin decomposers. The dominant group of microorganisms capable of keratinase biosynthesis are bacteria of the genus Bacillus: among others, B. subtilis, B. pumilus, B. cereus, B. coagulans, B. licheniformis or B. megaterium. Degradation of keratin proteins can also be conducted by a number of other Gram-positive bacteria, such as Lysobacter, Nesterenkonia, Kocuria, Microbacterium, and some Gram-negative bacteria, e.g. Vibrio, Xanthomonas, Stenotrophomonas and Chryseobacterium. Similar abilities were found among thermophilic and extremophilic microorganisms, represented by the genera Fervidobacterium, Thermoanaerobacter, Nesterenkonia and Bacillus (Nam et al. 2002; Gupta and Ramnani 2006; Brandelli et al. 2015).
Here we describe the isolation and screening of keratinolytic bacteria that effectively decompose chicken feathers, as well as optimization of culture conditions for one bacterial isolate to maximize accumulation of proteins and amino acids and characterization of the resultant hydrolysate.
Microbiological material was obtained from domestic birds: chicken (Gallus gallus), goose (Anser anser), turkey (Meleagris gallopavo) and duck (Cairina moschata). Isolation of bacterial strains was performed with two methods: swab samples from 1 cm2 of skin surface were washed with 0.1% Tween 80, and 0.1 g feather samples were washed for 30 min under agitation. The obtained suspensions were inoculated onto LB Agar and incubated for 72 h at 25 °C. The resultant colonies were collected, passaged and the isolates were screened for proteolytic activity.
Screening of proteolytic isolates
Screening for proteolytic activity of isolates was performed in two stages. At first, each isolate was inoculated on skim milk agar (skim milk powder 8%, agar 2%) and incubated for 48 h at 25 °C, in order to determine the ratio (Q) between the clear zone around colonies and colony diameter expressed in millimeters. Afterwards, selected isolates with the highest Q were cultured in liquid medium (FM) composed of (% w/v): MgSO4 0.1, KH2PO4 0.01, FeSO4·7H2O 0.001, CaCl2 0.01, yeast extract 0.05 and white chicken feathers (washed and degreased) 1.0. Cultures were carried out for 4 days, at 25 °C under 180 rpm agitation. Maximum values of soluble protein, free amino groups, reduced thiols and proteolytic activity of each isolate were compared. The most effective feather-degrading isolate, selected for further study, was deposited in the Polish Collection of Microorganisms (PCM) of the Institute of Immunology and Experimental Therapy Polish Academy of Sciences under Accession Number PCM 2931.
Identification and molecular phylogenetic studies
The identification of selected bacterial isolates was based on the sequence analysis of the 16S rDNA genes. The product was amplified by PCR with the following universal primers: (27 F) AGAGTTTGATCGTGGCTCAG and (1492l R) GGTTACCTTGTTACGACT under a standard procedure. The PCR product was purified from reaction components and sequenced using the same primers. The obtained sequences were subjected to the Ribosomal Database Project (RDP) release 10 in order to find related nucleotide sequences. The sequence alignment and phylogenetic study were performed using MAFFT version 6 and Archaeopteryx version 0.9914 (Cole et al. 2014). The nucleotide sequences were submitted to the GenBank database of the National Centre for Biotechnology Information (NCBI) under the accession numbers listed in Table 2.
Optimization of feather degradation by a selected bacterial isolate
Biodegradation of chicken feathers by a selected bacterial isolate was optimized using a three-step methodology: selection of culture temperature, determination of significant factors affecting the process and optimization of the three most influential parameters. The release of soluble proteins and amino acids from feathers during bacterial cultures served as measures of substrate biodegradation (dependent variables). Each value of the dependent variables was the maximum outcome observed during the 4-day cultures. All cultures were carried out in 250 mL conical flasks, in 50 mL of media.
The effect of culture temperature on the maximum level of soluble protein, free amino groups, proteolytic activity and substrate loss was evaluated in FM medium at 25–40 °C with 5 °C interval, under 180 rpm agitation.
Preliminary screening of factors affecting biodegradation of feathers was performed according to a Plackett–Burman factorial design. Seven factors were selected for the screening: concentration of feathers (A), MgSO4·7H2O (B), KH2PO4 (C), CaCl2 (D), yeast extract (E), quantity of inoculum (F) and agitation speed (G), used at two different levels coded as − 1 and + 1 (Table 1).
Table 1 Independent variables for the performed experimental designs in coded and natural values
Statistical optimization of three most influential parameters, concentration of feathers (A), MgSO4·7H2O (B) and KH2PO4 (C), was performed according to a 13-run Box–Behnken design with four replicates at the central point. Each culture run was performed in duplicate. Three levels of each independent variable were coded as − 1, 0 and + 1, according to Table 1. The relationship between the independent variables and the response was formulated as the second-order polynomial equation (Eq. 1):
$$ Y = \beta_{0} + \beta_{1}X_{1} + \beta_{2}X_{2} + \beta_{3}X_{3} + \beta_{11}X_{1}X_{1} + \beta_{22}X_{2}X_{2} + \beta_{33}X_{3}X_{3} + \beta_{12}X_{1}X_{2} + \beta_{13}X_{1}X_{3} + \beta_{23}X_{2}X_{3} $$
where Y was the predicted response, β0 was the intercept and regression coefficients were designated as follows: β1, β2, β3 (linear), β11, β22, β33 (square) and β12, β13, β23 (interaction). The Box–Cox transformation, experimental design, polynomial equation fit, regression and ANOVA statistics, were performed with Statistica 12.5 software (StatSoft Inc.). Optimal values were obtained for the three dependent variables simultaneously using the Profiler tool of Statistica 12.5.
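The authors fitted this model in Statistica 12.5; purely as an illustration, an equivalent least-squares fit of Eq. (1) can be sketched in Python/NumPy as below, where the design matrix columns follow the order of the coefficients in the equation (this sketch is not the software actually used).

```python
import numpy as np

def fit_second_order(X, y):
    # X: (n x 3) array of coded factor levels (feathers, MgSO4, KH2PO4 at -1/0/+1),
    # y: measured response (e.g. soluble protein). Returns the ten coefficients
    # in the order b0, b1, b2, b3, b11, b22, b33, b12, b13, b23.
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    D = np.column_stack([
        np.ones(len(y)), x1, x2, x3,          # intercept and linear terms
        x1 * x1, x2 * x2, x3 * x3,            # square terms
        x1 * x2, x1 * x3, x2 * x3,            # interaction terms
    ])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta
```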
Production and treatments of feather hydrolysate
Optimal culture conditions were adapted from the results of the Box–Behnken design to produce feather hydrolysate. After culture, the fluid was subjected to two methods of treatment: autoclaving (121 °C, 1 atm, 20 min) and sonication (5 min in cycles of 0.5 s/0.5 s, at 4 °C). The treated samples were centrifuged, and the profile of free amino acids and antioxidative properties were determined for the supernatants.
Analytical determinations
Proteolytic activity
Proteolytic activity was determined on bovine hemoglobin 1 mg/mL (Sigma-Aldrich), in Tris–HCl buffer pH 9.5 (0.05 M), at 55 °C. The reaction was terminated with trichloroacetic acid (TCA) 8%. The mixture was cooled for 20 min, centrifuged (12,000g, 10 min) and the absorbance was measured at the 280 nm wavelength. One unit of proteolytic activity was expressed as 1 μmol of released tyrosine calculated per 1 mL of culture fluid within 1 min.
Soluble proteins
Concentration of soluble proteins in culture fluids was determined using the Coomassie (Bradford) Protein Assay Kit (Thermo Scientific), with bovine serum albumin as a standard.
Free amino groups
Concentration of free amino groups in culture fluids was determined with the ninhydrin method, with glycine as a standard (Sun et al. 2006).
Sulfhydryl groups
Concentration of reduced sulfhydryl groups was determined with Ellman's reagent according to Riener et al. (2002).
Feather substrate loss
Substrate loss was determined after separation of the residual substrate on Whatman grade 2 filter paper and drying at 105 °C. The result was expressed as the percentage of the initial amount of feathers introduced into the culture media, taking into consideration the initial substrate moisture.
Radical scavenging capability
The following methods were applied for measuring the total antioxidant capacity of hydrolysates: ABTS radical-scavenging activity was determined using the Trolox-equivalent antioxidant capacity assay according to Re et al. (1999), where the inhibition of ABTS+ radicals was compared to a Trolox standard and expressed as micromoles of Trolox per 100 mL of hydrolysate; DPPH radical-scavenging activity was determined according to Jang et al. (2008), except that an ethanolic solution of DPPH was used, as described by Milardovic et al. (2006), where the ability to scavenge DPPH radicals was calculated from data obtained for the Trolox standard and expressed as micromoles of Trolox per 100 mL of hydrolysate; FRAP (ferric reducing antioxidant power) was assayed according to Benzie and Strain (1996) and expressed as μmol of Fe2+ per 100 mL of the hydrolysate.
Zymography
Zymographic analysis of the culture supernatant of the selected bacterial isolate was performed. The sample was mixed at a 1:1 ratio with the sample buffer (Tris–HCl 0.32 M, pH 6.8; glycerol 48%; SDS 8%; bromophenol blue 0.06%). A sample of 5 or 10 μL was loaded onto a 12% polyacrylamide gel (5% stacking gel) containing 0.1% of copolymerized casein. PageRuler Prestained (Thermo Scientific) was used as a reference marker. Electrophoresis was performed at a constant 18 mA, at 2 °C. Subsequently, the gel was washed twice with Triton-X 2.5%, once with the incubation buffer, and incubated for 24 h at 28 °C in the same buffer (Tris–HCl 0.05 M, pH 7.5, containing CaCl2 2 mM and NaN3 0.02%). Proteolytic activity bands were visualized by staining with Coomassie Blue and destaining with methanol:acetic acid:water (50:10:40).
Microscopic observations
Visual examination of feathers decomposed within bacterial culture was performed using scanning electron microscopy (SEM) on a Hitachi S3400 microscope.
Amino acid profile of feather hydrolysate
The profile of free amino acids in feather hydrolysates was determined with HPLC, as described by Henderson et al. (2000). Initial derivatization with o-phthalaldehyde was performed. The analysis was made on an HPLC 1100 Series system (Agilent Technologies) equipped with a ZORBAX Eclipse-AAA column, 4.6 × 150 mm, 3.5 μm (Agilent Technologies).
Isolation and screening of keratinolytic bacteria
The plumage and skin surface of domestic birds were used as a convenient source of proteolytic bacteria with potentially keratinolytic properties. As a result of the isolation procedure, a total of 55 proteolytic bacterial isolates was obtained from 36 original samples. Spot tests on skim milk agar revealed several isolates of outstanding proteolytic activity, exhibiting clear zone widths around colonies in the range of 5.5–10.5 mm (Additional file 1: Figure S1). Eight of the isolates were selected for liquid cultures in medium with feathers as the main nutrient source, where products of substrate decomposition and proteolytic activity were determined. Significant diversity in the concentration of hydrolysis products was observed among the tested isolates.
The concentration of soluble proteins ranged from 77 to 147 μg/mL and that of free amino groups from 5.82 to 20.74 mM (Fig. 1). In cultures of each of the tested isolates the presence of reduced thiols was confirmed, within a range of 0.012–0.082 mM. Likewise, in each case comparably moderate caseinolytic activity was observed, between 0.019 and 0.068 U. Nevertheless, none of the isolates prevailed in terms of every measured factor simultaneously. The isolate p3-3 was selected for further study, as it exhibited reasonable values of all tested parameters, and especially high levels of reduced thiols and amino acids.
Comparison of keratinolytic potential of selected bacterial isolates grown in feather-containing medium. Maximum concentration of hydrolysis products and proteolytic activity from 4-day cultures was given. Light grey bars indicate concentration of reduced thiols; dark grey bars indicate proteolytic activity; diamonds indicate protein concentration; circles indicate concentration of amino acids
Identification of bacterial isolates
The initial comparison of the 16S rDNA partial sequences of the nine tested isolates with the RDP database revealed that most of them belong to the Kocuria genus, with six isolates identified as K. rhizophila, and a single strain as Pantoea anthophila, with high identity scores (Table 2). The neighbor-joining phylogenetic tree demonstrated the location of the strain p3-3 in the branch comprising K. varians, K. salsicia and K. marina, specifically on the sub-branch of K. rhizophila (Fig. 2).
Table 2 Identification results for the selected feather-degrading bacterial isolates
Phylogenetic tree indicating a position of the p3-3 isolate within Kocuria genus based on 16S rDNA. Phylogenetic tree was built with the neighbor-joining method from the relationships of 16S rDNA sequences between the isolate p3-3 and closely related type strains. Bootstrap values are indicated at the branching points (percent values from 500 replicate bootstrap samplings). The bar represents evolutionary distance of 0.01
Degradation of feathers in cultures of K. rhizophila p3-3
The course of feather biodegradation by K. rhizophila p3-3 was analyzed in 4-day submerged cultures in feather-containing medium, in terms of proteolytic activity and accumulation of hydrolysis products (Fig. 3). The highest protease production (0.072 U) was observed on the initial day of culture and was followed by a declining trend. The peak of soluble proteins released from the feather substrate appeared on the third day of culture and reached 179 μg/mL. The concentration of free amino groups increased throughout the tested culture course to reach a maximum value of 44.5 mM on the fourth day. The presence of reduced thiols in the growth environment was also confirmed.
Culture course of K. rhizophila p3-3 in feather-containing medium. Proteolytic activity and accumulation of hydrolysis products were determined during culture of K. rhizophila p3-3 in the presence of 1% (w/v) feathers in agitated culture. Diamonds indicate protein concentration; circles indicate concentration of amino acids; triangles indicate proteolytic activity; squares indicate concentration of reduced thiols
Zymographic analysis of the culture fluid was performed in polyacrylamide gel copolymerized with casein. Two activity bands were determined: a minor band of approx. 80 kDa and a dominating band between 130 and 180 kDa (Fig. 4).
Casein zymography of proteases from K. rhizophila p3-3. The sample of culture fluid was taken from the 4-th day of culture in feather-containing medium. Lane 1—protein ladder; lane 2—5 μL sample; lane 3—10 μL sample
As a result of the degradative action of K. rhizophila p3-3 on the keratinous substrate, significant deterioration of feather structures was noted. Detachment and advanced fragmentation of feather barbs, along with disruption of the surface of rachea, were demonstrated in the SEM images (Fig. 5). Sparse colonization of the substrate surface by bacterial cells was observed.
Scanning electron microscopy observations of feather degradation. SEM images of feather degradation after 4-day culture of K. rhizophila p3-3 depict: fragmentation of feather barbs (a), disruption of rachea (b), deterioration of barbule surface (c, d)
Effect of culture temperature on degradation of feathers
The process of feather biodegradation by K. rhizophila p3-3 was optimized. Determination of a suitable culture temperature was performed prior to the optimization procedure employing statistical models. It was verified that the most significant keratin biodegradation occurred under mesophilic conditions. A culture temperature of 25 °C allowed for both the highest substrate loss and the maximum accumulation of hydrolysis products (Table 3). Increasing the culture temperature by 5 °C virtually inhibited the accumulation of proteins and amino acids and was accompanied by a nearly 10% decrease in substrate solubilization, despite comparable proteolytic activity. A further temperature increment diminished protease production and feather degradation even more.
Table 3 Effect of culturing temperature on feather substrate degradation
Screening of independent variables with Plackett–Burman design
The following step of the optimization was based on a Plackett–Burman experimental design, aimed at selection of culture parameters most influential for the release of proteins and amino acids from feathers during cultures of K. rhizophila p3-3.
The Plackett–Burman design is a useful and frequently applied tool for screening independent variables that exert a significant influence on the dependent variable. Nevertheless, its application requires some discretion in setting the intervals of the tested parameters, as the model is strictly based on linear regression.
It was determined that all selected independent variables influenced the release of soluble proteins from the keratinous substrate (Table 4).
Table 4 Experimental layout and results of the Plackett–Burman experimental design
As shown by the Pareto graph of standardized effects, the highest influence was attributed to the substrate concentration. Also, a negative effect of MgSO4 was observed, as well as a positive effect of KH2PO4 (Fig. 6a). The release of amino acids depended proportionally on the feather content and the concentrations of CaCl2 and phosphate, but not MgSO4 (Fig. 6b).
Pareto graph of effects. Pareto graph of standardized effects derived from the Plackett–Burman experimental design, concerning the release of soluble proteins (a) and amino acids (b)
Optimization of medium composition with a Box–Behnken design
The final stage of the optimization incorporated major influencing parameters, namely concentration of feathers, MgSO4 and KH2PO4, to define their effect on the release of soluble proteins from the keratinous substrate. Box–Behnken experimental design was applied to formulate the specific relationship between independent and dependent variables. The experiment was run according to the layout in Table 5.
Table 5 Experimental design with actual and predicted responses for the Box–Behnken design where the independent variables were designated: X1—feather content, X2—concentration of MgSO4, X3—concentration of KH2PO4
Prior to the analysis, the experimental data were evaluated for the need of transformation. The significant Chi2 of the Box–Cox transformation statistics, with a lambda of −0.1467, suggested that the residual sum of squares could be reduced (Additional file 2: Table S1). Therefore, a natural logarithm transformation was applied, after which the Chi2 became insignificant, indicating no need for further data transformation. Transformation of the dependent variable representing amino acid concentration was unnecessary, as the p value of its Chi2 was above 0.05.
A regression model was developed for the process of microbial degradation of feathers, which was characterized by high suitability, as indicated by the high coefficient of determination R2 = 0.9683 (R2 adj. = 0.9206), implying that over 96% of the variation of the dependent variable was described by the model. According to the model, the significance of all three independent variables was confirmed: the concentration of feather substrate (X1) positively affected protein release, while MgSO4 (X2) had a negative effect, and both exhibited a linear influence on the response. The concentration of KH2PO4 exhibited a quadratic and negative effect (Table 6). In addition, a significant interaction between variables X1 and X3 was shown.
Table 6 Effect evaluation of two regression models from the Box–Behnken design, for the release of soluble proteins and amino acids
The analysis of standardized effects made it possible to establish the following order of independent variables, according to their influence on the dependent variable: X1 > X2 > X3X3 > X1X3. ANOVA results indicated the significance of the obtained model, according to the F value of 20.3, additionally confirmed by an insignificant "lack of fit" test (Additional file 3: Table S2). The response surfaces were plotted to study interactions among the tested factors. The linear characteristics of the variables X1 and X2 implied response surfaces with maximum values located at the edges of the plot (Fig. 7a, b). Hence, the maximum applied concentration of substrate and the minimum of MgSO4 resulted in the maximum response. The non-linear effect of KH2PO4, combined with its interaction with the feather content, resulted in a saddle-shaped characteristic, where the maximum applied concentration of both was preferential (Fig. 7c).
Response contour plots for the accumulation of soluble proteins (a–c) and amino acids (d) representing interaction effects of tested independent variables. Accumulation of soluble proteins as a function of feathers concentration vs MgSO4 (a), feathers vs KH2PO4 (b), MgSO4 vs KH2PO4 (c). Accumulation of free amino acids as a function of feathers concentration vs MgSO4 (d)
The obtained regression results were used to define the polynomial equation (Eq. 2) describing the model (significant terms underlined):
Previous screening of significant independent variables revealed that the concentrations of feathers, MgSO4 and KH2PO4 were also influential factors for the release of amino acids in cultures of K. rhizophila p3-3 grown in feather medium (Table 6). Based on this fact, an additional regression model was developed, which was characterized by a good coefficient R2 = 0.9097 (R2 adj. = 0.7742) and an acceptable F value of 6.7, followed by the "lack of fit" test with p = 0.1758 (Additional file 4: Table S3). When compared to the previously used Plackett–Burman model, it was confirmed that the independent variables X1 and X2 were significant, whereas the variable X3, which represents the concentration of KH2PO4, did not produce a significant response.
The plotted response surface revealed a possible optimal point for the process of amino acids production from feathers, where the concentration of feathers and MgSO4 was at the level of 4.3 and 0.07%, respectively (Fig. 7d).
The regression results were used to define the polynomial equation (Eq. 3) to describe the model (significant terms underlined).
Evaluation of the feather hydrolysate
Finally, a concluding culture of K. rhizophila p3-3 was performed in the culture medium in which three components had been optimized. Specific concentrations of the components were selected to achieve the maximum concentration of soluble proteins, i.e. feathers 5.0%, MgSO4·7H2O 0.03%, KH2PO4 0.01% (Additional file 5: Table S4). The concentrations of the remaining medium ingredients and the culture parameters were taken from the Plackett–Burman model, selecting the tested low (−1) or high (+1) value depending on its positive or negative effect. A maximum concentration of soluble proteins of 659 ± 34 µg/mL (678 µg/mL predicted) was attained on the third day of culture, with a simultaneous 48.1 ± 1.5% loss of substrate weight.
The amino acid profile was determined in the soluble fractions of feather hydrolysates, directly in the culture supernatant and after ultrasonic treatment or autoclaving of the raw culture broth. The treatments were aimed at enhancing the extraction of proteins and amino acids from bacterial cells. In the raw hydrolysate supernatant several dominating amino acids were determined, of which phenylalanine was dominant (approx. 50 μg/mL), as well as arginine, histidine, aspartic acid and alanine (Additional file 6: Table S5). The remaining amino acids appeared below the level of 10 μg/mL. The application of additional treatments to the culture broth increased the concentration of most amino acids in the supernatant by approximately 40% in total; however, it did not affect the content of the prevailing phenylalanine.
In addition, the anti-oxidative properties of the hydrolysate were evaluated using three analytical methods. Notable free radical-scavenging potential was observed, mainly towards ABTS. Ferric reducing antioxidant power was also determined. Additional treatments of the broth resulted in increased anti-oxidative activities of the resulting hydrolysates (Table 7).
Table 7 Anti-oxidative properties of feather hydrolysates prior to and after treatments
One of the current trends in biotechnology is the application of microbially mediated processes in the valorization of food industry by-products, including keratinous wastes from poultry processing. Exploitation of keratin proteins from poultry feather waste through enzymatic or microbial processes has been widely discussed in terms of prospects and economic conditions, where keratinolytic microorganisms often play a crucial role. From the total number of proteolytic bacterial isolates of poultry origin obtained in the study, nine exhibited considerable keratinolytic potential. Poultry plumage appeared to be a convenient source of keratinolytic microorganisms. Keratin-rich niches, which mostly include keratin waste dumps or living birds, are typically considered the best isolation sites; however, some keratinolytic strains were acquired from other sources, such as soil or poultry farm sites. Nevertheless, the variety of the isolated bacteria did not reflect the typical composition of plumage microflora, in which feather-degrading bacteria from the genera Bacillus, Pseudomonas, Staphylococcus, Streptococcus, Stenotrophomonas and Escherichia are most frequent (Shawkey et al. 2005; Sivakumar and Raveendran 2015). Micrococcus sp. or the closely related Kocuria sp., although relatively abundant, are less frequently analyzed in terms of keratin-degrading capabilities; however, when they are, they present immense potential. The thoroughly evaluated Kocuria rosea LPB-3, highly effective in the biodegradation of chicken feathers, was of soil origin (Vidal et al. 2000), while M. luteus, also active on feathers, was obtained from feather waste (Łaba et al. 2015).
A single isolate, K. rhizophila p3-3, which exhibited significant capabilities for feather degradation, was selected for the optimization study. The tested strain exhibited its highest degradative capabilities at 25 °C, in contrast to K. rosea cultured at 40 °C, as reported by Bernal et al. (2003). Despite that, the final feather biodegradation rates of the two microorganisms were highly comparable.
Initially, a typical medium was used in which, besides basal components, feathers served as the main source of carbon and nitrogen. The medium was supplemented with yeast extract (0.5 g/L), which is often used to support the initial growth of bacteria in the presence of a hardly degradable substrate (Barman et al. 2017). The selection was based on the maximum accumulation of keratin biodegradation products and proteolytic activity during growth in feather-containing medium. The concentration of free amino acids derived from decomposed keratin was superior in the culture of K. rhizophila p3-3 compared with cultures of K. rosea, capable of accumulating up to 26 mM amino acids (Vidal et al. 2000). Nevertheless, the dynamics of the process was comparable and represented a constantly growing trend. The concentration of soluble proteins was notable (179 μg/cm3), but it was lower than in cultures of highly proteolytic bacilli (de Oliveira et al. 2016). Reduced thiols were also detected in the culture fluid, although at concentrations below 0.1 mM. The presence of reduced cysteine residues is often considered an indirect measure of keratin biodegradation and is associated with proposed mechanisms of keratinolysis that involve the synergistic action of enzymatic or chemical reducing factors (Korniłłowicz-Kowalska and Bohacz 2011). The result is in accordance with M. luteus B1pz, but in contrast to other microorganisms such as Bacillus sp. or streptomycetes, in cultures of which the concentration of free thiols can largely exceed 1 mM (Ramnani et al. 2005; Ramnani and Gupta 2007; Łaba et al. 2015).
Proteolytic activity of K. rhizophila p3-3 in the unoptimized medium was below 0.1 U, slightly lower than that of K. rosea proteases, recalculated from a comparable protocol, although obtained under different conditions. Keratinolytic bacteria are typically associated with immense, at least onefold higher, proteolytic activity against casein and a variety of proteinaceous substrates. Nevertheless, the undisputed feather-degrading capability of K. rhizophila might suggest the occurrence of complementary keratinolytic mechanisms.
The profile of proteolytic enzymes released into culture medium during growth on a proteinaceous substrate is a species-dependent feature and usually involves multiple activity bands present in zymograms. Proteolytic bacilli, which belong to the most frequently characterized keratinolytic bacteria, typically produce a number of activity bands, e.g. 7 in the case of B. subtilis 1271 (Mazotto et al. 2011) or 7 bands in cultures of B. cereus PCM 2849 (Łaba et al. 2017), both grown in feather-containing media. In contrast, keratinolytic cocci produced fewer extracellular proteases, i.e. two activity bands of > 200 kDa and a band of 90.2 kDa in culture of K. rosea (Bernal et al. 2003), or four proteases, > 200, 185, 139 and 62 kDa in cultures of M. luteus B1pz (Łaba et al. 2015).
Screening of the culture parameters most influential for the accumulation of proteins and amino acids was conducted according to the Plackett–Burman design. The most influential parameter was the concentration of the main substrate, namely feathers. It is natural that substrate concentration appears as one of the most important factors, not only for the accumulation of degradation products, but also for keratinolytic activity (Paul et al. 2014). It determines not only carbon availability for bacteria and the initial output level of hydrolysis products, but also affects bacterial growth and enzyme activity through faster accumulation of products. The presence of additional carbon sources, such as saccharides or peptones, besides the keratinous inducer, was confirmed by some authors to be relevant for the production of keratinases (Ramnani and Gupta 2004; Cai and Zheng 2009); however, it becomes less rational when biodegradation of the feather substrate is the goal. In the case of K. rosea, the feather substrate concentration along with magnesium sulphate appeared to be most influential for keratinase production (Bernal et al. 2006). Sulphates are typical mineral medium components for culturing bacteria in the presence of keratins and are often considered in optimization studies. Nevertheless, the results from the Plackett–Burman design revealed a negative impact of increasing magnesium sulphate concentration in the tested range.
The change in the concentration of yeast extract did not have a significant effect on feather degradation in cultures of K. rhizophila; however, the role of this component varies for different bacteria. As an example, the addition of yeast extract is beneficial for both the proteolytic activity and the biomass yield of Micrococcus sp. INIA 528 (Mohedano et al. 1997) and supports keratin biodegradation by B. licheniformis SHG10 (Embaby et al. 2015). Nevertheless, its excessive concentration could limit keratin biodegradation (Zaghloul et al. 2011).
The applied Box–Behnken design made it possible to define the relationships between the three most influential medium components, feathers, MgSO4 and KH2PO4, that affect biodegradation of the feather substrate. The optimum raw feather content for keratinase production varies in different reports and concentrations below 1.5% are most frequent; however, to maximize the accumulation of hydrolysis products, concentrations up to 8% were preferable (Embaby et al. 2010, 2015; Silva et al. 2014; Paul et al. 2014; Maciel et al. 2017). It is noteworthy that to maintain submerged cultivation the maximum applicable concentration of raw down feathers is approximately 7% (w/v). In the presented study, the concentration of proteins released from feathers depended almost linearly on their initial content; however, a specific concentration of 4.25% was beneficial for increasing the amino acid content. The negative influence of MgSO4 on the liberation of soluble proteins was in accordance with the Plackett–Burman model; however, it stimulated the accumulation of amino acids in the culture medium. The addition of 0.07% MgSO4 should therefore be considered if advanced keratin hydrolysis to amino acids is required. The saddle-type effect of KH2PO4 on the concentration of proteins implied that minimizing or even removing this additional source of phosphorus should be considered.
The amino acid composition of the soluble fraction of the obtained hydrolysates was not only a result of the hydrolytic action of microbial enzymes on the keratinous substrate, but also of the enrichment of the hydrolysate with bacterial cell components. The predominant occurrence of glutamine and aspartic acid was in accordance with the feather hydrolysate produced by K. rosea (Bertsch and Coello 2005). However, there were significant differences in the content of valine and leucine, typically most abundant in feather meals, but also of histidine, methionine and phenylalanine (Adejumo et al. 2016). The high concentration of the latter might be a result of the predominant chymotrypsin-like specificity of the proteases, typical for many known keratinases (Brandelli et al. 2010). Nonetheless, according to Bertsch and Coello (2005), fermentation of feathers within a culture of K. rosea was advantageous not only in order to improve the amino acid balance of the keratin hydrolysate, but also to improve the overall digestibility of the product.
It is notable that feather hydrolysates obtained during fermentation with K. rhizophila p3-3 exhibited significant free radical-scavenging activity, as well as ferric reducing antioxidant power. Antioxidative properties of protein hydrolysates of plant and animal origin, including feather and wool hydrolysates, have recently attracted special interest. This antioxidative potential is known to result from the presence of bioactive peptides, which, in turn, depend on the enzyme specificity and the substrate applied in the hydrolysis.
FM:
feather medium
rpm:
revolutions per minute
RDP:
Ribosomal Database Project
MAFFT:
multiple alignment using fast Fourier transform
Tris:
tris(hydroxymethyl)aminomethane
TCA:
trichloroacetic acid
ABTS:
2,2′-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid)
DPPH:
2,2-diphenyl-1-picrylhydrazyl
FRAP:
ferric reducing antioxidant power
SEM:
scanning electron microscopy
HPLC:
high-performance liquid chromatography
Adejumo IO, Adetunji CO, Ogundipe K, Osademe SN (2016) Chemical composition and amino acid profile of differently processed feather meal. J Agric Sci 61:237–246. https://doi.org/10.2298/JAS1603237A
Balaji S, Karthikeyan R, Kumar M, Senthil Babu NK, Chandra Sehgal PK (2008) Microbial degradation of horn meal with Bacillus subtilis and its application in leather processing: a twofold approach. J Am Leather Chem Assoc 103:89–93
Barman NC, Zohora FT, Das KC, Mowla MG, Banu NA, Salimullah M, Hashem A (2017) Production, partial optimization and characterization of keratinase enzyme by Arthrobacter sp. NFH5 isolated from soil samples. AMB Expr 7:181. https://doi.org/10.1186/s13568-017-0462-6
Benzie IF, Strain JJ (1996) The ferric reducing ability of plasma (FRAP) as a measure of antioxidant power. The FRAP assay. Anal Biochem 239:70–76. https://doi.org/10.1006/abio.1996.0292
Bernal C, Vidal L, Valdivieso E, Coello N (2003) Keratinolytic activity of Kocuria rosea. World J Microbiol Biotechnol 19:255–261. https://doi.org/10.1023/A:1023685621215
Bernal C, Diaz I, Coello N (2006) Response surface methodology for the optimization of keratinase production in culture medium containing feathers produced by Kocuria rosea. Can J Microbiol 52:445–450. https://doi.org/10.1139/w05-139
Bertsch A, Coello N (2005) A biotechnological process for treatment and recycling poultry feathers as a feed ingredient. Bioresour Technol 96:1703–1708. https://doi.org/10.1016/j.biortech.2004.12.026
Brandelli A, Daroit DJ, Riffel A (2010) Biochemical features of microbial keratinases and their production and applications. Appl Microbiol Biotechnol 85:1735–1750. https://doi.org/10.1007/s00253-009-2398-5
Brandelli A, Sala L, Kalil SJ (2015) Microbial enzymes for bioconversion of poultry waste into added-value products. Food Res Int 73:3–12. https://doi.org/10.1016/j.foodres.2015.01.015
Cai C, Zheng X (2009) Medium optimization for keratinase production in hair substrate by a new Bacillus subtilis KD-N2 using response surface methodology. J Ind Microbiol Biotechnol 36:875–883. https://doi.org/10.1007/s10295-009-0565-4
Choińska A, Łaba W, Rodziewicz A, Bogacka A (2011) Proteolysis of chicken feather keratin using extra-cellular proteolytic enzymes of Bacillus cereus B5e/sz strain. Żywność Nauka Technologia Jakość 6:204–213
Cole JR, Wang Q, Fish JA, Chai B, McGarrell DM, Sun Y, Brown CT, Porras-Alfaro A, Kuske CR, Tiedje JM (2014) Ribosomal Database Project: data and tools for high throughput rRNA analysis. Nucl Acids Res 42(D1):D633–D642. https://doi.org/10.1093/nar/gkt1244
Coward-Kelly G, Agbogbo FK, Holtzapple MT (2006) Lime treatment of keratinous materials for the generation of highly digestible animal feed: 1. Chicken feathers. Bioresour Technol 97:1344–1352. https://doi.org/10.1016/j.biortech.2005.05.017
Embaby AM, Zaghloul TI, Elmahdy AR (2010) Optimizing the biodegradation of two keratinous wastes through a Bacillus subtilis recombinant strain using a response surface methodology. Biodegradation 21:1077–1092. https://doi.org/10.1007/s10532-010-9368-6
Embaby AM, Marey HS, Hussein A (2015) A statistical–mathematical model to optimize chicken feather waste bioconversion via Bacillus licheniformis SHG10: a low cost effective and ecologically safe approach. J Bioprocess Biotech 5:231. https://doi.org/10.4172/2155-9821.1000231
Fakhfakh N, Ktari N, Haddar A, Mnif IH, Dahmen I, Nasri M (2011) Total solubilisation of the chicken feathers by fermentation with a keratinolytic bacterium, Bacillus pumilus A1, and the production of protein hydrolysate with high antioxidative activity. Process Biochem 46:1731–1737. https://doi.org/10.1016/j.procbio.2011.05.023
Fellahi S, Zaghloul TI, Feuk-Lagerstedt E, Taherzadeh MJ (2014) A Bacillus strain able to hydrolyze alpha- and beta-keratin. J Bioprocess Biotech 4:1–7. https://doi.org/10.4172/2155-9821.1000181
Fontoura R, Daroit DJ, Correa APF, Meira SMM, Mosquera M, Brandelli A (2014) Production of feather hydrolysates with antioxidant, angiotensin-I converting enzyme- and dipeptidyl peptidase-IV inhibitory activities. N Biotechnol 31:506–513. https://doi.org/10.1016/j.nbt.2014.07.002
Gupta R, Ramnani P (2006) Microbial keratinases and their prospective applications: an overview. Appl Microbiol Biotechnol 70:21–33. https://doi.org/10.1007/s00253-005-0239-8
Gupta R, Rajput R, Sharma R, Gupta N (2013) Biotechnological applications and prospective market of microbial keratinases. Appl Microbiol Biotechnol 97:9931–9940. https://doi.org/10.1007/s00253-013-5292-0
Henderson JW, Ricker RD, Bidlingmeyer BA, Woodward C (2000) Rapid, accurate, sensitive, and reproducible HPLC analysis of amino acids. Agilent Technologies. http://www.chem.agilent.com/Library/chromatograms/59801193.pdf. Accessed 10 Oct 2017.
Jang A, Liu XD, Shin MH, Lee BD, Lee SK, Lee JH, Jo C (2008) Antioxidative potential of raw breast meat from broiler chicks fed a dietary medicinal herb extract mix. Poult Sci 87:2382–2389. https://doi.org/10.3382/ps.2007-00506
Kopeć M, Gondek K, Orłowska K, Kulpa Z (2014) The use of poultry slaughterhouse waste to produce compost. Ecol Eng 37:143–150
Korniłłowicz-Kowalska T, Bohacz J (2011) Biodegradation of keratin waste: theory and practical aspects. Waste Manag 31:1689–1701. https://doi.org/10.1016/j.wasman.2011.03.024
Łaba W, Choińska A, Rodziewicz A, Piegza M (2015) Keratinolytic abilities of Micrococcus luteus from poultry waste. Braz J Microbiol 46:691–700. https://doi.org/10.1590/S1517-838246320140098
Łaba W, Chorążyk D, Pudło A, Trojan-Piegza J, Piegza M, Kancelista A, Kurzawa A, Żuk I, Kopeć W (2017) Enzymatic degradation of pretreated pig bristles with crude keratinase of Bacillus cereus PCM 2849. Waste Biomass Valorization 8:527–537. https://doi.org/10.1007/s12649-016-9603-4
Lange L, Huang Y, Busk PK (2016) Microbial decomposition of keratin in nature—a new hypothesis of industrial relevance. Appl Microbiol Biotechnol 100:2083–2096. https://doi.org/10.1007/s00253-015-7262-1
Maciel JL, Werlang PO, Daroit DJ, Brandelli A (2017) Characterization of protein-rich hydrolysates produced through microbial conversion of waste feathers. Waste Biomass Valorization 8:1177–1186. https://doi.org/10.1007/s12649-016-9694-y
Mazotto AM, de Melo AC, Macrae A, Rosado AS, Peixoto R, Cedrola SM, Couri S, Zingali RB, Villa AL, Rabinovitch L, Chaves JQ, Vermelho AB (2011) Biodegradation of feather waste by extracellular keratinases and gelatinases from Bacillus spp. World J Microbiol Biotechnol 27:1355–1365. https://doi.org/10.1007/s11274-010-0586-1
Milardovic S, Ivekovic D, Grabaric BS (2006) A novel amperometric method for antioxidant activity determination using DPPH free radical. Bioelectrochemistry 68:175–180. https://doi.org/10.1016/j.bioelechem.2005.06.005
Mohedano AF, Fernandez J, Gaya P, Medina M, Nunez M (1997) Effect of pH, temperature and culture medium composition on the production of an extracellular cysteine proteinase by Micrococcus sp. INIA 528. J Appl Microbiol 82:81–86. https://doi.org/10.1111/j.1365-2672.1997.tb03300.x
Nam GW, Lee DW, Lee HS, Lee NJ, Kim BC, Choe EA, Hwang JK, Suhartono MT, Pyun YR (2002) Native-feather degradation by Fervidobacterium islandicum AW-1, a newly isolated keratinase-producing thermophilic anaerobe. Arch Microbiol 178:538–547. https://doi.org/10.1007/s00203-002-0489-0
Oliveira CT, Pellenz L, Pereira JQ, Brandelli A, Daroit DJ (2016) Screening of bacteria for protease production and feather degradation. Waste Biomass Valorization 7:447–453. https://doi.org/10.1007/s12649-015-9464-2
Patinvoh RJ, Feuk-Lagerstedt E, Lundin M, Sárvári Horváth I, Taherzadeh MJ (2016) Biological pretreatment of chicken feather and biogas production from total broth. Appl Biochem Biotechnol 180:1401–1415. https://doi.org/10.1007/s12010-016-2175-8
Paul T, Das A, Mandal A, Halder SK, DasMohapatra PK, Pati BR, Moundal KC (2014) Valorization of chicken feather waste for concomitant production of keratinase, oligopeptides and essential amino acids under submerged fermentation by Paenibacillus woosongensis TKB2. Waste Biomass Valorization 5:575–584. https://doi.org/10.1007/s12649-013-9267-2
Ramnani P, Gupta R (2004) Optimization of medium composition for keratinase production on feather by Bacillus licheniformis RG1 using statistical methods involving response surface methodology. Biotechnol Appl Biochem 40:191–196. https://doi.org/10.1042/BA20030228
Ramnani P, Gupta R (2007) Keratinases vis-à-vis conventional proteases and feather degradation. World J Microbiol Biotechnol 23:1537–1540. https://doi.org/10.1007/s11274-007-9398-3
Ramnani P, Singh R, Gupta R (2005) Keratinolytic potential of Bacillus licheniformis RG1: structural and biochemical mechanism of feather degradation. Can J Microbiol 51:191–196. https://doi.org/10.1139/w04-123
Re R, Pellegrini N, Proteggente A, Pannala A, Yang M, Rice-Evans C (1999) Antioxidant activity applying an improved ABTS radical cation decolorization assay. Free Radic Biol Med 26:1231–1237. https://doi.org/10.1016/S0891-5849(98)00315-3
Riener CK, Kada G, Gruber HJ (2002) Quick measurement of protein sulfhydryls with Ellman's reagent and with 4,4′-dithiodipyridine. Anal Bioanal Chem 373:266–276. https://doi.org/10.1007/s00216-002-1347-2
Selvam K, Vishnupriya B (2012) Biochemical and molecular characterization of microbial keratinase and its remarkable applications. Int J Pharm Biol Sci Arch 3:267–275
Shawkey MD, Mills KL, Dale C, Hill GE (2005) Microbial diversity of wild bird feathers revealed through culture-based and culture-independent techniques. Microbial Ecol 50:40–47. https://doi.org/10.1007/s00248-004-0089-4
Silva LAD, Macedo AJ, Termignoni C (2014) Production of keratinase by Bacillus subtilis S14. Ann Microbiol 64:1725–1733. https://doi.org/10.1007/s13213-014-0816-0
Singh I, Kushwaha RKS (2015) Keratinases and microbial degradation of keratin. Adv Appl Sci Res 6:74–82
Sivakumar N, Raveendran S (2015) Keratin degradation by bacteria and fungi isolated from a poultry farm and plumage. Br Poult Sci 56:210–217. https://doi.org/10.1080/00071668.2014.996119
Staroń P, Banach M, Kowalski Z, Wzorek Z (2010) Unieszkodliwianie wybranych odpadów poubojowych na drodze hydrolizy. Tech Trans Chem 107:333–341
Sun SW, Lin YC, Weng YM, Chen MJ (2006) Efficiency improvements on ninhydrin method for amino acid quantification. J Food Compos Anal 19:112–117. https://doi.org/10.1016/j.jfca.2005.04.006
Tiwary E, Gupta R (2012) Rapid conversion of chicken feather to feather meal using dimeric keratinase from Bacillus licheniformis ER-15. J Bioprocess Biotech 2:123. https://doi.org/10.4172/2155-9821.1000123
Vasileva-Tonkova E, Gousterova A, Neshev G (2009) Ecologically safe method for improved feather wastes biodegradation. Int Biodeterior Biodegrad 63:1008–1012. https://doi.org/10.1016/j.ibiod.2009.07.003
Vidal L, Christen P, Coello MN (2000) Feather degradation by Kocuria rosea in submerged culture. World J Microbiol Biotechnol 16:551–554. https://doi.org/10.1023/A:1008976802181
Zaghloul TI, Embaby AM, Elmahdy AR (2011) Biodegradation of chicken feather waste directed by Bacillus subtilis recombinant cells: scaling up in a laboratory scale fermentor. Bioresour Technol 102:2387–2393. https://doi.org/10.1016/j.biortech.2010.10.10619
WŁ designed the work, created and analyzed optimization experimental designs, performed molecular identification of isolates, performed electrophoretic analysis, wrote the manuscript; BŻ isolated and maintained bacterial strains, led microbiological procedures; DC and AP performed the evaluation of final feather hydrolysates; MP and AK performed biochemical and chemical analyses; WK advised on the concept and methods. All authors read and approved the final manuscript.
Key data concerning our findings is available in the paper and additional files.
Publication supported by Wroclaw Centre of Biotechnology, programme The Leading National Research Centre (KNOW) for years 2014–2018.
Department of Biotechnology and Food Microbiology, Wrocław University of Environmental and Life Sciences, Chełmońskiego 37, 51-630, Wrocław, Poland
Wojciech Łaba, Barbara Żarowska, Michał Piegza & Anna Kancelista
Department of Animal Products Technology and Quality Management, Wrocław University of Environmental and Life Sciences, Chełmońskiego 37, 51-630, Wrocław, Poland
Dorota Chorążyk, Anna Pudło & Wiesław Kopeć
Wojciech Łaba
Barbara Żarowska
Dorota Chorążyk
Anna Pudło
Michał Piegza
Anna Kancelista
Wiesław Kopeć
Correspondence to Wojciech Łaba.
Screening of bacterial isolates for proteolytic activity. Proteolytic activity was determined on skim milk agar plates and expressed as clear zone diameter. Isolates obtained from feather samples were designated with letter "p"; grey bars indicate isolates selected for further study.
Box–Cox transformation statistics of dependent variables.
Analysis of variance (ANOVA) for the obtained regression model for the release of soluble proteins.
Analysis of variance (ANOVA) for the obtained regression model for the release of amino acids.
Determined values of independent variables to maximize different responses.
Concentration of dominant amino acids in feather hydrolysates prior to and after treatments.
Łaba, W., Żarowska, B., Chorążyk, D. et al. New keratinolytic bacteria in valorization of chicken feather waste. AMB Expr 8, 9 (2018). https://doi.org/10.1186/s13568-018-0538-y
Accepted: 15 January 2018
Keratinase
Kocuria rhizophila | CommonCrawl |
Dynamic elementary mode modelling of non-steady state flux data
Abel Folch-Fortuny (ORCID: orcid.org/0000-0001-6845-0807)1,2,
Bas Teusink3,
Huub C.J. Hoefsloot4,
Age K. Smilde4 &
Alberto Ferrer1
BMC Systems Biology volume 12, Article number: 71 (2018)
A novel framework is proposed to analyse metabolic fluxes in non-steady state conditions, based on the new concept of dynamic elementary mode (dynEM): an elementary mode activated partially depending on the time point of the experiment.
Two methods are introduced here: dynamic elementary mode analysis (dynEMA) and dynamic elementary mode regression discriminant analysis (dynEMR-DA). The former is an extension of the recently proposed principal elementary mode analysis (PEMA) method from steady state to non-steady state scenarios. The latter is a discriminant model that makes it possible to identify which dynEMs behave strongly differently depending on the experimental conditions. Two case studies of Saccharomyces cerevisiae, with fluxes derived from simulated and real concentration data sets, are presented to highlight the benefits of this dynamic modelling.
This methodology makes it possible to analyse metabolic fluxes at early stages with the aim of i) creating reduced dynamic models of flux data, ii) combining many experiments in a single biologically meaningful model, and iii) identifying the metabolic pathways that drive the organism from one state to another when changing the environmental conditions.
Data analysis methods are widely used in Systems Biology to interpret different kinds of data. In the field of fluxomics, principal component analysis (PCA) [1] models have been proposed to obtain a set of key pathways in metabolic networks, assuming steady state conditions [2, 3]. Basically, these key pathways are groups of correlated metabolic fluxes measured in different experiments. Multivariate curve resolution (MCR) [4] was afterwards proposed to obtain this set of metabolic pathways, exploiting the ability of MCR to include constraints in the algorithm, driving the model to a more biologically meaningful solution [5].
The drawback of PCA and MCR is that the components do not represent metabolic routes connecting substrates with end-products, but separate groups of concatenated reactions in the network. To enhance the interpretability of PCA and MCR, principal elementary mode analysis (PEMA) [6] was proposed to build a multivariate model using thermodynamically feasible pathways retrieved directly from the network. In the PEMA model, fluxes from different experiments are projected onto the most representative set of elementary modes (EMs) of the metabolic network. The EMs are the simplest representations of pathways in the metabolic network; basically, each EM connects substrates with end-products by concatenating reactions.
In non-steady state conditions, the state of the network at a particular time point of the biological process is defined by the concentration of each metabolite in the cell, and metabolites may interact via one or more reactions. Each reaction is represented by an ordinary differential equation (ODE) relating chemical compounds. Since metabolic networks may have hundreds of reactions, it is hard to build kinetic models requiring kinetic parameters. When given the initial concentrations of metabolites and the full kinetic model (including the values for the kinetic parameters), the concentration of the metabolites along time can be simulated to produce a state transition path or trajectory, i.e. the succession of states adopted by the network over time [7]. Methodologies commonly applied when dealing with the aforementioned ODE systems, however using different data sources, are kinetic modelling [8], dynamic flux balance analysis (DFBA) [9], and a recently proposed approach combining time-resolved metabolomics and dynamic FBA (MetDFBA) [10], among others.
Once the kinetic model is built and the data is gathered, either simulated or (partially) measured, a comparison between experimental conditions can be performed to discover which groups of metabolites, reactions or pathways show differences between substrates, environment, etc. For this purpose, partial least squares regression discriminant analysis (PLS-DA) [11] can be used to find metabolites that are strongly related to a response variable (e.g. group of experiments) [12]. The problem with this approach is that no topological information is included in the multivariate model. The identified metabolites can be scattered in the network, not showing clear metabolic routes, as it happened in PCA with steady state data.
The Goeman's test was proposed in [13] to tackle the lack of topological information in the PLS-DA model. In that case, discrimination between experiments using metabolite concentrations was investigated using the set of pathways retrieved from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database [14–16]. The aim was to find which pathways have a different activation pattern depending on the initial conditions of the experiment at particular time points. This model includes topological information, as metabolites are tested in groups of KEGG pathways, but these pathways sometimes do not connect directly substrates with end products, and the model is not built including all pathways and time points simultaneously.
To solve the aforementioned drawbacks of PLS-DA and the Goeman's global test, a novel framework is proposed to analyse non-steady state metabolite concentrations, based on an extension of the PEMA model. For this, we introduce the concept of dynamic EMs (dynEMs), i.e. EMs activated partially at each time point of the experiment. The dynEMs are used in a discriminant model to identify which metabolic routes have different activations depending on the initial conditions, i.e. which pathways discriminate between experimental conditions (as for example different substrate concentrations). As opposed to PLS-DA, dynEMR-DA integrates topological information to make the model more interpretable, as the set of candidates are drawn from the elementary mode matrix of the metabolic network; and, as opposed to Goeman's test, includes all metabolic routes connecting substrates with end-products and all time points of the experiment in the same discriminant model.
The MATLAB code for dynEMR-DA, related functions and example data are freely available in http://www.bdagroup.nl/content/Downloads/software/software.php, with instructions about how to use the method with own data. This way, practitioners are guided through the procedure, from the definition of the inputs, elementary mode matrix and concentration or flux data (either can be used), to the outputs, i.e. coefficients for the dynamic elementary modes to reconstruct the flux data. The N-way toolbox [17] and efmtool [18] for MATLAB are required to use dynEMR-DA code.
The structure of the article is as follows. In Methods, the metabolic models and data sets of S. cerevisiae are presented and the adaptation of the PEMA model from a steady to a non-steady state environment is introduced, describing dynEMA, dynEMR-DA and the validation scheme. In Results, the output of dynEMR-DA is analysed using simulated and real concentration data. Finally, some conclusions are drawn in the last section.
Metabolic networks
Two metabolic models of the well-known baker's yeast S. cerevisiae are used here to build the multivariate discriminant models (see Additional file 1 for a list of reactions). The first one was used in [19] to study the dynamics in glycolysis. The metabolic network (see Fig. 1a) has M=23 metabolites and K=18 reactions. This metabolic model has 26 elementary modes.
S. cerevisiae metabolic models. Model a), from [19], is used for the simulated study, and b), from [13], for the real case study
The second model was proposed in [10], and comprises M=12 metabolites and K=20 reactions, and describes the glycolysis and the tricarboxylic acid (TCA) cycle (see Fig. 1b). This second metabolic model has 13 elementary modes.
Two models are used in this article since the metabolites whose measurements were available in the real case study were not exactly the same as in the simulated model. Also, kinetic parameters were only available for the simulated case study. However, since both models are describing glycolysis in the same organism, the results are comparable.
Concentration data
The concentration data used in the first model (Fig. 1a) are simulated using COmplex PAthway SImulation (COPASI) software [20]. The initial concentrations of the metabolites match the measurements used in the original paper [19] (see Table 1). In this case, COPASI is used to simulate the concentrations from 0 to 60 s in 20 intervals of 3 s using a deterministic method (LSODA) [21]. The metabolic fluxes and the set of EMs are also obtained directly from COPASI.
Table 1 Initial concentrations in the simulated study. Experimental conditions taken from [19]
The aim in the simulated study consists of discriminating between scenarios using a high versus low initial concentration of glucose. 64 experiments are simulated using the data in Table 1, plus 20% noise, that is: c=(1+0.2ε)c0, where c is the concentration used in the analysis, c0 is the concentration given by COPASI and ε follows a Normal distribution with mean 0 and standard deviation 1. In the first 32 experiments the initial glucose concentration is set to 10mMol/l (plus noise), while in the last 32, this concentration is set to 2.5 mMol/l (also adding noise). These two values are indeed interesting, since they mimic the glucose concentrations used in the real case study (see paragraph below). The other common metabolites between metabolic models have comparable values in both concentration data sets. The set of EMs is obtained in this case using efmtool software [18].
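As a small illustration of the perturbation scheme above, the noisy initial concentrations can be generated in a single MATLAB expression; the vector c0 below is a hypothetical stand-in for the nominal COPASI values of Table 1, and drawing one independent ε per concentration is an assumption, since the text does not specify how the draws are shared.

```matlab
% c = (1 + 0.2*eps).*c0 with eps ~ N(0,1); c0 stands in for the Table 1 values.
c0 = [10; 0.2; 1.0];                       % hypothetical nominal concentrations (mMol/l)
c  = (1 + 0.2*randn(size(c0))) .* c0;      % 20% Gaussian perturbation per metabolite
```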
In the real case, the concentrations of S. cerevisiae along 24 time points were obtained experimentally using liquid chromatography–mass spectrometry (LC-MS) [22, 23] at the Biotechnology Department of Delft University of Technology (The Netherlands), and were used afterwards in [13]. 12 different cultures are used in the present work (see Table 2). Regarding experiments 1 to 8, different initial glucose concentrations in aerobic conditions were used in these cultures: 10 mMol of glucose were used in the first 4 experiments and 2.3-2.5 mMol in experiments 5-8. Also, 4 more cultures, experiments 9 to 12, were performed using similar initial glucose concentrations as in experiments 5-8 but in anaerobic conditions (see Availability of data and materials section for more information on these data).
Table 2 Experiments used for the real case study. More details in Availability of data and materials section and in [13, 22, 23]
The aim in the real case study consists of discriminating between i) high and low glucose concentrations (i.e. experiments 1-4 vs 5-8), and ii) aerobic and anaerobic conditions (experiments 5-8 vs 9-12).
Scalar values are represented here as italic capital letters (e.g. N) and indices appear as italic lower-case letters (e.g. j). Vectors are represented as bold lower-case letters (e.g. v). Data matrices are represented as bold capital letters (e.g. X). Superscript T denotes the transpose of a matrix. Observations or individuals within matrices are represented by rows, while variables are represented as columns. Three-dimensional arrays are denoted as underlined bold capital letters (e.g. X). The mathematical operator × is used here to denote the size of the modes of a matrix (e.g. Y is a N×M matrix). No mathematical operator is used for products between scalars, vectors and matrices. Operator ∘ denotes the Hadamard element-wise product between vectors or matrices. Finally, operator ⊗ denotes the Kronecker tensor product between vectors or matrices, that is:
$$ \mathbf{X}\otimes \mathbf{Y}=\left[\begin{array}{cc} x_{11} & x_{12} \\ x_{21} & x_{22} \end{array}\right] \otimes \mathbf{Y}=\left[\begin{array}{cc} x_{11}\mathbf{Y} & x_{12}\mathbf{Y} \\ x_{21}\mathbf{Y} & x_{22}\mathbf{Y} \end{array}\right] $$
Squares and rectangles are used in figure drawings as a representation of matrices.
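The two products used throughout the notation can be checked directly in MATLAB, the language of the released dynEMR-DA code; the small matrices below are arbitrary examples, not data from the case studies.

```matlab
X = [1 2; 3 4];
Y = [0 1; 1 0];
K = kron(X, Y);   % Kronecker product, as written out in Eq. 1 (2x2 blocks of Y)
H = X .* Y;       % Hadamard (element-wise) product, the "∘" operator
```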
Dynamic elementary mode analysis (dynEMA)
Any steady state flux distribution x = (x1,…,xK) can be decomposed as a positive linear combination of a set of E EMs [24]:
$$ \mathbf{x}=\sum\limits_{e=1}^{E} \lambda_{e}\mathbf{p}_{e} $$
where K is the number of fluxes (matching the number of reactions in the network), \(\mathbf{p}_{e}=(p_{e_{1}},\ldots,p_{e_{K}})\) is the eth EM, λe is the positive weighting factor of the eth EM, and E is the number of EMs needed to reconstruct the flux distribution x. The set of E EMs is a subset of the complete set of Z EMs of the metabolic network.
Figure 2a shows an example of this modelling using a small network with M=5 metabolites and K=8 reactions. There are Z=3 EMs in the network: (1,1,1,1,0,0,0,0), (1,1,0,0,1,1,0,0) and (1,1,0,0,1,0,1,1). Let us assume that there is only flux on reactions 1 to 6. A linear combination of the first E=2 EMs will reconstruct the flux carried by the reactions in the system in Fig. 2b. In this case, all reactions in each EM are multiplied by the same value. The weighting factors correspond to the flux shown in the graphics beside reactions.
a Small metabolic network. b Steady state flux distribution. In b), the flux carried by each reaction is shown. Reactions 7-8 have no flux
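As a worked check of Eq. 2 on this small network, the flux carried by reactions 1–6 can be recovered as a non-negative combination of the first two EMs; the weighting factors below are hypothetical, chosen only to illustrate the decomposition.

```matlab
% Elementary modes of the small network in Fig. 2 (columns of P, reactions 1-8)
P = [1 1 1 1 0 0 0 0;      % EM 1
     1 1 0 0 1 1 0 0]';    % EM 2; after transposing, P is K-by-E with K = 8, E = 2
lambda = [2; 3];           % hypothetical weighting factors lambda_1, lambda_2
x = P * lambda;            % steady state flux distribution, Eq. 2
% x = [5 5 2 2 3 3 0 0]': only reactions 1-6 carry flux, as in Fig. 2b
```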
When N flux distributions are considered, coming from different experiments or cultures, a PEMA model can be built:
$$ \mathbf{X}=\boldsymbol{\Lambda}\mathbf{P}^{\mathrm{T}}+\mathbf{F} $$
where X is the N×K flux data matrix, P is the K×E principal elementary mode (PEM) matrix, formed by a subset of E EMs; Λ is the N×E weighting matrix; and F is the N×K residual matrix. A schematic representation of a PEMA model is shown in Fig. 3.
Schematic representation of data matrices in the PEMA model
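Given a fixed PEM matrix P, the non-negative weighting matrix Λ of Eq. 3 can be estimated experiment by experiment with non-negative least squares. The sketch below, with hypothetical data, covers only this weighting step and not the greedy selection of EMs performed by the full PEMA algorithm.

```matlab
% Minimal sketch of the weighting step in Eq. 3: X ~ Lambda * P' (hypothetical data).
P = [1 1 1 1 0 0 0 0;               % two EMs of the small network (P is K-by-E)
     1 1 0 0 1 1 0 0]';
X = [5 5 2 2 3 3 0 0;               % N = 2 hypothetical steady state experiments
     4 4 1 1 3 3 0 0];              % X is N-by-K
N = size(X,1);  E = size(P,2);
Lambda = zeros(N, E);
for n = 1:N
    Lambda(n,:) = lsqnonneg(P, X(n,:)')';   % non-negative weights per experiment
end
F = X - Lambda*P';                          % residual fluxes not explained by the EMs
```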
Non-steady state flux distributions cannot be decomposed as linear combinations of EMs, as in steady state. When the biological system has not reached yet the steady state, the system is not in equilibrium and fluxes can change over time. However, the EMs are indeed the simplest pathways along which the non-steady state fluxes have to flow, but not in a constant fashion. Thus, the EMs must be modified or adapted to fit this dynamical system. These are the so-called dynamic elementary modes (dynEMs).
To adapt an EM, there is not only a single coefficient multiplying the EM (Λ values in PEMA):
$$ \lambda_{e}\mathbf{p}_{e}=(\lambda_{e} p_{e_{1}},...,\lambda_{e} p_{e_{K}}) $$
but a different coefficient multiplying each reaction activated by the EM:
$$ \boldsymbol{\alpha}_{e_{j}}\circ\mathbf{p}_{e}=(\alpha_{e_{j,1}} p_{e_{1}},\ldots,\alpha_{e_{j,K}} p_{e_{K}}) $$
where \(\boldsymbol {\alpha }_{e_{j}}\) includes the coefficients that adapt reactions 1 to K in the selected eth dynamic EM to reproduce the metabolic fluxes at time point j, and ∘ is the Hadamard element-wise product of matrices.
Thus, a single non-steady state flux distribution x at time point j can be decomposed as:
$$ \mathbf{x}_{j}=\sum\limits_{e=1}^{E} \boldsymbol{\alpha}_{e_{j}} \circ \mathbf{p}_{e} $$
Consider now a set of non-steady state flux distributions, which can be obtained from a single experiment measuring the concentration of the metabolites at J consecutive time points. Figure 4 shows an example of this scenario using the previous small network. Let us assume that there are fluxes only in reactions 1 to 4. In this case, only E=1 EM is needed. However, at each time point (j=1,…,4) the flux at each reaction (k=1,…,8) is different. High values are registered at the beginning of the experiment in the first reaction (Fig. 4a). Afterwards, the flux reaches all metabolites in the EM (Fig. 4b-c). Finally, the experiment reaches the steady state at the last time point (Fig. 4d), and all fluxes in the reactions are similar.
Small metabolic network with non-steady state fluxes from time point 1 to 4 (a to d, respectively). Graphics show the flux carried by each reaction, which changes depending on the time point. The first subscript of the weighting factor \(\alpha _{e_{j,k}}\) indicates the EM (here e = 1, as only E = 1 EM is needed). The other two subscripts indicate the time point j = 1,..,4 and the reaction k = 1,..,8
Considering non-steady state flux distributions along J time points, the set of active dynEMs can be obtained, in a PEMA/PCA-like fashion, from the new dynamic elementary mode analysis (dynEMA) model:
$$ \mathbf{X}=(\mathbf{I}_{J}\otimes\mathbf{1}^{\mathrm{T}}_{E}) \lbrack \mathbf{A}\circ(\mathbf{1}_{J}\otimes \mathbf{P}^{\mathrm{T}})\rbrack + \mathbf{F} $$
where A is the EJ×K coefficients matrix, IJ is the J×J identity matrix, P is the K×E principal elementary mode (PEM) matrix, 1E and 1J represent column vectors of E and J ones respectively, F is the J×K residual matrix (containing the fluxes not explained by the set of dynamic elementary modes) and ⊗ is the Kronecker matrix product. In this case, X is a J×K data matrix representing the non-steady state fluxes from a single experiment along J time points; while in the PEMA model, X is a N×K matrix representing the steady state fluxes of N different experiments. Figure 5 shows a representation of the dynEMA model.
Schematic representation of data matrices in the dynEMA model
The coefficients matrix A in the previous equation is, in fact, a E×K×J 3-way matrix unfolded reaction-wise, and each entry in the matrix \(\alpha _{e_{jk}}\) represents the coefficient multiplying reaction k of EM e to reconstruct the flux at time point j. Using this modelling it is possible to study the time evolution of a dynEM, i.e. how the dynEM is adapted or dynamically used along all measured time points for a given experimental condition.
This system of equations is solved similarly to PEMA. The candidates for the first dynEM are selected from the complete K×Z matrix of EMs (with Z the total number of EMs in the network) in a step-wise fashion. After selecting an EM, the coefficients multiplying it (thus creating the dynEM) are obtained by solving Eq. 7 using non-negative least squares. Once all EMs are evaluated, the dynEM explaining most variance in the data (as in PEMA) is classified as the first dynEM (1st column of the PEM matrix P). Afterwards, this first dynEM is fixed and the search for the second one starts, recalculating the coefficients in matrix A for both the first and the second dynEMs at each evaluation. In this way, the dynEMA model is built greedily, explaining as much variance as possible at each step.
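A schematic version of this greedy search is sketched below. The exact least-squares formulation solved in the paper (Eq. 7, with the Kronecker structure above) is not fully specified here, so the coefficients are obtained with a simple per-reaction non-negative least-squares fit using SciPy's nnls; with more than one selected mode each scalar equation is underdetermined, which the authors' actual formulation presumably constrains further. The function names and the explained-variance criterion are illustrative only.

import numpy as np
from scipy.optimize import nnls

def fit_coefficients(X, P_sel):
    # X: J x K fluxes, P_sel: K x E selected EMs; returns alpha as a (J, E, K) array.
    J, K = X.shape
    E = P_sel.shape[1]
    A = np.zeros((J, E, K))
    for j in range(J):
        for k in range(K):
            # one scalar equation: x[j, k] = sum_e alpha[j, e, k] * P_sel[k, e], alpha >= 0
            coeff, _ = nnls(P_sel[k:k + 1, :], X[j:j + 1, k])
            A[j, :, k] = coeff
    return A

def explained_variance(X, X_hat):
    return 1.0 - np.sum((X - X_hat) ** 2) / np.sum(X ** 2)

def greedy_dynema(X, P_all, n_modes):
    # Greedy dynEM selection: at each step add the EM that most increases explained variance.
    selected = []
    for _ in range(n_modes):
        best = None
        for e in range(P_all.shape[1]):
            if e in selected:
                continue
            P_sel = P_all[:, selected + [e]]
            A = fit_coefficients(X, P_sel)              # coefficients recalculated for all modes
            X_hat = np.einsum('jek,ke->jk', A, P_sel)   # dynEMA reconstruction
            r2 = explained_variance(X, X_hat)
            if best is None or r2 > best[1]:
                best = (e, r2)
        selected.append(best[0])
    return selected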
The number of dynEMs extracted depends on the aim of the analysis, as explained in [6] for the PEMA model. For example, when the aim is to identify the main dynamic behaviour, one dynEM is enough. If the aim is to identify the main dynEMs using one particular section of the network, the model needs as many dynEMs as required to represent those reactions. Alternatively, one can extract as many dynEMs as needed to reach a certain percentage of explained variance (e.g. 95%).
The dynEMA model is useful to identify the dynEMs active in an experiment and how each dynEM is used in the culture at different time points of the experiment.
Dynamic elementary mode regression discriminant analysis (dynEMR-DA)
When the aim is to establish differences between environmental or experimental conditions, e.g. presence/absence of a compound or case/control studies, a discriminant model is needed. For this, dynamic elementary mode regression discriminant analysis (dynEMR-DA) is proposed here. This model focuses on finding the dynEMs whose time evolution or usage differs strongly between conditions. In essence, dynEMR-DA is a two-step procedure. First, it projects the flux data into the space defined by each single dynEM. Then, it fits an NPLS-DA [25] model for discriminant purposes.
To build a dynEMR-DA model, the set of different experiments is combined into a single three-way array \(\underline {\mathbf {X}}\) (see Fig. 6). In \(\underline {\mathbf {X}}\) we consider N experiments, measuring K fluxes along J time points. Therefore, it is mandatory to have the same time points in all experiments.
dynEMR-DA procedure. XH and XL denote the flux data matrices of two different experimental conditions
The algorithm of dynEMR-DA has the following steps (a minimal sketch of the outer loop is given after the list):
For each EM in the metabolic network (candidate to dynEM):
Unfold the N×K×J array \(\underline {\mathbf {X}}\) in Fig. 6 reaction-wise into a two-way JN×K matrix X.
Calculate the coefficients matrix A using the dynEMA model:
$$ \mathbf{X}=\left(\mathbf{I}_{JN}\otimes\mathbf{1}^{\mathrm{T}}_{E}\right) \left\lbrack \mathbf{A}\circ\left(\mathbf{1}_{JN}\otimes \mathbf{p}^{\mathrm{T}}\right)\right\rbrack + \mathbf{F} $$
where p denotes the candidate EM from step 1.
Reconstruct the flux data \(\hat {\mathbf {X}}\) using the dynEMA model:
$$ \hat{\mathbf{X}}=\left(\mathbf{I}_{JN}\otimes\mathbf{1}^{\mathrm{T}}_{E}\right) \left\lbrack \mathbf{A}\circ\left(\mathbf{1}_{JN}\otimes \mathbf{p}^{\mathrm{T}}\right)\right\rbrack $$
Fold the reconstructed data to build again a three-way data structure \(\underline {\hat {\mathbf {X}}}\)
Fit an NPLS-DA model between the reconstructed data and the y data, where y denotes the class of experiments (having 1s and 0s).
The dynEM whose NPLS-DA model explains most variance in y is classified as the first dynEM.
Check the predictions of NPLS-DA model. If the current model discriminates perfectly, stop. If not, set the first dynEM and repeat steps 1-3 to extract the second dynEM following the dynEMR-DA procedure.
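The outer loop of this procedure can be sketched as follows. The NPLS-DA step (fitted in the paper with the N-way toolbox for MATLAB) is abstracted here, purely for illustration, by an ordinary two-component PLS-DA from scikit-learn on the reaction-wise unfolded data; fit_coefficients is the dynEMA helper from the earlier sketch, and the in-sample accuracy stands in for the explained variance in y used to rank the dynEMs:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def dynemrda_rank_modes(X3, y, P_all, fit_coefficients):
    # X3: N x J x K fluxes, y: length-N 0/1 class labels, P_all: K x Z candidate EM matrix.
    N, J, K = X3.shape
    scores = []
    for z in range(P_all.shape[1]):
        p = P_all[:, z:z + 1]                      # candidate EM, K x 1
        X_hat = np.empty_like(X3)
        for n in range(N):                         # steps 1-3: project each experiment onto the dynEM
            A = fit_coefficients(X3[n], p)         # (J, 1, K) dynEMA coefficients
            X_hat[n] = np.einsum('jek,ke->jk', A, p)
        Xu = X_hat.reshape(N, J * K)               # unfolding replaces the 3-way NPLS-DA step here
        pls = PLSRegression(n_components=min(2, N - 1)).fit(Xu, y.astype(float))
        y_pred = (pls.predict(Xu).ravel() > 0.5).astype(int)
        scores.append((z, float(np.mean(y_pred == y))))
    return sorted(scores, key=lambda s: -s[1])     # best-discriminating dynEM first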
NPLS-DA was proposed for studying N-dimensional data structures with discriminant purposes. NPLS is the natural extension of PLS to N-way structures, which tries to maximize the covariance between the \(\underline {\mathbf {X}}\) and Y data arrays. Y is denoted as y when one variable is predicted. NPLS-DA models in this paper have been computed using the N-way toolbox for MATLAB [17].
The dynEMR-DA algorithm can select many dynEMs until attaining perfect discrimination. In practice, however, individual dynEMs are able to discriminate between two experimental conditions, so there is no need to consider two dynEMs simultaneously active to obtain a discriminant model. Moreover, some dynEMs discriminate between initial conditions even though some of their reactions are not used at any time point of the experiment (so the flux does not flow through the metabolic pathway from the beginning to the end). These dynEMs do not represent actual metabolic pathways, so they should be removed when they are selected.
Triple cross-validation (3CV)
Proper validation of multivariate models is a subtle issue in Systems Biology. Even when enough data are available, single cross-validation procedures may lead to overly optimistic models, especially when the aim is discrimination between classes. As commented in [26], when discriminant models such as PLS-DA are used on datasets with many more variables than samples, the models cannot be built as accurately as when there are more samples than variables. The high number of variables can then lead to chance discriminations, i.e. models that give good results because a variable had, by chance, lower values in all samples from one group. To avoid these sometimes spurious results, double cross-validation (2CV) was proposed [26]. Using this procedure, a subset of the original data is used for model fitting, another subset to decide the complexity of the model (e.g. the number of components of a multivariate model), and finally a third subset is used for validation. This kind of model is especially useful for (N)PLS-DA model validation [26, 27].
In this work, though, we need an extra round of validation. dynEMR-DA models involve, as a first step, the projection of the flux data into the space defined by each single dynEM. Afterwards, an NPLS-DA model is fitted, determining at the end which dynEMs discriminate between groups. Therefore, we propose here a triple cross-validation (3CV) scheme (see Fig. 7). This procedure consists of the following steps (a small sketch of the subset rotation is given after the list):
3CV procedure. 75% of the samples from both classes (red and blue) are used in the calibration, projection and test sets (25% in each). The remaining 25% of samples are used in validation set
Divide the data set into four groups: calibration, test, selection, and validation. The latter is left out of the analysis until the final external validation.
Fit a dynEMR-DA model using the calibration set, using a maximum of K components (as many as fluxes).
Project the test set, first to the corresponding dynEM, and then to each of the K NPLS-DA calibration models. At this point, the minimum number of components, A, needed to classify each experiment in its corresponding class, is selected.
Project the selection set into the previous dynEMR-DA model with A NPLS-DA components and evaluate the predictive power of each dynEM.
Steps 2-4 are repeated three times, changing the roles of the subsets. That is, the models are built using, in steps 2 to 4 respectively: calibration-test-selection, test-selection-calibration and selection-calibration-test sets.
The dynEMs with perfect classification rates using the selection set in the three rounds are used finally for validation, so the discrimination power of each dynEM is evaluated with completely external data. This prediction is performed substituting the selection group by these validation samples in the three models previously fitted.
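A small sketch of the subset rotation, assuming equal-sized groups and omitting the per-class stratification shown in Fig. 7:

import numpy as np

def triple_cv_rounds(n_samples, seed=0):
    # Split into calibration / test / selection / validation (25% each) and return the
    # three role rotations used in steps 2-4; the validation set stays outside all rounds.
    rng = np.random.default_rng(seed)
    cal, test, sel, val = np.array_split(rng.permutation(n_samples), 4)
    rounds = [
        {"calibration": cal,  "test": test, "selection": sel},   # round 1
        {"calibration": test, "test": sel,  "selection": cal},   # round 2
        {"calibration": sel,  "test": cal,  "selection": test},  # round 3
    ]
    return rounds, val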
A 2CV strategy is used for the NPLS-DA section of the dynEMR-DA models, but an extra validation round is needed to assess the performance of the selected dynEMs in terms of discrimination. Therefore, the 3CV procedure is built basically by replacing the validation step of the original 2CV with the selection step, and performing the external validation at the end.
Simulated flux data
The metabolic model of S. cerevisiae in Fig. 1a is used in this section to assess the performance of dynEMR-DA on simulated data. 64 experiments are simulated using COPASI, with the initial concentrations described in Methods (see Table 1). Thus, 32 experiments have a high initial concentration of glucose and 32 a low concentration. The fluxes derived from the concentration data, and also the set of EMs of the metabolic model, are also obtained using COPASI.
To validate the discriminant models, the 3CV scheme is used here, using the N-way Toolbox for MATLAB [17] to fit the NPLS-DA models. 8 experiments of each class selected at random (16 in total) are used for calibration. 16 more experiments are used to select the number of NPLS-DA components. And 16 more are used as selection samples. As described in Fig. 7, the first 3 subsets are used as calibration, test and selection sets, and then the roles change, i.e. test-selection-calibration and selection-calibration-test (steps 2-4 described in 3CV). Finally, 16 additional experiments are used as validation set.
When applying the dynEMR-DA procedure described in the previous section, only one dynEM (from the whole set of 26 EMs) is able to discriminate perfectly between both experimental conditions: dynEM 8. Finally, the remaining 16 cultures are used for the final validation of this dynEM (see Fig. 7). Again, all experiments are correctly classified in the dynEMR-DA model.
Figure 8a shows dynEM8. This mode covers the whole glycolytic pathway, starting from glucose (GLCo), producing all the intermediate products until reaching pyruvate (PYR), acetate (ACE) and finally ethanol (ETOH). The coefficients multiplying the EM are visualized in Fig. 8b-e. The first three time points (3, 6, and 9 s) reveal changes in the coefficients. Afterwards, changes are small. At 36 s, the system reaches the steady state, when fluxes do not change any more.
Simulated study. a dynEM8 depicted on the metabolic model. b-e dynEM8 coefficients at 3, 6, 9 and 36 s (first 3 time points and when the fluxes reach the steady state). Blue (red) lines show the mean of the coefficients for the high (low) glucose experiments
The differences between both experimental conditions can be seen in Fig. 8b-e (blue versus red bars). The usage of all reactions in the dynEM, i.e. the coefficients in the A matrix, is higher in the high glucose concentration experiments than in the low glucose ones. This implies that these scenarios take advantage of the higher amount of glucose to carry more flux through glycolysis until reaching ethanol.
It is worth mentioning that the system is close to steady state from the first time point. However, we used this setup to have a simulated case as close as possible to the real case, in order to find out i) whether there are differences between the initial concentrations of glucose, and ii) whether the discriminant dynEM resembles the real case one(s) (see next section).
Real flux data
High vs low initial glucose concentrations
To assess the performance of dynEMR-DA in a real case study, a set of cultures of S. cerevisiae is used to discriminate between experiments using a high or a low initial glucose concentration. Unfortunately, the number of available cultures is low for this case study (4 in each class), so neither 3CV nor 2CV is possible. Therefore, single CV is applied here: 3+3 experiments are used for dynEMR-DA model building and selection of NPLS-DA components, and the remaining 1+1 experiments are used for validation. This procedure is repeated 4 times, leaving out a different pair of cultures each time.
The dynEMR-DA model has to be built using fluxes, not concentrations. Therefore, we computed the fluxes from the changes in the concentrations between consecutive time points by solving an optimization problem (similarly to [10]). Specifically, the objective function in this formulation makes the fluxes smooth along time (penalizing the sum of squared differences between fluxes at consecutive time points) and small (penalizing the sum of squared fluxes), and the constraints force them to fulfil the stoichiometric equations.
In the actual data set, M=12 metabolites are measured in 24 time points within 2 min (1 measurement every 3 s). The metabolic network (see Fig. 1b) has K=20 reactions. Thus, the optimization problem to solve is:
$$ \left\{ \begin{array}{l} \min_{x_{jk}} {\sum\nolimits}_{j=1}^{22} {\sum\nolimits}_{k=1}^{20} (x_{j+1,k}-x_{j,k})^{2} + {\sum\nolimits}_{j=1}^{23} {\sum\nolimits}_{k=1}^{20} x_{j,k}^{2}\\ s.t. \quad \mathbf{S}\mathbf{X}^{\mathrm{T}}=\frac{d\mathbf{C}^{\mathrm{T}}}{dj}\\ \qquad \; \, \mathbf{X}\geq\mathbf{0}\\ \qquad \; \, \mathbf{X}_{0} \: \mathrm{initial \: solution}\\ \end{array} \right. $$
where X = {x_{jk}} is the 23×20 (time points × reactions) flux data matrix. The quadratic optimization problem needs an initial guess on X, i.e. X0. This guess is obtained by solving \(\mathbf {S}\mathbf {X}_{0}^{\mathrm {T}}=\frac {d\mathbf {C}^{\mathrm {T}}}{dj}\) using non-negative least squares. Indices k and j denote flux number and time point, respectively, S denotes the 12×20 stoichiometric matrix (metabolites × reactions), and C is the 24×12 concentration matrix (time points × metabolites). It is worth noting that, since fluxes are computed from the differences between concentrations at consecutive time points, there is one time point fewer in the flux data matrix (J=23) than in the concentration data (24).
The objective function used in the optimization problem resembles the MOMA function (minimize the squared difference of the reaction rates with steady state) used in [10], with the difference that we minimize the flux differences between consecutive time points.
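A sketch of this flux estimation is given below using SciPy; it is not the authors' code, and the solver choice (SLSQP on the flattened flux matrix) and the smoothness weight are assumptions made only for illustration:

import numpy as np
from scipy.optimize import minimize, nnls

def estimate_fluxes(S, dC, smooth_weight=1.0):
    # S: M x K stoichiometric matrix; dC: J x M concentration changes per time step.
    # Minimise smoothness + magnitude of the fluxes subject to S x_j = dC_j and x >= 0.
    J, M = dC.shape
    K = S.shape[1]
    # Initial guess X0: per-time-point non-negative least squares on S x_j = dC_j.
    X0 = np.vstack([nnls(S, dC[j])[0] for j in range(J)])

    def objective(xflat):
        X = xflat.reshape(J, K)
        return smooth_weight * np.sum(np.diff(X, axis=0) ** 2) + np.sum(X ** 2)

    def mass_balance(xflat):
        X = xflat.reshape(J, K)
        return (X @ S.T - dC).ravel()   # equals zero when S x_j = dC_j for every time point

    res = minimize(objective, X0.ravel(), method="SLSQP",
                   constraints=[{"type": "eq", "fun": mass_balance}],
                   bounds=[(0.0, None)] * (J * K), options={"maxiter": 500})
    return res.x.reshape(J, K)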
In this case, only dynEM9 (from the set of 20 EMs) is able to discriminate the left out experiments. This dynEM can be visualised, jointly with the coefficients in matrix A, in Fig. 9. The differences between high and low glucose are also clear in this example. The usage of this dynEM is stronger in scenarios with a high initial glucose concentration than with a low concentration.
Real case study. a dynEM9 depicted on the metabolic model. b-e dynEM9 coefficients at 3, 6, 9 and 24 s (when system is close to steady state). Blue (red) lines show the coefficients for the high (low) glucose experiments
The results in this example follow the scheme described in Fig. 4. In both conditions (high and low), the fluxes are higher in the first steps of glycolysis (3, 6, and 9 s) and lower at the end. As time goes by, fluxes in the last part of glycolysis increase. This shows that the flux data cannot be modelled in the same way at the first time points as when the culture reaches the steady state, which makes it necessary to use dynEMs to model non-steady state flux data instead of applying a PEMA-based approach.
It is worth noting the similarity between the dynEM identified here and dynEM8 of the simulated case study. Both dynEMs are describing the same phenomena, the glycolysis until reaching pyruvate. They are not exactly the same because the metabolic models are different: acetate and ethanol were not measured in experimental conditions. However, when comparing the simulated and the actual data, the dynEM discriminating between experimental conditions is basically the same one.
Finally, it is difficult to assess when the system reaches the steady state in the real case study. In the simulated case, steady state was reached clearly at 36 s (since fluxes did not change anymore). In the real case, after 24 s (see Fig. 9) fluxes do not change significantly. However, since measurement error is present in the real case, it is difficult to assess whether the steady state was reached at 24 s or afterwards.
Aerobic vs anaerobic conditions
For the second real case study, four cultures performed in aerobic conditions versus four more in anaerobic conditions are compared. As in the previous example, fluxes are calculated from the real concentration data using the optimization framework (see Equation 10); also, a single cross validation procedure is applied here.
In this case study, dynEM8 is able to discriminate between both experimental conditions. The dynEM and the coefficients at 3, 6, 9 and 24 s (when the system seems to reach steady state) can be visualized in Fig. 10. Again, the differences between both classes can be seen in the plots, with the anaerobic experiments having higher coefficients. This behaviour has also been outlined in the literature [28–31]. To satisfy the redox balances, the flux is diverted from glycolysis to the production of glycerol (in our case, after reaction 4, flux goes through reactions 5 and 6). Glycerol is produced by reduction of the glycolytic intermediate dihydroxyacetone phosphate to glycerol 3-phosphate (g3p), followed by dephosphorylation of g3p to glycerol. Although glycerol does not appear explicitly in the network, because this metabolite was not measured in all original experiments, it is likely that the flux flowing through g3p produces glycerol at the end, as suggested in the literature.
Real case study. a dynEM8 depicted on the metabolic model. b-e dynEM8 coefficients at 3, 6, 9 and 24 s (when reaching steady state). Blue (red) lines show the coefficients for aerobic (anaerobic) experiments
Comparison to other state-of-the-art techniques
NPLS-DA
As in [6], it is worth comparing the approach of an elementary-mode-based projection model to a classical projection method, which in this case is NPLS-DA. To perform this comparison, the real case studies presented in the two previous subsections have been modelled using the NPLS-DA algorithm.
Figure 11 shows the loadings of the fluxes using the high versus low initial glucose data. The model in this case has 3 components, explaining 92 and 95% of the variance in the flux and discriminant variables, respectively. This number of components corresponds to the most parsimonious model needed to correctly classify all experiments. Firstly, it is difficult to extract from the loading plots which fluxes are the most important for discrimination, as no clear threshold can be drawn in the plot. Secondly, even varying this hypothetical threshold, the significant fluxes (those with high absolute loading coefficients) represent disconnected reactions through the network and do not correspond to physical pathways, since no topological information is included in the model. In dynEMR-DA, the role of the loadings is played by the elementary modes themselves, so interpretation is more straightforward, as they represent real pathways.
NPLS-DA loading plots for the fluxes (high versus low initial glucose data)
Figure 12 shows the results for the aerobic versus anaerobic case study. Here, 6 components are needed, explaining 98 and 99% of variance in flux and discriminant variables, respectively. As in the high versus low initial glucose example, loading plots are very difficult to interpret.
NPLS-DA loading plots for the fluxes (aerobic versus anaerobic data)
The computation time with these case studies is 17 s (dynEMR-DA model) versus 0.5 s (NPLS-DA model). In the dynEMR-DA algorithm, as many NPLS-DA models as EMs (in this model, 13) are fitted to find the most discriminant one, therefore it is clear that one single NPLS-DA model will be faster than dynEMR-DA. However, the time needed to interpret the output of NPLS-DA is longer than the pathway-oriented result that dynEMR-DA provides.
dynEMR-DA, as opposed to NPLS-DA, can be strongly affected by the size of the EM matrix. When there are several hundred EMs, a pre-selection of EMs can be performed to speed up the analysis. One strategy would be to examine which reactions are active in each EM (i.e. have a non-zero coefficient) and include only EMs with distinct sets of active reactions. For example, if many elementary modes use the same reactions with the same directionality for the reversible ones, only one of them needs to be included in the set of EMs to test (see the sketch below). Another possibility would be to use the set of extreme pathways of the network instead of the EMs [24].
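A minimal sketch of such a pre-filter, keeping one EM per distinct pattern of active reactions (the tolerance and tie-breaking rule are arbitrary choices for illustration):

import numpy as np

def preselect_ems(P_all, tol=1e-12):
    # P_all: K x Z EM matrix. Two EMs are considered redundant when they activate the same
    # reactions with the same sign (directionality of reversible reactions).
    seen = {}
    for z in range(P_all.shape[1]):
        col = P_all[:, z]
        pattern = tuple(np.sign(np.where(np.abs(col) > tol, col, 0.0)).astype(int))
        seen.setdefault(pattern, z)        # keep the first EM found with each activity pattern
    keep = sorted(seen.values())
    return P_all[:, keep], keep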
Goeman's global test
Goeman's global test was applied in [13] to find which KEGG pathways show differences between experimental conditions. The output in that case was a p-value indicating which pathways differed between groups at discrete time points. Their results showed that, when comparing high versus low initial glucose, glycolysis and the TCA cycle were significant, but not at all time points. For the aerobic versus anaerobic case, both glycolysis and the TCA cycle were significant at all time points.
This approach is not directly comparable to dynEMR-DA, as dynEMR-DA tests all pathways simultaneously instead of testing each pathway individually. No EM containing the TCA cycle was significant here, which may also be because i) all time points are used simultaneously in dynEMR-DA, instead of a discrete time point analysis (4 time points in [13]), and ii) the dynEMs containing the TCA cycle might not show differences between experimental conditions in the non-TCA section of the dynEM.
Finally, authors stated in the Goeman's test article [13] that a dynamic model would be more suitable for this type of data, which is what was pursued here.
The approach for dynamic elementary mode modelling proposed here permits decomposing non-steady state flux distributions into a set of active dynEMs. This way, dynEMA can be used to study the active dynEMs in an experiment, or a set of experiments, extending the PEMA model to a dynamic environment. For discrimination purposes, the main interest in this article, dynEMR-DA allows identifying which dynEMs have different patterns of activation depending on the culture initial conditions.
Actual and simulated concentration data of S. cerevisiae have been used here to evaluate dynEMR-DA. When changing the amount of glucose present in the experiment in both data sets, dynEMR-DA is able to identify that the dynEM flowing through the glycolytic pathway from glucose to pyruvate is discriminating between high and low initial glucose concentration experiments. Even considering two different metabolic models, for data availability reasons, the results of dynEMR-DA seem coherent between case studies. When analysing data from aerobic versus anaerobic conditions, dynEMR-DA indicates that the most discriminant dynEM drives the initial glucose concentration to the glycerol production. Previously published research confirms the results obtained using this new methodology.
The framework presented here will serve to create reduced dynamic models of flux data while preserving biological and thermodynamic meaning, as a tool to analyse non-steady state flux distributions across many experiments and to identify the hidden metabolic patterns that drive the organism from one state to another when the environmental conditions change. dynEMA and dynEMR-DA have potential applications in bioprocess engineering to understand the small changes in cell metabolism at early stages of cultures.
2CV:
double cross-validation
3CV:
triple cross-validation
COPASI:
complex pathway simulation
DFBA:
dynamic flux balance analysis
dynEM(s):
dynamic elementary mode(s)
dynEMA:
dynamic elementary mode analysis
dynEMR-DA:
dynamic elementary mode regression discriminant analysis
EM(s):
elementary mode(s)
FBA:
flux balance analysis
KEGG:
Kyoto Encyclopaedia of Genes and Genomes
LC-MS:
liquid chromatography–mass spectrometry
MCR:
multivariate curve resolution
MetDFBA:
time-resolved metabolomics and dynamic flux balance analysis
NPLS:
N-way partial least squares regression
NPLS-DA:
N-way partial least squares regression discriminant analysis
ODE:
ordinary differential equation
PEM(s):
principal elementary mode(s)
PEMA:
principal elementary mode analysis
PLS:
partial least squares regression
PLS-DA:
partial least squares regression discriminant analysis
Bro R, Smilde AK. Principal component analysis. Anal Methods. 2014; 6(9):2812–31.
González-Martínez JM, Folch-Fortuny A, Llaneras F, Tortajada M, Picó J, Ferrer A. Metabolic flux understanding of Pichia pastoris grown on heterogenous culture media. Chemometr Intell Lab Syst. 2014; 134:89–99.
Barrett CL, Herrgard MJ, Palsson B. Decomposing complex reaction networks using random sampling, principal component analysis and basis rotation. BMC Syst Biol. 2009; 3(30):1–8.
Jaumot J, Gargallo R, De Juan A, Tauler R. A graphical user-friendly interface for MCR-ALS: A new tool for multivariate curve resolution in MATLAB. Chemometr Intell Lab Syst. 2005; 76(1):101–10.
Folch-Fortuny A, Tortajada M, Prats-Montalbán JM, Llaneras F, Picó J, Ferrer A. MCR-ALS on metabolic networks: Obtaining more meaningful pathways. Chemometr Intell Lab Syst. 2015; 142:293–303.
Folch-Fortuny A, Marques R, Isidro IA, Oliveira R, Ferrer A. Principal elementary mode analysis (PEMA). Mol BioSyst. 2016; 12(3):737–46.
Hood L. Systems biology: Integrating technology, biology, and computation. Mech Ageing Dev. 2003; 124(1):9–16.
Teusink B, Passarge J, Reijenga CA, Esgalhado E, van der Weijden CC, Schepper M, Walsh MC, Bakker BM, van Dam K, Westerhoff HV, Snoep JL. Can yeast glycolysis be understood in terms of in vitro kinetics of the constituent enzymes? Testing biochemistry. Eur J Biochem / FEBS. 2000; 267(17):5313–29.
Mahadevan R, Edwards JS, Doyle FJ. Dynamic flux balance analysis of diauxic growth in Escherichia coli. Biophys J. 2002; 83(3):1331–40.
Willemsen AM, Hendrickx DM, Hoefsloot HCJ, Hendriks MMWB, Wahl SA, Teusink B, Smilde AK, van Kampen AHC. MetDFBA: incorporating time-resolved metabolomics measurements into dynamic flux balance analysis. Mol BioSyst. 2015; 11(1):137–45.
Barker M, Rayens W. Partial least squares for discrimination. J Chemom. 2003; 17(3):166–73.
Bartel J, Krumsiek J, Theis FJ. Statistical methods for the analysis of high-throughput metabolomics data. Comput Struct Biotechnol J. 2013; 4:201301009.
Hendrickx DM, Hoefsloot HCJ, Hendriks MMWB, Canelas AB, Smilde AK. Global test for metabolic pathway differences between conditions. Anal Chim Acta. 2012; 719:8–15.
Kanehisa M, Goto S, Hattori M, Aoki-Kinoshita KF, Itoh M, Kawashima S, Katayama T, Araki M, Hirakawa M. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res. 2006; 34(Database issue):354–7.
Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000; 28(1):27–30.
Kanehisa M, Goto S, Furumichi M, Tanabe M, Hirakawa M. KEGG for representation and analysis of molecular networks involving diseases and drugs. Nucleic Acids Res. 2010; 38(Database issue):355–60.
Andersson CA, Bro R. The N-way Toolbox for MATLAB. Chemometr Intell Lab Syst. 2000; 52(1):1–4.
Terzer M, Stelling J. Large-scale computation of elementary flux modes with bit pattern trees. Bioinformatics. 2008; 24(19):2229–35.
Heerden JHv, Wortel MT, Bruggeman FJ, Heijnen JJ, Bollen YJM, Planqué R, Hulshof J, O'Toole TG, Wahl SA, Teusink B. Lost in Transition: Start-Up of Glycolysis Yields Subpopulations of Nongrowing Cells. Science. 2014; 343(6174):1245114.
Hoops S, Sahle S, Gauges R, Lee C, Pahle J, Simus N, Singhal M, Xu L, Mendes P, Kummer U. COPASI–a COmplex PAthway SImulator. Bioinformatics. 2006; 22(24):3067–74.
Petzold L. Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations. SIAM J Sci Stat Comput. 1983; 4:136–48.
Canelas AB, van Gulik WM, Heijnen JJ. Determination of the cytosolic free NAD/NADH ratio in Saccharomyces cerevisiae under steady-state and highly dynamic conditions. Biotechnol Bioeng. 2008; 100(4):734–43.
Nikerel IE, Canelas AB, Jol SJ, Verheijen PJT, Heijnen JJ. Construction of kinetic models for metabolic reaction networks: Lessons learned in analysing short-term stimulus response data. Math Comput Model Dyn Syst. 2011; 17(3):243–60.
Llaneras F, Picó J. Stoichiometric modelling of cell metabolism. J Biosci Bioeng. 2008; 105(1):1–11.
Bro R. Multiway calibration. Multilinear PLS. J Chemom. 1998; 10(1):47–61.
Westerhuis JA, Hoefsloot HCJ, Smit S, Vis DJ, Smilde AK, Velzen EJJv, Duijnhoven JPMv, Dorsten FAv. Assessment of PLSDA cross validation. Metabolomics. 2008; 4(1):81–9.
Szymańska E, Saccenti E, Smilde AK, Westerhuis JA. Double-check: validation of diagnostic statistics for PLS-DA models in metabolomics studies. Metabolomics. 2012; 8(Suppl 1):3–16.
Rodrigues F, Ludovico P, Leão C. Sugar Metabolism in Yeasts: an Overview of Aerobic and Anaerobic Glucose Catabolism. In: Biodiversity and Ecophysiology of Yeasts. The Yeast Handbook. Berlin: Springer: 2006. p. 101–21.
Larsson K, Ansell R, Eriksson P, Adler L. A gene encoding sn-glycerol 3-phosphate dehydrogenase (NAD+) complements an osmosensitive mutant of Saccharomyces cerevisiae. Mol Microbiol. 1993; 10(5):1101–11.
Eriksson P, André L, Ansell R, Blomberg A, Adler L. Cloning and characterization of GPD2, a second gene encoding sn-glycerol 3-phosphate dehydrogenase (NAD+) in Saccharomyces cerevisiae, and its comparison with GPD1. Mol Microbiol. 1995; 17(1):95–107.
Norbeck J, Pâhlman AK, Akhtar N, Blomberg A, Adler L. Purification and characterization of two isoenzymes of DL-glycerol-3-phosphatase from Saccharomyces cerevisiae. Identification of the corresponding GPP1 and GPP2 genes and evidence for osmotic regulation of Gpp2p expression by the osmosensing mitogen-activated protein kinase signal transduction pathway. J Biol Chem. 1996; 271(23):13875–81.
Authors would like to acknowledge Professor Henk A.L. Kiers (University of Groningen, The Netherlands), for his help during algorithm development, and the Biotechnology Department of Delft University of Technology (The Netherlands), for the real case study data sets.
This research work was partially supported by the Spanish Ministry of Economy and Competitiveness under the project DPI2014-55276-C5-1R.
The metabolic model of the simulated data can be retrieved from [19], with initial concentrations given in Table 1. The concentration data that support the findings of the real case study are available from the Biotechnology Department of Delft University of Technology (The Netherlands) but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of the Biotechnology Department of Delft University of Technology (The Netherlands). Additional information on the concentration data can be found in [22, 23].
Grupo de Ingeniería Estadística Multivariante, Departamento de Estadística e IO Aplicadas y Calidad, Universitat Politècnica de València, Valencia, Spain
Abel Folch-Fortuny & Alberto Ferrer
Genetics BioIT DBC Department, DSM Food Specialties, Delft, The Netherlands
Abel Folch-Fortuny
Systems Bioinformatics, Centre for Integrative Bioinformatics, Free University of Amsterdam, Amsterdam, The Netherlands
Bas Teusink
Biosystems Data Analysis, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
Huub C.J. Hoefsloot & Age K. Smilde
AF-F performed the analyses and wrote the manuscript. BT, HCJH and AKS conceived the study. AF-F, HCJH, AKS and AF developed the algorithms. BT, HCJH, AKS and AF reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Abel Folch-Fortuny.
Additional file 1
An additional file is provided with the detailed metabolic models. (PDF 105 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Folch-Fortuny, A., Teusink, B., Hoefsloot, H. et al. Dynamic elementary mode modelling of non-steady state flux data. BMC Syst Biol 12, 71 (2018). https://doi.org/10.1186/s12918-018-0589-3
Metabolic network
Elementary mode
Dynamic modelling
N-way
Cross validation
MICROBIAL IMMUNOLOGY
Polyclonal Antibodies to Glutathione S-Transferase-Verotoxin Subunit A Fusion Proteins Neutralize Verotoxins
P. H. M. Leung, J. S. M. Peiris, W. W. S. Ng, W. C. Yam
1 Department of Microbiology, Queen Mary Hospital
2 School of Professional and Continual Education, The University of Hong Kong, Hong Kong, Special Administrative Region, People's Republic of China
For correspondence: [email protected]
DOI: 10.1128/CDLI.9.3.687-692.2002
The A1 subunits of verotoxin-1 (VT1) and VT2 genes were cloned into pGEX-4T-2 for the expression of glutathione S-transferase (GST) fusion proteins. The N-terminal and the transmembrane regions of the A1 subunits were excluded from the constructs in order to increase the product yields. Polyclonal anti-VT1A1 and anti-VT2A1 antibodies were produced by immunizing rabbits with GST-VT1A1 and GST-VT2A1 fusion proteins, respectively. The antibodies were tested for their ability to neutralize active toxins from 45 VT-producing Escherichia coli (VTEC) strains. The antibodies had significantly high neutralizing activities against their homologous toxins. The average percentages of neutralization of VT1 by anti-GST-VT1A1 and anti-GST-VT2A1 were 76.7% ± 7.9% and 3.6% ± 2.3%, respectively, and those of VT2 were 1.7% ± 2.3% and 82.5% ± 13.9%, respectively. VT2 variant toxin was neutralized by anti-GST-VT2A1, with cross neutralization being a possible consequence of sequence homology between VT2 and a VT2 variant. To our knowledge, this is the first report on the production of polyclonal antibodies from GST-VT fusion proteins. The antibodies were shown to exhibit specific toxin neutralizing activities and may be useful for immunological diagnosis of VTEC infections.
Verotoxin (VT) is the principal virulence factor of verotoxigenic Escherichia coli (VTEC), an emerging food-borne pathogen associated with diseases ranging from uncomplicated diarrhea to the hemolytic-uremic syndrome (1, 24). There are two types of VT, VT1 and VT2; the latter type has variants, including VT2c and VT2e. All VTs belong to the Shiga toxin family, in which the C terminus of the A subunit is encircled by a pentameric ring formed by five identical B subunits (7). This A-B bipartite molecule first binds to a eukaryotic glycolipid receptor via the pentamer of the VT B subunits. Then the catalytic VT A subunit dissociates from the VT B subunit pentamer and translocates into the cytoplasm through a retrograde secretory pathway. Subsequently, inhibition of the 28S rRNA in 60S ribosomal subunits induces programmed cell death (12, 14, 19). The N terminus of the VT A subunit is the A1 fragment, which is a catalytic domain essential for the cytotoxicity of VT (8). The C terminus A2 fragment facilitates the noncovalent interaction between subunits A and B (2). The A1 and A2 fragments are separated by a trypsin-sensitive region. Finally, there is a transmembrane region at the C terminus of fragment A1. This region is involved in toxin translocation across the endoplasmic reticulum membrane (23). The minimal sequence of VT1A required for activity includes the transmembrane region, and the deletion of this region retarded the functional activity of VT1A (11). Structural and biological properties of the VTs have been studied extensively (15); subunits of VTs have been prepared in large quantities as fusion proteins and used in seroepidemiology (10).
Antibodies against VTs have been produced for diagnostic or therapeutic purposes. These antibodies were produced by immunizing animals with VT or VT subunit toxoids (4, 16). The use of a VT fusion protein as an immunogen has not been reported. In this study, we describe the production of VT1A1 and VT2A1 subunits by the glutathione S-transferase (GST) fusion protein technique. The GST fusion constructs were prepared without the N-terminal signal peptide and C-terminal transmembrane region, which enabled hyperexpression of soluble protein products and a single-step purification. As the transmembrane region is required for cytotoxicity (11), it was unknown whether the fusion proteins lacking these regions elicited neutralizing antibodies. Hence, we also used the purified fusion proteins to raise polyclonal antibodies and evaluated the neutralization activities of the antibodies on standard VTEC strains as well as animal and human strains reported in our earlier study (13).
Bacterial strains, culture conditions, and preparation of VT. Forty-five VTEC strains (41 from animals, 4 from humans) isolated in Hong Kong (13) and 8 standard VTEC and non-VTEC strains were included in this study. Details of these strains are listed in Table 1. A single colony was inoculated into 2 ml of brain heart infusion broth (Oxoid, Basingstoke, United Kingdom) and was then incubated overnight at 37°C with agitation. One milliliter of culture was centrifuged at 20,000 × g for 15 min, and the supernatant was collected for a neutralization assay.
Characteristics of VTEC and standard strains used in neutralization studies
PCR amplification of vt1A1 and vt2A1 genesThe A1 subunits of the vt1 and vt2 genes were amplified from standard strains ATCC 43890 and ATCC 43889, respectively. Upon analysis with the computer program TMpred (18), two major hydrophobic regions were found at amino acid positions 3 to 23 and 242 to 263 (nucleotide positions 338 to 400 and 1055 to 1120), respectively, in subunit A of vt1 (GenBank accession number M16625 [6]) (Fig. 1). Similarly, two hydrophobic regions were found in amino acid positions 3 to 19 and 245 to 261 (nucleotide positions 203 to 253 and 929 to 979), respectively, in subunit A of vt2 (GenBank accession number M59432 [20]). Primers were then designed to exclude these hydrophobic regions. BamHI and XhoI restriction enzyme sites, stop codons, and CCC-GGG triplets were included to facilitate subsequent cloning procedures. The primers used for amplification of vt1A1 were KW3 (positions 398 to 415), 5′-CCCGGATCCAAGGAATTTACCTTAGAC-3′, and KW4 (positions 1052 to 1066), 5′-GGGGTCGAG(TCA)TCTTCCTACACGAAC-3′ (Fig. 1). The primers used for amplification of vt2A1 were KW7 (positions 263 to 280), 5′-CCCGGATCCCGGGAGTTTACGATAGAC-3′, and KW8 (positions 914 to 928), 5′-GGGCTCGAG(TCA)TCTCCCCACTCTGAC-3′. In each case, the amplified sequence was located between the two hydrophobic regions. BamHI and XhoI restriction sites are underlined, and stop codons are in parentheses. In order to amplify the entire vt1A1 and vt2A1 subunits, primers were also designed to include the hydrophobic regions of these sequences. The primers used for amplification of the entire vt1A1 gene were F380 (positions 380 to 394), 5′-CCCGGATCCTCAGTTAATGTGGTC-3′, and R1166 (positions 1163 to 1177), 5′-GGGGTCGAG(TCA)CATAGAAGGAAACTC-3′. The primers used for amplification of the entire vt2A1 gene were F212 (positions 212 to 226), 5′-CCCGGATCCTTTAAATGGGTACTG-3′, and R1039 (positions 1025 to 1039), 5′-GGGCTCGAG(TCA)TTCTGGTTGACTCTC-3′. PCR products were electrophoresed, stained, and visualized by UV transillumination, and products were then confirmed by DNA sequencing.
TMpred output for vt1 subunit A. The middle and upper bars represent the amino acid and nucleotide sequences of vt1A1, respectively. Two hydrophobic regions exist in amino acid positions 3 to 23 and 242 to 263. The relative positions of primers KW3 and KW4 are indicated.
Cloning of PCR products.PCR products were purified with the QIAquick kit (Qiagen, Santa Clarita, Calif.) before and after restriction digestion with BamHI and XhoI (Promega, Madison, Wis.). Each of the digested products was cloned into BamHI- and XhoI-digested pGEX-4T-2 (Amersham Pharmacia Biotech, Uppsala, Sweden), and each construct encoded a fusion protein consisting of GST at the N terminus and VT1A1 or VT2A1 at the C terminus. The recombinant constructs were electroporated into E. coli XL1-Blue MRF′ (Stratagene, La Jolla, Calif.). The presence of inserts was confirmed by PCR. The forward primer 5PGEX, 5′-GGGCTGGCAAGCCACGTTTGGTG-3′, was derived from the 3′ end of the GST gene. The reverse primers were KW4 and R1166 for truncated and entire vt1A1, respectively, and KW8 and R1039 for truncated and entire vt2A1, respectively.
Expression and purification of GST-VT fusion proteins.The recombinant proteins were expressed and purified as described by the GST Gene Fusion System (Amersham Pharmacia Biotech). Briefly, 300 ml of E. coli XL1-Blue MRF′ transformed with the recombinant plasmid or plasmid without insert was grown at 37°C in Luria-Bertani medium to an optical density of 0.5 at 600 nm. Isopropyl-β-d-thiogalactopyranoside (IPTG) was then added to the culture at a final concentration of 1 mM. The cells were grown for an additional 2 h at 30°C. The cells were harvested by centrifugation at 7,700 × g for 10 min at 4°C followed by suspension in 12 ml of cold phosphate-buffered saline (PBS) with 1 mM phenylmethanesulfonyl fluoride and lysis by sonication on ice. Cell debris was removed by centrifugation at 12,000 × g for 10 min at 4°C. Fusion protein present in the supernatant was purified by affinity chromatography on glutathione-Sepharose 4B (Amersham Pharmacia Biotech) according to the manufacturer's protocol. The purity of protein products was assessed by sodium dodecyl sulfate-12.5% polyacrylamide gel electrophoresis (SDS-12.5% PAGE), and protein concentrations were estimated by absorbance at 280 nm with bovine serum albumin as the standard.
Immunization of rabbits. New Zealand White rabbits were injected subcutaneously with 150 μg (0.5 ml) of purified GST-VT1A1, GST-VT2A1, or GST mixed with an equal volume of complete Freund's adjuvant. The immunization was repeated at 4 and 6 weeks after the first injection. The animals were bled 1 week after the third injection and then once monthly. Antibody titers were determined by a neutralization assay, and the animals were sacrificed when there was no further increase in antibody titers.
Purification of polyclonal antibodies.Before performing immunoblot analysis, polyclonal antibodies obtained from the immunized rabbits were purified to remove anti-GST present in the sera. This was done by incubating the sera with GST immobilized on glutathione-Sepharose beads. The immunoadsorbent beads were prepared according to the manufacturer's recommendations. Briefly, 200 μl of glutathione-Sepharose beads was coupled with 210 mg of GST. The GST was expressed from a 300-ml culture of E. coli XL1-Blue MRF′ transformed with the plasmid pGEX-4T-2. After coupling of GST, the beads were washed and pelleted, 1 ml of 1/100-diluted polyclonal antibody was added to the beads, and the mixture was incubated at 4°C overnight with gentle agitation. The purified polyclonal antibody was separated from the beads by centrifugation at 500 × g for 5 min.
Immunoblot analysis of polyclonal antibodies.Fusion protein samples were run on SDS-12.5% polyacrylamide gels under reducing conditions and subsequently electroblotted onto nitrocellulose membranes (Bio-Rad Laboratories, Hercules, Calif.). The membrane was blocked at 4°C overnight in blocking buffer (10% nonfat milk-0.03% Tween 20-PBS) and then washed three times in 0.03% Tween 20-PBS. The washed membrane was incubated with 1:2,000-diluted anti-GST-VT at room temperature for 45 min, and the membrane was washed again and incubated for 30 min at room temperature in 1:2,000-diluted anti-rabbit immunoglobulin conjugated with alkaline phosphatase. The membrane was washed again and incubated with the chromogenic substrates 5-bromo-4-chloro-3-indolylphosphate (BCIP) and nitroblue tetrazolium (Boehringer Mannheim, Mannheim, Germany) for 1 to 5 min.
Evaluation of anti-VT1A1 and anti-VT2A1 by sandwich enzyme-linked immunosorbent assay (ELISA).A 96-well microtiter plate precoated with VT-specific monoclonal antibodies was used. The plate was obtained from the Premier EHEC Assay kit (Meridian Diagnostics, Cincinnati, Ohio). A 50-μl aliquot of culture supernatant obtained from a VT- or non-VT-producing strain was diluted in 200 μl of diluent supplied with the assay kit. The diluted supernatant was added to the well and incubated at 37°C for 30 min. The microtiter plate was washed three times with PBS with 0.5% Tween 20, blocked with 2% bovine serum albumin at 37°C for 30 min, and washed as described above. A 100-μl volume of 1/400-diluted anti-VT1A1 or 1/25-diluted anti-VT2A1 was added to the well, incubated at 37°C for 30 min, and washed. A 100-μl aliquot of horseradish peroxidase-conjugated goat anti-rabbit serum (diluted 1/1,200) was added, incubated, and washed as before. After washing, 100 μl of para-nitrophenyl phosphate substrate (Sigma, St. Louis, Mo.) was added and then incubated at 37°C for 10 min. The absorbance was read at 405 nm with a Spectrafluor Plus microplate reader (TECAN GmbH, Salzburg, Austria).
Neutralization assay. The neutralization assay was performed on Vero cell monolayers in Eagle minimum essential medium (Gibco-BRL, Gaithersburg, Md.) with 10% fetal calf serum (Gibco-BRL) grown on 96-well plates. For determination of antibody titers, 50 μl of serially diluted antibody was mixed with an equal volume of culture supernatant prepared from ATCC 43890 or ATCC 43889 containing 40 50% cytotoxic doses (CD50) (20).
For testing of VTEC isolates, 50 μl of 1/400-diluted anti-GST-VT1A1 or 1/25-diluted anti-GST-VT2A1 (dilutions twofold higher than the 50% neutralization dose) was preincubated with an equal volume of diluted culture supernatant from standard strains or animal or human VTEC isolates containing 40 CD50. For toxin control, 50 μl of PBS was added instead of antibody. The mixture was incubated at 37°C for 1 h in a moist chamber, added to the Vero cell monolayer, and incubated for 2 days. The Vero cell monolayer was fixed and stained with crystal violet as described previously (9). The intensity of the color of the stained cells was measured at 620 nm, and the absorbance (A) was proportional to the amount of viable cells. Percentage neutralization was calculated by using the following formula (20): $$ \frac{A_{620\,(\mathrm{toxin}+\mathrm{antibody})}-A_{620\,(\mathrm{toxin})}}{A_{620\,(\mathrm{untreated\ cells})}}\times 100\% $$ The Wilcoxon signed-rank test was used to assess the statistical significance of differences in neutralizing activities between the two antibodies when reacting against VT1 or VT2.
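As a check on the formula, a one-line helper with hypothetical A620 readings shown in the comment:

def percent_neutralization(a620_toxin_plus_antibody, a620_toxin, a620_untreated_cells):
    # Percentage neutralization from A620 readings, following the formula above.
    return (a620_toxin_plus_antibody - a620_toxin) / a620_untreated_cells * 100.0

# e.g. percent_neutralization(0.95, 0.15, 1.00) -> 80.0 (hypothetical readings)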
PCR amplification and cloning of vt1A1 and vt2A1 genes.The vt1A1 and vt2A1 genes were amplified to the expected sizes, and DNA sequencing confirmed that the selected regions were amplified. Without taking the restriction enzyme sites and the stop codons into account, the amplicon of truncated vt1A1 was a 669-bp fragment between nucleotide positions 398 and 1066 of the vt1 gene. The amplicon of truncated vt2A1 was a 666-bp fragment between nucleotide positions 263 and 928 of the vt2 gene. Amplicons of the entire vt1A1 and vt2A1 subunits had sizes of 798 bp (nucleotide positions 380 to 1180) and 828 bp (nucleotide positions 212 to 1039), respectively. BamHI- and XhoI-digested amplicons were cloned into pGEX-4T-2. The resultant plasmids carrying individual truncated A1 subunits were designated pGEX-vt1A1 and pGEX-vt2A1; plasmids carrying the entire A1 subunit were designated pGEX-vt1A1′ and pGEX-vt2A1′. PCR amplification of these recombinant plasmids confirmed the presence of inserts in these constructs.
Expression and purification of fusion proteins. Aliquots of the sonicated bacterial cell lysate and pre- and postpurified supernatant were analyzed by SDS-PAGE after IPTG induction. A 46-kDa band was observed only in bacterial cells incubated with IPTG (data not shown), and the presence of the induced proteins in the supernatant suggested that these fusion proteins are soluble. The impurities were completely removed after purification by affinity chromatography (Fig. 2). Expression of the GST fusion protein was only observed in bacterial cells transformed with pGEX-vt1A1 but not with pGEX-vt1A1′ (Fig. 3). Similarly, the fusion protein was produced from pGEX-vt2A1 only. The yields of GST-VT1A1 and GST-VT2A1 were estimated to be 1.03 and 0.5 g per liter of culture, respectively.
SDS-PAGE of total bacterial cell lysates and supernatants of E. coli transformed with pGEX-vt1A1 (lanes 1 to 4) and pGEX-vt2A1 (lanes 6 to 9) after IPTG induction. Lanes: 1 and 6, bacterial cell lysates; 2 and 7, cell debris; 3 and 8, supernatants; 4 and 9, supernatants after purification by affinity chromatography; 5, molecular mass marker.
SDS-PAGE of total bacterial cell lysates and supernatants of E. coli transformed with pGEX-vt1A1′ (lanes 1 to 4) and pGEX-vt1A1 (lanes 6 to 9) after IPTG induction. Lanes: 1 and 6, bacterial cell lysates after 1.5-h induction; 2 and 7, bacterial cell lysates after 3-h induction; 3 and 8, cell debris; 4 and 9, supernatants of the cell lysates; 5, molecular mass marker.
Generation of polyclonal antibodies. Purified anti-GST-VT1A1 and anti-GST-VT2A1 were tested for reactivity to the fusion proteins by immunoblot analysis. Both antibodies reacted with their homologous antigens. There was no reaction with the GST protein after removal of anti-GST by affinity chromatography (Fig. 4). The preimmune rabbit serum showed no reactivity with any of the fusion proteins (data not shown).
Immunoblot analysis of reactivities of anti-GST-VT1A1 and anti-GST-VT2A1. Fusion proteins were subjected to SDS-PAGE, immobilized onto nylon membranes, and then blotted with prepurified (A) and postpurified (B) anti-GST-VT1A1. Similar procedures were done on prepurified (C) and postpurified (D) anti-GST-VT2A1. Lanes: 1, GST-VT1A1; 2, GST-VT2A1; 3, GST. Molecular sizes (in kilodaltons) are indicated to the right.
Evaluation of the polyclonal antibodies by sandwich ELISA. The specificities of anti-VT1A1 and anti-VT2A1 were assessed by using three VT1-, five VT2-, and two non-VT-producing strains. When anti-VT1A1 was tested against the culture supernatant from VT1-producing strains, the mean absorbance ± standard deviation was 0.79 ± 0.15; when tested against VT2-producing and non-VT-producing strains, the mean absorbance values were 0.03 ± 0.006 and 0.03 ± 0.01, respectively. When anti-VT2A1 was tested against culture supernatant from VT1-, VT2-, and non-VT-producing strains, the mean absorbance values were 0.52 ± 0.10, 0.57 ± 0.19, and 0.38 ± 0.10, respectively.
In vitro cytotoxicity neutralization of the rabbit antisera. The highest dilutions of anti-GST-VT1A1 and anti-GST-VT2A1 able to achieve 50% neutralization of their homologous antigens were 1/1,600 and 1/100, respectively. When tested with standard VTEC strains, both antibodies at a dilution twofold higher than the 50% neutralization dose achieved 80 to 100% neutralization of toxins (40 CD50) produced from homologous strains and there was 3 to 5% neutralization activity with toxins from heterologous strains. Culture supernatants from the nontoxigenic strains ATCC 43888 and ATCC 25922 had no cytotoxic effect on the Vero cells. Anti-GST-VT2A1 neutralized VT2e produced from strain S1191. Antiserum raised from rabbits immunized with the GST protein had no neutralizing activities against the toxins.
VTs from 41 animal and 4 human VTEC strains were similarly tested in the neutralization assay (Fig. 5). Culture supernatants were reacted with individual antibodies or mixtures of antibodies and then added to Vero cell monolayers to test for residual cytotoxicity of the culture supernatant. Cell supernatants from all VTEC strains were cytotoxic to Vero cells. For the six VTEC strains harboring only the vt1 gene, the average neutralization percentages ± standard deviations were 76.7% ± 7.9% and 3.6% ± 2.3% when reacted with anti-GST-VT1A1 and anti-GST-VT2A1, respectively. For the 29 VTEC strains (25 from animals and 4 from humans) harboring only the vt2 gene, the neutralization percentages were 1.7% ± 2.3% and 82.5% ± 13.9% when reacted with anti-GST-VT1A1 and anti-GST-VT2A1, respectively. In both cases, the differences were statistically significant (P = 0.03 for VT1 and P < 0.0001 for VT2). The antibodies had high neutralizing activities against their homologous toxins. For the six strains producing both VT1 and VT2, the levels of neutralization were low when reacted with each antibody individually. When a mixture of both antibodies was used, strains producing either or both toxins were neutralized ≥75% (Fig. 5). Culture supernatants from the three porcine VTEC strains harboring vt2e alone or with vt2 were neutralized by anti-GST-VT2A1 and antibody mixture but not by anti-GST-VT1A1. Similar results were observed in the cattle VTEC strains carrying the vt variant gene. In both cases, neutralization activities were lower than those against strains carrying vt2.
Neutralization activities of the antisera. VTs produced from human and animal VTEC isolates were reacted with antisera, and the neutralization of cytotoxicity was detected.
In this study, we report the use of GST-VT fusion proteins as immunogens for the production of polyclonal antibodies. The manipulations involved were straightforward: a selected region of VT was fused with GST and the fusion protein was purified from the bacterial lysate (5). Specific polyclonal antibodies were then produced from the fusion proteins by animal immunization. The A1 subunit was selected for the construction of the fusion protein, as this is the catalytic domain of the toxin molecule (11). Several investigators have encountered difficulties in expressing a sufficient amount of protein product due to the presence of the signal peptide at the N terminus, which causes rapid transportation of the product out of the bacterial cell (21, 22). Removal of the signal peptide leads to cytoplasmic retention of the toxin, resulting in higher yields of expressed products. In order to obtain a sufficient amount of fusion protein in soluble form, we have excluded the signal peptide at the N terminus and the hydrophobic transmembrane region at the C terminus during the preparation of the constructs. We showed that upon IPTG induction, bacterial cells transformed with these constructs, but not with constructs carrying entire A1 subunits, expressed large amounts of soluble fusion proteins.
As VT1A1 and VT2A1 were without the transmembrane region, which is required for cytotoxicity (11, 23), the ability of the fusion proteins to produce neutralizing antibodies needed confirmation. Our results demonstrated that the antibodies produced had high levels of neutralization against their homologous native VTs. The titer of the anti-GST-VT1A1 antiserum was higher than that of anti-GST-VT2A1. This may be due to differences in immunogenicity for the rabbits used or degradation of the fusion protein after injection into the animal. Cross neutralization between VT1 and VT2 was not evident. However, anti-GST-VT2A1 was able to neutralize VT2 variants. The cross neutralization between VT2 and VT2 variants is probably due to the high homologies of subunit A between these variants (15). This can be considered an advantage as coverage of the antibody activity is extended to the VT2 variants. When animal and human VTEC strains were tested, neutralization results were consistent with those of the vt1 and vt2 genotypes. Attempts have been made to cleave GST from the fusion proteins before immunization; however, the yields of end products after thrombin cleavage were very low. The entire fusion proteins were subsequently used for animal injections, and anti-GST was also present in the antisera. Our results showed that antiserum raised from rabbits immunized with GST did not neutralize toxins produced from any of the test or standard strains, indicating that anti-GST did not interfere with the neutralization assay. We have tried evaluating the polyclonal antibodies by using ELISA and found that the polyclonal anti-VT1A1 detected VT1 specifically. However, nonspecific reactivity occurred in the polyclonal anti-VT2A1.
VTs have an important role in the pathogenesis of hemolytic-uremic syndrome (17). Molecular methods for the detection of vt genes have been established. However, certain strains carrying vt genes may be nontoxigenic or produce low levels of toxin (25) and the detection of VT production in bacterial isolates by specific antibodies is of ultimate importance. ELISA-based kits such as Premier EHEC (Meridian Diagnostics) and Ridascreen Verotoxin (R-Biopharm GmbH) have been developed for VT detection by using monoclonal antibodies, but false-positive results have been documented in these systems (3). The polyclonal antibodies produced from the GST-VT fusion proteins are specific and useful alternatives for the detection and differentiation of VT1 and VT2. In conclusion, we showed that avoidance of the signal peptide and the hydrophobic transmembrane regions in vt1A1 and vt2A1 sequences resulted in hyperexpression of GST-VT fusion proteins. We also found that these fusion proteins elicited specific neutralizing polyclonal antibodies in rabbits. The availability of these antibodies and the purified VT antigens allows for the development of reliable systems for immunological diagnosis of VTEC infections. The strategy presented here can be a paradigm applied to other systems for the production of fusion proteins and antibodies for diagnostic and therapeutic purposes.
We thank Wong Ka-wing for excellent technical assistance in the preparation of the fusion proteins and N. A. Strockbine from the Centers for Disease Control and Prevention, Atlanta, Ga., for providing the standard VTEC strains. The cooperation of the government abattoir in the isolation of VTEC strains is also gratefully acknowledged.
This work was supported by a grant from the Hong Kong Research Grants Council (HKU 7314/97 M) and a SPACE research grant award (21386308.03982.70300.420.01).
Received 19 November 2001.
Returned for modification 8 January 2002.
Agbodaze, D. 1999. Verocytotoxins (Shiga-like toxins) produced by Escherichia coli: a minireview of their classification, clinical presentations and management of a heterogeneous family of cytotoxins. Comp. Immunol. Microbiol. Infect. Dis.22:221-230.
Austin, P. R., P. E. Jablonski, G. A. Bohach, A. K. Dunker, and C. J. Hovde. 1994. Evidence that the A2 fragment of Shiga-like toxin type I is required for holotoxin integrity. Infect. Immun.62:1768-1775.
Beutin, L., S. Zimmermann, and K. Gleier. 1996. Pseudomonas aeruginosa can cause false-positive identification of verotoxin (Shiga-like toxin) production by a commercial enzyme immune assay system for the detection of Shiga-like toxins (SLTs). Infection24:267-268.
Bielaszewska, M., I. Clarke, M. A. Karmali, and M. Petric. 1997. Localization of intravenously administered verocytotoxins (Shiga-like toxins) 1 and 2 in rabbits immunized with homologous and heterologous toxoids and toxin subunits. Infect. Immun.65:2509-2516.
Calderwood, S. B., D. W. Acheson, M. B. Goldberg, S. A. Boyko, and A. Donohue-Rolfe. 1990. A system for production and rapid purification of large amounts of the Shiga toxin/Shiga-like toxin I B subunit. Infect. Immun.58:2977-2982.
Calderwood, S. B., F. Auclair, A. Donohue-Rolfe, G. T. Keusch, and J. J. Mekalanos. 1987. Nucleotide sequence of the Shiga-like toxin genes of Escherichia coli. Proc. Natl. Acad. Sci. USA84:4364-4368.
Fraser, M. E., M. M. Chernaia, Y. V. Kozlov, and M. N. James. 1994. Crystal structure of the holotoxin from Shigella dysenteriae at 2.5 A resolution. Nat. Struct. Biol.1:59-64.
Garred, O., B. van Deurs, and K. Sandvig. 1995. Furin-induced cleavage and activation of Shiga toxin. J. Biol. Chem.270:10817-10821.
Gentry, M. K., and J. M. Dalrymple. 1980. Quantitative microtiter cytotoxicity assay for Shigella toxin. J. Clin. Microbiol.12:361-366.
Gunzer, F., and H. Karch. 1993. Expression of A and B subunits of Shiga-like toxin II as fusions with glutathione S-transferase and their potential for use in seroepidemiology. J. Clin. Microbiol.31:2604-2610.
Haddad, J. E., A. Y. al-Jaufy, and M. P. Jackson. 1993. Minimum domain of the Shiga toxin A subunit required for enzymatic activity. J. Bacteriol.175:4970-4978.
Inward, C. D., J. Williams, I. Chant, J. Crocker, D. V. Milford, P. E. Rose, and C. M. Taylor. 1995. Verocytotoxin-1 induces apoptosis in Vero cells. J. Infect.30:213-218.
Leung, P. H. M., W. C. Yam, W. W. S. Ng, and J. M. S. Peiris. 2001. The prevalence and characterization of verotoxin-producing Escherichia coli isolated from cattle and pigs in an abattoir in Hong Kong. Epidemiol. Infect.126:173-179.
Lord, J. M., and L. M. Roberts. 1998. Toxin entry: retrograde transport through the secretory pathway. J. Cell Biol.140:733-736.
Melton-Celsa, A. R., and A. D. O'Brien. 1998. Structure, biology, and relative toxicity of Shiga toxin family members for cells and animals, p. 121-128. In J. B. Kaper and A. D. O'Brien (ed.), Escherichia coli O157:H7 and other Shiga toxin-producing E. coli strains. American Society for Microbiology, Washington, D.C.
Nakao, H., N. Kiyokawa, J. Fujimoto, S. Yamasaki, and T. Takeda. 1999. Monoclonal antibody to Shiga toxin 2 which blocks receptor binding and neutralizes cytotoxicity. Infect. Immun.67:5717-5722.
Paton, J. C., and A. W. Paton. 1998. Pathogenesis and diagnosis of Shiga toxin-producing Escherichia coli infections. Clin. Microbiol. Rev.11:450-479.
Persson, B., and P. Argos. 1994. Prediction of transmembrane segments in proteins utilising multiple sequence alignments. J. Mol. Biol.237:182-192.
Saxena, S. K., A. D. O'Brien, and E. J. Ackerman. 1989. Shiga toxin, Shiga-like toxin II variant, and ricin are all single-site RNA N-glycosidases of 28S RNA when microinjected into Xenopus oocytes. J. Biol. Chem.264:596-601.
Schmitt, C. K., M. L. McKee, and A. D. O'Brien. 1991. Two copies of Shiga-like toxin II-related genes common in enterohemorrhagic Escherichia coli strains are responsible for the antigenic heterogeneity of the O157:H− strain E32511. Infect. Immun.59:1065-1073.
Skinner, L. M., and M. P. Jackson. 1998. Inhibition of prokaryotic translation by the Shiga toxin enzymatic subunit. Microb. Pathog.24:117-122.
Suh, J. K., C. J. Hovde, and J. D. Robertus. 1998. Shiga toxin attacks bacterial ribosomes as effectively as eucaryotic ribosomes. Biochemistry37:9394-9398.
Suhan, M. L., and C. J. Hovde. 1998. Disruption of an internal membrane-spanning region in Shiga toxin 1 reduces cytotoxicity. Infect. Immun.66:5252-5259.
Wong, S. S. Y., W. C. Yam, P. H. M. Leung, P. C. Y. Woo, and K. Y. Yuen. 1998. Verocytotoxin-producing Escherichia coli infection: the Hong Kong experience. J. Gastroenterol. Hepatol.13(Suppl.):S289-S293.
Yam, W. C., D. N. Tsang, T. L. Que, M. Peiris, W. H. Seto, and K. Y. Yuen. 1998. A unique strain of Escherichia coli O157:H7 that produces low verocytotoxin levels not detected by use of a commercial enzyme immunoassay kit. Clin. Infect. Dis.27:905-906.
Clinical and Diagnostic Laboratory Immunology May 2002, 9 (3) 687-692; DOI: 10.1128/CDLI.9.3.687-692.2002
Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions
Abstract: The recent line of study on randomness extractors has been a great success, resulting in exciting new techniques, new connections, and breakthroughs to long standing open problems in several seemingly different topics. These include seeded non-malleable extractors, privacy amplification protocols with an active adversary, independent source extractors (and explicit Ramsey graphs), and non-malleable codes in the split state model. Previously, the best constructions are given in [Li17]: seeded non-malleable extractors with seed length and entropy requirement $O(\log n+\log(1/\epsilon)\log \log (1/\epsilon))$ for error $\epsilon$; two-round privacy amplification protocols with optimal entropy loss for security parameter up to $\Omega(k/\log k)$, where $k$ is the entropy of the shared weak source; two-source extractors for entropy $O(\log n \log \log n)$; and non-malleable codes in the $2$-split state model with rate $\Omega(1/\log n)$. However, in all cases there is still a gap to optimum and the motivation to close this gap remains strong.
In this paper, we introduce a set of new techniques to further push the frontier in the above questions. Our techniques lead to improvements in all of the above questions, and in several cases partially optimal constructions. This is in contrast to all previous work, which only obtain close to optimal constructions. Specifically, we obtain:
1. A seeded non-malleable extractor with seed length $O(\log n)+\log^{1+o(1)}(1/\epsilon)$ and entropy requirement $O(\log \log n+\log(1/\epsilon))$, where the entropy requirement is asymptotically optimal by a recent result of Gur and Shinkar [GurS17];
2. A two-round privacy amplification protocol with optimal entropy loss for security parameter up to $\Omega(k)$, which solves the privacy amplification problem completely;
3. A two-source extractor for entropy $O(\frac{\log n \log \log n}{\log \log \log n})$, which also gives an explicit Ramsey graph on $N$ vertices with no clique or independent set of size $(\log N)^{O(\frac{\log \log \log N}{\log \log \log \log N})}$; and
4. The first explicit non-malleable code in the $2$-split state model with constant rate, which has been a major goal in the study of non-malleable codes for quite some time. One small caveat is that the error of this code is only (an arbitrarily small) constant, but we can also achieve negligible error with rate $\Omega(\log \log \log n/\log \log n)$, which already improves the rate in [Li17] exponentially.
We believe our new techniques can help to eventually obtain completely optimal constructions in the above questions, and may have applications in other settings.
Category / Keywords: cryptographic protocols / non-malleable code, privacy amplification, non-malleable extractor
Date: received 11 Apr 2018, last revised 18 Apr 2018
Contact author: lixints at cs jhu edu
Journal of Petroleum Exploration and Production Technology
December 2018, Volume 8, Issue 4, pp 1169–1181
A new practical method to evaluate the Joule–Thomson coefficient for natural gases
N. Tarom
Md. Mofazzal Hossain
Azar Rohi
Original Paper - Production Engineering
First Online: 26 October 2017
The Joule–Thomson (JT) phenomenon, the study of fluid temperature changes for a given pressure change at constant enthalpy, has great technological and scientific importance for the design, maintenance and prediction of hydrocarbon production. The phenomenon plays a vital role in many facets of hydrocarbon production, especially those associated with reservoir management, such as the interpretation of temperature logs of production and injection wells, the identification of water and gas entry locations in multilayer production scenarios, the modelling of the thermal response of hydrocarbon reservoirs and the prediction of wellbore flowing temperature profiles. The purpose of this study is to develop a new method for the evaluation of the JT coefficient, an essential parameter required to account for the Joule–Thomson effects while predicting the flowing temperature profile of gas production wells. To do this, a new correction factor, C NM, has been developed through numerical analysis, and a practical method to predict C NM is proposed, which simplifies the prediction of the flowing temperature of gas production wells while accounting for the Joule–Thomson effect. The developed correlation and methodology were validated through an exhaustive survey conducted with 20 different gas mixture samples. For each sample, the model was run for a wide range of temperature and pressure conditions and rigorously verified by comparing the results estimated throughout the study with the results obtained from HYSYS and the Peng–Robinson equation of state. It is observed that the model is very simple and robust, yet can accurately predict the Joule–Thomson effect.
Keywords: Joule–Thomson effect · Gas mixture compositions · Z factor · Equation of state · Empirical Z factor correlation
List of symbols
C NM: Nathan–Mofazzal correction factor
C p: Fluid heat capacity (Btu/(lb-mole °F))
JT: Joule–Thomson
P c: Critical pressure (psia)
P pc: Pseudo-critical pressure (psia)
P pr: Pseudo-reduced pressure
R: Universal gas constant ((ft)3 (psia) (lb-mole)−1 (°R)−1)
T c: Critical temperature (°F)
T pc: Pseudo-critical temperature (°F)
T pr: Pseudo-reduced temperature
\( \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} \): Variations of Z factor at different temperatures with respect to a constant pressure
\( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} \): Variations of Z factor at different temperatures with respect to a constant pressure when the gas mixture compositions are known
\( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} \): Variations of Z factor at different temperatures with respect to a constant pressure when the gas mixture compositions are unknown
1 Btu: 5.40395 ((lbf/in2) ft3)
μ JT: Joule–Thomson coefficient (°F/psi)
γ g: Gas specific gravity
ρ: Fluid density (lbm/ft3)
Z: Compressibility factor
The Joule–Thomson (JT) phenomenon describes the increase or decrease in the temperature of a gas mixture when it expands freely through a restriction, such as perforations, with no heat exchanged with the surrounding media and no external mechanical work done (Perry and Green 1984; Reif 1965). The JT value depends mainly on the properties of the gas mixture and the gas flow rate rather than on heat exchange with the surroundings, and it can take positive or negative values depending on whether the gas pressure is high or low (Jeffry 2009; Pinto et al. 2013; Steffensen and Smith 1973; Ziabakhsh-Ganji and Kooi 2014; Tarom and Hossain 2017). In production engineering, the JT effect is of interest because of its significant influence on the analysis of temperature logs, especially for gas injection/production wells, the evaluation of wellbore temperature profiles, the determination of fluid flow from multiple production layers and the identification of the locations of water and gas entry points. However, the evaluation of a reliable JT coefficient for gas mixtures is still a challenge for production engineers due to the complexity involved in production and injection wells. This study aims to develop a new and reliable practical method for the evaluation of the JT coefficient, which can be applied to both production and injection scenarios to accurately evaluate the flowing temperature profile of injection or production wells.
For the accurate prediction of the JT coefficient, the accurate determination of the gas compressibility factor (Z) of the desired gas mixture and of the variation of the Z factor with temperature at a constant pressure plays a crucial role. Depending on the available field and laboratory data, and on whether the gas mixture compositions are known or unknown, different approaches such as equations of state (EOSs) and empirical Z factor correlations can be used to determine the gas compressibility factor (Z) and its variation with temperature and pressure, which are required for the determination of the JT coefficient. For instance, when the gas mixture compositions are known, any of the equations of state (EOSs), such as van der Waals (vdW), Soave–Redlich–Kwong (SRK) or Peng–Robinson (PR), can be used to determine the Z factor and its variations. When the compositions of the gas mixture are unknown, empirical Z factor correlations such as Beggs and Brill (1973), the Bahadori et al. (2007) correlation, the Heidaryan et al. (2010) correlation, the Hall and Yarborough (1973) correlation and Dranchuk and Abou-Kassem (1975) are widely used as routine industry practice for the determination of the Z factor.
Recently, a simplified mathematical model was developed for the prediction of the JT coefficient, which can be applied to the evaluation of the flowing temperature profile along a gas-producing well when the compositions of the gas mixture are unknown (Tarom and Hossain 2015). In that model, the correction factor was expressed as a function of gas gravity for a given constant pressure and temperature. Since the JT effect also depends on pressure and temperature, the previous correlation cannot properly account for changes in pressure and temperature. In this study, a new correction factor is developed as a function of the gas gravity, temperature and pressure of the producing gas. The proposed correction factor, named the Nathan–Mofazzal correction factor, C NM, is tested rigorously for 20 different gas mixtures and applied to evaluate the JT coefficient for gas mixtures when the gas mixture compositions are unknown.
Mathematical model of the JT coefficient
Natural gases within gas reservoirs are normally formed by a combination of hydrocarbon and non-hydrocarbon components, with methane as the main constituent. The hydrocarbon components are mainly n-alkanes (e.g. methane, ethane and propane), whereas N2, CO2 and H2S are examples of the non-hydrocarbon components of natural gases. For single-phase gas and referring to the real gas law, PV = ZnRT, the JT coefficient for 1 mol (i.e. n = 1) of a desired gas mixture is generally expressed as (Cengel and Boles 2008):
$$ \mu_{\text{JT}} = \frac{1}{{C_{\text{p}} }}\left[ {\frac{T}{Z\rho }\left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} } \right] $$
where µ JT, C p, T, Z, ρ and P denote the JT coefficient, heat capacity, temperature, gas compressibility factor, gas density and pressure, respectively. In this equation, C p is in Btu/(lb-mole °F) and ρ is in lbm/ft3, where one Btu is equal to 5.40395 ((lbf/in2) ft3).
The estimation of the isobaric heat capacity (C p) of ideal and natural gases has been extensively studied by numerous researchers (Kareem et al. 2014; Jarrahian and Heidaryan 2014; Abou-Kassem and Dranchuk 1982). Kareem et al. (2014) presented the correlation given by Eq. 1a in field units to estimate the isobaric specific heat capacity of natural gas as a function of temperature and gas gravity, based upon 200 generated samples of natural gas mixtures with the methane component ranging from 0.74 to 0.9985, using a normally distributed experimental design. The correlation is recommended for natural gas gravities ranging from 0.55 to 1.00 and temperatures ranging from − 280 to 2240 °F.
$$ C_{\text{p}} = \left( {8.0211\gamma_{\text{g}} + 3.3359} \right) + \left( {2.0744 \times 10^{ - 2} \gamma_{\text{g}} - 4.2441 \times 10^{ - 3} } \right)T + \left( { - 8.1528 \times 10^{ - 6} \gamma_{\text{g}} + 4.8536 \times 10^{ - 9} } \right)T^{2} + \left( {1.2887 \times 10^{ - 9} \gamma_{g} - 1.1626 \times 10^{ - 9} } \right)T^{3} $$
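For illustration only, the following Python sketch (not part of the original paper) evaluates the heat capacity correlation of Eq. 1a and then the JT coefficient of Eq. 1; the values of Z, ρ and (∂Z/∂T)p are assumed to be supplied from an EOS or a correlation, and the example input values are hypothetical.

```python
# Illustrative sketch only: Eq. 1a (isobaric heat capacity) and Eq. 1
# (Joule-Thomson coefficient); Z, rho and (dZ/dT)_p must be supplied
# separately, e.g. from the PR-EOS or an empirical correlation.

BTU_TO_PSI_FT3 = 5.40395  # 1 Btu = 5.40395 (lbf/in^2)*ft^3 (see nomenclature)

def heat_capacity_cp(gamma_g, temp_f):
    """Isobaric heat capacity in Btu/(lb-mole degF), per Eq. 1a
    (temp_f in degF, gamma_g = gas specific gravity)."""
    return ((8.0211 * gamma_g + 3.3359)
            + (2.0744e-2 * gamma_g - 4.2441e-3) * temp_f
            + (-8.1528e-6 * gamma_g + 4.8536e-9) * temp_f ** 2
            + (1.2887e-9 * gamma_g - 1.1626e-9) * temp_f ** 3)

def joule_thomson(temp_abs, z, rho, dzdt, cp_btu):
    """Eq. 1: mu_JT = (1/Cp) * [T/(Z*rho)] * (dZ/dT)_p; Cp is converted from
    Btu to (lbf/in^2)*ft^3 so the result comes out in degF/psi.
    temp_abs is assumed here to be the absolute temperature (degR)."""
    cp = cp_btu * BTU_TO_PSI_FT3
    return (1.0 / cp) * (temp_abs / (z * rho)) * dzdt

# Hypothetical example values (for illustration only):
cp = heat_capacity_cp(gamma_g=0.70, temp_f=150.0)
mu_jt = joule_thomson(temp_abs=150.0 + 459.67, z=0.85, rho=5.0,
                      dzdt=5.0e-5, cp_btu=cp)
print(cp, mu_jt)
```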
Determination of the JT coefficient
For prediction of the JT coefficient in Eq. 1, term \( \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} \) needs to be evaluated. In order to achieve the goal, in this article, terms \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} \) and \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} \) will replace term \( \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} \) in Eq. 1. Terms \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} \) and \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} \) explain the gas mixture compositions of producing gas when gas compositions are known and unknown, respectively.
For determination of term \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} \) in Eq. 1 when compositions of a desired gas mixture are known, Peng–Robinson equation of state (PR-EOS) is found to be the most reliable and appropriate method for evaluation of phase behaviour and volumetric properties of both mixture and pure fluids. Applying PR-EOS, term \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} \) can be explained as follows:
$$ \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} = \frac{{\left( {\frac{\partial A}{\partial T}} \right)_{\text{p}} \left( {B - Z} \right) + \left( {\frac{\partial B}{\partial T}} \right)_{\text{p}} \left( {6BZ + 2Z - 3B^{2} - 2B + A - Z^{2} } \right)}}{{3Z^{2} + 2\left( {B - 1} \right)Z + \left( {A - 2B - 3B^{2} } \right)}} $$
where A and B are:
$$ A = \frac{aP}{(RT)^{2}} $$
$$ B = \frac{bP}{RT} $$
where a and b are PR-EOS mixture parameters. Details of derivation of Eq. 2 are shown in 'Appendix 1'.
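As a minimal sketch (not from the original paper), Eq. 2 can be evaluated as follows once Z, A, B and the temperature derivatives (∂A/∂T)p and (∂B/∂T)p (given in Appendix 1) are known; all inputs here are assumed to be computed elsewhere.

```python
def dzdt_pr_eos(z, a_par, b_par, dadt, dbdt):
    """Eq. 2: (dZ/dT)_p from the PR-EOS, given the dimensionless parameters
    A, B, the compressibility factor Z and the derivatives (dA/dT)_p, (dB/dT)_p."""
    numerator = (dadt * (b_par - z)
                 + dbdt * (6.0 * b_par * z + 2.0 * z
                           - 3.0 * b_par ** 2 - 2.0 * b_par + a_par - z ** 2))
    denominator = (3.0 * z ** 2 + 2.0 * (b_par - 1.0) * z
                   + (a_par - 2.0 * b_par - 3.0 * b_par ** 2))
    return numerator / denominator
```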
When gas mixture compositions are not available, the term \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} \) can be expressed by Eq. 3 (Tarom and Hossain 2015):
$$ \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} = \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} \left( {\frac{{\partial T_{pr} }}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} = \frac{1}{{T_{\text{pc}} }}\left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} $$
In Eq. 3, the Katz–Standing chart (Ahmed 1946) can be a reliable method for evaluation of term \( \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} \). To accomplish the task of evaluation of term \( \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} \) in Eq. 3, a correlation published by Bahrami (2012), which is a simplified mathematical form of the Katz–Standing chart, has been applied in this study. The details of mathematical derivations to evaluate term \( \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} \) are presented in Tarom and Hossain (2015) and 'Appendix 2'.
Correction factor, C NM
Considering Eqs. 2 and 3, it can be inferred that:
$$ \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} = \frac{1}{{T_{\text{pc}} }}\left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} = \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} $$
A computer program called wellbore flowing temperature profile (WTP) was developed to study the application of Eqs. 2 and 3 considering various gas mixture samples as presented in Table 1 to investigate Eq. 4 at various pressure/temperature conditions. The terms \( \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} \) and \( \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} \) in Eqs. 2 and 3 are separately evaluated using the developed program for the considered gas mixtures (Table 1) at different pressure/temperature conditions. Considerable anomalies are observed when correction factor is considered as independent of pressure and temperature. Therefore, Eq. 4 is redefined as:
$$ \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{uc}} }} = \frac{{C_{\text{NM}} }}{{T_{\text{pc}} }}\left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} = \left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} $$
where C NM is a correction factor named as Nathan–Mofazzal correction factor which is defined as:
$$ C_{\text{NM}} = \frac{{T_{\text{pc}} *\left( {\frac{\partial Z}{\partial T}} \right)_{{{\text{p}}_{\text{kc}} }} }}{{\left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{{{\text{p}}_{\text{uc}} }} }} $$
where C NM is the function of gas gravity (for unknown compositions), pressure and temperature.
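A minimal sketch of Eq. 6 (not part of the original paper): given the composition-based derivative from Eq. 2 and the reduced-temperature derivative from the gravity-based correlation of Eq. 3, the correction factor follows directly.

```python
def correction_factor_cnm(t_pc, dzdt_known, dzdtpr_unknown):
    """Eq. 6: C_NM = T_pc * (dZ/dT)_p,kc / (dZ/dT_pr)_p,uc."""
    return t_pc * dzdt_known / dzdtpr_unknown
```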
Table 1 Gas component data: component mole fractions (e.g. iC4) and gas gravity for the 20 gas mixture samples (numerical values not reproduced here)
Evaluation of correction factor, C NM
Twenty random gas samples with different compositions as presented in Table 1 are considered in this study for the evaluation of proposed correction factor, C NM. The predicted correction factor, C NM, spans a large range of pressure (1000 to 5000 psi) and temperature (100–300 °F) conditions. The evaluated data are plotted in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11. The results demonstrate that the correction factor, C NM, depends on specific gravity, temperature and pressure for given gas mixtures (Table 1). This part of the study focuses on the analysis of the outcomes of 'isotherm' and 'isobar' plots to demonstrate the applied method for the evaluation of correction factor, C NM.
Calculation of C NM at 100 °F and different pressures
The correction factor, C NM, plotted in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11 as a function of gas specific gravity at different pressures under isothermal conditions, is termed an 'isotherm plot' in this study. Each of the isotherm plots provides four sets of data, shown in blue, red, green and purple, representing the predicted data for pressures of 1000, 2000, 2500 and 3000 psi, respectively. A linear trend of C NM, with a negative slope from low to high specific gravity, is observed in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11 for all pressures and temperatures. However, the slope differs for each condition (i.e. pressure and temperature). It is observed from Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11 that the value of C NM decreases with increasing pressure at a given temperature and increases with increasing temperature at a given pressure for all gas mixtures considered in this study.
Therefore, the analysis of the data presented in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11 demonstrates that the correction factor, C NM, not only depends on gas specific gravity (Tarom and Hossain 2015), but also depends on pressure and temperature, which can be expressed as:
$$ C_{\text{NM}} = f\left( {\gamma_{\text{g}} , P, T} \right) $$
where C NM is named as Nathan–Mofazzal correction factor and γ g, P and T indicate specific gravity, pressure and temperature of a gas mixture, respectively. The gas specific gravity, γ g, in Eq. 7 also depends on the compositions of mixture, which can be determined either using appropriate EOS for known gas compositions or empirical correlation for a particular gas mixture, when the composition of gas mixture is unknown.
Equation 7 may be derived empirically through laboratory experiment or numerically through regression analysis. The current study is based on numerical regression analysis using MATLAB. The predicted value of C NM for considered gas mixtures (Table 1) is plotted in three-dimensional Cartesian coordinate system as a function of gas gravity and temperature for constant pressure (i.e. isobar condition) and presented in Figs. 12, 13, 14 and 15. Each of the surfaces in Figs. 12, 13, 14 and 15 represents the relation of C NM with gas gravity and temperature for a constant pressure and is termed as isobar plots.
Isobar plots for natural gases in Table 1 at 1000 and 2000 psi
Isobar plots for natural gases in Table 1 at 1000, 2000, 2500 and 3000 psi
Isobar plot for natural gases in Table 1 at 5000 psi
Isobar plot for natural gases in Table 1 at different pressure conditions
Figure 12 shows that the changes in C NM at low-pressure conditions (≤ 2000 psi) follow a smooth trend. However, this trend diverges as the gas mixture pressure increases (Fig. 13). Moreover, for gas mixture pressures up to 3000 psi, Figs. 12 and 13 demonstrate that the value of C NM appears to be highest at low gas specific gravities and high gas temperatures. In contrast, at high gas specific gravities and low gas temperatures, the value of C NM appears to be the minimum. Such behaviour may reflect the fact that gas mixtures with high gas specific gravity and low temperature are likely to be in the liquid phase, for which the JT coefficient may become negative due to the cooling effect (Jeffry 2009; Pinto et al. 2013; Steffensen and Smith 1973).
The values of C NM for different gas specific gravities and temperature conditions at 5000 psi are also plotted in Fig. 14. Figure 14 demonstrates that the trend of the changes in C NM at high-pressure conditions (≫ P pc) is fluctuating, which may involve inaccuracy of PR-EOS for gas mixture conditions near and above critical points (Pinto et al. 2013; Tarom et al. 2006; Baled et al. 2012; Danesh 1998; Chueh and Prausnitz 1967). Figure 15 also compares results for different pressure conditions of 1000, 2000, 2500, 3000 and 5000 psi.
In summary, the slope of each surface and the change in C NM values are observed to be different, as shown in Figs. 12, 13, 14 and 15, and consequently it is very difficult to define a unique polynomial equation as a function of gas gravity, pressure and temperature. However, it is observed from this study that C NM can best be represented in the form of surface polynomials, as presented in Eqs. 8–12 for the gas mixtures presented in Table 1. As shown in Eqs. 8–12, the correction factor C NM depends on the specific gravity of the gas mixture up to the third power and on the temperature up to the second power for any individual pressure condition.
$$ \begin{aligned} C_{{{\text{NM }}@ 1000{\text{psi}}}} & = 1.188 - 4.703\gamma_{\text{g}} + 0.0009404T + 6.821\gamma_{\text{g}}^{2} - 0.002842\gamma_{\text{g}} T \\ & \quad + 2.301e^{ - 06} T^{2} - 3.185\gamma_{\text{g}}^{3} + 0.001578T\gamma_{\text{g}}^{2} - 1.725e^{ - 06} \gamma_{\text{g}} T^{2} \\ \end{aligned} $$
$$ \begin{aligned} C_{{{\text{NM}}@ 2000{\text{psi}}}} & = - 0.8281 + 4.225\gamma_{\text{g}} - 0.0002417T - 6.281\gamma_{\text{g}}^{2} - 0.0002337\gamma_{\text{g}} T \\ & \quad + 2.733e^{ - 06} T^{2} + 3.036\gamma_{\text{g}}^{3} + 0.0006345T\gamma_{\text{g}}^{2} - 3.138e^{ - 06} \gamma_{\text{g}} T^{2} \\ \end{aligned} $$
$$ \begin{aligned} C_{{{\text{NM}}@ 2500{\text{psi}}}} & = - 3.752 + 16.96\gamma_{\text{g}} - 0.001103T - 24.11\gamma_{\text{g}}^{2} - 0.001583\gamma_{\text{g}} T \\ & \quad + 8.574e^{ - 06} T^{2} + 10.74\gamma_{\text{g}}^{3} + 0.005694T\gamma_{\text{g}}^{2} - 1.34e^{ - 05} \gamma_{\text{g}} T^{2} \\ \end{aligned} $$
$$ \begin{aligned} C_{{{\text{NM}}@ 3000{\text{psi}}}} & = - 5.349 + 24.10\gamma_{\text{g}} - 0.001244T - 34.72\gamma_{\text{g}}^{2} + 0.0004097\gamma_{\text{g}} T \\ & \quad + 5.751e^{ - 06} T^{2} + 15.75\gamma_{\text{g}}^{3} + 0.004646T\gamma_{\text{g}}^{2} - 1.11e^{ - 05} \gamma_{\text{g}} T^{2} \\ \end{aligned} $$
$$ \begin{aligned} C_{{{\text{NM}}@ 5000{\text{psi}}}} & = - 42.08 + 174.6\gamma_{\text{g}} - 0.02104T - 196.2\gamma_{\text{g}}^{2} - 0.2556\gamma_{\text{g}} T \\ & \quad + 5.238e^{ - 04} T^{2} + 54.79\gamma_{\text{g}}^{3} + 0.3631T\gamma_{\text{g}}^{2} - 6.918e^{ - 04} \gamma_{\text{g}} T^{2} \\ \end{aligned} $$
Linear regression techniques in MATLAB have been used to fit the curves represented by the polynomials in Eqs. 8–12. Table 2 shows the 'R-square', 'adjusted R-square' and 'RMSE' information for these polynomials, which provides statistical measures of how accurately the fitted curves reproduce the response values. Although the correlations proposed in Eqs. 8 and 9 are based upon the gas gravity of the natural gas systems presented in Table 1, it can be noticed that the range of gas gravity of the considered systems (Table 1) covers the range of gases typically seen in petroleum reservoirs. Consequently, it is believed that the proposed correlations are applicable to any natural gas system typically found in petroleum reservoirs when only the gas gravity of the gas system is known.
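As an illustration (not from the original paper), the 1000 psi surface of Eq. 8 can be evaluated directly once the gas gravity and temperature are known; the coefficients below are transcribed from Eq. 8 and the example inputs are hypothetical.

```python
def cnm_at_1000_psi(gamma_g, temp_f):
    """Eq. 8: fitted surface for C_NM at 1000 psi as a function of gas
    specific gravity (gamma_g) and temperature (degF)."""
    return (1.188 - 4.703 * gamma_g + 0.0009404 * temp_f
            + 6.821 * gamma_g ** 2 - 0.002842 * gamma_g * temp_f
            + 2.301e-6 * temp_f ** 2 - 3.185 * gamma_g ** 3
            + 0.001578 * temp_f * gamma_g ** 2
            - 1.725e-6 * gamma_g * temp_f ** 2)

# Hypothetical example: a 0.7-gravity gas at 150 degF
print(cnm_at_1000_psi(0.70, 150.0))
```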
Table 2 Curve-fitting and statistical measurement information for the polynomial fits (R-square, adjusted R-square and RMSE for the fitted curves at 1000, 2000, 2500, 3000 and 5000 psi; numerical values not reproduced here)
Validation of proposed model
The gas compressibility factor (Z) and the term \( \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} \) play essential roles in the accurate evaluation of the JT coefficient and the proposed correction factor, C NM. To address this issue, a set of different gas mixtures (Table 1) is considered, and the Z factor of each gas mixture is calculated to investigate the accuracy of the proposed model. The predicted compressibility factors (Z) presented in Table 3 are compared with the Z factors calculated by the reliable, industry-standard software HYSYS and by the PR-EOS, including the calculation of the mean absolute percentage error (MAPE) as expressed in Eq. 13:
$$ {\text{MAPE}} = \frac{1}{n}\mathop \sum \limits_{1}^{n} \left| {\frac{{Z_{\text{HYSYS}} - Z_{\text{Predicted}} }}{{Z_{\text{HYSYS}} }}} \right|*100 $$
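For clarity, a one-to-one transcription of Eq. 13 into Python might look like the following sketch (not part of the original paper).

```python
def mape(z_hysys, z_predicted):
    """Eq. 13: mean absolute percentage error between HYSYS and predicted Z factors."""
    n = len(z_hysys)
    return 100.0 / n * sum(abs((zh - zp) / zh)
                           for zh, zp in zip(z_hysys, z_predicted))
```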
Table 3 Estimation and comparison of compressibility factors (Z) for the gas mixtures in Table 1 (Z factor from the PR-EOS and from HYSYS, with the corresponding error in %); the overall MAPE is 1.86% (individual values not reproduced here)
Similarly, the predicted value of the term \( \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} \) using proposed method and PR-EOS is shown in Table 4.
Table 4 Comparison of the predicted values of the term (∂Z/∂T)p for the gas mixtures in Table 1 (representative value 5.08E − 05; full table not reproduced here)
It can be observed from Tables 3 and 4 that the mean absolute percentage error (MAPE) is 1.84 and 5.59%, respectively, for the Z factor and the term \( \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} \), which warrants that the proposed method can provide similar results with a high level of accuracy. It is also observed that the proposed method is far simpler than the existing method and can be used as a simple yet important tool for routine industry application.
To support the validation of this work, a range of additional evaluations has also been made in this study to compute Z factors and JT coefficients and compare them with different scientific sources. For instance, Z factors have been evaluated for the first ten components in Table 1 at different pressure and temperature conditions, and the estimated Z factors have been compared with the results from HYSYS (Table 5); very good agreement between the Z factors calculated in this work and those from HYSYS is observed. Also, Z factors and JT coefficients have been evaluated for different methane–n-butane systems in the gaseous and liquid regions and compared with the works published by Sage et al. (1940) and Budenholzer et al. (1940) (Table 6), where good agreement between the results can be seen as well.
Table 5 Evaluation of the Z factor at different pressure and temperature conditions (e.g. at 150 °F and 1000 psia; numerical values not reproduced here)
Table 6 Evaluation of Z factors and JT coefficients for different methane–n-butane systems in the gaseous and liquid regions. Columns: % CH4, temperature (°F), pressure (psia), C p, Z a, µ b, Z c and µ c, where µ is the Joule–Thomson coefficient (°F/psi) and C p is the isobaric heat capacity (Btu/(lb °F)); superscript a denotes values evaluated by Sage et al., b values evaluated by Budenholzer et al. and c values evaluated by this work (numerical values not reproduced here).
A new and simple method is developed in this study for the evaluation of the JT coefficient for natural gas mixtures, including at reservoir conditions. A new correction factor, named the Nathan–Mofazzal correction factor, C NM, is developed, which can be effectively used for the estimation of the JT coefficient for gas mixtures when the gas mixture compositions are unknown. The study demonstrates that C NM depends on the gas specific gravity as well as on the pressure and temperature conditions of the gas mixtures. Throughout this study, 'isotherm' and 'isobar' plots have been produced using Excel spreadsheets and MATLAB for the evaluation of the proposed correction factor, C NM. The study demonstrates that, for an isobaric condition, C NM appears to be higher at lower gas specific gravity and higher temperature. In contrast, for the same pressure condition, at higher gas specific gravity and lower temperature, C NM is lower. The comparison of the results obtained from the proposed method with those from the commercial simulator HYSYS warrants that the proposed method can be reliably used as an important tool in routine industry environments. The scope of the proposed method can be broadened by including other reliable correlations for the Z factor to cover a wider range of pressure and temperature conditions.
Appendix 1: Derivative of compressibility factor (Z) using PR-EOS
The cubic polynomial form of PR-EOS is written as:
$$ f\left( Z \right) = Z^{3} + \alpha Z^{2} + \beta Z + \gamma = 0 $$
$$ \alpha = B - 1 $$
$$ \beta = A - 2B - 3B^{2} $$
$$ \gamma = B^{3} + B^{2} - AB $$
$$ A = \frac{aP}{(RT)^{2}} $$
$$ a = a_{\text{c}} \left[ {1 + m\left( {1 - \sqrt {\frac{T}{{T_{\text{c}} }}} } \right)} \right]^{2} $$
$$ a_{\text{c}} = 0.457235\,\frac{R^{2} T_{\text{c}}^{2}}{P_{\text{c}}} $$
$$ m = 0.37464 + 1.54226\omega - 0.26992\omega^{2} $$
$$ b = 0.077796\,\frac{RT_{\text{c}}}{P_{\text{c}}} $$
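As a minimal sketch (not from the paper), the cubic form of Eq. 14 can be solved numerically for Z once A and B are known; taking the largest real root as the gas-phase Z is a common convention and is an assumption here.

```python
import numpy as np

def z_factor_pr(a_par, b_par):
    """Solve Eq. 14, Z^3 + alpha*Z^2 + beta*Z + gamma = 0, with alpha, beta,
    gamma from Eqs. 15-17, and return the largest real root (assumed to be
    the gas-phase compressibility factor)."""
    alpha = b_par - 1.0
    beta = a_par - 2.0 * b_par - 3.0 * b_par ** 2
    gamma = b_par ** 3 + b_par ** 2 - a_par * b_par
    roots = np.roots([1.0, alpha, beta, gamma])
    real_roots = roots[np.abs(roots.imag) < 1e-10].real
    return real_roots.max()
```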
Mixing rules
Equations of state are basically formulated to describe the volumetric and phase behaviour of pure components. Therefore, mixing rules are applied to extend the application of equations of state to mixture fluids.
For a fluid with n components, the following empirical relations, Eqs. 24 and 25, are applied to calculate the mixture parameters a and b:
$$ a = \mathop \sum \limits_{i = 1}^{n} \mathop \sum \limits_{j = 1}^{n} w_{i} w_{j} \left( {a_{i} a_{j} } \right)^{0.5} \left( {1 - k_{ij} } \right) $$
$$ b = \mathop \sum \limits_{i = 1}^{n} w_{i} b_{i} $$
where k ij in Eq. 24 is called binary interaction coefficient and is known as an interaction parameter between non-similar molecules. The value of k ij is equal to zero when i = j and is nonzero for non-hydrocarbon–hydrocarbon components. Also, the value of k ij is close to zero for hydrocarbon–hydrocarbon interaction. The value for k ij is tabulated in the literature (Ahmed 1946), and this literature also suggests the following equation for evaluation of k ij .
$$ \left( {1 - k_{ij} } \right) = \left[ {\frac{{2\left( {V_{ci}^{1/3} V_{cj}^{1/3} } \right)^{1/2} }}{{V_{ci}^{1/3} + V_{cj}^{1/3} }}} \right]^{n} $$
Danesh (1998) in his book suggested the theoretical value of n = 6; however, Chueh and Prausnitz (1967) believed that n = 3 gives better results.
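A minimal sketch of the mixing rules in Eqs. 24 and 25 (not part of the original paper); the per-component parameters a_i and b_i and the binary interaction coefficients k_ij are assumed to be available.

```python
def mixture_parameters(w, a, b, k):
    """Eqs. 24-25: mixture parameters a and b for an n-component fluid, given
    mole fractions w, pure-component parameters a[i], b[i] and binary
    interaction coefficients k[i][j] (with k[i][i] = 0)."""
    n = len(w)
    a_mix = sum(w[i] * w[j] * (a[i] * a[j]) ** 0.5 * (1.0 - k[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(w[i] * b[i] for i in range(n))
    return a_mix, b_mix
```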
Using Eq. 14, the details of derivatives in Eq. 2 are as follows:
$$ \left( {\frac{\partial Z}{\partial T}} \right)_{\text{p}} = \frac{{\left( {\frac{\partial A}{\partial T}} \right)_{\text{p}} \left( {B - Z} \right) + \left( {\frac{\partial B}{\partial T}} \right)_{\text{p}} \left( {6BZ + 2Z - 3B^{2} - 2B + A - Z^{2} } \right)}}{{3Z^{2} + 2\left( {B - 1} \right)Z + \left( {A - 2B - 3B^{2} } \right)}} $$
$$ \left( {\frac{\partial A}{\partial T}} \right)_{\text{p}} = \frac{P}{{R^{2} T^{2} }}\left( {\frac{{{\text{d}}a}}{{{\text{d}}T}} - \frac{2a}{T}} \right) $$
$$ \left( {\frac{\partial B}{\partial T}} \right)_{\text{p}} = \frac{ - bP}{{{\text{RT}}^{2} }} $$
Also, Eq. 24 applies for the evaluation of term \( \frac{{{\text{d}}a}}{{{\text{d}}T}} \) for an n-component fluid.
$$ \frac{{{\text{d}}a}}{{{\text{d}}T}} = \frac{1}{2}\mathop \sum \limits_{i = 1}^{n} \mathop \sum \limits_{j = 1}^{n} w_{i} w_{j} \left( {a_{i} a_{j} } \right)^{0.5} \left[ {\sqrt {\frac{{a_{j} }}{{a_{i} }}} \frac{{{\text{d}}a_{i} }}{{{\text{d}}T}} + \sqrt {\frac{{a_{i} }}{{a_{j} }}} \frac{{{\text{d}}a_{j} }}{{{\text{d}}T}}} \right] $$
where Eq. 20 is applied for the evaluation of term \( \frac{{{\text{d}}a_{i} }}{{{\text{d}}T}} \).
$$ \frac{{{\text{d}}a_{i} }}{{{\text{d}}T}} = \frac{{ - m_{i} a_{i} }}{{\left[ {1 + m_{i} \left( {1 - \sqrt {\frac{T}{{T_{\text{c}} }}} } \right)} \right]\sqrt {{\text{TT}}_{ci} } }} $$
The Bahrami et al. correlation (Cengel and Boles 2008) is given as follows:
$$ Z = C_{1} + C_{2} P_{\text{pr}} + C_{3} P_{\text{pr}}^{2} + C_{4} P_{\text{pr}}^{3} + C_{5} P_{\text{pr}}^{4} $$
It is found to be relatively more accurate when T pr > 1.25 (Cengel and Boles 2008).
Parameters C 1 to C 5 in Eq. 14 are calculated as follows:
$$ C_{1} = 0.96 + 0.008T_{\text{pr}} + \frac{0.22}{{T_{\text{pr}}^{2} }} $$
$$ C_{2} = 0.29 - 0.0635T_{\text{pr}} - \frac{0.865}{{T_{\text{pr}}^{2} }} $$
$$ C_{3} = \frac{{0.00032 + 0.2T_{\text{pr}}^{ - 5.58} }}{{0.45 + T_{\text{pr}}^{ - 5.57} }} $$
$$ C_{4} = \frac{{ - 0.025 + 0.00013T_{\text{pr}}^{5.47} }}{{0.665 + T_{\text{pr}}^{5.47} }} $$
$$ C_{5} = - 0.0001 + \frac{{9*10^{ - 5} }}{{1 - 6.466e^{{\left( { - 1.815T_{\text{pr}} } \right)}} }} $$
$$ T_{\text{pr}} = \frac{T}{{T_{\text{pc}} }} $$
$$ P_{\text{pr}} = \frac{P}{{P_{\text{pc}} }} $$
Derivatives C 1 to C 5
$$ \left( {\frac{{\partial C_{1} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} = 0.008 - \frac{0.44}{{T_{\text{pr}}^{3} }} $$
$$ \left( {\frac{{\partial C_{2} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} = - 0.0635 + \frac{1.73}{{T_{\text{pr}}^{3} }} $$
$$ \left( {\frac{{\partial C_{3} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} = - \frac{{0.5022T_{\text{pr}}^{ - 6.57} - 0.0017824T_{\text{pr}}^{ - 6.58} + 0.002T_{\text{pr}}^{ - 12.15} }}{{\left( {0.45 + T_{\text{pr}}^{ - 5.57} } \right)^{2} }} $$
$$ \left( {\frac{{\partial C_{4} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} = \frac{{0.137223T_{\text{pr}}^{4.47} }}{{\left( {0.665 + T_{\text{pr}}^{5.47} } \right)^{2} }} $$
$$ \left( {\frac{{\partial C_{5} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} = \frac{{0.001056e^{{\left( { - 1.815T_{\text{pr}} } \right)}} }}{{\left( {1 - 6.466e^{{\left( { - 1.815T_{\text{pr}} } \right)}} } \right)^{2} }} $$
Therefore, Eq. 14 may be applied to express \( \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} \) as follows:
$$ \left( {\frac{\partial Z}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} = \left( {\frac{{\partial C_{1} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} + \left( {\frac{{\partial C_{2} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} P_{\text{pr}} + \left( {\frac{{\partial C_{3} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} P_{\text{pr}}^{2} + \left( {\frac{{\partial C_{4} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} P_{\text{pr}}^{3} + \left( {\frac{{\partial C_{5} }}{{\partial T_{\text{pr}} }}} \right)_{\text{p}} P_{\text{pr}}^{4} $$
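Putting Appendix 2 together, the following sketch (not from the original paper) transcribes the C1–C5 expressions and their derivatives to return both Z and (∂Z/∂T pr)p for a given pseudo-reduced temperature and pressure.

```python
import math

def bahrami_z_and_derivative(t_pr, p_pr):
    """Z and (dZ/dT_pr)_p from the correlation transcribed in Appendix 2
    (recommended for T_pr > 1.25)."""
    c1 = 0.96 + 0.008 * t_pr + 0.22 / t_pr ** 2
    c2 = 0.29 - 0.0635 * t_pr - 0.865 / t_pr ** 2
    c3 = (0.00032 + 0.2 * t_pr ** -5.58) / (0.45 + t_pr ** -5.57)
    c4 = (-0.025 + 0.00013 * t_pr ** 5.47) / (0.665 + t_pr ** 5.47)
    c5 = -0.0001 + 9e-5 / (1.0 - 6.466 * math.exp(-1.815 * t_pr))
    z = c1 + c2 * p_pr + c3 * p_pr ** 2 + c4 * p_pr ** 3 + c5 * p_pr ** 4

    dc1 = 0.008 - 0.44 / t_pr ** 3
    dc2 = -0.0635 + 1.73 / t_pr ** 3
    dc3 = -(0.5022 * t_pr ** -6.57 - 0.0017824 * t_pr ** -6.58
            + 0.002 * t_pr ** -12.15) / (0.45 + t_pr ** -5.57) ** 2
    dc4 = 0.137223 * t_pr ** 4.47 / (0.665 + t_pr ** 5.47) ** 2
    dc5 = (0.001056 * math.exp(-1.815 * t_pr)
           / (1.0 - 6.466 * math.exp(-1.815 * t_pr)) ** 2)
    dzdtpr = dc1 + dc2 * p_pr + dc3 * p_pr ** 2 + dc4 * p_pr ** 3 + dc5 * p_pr ** 4
    return z, dzdtpr
```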
Abou-Kassem JH, Dranchuk PM (1982) Isobaric heat capacities of natural gases at elevated pressures and temperatures. Society of Petroleum Engineers, SPE 10980-MS
Ahmed TH (1946) Reservoir engineering handbook, 3rd edn. Gulf Professional Pub, Boston
Bahadori A, Mokhatab S, Towler BF (2007) Rapidly estimating natural gas compressibility factor. J Nat Gas Chem 16(4):349–353
Bahrami H (2012) Effect of sand lens size and hydraulic fractures parameters on gas in place estimation using 'P/Z vs Gp method' in tight gas reservoirs. In: SPE/EAGE European unconventional resources conference and exhibition. Vienna, SPE
Baled H, Enick RM, Wu Y, McHugh MA, Burgess W, Tapriyal D, Morreale BD (2012) Prediction of hydrocarbon densities at extreme conditions using volume-translated SRK and PR equations of state fit to high temperature, high pressure PVT data. J Fluid Phase Equilib 317:65–76
Beggs DH, Brill JP (1973) A study of two-phase flow in inclined pipes. J Pet Technol 25:607–617
Budenholzer RA et al (1940) Phase equilibria in hydrocarbon systems Joule–Thomson coefficients for gaseous mixtures of methane and n-butane. Ind Eng Chem 32(3):384–387
Cengel BD, Boles MA (2008) Thermodynamics—an engineering approach, 6th edn. Tata McGraw Hill, New Delhi
Chueh PL, Prausnitz JM (1967) Vapor-liquid equilibria at high pressures: calculation of partial molar volumes in nonpolar liquid mixtures. AIChE 13(6):1099–1107
Danesh A (1998) PVT and phase behaviour of petroleum reservoir fluids. Elsevier, New York
Dranchuk PM, Abou-Kassem JH (1975) Calculation of Z factors for natural gases using equations of state. J Can Pet Technol 14(3):34–36
Hall KR, Yarborough L (1973) A new equation of state for Z-factor calculations. Oil Gas J 71(25):82–92
Heidaryan E, Salarabadi A, Moghadasi J (2010) A novel correlation approach for prediction of natural gas compressibility factor. J Nat Gas Chem 19(2):189–192
Jarrahian A, Heidaryan E (2014) A simple correlation to estimate natural gas thermal conductivity. J Nat Gas Sci Eng 18:446–450
Jeffry A (2009) Field cases: nonisothermal behavior due to Joule–Thomson and transient fluid expansion/compression effects. In: SPE annual technical conference and exhibition, New Orleans, Louisiana
Kareem LA et al (2014) Isobaric specific heat capacity of natural gas as a function of specific gravity, pressure and temperature. J Nat Gas Sci Eng 19:74–83
Perry RH, Green DW (1984) Perry's chemical engineers' handbook. McGraw-Hill, New York
Pinto M, Karale C, Das P (2013) A simple and reliable approach for estimation of Joule–Thomson coefficient of reservoir gas at bottomhole conditions, SPE-158116-MS. In: SPETT 2012 energy conference and exhibition, 11–13 June, Port-of-Spain, Trinidad
Reif F (1965) Fundamentals of statistical and thermal physics. McGraw-Hill, New York
Sage BH et al (1940) Phase equilibria in hydrocarbon systems methane–n-butane system in the gaseous and liquid regions. Ind Eng Chem 32(9):1262–1277
Steffensen RJ, Smith RC (1973) The importance of Joule–Thomson heating (or cooling) in temperature log interpretation. In: Fall meeting of the society of petroleum engineers of AIME, 1973. American Institute of Mining, Metallurgical, and Petroleum Engineers, Inc., Las Vegas, Nevada
Tarom N, Hossain MM (2015) A practical method for the evaluation of the Joule Thomson effects to predict flowing temperature profile in gas producing wells. J Nat Gas Sci Eng 26:1080–1090
Tarom N, Hossain M (2017) A practical numerical approach for the determination of flow contribution of multi-zones wellbores. Society of Petroleum Engineers, SPE 185505-MS
Tarom N, Jalali F, Al-Sayegh A, Moshfeghian M (2006) Numerical algorithms for determination of retrograde region of gas condensate reservoir. Pol J Chem 80(1):51–64
Ziabakhsh-Ganji Z, Kooi H (2014) Sensitivity of Joule–Thomson cooling to impure CO2 injection in depleted gas reservoirs. Appl Energy 113:434–451
1. Department of Petroleum Engineering, Curtin University, Bentley, Australia
Tarom, N., Hossain, M.M. & Rohi, A. J Petrol Explor Prod Technol (2018) 8: 1169. https://doi.org/10.1007/s13202-017-0398-z
On construction of a cloud storage system with heterogeneous software-defined storage technologies
Chao-Tung Yang1,
Shuo-Tsung Chen2,
Yu-Wei Chan (ORCID: orcid.org/0000-0003-1455-7806)3 &
Yu-Chuan Shen1
With the rapid development of networks and Information technologies, cloud computing is not only becoming popular, the types of cloud services available are also increasing. Through cloud services, users can upload their requirements via the Internet to the cloud environment and receive responses following post-processing, for example, with cloud storage services. Software-Defined Storage (SDS) is a virtualization technology for cloud storage services. SDS uses software to integrate storage resources and to improve the accessibility and usability of storage services. Currently, there are many different open source projects available for SDS development. This work aims to utilize these open source projects to improve the efficiency of integration for hardware and software resources. In other words, in this work, we propose a cloud storage system that integrates various open source SDS software to make cloud storage services more compatible and user friendly. The cloud service systems can also be managed in a more convenient and flexible manner. The experimental results demonstrate the benefits of the proposed system.
In the last decade, cloud computing has attracted more and more attention in both industry and academia [1,2,3,4,5,6,7,8]. It has deeply changed people's lives due to its inherent advantages, such as on-demand self-service, resource pooling and rapid resource elasticity. With the services provided by cloud computing, users can upload their requirements via the Internet to a cloud environment and receive responses following post-processing in the cloud environment. Among these services, the cloud storage service is one of the most important and indispensable [9,10,11,12]. Cloud storage makes data storage a service in which data is outsourced to a cloud server maintained by a cloud provider. With this service, data can be stored remotely in the cloud efficiently and safely. Thus, this service attracts many people, especially enterprises, because it brings appealing benefits, e.g., avoidance of capital expenditure on hardware and software and relief of the burden of storage management [13,14,15].
Nowadays, many large IT enterprises, such as Google, Microsoft, Amazon and Yahoo, provide this service. Although these services have many advantages, some issues have to be addressed before they can be widely used by governments and users. For instance, in cloud storage, the data owner does not physically possess the data after it is outsourced to cloud service providers, who are not fully trusted. Therefore, government and academic institutions often choose to build their own clouds. However, building cloud servers is very expensive due to the equipment cost and the corresponding maintenance cost. Thus, how to reduce the system construction cost and enhance the system's usability and accessibility is the main problem we address. In this work, we have implemented a cloud system in which various software-defined storage technologies, together with cubic spline interpolation and a distribution mechanism, are used to provide a more easy-to-use, efficient, reliable and user-friendly cloud storage system. The main contributions of this work are summarized as follows:
We implemented a cloud storage system that integrates various SDS technologies using cubic spline interpolation and a distribution mechanism. The proposed system consists of three main components: the storage service, the file distribution mechanism and the user service. In addition, since the user's file size cannot be predicted and the received files are not the same as our measured results, we solve this problem by integrating the cubic spline interpolation method.
In the system architecture, we used open source software to make the system more compatible. In addition, a file was assigned automatically to an appropriate storage location after users uploaded files.
We designed a user-friendly interface through which users can easily upload their files and see the storage usage percentages as well as the status of their upload jobs. Also, managers can freely set the parameters to make the system more flexible.
The rest of this paper is organized as follows. In the related work section, we introduce the literature review and related works. In the section on system design and implementation, we present the system architecture and the corresponding methods. The experimental results are shown in the section on experimental results. Finally, concluding remarks are given.
During the early development of cloud services, the exact meaning of software-defined service was inconclusive. The concept of "software-defined data center" was first proposed by VMware as software became more important. By employing the concept of virtualization in developing hardware resources as a resource pool, software could be employed to control the arrangement of hardware resources. When using programmable software to control the arrangement of hardware resources, there is no need to think about how to manipulate servers and security or allocate resources. In other words, all the resources function perfectly [16,17,18]. Cloud computing gave rise to more possibilities, enabling software-defined services to be different concepts in hardware and software architectures. These concepts have in turn enabled the creation of custom functions and the automation of operations. Accordingly, many research papers and commercial products related to software-defined storage have been proposed.
Yang et al. [19] proposed an integrated storage service. They used the open source software—OpenStack [20] to build and manage cloud services, and also used software to integrate storage resources, including Hadoop HDFS, Ceph and Swift on Open Stack to achieve an SDS design. Software users can integrate different storage devices to provide an integrated storage array and to build a virtual storage pool, such that the services provided for users are not limited by the storage devices. Our work primarily follows the concepts in [19], but we improve the system architecture and propose a mechanism to store data efficiently. In addition, we provide a new and more friendly user interface.
The EMC Virtualization Platform Reinvented (ViPR) [21] is a logical storage system, not a physical storage system. It can integrate EMC storage and third-party storage in a storage pool, and manage them as a single system while retaining the value of the original storage. ViPR can replicate data across different locations and data centers with different storage products, and provides a unified block store, object store, file system and other services. ViPR also provides a unified metadata service and self-service deployment, as well as measurement and monitoring services.
A file system architecture that efficiently organizes data and metadata and enables sharing in addition to exploiting the power of storage virtualization and maintaining simplicity in such a highly complex and virtualized environment was proposed by Ankur Agrrawal et al. [22]. Tahani Hussain assessed the performance of an existing enterprise network before and after deploying distributed storage systems [23]. Additionally, simulation of an enterprise network with 680 clients and 54 servers followed by redesigning the system led to improvements in the storage system throughput by 13.9%, a reduction in average response time by 24.4% and a reduction in packet loss rate by 38.3%.
Chengzhang et al. [24] proposed a solution for building a cloud storage service system based on the open-source distributed database. Dejun Wang [25] proposed an efficient cloud storage mode for heterogeneous cloud infrastructures, and validated the model with numerical examples through extensive testing. He also highlighted the differences in a cloud storage system using traditional storage. For example, the demand from the performance point of view, data security, reliability, efficiency and other indicators need to be taken into consideration for cloud storage services, which are services in a wide range of complex network environments designed to meet the demands of large-scale users.
System design and implementation
In this section, we introduce the system architecture and the implementation, which adopts open-source software for better development and maintenance in the future. The integrated heterogeneous storage technologies employed in the system are useful and complete object storage systems. In addition, a graphical user interface is provided so that an administrator can change the parameters to make the system more flexible.
The proposed system architecture, as shown in Fig. 1, is divided into three layers. The first layer is the hardware layer, which consists of many computer hardware and network devices. The second layer is the virtualization layer designed with OpenStack, with several components including the compute portion, the network portion and the storage portion. Through the virtualization technology provided by the OpenStack platform, the hardware resources, including the compute, network and storage resources can be fully utilized by the integrated virtual machines (VMs) to constitute our services, including the storage and control services. The storage service consists of many storage systems, including Swift [26], Ceph [27] and other storage systems. In addition, Nova Compute is a component within the OpenStack platform developed to provide on-demand, scalable and self-service access to compute resources, such as VMs, containers and bare metal servers. The architectures of the Swift and Ceph systems are presented in Figs. 2 and 3, respectively.
The system architecture
The Swift architecture
The Ceph architecture
Swift is a scalable redundant storage system, in which objects and files are written to multiple disks spread throughout servers in the data center. As shown in Fig. 2, the colored icons are the main components of the system, and are divided into four parts:
The cyan colored components are in charge of calculating hash in real time.
The pink colored components are in charge of indexing the hash of suffix and partition directories, receiving and sending requests to compare the hash of a partition or suffix and generating jobs replicating suffix directories to the replication queue.
The gray colored component, which is called the partition-monitor, is in charge of checking whether to move the partition at various intervals.
The green colored component, which is called the suffix-transporter, is in charge of monitoring the replication-queue and invoking rsync to sync the suffix directories.
On the other hand, the control service, which is built into the controller node, is responsible for managing the storage services, which are constructed using storage functions. Through the control service and the storage functions, the controller can control the storage devices and resources indirectly. In addition, the controller node has its own distribution mechanism. The mechanism can automatically assign files to the appropriate storage functions after users upload their files. The third layer of the system provides a graphical user interface via a web browser to present our system functions, such that users can easily access the proposed cloud system services. Figure 4 shows the design flow of our system based on the controller architecture.
The design flow of our system
The implementation of the proposed system consists of three main components, the storage service deployment, the file distribution mechanism and the user services. In the following subsections, each component will be introduced in detail.
The deployment of storage services
In the first part, we introduce the storage services. We create VMs that form a storage cluster. Then, we use the open source software OpenStack to build and manage the cloud system.
The mechanism of file distribution
In the second part, we introduce the mechanism of file distribution. We first use the Cloud Object Storage Benchmark (COSBench) [28] to measure the file transfer speed. COSBench is a benchmark tool for measuring the performance of cloud object storage services. The measured results of our testing are marked on the coordinate diagram, as shown in Fig. 5. In this work, since the size of a user's file cannot be predicted and incoming files will generally not match our measured sample points, we need a mechanism to interpolate between the measured results. Based on the promising features studied in the reference works [29,30,31], we therefore choose to use the cubic spline interpolation method to solve this problem.
The measurement results of the transfer speed of one file
Interpolation using cubic splines has been well studied in [29,30,31]. In [29], the basics of cubic spline interpolation were introduced. Miao et al. [30] employed the cubic spline method to predict the storage volume of a data center by interpolating the storage volume time series such that an entire time series with the same number of points as the former series can be reconstructed. In addition, Mastorakis [31] showed that the cubic spline method is well suited for the problem of anomaly detection in cloud environments. A cubic spline is a spline constructed of piecewise third-order polynomials that pass through a set of m control points. The second derivative of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of m-2 equations. This produces a so-called "natural" cubic spline and leads to a simple tridiagonal system that can be solved easily to give the coefficients of the polynomials.
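As an illustration only, the following minimal sketch shows how such a natural cubic spline could be fitted to a handful of measured (file size, transfer speed) points; the sample values are hypothetical placeholders rather than the measurements in Fig. 5, and the use of SciPy's CubicSpline is our assumption, not a statement about the system's actual implementation.

```python
# Minimal sketch: fit a natural cubic spline to measured transfer speeds.
# The data points below are hypothetical placeholders, not the values of Fig. 5.
import numpy as np
from scipy.interpolate import CubicSpline

file_size_mb = np.array([10, 50, 100, 200, 400, 800])            # measured file sizes (MB)
speed_mb_s = np.array([12.0, 18.0, 22.0, 25.0, 27.0, 35.0])      # measured speeds (MB/s)

# bc_type='natural' sets the second derivative to zero at both endpoints,
# which is exactly the "natural" boundary condition described above.
spline = CubicSpline(file_size_mb, speed_mb_s, bc_type="natural")

# Estimate the expected transfer speed for an arbitrary incoming file size.
print(float(spline(150.0)))  # interpolated speed for a 150 MB file
```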
By using the cubic spline, we obtain a new coordinate diagram and plot the interpolated curves for Swift and Ceph, as shown in Fig. 6. This can be used as the decision criterion when processing files. Certainly, this is not the only factor in our mechanism. We also consider the effect of the storage environment's capacity. Similar to the previous measurement method, we perform measurements for storage environments with different capacities.
The measurement results for all file transfer speeds obtained using the Cubic Spline method
In addition, we propose Eq. (1) to obtain the transfer speed of the storage service, which is used to determine which of the storage services is better.
$$\begin{aligned} f_K(S)=\alpha f_{t}(S)+\beta f_{c}(S). \end{aligned}$$
\(f_t(S)\) represents the transfer speed obtained in the transfer speed experiment when the file size is S.
\(f_c(S)\) represents the transfer speed obtained in the storage capacity experiment when the file size is S.
\(\alpha\) and \(\beta\) are the weights, with default values of 0.5. The sum of these two weights equals one.
\(f_K(S)\) represents the resulting transfer speed of the storage service, which is used to compare the performance of the storage services.
For example, we perform an experiment to determine the transfer speed for Swift and Ceph, and consequently obtain two functions, \(f_{ts}(S)\) and \(f_{tc}(S)\). Another experiment is performed to test the storage capacity of Swift and Ceph to obtain two functions, \(f_{cs}(S)\) and \(f_{cc}(S)\). The resulting functions \(f_{Swift}(S)\) and \(f_{Ceph}(S)\) are listed in Eqs. (2) and (3), respectively.
$$\begin{aligned} f_{Swift}(S) = \alpha f_{ts}(S)+\beta f_{cs}(S). \end{aligned}$$
$$\begin{aligned} f_{Ceph}(S) = \alpha f_{tc}(S)+\beta f_{cc}(S). \end{aligned}$$
After calculation, we obtain two values \(f_{Swift}(S)\) and \(f_{Ceph}(S)\). The following mechanism compares these two values to determine which storage technology is better. If these two values are equal, we add a condition that depends on storage usage. The mechanism will choose the system with lower usage.
$$\begin{aligned} f=\left\{ \begin{array}{ll} f_{Swift}(S), & \big(f_{Swift}(S)>f_{Ceph}(S)\big) ~\text{ or }~ \big(f_{Swift}(S)=f_{Ceph}(S) ~\&~ Usage_{Swift}<Usage_{Ceph}\big)\\ f_{Ceph}(S), & \big(f_{Swift}(S)<f_{Ceph}(S)\big) ~\text{ or }~ \big(f_{Swift}(S)=f_{Ceph}(S) ~\&~ Usage_{Swift}>Usage_{Ceph}\big) \end{array} \right. \end{aligned}$$
Our mechanism is scalable: we can add any condition that may affect the transfer speed. For example, as shown in Eq. (4), the function \(f_{n}(S)\) represents another factor affecting time consumption, with a weight of \(\gamma\); the sum of the three weights \(\alpha\), \(\beta\), and \(\gamma\) must be one. A minimal sketch of the resulting distribution rule is given after Eq. (4).
$$\begin{aligned} f_K(S)=\alpha f_{t}(S)+\beta f_{c}(S)+\gamma f_{n}(S). \end{aligned}$$
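The distribution rule defined by Eqs. (1)–(4) can be sketched in a few lines of code, as shown below; the function and parameter names are ours and do not correspond to identifiers in the actual implementation, and the per-backend speed functions are assumed to be callables such as the fitted splines sketched earlier.

```python
# Sketch of the file-distribution rule from Eqs. (1)-(4).
# f_t and f_c are callables returning the interpolated transfer speed and the
# capacity-experiment speed for a given file size S (e.g. fitted splines).

def score(S, f_t, f_c, f_n=None, alpha=0.5, beta=0.5, gamma=0.0):
    """Weighted speed score f_K(S); the weights are expected to sum to one."""
    total = alpha * f_t(S) + beta * f_c(S)
    if f_n is not None:
        total += gamma * f_n(S)  # optional extra criterion, as in Eq. (4)
    return total

def choose_backend(S, swift, ceph, usage_swift, usage_ceph):
    """Return 'swift' or 'ceph' for a file of size S.

    swift and ceph are dicts holding each backend's speed functions, e.g.
    {'f_t': swift_speed_spline, 'f_c': swift_capacity_spline}. Ties on the
    score are broken in favour of the less-used system.
    """
    f_swift = score(S, **swift)
    f_ceph = score(S, **ceph)
    if f_swift > f_ceph:
        return "swift"
    if f_swift < f_ceph:
        return "ceph"
    return "swift" if usage_swift < usage_ceph else "ceph"
```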
The experimental results
In this section, we show the experimental results and the system implementation performance. We first perform efficacy experiments to demonstrate the benefits of our system infrastructure. Next, we measure the speed of each storage object. This measurement is the basis of the file distribution mechanism. Finally, we show the user interface for our system.
Setup of the experimental environment
In the setup for the experimental environment, we use OpenStack to build our cloud platform, which is then used to create and manage the distributed storage system. In the system, we adopt two heterogeneous storage technologies, namely Ceph and Swift. We use Ceph to build a storage system that consists of four VMs with dual-core CPUs, 4 GB of memory and a total of 160 GB of storage space. The VM named ceph01 acts as both MON and OSD, and the others act as OSDs. These VMs form a Ceph cluster. On the other hand, we use Swift to build a storage system consisting of four VMs, which include one proxy server and four storage nodes, with the same specifications of dual-core CPUs, 4 GB of memory, and a total of 160 GB of storage space. Tables 1, 2, and 3 present the specifications for the hardware, storage environment, and software, respectively.
Table 1 Hardware specifications
Table 2 Storage environment specifications
Table 3 Software specifications
Performance evaluations of our system
To evaluate the performance of our system, two metrics are used, namely network throughput and disk writing speed. In this experiment, we first install four VMs as the experimental nodes in the OpenStack environment. The four VMs are called swift01, swift02, swift03 and swift04. Since network throughput is a key factor for measuring cluster performance, we use a client-server connection to measure the TCP and UDP bandwidths. The results are illustrated in Fig. 7. In the resulting histogram, the horizontal axis represents the number of tests and the vertical axis represents the transmission bandwidth. As depicted in Fig. 7, the VMs are divided into group A and group B. Group A contains the swift01 and swift03 VMs, while group B consists of the swift02 and swift04 VMs. The experimental results show that the bandwidth for group A is almost 7000 Mbits/s, while the bandwidth for group B is only about 900 Mbits/s. The large difference in the achieved bandwidth between the two groups arises because they are deployed on different physical machines. The VMs in group A are hosted on the compute01 machine, while those in group B are hosted on the compute02 machine. The results indicate that when the VMs communicate between the two physical machines, they communicate through the physical network. In contrast, when the VMs communicate with each other on the same physical node, they communicate through the virtual network.
The comparison results of network throughput for all virtual machines
Next, we discuss the comparison results with respect to the disk writing speed, which is a key factor for system performance. In this experiment, we use the Linux command dd, which is mainly employed to convert and copy files, to measure the disk writing speed; a rough sketch of such a test is given below, after the figure captions. The results are illustrated in Figs. 8 and 9.
The comparison results of disk writing speed for all virtual machines
The comparison results of disk reading speed for all virtual machines
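As a rough illustration, a dd-based sequential write test could be scripted as follows; the output path, block size, and block count are arbitrary choices for the sketch and are not the parameters used in the reported experiments.

```python
# Rough sketch of a dd-based sequential write test (parameters are illustrative).
import subprocess

def dd_write_test(path="/tmp/dd_testfile", bs="1M", count=1024):
    """Write `count` blocks of size `bs` and return dd's throughput summary.

    conv=fdatasync forces the data to be flushed to disk before dd prints its
    statistics, so the reported figure reflects disk speed rather than the
    page cache.
    """
    cmd = ["dd", "if=/dev/zero", f"of={path}", f"bs={bs}",
           f"count={count}", "conv=fdatasync"]
    # dd prints its throughput summary on stderr.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stderr.strip()

if __name__ == "__main__":
    print(dd_write_test())
```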
According to the previous results from measuring the network bandwidth, VMs deployed on the same host achieve almost the same bandwidth. Thus, we select swift01, swift02, OpenStack compute01 and OpenStack compute02 to compare their disk writing and reading speeds. The results show that the VMs cannot take full advantage of the reading and writing resources and therefore require the deployment of the storage system. These I/O tests can also be used to identify and resolve bottlenecks when problems are encountered. In addition, the experimental results for disk reading and writing speed help us decide on the number of VMs deployed on each physical machine and understand how best to deploy the storage cluster.
Figure 10 shows the comparison results for the upload speed in the Ceph and Swift storage clusters. In the figure, the blue hollow circles represent the upload measurements in the Swift storage cluster, while the red hollow circles represent the corresponding values in the Ceph storage cluster. In addition, we apply cubic spline interpolation to obtain continuous curves for the Ceph and Swift clusters. From the figure, we see that the upload speed in the Swift cluster stabilizes at about 20-30 MB/s, with a significant increase when the file size is larger than 800 MB. In contrast, in the Ceph cluster, the upload speed is roughly 15 MB/s. These two curves intersect once, when the file size is about 50 MB. Thus, the upload speed for Ceph is faster than that of Swift when the file size is less than 50 MB and slower when the file size is larger than 50 MB.
The comparison results of uploading speed for all virtual machines
Figure 11 shows the experimental results with respect to the download speed in the Ceph and Swift storage clusters. The results show that the download speed for the Ceph cluster is faster than that of the Swift cluster.
The comparison results of downloading speed for all virtual machines
In this subsection, we will introduce the design of the user interface in our system. An overview of the website map is shown in Fig. 12. The user interface in our system mainly consists of three parts: the system overview page (as shown in Fig. 13), the my storage page (as shown in Fig. 14) and the account page (as shown in Fig. 16). In the system overview page, the user's status is summarized, and users can review their storage usage and account information. The my storage page is the main part of the user interface in the system. It consists of basic operations, such as upload, download, remove and modify operations. The account page shows the user information. Users can modify their personal information via this page.
Overview of website map of our system
Overview of system pages of the storage usage percentage and the account list
The my storage page
The file uploading page in our system
The account page in our system
As shown in Fig. 13, there are two panels in the system overview page. The two panels are used to show the storage usage percentages and the account list. We use three small liquid fill gauges to display the percentages for the total usage, the Swift usage and the Ceph usage. More detailed information is shown when the mouse moves over the liquid fill gauge, as shown in Fig. 13. In addition, there is a table that shows information for all the accounts when the user logs into the administrator mode.
The my storage page is the major operating part of our system. When the page is loaded, a file list is shown in the middle of the page, and a drop-down menu pops up when the user right-clicks a file name, as shown in Fig. 14. The drop-down menu has four functions: download, delete, rename and detailed information. All functions related to the storage operations are accessible from this page.
We use AJAX, jQuery and the Bootstrap framework to implement the uploading process. The web page pops up a window when the user clicks the upload button, as shown in Fig. 15. The figure shows four files in the list: one file is ready to upload, two are uploading and the last is being processed. The upload function allows multiple files to be uploaded at the same time. Each file has its own upload progress bar, and the total upload progress bar is shown near the top of the page. The total progress bar shows detailed upload information, including the transfer speed, the remaining time and the completed percentage. The upload function has the following advantages:
Friendly user interface: a visualization of the upload progress is provided. This makes it easy for users to monitor and control their uploading jobs.
Supports the upload of multiple files: users can upload multiple files at the same time.
Background processing: users can upload their files in the background while accessing other functions simultaneously in the my storage page.
The last part is the accounting page, as shown in Fig. 16. The accounting page has two main functions, viewing and editing. Through these functions, detailed accounting information can be viewed and edited. The design of all the pages in the system follows the concept of responsive web design (RWD): regardless of the device used, the Bootstrap framework displays the appropriate web layout according to the screen size.
In this work, we implemented a cloud storage system that integrates open-source storage software to provide a software-defined storage service. In the system, we used a distributed cloud architecture to provide highly reliable and scalable cloud services that integrate several software storage technologies. In addition, we provided a user interface with high usability to make the proposed system more user friendly. In the future, we plan to build a larger system with more VMs and to integrate more heterogeneous storage technologies.
Zhou Z, Ota K, Dong M, Xu C (2017) Energy-efficient matching for resource allocation in d2d enabled cellular networks. IEEE Trans Vehicul Technol 66(6):5256–5268
Xu C, Gao C, Zhou Z, Chang Z, Jia Y (2017) Social network-based content delivery in device-to-device underlay cellular networks using matching theory. IEEE Access 5:924–937
Mo Y, Peng M, Xiang H, Sun Y, Ji X (2017) Resource allocation in cloud radio access networks with device-to-device communications. IEEE Access 5:1250–1262
Foster I, Zhao Y, Raicu I, Lu S (2008) Cloud computing and grid computing 360-degree compared. In: Proceedings of the 2008 grid computing environments workshop: 2008; Austin, USA, pp 1–10
Nurmi D, Wolski R, Grzegorczyk C, Obertelli G, Soman S, Youseff L, Zagorodnov D (2009) The eucalyptus open-source cloud-computing system. In: Proceedings of the 2009 9th IEEE/ACM international symposium on cluster computing and the grid: 2009; Shanghai, China, pp 124–131
Satyanarayanan M, Bahl P, Caceres R, Davies N (2009) The case for vm-based cloudlets in mobile computing. IEEE Pervasive Comput 8:14–23
Buyya R, Yeo CS, Venugopal S (2008) Market-oriented cloud computing: Vision, hype, and reality for delivering it services as computing utilities. In: Proceedings of the 10th IEEE international conference on high performance computing and communications: 2008; Dalian, China, pp 5–13
Kim H-W, Jeong Y-S (2018) Secure authentication-management human-centric scheme for trusting personal resource information on mobile cloud computing with blockchain. Human-centric Comput Inform Sci 8(1):11
Vernik G, Shulman-Peleg A, Dippl S, Formisano C, Jaeger MC, Kolodner EK, Villari M (2013) Data on-boarding in federated storage clouds. In: Proceedings of the 2013 IEEE sixth international conference on cloud computing: 2013; Santa Clara, USA, pp 244–251
Kolodner EK, Tal S, Kyriazis D, Naor D, Allalouf M, Bonelli L, Brand P, Eckert A, Elmroth E, Gogouvitis SV, Harnik D, Hernandez F, Jaeger MC, Lakew EB, Lopez JM, Lorenz M, Messina A, Shulman-Peleg A, Talyansky R, Voulodimos A, Wolfsthal Y (2011) A cloud environment for data-intensive storage services. In: Proceedings of the 2011 IEEE third international conference on cloud computing technology and science: 29 Nov.-1 Dec. 2011; Athens, Greece, pp 357–366
Rhea S, Wells C, Eaton P, Geels D, Zhao B, Weatherspoon H, Kubiatowicz J (2001) Maintenance-free global data storage. IEEE Internet Comput 5:40–49
Mesnier M, Ganger GR, Riedel E (2003) Object-based storage. IEEE Commun Mag 41:84–90
Mesbahi MR, Rahmani AM, Hosseinzadeh M (2018) Reliability and high availability in cloud computing environments: a reference roadmap. Human-centric Comput Inform Sci 8(1):20
Zhang Y, Xu C, Liang X, Li H, Mu Y, Zhang X (2017) Efficient public verification of data integrity for cloud storage systems from indistinguishability obfuscation. IEEE Trans Inform Forensic Sec 12(3):676–688
Ren Z, Wang L, Wang Q, Xu M (2018) Dynamic proofs of retrievability for coded cloud storage systems. IEEE Trans Serv Comput 11(4):685–698
Li Y, Feng D, Shi Z (2013) An effective cache algorithm for heterogeneous storage systems. Sci World J 2013:693845
Lin W, Wu W, Wang JZ (2016) A heuristic task scheduling algorithm for heterogeneous virtual clusters. Sci Program 2016:7040276
Callegati F, Cerroni W, Contoli C (2016) Virtual networking performance in openstack platform for network function virtualization. J Elec Comput Eng 2016:266–267
Yang C-T, Lien W-H, Shen Y-C, Leu F-Y (2015) Implementation of a software-defined storage service with heterogeneous storage technologies. In: Proceedings of the 2015 IEEE 29th international conference on advanced information networking and applications workshops (WAINA): 24-27 March 2015, pp 102–107
OpenStack. https://www.openstack.org/ (2015)
EMC ViPR. http://www.emc.com/vipr (2015)
Agrrawa A, Shankar R, Akarsh S, Madan P (2012) File system aware storage virtualization management. In: Proceedings of the 2012 IEEE international conference on cloud computing in emerging markets (CCEM): 11-12 Oct. 2012; Bangalore, India, pp 1–11
Hussain T, Marimuthu PN, Habib SJ (2013) Managing distributed storage system through network redesign. In: Proceedings of the 2013 15th Asia-Pacific network operations and management symposium (APNOMS): 25-27 Sept. 2013; Hiroshima, Japan, pp 1–6
Peng C, Jiang Z (2011) Building a cloud storage service system. Procedia Environ Sci 10:691–696
Wang D (2011) An efficient cloud storage model for heterogeneous cloud infrastructures. Procedia Eng 23:510–515
OpenStack Swift. https://wiki.openstack.org/wiki/Swift (2015)
Weil SA, Brandt SA, Miller EL, Long DD, Maltzahn C (2006) Ceph: A scalable, high-performance distributed file system. In: Proceedings of the 7th symposium on operating systems design and implementation: 6-8 November 2006; Seattle, USA, pp 307–320
Zheng Q, Chen H, Wang Y, Zhang J, Duan J (2013) Cosbench: Cloud object storage benchmark. In: Proceedings of the 4th ACM/SPEC international conference on performance engineering (ICPE 2013): 21-24 April 2013; Prague, Czech Republic, pp 199–210
Knott GD (2012) Interpolating Cubic Splines. Springer, Berlin
Miao B, Dou C, Jin X (2016) Main trend extraction based on irregular sampling estimation and its application in storage volume of internet data center. Comput Intell Neurosci 2016:1–12
Mastorakis G (2015) Resource management of mobile cloud computing networks and environments. IGI Global, Hershey
C-TY conceptualized the study and proposed the system design. S-TC implemented the system and wrote the manuscript. Y-WC wrote and revised the manuscript. Y-CS performed the experiments. All authors read and approved the final manuscript.
This work was supported in part by the Ministry of Science and Technology, Taiwan ROC, under Grant Numbers 106-2622-E-029-002-CC3, 107-2221-E-029-008, and 107-2218-E-029-003.
Department of Computer Science, Tunghai University, No. 1727, Sec. 4, Taiwan Boulevard, Xitun District, 40704, Taichung, Taiwan
Chao-Tung Yang & Yu-Chuan Shen
College of Future, Bachelor Program in Interdisciplinary Studies, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin, 64002, Taiwan
Shuo-Tsung Chen
College of Computing and Informatics, Providence University, 200, Sec. 7, Taiwan Boulevard, Shalu Dist., Taichung, 43301, Taiwan
Yu-Wei Chan
Correspondence to Yu-Wei Chan.
Yang, CT., Chen, ST., Chan, YW. et al. On construction of a cloud storage system with heterogeneous software-defined storage technologies. Hum. Cent. Comput. Inf. Sci. 9, 12 (2019). https://doi.org/10.1186/s13673-019-0173-x
An analysis of economic incentives to encourage organ donation: evidence from Chile
Marcela Parada-Contzen (ORCID: orcid.org/0000-0002-5649-7592) & Felipe Vásquez-Lavín
We perform a cost–benefit analysis of the introduction of monetary incentives for living kidney donations by estimating the compensation that would make an individual indifferent between donating and not donating a kidney while alive, using Chilean data. We find that monetary incentives of US$12,000 save the health care system US$38,000 per donor, and up to US$169,871 when the gains in quality of life from receiving an organ are considered. As the incentives are allowed to vary with the individual's position in the wage distribution, the compensation ranges from US$4214 to US$83,953. Importantly, introducing payments to living donors payable by a third party helps patients who currently may not have access to necessary medical treatment. Therefore, exclusion from access to organs due to monetary constraints can be prevented.
Advances in medical technology have made organ transplants one of the best health treatment alternatives for several diseases, generating a significant increase in organ demand. However, the supply of organs, both from living or postmortem donors, has not increased at the same rate (Howard 2007). Policy makers have suggested different strategies for increasing organ donation, including the introduction of financial and monetary incentives (Stoler et al. 2017).
The scarcity of organs for transplantation is a worldwide problem. In the United States, 10,000 people die every year while waiting for an organ, and the median waiting time ranges from 2 to 6 years (Beard et al. 2013; Roth et al. 2005). In Western Europe, approximately 40,000 patients are waiting for an organ (Mossialos et al. 2008). Developed countries tend to have higher cadaveric donation rates than developing countries, while the reverse is true for living donation (International Registry in Organ Donation and Transplantation 2017).
Based on the discussion on incentivizing organ donation using financial mechanisms, this paper performs a cost–benefit analysis on the introduction of monetary incentives for living kidney donations in a developing country. We consider the case for Chile, where, according to the National Transplant Corporation, 2000 patients are enrolled on a waiting list, while the average number of donors per year is 125. As data for non-market economic valuation tend to be scarce in Latin America, our results provide information for policymakers about the economic benefits of implementing compensation for kidney donations in developing countries.Footnote 1
For the U.S., there is mixed evidence concerning the effect that the introduction of monetary incentives would have on organ donation rates (Bilgel and Gelle 2015; Venkataramani et al. 2012; Wellington and Sayre 2011; Schnier et al. 2018). Outside the U.S., some countries have implemented policies in this regard. For instance, Israel implemented a reform in 2012 that allows compensation to living donors (Lavee et al. 2013).
Preliminary policy evaluations for the Israeli case indicate that there was a positive response in donation rates (Lavee et al. 2013). This policy includes the following: earnings loss reimbursement before the donation and during recovery; transportation refunds for the donor and relatives during and after the donation; and a 5-year reimbursement of medical expenses, work capability loss, life insurance, and psychological consultation and treatment. Together with that policy, Israel implemented a priority condition on organ waiting lists for individuals who are registered as donors, a law that has also been implemented in Singapore, China, and Chile (Stoler et al. 2017). Among developing countries with high donation rates, Iran is also an interesting case since it is the only country where open payments to living donors are allowed (Bilgel 2013).
This paper considers the model developed by Becker and Elias (2007), where the donor's compensation includes reimbursement for the donation procedure as well as compensation for loss of earnings and increased risks of death and injury. Based on this, we estimate the compensation that would make an individual indifferent between donating and not donating a kidney while alive, relying on estimates of the value of statistical life (VSL) and injury (VSI) found in the literature. We use estimates provided by the literature from both revealed preferences (RP, hedonic wage method) and stated preference approaches (SP, choice experiments). We then compare this amount with the benefits that an additional donor provides to the health system through costs avoided. We also evaluate whether or not the estimated compensation based on Becker's analysis is sufficient to induce donation.
The contributions of this paper are the following. First, we extend the policy evaluation of introducing compensation to living donors using a cost–benefit analysis rather than a cost-effectiveness analysis. Although this has been recently done for the U.S., evidence for other countries is very rare. Second, we evaluate the compensation across the wage distribution and, therefore, consider the possibility that participation outcomes could be unequal, as payments would induce poor people to participate while the wealthiest segments would exclude themselves. Third, we evaluate whether the payments (based on Becker's analysis) are sufficient to induce donations. This problem is an important issue that has not been empirically studied in detail.
The estimated compensations range between US$4214 and US$83,953. The results indicate that a compensation computed at the 95th percentile of the distribution would still generate savings to the health system, even when using conservative values for the cost–benefit analysis. The efficiency gain allows for the introduction of participation premiums to the poor segments to avoid unequal payments. This last result is relevant as it might help to alleviate ethical concerns on participation outcomes.
This paper also estimates the value of a kidney for a donor using the prevalence of chronic kidney disease in the Chilean population and provides evidence that individuals' expected costs, in terms of lifespan and quality of life, of having a kidney disease range between US$1085 and US$110,875. Consequently, we consider these values to represent the lower bound of individuals willing to pay for a healthy organ so that they can avoid having a kidney disease in the future. The compensation to donors should at least be commensurate with their own willingness to pay for an organ. The estimated kidney values are higher than the estimated compensation but still affordable given the savings that an additional donor provides to the health system. We propose a premium that is payable over the 90th percentile of the wage distribution.
A kidney can come from a living donor, and it is the most frequently transplanted organ from living donation. Consequently, there is great controversy regarding the best mechanism, if any, to encourage donations from living people. Answering this question inevitably invokes the ethical and philosophical issues involved in organ donation. Economists have analyzed the problem from a "market perspective" in which the scarcity of the good is modeled as a market failure (Beard et al. 2013). Researchers have evaluated the impact that economic instruments could have on supply and have proposed economic incentives to encourage donation (Becker and Elias 2007).
Diesel (2010) shows that out of 72 economists who have studied organ donations, 68% are in favor of the "liberalization of the market," that is, the introduction of economic incentives to increase organ supply. Nevertheless, the specific policy design of this incentive varies among researchers. Some propose cash transfers to donors or beneficiaries, while others propose non-monetary benefits, such as memorial services or access to medical treatments (Gaston et al. 2006; Howard 2007; Diesel 2010). Supporters of market mechanisms argue that the relevant issue is that incentives could reduce the gap between supply and demand and prevent a significant number of avoidable deaths (Beard et al. 2013). However, more importantly, there is a consensus among researchers that instead of a simple "market for organs", the health system should introduce a regulated compensation system to living donors, in which a third party defines the amount of compensation (Barnett et al. 2001; Matas 2004; Becker and Elias 2007). Of course, there is also controversy surrounding this argument; Brooks (2003) claims that a government payment for an organ may be highly inefficient. Regardless, there is evidence that this compensation would be worth the investment given the savings that it could generate (Barnett et al. 1999).
It may be seen as unethical to commoditize organs because it could generate perverse incentives, inequalities, and reductions of altruistic donations (Liverman and Childress 2006; Beard et al. 2013). Furthermore, the examination of this problem exclusively from an economics perspective might not be advisable for making decisions in any modern society (Abadie and Gay 2006). Nevertheless, we think it is important to contribute to the debate from all possible angles to provide information for policymakers; if a compensation policy passes the cost–benefit analysis, then the final design should require a combination of economic incentives along with regulations and mechanisms that assure that policies are in harmony with society's moral views.
Some authors claim that it is impossible to estimate the real impact of such a policy until payments are allowed (Barnett et al. 2001; Becker and Elias 2007). Since payments have not been incorporated, most of the attempts to measure the efficiency of this policy have used a cost-effectiveness analysis. Evans and Kitzmann (1998) compared dialysis to a kidney transplant and concluded that the kidney transplant is the best alternative in terms of quality of life and long-term costs. Matas and Schnitzler (2004) claim that because the government of the United States or private agencies already pay for the long-term treatment of dialysis, it is feasible to pay a kidney donor and, therefore, save the cost of dialysis, an amount estimated between US$90,000 and US$270,000. Becker and Elias (2007) propose a compensation method and provide estimates in the range of US$7600 to US$27,700 for a kidney per living donor in the United States, which represents a significant efficiency gain [US$122,700 per donor based on Held and Port (2003)]. For developing countries, Harrison et al. (2010) estimate savings in the range of US$50,616 to US$182,218 for the Chilean health system per additional donor.
The value of statistical life and safety
We rely on estimates of the value of a statistical life (VSL) and the value of a statistical injury (VSI) to perform the cost–benefit analysis. The VSL is defined as the willingness to pay to reduce fatal risk, while the VSI is the willingness to pay to reduce a non-fatal risk (or injury risk). In this setting, the hedonic wages method (HWM) is the most common approach used to estimate the VSL and VSI (Viscusi 1993; Viscusi and Aldy 2003). It recovers the VSL and VSI after estimating the wage-risk trade-off, relying on the idea that wage differences for diverse jobs reflect differences in the level of risk of death and injury faced by workers (Viscusi and Aldy 2003).
Estimates for Chile are in the range of US$5 to US$13.7 million for the VSL and approximately 33,000 dollars for the VSI (Parada-Contzen et al. 2013). These are the only VSL and VSI estimates using revealed preferences for Chile. To estimate the VSL and VSI, they use a cross-section of the National Socio-Economic Survey and statistics from the Chilean Safety Association for the year 2006. They estimate a hedonic log-wage equation while considering correction for selection into the labor market and endogeneity bias arising due to simultaneous determination of observed wages and risks. While they reduce potential sources of estimation bias, it is important to note that the results are only representative of Chilean workers as they rely on labor market data. In the estimation, they incorporate both fatal and non-fatal risk as control variables in the wage equation, together with other individual and job characteristics.Footnote 2 Final estimates are obtained after implementing a Heckman correction for labor market selection and an instrumental variable approach that accounts for endogeneity between risks and wages. While they suggest that future work should consider further disaggregation of the risk data, there is no new data available for Chile for addressing this limitation. Recent studies, provide estimates between 0.61 and 8.68 million dollars for the VSL but do not consider VSI estimations (Parada-Contzen 2019).
Due to the scarce availability of estimates for the VSL in several developing countries, some researchers extrapolate the VSL estimates using North American, European, and Asian VSL estimates, adjusting by per capita GDP. In the case of Chile, indirect estimates are approximately US$0.64 to US$0.96 million (Miller 2000; Bowland and Beghin 2001; De Blaeij et al. 2003; Bellavance et al. 2009; Hammitt and Robinson 2011).
Another approach that has been broadly used to estimate the VSL is the stated preference (SP) method (Ortuzar and Cifuentes 2000; Rizzi 2003; Iraguen and Ortuzar 2004; Hojman et al. 2005). Here, respondents face hypothetical scenarios in which they can express their preferences for different states of nature which differ in their implicit risk of death. The VSL can be obtained using optimal design approaches to select the levels of the attributes faced by respondents. Estimates for Chile using SP provide values from US$0.28 million (Rizzi 2003) to US$5.2 million (GreenLabUC 2014), with several values in between (Ortuzar and Cifuentes 2000; Rizzi 2003; Iraguen and Ortuzar 2004). SP reports values that came from different risk causes, such as road safety and air pollution. All estimates available using SP for Chile are lower than the values provided by Parada-Contzen et al. (2013); therefore, using the latter value provides a lower bound of the net benefits associated with allowing kidney transactions.
Becker and Elias' compensation framework
Becker and Elias (2007)'s model for compensation for donors relies on the idea that donors should be compensated for additional costs, such as losses in income and increasing mortality and injury risks. Examples of monetary costs could be the cost of surgery and the forgone income while donors are engaged in donation procedures. A living kidney donor faces higher mortality risks during the procedure and risk of reducing his or her quality of life as a consequence of the donation. Specifically, the resulting loss in quality of life could cause challenges such as health conditions, certain types of job restrictions, or restricted recreational activities. As a result, the individual's reservation price has three components: monetary compensation for an increase in the risk of death, monetary compensation for the reduction in quality of life and monetary compensation for the time allocated to surgery and recovery.
Conceptually, the reservation price captures the minimum amount that an individual should receive to compensate for the economic costs of participating in an organ donation process. Therefore, any incentives program should at least consider this amount. Economically, the reservation price is the money amount that leaves the individual indifferent between the utility from not donating an organ and the expected utility of being a living donor, considering its mortality and injury risks. This paper relies on Becker and Elias (2007)'s model for the estimation of this amount. For these purposes, we estimate the three components they describe using data from Chile.
The first component is computed by weighting the VSL, which corresponds to the amount of money that an average person requires to accept a marginal increase in the probability of death, by the fatal risk of the donation procedure. Becker and Elias (2007) calculate the second component using an arbitrary value and perform a sensitivity analysis to see how the results change as different quality of life values are used. For the second component, we instead weight the VSI, the amount of money that an average person requires to accept a marginal increase in the probability of injury, by the risk of a post-surgery complication. We additionally evaluate different quality of life components to see how the results change. Finally, the forgone income due to the time spent in surgery and recovery is evaluated using labor market data; here we follow the same approach.
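To make the structure of the reservation price concrete, the following minimal sketch codes the three components; all inputs are supplied by the caller, and the values used in the example call are rounded, assumed figures for illustration rather than the exact estimates used in the paper.

```python
# Minimal sketch of the Becker-Elias reservation price for a living kidney donor.

def reservation_price(vsl, vsi, monthly_wage, p_death, p_injury, recovery_months):
    """Sum of the three components described in the text."""
    death_component = vsl * p_death                    # higher probability of death
    quality_component = vsi * p_injury                 # reduction in quality of life
    income_component = monthly_wage * recovery_months  # forgone earnings in recovery
    return death_component + quality_component + income_component

# Illustrative call with rounded, assumed inputs (not the paper's exact figures):
price = reservation_price(vsl=13_700_000, vsi=33_000, monthly_wage=625,
                          p_death=0.00045, p_injury=0.13, recovery_months=3)
print(round(price))
```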
Note that we also compute the compensations across the wage distribution. For these purposes, we consider the specific VSL and VSI corresponding to the individual's position in the distribution. For this computation, the average wage-risk trade-offs are combined with the specific wage per decile to obtain the VSL and VSI, and thus the first and second components of Becker and Elias (2007)'s model. For the third component, we compute the forgone income using the percentile wages.
In this paper, we also consider the option value of an organ. Unlike the reservation price, the option value relates to the individual's willingness to accept payments. For this calculation, we consider the expected cost of suffering kidney disease. The economic reasoning behind this concept follows the idea that if an individual can pay to avoid the loss of any good, she should be compensated with at least that willingness to pay to induce her to give that good away. Empirically, the reservation price and the option value of an organ may differ. In this case, the option value would represent the amount of money required to incentivize participation in an organ donation program.
We estimate the reservation price of a kidney using the VSL and VSI estimates proposed by Parada-Contzen et al. (2013). To examine the distribution of payments, we replicate their method and sample and estimate payments across the wage distribution. In particular, we use the same cross-section (2006) of the National Socio-Economic Survey and statistics from the Chilean Safety Association. We construct the same estimation sample as the one described in Parada-Contzen et al. (2013) and reestimate the hedonic log-wage equation using the same observed characteristics. As in the benchmark paper, we correct for both selection and endogeneity bias using the methodology suggested by Parada-Contzen et al. (2013). To compute the distribution of values for the VSL and VSI, we use the estimated wage-risk trade-offs for the entire sample and the specific observed wage per decile. With this computation, we obtain an average VSL and VSI per decile of the observed wage distribution.
Data regarding the fatal risk, non-fatal risk and recovery time associated with kidney extractions were obtained from the Catalan Transplant Foundation since there are no validated data available for Chile or other Latin American countries with similar characteristics. Nevertheless, Spain has a very advanced and sophisticated donation system and, therefore, has reliable data. Based on these data, the fatal risk of an organ extraction surgery is 0.045%, and the risk of complications derived from the extraction is 13%. The average recovery time is 3 months.
For measuring benefits, we rely on the work of Harrison et al. (2010). They model the benefits and costs of alternative treatments for kidney diseases (i.e., dialysis and transplant) and compute the system's present value of savings associated with a transplant since dialysis costs are avoided. These costs include items such as initial surgery, pre-transplantation, and follow-up studies, and immunological therapy. For the dialysis treatment, there are specific costs associated with blood treatments, such as hemodialysis and peritoneal dialysis. That study reports that each new donor would generate savings of US$50,616 for the Chilean health system. Furthermore, when adding the benefits of improving the recipient's length and quality of life by weighting the time spent by the patient in different health statuses and correcting by life years, the savings per donor increases to US$182,218. Throughout the paper, we refer to the first value as savings in costs and to the second value as savings in costs plus benefits in quality of life (QALY). The benefits in QALY computation follow the standard methodology in the literature (Harrison et al. 2010). Data on procurement costs are obtained from Dominguez et al. (2011).
Lastly, data for chronic kidney disease prevalence rates are obtained from the Chilean population. Estimates from the first wave of the Chilean National Health Survey in 2003 (in Spanish, Encuesta Nacional de Salud) report that 10–14% of the adult population has chronic kidney disease in 1 of its 5 levels (for details, see Alvo (2009) and Flores (2010)). Specifically, 5.7% of adults are in level 3, 0.2% are in level 4, and 0.1% are in level 5. The remaining adults with chronic kidney disease are in levels 1 and 2. Level 5 is the most severe category. Note that patients in category 5 are in dialysis treatment if no kidney is available for transplantation, while patients in level 4 are candidates for dialysis treatment or a kidney transplant.
Compensation to living donors
The estimated compensation is presented in Table 1. Column 1 presents the results using the hedonic estimation for VSL and VSI, while columns 2, 3, and 4 evaluate different quality of life components. For the estimated VSL and VSI, the compensation to accept a higher probability of death is US$6179 (VSL × risk of death), the compensation for a reduction in the quality of life is US$4292 (VSI × risk of injury \(= 33,016 \times 13\%\)), and the compensation for recovery time is US$1876.
Table 1 Components for a kidney compensation for transplantation (US$ of 2013)
To compute the total "price" or cost, we input a cost of US$3426 for procurement costs (Dominguez et al. 2011). But, note that the procurement cost does not enter the individual reservation prices. In particular, procurement costs enter the compensation depending on the legal framework for living donations. Since the procurement costs in Chile are paid by the recipient's health insurance, which is subsidized by the government, we do not consider the extraction cost as part of the compensation to induce donation.
As a result, the reservation price for a kidney is estimated to be US$12,347. We also adjust the payment considering variation in the VSI, with reservation prices that range between US$9355 and US$13,255.
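For transparency, the three baseline components reported above add up as follows:

$$\begin{aligned} \text{Reservation price} = 6,179 + 4,292 + 1,876 = US\$ 12,347, \end{aligned}$$

where the three terms are the death-risk, quality-of-life and recovery-time components, respectively.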
To compute the efficiency of introducing payments to living donors, we consider the benefits per donor of a transplant estimated by Harrison et al. (2010). Since they present benefits to the system with and without additional benefits due to improvements in quality of life after transplantation, we are also able to consider both cases. For the computation of both efficiency measures, we just need to subtract the reservation price to the amounts estimated by Harrison et al. (2010). Thus, we have that per donor, a compensation of US$12,347 generates savings of US$38,269 (= US$50,616 − 12,347). Furthermore, if we consider the benefits gains for the patient due to quality of life improvement, we have savings of US$169,871 (= US$182,218 − 12,347). Based on this information, introducing monetary incentives for kidney donations of US$ 12,347 would confer savings to the health care system in the range of US$38,269–169,871.
Since VSL and VSI values are wage-dependent, we now compute the reservation price across the entire wage distribution and not only at the mean. The results are shown in the top panel of Table 2. The reservation price ranges between US$788 and US$80,528 (column 5), depending on the specific compensations for time loss (column 2), death risk (column 3) and quality risk (column 4), which vary with the wage percentile. The reservation prices are computed by adding these three compensations per decile.
Table 2 Estimated kidney compensation and savings for transplantation across the wage distribution (US$ of 2013)
Savings to the health system are presented in the bottom panel of Table 2. Since the compensations vary according to the individual's position in the wage distribution, savings to the system also vary depending on the specific compensation. For computing the savings to the system, we subtract the reservation price in column (5) from the amounts estimated by Harrison et al. (2010).Footnote 3 As a result, savings to the health system range from −US$29,912 to US$49,829 when considering only costs (or between US$101,690 and US$181,431 when considering benefits in QALYs). Since the savings to the system reported by Harrison et al. (2010) are fixed amounts that do not depend on the wage distribution, while the reservation price does, variation in savings comes only from differences in the compensations.
As a result, compensations at the 95th percentile are efficient to the health care system under a conservative measure of benefits. These computations are used to argue that introducing premiums to the first deciles of the distribution still generates savings to the health care system and may be a way to address the unequal payment issue discussed in Sect. 1.
Value of a kidney for a donor
As a proxy for the lower bound of the willingness to accept payments, we compute the option value of a kidney for a donor by estimating the expected cost of suffering kidney disease. For this calculation, we consider both quality of life effects (weighting the VSI for these purposes) and the risk of suffering a malfunctioning kidney (weighting the VSL for these purposes). The reasoning behind these computations follows the same argument as Becker and Elias (2007).
If individuals can avoid the expected cost of suffering a kidney disease by paying for an organ for transplantation (i.e., by replacing their malfunctioning kidney), then a donor's value of a kidney is equivalent to her willingness to pay to avoid the expected cost of having a chronic kidney disease. If individuals are at least compensated for these expected costs, then they will be willing to participate in the organ donation mechanism. The reasoning behind this idea is equivalent to any standard economic decision: if an individual can pay to avoid the loss of a good, she should be compensated with at least that willingness to pay in order to induce her to give the good away.Footnote 4
We propose that the compensation to living donors should be at least the individual's expected cost of having a chronic kidney disease. To begin the analysis, we consider two pessimistic scenarios: in case (i), the highest estimated prevalence rate is used (i.e., 14%); in case (ii), all individuals in the fourth and fifth categories are assumed to have a non-functioning kidney. For case (i), the option value is computed per percentile by weighting the VSI by 0.14 and the VSL by 0.001. For case (ii), the option value is computed per percentile by weighting the VSI by 0.10 and the VSL by 0.003. The risk data are taken from the Chilean National Health Survey (2003). Results for both cases, including option values and participation premiums, are presented in Table 3.
Table 3 Option value and participation premiums to induce donation (US$ of 2013)
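As a complement to Table 3, the following sketch codes the option value and the implied participation premium; the prevalence and non-functioning-kidney weights are those of cases (i) and (ii) described above, while the VSL, VSI and reservation price used in the example loop are rounded, assumed inputs for illustration only.

```python
# Sketch of the option value of a kidney and the implied participation premium.

def option_value(vsl, vsi, prevalence, p_nonfunctioning):
    """Expected cost of chronic kidney disease: quality-of-life losses weighted
    by the prevalence rate, plus the VSL weighted by the probability of ending
    up with a non-functioning kidney."""
    return vsi * prevalence + vsl * p_nonfunctioning

def participation_premium(opt_value, reservation_price):
    """Extra payment needed on top of the Becker-Elias reservation price."""
    return opt_value - reservation_price

# Case (i): 14% prevalence, 0.1% non-functioning risk.
# Case (ii): 10% prevalence, 0.3% non-functioning risk.
# Rounded, assumed mean inputs: VSL = US$13.7 million, VSI = US$33,000,
# reservation price = US$12,347 (Table 1).
for label, prev, p_nf in [("case (i)", 0.14, 0.001), ("case (ii)", 0.10, 0.003)]:
    ov = option_value(13_700_000, 33_000, prev, p_nf)
    print(label, round(ov), round(participation_premium(ov, 12_347)))
```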
For case (i), we find that the donor's valuation at the mean is US$18,321, while for case (ii) the donor's valuation at the mean is US$44,398. The difference between the values under the two scenarios is solely due to the different assumptions on prevalence rates and the risk of having a malfunctioning kidney.
Conditional on the scenario, we consider this computation to be a lower bound of an individual's valuation (i.e., for each scenario, some characteristics do not enter the calculations). Specifically, some individual unobservable variables are not being considered. For example, individuals may value other factors, such as the importance of not undergoing optional surgeries or of keeping both of their kidneys in case a relative needs one. An important implication for policy design is that this value is higher than the reservation price computed using Becker and Elias (2007)'s compensation method, so the compensation computed in the previous section should not be enough to induce donation.
However, since in both cases a transplantation procedure generates savings relative to the alternative treatments, it is possible to pay higher compensations. From the previous section, we have that savings at the mean are at least US$38,269 and up to US$169,871 when considering benefits in QALYs. Therefore, part of these savings can be used to induce donation by compensating individuals with the amounts that correspond to their individual valuations. In this setting, the increased compensation would take the form of a participation premium.
Participation premiums needed to induce donation are also presented in Table 3 (column 3). This premium is computed by subtracting the reservation price in Table 2, calculated using Becker and Elias (2007)'s model, from the option value (column 2). It is important to note that participation premiums are required at every point of the wage distribution, meaning that Becker and Elias (2007)'s amounts are not enough to induce participation at any point of the distribution. As we move along the distribution, this premium increases.
Let us take the following examples. For case (i), at the mean, since the donor's valuation is US$18,321 and the proposed compensation is US$12,347, a participation premium of US$5974 (\(= US\$ 18,321 - 12,347\)) should be offered. Across the wage distribution, the premium goes from US$381 (\(= US\$ 1,169 - 788\)) to US$38,960 (\(= US\$ 119,488 - 80,528\)). As a result, increasing the payment is possible up to the 90th percentile of the wage distribution when considering conservative cost savings only, and up to the 99th percentile when considering the benefits of improving the patient's quality of life (see details in Table 3).
For case (ii), at the mean, the proposed participation premium corresponds to US$32,051 (\(= US\$ 44,398 - 12,347\)). Depending on the individual's position in the wage distribution, the premium ranges between US$2045 (\(= US\$ 2,833 - 788\)) and US$209,031 (\(= US\$ 289,560 - 80,528\)). In this scenario, because participation premiums are higher than in case (i), increased participation premiums are efficient up to the 75th percentile of the wage distribution for conservative cost savings and up to the 95th percentile of the distribution when adding QALY benefits.
Despite the different specific values of the participation premiums and savings computed under case (i) and case (ii), the general result is robust: participation premiums are efficient up to the 75th percentile even under the pessimistic scenario in which all individuals in the fourth and fifth categories have a non-functioning kidney.
Once again, because there might be ethical tensions concerning payments, one could argue that all participation premiums can be computed at the 75th percentile of the distribution and paid to individuals regardless of their relative position in the distribution.Footnote 5 In this setting, the relevant policy implication of this analysis is that a participation premium of US$31,355 is efficient for the system under conservative estimates and would incentivize individuals at or below the 75th percentile to join the mechanism. When considering all benefits, even participation premiums that induce the richest segment of the population to participate, computed at the 95th percentile (e.g., US$102,239), would generate benefits to the health care system while relieving ethical strains.
Note that this calculation assumes that an individual has two kidneys and, therefore, can actually donate one. While the risk of suffering kidney disease with one kidney is not different from that with two kidneys, the individual might value the good differently. In that case, the individual's option value should be higher, and therefore risk premiums should be higher than the ones presented here. It is important to note that the difference in valuations does not come from differences in risks but from individual perceptions of the relative scarcity of the good interacting with other characteristics such as risk aversion. Still, in that case, organ donation remains plausible, and the corresponding option values could eventually be computed.
To conclude this section, we also perform a sensitivity analysis considering a less pessimistic scenario where the lowest prevalence rate (i.e., 10% rather than 14%) and lowest non-functioning risks (i.e., only individuals in the fifth category have a non-functioning kidney) are considered.
The general pattern of results holds. The savings to the system increase, as the participation premiums in this last scenario are smaller. With respect to case (i), option values and participation premiums decrease, but not substantially. It is efficient to compensate individuals by their option value up to the 90th percentile of the distribution when only considering savings in costs, and for the entire distribution when adding benefits due to improvements in individuals' quality of life. With respect to case (ii), the differences in option values and participation premiums are even larger. All details are available in Table 4 and are presented for the reader's analysis.
Table 4 Option value of a kidney and participation premiums in an optimistic scenario (US$ of 2013)
Evaluation using stated preference estimates
We now compare the results presented above with those obtained using stated preference (SP) estimates for Chile. As the VSL estimates from stated preference studies are lower than the hedonic estimates, the estimated reservation price is lower, increasing the efficiency of introducing payments to living donors. The SP studies and values are presented in Table 5 (columns 1 and 2). Except for one value (GreenLabUC 2014), the VSL estimates are similar in magnitude across stated preference studies; consequently, this outlying estimate drives the large intervals in the efficiency evaluation. Since not all the papers under consideration estimate the VSI for Chile, we first use the VSI estimated under the hedonic wage method.
Table 5 Components for a kidney compensation for transplantation using stated preferences studies (US$ of 2013)
We compute every component of the reservation price, and the reservation price itself, for every VSL found in the stated preference literature (see columns 3, 4 and 5 of Table 5). Since the VSL varies across studies, the reservation price and its corresponding efficiency also vary. As before, we compute savings in costs and savings in costs plus benefits in quality of life by subtracting the reservation prices from the benefits per donor provided by Harrison et al. (2010).Footnote 6 Results of the efficiency analysis for all available SP values are presented in columns 7 and 8 of Table 5. In general, we estimate reservation prices between US$6294 and US$8598, implying efficiency gains between US$42,019 and US$175,925.
Lastly, Table 6 presents the reservation price and savings under alternative values for the VSI. First, based on the results of Hojman et al. (2005), we define the VSI to be 40% of the estimated VSL; they find that the VSI is 41% of the VSL in one of their cases and 50% in the other, and here we use (approximately) the lowest proportion. We then try three alternative cases with VSI values of US$10,000, US$20,000, and US$30,000. As before, savings are computed using the reference values from Harrison et al. (2010). The reservation price ranges from US$3302 to US$285,106, and the efficiency result ranges from a gain of US$178,917 to a loss of US$234,489.
Table 6 Compensation and Savings under different quality risk compensations (US$ of 2013)
Using labor market estimates of the value of life and safety, we estimate an average compensation for living donors of US$12,347, with a net benefit ranging from US$38,269 to US$169,871. As we introduce variation across the wage distribution, we find that compensation in the range of US$4214 to US$83,953 is needed to induce donations, with savings ranging from a negative US$(29,912) to US$181,431 per additional donor, depending on the position in the wage distribution. As a result, the savings per donor allow for the compensation of donors at the 95th percentile of the wage distribution. Accordingly, we argue that it would be efficient to introduce a participation premium to prevent unequal payments across the distribution. The premium ranges roughly between US$300 and US$30,000 per donor. Moreover, the efficiency gains are higher when considering stated preference estimates of life and safety.
Despite the efficiency gains, the literature has not established how compensation to living donors would increase the number of donations. To the best of our knowledge, this is the first paper to estimate the value of a kidney for a donor using her willingness to pay to avoid kidney disease. From the analysis, we conclude that the compensation computed based on Becker's analysis would not be enough to increase donations. Consequently, we propose donation premiums for effectively inducing donation, computed by considering the donor's willingness to accept payments. The premium ranges roughly between US$300 and US$30,000, and its introduction is efficient up to at least the 90th percentile of the distribution when considering only savings in costs without benefits in quality of life, and efficient for the entire distribution when adding QALY benefits.
The VSL and VSI estimates we use in the main set of results are among the highest labor market estimates found in the literature. Viscusi and Masterman (2017) report that among the best-set estimates, the median VSL for the United States is US$9.6 million, and US$22.7 million at the 90th percentile of the distribution. For other countries, the median VSL estimate is US$7.8 million, and US$39.4 million at the 90th percentile of the distribution. If lower estimates were used in this analysis, the efficiency gains of introducing payments to living donors would be larger.
Since payments to donors have not been generally implemented, it is hard to compare our range of estimated compensation with actual payments. For comparison, we now consider the scarce evidence from black markets. For example, reports from kidney black-market sales indicate prices in the range of US$10,000 to US$20,000. Most of these sales come from Asia and Latin America. Specifically, there is evidence of sales in Brazil with prices from approximately US$2000 to US$10,000, in the Philippines from US$2500 to US$10,000, in Turkey from US$3000 to US$10,000, and in India from US$1000 to US$2000 (Beard et al. 2013; Becker and Elias 2007). Of these countries, Brazil might be the most similar to the Chilean structure. Values from Brazil are in ranges similar to the compensations estimated for Chileans between the 5th and 50th percentiles of the wage distribution. Generally, these amounts from developing countries are in the same range as estimates for the first half of the Chilean wage distribution (up to the 50th percentile), except for India, whose values are similar to the reservation prices of the bottom 5% of the distribution.
In developed countries, evidence from illegal transactions shows prices up to US$100,000 for the United States and from US$9000 to US$12,000 for England. On the other hand, evidence from the Israeli reform predicts that out-of-pocket direct and indirect costs for actual living kidney donors increased to US$20,000 (Tushla et al. 2015). Estimates for Canada suggest that the average productivity loss for living donors may rise to US$6700, equivalent to a compensation for recovery time of US$1876 (Klarenbach et al. 2014).
Our results are consistent with the findings in the literature. For the U.S., Becker and Elias (2007) estimate an efficiency gain of US$122,700 per donor. Held et al. (2016) find that a conservative compensation of US$45,000 per donor would generate a total net welfare gain for society of US$46 billion per year. However, there is little research on compensation estimates for developing countries, and we believe that this analysis could be extended to other Latin-American countries. For extrapolation of results, it is important to note again that Chile is classified as a developing country by the International Monetary Fund and as a high-income OECD country by the World Bank.
Note that the wage for the wealthiest segment of the Chilean population (99th percentile) is in a similar range to the average annual American salary.Footnote 7 Viscusi and Masterman (2017) report an average wage for VSL studies in the U.S. of roughly US$45,000 for U.S. government and Census of Fatal Occupational Injuries (CFOI) data sources, and about US$58,000 for U.S. non-government sources. Based on this, we could compare the compensation estimated by Becker and Elias (2007) to the US$80,528 estimated for Chileans in the 99th percentile and the option value of US$119,488.
An advantage of introducing compensation payable by a third party is that exclusion from access to organs due to monetary constraints can be prevented. We think that this policy helps patients who currently may not have access to necessary medical treatment. This access argument might be especially important in developing countries. Rees et al. (2017) estimate that 93% of patients receiving renal replacement treatment reside in high- or high-middle-income countries, while only 7% reside in low-income countries.
Additionally, compensation could be introduced as a credit or a payment to the donor's health insurance contract for the recommended follow-up consultations with the medical transplantation team (Glotzer et al. 2013). Compensation could also pay for improved psychological treatment, as there is evidence that more and better psychological care should be provided for living donors (Giessing et al. 2004). There could also be space for educational programs and campaigns to promote donations and for other initiatives to increase donation rates, such as the Donor Action program (Roels et al. 2003). In this paper, we do not address crowding-out of altruistic donations, since our strategy consists of evaluating indifference values for an average person.
All data are publicly available as explained in Sect. 3.2. The dataset for replicating the study can be obtained at http://observatorio.ministeriodesarrollosocial.gob.cl/casen/casen_usuarios.php.
Chile is classified as a developing country by the International Monetary Fund and as a high-income OECD country by the World Bank.
In particular, they incorporate individual characteristics such as schooling, work experience, its square, gender, and the Mills ratio for selection into the labor market. As training and job information variables, they incorporate tenure, daily hours worked, work contract status, specific job training status, firm size, and geographical location using the Chilean administrative division for territorial organization.
For example, for the 10th percentile, savings in costs equal \(US\$ 47,465 = 50,616 - 3,151\) and savings in costs plus QALY benefits equal \(US\$ 179,067 = 182,218 - 3,151.\)
Note that in a market setting, the individual pays for an organ at least the same amount that she will save from having the disease. While we are not suggesting that such a market should exist, we are using economic reasoning to calculate how much an individual will save if she has such a disease.
This case considers the second pessimistic scenario (i.e., case ii) and the conservative costs savings computation (i.e., no QALY included).
For example, for Ortuzar et al. 2000 (road safety) savings to the system in costs equal \(US\$ 44,143 = 50,616 - 6474\) and savings to the system in costs plus QALY benefits equal \(US\$ 175,745 = 182,218 - 6474.\)
Annual wage for the 99th percentile of the Chilean wage distribution equals US$48,936.
Abadie A, Gay S (2006) The impact of presumed consent legislation on cadaveric organ donation: a cross-country study. J Health Econ 25(4):599–620
Alvo M (2009) Prevención de la enfermedad renal crónica I: aspectos generales. Medwave 9:1
Barnett A, Kaserman D, Adams A (1999) Market for organs: the question of supply. Contemp Econ Pol 17(2):147–155
Barnett W, Michael S, Walker D (2001) A free market in kidneys: efficient and equitable. Indep Rev 5(3):373–385
Beard R, Kaserman D, Osterkamp R (2013) The global organ shortage: economic causes, human consequences, policy responses. Stanford University Press, Palo Alto
Becker G, Elias J (2007) Introducing incentives in the market for live and cadaveric organ donations. J Econ Persp 21(3):3–24
Bellavance F, Dionne G, Lebeau M (2009) The value of a statistical life: a meta-analysis with a mixed effects regression model. J Health Econ 28(2):444–464
Bilgel F (2013) The effectiveness of transplant legislation, procedures and management: cross-country evidence. Health Policy 110(2):229–242
Bilgel F, Gelle B (2015) Financial incentives for kidney donation. A comparative case study using synthetic controls. J Health Econ 43:103–117
Bowland B, Beghin J (2001) Robust estimates of value of a statistical life for developing economies. J Policy Model 23(4):385–396
Brooks M (2003) A free market in kidneys would be efficient and equitable: a case of too much romance. Indep Rev 7(4):587–594
Chilean National Health Survey (2003) Ministry of Health, Chile
De Blaeij A, Florax R, Rietveld P, Verhoef E (2003) The value of statistical life in road safety: a meta-analysis. Accid Anal Prev 35(6):973–986
Diesel J (2010) Do economists reach a conclusion on organ liberalization? Econ J Watch 7(3):320–336
Dominguez J, Harrison R, Atal R (2011) Cost-Benefit Estimation of Cadaveric Kidney Transplantation: the Case of a Developing Country. Transpl Proc 46(6):2300–2304
Evans R, Kitzmann D (1998) An economic analysis of kidney transplantation. Surg Clin 78(1):149–174
Flores J (2010) Enfermedad renal crónica: epidemiología y factores de riesgo. Revista Medica Clinica Las Condes 21(4):502–507
Gaston R et al (2006) Limiting financial disincentives in live organ donation: a rational solution to the kidney shortage. Am J Transpl 6(11):2548–2555
Giessing M et al (2004) Quality of life of living kidney donors in Germany: a survey with the validated short form-36 and giessen subjective complaints list-24 questionnaires. Transplantation 78(6):864–872
Glotzer O et al (2013) Long-term quality of life after living kidney donation. Transpl Proc 45(9):3225–3228
GreenLabUC (2014) Estimación del valor de la vida estadística asociado a contaminación atmosférica y accidentes de tránsito, s.l.: s.n
Hammitt J, Robinson L (2011) The income elasticity of the value per statistical life: transferring estimates between high and low income populations. J Benafit Cost Anal 2(1):1–29
Harrison R, Dominguez JLL, Contreras D, Atal R (2010) Evaluacion del sistema de trasplante en Chile: propuestas de intervencion. In: Propuestas para Chile. Camino al Bicentenario. s.l.:Pontificia Universidad Catolica de Chile
Held P, Port F (2003) The impact of the organ shortage beyond the immediate loss of life: social, medical, and economic costs. s.l., s.n
Held P, McCormick F, Ojo A, Roberts J (2016) A cost-benefit analysis of government compensation of kidney donors. Am J Transpl 16(3):877–885
Hojman P, Ortuzar J, Rizza L (2005) On the joint valuation of averting fatal and severe injuries in highway accidents. J Saf Res 34(6):377–386
Howard D (2007) Producing organ donors. J Econ Persp 21(3):25–36
International Registry in Organ Donation and Transplantation (2017) IRODaT Newsletter 2017
Iraguen P, Ortuzar J (2004) Willingness-to-pay for reducing fatal accident risk in urban areas: an Internet-based Web page stated preference survey. Accid Anal Prev 36(4):513–524
Klarenbach S et al (2014) Economic consequences incurred by living kidney donors: a Canadian multi-center prospective study. Am J Transplant 14(4):916–922
Lavee J et al (2013) Preliminary marked increase in the national organ donation rate in Israel following implementation of a new organ transplantation law. Am J Transpl 13(3):780–785
Liverman C, Childress J (2006) Organ donation: opportunities for action. National Academies Press, Washington, D.C
Matas A (2004) The case for living kidney sales: rationale, objections and concerns. Am J Transplant 4(12):2007–2017
Matas A, Schnitzler M (2004) Payment for living donor (vendor) kidneys: a cost-effectiveness analysis. Am J Transpl 4(2):216–221
Miller T (2000) Variations between countries in values of statistical life. J Transp Econ Policy 34(2):169–188
Mossialos E, Costa-Font J, Rudisill C (2008) Does organ donation legislation affect individuals' willingness to donate their own or their relative's organs? Evidence from European Union survey data. BMC Health Serv Res 8(1):48
Ortuzar J, Cifuentes WH (2000) Application of willingness-to-pay methods to value transport externalities in less developed countries. Environ Plan A 32(11):2007–2018
Parada-Contzen M (2019) The Value of a statistical life for risk-averse and risk-seeking individuals. Risk Anal. https://doi.org/10.1111/risa.13329
Parada-Contzen M, Riquelme-Won A, Vasquez-Lavin F (2013) The value of a statistical life in Chile. Empir Econ 45(3):1073–1087
Rees M et al (2017) Kidney exchange to overcome financial barriers to kidney transplantation. Am J Transpl 17(3):782–790
Rizzi LOJ (2003) Stated preference in the valuation of interurban road safety. Accid Anal Prev 35(1):9–22
Roels L et al (2003) Cost-benefit approach in evaluating investment into donor action: the German case. Transpl Int 16(5):321–326
Roth A, Sönmez T, Unver U (2005) Pairwise kidney exchange. J Econ Theory 125(2):151–188
Schnier K, Merion R, Turgeon N, Howard D (2018) Subsidizing altruism in living organ donation. Econ Inq 56(1):398–423
Stoler A et al (2017) Incentivizing organ donor registrations with organ allocation priority. Health Econ 26(4):500–510
Tushla L et al (2015) Living-donor kidney transplantation: reducing financial barriers to live kidney donation—recommendations from a consensus conference. Clin J Am Soc Nephrol 10(9):1696–1702
Venkataramani A, Martin E, Vijayan A, Wellen J (2012) The impact of tax policies on living organ donations in the United States. Am J Transpl 12(8):2133–2140
Viscusi WK (1993) The value of risks to life and health. J Econ Liter 31(4):1912–1946
Viscusi WK, Aldy J (2003) The value of a statistical life: a critical review of market estimates throughout the world. J Risk Uncertainty 27(1):5–76
Viscusi WK, Masterman C (2017) Anchoring biases in international estimates of the value of a statistical life. J Risk Uncertainty 54(2):103–128
Wellington A, Sayre E (2011) An evaluation of financial incentive policies for organ donations in the United States. Contemp Econ Policy 31(4):1912–1946
Marcela Parada-Contzen thanks the funding granted by Fondecyt (National Fund for Scientific and Technological Development-Chile), Project No. 3180155.
Departamento de Ingeniería Industrial, Facultad de Ingeniería, Universidad de Concepción, Edmundo Larenas 219, Concepción, Chile
Marcela Parada-Contzen
Facultad de Economía y Negocios, Universidad del Desarrollo, Ainavillo 456, Concepción, Chile
Felipe Vásquez-Lavín
Both authors analyzed the data and discussed the empirical findings and extensions. Both authors contributed equally to writing the manuscript. Both authors read and approved the final manuscript.
Correspondence to Marcela Parada-Contzen.
Parada-Contzen, M., Vásquez-Lavín, F. An analysis of economic incentives to encourage organ donation: evidence from Chile. Lat Am Econ Rev 28, 6 (2019) doi:10.1186/s40503-019-0068-2
Compensations to living donors
Cost–benefit analysis
A note on optimization modelling of piecewise linear delay costing in the airline industry
JIMO Home
Adaptive large neighborhood search Algorithm for route planning of freight buses with pickup and delivery
July 2021, 17(4): 1795-1807. doi: 10.3934/jimo.2020046
Network data envelopment analysis with fuzzy non-discretionary factors
Cheng-Kai Hu 1, , Fung-Bao Liu 2, , Hong-Ming Chen 3, and Cheng-Feng Hu 4,,
Department of International Business, Kao Yuan University, Kaohsiung, 82151, Taiwan
Department of Mechanical and Automation Engineering, I-Shou University, Kaohsiung, 84001, Taiwan
Department of Applied Mathematics, Tunghai University, Taichung 40704, Taiwan
Department of Applied Mathematics, National Chiayi University, Chiayi, 60004, Taiwan
* Corresponding author: C.-F. Hu
Received January 2019 Revised September 2019 Published July 2021 Early access March 2020
Network data envelopment analysis (DEA) concerns using the DEA technique to measure the relative efficiency of a system, taking into account its internal structure. The results are more meaningful and informative than those obtained from conventional DEA models. This work proposes a new network DEA model based on fuzzy concepts, even though the input and output data are crisp numbers. The model is then extended to investigate network DEA with fuzzy non-discretionary variables. An illustrative application assessing the impact of information technology (IT) on firm performance is included. The results reveal that modeling the IT budget as a fuzzy non-discretionary factor improves the system performance of firms in a banking industry.
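For readers unfamiliar with DEA, the sketch below shows only the classical input-oriented CCR envelopment linear program that network and fuzzy extensions build on; it is not the fuzzy network model proposed in this paper, and the data in it are made up.

```python
# Minimal sketch of a classical input-oriented CCR DEA efficiency score.
# This is NOT the fuzzy network model of the paper; it only illustrates the basic
# DEA linear program that such models build on. Data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU `o`.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    # decision variables: z = [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.c_[-X[:, [o]], X]                 # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y]         # sum_j lam_j y_rj >= y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical data: 3 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 4.0, 8.0],
              [3.0, 2.0, 5.0]])
Y = np.array([[1.0, 1.0, 2.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```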
Keywords: Network DEA, non-discretionary, fuzzy decision making.
Mathematics Subject Classification: Primary: 90B50, 90C08; Secondary: 91B06.
Citation: Cheng-Kai Hu, Fung-Bao Liu, Hong-Ming Chen, Cheng-Feng Hu. Network data envelopment analysis with fuzzy non-discretionary factors. Journal of Industrial & Management Optimization, 2021, 17 (4) : 1795-1807. doi: 10.3934/jimo.2020046
R. D. Banker and R. Morey, Efficiency analysis for exogenously fixed inputs and outputs, Oper. Res., 34 (1986), 501-653. doi: 10.1287/opre.34.4.513. Google Scholar
M. Barat, G. Tohidi and M. Sanei, DEA for nonhomogeneous mixed networks, Asia Pac. Manag. Rev., 24 (2018), 161-166. doi: 10.1016/j.apmrv.2018.02.003. Google Scholar
R. E. Bellman and L. A. Zadeh, Decision making in a fuzzy environment, Manag. Sci., 17 (1970), B141–B164. doi: 10.1287/mnsc.17.4.B141. Google Scholar
L. Castelli, R. Pesenti and W. Ukovich, DEA-like models for the efficiency evaluation of hierarchically structured units, Eur. J. Oper. Res., 154 (2004), 465-476. doi: 10.1016/S0377-2217(03)00182-6. Google Scholar
J. Zhu, Data Envelopment Analysis: A Handbook of Modeling Internal Structures and Networks, International Series in Operations Research & Management Science, 238. Springer, New York, 2016. doi: 10.1007/978-1-4899-7684-0. Google Scholar
J. M. Cordero-Ferrera, F. Pedraja-Chaparro and D. Santín-González, Enhancing the inclusion of non-discretionary inputs in DEA, J. Oper. Res. Soc., 61 (2010), 574-584. doi: 10.1057/jors.2008.189. Google Scholar
R. Färe and S. Grosskopf, Intertemporal Production Frontiers: With Dynamic DEA, Boston: Kluwer Academic Publishers, 1996. Google Scholar
R. Färe and S. Grosskopf, Network DEA, Socio. Econ. Plann. Sci., 4 (2000), 35-49. Google Scholar
D. U. A. Galagedera, Modelling social responsibility in mutual fund performance appraisal: A two-stage data envelopment analysis model with non-discretionary first stage output, Eur. J. Oper. Res., 273 (2019), 376-389. doi: 10.1016/j.ejor.2018.08.011. Google Scholar
B. Golany and Y. Roll, Some extensions of techniques to handle non-discretionary factors in data envelopment analysis, J. Prod. Anal., 4 (1993), 419-432. doi: 10.1007/BF01073549. Google Scholar
C. Kao, Network data envelopment analysis: A review, Eur. J. Oper. Res., 239 (2014), 1-16. doi: 10.1016/j.ejor.2014.02.039. Google Scholar
C. Kao, Efficiency decomposition and aggregation in network data envelopment analysis, Eur. J. Oper. Res., 255 (2016), 778-786. doi: 10.1016/j.ejor.2016.05.019. Google Scholar
C. Kao and S.-N. Hwang, Efficiency measurement for network systems: IT impact on firm performance, Decis. Support Syst., 48 (2010), 437-446. doi: 10.1016/j.dss.2009.06.002. Google Scholar
R. J. Kauffman and P. Weill, An evaluative framework for research on the performance effects of information technology investment, Proceedings of the 10th International Conference on Information Systems, (1989), 377–388. doi: 10.1145/75034.75066. Google Scholar
M. A. Muniz, J. Paradi, J. Ruggiero and Z. Yang, Evaluating alternative DEA models used to control for non-discretionary inputs, Comput. Oper. Res., 33 (2006), 1173-1183. Google Scholar
L. Simar and P. W. Wilson, Estimation and inference in two-stage, semi-parametric models of production processes, J. Econom., 136 (2007), 31-64. doi: 10.1016/j.jeconom.2005.07.009. Google Scholar
M. Taleb, R. Ramli and R. Khalid, Developing a two-stage approach of super efficiency slack-based measure in the presence of non-discretionary factors and mixed integer-valued data envelopment analysis, Expert. Syst. Appl., 103 (2018), 14-24. doi: 10.1016/j.eswa.2018.02.037. Google Scholar
C. H. Wang, R. Gopal and S. Zionts, Use of data envelopment analysis in assessing information technology impact on firm performance, Ann. Oper. Res., 73 (1997), 191-213. Google Scholar
M. Zerafat Angiz L and A. Mustafa, Fuzzy interpretation of efficiency in data envelopment analysis and its application in a non-discretionary model, Knowl.-Based Syst., 49 (2013), 145-151. Google Scholar
12]">Figure 1. General network systems [12]
18]">Figure 2. Network system discussed in [18]
Table 1. Data set for assessing IT impact on firm performance (monetary columns in $ billions)

j | IT budget ($X_1$) | Fixed assets ($X_2$) | No. of employees ($X_3$) | Deposits ($Z$) | Profit ($Y_1$) | Fraction of loans recovered ($Y_2$)
1 | 0.150 | 0.713 | 13.3 | 14.478 | 0.232 | 0.986
5 | 0.133 | 0.409 | 18.485 | 15.206 | 0.237 | 0.984
6 | 0.497 | 5.846 | 56.42 | 81.186 | 1.103 | 0.955
9 | 1.500 | 18.120 | 89.51 | 124.072 | 1.858 | 0.972
10 | 0.120 | 1.821 | 19.8 | 17.425 | 0.274 | 0.983
15 | 0.431 | 4.504 | 41.1 | 52.63 | 0.741 | 0.981
17 | 0.053 | 0.450 | 7.6 | 9.512 | 0.067 | 0.980
27 | 0.0106 | 1.757 | 12.7 | 20.670 | 0.253 | 0.988
Table 2. The system efficiency, $ \theta_p^{\ast} $, and the membership degree, $ \alpha_p $, $ p = 1, 2, \cdots, 27 $

DMU j | Model (2) $\theta^{\ast}$ | Model (6) $\alpha^{\ast}$ | $1-\alpha^{\ast}$ | DMU j | Model (2) $\theta^{\ast}$ | Model (6) $\alpha^{\ast}$ | $1-\alpha^{\ast}$
1 | 0.6388 | 0.3612 | 0.6388 | 15 | 0.6582 | 0.3418 | 0.6582
10 | 0.4961 | 0.5039 | 0.4961 | 24 | 0.9300 | 0.0700 | 0.9300
14 | 0.5880 | 0.4120 | 0.5880 |
Table 3. The results of solving the proposed fuzzy non-discretionary Model (14)

DMU j | Fuzzy non-discretionary inputs $\bar{X}_{1j}^{\ast}$ | $\bar{X}_{2j}^{\ast}$ | $\bar{X}_{3j}^{\ast}$ | $\alpha^{\ast}$ | $1-\alpha^{\ast}$ | Rank
1 | 0.1102 | 0.5236 | 9.6335 | 0.2654 | 0.7346 | 18
2 | 0.1260 | 0.7723 | 12.4259 | 0.2589 | 0.7411 | 17
7 | 0.0600 | 0.9180 | 56.4200 | 0.0000 | 1.0000 | 1
9 | 1.0908 | 13.1471 | 64.8529 | 0.2728 | 0.7272 | 20
10 | 0.0798 | 1.0715 | 12.8295 | 0.3351 | 0.6649 | 26
12 | 0.0376 | 0.6544 | 9.7883 | 0.2490 | 0.7510 | 14
13 | 0.3519 | 5.3291 | 11.8900 | 0.0488 | 0.9512 | 5
Naziya Parveen, Prakash N. Kamble. An extension of TOPSIS for group decision making in intuitionistic fuzzy environment. Mathematical Foundations of Computing, 2021, 4 (1) : 61-71. doi: 10.3934/mfc.2021002
Muhammad Qiyas, Saleem Abdullah, Shahzaib Ashraf, Saifullah Khan, Aziz Khan. Triangular picture fuzzy linguistic induced ordered weighted aggregation operators and its application on decision making problems. Mathematical Foundations of Computing, 2019, 2 (3) : 183-201. doi: 10.3934/mfc.2019013
Feyza Gürbüz, Panos M. Pardalos. A decision making process application for the slurry production in ceramics via fuzzy cluster and data mining. Journal of Industrial & Management Optimization, 2012, 8 (2) : 285-297. doi: 10.3934/jimo.2012.8.285
Zhen Ming Ma, Ze Shui Xu, Wei Yang. Approach to the consistency and consensus of Pythagorean fuzzy preference relations based on their partial orders in group decision making. Journal of Industrial & Management Optimization, 2021, 17 (5) : 2615-2638. doi: 10.3934/jimo.2020086
Harish Garg, Dimple Rani. Multi-criteria decision making method based on Bonferroni mean aggregation operators of complex intuitionistic fuzzy numbers. Journal of Industrial & Management Optimization, 2021, 17 (5) : 2279-2306. doi: 10.3934/jimo.2020069
Harish Garg, Kamal Kumar. Group decision making approach based on possibility degree measure under linguistic interval-valued intuitionistic fuzzy set environment. Journal of Industrial & Management Optimization, 2020, 16 (1) : 445-467. doi: 10.3934/jimo.2018162
Harish Garg. Some robust improved geometric aggregation operators under interval-valued intuitionistic fuzzy environment for multi-criteria decision-making process. Journal of Industrial & Management Optimization, 2018, 14 (1) : 283-308. doi: 10.3934/jimo.2017047
G.S. Liu, J.Z. Zhang. Decision making of transportation plan, a bilevel transportation problem approach. Journal of Industrial & Management Optimization, 2005, 1 (3) : 305-314. doi: 10.3934/jimo.2005.1.305
Ana F. Carazo, Ignacio Contreras, Trinidad Gómez, Fátima Pérez. A project portfolio selection problem in a group decision-making context. Journal of Industrial & Management Optimization, 2012, 8 (1) : 243-261. doi: 10.3934/jimo.2012.8.243
Ruiyue Lin, Zhiping Chen, Zongxin Li. A new approach for allocating fixed costs among decision making units. Journal of Industrial & Management Optimization, 2016, 12 (1) : 211-228. doi: 10.3934/jimo.2016.12.211
Hamed Fazlollahtabar, Mohammad Saidi-Mehrabad. Optimizing multi-objective decision making having qualitative evaluation. Journal of Industrial & Management Optimization, 2015, 11 (3) : 747-762. doi: 10.3934/jimo.2015.11.747
Saeed Assani, Muhammad Salman Mansoor, Faisal Asghar, Yongjun Li, Feng Yang. Efficiency, RTS, and marginal returns from salary on the performance of the NBA players: A parallel DEA network with shared inputs. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021053
Saeed Assani, Jianlin Jiang, Ahmad Assani, Feng Yang. Scale efficiency of China's regional R & D value chain: A double frontier network DEA approach. Journal of Industrial & Management Optimization, 2021, 17 (3) : 1357-1382. doi: 10.3934/jimo.2020025
Harish Garg. Novel correlation coefficients under the intuitionistic multiplicative environment and their applications to decision-making process. Journal of Industrial & Management Optimization, 2018, 14 (4) : 1501-1519. doi: 10.3934/jimo.2018018
Xue Yan, Heap-Yih Chong, Jing Zhou, Zhaohan Sheng, Feng Xu. Fairness preference based decision-making model for concession period in PPP projects. Journal of Industrial & Management Optimization, 2020, 16 (1) : 11-23. doi: 10.3934/jimo.2018137
Jian Jin, Weijian Mi. An AIMMS-based decision-making model for optimizing the intelligent stowage of export containers in a single bay. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 1101-1115. doi: 10.3934/dcdss.2019076
Gholam Hassan Shirdel, Somayeh Ramezani-Tarkhorani. A new method for ranking decision making units using common set of weights: A developed criterion. Journal of Industrial & Management Optimization, 2020, 16 (2) : 633-651. doi: 10.3934/jimo.2018171
Saber Saati, Adel Hatami-Marbini, Per J. Agrell, Madjid Tavana. A common set of weight approach using an ideal decision making unit in data envelopment analysis. Journal of Industrial & Management Optimization, 2012, 8 (3) : 623-637. doi: 10.3934/jimo.2012.8.623
Weichao Yue, Weihua Gui, Xiaofang Chen, Zhaohui Zeng, Yongfang Xie. Evaluation strategy and mass balance for making decision about the amount of aluminum fluoride addition based on superheat degree. Journal of Industrial & Management Optimization, 2020, 16 (2) : 601-622. doi: 10.3934/jimo.2018169
Gleb Beliakov. Construction of aggregation operators for automated decision making via optimal interpolation and global optimization. Journal of Industrial & Management Optimization, 2007, 3 (2) : 193-208. doi: 10.3934/jimo.2007.3.193
Cheng-Kai Hu, Fung-Bao Liu, Hong-Ming Chen, Cheng-Feng Hu
Written by Colin+ in arithmetic.
That @solvemymaths is an excellent source of puzzles and whathaveyou:
Meanwhile, back in 1940 when everything was basically shit... pic.twitter.com/A5eKXOunFC
— Ed Southall (@solvemymaths) October 7, 2017
How would you find $\sqrt[3]{\frac{1-x^2}{x}}$ when $x=0.962$, using log tables or otherwise?
I would start by trying to make the numbers nicer: I note that $x=(1-0.038)$, which means we can rewrite the expression (using difference of two squares) as:
$\sqrt[3]{\frac{(1.962)(0.038)}{1-0.038}}$
Does that help? Well, maybe. It's easy enough to estimate $\ln(1.962)$ - 1.962 is a small amount - 1.9% - short of 2, so $\ln(1.962)\approx \ln(2) - 0.019$.
Similarly, on the bottom, $\ln(0.962) \approx - 0.038$.
But how about $\ln(0.038)$? We could work out $\ln(38) - \ln(1000)$, which is simple enough. Thirty-eight is a little more than 36, and $\ln(38) \approx 2\ln(6) + \frac{1}{18}$. Meanwhile, $\ln(1000) = 3\ln(10)$.
So, the logarithm of everything under the cube root is $\ln(1.962)+\ln(0.038)-\ln(0.962)$, which we estimate as $\ln(2) - 0.019 + 2\ln(6) + \frac{1}{18} - 3\ln(10) + 0.038$.
Ugly as decimals are, we can write $\frac{1}{18}$ as $0.0\dot{5}$ and combine everything: we get $\ln(72) - \ln(1000) + 0.07$ or $+0.08$, give or take.
Now, we need the cube root of that, so we're going to divide everything by 3. The final two terms are easy enough: $\frac{1}{3} \ln(1000) = \ln(10)$ and we'll call the decimal bit 0.025. How about $\ln(72)$? Well, $\ln(72) = \ln(8) + \ln(9)$, and a third of that is $\ln(2)$ plus a third of $2.197$, from memory, which is $0.732$.
OK: so we now have $\ln(2) + 0.732 - \ln(10) + 0.025$, which is $0.757 - \ln(5)$. $\ln(5) \approx 1.609$, so we end up with $-0.852$ as the logarithm of the answer.
What about $e^{-0.852}$? It's $e^{-0.693} \times e^{-0.159}$, so a fair guess would be about 15% smaller than a half, which is 0.42 or 0.43 or so.
A brief play with the calculator says that the answer is 0.426 - which, for a by-hand estimate, isn't bad at all.
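A two-line check in Python (mine, not part of the original working) confirms both the direct value and the log decomposition:

```python
# Quick check of the estimate (not part of the original blog working).
from math import log, exp

x = 0.962
print(((1 - x**2) / x) ** (1/3))           # direct evaluation, ~0.426

log_inside = log(1.962) + log(0.038) - log(0.962)
print(exp(log_inside / 3))                 # same value via the log decomposition
```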
The Mathematical Ninja lets the student investigate… cube roots
A Digital Root Puzzle
Ask Uncle Colin: A Short, Sweet Limit
English question
The financial director kept us ______________ for almost an hour.
to wait
to be waited
Answer: C
90-minute multiple-choice practice test for grade 11 English review - Test No. 5
Find a mistake in the four underlined parts of the sentence and correct it:
The list of the Seven Wonders of the Ancient World is originally compiled around the second century BC.
Choose a word that has different stress pattern:
expand hostess outlook sneaky
Choose the word that has the underlined part pronounced differently from the rest:
servant her very verse
Choose one sentence that has the same meaning as the root one:
This is the first time I meet such a beautiful shell.
I found that the number of universities which accept foreign students have been increasing.
The Asian Games, which also called the ASIAD, is a multi-sport event held every four years
among athletes from all over Asia.
costumes ceases forces decreases
He has a high fever, so the nurse needs to _____________ his temperature.
I paint almost every day although sometimes I do not have so many time.
You look pale. Why don't you see a doctor, Mary?" Billy said.
influence notify increase celebrate
wear pair shear square
I am looking forward to see the final of the elocution contest which will be organized at the auditorium.
We couldn't understand the teacher if he _____________ too fast.
A little does he know (= He knows nothing about it), but we're flying to Geneva next weekend to celebrate his birthday.
Choose the correct sentence which is built from the words and phrases given:
table / collapse / you / stand / on / it / .
luxury example exist exhaust
I always feel safe tell my close friends my secrets.
What problems do population explosion cause to the world?
How dare you ____________ my letter!
Should two close friends understand each other so well that ________ can be no suspicion between them?
appear address area agree
English / borrow / words / more than 50 different languages
The concert _____________ I listened last weekend was boring.
statue person suppose little
The president was made resign by the rebels.
Only prositive comments are ______________ on the first days of the new year.
She said to us, ''Don't be late again.''
height bright mind weight
There ___________ mutual understanding between friends.
Anybody can sympathize with the sufferings of a friend, but it requires a very fine nature to
sympathize with a friend's successful.
shoemaker practical generous improvement
chimney child line sign
She stood in the mucky yard and hands plunged into the pockets of her ____________ coat.
It's Winnie's graduation tomorrow. She has finally _____________ her dream.
The only sign of ________________________ is that he keeps glancing at his expensive watch.
I had the roof _____________ yesterday.
envelope pagoda repentance reunion
coup group soup tough
A person's or animal's hearing is the sense which makes it possible for
them to be aware of ________________________ .
Choose the word whose stress is on the second syllable:
voluntary hospital victory sufficient
come capture coexist appreciate
The students should be treat as individuals with both their strengths and
their weaknesses.
These are the data collecting during our survey trip.
Thank you very much for your_____________________ support you have given me.
reveal survey project station
When you got lost in the forest you _________________________ very frightened.
purpose complaint service package
I have invited some friends to our house for _____________________________ dinner.
dishes watches boxes potatoes
generation adventurous aeronautics international
His daughter continued to cry until he was out of sight.
I always suffer from feelings of _________________ when I am talking about art with him.
It is a waste of time ______________ such a boring party.
The traffic was making so much noise that I couldn't hear what he ___________.
butter put sugar push
What a pity you failed the final exam!
While those efforts have helped Vietnam expand production of oil and natural gas, domestic
consumption of these resources has also increased as a result of rapid economic grow.
I / arrived / the / at / station / train / the / had / left / when /.
corridor enormous mystery separate
Given the function . The solution set of f'(x) ≤ f(x) is:
Choose a suitable adverbial of place to fill in the blank (…) to complete the following sentence:
…, white storks tilt their wings and flutter in flight.
The maximum number of electrons in the n-th shell is:
Calculate: $40\,\,000 - 20\,\,000 \times 2.$
If the soil has a thin, dry, nutrient-poor topsoil layer and weak microorganism activity, the appropriate improvement measure is:
What were the relations of production in agriculture in Western Europe at the beginning of the 16th century?
Saponifying a compound with molecular formula C10H14O6 in excess NaOH solution yields glycerol and a mixture of three salts (with no geometric isomers). The formulas of the three salts are:
Which of the following lines correctly explains the meaning of the phrase "Khu sản xuất" ("production area")?
A woman whose father is color-blind marries a color-blind man. The possible genotypes of the husband's parents are:
Displacement Formulations for Deformation and Vibration of Elastic Circular Torus
Bohua Sun
Subject: Physical Sciences, Acoustics Keywords: circular torus; deformation; vibration; Gauss curvature; Maple
The formulations used by most studies on an elastic torus are either the Reissner mixed formulation or Novozhilov's complex-form one; however, for vibration and some displacement-boundary-related problems of a torus, those formulations face great challenges. A displacement-type formulation for the torus is therefore highly desirable. In this paper, I carry on my previous work [B.H. Sun, Closed-form solution of axisymmetric slender elastic toroidal shells. J. of Engineering Mechanics, 136 (2010) 1281-1288.], and with the help of my own Maple code, I am able to simulate some typical problems and the free vibration of the torus. The numerical results are verified by both finite element analysis and H. Reissner's formulation. My investigations show that both the deformation and stress response of an elastic torus are sensitive to the radius ratio, suggest that the analysis of a torus should be done by using the bending theory of a shell, and also reveal that the inner torus is stronger than the outer torus due to the property of their Gaussian curvature. Regarding the free vibration of a torus, the analysis indicates that the inertia terms in both the u and w directions must be included; otherwise, large errors in the eigenfrequencies will result. One of the most interesting discoveries is that the crowns of a torus are the turning points of the Gaussian curvature, where the mechanical responses of the inner and outer torus are almost separated.
Closed Form Solution of Plane-Parallel Turbulent Flow Along an Unbounded Plane Surface
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulent flow; Prandtl mixing length; Reynolds number; boundary layer
Online: 8 June 2022 (12:27:48 CEST)
A century-old scientific conundrum is solved in this paper. The Prandtl mixing length modelled plane boundary turbulent flow is described by: $\frac{du^+}{dy^+}+\kappa^2 (y^+)^2\left(\frac{du^+}{dy^+}\right)^2=1$, together with the boundary condition $y^+=0:\, u^+=0$. Until now, only approximate solutions to this nonlinear ordinary differential equation (ODE) have been sought; the exact solution had not been obtained. By introducing a transformation, $2\kappa y^+=\sinh \xi$, I successfully find the exact solution of the ODE as follows: $u^+=\frac{1}{\kappa}\ln\left(2\kappa y^+ +\sqrt{1+4\kappa^2 (y^+)^2}\right)-\frac{2y^+}{1+\sqrt{1+4\kappa^2 (y^+)^2}}$.
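As a quick independent check (mine, not the author's code), the quoted closed form can be verified numerically against the ODE; the von Karman constant value below is just a typical choice.

```python
# Numerical spot-check (not the paper's Maple code) that the quoted closed form
# satisfies du/dy + kappa^2 y^2 (du/dy)^2 = 1 with u(0) = 0.
import numpy as np

kappa = 0.41                         # typical von Karman constant (assumed value)

def u(y):
    s = np.sqrt(1 + 4*kappa**2*y**2)
    return np.log(2*kappa*y + s)/kappa - 2*y/(1 + s)

y = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
h = 1e-6
du = (u(y + h) - u(y - h)) / (2*h)   # central-difference derivative
print(du + kappa**2*y**2*du**2)      # each entry should be ~1
print(u(0.0))                        # 0.0, the wall boundary condition
```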
Thermodynamic Foundation of Generalized Variational Principle
Subject: Physical Sciences, General & Theoretical Physics Keywords: variational principle; elasticity; Lagrangian multipliers; thermodynamics; entropy
One open question remains regarding the theory of the generalized variational principle: why can the stress-strain relation still be derived from the generalized variational principle while the Lagrangian multiplier method is applied in vain? This study shows that the generalized variational principle can only be understood and implemented correctly within the framework of thermodynamics. As long as the functional contains one of the combinations $A(\epsilon_{ij})-\sigma_{ij}\epsilon_{ij}$ or $B(\sigma_{ij})-\sigma_{ij}\epsilon_{ij}$, its corresponding variational principle will produce the stress-strain relation without the need to introduce extra constraints by the Lagrangian multiplier method. It is proved herein that the Hu-Washizu functional $\Pi_{HW}[u_i,\epsilon_{ij},\sigma_{ij}]$ and the Hu-Washizu variational principle constitute a genuine three-field functional and variational principle.
Nonlinear Elastic Deformation of Mindlin Torus
Subject: Physical Sciences, Acoustics Keywords: circular torus; nonlinear deformation; shear deformation; Mindlin; Gauss curvature; Maple
The nonlinear deformation and stress analysis of a circular torus is a difficult undertaking due to its complicated topology and the variation of the Gauss curvature. A nonlinear deformation (only one term in strain is omitted) of Mindlin torus was formulated in terms of the generalized displacement, and a general Maple code was written for numerical simulations. Numerical investigations show that the results obtained by nonlinear Mindlin, linear Mindlin, nonlinear Kirchhoff-Love, and linear Kirchhoff-Love models are close to each other. The study further reveals that the linear Kirchhoff-Love modeling of the circular torus gives good accuracy and provides assurance that the nonlinear deformation and stress analysis (not dynamics) of a Mindlin torus can be replaced by a simpler formulation, such as a linear Kirchhoff-Love theory of the torus, which has not been reported in the literature.
Geometry-Induced Rigidity in Elastic Torus from Circular to Oblique Elliptic Cross-Section
Subject: Physical Sciences, Applied Physics Keywords: elliptic torus; oblique; nonlinear deformation; vibration; Gauss curvature; Maple
For a given material, different shapes correspond to different rigidities. In this paper, the radii of the oblique elliptic torus are formulated, a nonlinear displacement formulation is presented, and numerical simulations are carried out for circular, normal elliptic, and oblique tori, respectively. Our investigation shows that both the deformation and the stress response of an elastic torus are sensitive to the radius ratio, and indicates that the analysis of a torus should be done by using the bending theory of shells rather than membrane theory. A numerical study demonstrates that the inner region of the torus is stiffer than the outer region due to the Gauss curvature. The study also shows that an elastic torus deforms in a very specific manner: the strain and stress concentrate in two very narrow regions around the top and bottom crowns. The desired rigidity can be achieved by adjusting the ratio of the minor and major radii and the oblique angle.
Deformation and Stress Analysis of Catenary Shell of Revolution
Subject: Physical Sciences, Acoustics Keywords: Catenary; surface of revolution; Gauss curvature; minimal surface; shells; deformation; stress; Maple
Catenary shells of revolution are widely used in construction due to their unique mechanical features. However, no publications on this type of shell can be found in the literature. To gain a better understanding of the deformation and stress of catenary shells of revolution, we formulate the principal radii for two kinds of catenary shells of revolution and their displacement-type governing equations. Numerical simulations are carried out based on both Reissner-Meissner mixed formulations and displacement formulations. Our investigations show that both the deformation and stress response of elastic catenary shells of revolution are sensitive to the geometric parameter $c$, and reveal that catenary shells of revolution perform much better mechanically than spherical shells. Two complete codes in Maple are provided.
Small Symmetrical Deformation of Thin Torus with Circular Cross-Section
Subject: Engineering, General Engineering Keywords: toroidal shell; deformation; Gauss curvature; Heun function; hypergeometric function; Maple
By introducing a variable transformation $\xi=\frac{1}{2}(\sin \theta+1)$, a complex-form ordinary differential equation (ODE) for the small symmetrical deformation of an elastic torus is successfully transformed into the well-known Heun's ODE, whose exact solution is obtained in terms of Heun's functions. To overcome the computational difficulties of the complex-form ODE in dealing with boundary conditions, a real-form ODE system is proposed. A general code of numerical solution of the real-form ODE is written by using Maple. Some numerical studies are carried out and verified by both finite element analysis and H. Reissner's formulation. Our investigations show that both deformation and stress response of an elastic torus are sensitive to the radius ratio, and suggest that the analysis of a torus should be done by using the bending theory of a shell.
Gol'denveizer Problem of Elastic Torus
Subject: Physical Sciences, Acoustics Keywords: torus; elastic; deformation; symmetric; Gauss curvature
The Gol'denveizer problem of a torus can be described as follows: a toroidal shell is loaded under axial forces and the outer and inner equators are loaded with opposite balanced forces. Gol'denveizer pointed out that the membrane theory of shells is unable to predict deformation in this problem, as it yields diverging stress near the crowns. Although the problem has been studied by Audoly and Pomeau (2002) with the membrane theory of shells, the problem is still far from resolved within the framework of bending theory of shells. In this paper, the bending theory of shells is applied to formulate the Gol'denveizer problem of a torus. To overcome the computational difficulties of the governing complex-form ordinary differential equation (ODE), the complex-form ODE is converted into a real-form ODE system. Several numerical studies are carried out and verified by finite-element analysis. Investigations reveal that the deformation and stress of an elastic torus are sensitive to the radius ratio, and the Gol'denveizer problem of a torus can only be fully understood based on the bending theory of shells.
Deformation and Vibration of an Oblique Elliptic Torus
Subject: Physical Sciences, Acoustics Keywords: elliptic torus; oblique; deformation; vibration; Gauss curvature; Maple
The formulations used by most studies on an elastic torus are either the Reissner mixed formulation or Novozhilov's complex-form one; however, for vibration and some displacement-boundary-related problems of the torus, the application of those formulations has encountered great difficulty. A displacement-type formulation for the torus is therefore highly desirable. In this paper, I simulate some typical problems and the free vibration of the torus. The numerical results are verified by both finite element analysis and H. Reissner's formulation. My investigations show that both the deformation and stress response of an elastic torus are sensitive to the radius ratio, suggest that the analysis of a torus should be done by using the bending theory of a shell, and also reveal that the inner torus is stronger than the outer torus due to the property of their Gaussian curvature. Regarding the free vibration of the torus, the analysis indicates that the inertia terms in both the u and w directions must be included; otherwise, large errors in the eigenfrequencies will result.
Influence of Physical Parameters on the Collapse of a Spherical Bubble
Subject: Physical Sciences, Acoustics Keywords: Bubble collapse, Rayleigh's modelling, physical parameters, numerical simulation, Maple
This paper examines the influence of physical parameters on the collapse dynamics of a spherical bubble filled with diatomic gas ($\kappa=7/5$). The present numerical investigation shows that each physical parameter affects the bubble collapse dynamics differently. After comparing the contribution of each physical parameter, it appears that, of all the parameters, the surrounding liquid environment affects the bubble collapse dynamics the most. Meanwhile, surface tension has the weakest influence and can be ignored in the bubble collapse dynamics. However, surface tension must be retained in the initial analysis, since it and the pressure difference jointly control initial bubble formation. As an essential part of this study, a general Maple code is provided.
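A numerical sketch of this kind of collapse can be written in a few lines; the equation below is the standard Rayleigh-Plesset form with a polytropic gas at kappa = 7/5, which may differ in detail from the modelling used in the paper, and all parameter values are illustrative.

```python
# Sketch of a Rayleigh-Plesset-type collapse of a gas bubble (kappa = 7/5).
# Standard textbook form; parameter values are illustrative, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

rho, p_inf, sigma, mu = 1000.0, 101325.0, 0.072, 1.0e-3   # water-like liquid
R0, kappa = 1.0e-3, 7.0/5.0                               # initial radius, diatomic gas
p_g0 = 0.1 * p_inf                                        # under-pressured gas, so the bubble collapses

def rhs(t, y):
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * kappa)
    Rddot = (p_gas - p_inf - 2*sigma/R - 4*mu*Rdot/R) / (rho*R) - 1.5*Rdot**2/R
    return [Rdot, Rddot]

sol = solve_ivp(rhs, (0.0, 2.0e-4), [R0, 0.0], max_step=1e-7, rtol=1e-8)
print(sol.y[0].min() / R0)   # normalized minimum radius over the simulated interval
```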
Universal Scaling Laws of Hydraulic Fracturing
Subject: Engineering, Automotive Engineering Keywords: hydraulic fracturing; scaling law; fracture
Hydraulic fracturing is studied by using dimensional analysis. A universal scaling law of hydraulic fracturing is obtained. This simple relation has not appeared in the literature before.
Hertz Elastic Dynamics of Two Colliding Elastic Spheres
Subject: Physical Sciences, General & Theoretical Physics Keywords: Hertz impact dynamics; exact solution; numerical solution; inversion; Maple
This paper revisits a classic problem in physics - the Hertz elastic dynamics of two colliding elastic spheres. This study obtains the impact period in terms of a hypergeometric function and successfully combines Deresiewicz's three segmental solutions into a single solution. Our numerical investigation confirms that Deresiewicz's inversion is a good approximation. As an essential part of this study, a general Maple code is provided.
Solving Prandtl-Blasius Boundary Layer Equation Using Maple
Subject: Physical Sciences, Fluids & Plasmas Keywords: Prandtl boundary layer; Prandtl-Blasius equation; numerical solution; Runge-Kutta method; Maple
A solution for the Prandtl-Blasius equation is essential to all kinds of boundary layer problems. This paper revisits this classic problem and presents a general Maple code as its numerical solution. The solutions were obtained from the Maple code using the Runge-Kutta method. The study also considers expanding the radius of convergence, and an approximate analytic solution is proposed by curve fitting. In addition, the study resolves some related boundary layer problems and provides the relevant Maple codes for these.
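The paper's Maple code is not reproduced here; a minimal Python sketch of the standard approach (Runge-Kutta integration plus a shooting bisection on f''(0)) is given below, under the assumption that this mirrors the usual treatment of the Blasius problem.

```python
# Minimal shooting-method sketch for the Blasius equation
#   f''' + 0.5 f f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1.
# Not the paper's Maple code; just the standard Runge-Kutta + bisection approach.
import numpy as np
from scipy.integrate import solve_ivp

def blasius(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def fp_at_infinity(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius, (0, eta_max), [0.0, 0.0, fpp0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]              # value of f' at the far edge of the layer

# Bisection on the unknown wall curvature f''(0).
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if fp_at_infinity(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi))               # ~0.332, the classical Blasius value of f''(0)
```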
Numerical Solution of Euler's Rotation Equations for a Rigid Body about a Fixed Point
Subject: Physical Sciences, General & Theoretical Physics Keywords: Euler's equation; rigid body; rotation; Maple
Finding a solution for Euler's equations is a classic mechanics problem. This study revisits the problem with numerical approaches. For ease of teaching and research, a Maple code comprising 2 lines is written to find a numerical solution for the problem. The study's results are validated by comparing these with previous studies. Our results confirm the correctness of the principle of maximum moment of inertia of the rotating body, which is verified by thermodynamics. As an essential part of this study, the Maple code is provided.
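The two-line Maple code itself is not reproduced here; an equivalent numerical sketch in Python for the torque-free case, with assumed principal moments of inertia and initial spin, looks like this.

```python
# Torque-free Euler equations for rigid-body rotation about a fixed point,
#   I1 w1' = (I2 - I3) w2 w3   (and cyclic permutations).
# Not the paper's Maple code; moments of inertia and initial spin are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

I1, I2, I3 = 1.0, 2.0, 3.0            # assumed principal moments of inertia

def euler(t, w):
    w1, w2, w3 = w
    return [(I2 - I3)*w2*w3/I1, (I3 - I1)*w3*w1/I2, (I1 - I2)*w1*w2/I3]

sol = solve_ivp(euler, (0, 50), [0.1, 1.0, 0.1], rtol=1e-10)
w = sol.y
# Kinetic energy and angular-momentum magnitude should both be conserved:
T = 0.5*(I1*w[0]**2 + I2*w[1]**2 + I3*w[2]**2)
L = np.sqrt((I1*w[0])**2 + (I2*w[1])**2 + (I3*w[2])**2)
print(T.max() - T.min(), L.max() - L.min())   # both ~0
```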
A Conjecture on the Solution Existence of the Navier-Stokes Equation
Subject: Physical Sciences, Fluids & Plasmas Keywords: solution existence condition; the Navier-Stokes equations; velocity gradient; tensor determinant
For the solution existence condition of the Navier-Stokes equation, we propose a conjecture as follows: "\emph{The Navier-Stokes equation has a solution if and only if the determinant of flow velocity gradient is not zero, namely $\det (\bm \nabla \bm v)\neq 0$.}"
The Monotonic Rising and Oscillating of Capillary Driven Flow in Circular Cylindrical Tubes
Subject: Physical Sciences, Applied Physics Keywords: capillary rise; dynamics; tube radius criteria; oscillation; monotonic rising
Among the best-known capillarity phenomena is capillary rise, the understanding of which is essential in fluidics. Some capillary flows rise monotonically whereas others oscillate, but until now no criteria have been formulated for this scenario. In this paper, Levine's capillary rise model is computed numerically; the critical radius of the capillary tube is then formulated by using the dimensional method and data fitting to identify the exponent. The phase space diagram of capillary velocity versus height is obtained for the first time and shows that the phase transition from oscillating to monotonic rising happens when the phase trajectory decreases exponentially to somewhere other than the "attractor." Two general Maple codes for the problem are provided as an essential part of this paper.
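To illustrate the monotonic-versus-oscillating behaviour, the sketch below integrates the classical momentum balance for capillary rise (Lucas-Washburn with inertia); this is not necessarily Levine's exact model, and the two tube radii are chosen only for illustration.

```python
# Sketch of dynamic capillary rise in a vertical tube (classical momentum balance,
#   rho * d/dt (h * h') = 2*sigma*cos(theta)/r - rho*g*h - 8*mu*h*h'/r**2 ),
# not necessarily Levine's exact model. Varying r shows oscillatory vs monotonic rise.
import numpy as np
from scipy.integrate import solve_ivp

rho, sigma, mu, g, theta = 1000.0, 0.072, 1.0e-3, 9.81, 0.0   # water, fully wetting

def rise(t, y, r):
    h, v = y
    h = max(h, 1e-6)                         # avoid the h -> 0 singularity
    dv = (2*sigma*np.cos(theta)/r - rho*g*h - 8*mu*h*v/r**2) / (rho*h) - v**2/h
    return [v, dv]

for r in (1e-4, 1e-3):                       # narrow vs wide tube (illustrative radii)
    h_eq = 2*sigma*np.cos(theta)/(rho*g*r)   # Jurin equilibrium height
    sol = solve_ivp(rise, (0, 5.0), [1e-4, 0.0], args=(r,), max_step=1e-3)
    print(f"r = {r:g} m: max(h)/h_eq = {sol.y[0].max()/h_eq:.3f}")  # >1 indicates overshoot/oscillation
```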
Correct Expression of the Material Derivative in Continuum Physics
Subject: Physical Sciences, Fluids & Plasmas Keywords: material derivative; continuum physics; solution existence condition; the Navier-Stokes equation
The material derivative is important in continuum physics. This Letter shows that the expression $\frac{d }{dt}=\frac{\partial }{\partial t}+(\bm v\cdot \bm \nabla)$, used in most literature and textbooks, is incorrect. The correct expression $\frac{d (:)}{dt}=\frac{\partial }{\partial t}(:)+\bm v\cdot [\bm \nabla (:)]$ is formulated. The solution existence condition of the Navier-Stokes equation is proposed from its form-solution; the conclusion is that "\emph{The Navier-Stokes equation has a solution if and only if the determinant of the flow velocity gradient is not zero, namely $\det (\bm \nabla \bm v)\neq 0$.}"
On Symmetrical Deformation of Toroidal Shell with Circular Cross-Section
Subject: Physical Sciences, Applied Physics Keywords: toroidal shell; deformation; Gauss curvature; Heun's function; hypergeometric function; Maple
By introducing a variable transformation $\xi=\frac{1}{2}(\sin \theta+1)$, the symmetrical deformation equation of elastic toroidal shells is successfully transformed into a well-known ordinary differential equation, namely Heun's equation, whose exact solution is obtained in terms of Heun functions. The computation of the problem can be carried out by symbolic software that is able to work with Heun functions, such as Maple. The Gauss curvature of the elastic toroidal shells shows that the internal portion of the toroidal shell has better bending capacity than the outer portion, which might be useful for the design of metamaterials with toroidal shell cells. A numerical comparison study shows that the mechanics of elastic toroidal shells is sensitive to the radius ratio; a slight adjustment of the ratio might yield a desired high-performance shell structure.
Explicit Representation of SO(3) Rotation Tensor for Deformable Bodies
Subject: Physical Sciences, General & Theoretical Physics Keywords: finite deformation; deformation gradient; rotation tensor
Computing the rotation tensor is vital in the analysis of deformable bodies. This paper describes an explicit expression for the SO(3) rotation tensor R of the deformation gradient F, and successfully establishes an intrinsic relation between the exponential mapping Q = exp A and the deformation F. As an application, Truesdell's simple shear deformation is revisited.
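The explicit formula of the paper is not reproduced here; as a cross-check, the rotation part of F can always be extracted numerically by a polar decomposition, as in the sketch below (the simple-shear example and the shear amount k are assumptions for illustration).

```python
# Numerical cross-check of the rotation part of a deformation gradient F = R U
# via polar decomposition (the standard construction, not the paper's explicit
# formula). Example: a simple-shear deformation gradient with shear amount k.
import numpy as np
from scipy.linalg import polar

k = 1.0
F = np.array([[1.0, k, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])       # simple shear

R, U = polar(F)                       # F = R @ U, R in SO(3), U symmetric positive definite
print(np.allclose(F, R @ U))          # True
print(np.allclose(R.T @ R, np.eye(3)), round(np.linalg.det(R), 6))  # orthogonal, det = +1
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(angle)                          # rotation angle about the z-axis (~ -26.6 deg for k = 1)
```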
Universal Scaling Law for the Velocity of Dominoes Toppling Motion
Subject: Physical Sciences, General & Theoretical Physics Keywords: dominoes; toppling motion; velocity; height; thickness; separation
By using directed dimensional analysis and data fitting, an explicit universal scaling law for the velocity of domino toppling motion is formulated. The scaling law shows that the domino propagation velocity is proportional to the 1/2 power of the domino separation and thickness, and to the -1/2 power of the domino height and gravitation. The study also proves that domino width and mass have no influence on the travelling velocity of the domino wave. The scaling law obtained in this Letter is very useful for the domino game and will help the player place the dominoes for high speed and estimate the speed quickly without doing complicated multi-body dynamical simulations.
Zheng's Scaling Laws of Jet Produced by Shaped Charge
Subject: Physical Sciences, Applied Physics Keywords: shaped charge; jet,stability; breakup diameter; breakup time; dimensional analysis
This Letter revisits the instability of the jet produced by a shaped charge. Scaling laws for the shaped-charge jet are derived by dimensional analysis. It is shown that the scaling laws are universal when the cross-sectional shrinkage rate is not included in the formulation.
Reynolds' Turbulence Solution
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulence; number of unknowns; the Reynolds stress tensor; RANS; turbulence closure problem
This study revisits the Reynolds-averaged Navier--Stokes equations (RANS) and finds that the existing literature is erroneous regarding the primary unknowns and the number of independent unknowns in the RANS. The literature claims that the Reynolds stress tensor has six independent unknowns, but in fact the six unknowns can be reduced to three that are functions of the three velocity fluctuation components, because the Reynolds stress tensor is simply an integration of a second-order dyadic tensor of flow velocity fluctuations rather than a general symmetric tensor. This difficult situation is resolved by returning to the time of Reynolds in 1895 and revisiting Reynolds' averaging formulation of turbulence. The study of turbulence modeling could focus on the velocity fluctuations instead of on the Reynolds stress. An advantage of modeling the velocity fluctuations is, from both physical and experimental perspectives, that the velocity fluctuation components are observable whereas the Reynolds stress tensor is not.
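The counting argument can be illustrated with a few lines of code on synthetic data (my own illustration, not the paper's): the six symmetric entries of the Reynolds stress tensor are all generated by averaging outer products of the same three fluctuation components.

```python
# Illustration (synthetic data, not from the paper): the Reynolds stress tensor
# R_ij = <u_i' u_j'> is built entirely from the three fluctuation components,
# so its six symmetric entries carry only three independent fields.
import numpy as np

rng = np.random.default_rng(0)
u_fluct = rng.standard_normal((3, 100_000))       # samples of u', v', w' at one point
R = (u_fluct @ u_fluct.T) / u_fluct.shape[1]      # ensemble average of u_i' u_j'
print(np.allclose(R, R.T))                        # symmetric: 6 distinct entries
print(R.shape, "built from", u_fluct.shape[0], "fluctuation components")
```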
Universal Scaling Laws of Origami Paper Springs
Subject: Keywords: origami; paper spring; elastic; bending; twist; deformation
This letter addresses an open question about paper springs raised by Yoneda (2019). Universal scaling laws of a paper spring are proposed by using both dimensional analysis and data fitting. It is found that the spring force obeys a square power law in the spring extension, but is strongly nonlinear in the total twist angle. Without any additional work, and with the help of dimensional analysis, we have generalized the scaling laws from a Poisson ratio of 0.3 to materials with an arbitrary Poisson's ratio.
Notes on the Lie Symmetry Exact Explicit Solutions for Nonlinear Burgers' Equation
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Lie group; Burgers equation; exact solution; general solution; elementary function
In light of Liu \emph{et al.}'s original works, this paper revisits the solution of Burgers' nonlinear equation $u_t=a(u_x)^2+bu_{xx}$. The study finds two exact and explicit solutions for the groups $G_4$ and $G_6$, as well as a general solution. A numerical simulation is carried out. A Maple code is provided in the appendix.
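The paper's own $G_4$ and $G_6$ solutions are not quoted in this abstract, but the equation $u_t=a(u_x)^2+bu_{xx}$ is linearized to the heat equation by the substitution $u=(b/a)\ln v$, so one explicit solution is easy to verify symbolically. The sketch below checks the illustrative solution $u=(b/a)\ln\big(1+e^{kx+bk^2t}\big)$ with SymPy; it is not claimed to be one of the solutions obtained in the paper.

```python
import sympy as sp

x, t, a, b, k = sp.symbols('x t a b k', positive=True)

# Illustrative exact solution from the heat-equation linearization u = (b/a)*ln v,
# with v = 1 + exp(k*x + b*k**2*t) solving v_t = b*v_xx.
u = (b / a) * sp.log(1 + sp.exp(k * x + b * k**2 * t))

# Residual of u_t = a*(u_x)**2 + b*u_xx; it should simplify to zero.
residual = sp.diff(u, t) - a * sp.diff(u, x)**2 - b * sp.diff(u, x, 2)
print(sp.simplify(residual))  # expected output: 0
```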
On Plastic Dislocation Density Tensor
Subject: Physical Sciences, Applied Physics Keywords: plastic dislocation density tensor
This letter attempts to clarify an issue regarding the proper definition of the plastic dislocation density tensor. This study shows that Ortiz's and Berdichevsky's plastic dislocation density tensors are equivalent to each other, but not to Kondo's. To fix the problem, we propose a modified version of Kondo's plastic dislocation density tensor.
An Intrinsic Formulation of Incompressible Navier-Stokes Turbulent Flow
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulence; mean velocity; fluctuation velocity; the Reynolds stress tensor; vorticity; turbulence closure problem
This paper proposes an explicit and simple representation of the velocity fluctuation and the Reynolds stress tensor in terms of the mean velocity field. The proposed turbulence equations are closed. The proposed formulation reveals that the mean vorticity is the key source of turbulence production. It is found that there would be no velocity fluctuations and no turbulence if there were no vorticity. As a natural consequence, the laminar-turbulence transition condition is obtained in a rational way.
On the Reynolds-Averaged Navier-Stokes Equations
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulence; the Reynolds stress tensor; turbulence closure problem
This paper attempts to clarify a long-standing issue about the number of unknowns in the Reynolds-Averaged Navier-Stokes equations (RANS). This study shows that the various perspectives on the number of unknowns in the RANS stem from a misinterpretation of the Reynolds stress tensor. The current literature considers the Reynolds stress tensor to have six unknown components; however, this study shows that it actually has only three, namely the three components of the fluctuation velocity. This understanding might shed light on the well-known closure problem of turbulence.
Classical and Quantum Kepler's Third Law of N-Body System
Subject: Physical Sciences, General & Theoretical Physics Keywords: Kepler's third law; n-body system; periodic orbits; dimensional analysis, classical and quantum mechanics
Inspired by the remarkable result obtained by Semay [1], this study revisits the generalised Kepler's third law of an n-body system from the perspective of dimensional analysis. To be compatible with Semay's quantum n-body result, this letter reports a conjecture which was not included in the author's earlier publication [2] but was formulated in the author's research memo. The new conjecture for a quantum N-body system is proposed as follows: $T_q|E_q|^{3/2} = \frac{\pi G}{\sqrt{2}}\left[\frac{\big(\sum_{i=1}^{N}\sum_{j=i+1}^{N} m_i m_j\big)^{3}}{\sum_{k=1}^{N} m_k}\right]^{1/2}$. This formula is, of course, consistent with Kepler's third law for a 2-body system, and is exactly the same as Semay's quantum result for identical bodies.
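As a quick sanity check on the classical side of this conjecture, the following sketch compares $T|E|^{3/2}$ for a bound two-body orbit, computed from the standard relations $E=-\frac{Gm_1m_2}{2a}$ and $T=2\pi\sqrt{a^3/[G(m_1+m_2)]}$, with the right-hand side of the conjecture for $N=2$; the masses and semi-major axis are arbitrary illustrative values.

```python
import math

G = 6.674e-11            # gravitational constant, SI units
m1, m2 = 5.0e24, 7.0e22  # illustrative masses [kg]
a = 3.8e8                # illustrative semi-major axis [m]

# Classical two-body values for a bound orbit.
E = -G * m1 * m2 / (2.0 * a)                           # total orbital energy
T = 2.0 * math.pi * math.sqrt(a**3 / (G * (m1 + m2)))  # orbital period

lhs = T * abs(E)**1.5

# Right-hand side of the conjecture for N = 2: sum_{i<j} m_i m_j = m1*m2.
rhs = math.pi * G / math.sqrt(2.0) * math.sqrt((m1 * m2)**3 / (m1 + m2))

print(lhs, rhs, lhs / rhs)  # the ratio should be 1 up to round-off
```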
Modification of Prandtl Wind Tunnel
Subject: Engineering, General Engineering Keywords: Wind tunnel, Prandtl's configuration, corners, vortex, turbulence, pressure loss
Wind tunnels are devices that enable researchers to study the flow over objects of interest, the forces acting on them, and their interaction with the flow, which nowadays plays an increasingly important role, not least because of noise pollution. Since the first closed-circuit wind tunnel with a variable cross-section was built in Göttingen, its Prandtl configuration has changed little. A wind tunnel with the Prandtl configuration has four corners with turning vanes, and more than 50% of the total pressure loss is caused by these corners and vanes. How to reduce the total pressure loss is a long-standing problem in wind tunnel design. This study proposes a novel wind tunnel configuration in which the corners are replaced by semi-circular tunnel sections. Sun wind tunnel 2 has only two corners with vanes, while Sun wind tunnel 1 has no corners and vanes at all. It is expected that the new wind tunnels can reduce the total pressure loss from 50% to 10%.
Some Second-Order Tensor Calculus Identities and Applications in Continuum Mechanics
Subject: Physical Sciences, Mathematical Physics Keywords: tensor; gradient; divergence; curl; elasto-plastic deformation
To extend the computational power of tensor analysis, we introduce four new definitions of tensor calculations. Some useful tensor identities have been proved. We demonstrate the application of these tensor identities in continuum mechanics: the momentum conservation law and the superposition of deformations.
Scaling Law for Liquid Splashing inside a Container Drop Impact on a Solid Surface
Subject: Physical Sciences, Applied Physics Keywords: liquid splashing; dimensional analysis; directed dimensional analysis
This letter attempts to find the splashing height of a liquid-filled container dropped onto a solid surface by dimensional analysis (DA). Two solutions are obtained, by traditional DA and by directed DA, without solving any governing equations. It is found that directed DA provides much more useful information than the traditional approach. This study shows that the central controlling parameter is the splash number $\mathrm{Sp}=\mathrm{Ga}\,\mathrm{La}^\beta=(\frac{gR^3}{\nu^2})(\frac{\sigma R}{\rho \nu^2})^\beta$, which captures the collective effect of the individual quantities. The splash height is given by $ \frac{h}{H}=(\frac{\rho\nu^2}{\sigma R})^\alpha f[\frac{gR^3}{\nu^2}(\frac{R\sigma}{\rho\nu^2})^\beta]=\frac{1}{\mathrm{La}^\alpha}f(\mathrm{Ga}\cdot \mathrm{La}^\beta)$. From the physics of the splash number, we can form a fairly good picture of the liquid splashing: jet propagation generates vortex streets at the container bottom due to the sudden pressure increase from the drop impact (water-hammer effect); these travel along the container sidewall to the centre of the container and subsequently excite a gravity wave on the liquid surface. The interaction between the gravitational, surface-tension and viscous forces is responsible for creating the droplet splash at the liquid surface.
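As a minimal sketch of how the dimensionless groups above would be evaluated, the snippet below computes Ga, La and Sp for water in a container of assumed radius; the exponent $\beta$ (and likewise $\alpha$ and the function $f$) is a fitted quantity in the paper that is not given in this abstract, so a placeholder value is used.

```python
# Illustrative evaluation of the Galilei number, Laplace number and splash number.
g = 9.81        # gravitational acceleration [m/s^2]
R = 0.05        # container radius [m] (assumed)
rho = 1000.0    # liquid density [kg/m^3] (water)
nu = 1.0e-6     # kinematic viscosity [m^2/s] (water)
sigma = 0.072   # surface tension [N/m] (water-air)

Ga = g * R**3 / nu**2            # Galilei number
La = sigma * R / (rho * nu**2)   # Laplace number

beta = 1.0                       # hypothetical exponent; in the paper beta comes from data fitting
Sp = Ga * La**beta               # splash number Sp = Ga * La**beta

print(f"Ga = {Ga:.3e}, La = {La:.3e}, Sp = {Sp:.3e}")
```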
The Riemann Spring
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Riemann Zeta function, matrix, tensor
This paper attempts to propose a Riemann spring model via an analogy between the Riemann Zeta function of a complex number and elastic springs connected in series.
The Riemann Zeta Function of a Matrix/Tensor
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Riemann Zeta function; matrix; tensor
This paper attempts to extend the Riemann Zeta function of a complex number to a function of a matrix and/or a tensor $A$, namely $$\zeta (A)=\sum _{n=1}^{\infty} \frac{1}{n^{A}}= \sum _{n=1}^{\infty} \sum_{k=1}^n\lambda_k A^k$$ and its inverse $$A=\sum _{n=1}^{\infty} \sum_{k=1}^n \mu(n)\lambda_k \zeta(A^k),$$ where $\mu(n)$ is the M\"obius function, $A$ is a complex matrix or tensor of any order, and $\lambda_k$ is an eigenvalue of the matrix/tensor $A$. This kind of calculation of the Riemann Zeta function has not previously appeared in the literature. Some examples are provided.
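The first equality above can be evaluated numerically by reading $n^{-A}$ as the matrix exponential $\exp(-A\ln n)$ and truncating the series. The sketch below does this for a small test matrix whose eigenvalues have real part greater than 1 (where the scalar series converges); this is a naive, assumed way to compute $\zeta(A)$ and is not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def zeta_matrix(A, n_terms=5000):
    """Truncated Dirichlet series zeta(A) = sum_{n>=1} n^{-A}, with n^{-A} = expm(-A*ln n)."""
    Z = np.zeros_like(A, dtype=complex)
    for n in range(1, n_terms + 1):
        Z += expm(-A * np.log(n))
    return Z

# Small test matrix; its eigenvalues (2 and 3) lie in the scalar convergence region Re(s) > 1.
A = np.array([[2.0, 0.3],
              [0.0, 3.0]], dtype=complex)

Z = zeta_matrix(A)
print(Z)
# For diagonalizable A the eigenvalues of zeta(A) should approach zeta(2) ~ 1.6449 and zeta(3) ~ 1.2021.
print(np.sort(np.linalg.eigvals(Z).real))
```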
On closure problem of incompressible turbulent flow
Subject: Physical Sciences, Fluids & Plasmas Keywords: Turbulence, the Reynolds stress tensor, turbulence closure problem
This paper attempts to clarify an issue regarding the long-unsolved closure problem of turbulence. This study shows that the various perspectives on the number of unknown quantities in the Reynolds turbulence equations stem from misunderstandings of the physics of the Reynolds stress tensor. The current literature agrees that the Reynolds stress tensor has six unknowns; however, this study shows that it actually has only three, namely the three components of the fluctuation velocity. With this new understanding, closed turbulence equations for incompressible flows are proposed.
A Novel Simplification of the Reynolds-Chou-Navier-Stokes Turbulence Equations of Incompressible Flow
Subject: Physical Sciences, Fluids & Plasmas Keywords: turbulence; mean velocity; fluctuation velocity; the Reynolds stress tensor; vorticity; turbulence closure problem
Based on the author's previous work [Sun, B. The Reynolds Navier-Stokes Turbulence Equations of Incompressible Flow Are Closed Rather Than Unclosed. Preprints 2018, 2018060461 (doi: 10.20944/preprints201806.0461.v1)], this paper proposes an explicit representation of the velocity fluctuation and formulates the Reynolds stress tensor in terms of the mean velocity field. The proposed closed Reynolds Navier-Stokes turbulence formulation reveals that the mean vorticity is the key source of turbulence production.
Dislocation Density Tensor of Thin Elastic Shells at Finite Deformation
Subject: Physical Sciences, Mathematical Physics Keywords: dislocation density tensor, thin shells, Riemann curvature tensor
The dislocation density tensors of thin elastic shells have been formulated explicitly in terms of the Riemann curvature tensor. The formulation reveals that the dislocation density of the shells is proportional to $K A^{3/2}$, where $K$ is the Gauss curvature and $A$ is the determinant of the metric tensor of the middle surface.
The Reynolds Navier-Stokes Turbulence Equations of Incompressible Flow Are Closed Rather Than Unclosed
This paper shows that the turbulence closure problem is not an issue at all. The mistakes in the literature regarding the number of unknown quantities in the Reynolds turbulence equations stem from misunderstandings of the physics of the Reynolds stress tensor: the literature states that the symmetric Reynolds stress tensor has six unknowns, whereas it actually has only three, namely the three components of the fluctuation velocity. We show that the integral-differential equations of the Reynolds mean and fluctuation equations number exactly eight, which equals the total number of unknown quantities, namely three components of mean velocity, three components of fluctuation velocity, one mean pressure and one fluctuation pressure. That is why we claim in this paper that the Reynolds Navier-Stokes turbulence equations of incompressible flow are closed rather than unclosed. This study may help to solve a puzzle that has eluded scientists and mathematicians for centuries.
Turbulent Poiseuille Flow Modelling by a Modified Prandtl-van Driest Mixing Length
Subject: Physical Sciences, Fluids & Plasmas Keywords: Turbulent flow; Poiseuille flow; Prandtl mixing length; high heels velocity profile; Reynolds number
The turbulent Poiseuille flow between two parallel plates is one of the simplest possible physical situations, and it has been studied intensively. In this paper, we propose a modified Prandtl-van Driest mixing length that satisfies both the boundary conditions and the wall-damping effect. With the new formulation, we solve the problem numerically and, moreover, propose an approximate analytical solution for the mean velocity. As an application of this solution, an approximate analytical friction coefficient of turbulent Poiseuille flow is proposed.
A Self-closed Turbulence Model for the Reynolds-averaged Navier-Stokes Equations
Subject: Physical Sciences, Fluids & Plasmas Keywords: Turbulence model; Reynolds stresses; RANS; validation rule
In this paper, a self-closed turbulence model without any adjustable parameters is formulated for the Reynolds-averaged Navier-Stokes equations. The validation rule for self-closed turbulence models is rigorously derived from the Reynolds-averaged Navier-Stokes equations. The rule is not affected by the turbulence modelling of the Reynolds stresses.
Similarity Solutions of Two Dimensional Turbulent Boundary Layers
Subject: Physical Sciences, Fluids & Plasmas Keywords: Turbulent boundary layers; laminar boundary layers; similarity transformation; similarity solution; Prandtl mixing length; Reynolds number
The exact similarity solutions (also called special exact solutions) of the two-dimensional laminar boundary layer were obtained by Blasius in 1908; however, no similarity solutions for two-dimensional turbulent boundary layers have been reported in the literature. With the help of dimensional analysis and the invariance principle, the Prandtl mixing length $\ell=\kappa y$ for the one-dimensional turbulent boundary layer is extended to $\ell(x,y)=\kappa y (1-\frac{y}{\delta})\sqrt{\frac{\nu}{U\delta}} $ for two-dimensional turbulent boundary layers; furthermore, with a similarity transformation, we transform the two-dimensional turbulent boundary-layer partial differential equations into a single ordinary differential equation $f'''+ ff''+\beta(1-f'^2)+\kappa^2[\eta^2(1-\eta)^2f''|f''|]'=0$. As an application, similarity solutions of the two-dimensional turbulent boundary layer on a flat plate at zero incidence are studied in detail. To solve the ordinary differential equation numerically, a complete Maple code is provided.
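The paper's numerical treatment is said to be provided as a Maple code; purely as an illustration of how the quoted ODE could be integrated, here is a hedged Python sketch that expands the $\kappa^2[\eta^2(1-\eta)^2 f''|f''|]'$ term, solves for $f'''$, and shoots on the wall curvature $f''(0)$. The domain $\eta\in[0,1]$ and the boundary conditions $f(0)=f'(0)=0$, $f'(1)=1$ are assumptions made here (the mixing-length factor vanishes at $\eta=1$), not conditions taken from the paper.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

kappa = 0.41   # von Karman constant (assumed value)
beta = 0.0     # zero pressure gradient (flat plate at zero incidence), assumed

def rhs(eta, y):
    """State y = (f, f', f''); f''' is isolated after expanding the |f''| term."""
    f, fp, fpp = y
    g = eta**2 * (1.0 - eta)**2
    gp = 2.0 * eta * (1.0 - eta) * (1.0 - 2.0 * eta)
    num = f * fpp + beta * (1.0 - fp**2) + kappa**2 * gp * fpp * abs(fpp)
    den = 1.0 + 2.0 * kappa**2 * g * abs(fpp)
    return [fp, fpp, -num / den]

def shoot(fpp0):
    """Residual f'(1) - 1 for a trial wall curvature f''(0) = fpp0."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# The bracket below is a guess and may need adjusting for other kappa, beta.
fpp0 = brentq(shoot, 0.1, 10.0)
print("wall value f''(0) =", fpp0)
```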
Similarity Solutions of Two Dimensional Turbulent Boundary Layers by Prandtl Mixing Length Modelling
The exact similarity solutions of the two-dimensional laminar boundary layer were obtained by Blasius in 1908; however, for two-dimensional turbulent boundary layers, no similarity solutions (special exact solutions) have ever been found. In the light of Blasius' pioneering work, we extend the Blasius similarity transformation to two-dimensional turbulent boundary layers and transform the turbulent boundary-layer partial differential equations into a single ordinary differential equation. Using the author's Maple code, we solve the ordinary differential equation numerically and produce some useful quantities.
Buckling Behavior of Different Types of Spherical Weave Structures Under Vertical Compression Loads
Guangkai Song, Bohua Sun
Subject: Engineering, Mechanical Engineering Keywords: Spherical weave structure; In-plane curvatures; Buckling; Test; Buckling load
Weaving technology can convert two-dimensional structures such as ribbons into three-dimensional structures by specific connections. However, most of the 3D structures fabricated by conventional weaving methods using straight ribbons have topological defects. In order to obtain smoother continuous 3D surface structures, Baek et al. proposed a novel weaving method that uses naturally curved (in-plane) ribbons to fabricate three-dimensional curved structures, and used this method to weave new spherical weave structures that are closer to perfect spheres. We believe that this new spherical weave structure with smooth geometric properties must correspond to new mechanical properties. To this end, we investigated the buckling characteristics of different types of spherical weave structures by a combination of testing and the finite element method. The calculations and experiments show that the failure of the spherical weave structure under vertical loading can be divided into two stages: a flat contact region forms between the spherical weave structure and the rigid plate, and then the ribbons dimple inward. Spherical weave structures woven from naturally curved (in-plane) ribbons have better buckling stability than those woven with straight ribbons. The vertical buckling load of spherical weave structures using naturally curved ribbons increases with the width and thickness of the ribbon. In addition, this paper combines testing, theory and finite element analysis to propose a buckling load equation and a buckling correction factor equation for the new spherical weave structure under vertical compression load.
Buckling Behavior of Different Types of Woven Structures Under Axial Compression Loads
Subject: Engineering, Mechanical Engineering Keywords: In-plane curvatures; Buckling; Woven structure; 3D print; Gaussian curvature
Weaving is an ancient and effective structural forming technique characterized by the ability to convert two-dimensional ribbons into three-dimensional structures. However, most 3D structures woven from straight ribbons have topological defects. Baek et al. proposed a method to weave smoother continuous 3D surface structures using naturally curved (in-plane) ribbons, obtained a new surface structure with relatively continuous variation of Gaussian curvature, and analyzed its geometric properties. We believe that this new 3D surface structure with smooth geometric properties must correspond to new mechanical properties. To this end, we investigated 3D surface structures woven from naturally curved (in-plane) ribbons; the calculations and experiments show that such structures have better buckling stability than those woven with straight ribbons. It is observed that the number of ribbons influences the buckling behavior of the different types of woven structures.
Nonlinear Investigation of Gol'denveizer's Problem of a Circular and Elliptic Elastic Torus
Subject: Engineering, Mechanical Engineering Keywords: circular torus; elliptic torus; finite element method; buckling; nonlinear analysis; Gaussian curvature
Gol'denveizer's problem of a torus has been analyzed by Audoly and Pomeau (2002) and Sun (2021). However, all of the investigations of Gol'denveizer's problem of an elastic torus so far have been linear. In this paper, the finite element method is used to address the problem more accurately, since the approach of Sun (2021) does not extend to a nonlinear analysis. We study the nonlinear mechanical behaviour of Gol'denveizer's problem for circular and elliptic tori, and the relevant nephograms are given. We study the buckling of Gol'denveizer's problem of an elastic torus, and propose failure patterns and force-displacement curves of tori in the nonlinear range. The investigations reveal that circular tori exhibit richer buckling phenomena as the parameter a increases. Gol'denveizer's problem for the buckling of an elliptic torus is analyzed, and we find a new buckling phenomenon called a "skirt." As a/b increases, the collapse load of an elliptic torus in the Gol'denveizer problem increases gradually.
Audoly-Pomeau Linear Law of the Gol'denveizer Torus
Subject: Engineering, Mechanical Engineering Keywords: circular torus; finite element method; analytical solution; Gaussian curvature
The Gol'denveizer problem of a torus was studied analytically by Audoly and Pomeau (2002), and the accuracy of the Audoly-Pomeau linear law was confirmed numerically by Sun (2021). However, the law does not include the major radius R of the torus. To find the influence of the major radius, we used finite element simulations of different cases and propose a modified Audoly-Pomeau linear law for the vertical deformation which includes R. A linear law for the horizontal deformation is presented as well. Our studies show that the Audoly-Pomeau linear law has high accuracy. With the modified vertical and horizontal deformations, a displacement-compatibility relation between them is formulated.
On the Pull-out Mechanical Properties of the Diabolical Ironclad Beetle Bionic Jigsaw Connection
Jie Wei, Bohua Sun
Subject: Physical Sciences, Applied Physics Keywords: Theoretical analysis; bionics; jigsaw connection; 3D Printing; finite element
In engineering, connections between components are often weak areas: unreasonable connection methods can easily reduce the strength of components, resulting in unpredictable failure modes. In nature, numerous connection methods for biological structures with excellent mechanical properties have evolved, and studying them can inspire new bionic connection methods. When the diabolical ironclad beetle is under pressure, its elytra do not separate easily, which ensures the stability of the beetle's external structure and makes the beetle extremely resistant to pressure. The reason is the interlocking and toughening effect of the unique jigsaw connection between the elytra. In this paper, therefore, a theoretical model is established and used to analyze the mechanical behavior of the diabolical ironclad beetle's jigsaw connection during the pull-out process, and to determine the influence of factors such as quantity, angle, and geometric characteristics on the mechanical properties of the jigsaw connection. The results of the theoretical analysis are then compared with the results of experiments and ABAQUS finite element simulations.
Clamping Force of A Multilayered Cylindrical Clamp with Internal Friction
Bohua Sun, Xiao-Lin Guo
Subject: Physical Sciences, Applied Physics Keywords: Clamping force; multilayer clamp; curvature; bending; friction; dissipation; deflection
Holding an object by a clamping force is a fundamental phenomenon, and layered or laminated architectures with internal sliding are an essential mechanism in natural and man-made structural systems. In this paper, we combine the layered architecture and the clamping mechanism to form a multilayered clamp and study the clamping force with internal friction. Our investigations show that the clamping force and the energy dissipation depend strongly on the number of layers, their geometry and elasticity, as well as the internal friction. The central goal of this study is not only to predict the clamping force of the clamp, but also to serve as a representative case that may offer clues about the universal behaviour of multilayered architectures with internal friction.
Assembly and Disassembly Mechanics of a Cylindrical Snap Fit
Xiao-Lin Guo, Bohua Sun
Subject: Physical Sciences, Applied Physics Keywords: Snap fit, elasticity, friction, geometry, beam, symmetry breaks
The snap fit is a common mechanical mechanism. It uses a physical asymmetry, easy to assemble but difficult to disassemble, to provide a simple and fast link between objects. The ingenious combination of geometric shape, bending elasticity and friction of the snap fit is the mechanism behind this asymmetry. Yoshida and Wada (2020) did groundbreaking work on the analysis of the elastic snap fit. While studying their paper, which we greatly enjoyed, we detected several questionable formulations. After careful checking, we found that these formulations are not typographical errors, and therefore corrections are necessary. This paper reformulates the linear elasticity of a cylindrical snap fit, obtains an exact solution and proposes an accurate relation between the opening angle and the bending tangent angle. Under a first-order approximation, our formulations reduce to the results of Yoshida and Wada, which confirms the scientific correctness of their work. Furthermore, this paper derives a corrected vertical displacement expression, proposes for the first time a new way of disassembly by bending, and formulates a scaling law by data fitting. All formulations are validated by finite element simulation and experiment. The research is helpful for the design of elastic snap fits, adjustable mechanical mechanisms and metamaterial cells.
Pneumatic Shape and Drag Scaling Laws of the Dandelion
Subject: Physical Sciences, Fluids & Plasmas Keywords: Dandelion; pappus; flexible filament; wind-dispersal; aerodynamic shape; drag; Reynolds number; scaling laws
The common dandelion uses a bundle of drag-enhancing bristles (the pappus) that enables seed dispersal over formidable distances; however, the scaling laws of the aerodynamic drag underpinning pappus-mediated flight remain unresolved. In this paper, we study the aerodynamic shape of the dandelion and the scaling law of its drag, and find that the drag coefficient is proportional to the -2/3 power of the pappus Reynolds number. As a by-product, an analytical expression for the terminal velocity of the dandelion seed is also obtained.
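To illustrate how a drag law of this form fixes the terminal velocity, the sketch below balances the seed weight against a drag coefficient $C_D=c\,\mathrm{Re}^{-2/3}$ and evaluates the resulting closed form; the prefactor $c$, the seed mass and the pappus diameter are hypothetical placeholders, not values from the paper.

```python
import math

# Hypothetical parameters (placeholders, not values from the paper).
c = 40.0       # prefactor in C_D = c * Re**(-2/3), assumed
m = 6.0e-7     # seed-plus-pappus mass [kg], assumed
D = 0.014      # pappus diameter [m], assumed
rho = 1.2      # air density [kg/m^3]
nu = 1.5e-5    # kinematic viscosity of air [m^2/s]
g = 9.81       # gravitational acceleration [m/s^2]

# Weight balance m*g = 0.5*rho*C_D*A*v**2, with C_D = c*(v*D/nu)**(-2/3) and A = pi*D**2/4,
# gives the closed form v = [8*m*g / (pi*rho*c*nu**(2/3)*D**(4/3))]**(3/4).
v = (8.0 * m * g / (math.pi * rho * c * nu**(2.0 / 3.0) * D**(4.0 / 3.0)))**0.75

Re = v * D / nu
print(f"terminal velocity ~ {v:.3f} m/s, Re ~ {Re:.1f}")
```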
Aerodynamic Shape and Drag Scaling Laws of Flexible Fibre in Flowing Medium
Subject: Physical Sciences, Applied Physics Keywords: flexible fibre; flow medium; aerodynamic shape; drag; scaling laws
The study of a flexible body immersed in a flowing medium is one of the best ways to find its aerodynamic shape. This Letter revisits the problem first studied by Alben, Shelley and Zhang (Nature 420, 479-481, 2002). The aerodynamic shape of the fibre is found by a simpler approach, and universal drag scaling laws for a flexible fibre in a flowing medium are proposed by using dimensional analysis. The Alben scaling law is generalized and confirmed to be universal. Our study shows that the Alben number is a measure of the maximum curvature of the fibre forced by the dynamic pressure. A complete Maple code is provided for finding the aerodynamic shape of the fibre in the flowing medium.
Bending Analysis of Integrated Multilayer Corrugated Sandwich Panel Based on 3D Printing
Wen Dang, Xuanting Liu, Bohua Sun
Subject: Materials Science, Polymers & Plastics Keywords: multi-layer core corrugated sandwich panel; three-point bending; 3D printing; core shape; number of core layers
Single-layer core corrugated sandwich panels generally consist of a corrugated core and two panels, while multi-layer core corrugated sandwich panels are formed by stacking multiple panels with multiple core layers. In this study, integrated multilayer core corrugated sandwich panels with different corrugated core shapes (triangular, trapezoidal, and rectangular) and different numbers of core layers were fabricated using 3D printing technology, and the mechanical behavior of such panels under quasi-static three-point bending was investigated using experiments and numerical simulations. The effects of core shape and number of core layers on the bending deformation process, damage mode, load-carrying capacity, and bending energy dissipation capacity of multilayer core sandwich panels are discussed. A parametric design of multilayer triangular core corrugated sandwich panels was also carried out with the finite element software ABAQUS. It was found that, through a combined design of different core shapes, a new multilayer corrugated sandwich panel can be obtained that is better than single-core-shape multilayer corrugated sandwich panels in terms of bending load capacity, energy dissipation capacity and deformation capacity.
Morphological Transformation of Arched Ribbon Driven by Torsion
Yuanfan Dai, Bohua Sun, Yi Zhang, Xiang Li
Subject: Physical Sciences, Acoustics Keywords: ribbon, morphological transformation, torsion, critical width
The morphological transformation of an arched ribbon driven by torsion is a scientific problem that is connected with daily life and requires thorough analysis. An arched ribbon can achieve an instantaneous high speed through energy transformation and then return to the original shape of the structure. In this paper, based on the characteristics of the ribbon structure, the dynamic mathematical model of the arched ribbon driven by torsion is established from the Kirchhoff rod equations. The variations of the Euler angles of each point on the center line of the ribbon with the arc coordinate s and the rotation angle of the supports $\phi$ were examined. The relationship between the internal force distribution of each point in the direction of $\hat{d}_{a}$ and the material, cross-sectional properties, and rotation angle of the supports was obtained. We used ABAQUS, a nonlinear finite element analysis tool, to simulate the morphological transformations of the ribbons, verified our theory against the simulation results, and reproduced the experimental results of Sano. Furthermore, we redefined Sano's concept of the ``critical flipping point''. The dimensional analysis method was used to fit the simulation data, and the following relationship between the critical width $ w ^{*} $, thickness h, and radius R of the ribbon with different cross sections was obtained: $w^{*}=A\cdot R\left(h/R\right)^{0.6}$, where A is 3.19 for rectangular cross sections and 3.06 for elliptical cross sections. By analyzing the simulation data, we determined the variation of the out-of-plane deflection of the center point of the ribbon with the radius R, width w, and thickness h. Our research has guiding significance for understanding and designing arched ribbons driven by torsion, and the results can be applied to problems of different scales.
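A power-law fit of the form $w^{*}=A\,R\,(h/R)^{p}$, as quoted above, is typically obtained by least squares; the sketch below shows one way to do this with SciPy on synthetic (made-up) data and is not the fitting procedure or the data set used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic illustration data generated from w* = 3.19 * R * (h/R)**0.6 plus a little noise.
rng = np.random.default_rng(0)
R = np.array([20.0, 30.0, 40.0, 50.0, 60.0])   # radii [mm], made up
h = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # thicknesses [mm], made up
w_star = 3.19 * R * (h / R)**0.6 * (1.0 + 0.01 * rng.standard_normal(R.size))

def model(x, A, p):
    r, hh = x
    return A * r * (hh / r)**p

(A_fit, p_fit), _ = curve_fit(model, (R, h), w_star, p0=(1.0, 0.5))
print(f"A ~ {A_fit:.3f}, exponent ~ {p_fit:.3f}")   # should recover roughly (3.19, 0.6)
```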
Spaghetti Breaking Dynamics
Yi Zhang, Xiang Li, Yuanfan Dai, Bohua Sun
Subject: Physical Sciences, Acoustics Keywords: curvature; brittle cracking; elastic rod; diameter-to-length ratio; spaghetti
Why are pieces of spaghetti generally broken into three to ten segments instead of the two one might expect? How can one obtain a desired number of fracture segments? To address these questions, in this paper a strand of spaghetti is treated as an elastic rod, and the finite-element software ABAQUS is used to simulate the detailed fracture dynamics of the rod. By changing the size (length and diameter) of the rod, data on the fracture limit curvature and the number of fractured segments are obtained. The ABAQUS simulation results confirm the scientific judgment of B. Audoly and S. Neukirch (Fragmentation of rods by cascading cracks: Why spaghetti does not break in half. Phys Rev Lett 95: 095505). Using dimensional analysis to fit the finite-element data, two relations of the elastic rod fracture dynamics are obtained: (1) the relationship between the fracture limit curvature and the diameter, and (2) the relationship between the number of fracture segments and the diameter-to-length ratio. The results reveal that, for a constant length, the larger the diameter, the smaller the limit curvature; and the larger the diameter-to-length ratio D/L, the fewer the fractured segments. The relevant formulations can be used to obtain a desired number of broken spaghetti segments by changing the diameter-to-length ratio.
Dynamics of Rubber Band Stretch Ejection
Xiang Li, Bohua Sun, Yi Zhang, Yuanfan Dai
Subject: Physical Sciences, Acoustics Keywords: ejection rubber band; elastodynamics; bending effect; hyperelastic materials; maple
Why do stretched rubber bands not hit the hand after ejection? What is the mechanism behind the rubber band ejection dynamics? These questions represent a fascinating scientific problem. Because the size of a rubber band in the circumferential direction is much larger than in the other two directions of its cross-section, we regard the rubber band as a slender beam and establish a mathematical model of the dynamics of rubber band stretching and ejection. Furthermore, we obtain the dependence of the dynamic curvature of the rubber band on the arc length and time. We used the finite element software ABAQUS to simulate the dynamic process of rubber band stretching and ejection, and performed dimensional analysis of the simulation results to examine the effect of the bending elastic rebound velocity. The mathematical model and simulation results revealed that the relationship between the curvature and time at the end of the rubber band ($s =0$) was as follows: $\kappa\sim t^{-{1}/{2}}$. This research has guiding significance for the design of rubber bands as elastic energy storage devices.
Bending Response and Energy Dissipation of Interlayered Slidable Friction Booklike-Plates
Bohua Sun, Wen Dang, Xuanting Liu, Xiao-Lin Guo
Subject: Physical Sciences, Applied Physics Keywords: Hardcover book; multilayer; plates; curvature; bending; friction; dissipation; deflection
To have better protection, strong toughness and good flexibility, all animals and plants have skins; similarly, all books should have covers. In this paper, considering interlayer friction as a perturbation, we formulate hardcover-book-like laminated plates with internal friction. For a quasi-static problem, a detailed analysis of the bending response and energy dissipation is carried out for a three-point-supported plate. Our numerical investigations show that the hardcover is more essential than the core layers in terms of the bending response, and that the energy dissipation per loading-unloading cycle can be made to vary by a considerable amount. The central goal of this study is not only to predict the bending deformation of the book-like plates, but also to serve as a representative case that may help find clues about the universal behaviour of multilayered architectures with internal friction.
The Shortest Crease
An interesting optimization problem has been offered by Henry Ernest Dudeney:
Fold a page, so that the bottom outside corner touches the inside edge and the crease is the shortest possible. That is about as simple a question as we could put, but it will puzzle a good many readers to discover just where to make the fold. I give two examples of folding:
It will be seen that the crease on the right is longer than that on the left, but the latter is not the shortest possible.
H. E. Dudeney, 536 Puzzles & Curious Problems, Charles Scribner's Sons, 1967
Bisect $AB$ in $M.$ Bisect $AM$ at $E.$ Draw the line at $M$ perpendicular to $AB$ and the semicircle with diameter $BE.$ Let $N$ be the intersection of the two.
The line $EN$ gives the direction of the shortest possible crease under the conditions.
The solution below is a straightforward application of the Pythagorean theorem and of the very beginning of calculus.
Taking into account that Dudeney "folds a page," assume $AD\gt AB=4.$ Denote $x=BE.$
Then successively: $AE=4-x;$ $EB'=BE=x;$ $AB'=\sqrt{x^{2}-(4-x)^{2}}.$
Further, triangles $AEB'$ and $FSB'$ are similar so that $\displaystyle FS=\frac{AE\cdot B'F}{AB'}=\frac{4(4-x)}{\sqrt{x^{2}-(4-x)^{2}}};$
$\begin{align} BS&=BF+FS\\ &=\sqrt{x^{2}-(4-x)^{2}}+\frac{4(4-x)}{\sqrt{x^{2}-(4-x)^{2}}}\\ &=\frac{x^{2}-(4-x)^{2}+4(4-x)}{\sqrt{x^{2}-(4-x)^{2}}}\\ &=\frac{4x}{\sqrt{x^{2}-(4-x)^{2}}}\\ &=\frac{2x}{\sqrt{2(x-2)}}. \end{align}$
$\begin{align} ES^{2}&=BE^2+BS^2\\ &=x^{2}+\frac{4x^{2}}{2(x-2)}\\ &=\frac{x^{3}}{x-2}\\ \end{align}$
Consider the function $\displaystyle f(x)=ES^{2}=\frac{x^{3}}{x-2}.$ (Minimizing $ES^{2}$ is equivalent to minimizing $ES.$)
$\displaystyle f'(x)=\frac{3x^{2}(x-2)-x^{3}}{(x-2)^{2}}=\frac{2x^{2}(x-3)}{(x-2)^2}.$
Setting $f'(x)=0$ gives $x=0$ and $x=3;$ $x=0$ leads to no crease, but $x=3$ does. We next find that $f''(3)\gt 0,$ so that $x=3$ is a local minimum.
This shows why Dudeney's answer is correct. With $x=3,$ $MN=\sqrt{2}$ and $BS=3\sqrt{2},$ implying that $ES$ passes through $N.$ Thus Dudeney's answer is a byproduct of the calculus solution. It would be exciting to arrive at his result without calculus.
Thus, according to Dudeney, the minimum crease of a page with base $4$ equals $3\sqrt{3}.$ This crease can be obtained for any page whose vertical size is at least $3\sqrt{2}.$ However, it is obvious that, with a vertical side $b\lt 3\sqrt{3},$ the vertical crease through the midpoint $M$ will be shorter than $3\sqrt{3}.$ Evidently, Dudeney assumed that, for a "page," the ratio of the vertical to the horizontal side exceeds $3\sqrt{3}/4.$
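As a check on the computation, here is a small SymPy sketch that minimizes $ES^{2}=x^{3}/(x-2)$ for $x\gt 2$ and recovers $x=3$ and the crease length $3\sqrt{3}.$

```python
import sympy as sp

x = sp.symbols('x', real=True)

f = x**3 / (x - 2)                 # ES^2 as a function of x = BE, for a page of base 4
crit = sp.solve(sp.diff(f, x), x)  # critical points of ES^2
print(crit)                        # [0, 3]; only x = 3 lies in the admissible range x > 2

x0 = sp.Integer(3)
print(sp.diff(f, x, 2).subs(x, x0) > 0)   # True: x = 3 is a local minimum
print(sp.sqrt(f.subs(x, x0)))             # 3*sqrt(3), the minimal crease length ES
```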
Paper Folding Geometry
An Interesting Example of Angle Trisection by Paperfolding
Angle Trisection by Paper Folding
Angles in Triangle Add to 180°
Broken Chord Theorem by Paper Folding
Dividing a Segment into Equal Parts by Paper Folding
Egyptian Triangle By Paper Folding
Egyptian Triangle By Paper Folding II
Egyptian Triangle By Paper Folding III
My Logo
Paper Folding And Cutting Sangaku
Parabola by Paper Folding
Radius of a Circle by Paper Folding
Regular Pentagon Inscribed in Circle by Paper Folding
Trigonometry by Paper Folding
Folding Square in a Line through the Center
Tangent of 22.5° - Proof Without Words
Regular Octagon by Paper Folding
Fold Square into Equilateral Triangle
Circle Center by Paperfolding
Folding and Cutting a Square
A Sample of Optimization Problems III
Mathematicians Like to Optimize
Mathematics in Pizzeria
The Distance to Look Your Best
Residence at an Optimal Distance
Distance Between Projections
Huygens' Problem
Optimization in a Crooked Trapezoid
Greatest Difference in Arithmetic Progression
Area Optimization in Trapezoid
Minimum under Two Constraints
Optimization with Many Variables
Minimum of a Cyclic Sum with Logarithms
A Problem with a Magical Solution from Secrets in Inequalities
Leo Giugiuc's Optimization with Constraint
Problem 4033 from Crux Mathematicorum
An Unusual Problem by Leo Giugiuc
A Cyclic Inequality With Constraint in Two Triples of Variables
Two Problems by Kunihiko Chikaya
An Inequality and Its Modifications
A 2-Variable Optimization From a China Competition
Can the unresolved X-ray background be explained by emission from the optically-detected faint galaxies of the GOODS project?
by M. A. Worsley; A. C. Fabian; F. E. Bauer; D. M. Alexander; W. N. Brandt; B. D. Lehmer
The emission from individual X-ray sources in the Chandra Deep Fields and XMM-Newton Lockman Hole shows that almost half of the hard X-ray background above 6 keV is unresolved and implies the existence of a missing population of heavily obscured active galactic nuclei (AGN). We have stacked the 0.5-8 keV X-ray emission from optical sources in the Great Observatories Origins Deep Survey (GOODS; which covers the Chandra Deep Fields) to determine whether these galaxies, which are individually...
Source: http://arxiv.org/abs/astro-ph/0602605v1
The XMM deep survey in the CDF-S. IX. An X-ray outflow in a luminous obscured quasar at z~1.6
by C. Vignali; K. Iwasawa; A. Comastri; R. Gilli; G. Lanzuisi; P. Ranalli; N. Cappelluti; V. Mainieri; I. Georgantopoulos; F. J. Carrera; J. Fritz; M. Brusa; W. N. Brandt; F. E. Bauer; F. Fiore; F. Tombesi
In active galactic nuclei (AGN)-galaxy co-evolution models, AGN winds and outflows are often invoked to explain why super-massive black holes and galaxies stop growing efficiently at a certain phase of their lives. They are commonly referred to as the leading actors of feedback processes. Evidence of ultra-fast (v>0.05c) outflows in the innermost regions of AGN has been collected in the past decade by sensitive X-ray observations for sizable samples of AGN, mostly at low redshift. Here we...
Topics: High Energy Astrophysical Phenomena, Astrophysics, Astrophysics of Galaxies
Source: http://arxiv.org/abs/1509.05413
The ALMA Frontier Fields Survey III: 1.1 mm Emission Line Detections in Abell 2744, MACSJ0416.1-2403, MACSJ1149.5+2223, Abell 370, and Abell S1063
by J. González-López; F. E. Bauer; M. Aravena; N. Laporte; L. Bradley; M. Carrasco; R. Carvajal; R. Demarco; R. Kneissl; A. M. Koekemoer; A. M. Muñoz Arancibia; P. Troncoso; E. Villard; A. Zitrin
Most sub-mm emission line studies of galaxies to date have targeted sources with known redshifts where the frequencies of the lines are well constrained. Recent blind line scans circumvent the spectroscopic redshift requirement, which could represent a selection bias. Our aim is to detect emission lines present in continuum oriented observations. The detection of such lines provides spectroscopic redshift and yields properties of the galaxies. We perform a search for emission lines in the ALMA...
Topics: Astrophysics of Galaxies, Astrophysics
A New, Faint Population of X-ray Transients
by F. E. Bauer; E. Treister; K. Schawinski; S. Schulze; B. Luo; D. M. Alexander; W. N. Brandt; A. Comastri; F. Forster; R. Gilli; D. A. Kann; K. Maeda; K. Nomoto; M. Paolillo; P. Ranalli; D. P. Schneider; O. Shemmer; M. Tanaka; A. Tolstov; N. Tominaga; P. Tozzi; C. Vignali; J. Wang; Y. Xue; G. Yang
We report on the detection of a remarkable new fast high-energy transient found in the Chandra Deep Field-South, robustly associated with a faint ($m_{\rm R}=27.5$ mag, $z_{\rm ph}$$\sim$2.2) host in the CANDELS survey. The X-ray event is comprised of 115$^{+12}_{-11}$ net 0.3-7.0 keV counts, with a light curve characterised by a $\approx$100 s rise time, a peak 0.3-10 keV flux of $\approx$5$\times$10$^{-12}$ erg s$^{-1}$ cm$^{-2}$, and a power-law decay time slope of $-1.53\pm0.27$. The...
Topics: Astrophysics, High Energy Astrophysical Phenomena
X-rays from the First Massive Black Holes
by W. N. Brandt; C. Vignali; D. P. Schneider; D. M. Alexander; S. F. Anderson; F. E. Bauer; X. Fan; G. P. Garmire; S. Kaspi; G. T. Richards
X-ray studies of high-redshift (z > 4) active galaxies have advanced substantially over the past few years, largely due to results from the new generation of X-ray observatories. As of this writing X-ray emission has been detected from nearly 60 high-redshift active galaxies. This paper reviews the observational results and their implications for models of the first massive black holes, and it discusses future prospects for the field.
NuSTAR reveals the extreme properties of the super-Eddington accreting SMBH in PG 1247+267
by G. Lanzuisi; M. Perna; A. Comastri; M. Cappi; M. Dadina; A. Marinucci; A. Masini; G. Matt; F. Vagnetti; C. Vignali; D. R. Ballantyne; F. E. Bauer; S. E. Boggs; W. N. Brandt; M. Brusa; F. E. Christensen; W. W. Craig; A. C. Fabian; D. Farrah; C. J. Hailey; F. A. Harrison; B. Luo; E. Piconcelli; S. Puccetti; C. Ricci; C. Saez; D. Stern; D. J. Walton; W. W. Zhang
PG1247+267 is one of the most luminous known quasars at $z\sim2$ and is a strongly super-Eddington accreting SMBH candidate. We obtained NuSTAR data of this intriguing source in December 2014 with the aim of studying its high-energy emission, leveraging the broad band covered by the new NuSTAR and the archival XMM-Newton data. Several measurements are in agreement with the super-Eddington scenario for PG1247+267: the soft power law ($\Gamma=2.3\pm0.1$); the weak ionized Fe emission line and a...
A Possible New Population of Sources with Extreme X-Ray / Optical Ratios
by Anton M. Koekemoer; D. M. Alexander; F. E. Bauer; J. Bergeron; W. N. Brandt; E. Chatzichristou; S. Cristiani; S. M. Fall; N. A. Grogin; M. Livio; V. Mainieri; L. Moustakas; P. Padovani; P. Rosati; E. J. Schreier; C. M. Urry
We describe a possible new class of X-ray sources that have robust detections in ultra-deep Chandra data, yet have no detections at all in our deep multi-band GOODS Hubble Space Telescope (HST) ACS images, which represent the highest quality optical imaging obtained to date on these fields. These extreme X-ray / Optical ratio sources ("EXO"s) have values of Fx/Fopt at least an order of magnitude above those generally found for other AGN, even those that are harbored by reddened hosts....
Uncovering the Near-IR Dwarf Galaxy Population of the Coma Cluster with Spitzer IRAC
by L. P. Jenkins; A. E. Hornschemeier; B. Mobasher; D. M. Alexander; F. E. Bauer
We present the first results of a Spitzer IRAC (Infrared Array Camera) wide-field survey of the Coma cluster. The observations cover two fields of different galaxy densities; the first is a 0.733 deg^2 region in the core of the cluster (Coma 1), the second a 0.555 deg^2 off-center region located ~57 arcmin (1.7 Mpc) south-west from the core (Coma 3). The observations, although short 70-90 s exposures, are very sensitive; we detect ~29,200 sources at 3.6 micron over the total ~1.3 deg^2 survey...
Source: http://arxiv.org/abs/0705.3681v2
Broadband Observations of the Compton-thick Nucleus of NGC 3393
by Michael J. Koss; C. Romero-Canizales; L. Baronchelli; S. H. Teng; M. Balokovic; S. Puccetti; F. E. Bauer; P. Arevalo; R. Assef; D. R. Ballantyne; W. N. Brandt; M. Brightman; A. Comastri; P. Gandhi; F. A. Harrison; B. Luo; K. Schawinski; D. Stern; E. Treister
We present new NuSTAR and Chandra observations of NGC 3393, a galaxy reported to host the smallest separation dual AGN resolved in the X-rays. While past results suggested a 150 pc separation dual AGN, three times deeper Chandra imaging, combined with adaptive optics and radio imaging suggest a single, heavily obscured, radio-bright AGN. Using VLA and VLBA data, we find an AGN with a two-sided jet rather than a dual AGN and that the hard X-ray, UV, optical, NIR, and radio emission are all from...
A "high-hard" outburst of the black hole X-ray binary GS 1354-64
by K. I. I. Koljonen; D. M. Russell; J. M. Corral-Santana; M. Armas Padilla; T. Muñoz-Darias; F. Lewis; M. Coriat; F. E. Bauer
We study in detail the evolution of the 2015 outburst of GS 1354-64 (BW Cir) at optical, UV and X-ray wavelengths using Faulkes Telescope South/LCOGT, SMARTS and Swift. The outburst was found to stay in the hard X-ray state, albeit being anomalously luminous with a peak luminosity of L$_{X} >$ 0.15 L$_{Edd}$, which could be the most luminous hard state observed in a black hole X-ray binary. We found that the optical/UV emission is tightly correlated with the X-ray emission, consistent with...
The nature of supernovae 2010O and 2010P in Arp 299 - II. Radio emission
by C. Romero-Cañizales; R. Herrero-Illana; M. A. Pérez-Torres; A. Alberdi; E. Kankare; F. E. Bauer; S. D. Ryder; S. Mattila; J. E. Conway; R. J. Beswick; T. W. B. Muxlow
We report radio observations of two stripped-envelope supernovae (SNe), 2010O and 2010P, which exploded within a few days of each other in the luminous infrared galaxy Arp 299. Whilst SN 2010O remains undetected at radio frequencies, SN 2010P was detected (with an astrometric accuracy better than 1 milli arcsec in position) in its optically thin phase in epochs ranging from ~1 to ~3yr after its explosion date, indicating a very slow radio evolution and a strong interaction of the SN ejecta with...
Topics: Astrophysics of Galaxies, Astrophysics, Cosmology and Nongalactic Astrophysics
Source: http://arxiv.org/abs/1403.1036
The NuSTAR View of Nearby Compton-thick AGN: The Cases of NGC 424, NGC 1320 and IC 2560
by M. Baloković; A. Comastri; F. A. Harrison; D. M. Alexander; D. R. Ballantyne; F. E. Bauer; S. E. Boggs; W. N. Brandt; M. Brightman; F. E. Christensen; W. W. Craig; A. Del Moro; P. Gandhi; C. J. Hailey; M. Koss; G. B. Lansbury; B. Luo; G. M. Madejski; A. Marinucci; G. Matt; C. B. Markwardt; S. Puccetti; C. S. Reynolds; G. Risaliti; E. Rivers; D. Stern; D. J. Walton; W. W. Zhang
We present X-ray spectral analyses for three Seyfert 2 active galactic nuclei, NGC 424, NGC 1320, and IC 2560, observed by NuSTAR in the 3-79 keV band. The high quality hard X-ray spectra allow detailed modeling of the Compton reflection component for the first time in these sources. Using quasi-simultaneous NuSTAR and Swift/XRT data, as well as archival XMM-Newton data, we find that all three nuclei are obscured by Compton-thick material with column densities in excess of ~5 x $10^{24}$...
Topics: Astrophysics of Galaxies, High Energy Astrophysical Phenomena, Astrophysics
Reliable Identification of Compton-thick Quasars at z~2: Spitzer Mid-IR spectroscopy of HDF-oMD49
by D. M. Alexander; R. R. Chary; A. Pope; F. E. Bauer; W. N. Brandt; E. Daddi; M. Dickinson; D. Elbaz; N. A. Reddy
Many models that seek to explain the origin of the unresolved X-ray background predict that Compton-thick Active Galactic Nuclei (AGNs) are ubiquitious at high redshift. However, few distant Compton-thick AGNs have been reliably identified to date. Here we present Spitzer-IRS spectroscopy and 3.6-70um photometry of a z=2.2 optically identified AGN (HDF-oMD49) that is formally undetected in the 2Ms Chandra Deep Field-North (CDF-N) survey. The Spitzer-IRS spectrum and spectral energy distribution...
Young Galaxy Candidates in the Hubble Frontier Fields IV. MACS J1149.5+2223
by W. Zheng; A. Zitrin; L. Infante; N. Laporte; X. X. Huang; J. Moustakas; H. C. Ford; X. W. Shu; J. X. Wang; J. M. Diego; F. E. Bauer; P. T. Iribarren; T. Broadhurst; A. Molino
We search for high-redshift dropout galaxies behind the Hubble Frontier Fields (HFF) galaxy cluster MACS J1149.5+2223, a powerful cosmic lens that has revealed a number of unique objects in its field. Using the deep images from the Hubble and Spitzer space telescopes, we find 11 galaxies at z>7 in the MACS J1149.5+2223 cluster field, and 11 in its parallel field. The high-redshift nature of the bright z~9.6 galaxy MACS1149-JD, previously reported by Zheng et al., is further supported by...
Mid-infrared luminous quasars in the GOODS-Herschel fields: a large population of heavily-obscured, Compton-thick quasars at z~2
by A. Del Moro; D. M. Alexander; F. E. Bauer; E. Daddi; D. D. Kocevski; D. H. McIntosh; F. Stanley; W. N. Brandt; D. Elbaz; C. M. Harrison; B. Luo; J. R. Mullaney; Y. Q. Xue
We present the infrared (IR) and X-ray properties of a sample of 33 mid-IR luminous quasars ($\nu$L(6 micron)>6x10$^{44}$ erg/s) at redshift z~1-3, identified through detailed spectral energy distribution analyses of distant star-forming galaxies, using the deepest IR data from Spitzer and Herschel in the GOODS-Herschel fields. The aim is to constrain the fraction of obscured, and Compton-thick (CT, N$_H$>1.5x10$^{24}$ cm$^{-2}$) quasars at the peak era of nuclear and star-formation...
The Phoenix galaxy as seen by NuSTAR
by A. Masini; A. Comastri; S. Puccetti; M. Baloković; P. Gandhi; M. Guainazzi; F. E. Bauer; S. E. Boggs; P. G. Boorman; M. Brightman; F. E. Christensen; W. W. Craig; D. Farrah; C. J. Hailey; F. A. Harrison; M. J. Koss; S. M. LaMassa; C. Ricci; D. Stern; D. J. Walton; W. W. Zhang
Aims. We study the long-term variability of the well-known Seyfert 2 galaxy Mrk 1210 (a.k.a. UGC 4203, or the Phoenix galaxy). Methods. The source was observed by many X-ray facilities in the last 20 years. Here we present a NuSTAR observation and put the results in context of previously published observations. Results. NuSTAR observed Mrk 1210 in 2012 for 15.4 ks. The source showed Compton-thin obscuration similar to that observed by Chandra, Suzaku, BeppoSAX and XMM-Newton over the past two...
The Chandra Deep Field North Survey. XII. The Link Between Faint X-ray and Radio Source Populations
by F. E. Bauer; D. M. Alexander; W. N. Brandt; A. E. Hornschemeier; C. Vignali; G. P. Garmire; D. P. Schneider
We investigate the relationship between faint X-ray and 1.4 GHz radio source populations detected within 3' of the Hubble Deep Field North using the 1 Ms Chandra and 40 uJy VLA surveys. Within this region, we find that ~42% of the 62 X-ray sources have radio counterparts and ~71% of the 28 radio sources have X-ray counterparts; thus a 40 uJy VLA survey at 1.4 GHz appears to be well-matched to a 1 Ms Chandra observation. Among the different source populations sampled, we find that the majority...
The NuSTAR Extragalactic Surveys: Overview and Catalog from the COSMOS Field
by F. Civano; R. C. Hickox; S. Puccetti; A. Comastri; J. R. Mullaney; L. Zappacosta; S. M. LaMassa; J. Aird; D. M. Alexander; D. R. Ballantyne; F. E. Bauer; W. N. Brandt; S. E. Boggs; F. E. Christensen; W. W. Craig; A. Del-Moro; M. Elvis; K. Forster; P. Gandhi; B. W. Grefenstette; C. J. Hailey; F. A. Harrison; G. B. Lansbury; B. Luo; K. Madsen; C. Saez; D. Stern; E. Treister; M. C. Urry; D. R. Wik; W. Zhang
To provide the census of the sources contributing to the X-ray background peak above 10 keV, NuSTAR is performing extragalactic surveys using a three-tier "wedding cake" approach. We present the NuSTAR survey of the COSMOS field, the medium sensitivity and medium area tier, covering 1.7 deg2 and overlapping with both Chandra and XMM-Newton data. This survey consists of 121 observations for a total exposure of ~3 Ms. To fully exploit these data, we developed a new detection strategy,...
Young Galaxy Candidates in the Hubble Frontier Fields. I. Abell 2744
by W. Zheng; X. Shu; J. Moustakas; A. Zitrin; H. C. Ford; X. Huang; T. Broadhurst; A. Molino; J. M. Diego; L. Infante; F. E. Bauer; D. D. Kelson; R. Smit
We report the discovery of 24 Lyman-break candidates at 7
NuSTAR unveils a Compton-thick Type 2 quasar in Mrk 34
by P. Gandhi; G. B. Lansbury; D. M. Alexander; D. Stern; P. Arévalo; D. R. Ballantyne; M. Baloković; F. E. Bauer; S. E. Boggs; W. N. Brandt; M. Brightman; F. E. Christensen; A. Comastri; W. W. Craig; A. Del Moro; M. Elvis; A. C. Fabian; C. J. Hailey; F. A. Harrison; R. C. Hickox; M. Koss; S. M. LaMassa; B. Luo; G. M. Madejski; A. F. Ptak; S. Puccetti; S. H. Teng; C. M. Urry; D. J. Walton; W. W. Zhang
We present Nustar 3-40 keV observations of the optically selected Type 2 quasar (QSO2) SDSS J1034+6001 or Mrk 34. The high-quality hard X-ray spectrum and archival XMM-Newton data can be fitted self-consistently with a reflection-dominated continuum and strong Fe Kalpha fluorescence line with equivalent-width >1 keV. Prior X-ray spectral fitting below 10 keV showed the source to be consistent with being obscured by Compton-thin column densities of gas along the line-of-sight, despite...
Topics: Astrophysics of Galaxies, High Energy Astrophysical Phenomena, Astrophysics, Cosmology and...
A Chandra Study of the Circinus Galaxy Point-Source Population
by F. E. Bauer; W. N. Brandt; R. M. Sambruna; G. Chartas; G. P. Garmire; S. Kaspi; H. Netzer
We have used the Chandra X-ray Observatory to resolve spatially and spectrally the X-ray emission from the Circinus Galaxy. We report here on the nature of the X-ray emission from the off-nuclear point sources associated with the disk of Circinus. We find that many of the serendipitous X-ray sources are concentrated along the optical disk of the galaxy, but few have optical counterparts within 1" of their X-ray positions down to V=23-25. At 3.8 Mpc, their intrinsic 0.5-10 keV luminosities...
NuSTAR catches the unveiling nucleus of NGC 1068
by A. Marinucci; S. Bianchi; G. Matt; D. M. Alexander; M. Balokovic; F. E. Bauer; W. N. Brandt; P. Gandhi; M. Guainazzi; F. A. Harrison; K. Iwasawa; M. Koss; K. K. Madsen; F. Nicastro; S. Puccetti; C. Ricci; D. Stern; D. J. Walton
We present a NuSTAR and XMM-Newton monitoring campaign in 2014/2015 of the Compton-thick Seyfert 2 galaxy, NGC 1068. During the August 2014 observation, we detect with NuSTAR a flux excess above 20 keV ($32\pm6 \%$) with respect to the December 2012 observation and to a later observation performed in February 2015. We do not detect any spectral variation below 10 keV in the XMM-Newton data. The transient excess can be explained by a temporary decrease of the column density of the obscuring...
Nuclear Activity is more prevalent in Star-Forming Galaxies
by D. J. Rosario; P. Santini; D. Lutz; H. Netzer; F. E. Bauer; S. Berta; B. Magnelli; P. Popesso; D. Alexander; W. N. Brandt; R. Genzel; R. Maiolino; J. R. Mullaney; R. Nordon; A. Saintonge; L. Tacconi; S. Wuyts
We explore the question of whether low and moderate luminosity Active Galactic Nuclei (AGNs) are preferentially found in galaxies that are undergoing a transition from active star formation to quiescence. This notion has been suggested by studies of the UV-to-optical colors of AGN hosts, which find them to be common among galaxies in the so-called "Green Valley", a region of galaxy color space believed to be composed mostly of galaxies undergoing star-formation quenching. Combining...
Weak Hard X-ray Emission from Broad Absorption Line Quasars: Evidence for Intrinsic X-ray Weakness
by B. Luo; W. N. Brandt; D. M. Alexander; D. Stern; S. H. Teng; P. Arévalo; F. E. Bauer; S. E. Boggs; F. E. Christensen; A. Comastri; W. W. Craig; D. Farrah; P. Gandhi; C. J. Hailey; F. A. Harrison; M. Koss; P. Ogle; S. Puccetti; C. Saez; A. E. Scott; D. J. Walton; W. W. Zhang
We report NuSTAR observations of a sample of six X-ray weak broad absorption line (BAL) quasars. These targets, at z=0.148-1.223, are among the optically brightest and most luminous BAL quasars known at z 330 times weaker than expected for typical quasars. Our results from a pilot NuSTAR study of two low-redshift BAL quasars, a Chandra stacking analysis of a sample of high-redshift BAL quasars, and a NuSTAR spectral analysis of the local BAL quasar Mrk 231 have already suggested the existence...
Cosmic evolution and metal aversion in super-luminous supernova host galaxies
by S. Schulze; T. Krühler; G. Leloudas; J. Gorosabel; A. Mehner; J. Buchner; S. Kim; E. Ibar; R. Amorín; R. Herrero-Illana; J. P. Anderson; F. E. Bauer; L. Christensen; M. de Pasquale; A. de Ugarte Postigo; A. Gallazzi; J. Hjorth; N. Morrell; D. Malesani; M. Sparre; B. Stalder; A. A. Stark; C. C. Thöne; J. C. Wheeler
The SUperluminous Supernova Host galaxIES (SUSHIES) survey aims to provide strong new constraints on the progenitors of superluminous supernovae (SLSNe) by understanding the relationship to their host galaxies. Here, we present the photometric properties of 53 H-poor and 16 H-rich SLSN host galaxies out to $z\sim4$. We model the spectral energy distributions of the hosts to derive physical properties (e.g., stellar mass and star-formation-rate distribution functions), which we compare with...
Topics: Astrophysics, Astrophysics of Galaxies
Rapidly growing black holes and host galaxies in the distant Universe from the Herschel Radio Galaxy Evolution Project
by G. Drouart; C. De Breuck; J. Vernet; N. Seymour; M. Lehnert; P. Barthel; F. E. Bauer; E. Ibar; A. Galametz; M. Haas; N. Hatch; J. R. Mullaney; N. Nesvadba; B. Rocca-Volmerange; H. J. A. Rottgering; D. Stern; D. Wylezalek
We present results from a survey of 70 radio galaxies (RGs) at redshifts 12.5 are higher than the sSFR of typical star-forming galaxies over the same redshift range but are similar or perhaps lower than the galaxy population for RGs at z
A New Population of Compton-Thick AGN Identified Using the Spectral Curvature Above 10 keV
by Michael J. Koss; R. Assef; M. Balokovic; D. Stern; P. Gandhi; I. Lamperti; D. M. Alexander; D. R. Ballantyne; F. E. Bauer; S. Berney; W. N. Brandt; A. Comastri; N. Gehrels; F. A. Harrison; G. Lansbury; C. Markwardt; C. Ricci; E. Rivers; K. Schawinski; E. Treister; C. Megan Urry
We present a new metric that uses the spectral curvature (SC) above 10 keV to identify Compton-thick AGN in low-quality Swift BAT X-ray data. Using NuSTAR, we observe nine high SC-selected AGN. We find that high-sensitivity spectra show the majority are Compton-thick (78% or 7/9) and the remaining two are nearly Compton-thick (NH~5-8x10^23 cm^-2). We find the SC_bat and SC_nustar measurements are consistent, suggesting this technique can be applied to future telescopes. We tested the SC method...
Topics: Astrophysics, High Energy Astrophysical Phenomena, Astrophysics of Galaxies
Searching for molecular outflows in Hyper-Luminous Infrared Galaxies
by D. Calderón; F. E. Bauer; S. Veilleux; J. Graciá-Carpio; E. Sturm; P. Lira; S. Schulze; S. Kim
We present constraints on the molecular outflows in a sample of five Hyper-Luminous Infrared Galaxies using Herschel observations of the OH doublet at 119 {\mu}m. We have detected the OH doublet in three cases: one purely in emission and two purely in absorption. The observed emission profile has a significant blueshifted wing suggesting the possibility of tracing an outflow. Out of the two absorption profiles, one seems to be consistent with the systemic velocity while the other clearly...
Weighing the Black Holes in z~2 Submillimeter-Emitting Galaxies Hosting Active Galactic Nuclei
by D. M. Alexander; W. N. Brandt; I. Smail; A. M. Swinbank; F. E. Bauer; A. W. Blain; S. C. Chapman; K. E. K. Coppin; R. J. Ivison; K. Menendez-Delmestre
We place direct observational constraints on the black-hole masses of the cosmologically important z~2 submillimeter-emitting galaxy (SMG; f850>4mJy) population, and use measured host-galaxy masses to explore their evolutionary status. We employ the well-established virial black-hole mass estimator to 'weigh' the black holes of a sample of z~2 SMGs with broad Halpha or Hbeta emission. The average black-hole mass and Eddington ratio (eta) of the lower-luminosity broad-line SMGs (L_X~10^44...
NuSTAR unveils a heavily obscured low-luminosity Active Galactic Nucleus in the Luminous Infrared Galaxy NGC 6286
by C. Ricci; F. E. Bauer; E. Treister; C. Romero-Canizales; P. Arevalo; K. Iwasawa; G. C. Privon; D. B. Sanders; K. Schawinski; D. Stern; M. Imanishi
We report the detection of a heavily obscured Active Galactic Nucleus (AGN) in the luminous infrared galaxy (LIRG) NGC 6286, identified in a 17.5 ks NuSTAR observation. The source is in an early merging stage, and was targeted as part of our ongoing NuSTAR campaign observing local luminous and ultra-luminous infrared galaxies in different merger stages. NGC 6286 is clearly detected above 10 keV and, by including the quasi-simultaneous Swift/XRT and archival XMM-Newton and Chandra data, we find...
Topics: Astrophysics, High Energy Astrophysical Phenomena, Cosmology and Nongalactic Astrophysics,...
Supermassive Black Hole Growth in Starburst Galaxies over Cosmic Time: Constraints from the Deepest Chandra Fields
by D. A. Rafferty; W. N. Brandt; D. M. Alexander; Y. Q. Xue; F. E. Bauer; B. D. Lehmer; B. Luo; C. Papovich
We present an analysis of deep multiwavelength data for z ~ 0.3-3 starburst galaxies selected by their 70 um emission in the Extended-Chandra Deep Field-South and Extended Groth Strip. We identify active galactic nuclei (AGNs) in these infrared sources through their X-ray emission and quantify the fraction that host an AGN. We find that the fraction depends strongly on both the mid-infrared color and rest-frame mid-infrared luminosity of the source, rising to ~ 50-70% at the warmest colors and...
Spectroscopy of superluminous supernova host galaxies. A preference of hydrogen-poor events for extreme emission line galaxies
by G. Leloudas; S. Schulze; T. Kruehler; J. Gorosabel; L. Christensen; A. Mehner; A. de Ugarte Postigo; R. Amorin; C. C. Thoene; J. P. Anderson; F. E. Bauer; A. Gallazzi; K. G. Helminiak; J. Hjorth; E. Ibar; D. Malesani; N. Morrell; J. Vinko; J. C. Wheeler
Superluminous supernovae (SLSNe) are very bright explosions that were only discovered recently and that show a preference for occurring in faint dwarf galaxies. Understanding why stellar evolution yields different types of stellar explosions in these environments is fundamental in order to both uncover the elusive progenitors of SLSNe and to study star formation in dwarf galaxies. In this paper, we present the first results of our project to study SUperluminous Supernova Host galaxIES, focusing...
The NuSTAR X-ray spectrum of the low-luminosity AGN in NGC 7213
by F. Ursini; A. Marinucci; G. Matt; S. Bianchi; A. Tortosa; D. Stern; P. Arévalo; D. R. Ballantyne; F. E. Bauer; A. C. Fabian; F. A. Harrison; A. M. Lohfink; C. S. Reynolds; D. J. Walton
We present an analysis of the 3-79 keV NuSTAR spectrum of the low-luminosity active galactic nucleus NGC 7213. In agreement with past observations, we find a lower limit to the high-energy cut-off of Ec > 140 keV, no evidence for a Compton-reflected continuum, and the presence of an iron Kalpha complex, possibly produced in the broad-line region. From the application of the MYTorus model, we find that the line-emitting material is consistent with the absence of a significant Compton...
Topics: High Energy Astrophysical Phenomena, Astrophysics
NuSTAR observations of WISE J1036+0449, a Galaxy at z$\sim1$ obscured by hot dust
by C. Ricci; R. J. Assef; D. Stern; R. Nikutta; D. M. Alexander; D. Asmus; D. R. Ballantyne; F. E. Bauer; A. W. Blain; S. Boggs; P. G. Boorman; W. N. Brandt; M. Brightman; C. S. Chang; C. -T. J. Chen; F. E. Christensen; A. Comastri; W. W. Craig; T. Díaz-Santos; P. R. Eisenhardt; D. Farrah; P. Gandhi; C. J. Hailey; F. A. Harrison; H. D. Jun; M. J. Koss; S. LaMassa; G. B. Lansbury; C. B. Markwardt; M. Stalevski; F. Stanley; E. Treister; C. -W. Tsai; D. J. Walton; J. W. Wu; L. Zappacosta; W. W. Zhang
Hot, Dust-Obscured Galaxies (Hot DOGs), selected from the WISE all sky infrared survey, host some of the most powerful Active Galactic Nuclei (AGN) known, and might represent an important stage in the evolution of galaxies. Most known Hot DOGs are at $z> 1.5$, due in part to a strong bias against identifying them at lower redshift related to the selection criteria. We present a new selection method that identifies 153 Hot DOG candidates at $z\sim 1$, where they are significantly brighter and...
The XMM-Newton serendipitous survey IV. The AXIS X-ray source counts and angular clustering
by F. J. Carrera; J. Ebrero; S. Mateos; M. T. Ceballos; A. Corral; X. Barcons; M. J. Page; S. R. Rosen; M. G. Watson; J. Tedds; R. Della Ceca; T. Maccacaro; H. Brunner; M. Freyberg; G. Lamer; F. E. Bauer; Y. Ueda
AXIS (An XMM-Newton International Survey) is a survey of 36 high Galactic latitude XMM-Newton observations covering 4.8 deg2 and containing 1433 serendipitous X-ray sources detected with 5-sigma significance. We have studied the X-ray source counts in four energy bands soft (0.5-2 keV), hard (2-10 keV), XID (0.5-4.5 keV) and ultra-hard (4.5-7.5 keV). We have combined this survey with shallower and deeper surveys. Our source counts results are compatible with most previous samples in the soft,...
The nature of the torus in the heavily obscured AGN Markarian 3: an X-ray study
by M. Guainazzi; G. Risaliti; H. Awaki; P. Arevalo; F. E. Bauer; S. Bianchi; S. E. Boggs; W. N. Brandt; M. Brightman; F. E. Christensen; W. W. Craig; K. Forster; C. J. Hailey; F. Harrison; M. Koss; A. Longinotti; C. Markwardt; A. Marinucci; G. Matt; C. S. Reynolds; C. Ricci; D. Stern; J. Svoboda; D. Walton; W. Zhang
In this paper we report the results of an X-ray monitoring campaign on the heavily obscured Seyfert galaxy Markarian 3 carried out between the fall of 2014 and the spring of 2015 with NuSTAR, Suzaku and XMM-Newton. The hard X-ray spectrum of Markarian 3 is variable on all the time scales probed by our campaign, down to a few days. The observed continuum variability is due to an intrinsically variable primary continuum seen in transmission through a large, but still Compton-thin column density...
The 2 Ms Chandra Deep Field-North Survey and the 250 ks Extended Chandra Deep Field-South Survey: Improved Point-Source Catalogs
by Y. Q. Xue; B. Luo; W. N. Brandt; D. M. Alexander; F. E. Bauer; B. D. Lehmer; G. Yang
We present improved point-source catalogs for the 2 Ms Chandra Deep Field-North (CDF-N) and the 250 ks Extended Chandra Deep Field-South (E-CDF-S), implementing a number of recent improvements in Chandra source-cataloging methodology. For the CDF-N/E-CDF-S, we provide a main catalog that contains 683/1003 X-ray sources detected with wavdetect at a false-positive probability threshold of $10^{-5}$ that also satisfy a binomial-probability source-selection criterion of $P
Variability Selected Low-Luminosity Active Galactic Nuclei in the 4 Ms Chandra Deep Field-South
by M. Young; W. N. Brandt; Y. Q. Xue; M. Paolillo; D. M. Alexander; F. E. Bauer; B. D. Lehmer; B. Luo; O. Shemmer; D. P. Schneider; C. Vignali
The 4 Ms Chandra Deep Field-South (CDF-S) and other deep X-ray surveys have been highly effective at selecting active galactic nuclei (AGN). However, cosmologically distant low-luminosity AGN (LLAGN) have remained a challenge to identify due to significant contribution from the host galaxy. We identify long-term X-ray variability (~month-years, observed frame) in 20 of 92 CDF-S galaxies spanning redshifts z~0.08-1.02 that do not meet other AGN selection criteria. We show that the observed...
Photometric Redshifts in the Hawaii-Hubble Deep Field-North (H-HDF-N)
by G. Yang; Y. Q. Xue; B. Luo; W. N. Brandt; D. M. Alexander; F. E. Bauer; W. Cui; X. Kong; B. D. Lehmer; J. -X. Wang; X. -B. Wu; F. Yuan; Y. -F. Yuan; H. Y. Zhou
We derive photometric redshifts (\zp) for sources in the entire ($\sim0.4$ deg$^2$) Hawaii-Hubble Deep Field-North (\hdfn) field with the EAzY code, based on point spread function-matched photometry of 15 broad bands from the ultraviolet (\bandu~band) to mid-infrared (IRAC 4.5 $\mu$m). Our catalog consists of a total of 131,678 sources. We evaluate the \zp~quality by comparing \zp~with spectroscopic redshifts (\zs) when available, and find a value of normalized median absolute deviation...
Resolving the Source Populations that Contribute to the X-ray Background: The 2 Ms Chandra Deep Field-North Survey
by D. M. Alexander; F. E. Bauer; W. N. Brandt; G. P. Garmire; A. E. Hornschemeier; D. P. Schneider; C. Vignali
With ~2 Ms of Chandra exposure, the Chandra Deep Field-North (CDF-N) survey provides the deepest view of the Universe in the 0.5-8.0 keV band. Five hundred and three (503) X-ray sources are detected down to on-axis 0.5-2.0 keV and 2-8 keV flux limits of ~1.5x10^{-17} erg cm^{-2} s^{-1} and ~1.0x10^{-16} erg cm^{-2} s^{-1}, respectively. These flux limits correspond to L_{0.5-8.0 keV}~3x10^{41} erg s^{-1} at z=1 and L_{0.5-8.0 keV}~2x10^{43} erg s^{-1} at z=6; thus this survey is sensitive...
The X-ray Properties of the Cometary Blue Compact Dwarf galaxies Mrk 59 and Mrk 71
by T. X. Thuan; F. E. Bauer; Y. I. Izotov
We present XMM-Newton and Chandra observations of two low-metallicity cometary blue compact dwarf (BCD) galaxies, Mrk 59 and Mrk 71. The first BCD, Mrk 59, contains two ultraluminous X-ray (ULX) sources, IXO 72 and IXO 73, both associated with bright massive stars and H II complexes, as well as one fainter extended source associated with a massive H II complex at the head of the cometary structure. The low-metallicity of Mrk 59 appears to be responsible for the presence of the two ULXs. IXO 72...
{\it NuSTAR} Reveals an Intrinsically X-ray Weak Broad Absorption Line Quasar in the Ultraluminous Infrared Galaxy Markarian 231
by Stacy H. Teng; W. N. Brandt; F. A. Harrison; B. Luo; D. M. Alexander; F. E. Bauer; S. E. Boggs; F. E. Christensen; A. Comastri; W. W. Craig; A. C. Fabian; D. Farrah; F. Fiore; P. Gandhi; B. W. Grefenstette; C. J. Hailey; R. C. Hickox; K. K. Madsen; A. F. Ptak; J. R. Rigby; G. Risaliti; C. Saez; D. Stern; S. Veilleux; D. J. Walton; D. R. Wik; W. W. Zhang
We present high-energy (3--30 keV) {\it NuSTAR} observations of the nearest quasar, the ultraluminous infrared galaxy (ULIRG) Markarian 231 (Mrk 231), supplemented with new and simultaneous low-energy (0.5--8 keV) data from {\it Chandra}. The source was detected, though at much fainter levels than previously reported, likely due to contamination in the large apertures of previous non-focusing hard X-ray telescopes. The full band (0.5--30 keV) X-ray spectrum suggests the active galactic nucleus...
The KMOS AGN Survey at High redshift (KASHz): the prevalence and drivers of ionised outflows in the host galaxies of X-ray AGN
by C. M. Harrison; D. M. Alexander; J. R. Mullaney; J. P. Stott; A. M. Swinbank; V. Arumugam; F. E. Bauer; R. G. Bower; A. J. Bunker; R. M. Sharples
We present the first results from the KMOS AGN Survey at High redshift (KASHz), a VLT/KMOS integral-field spectroscopic survey of z>0.6 AGN. We present galaxy-integrated spectra of 89 X-ray AGN (Lx=10^42-10^45 erg/s), for which we observed [O III] (z=1.1-1.7) or Halpha emission (z=0.6-1.1). The targets have X-ray luminosities representative of the parent AGN population and we explore the emission-line luminosities as a function of X-ray luminosity. For the [O III] targets, ~50 per cent have...
Topics: Astrophysics, Astrophysics of Galaxies, High Energy Astrophysical Phenomena
Infrared power-law galaxies in the Chandra Deep Field South: AGN and ULIRGs
by A. Alonso-Herrero; P. G. Perez-Gonzalez; D. M. Alexander; G. H. Rieke; D. Rigopoulou; E. Le Floc'h; P. Barmby; C. Papovich; J. R. Rigby; F. E. Bauer; W. N. Brandt; E. Egami; S. P. Willner; H. Dole; J. -S. Huang
We investigate the nature of a sample of 92 Spitzer/MIPS 24 micron selected galaxies in the CDFS, showing power law-like emission in the Spitzer/IRAC 3.6-8 micron bands. The main goal is to determine whether the galaxies not detected in X-rays (47% of the sample) are part of the hypothetical population of obscured AGN not detected even in deep X-ray surveys. The majority of the IR power-law galaxies are ULIRGs at z>1, and those with LIRG-like IR luminosities are usually detected in X-rays....
Compton-thick Accretion in the local Universe
by C. Ricci; Y. Ueda; M. J. Koss; B. Trakhtenbrot; F. E. Bauer; P. Gandhi
Heavily obscured accretion is believed to represent an important stage in the growth of supermassive black holes, and to play an important role in shaping the observed spectrum of the Cosmic X-ray Background (CXB). Hard X-ray (E$>$10 keV) selected samples are less affected by absorption than samples selected at lower energies, and are therefore one of the best ways to detect and identify Compton-thick (CT, $\log N_{\rm\,H}\geq 24$) Active Galactic Nuclei (AGN). In this letter we present the...
Correlations between bright submillimetre sources and low-redshift galaxies
by O. Almaini; J. S. Dunlop; C. J. Willott; D. M. Alexander; F. E. Bauer; C. T. Liu
We present evidence for a positive angular correlation between bright submillimetre sources and low-redshift galaxies. The study was conducted using 39 sources selected from 3 contiguous, flux-limited SCUBA surveys, cross-correlated with optical field galaxies with magnitudes R 10mJy. We conduct Monte-Carlo simulations of clustered submm populations, and find that the probability of obtaining these correlations by chance is less than 0.4 per cent. The results may suggest that a larger than...
The Chandra Deep Field-South Survey: 7 Ms Source Catalogs
by B. Luo; W. N. Brandt; Y. Q. Xue; B. Lehmer; D. M. Alexander; F. E. Bauer; F. Vito; G. Yang; A. R. Basu-Zych; A. Comastri; R. Gilli; Q. -S. Gu; A. E. Hornschemeier; A. Koekemoer; T. Liu; V. Mainieri; M. Paolillo; P. Ranalli; P. Rosati; D. P. Schneider; O. Shemmer; I. Smail; M. Sun; P. Tozzi; C. Vignali; J. -X. Wang
We present X-ray source catalogs for the $\approx7$ Ms exposure of the Chandra Deep Field-South (CDF-S), which covers a total area of 484.2 arcmin$^2$. Utilizing WAVDETECT for initial source detection and ACIS Extract for photometric extraction and significance assessment, we create a main source catalog containing 1008 sources that are detected in up to three X-ray bands: 0.5-7.0 keV, 0.5-2.0 keV, and 2-7 keV. A supplementary source catalog is also provided including 47 lower-significance...
Identifications and Photometric Redshifts of the 2 Ms Chandra Deep Field-South Sources
by B. Luo; W. N. Brandt; Y. Q. Xue; M. Brusa; D. M. Alexander; F. E. Bauer; A. Comastri; A. Koekemoer; B. D. Lehmer; V. Mainieri; D. A. Rafferty; D. P. Schneider; J. D. Silverman; C. Vignali
[Abridged] We present reliable multiwavelength identifications and high-quality photometric redshifts for the 462 X-ray sources in the ~2 Ms Chandra Deep Field-South. Source identifications are carried out using deep optical-to-radio multiwavelength catalogs, and are then combined to create lists of primary and secondary counterparts for the X-ray sources. We identified reliable counterparts for 446 (96.5%) of the X-ray sources, with an expected false-match probability of ~6.2%. A...
The NuSTAR Serendipitous Survey: The 40 month Catalog and the Properties of the Distant High Energy X-ray Source Population
by G. B. Lansbury; D. Stern; J. Aird; D. M. Alexander; C. Fuentes; F. A. Harrison; E. Treister; F. E. Bauer; J. A. Tomsick; M. Balokovic; A. Del Moro; P. Gandhi; M. Ajello; A. Annuar; D. R. Ballantyne; S. E. Boggs; N. Brandt; M. Brightman; C. J. Chen; F. E. Christensen; F. Civano; A. Comastri; W. W. Craig; K. Forster; B. W. Grefenstette; C. J. Hailey; R. Hickox; B. Jiang; H. Jun; M. Koss; S. Marchesi; A. D. Melo; J. R. Mullaney; G. Noirot; S. Schulze; D. J. Walton; L. Zappacosta; W. Zhang
We present the first full catalog and science results for the NuSTAR serendipitous survey. The catalog incorporates data taken during the first 40 months of NuSTAR operation, which provide ~20Ms of effective exposure time over 331 fields, with an areal coverage of 13 sq deg, and 497 sources detected in total over the 3-24 keV energy range. There are 276 sources with spectroscopic redshifts and classifications, largely resulting from our extensive campaign of ground-based spectroscopic followup....
The Evolution of Normal Galaxy X-ray Emission Through Cosmic History: Constraints from the 6 Ms Chandra Deep Field-South
by B. D. Lehmer; A. R. Basu-Zych; S. Mineo; W. N. Brandt; R. T. Eufrasio; T. Fragos; A. E. Hornschemeier; B. Luo; Y. Q. Xue; F. E. Bauer; M. Gilfanov; P. Ranalli; D. P. Schneider; O. Shemmer; P. Tozzi; J. R. Trump; C. Vignali; J. -X. Wang; M. Yukita; A. Zezas
We present measurements of the evolution of normal-galaxy X-ray emission from $z \approx$ 0-7 using local galaxies and galaxy samples in the 6 Ms Chandra Deep Field-South (CDF-S) survey. The majority of the CDF-S galaxies are observed at rest-frame energies above 2 keV, where the emission is expected to be dominated by X-ray binary (XRB) populations; however, hot gas is expected to provide small contributions to the observed- frame < 1 keV emission at $z < 1$. We show that a single...
Topics: Astrophysics, Cosmology and Nongalactic Astrophysics, Astrophysics of Galaxies
Annotating activation/inhibition relationships to protein-protein interactions using gene ontology relations
Volume 12 Supplement 1
Selected articles from the 16th Asia Pacific Bioinformatics Conference (APBC 2018): systems biology
Soorin Yim, Hasun Yu, Dongjin Jang & Doheon Lee
BMC Systems Biology volume 12, Article number: 9 (2018)
Signaling pathways can be reconstructed by identifying 'effect types' (i.e. activation/inhibition) of protein-protein interactions (PPIs). Effect types are composed of 'directions' (i.e. upstream/downstream) and 'signs' (i.e. positive/negative), so both directions and signs of PPIs are required to predict signaling events from PPI networks. Here, we propose a computational method for systematically annotating effect types to PPIs using relations between functional information of proteins.
We used regulates, positively regulates, and negatively regulates relations in Gene Ontology (GO) to predict directions and signs of PPIs. These relations indicate both directions and signs between GO terms so that we can project directions and signs between relevant GO terms to PPIs. Independent test results showed that our method is effective for predicting both directions and signs of PPIs. Moreover, our method outperformed a previous GO-based method that did not consider the relations between GO terms. We annotated effect types to human PPIs and validated several highly confident effect types against literature. The annotated human PPIs are available in Additional file 2 to aid signaling pathway reconstruction and network biology research.
We annotated effect types to PPIs by using regulates, positively regulates, and negatively regulates relations in GO. We demonstrated that those relations are effective for predicting not only signs, but also directions of PPIs. The usefulness of those relations suggests their potential applications to other types of interactions such as protein-DNA interactions.
A cell reacts to stimuli through signaling pathways, in which proteins physically interact with each other to transmit signals. Those signals propagate inside a cell, causing various responses such as cell proliferation and differentiation [1,2,3,4]. Abnormal signal transduction triggers aberrant biological processes that might result in diseases such as cancer [2,3,4,5]. To understand how such signals flow, various high-throughput experiments have been developed to detect protein-protein interactions (PPIs), such as yeast two-hybrid and affinity purification-mass spectrometry [6]. Even though such high-throughput experiments can determine whether two proteins bind to each other, they are not sufficient for reconstructing signaling pathways.
To reconstruct signaling pathways from PPI networks, we need to know two aspects of PPIs: 'directions', and 'signs'. Directions of PPIs represent upstream/downstream relationships, indicating the direction of signal flow. Signs of PPIs represent whether the interactions have positive effects or negative effects. By combining directions with signs, we can define activation/inhibition relationships of PPIs, which we call 'effect types'.
Effect types are indispensable not only for reconstructing signaling pathways but also for other research areas such as network pharmacology [7, 8]. Without directions, we cannot establish causality, which leads to many false positive results arising from mistaking an effect for a cause [8]. Without signs, we cannot distinguish whether a result is desirable or harmful. For example, when signs are unavailable for drug-disease associations, we cannot differentiate whether a drug cures a disease or causes it as a side effect [9].
Although effect types are important, no experimental method is available that determines effect types of PPIs in a high-throughput way. To address this need, several computational methods have been proposed to predict signs of PPIs systematically [10,11,12]. Based on the data they used, previous works can be categorized into phenotype-based methods [10, 11] and a Gene Ontology (GO) [13]-based method [12]. Phenotype-based methods used RNA interference (RNAi) screening to identify phenotypes that were affected by a gene knockdown. They then predicted signs of PPIs based on the hypothesis that proteins resulting in similar phenotypes would interact positively [10, 11]. Even though these methods were effective, they have two limitations. First, they ignored directions even though their aim was predicting effect types. Second, the predicted signs cannot be applied generically to human PPIs: because conducting RNAi screening for all proteins is experimentally expensive, these methods were applied only to Drosophila melanogaster [10] or to HeLa cells [11].
To overcome these limitations, a recent method utilized GO, which is more directly related to proteins [12]. Its hypothesis was that proteins with similar GO annotations would interact positively. It used GO terms as features for representing PPIs and trained an L2-regularized logistic regression model. Even though it improved performance by using more direct data, it has three main limitations. First, it still did not consider direction, leaving the causality between two proteins unknown. Second, similar GO annotations do not necessarily mean that two proteins interact positively: whether a PPI is positive or negative, the two proteins interact in either case, so negatively interacting proteins might also participate in the same biological process or have similar molecular functions. In fact, exactly the same feature encoding was used for predicting whether two proteins interact or not, treating positive PPIs and negative PPIs equally [13]. Third, it did not consider GO relations. However, GO has positively regulates and negatively regulates relations, which indicate signs between GO terms. These relations might help to distinguish negative PPIs in which one protein negatively regulates a biological process in which the other protein participates. Moreover, those relations indicate directions between GO terms, suggesting their potential use in predicting effect types of molecular interactions.
Here, we propose a method for annotating directions as well as signs to PPIs. We hypothesized that directions and signs between GO terms, represented by regulates, positively regulates, and negatively regulates relations, can be used for predicting directions and signs of PPIs. The rationale behind this hypothesis is as follows. Assume that proteins p1 and p2 interact with each other, and that there is a significant tendency for GO terms involving p1 to positively regulate GO terms involving p2. Since p1 and p2 interact, this tendency of positive regulation might result from activation of p2 by p1. Based on this hypothesis, we first predicted directions of undirected, unsigned PPIs. Then, we predicted the sign for each directed, unsigned PPI. PPIs were represented by features generated from regulates, positively regulates, and negatively regulates relations, and logistic regression models were trained to predict directions and signs. Independent test results demonstrated that our method outperforms the previous GO-based method, especially for negative PPIs. In addition, we annotated effect types to human PPIs and validated highly confident predictions against the literature.
Method overview
The overall method for annotating effect types to PPIs is illustrated in Fig. 1. The input was an undirected, unsigned PPI network. For each undirected, unsigned PPI, we first predicted its direction. We trained two logistic regression models that predicted whether a signal can flow in the left-to-right direction and in the right-to-left direction, respectively. The two models shared the same feature vectors, which were composed of pairs of GO terms between which a regulates, positively regulates, or negatively regulates relation holds. By combining the outputs of these two models, we decided the final direction of a PPI as one of 'left-to-right', 'right-to-left', and 'bi-directional'. Then, we predicted the sign for each directed, unsigned PPI; if the PPI is bi-directional, we predicted a sign for each direction. For predicting signs, we trained two logistic regression models that predicted whether a directed PPI can act as activation and as inhibition, respectively. The two models shared identical feature vectors, composed of pairs of GO terms between which a positively regulates or negatively regulates relation holds. By combining the outputs of these two models, we decided the final effect type as one of the following: 'activation', 'inhibition', 'activation&inhibition', and 'affect'. As a result, we obtained PPIs with effect types.
Method overview. a Input was an undirected, unsigned PPI network. b, c, d For each undirected, unsigned PPI, direction was predicted. We trained two logistic regression models that predicted whether the signal can flow in left-to-right direction and right-to-left direction, respectively. Feature vector was composed of pairs of GO terms between which regulates, positively regulates, or negatively regulates relation holds. e A directed, unsigned PPI was obtained as a result of direction prediction. f A sign of PPI was predicted for each directed PPI. We trained two logistic regression models that predicted whether a directed PPI can act as activation or inhibition, respectively. Feature vector was composed of a pair of GO terms between which positively regulates or negatively regulates relation holds. g, h As a result of direction prediction and sign prediction, we annotated effect types to PPI network. Abbreviations: GO, gene ontology; PPI, protein-protein interaction; LR, logistic regression
PPI dataset
We collected three PPI datasets: a training set, an independent test set, and a prediction set. To gather reliable datasets, we applied the following policies to all datasets: (1) We collected human PPIs only. (2) We removed functional associations and self-interactions. (3) We collected PPIs only when at least one regulates relation holds between GO terms to which the constituent proteins are annotated. (4) We mapped protein families to their members. (5) We integrated multiple instances of the same PPI to remove redundancy.
For the training set and the independent test set, we determined directions for each protein pair, and then determined the sign for each directed PPI. For proteins protein 1 (p1) and protein 2 (p2), if a signal can flow in only one direction, for example from p1 to p2, the direction is 'uni-directional'. On the other hand, if the signal can flow in both directions depending on the context, the direction is 'bi-directional'. We then determined the sign for each directed PPI; if a PPI is bi-directional, we determined the sign for each direction independently. The same directed PPI can act as both activation and inhibition, depending on the context. For example, naked cuticle (NKD) binds to dishevelled segment polarity protein (DVL). This PPI acts as a switch from the canonical Wnt signaling pathway to the planar cell polarity (PCP) Wnt signaling pathway [14]. This means that NKD inhibits DVL with respect to the canonical Wnt signaling pathway, whereas NKD activates DVL with respect to the PCP Wnt signaling pathway. To deal with such context dependency, we categorized effect types of PPIs into four classes: 'activation', 'inhibition', both activation and inhibition possible depending on the context ('activation&inhibition'), and neither activation nor inhibition ('affect').
We collected PPIs with known effect types as a training set from the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway database [15]. KEGG is a manually curated pathway database and contains the largest number of PPIs whose effect types are known. Following the policies, we collected 'PPrels' whose subtypes were one of the following: activation, inhibition, phosphorylation, dephosphorylation, glycosylation, and methylation.
We gathered another set of PPIs with known effect types as an independent test set from Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) [16]. STRING is an integrated database for protein-protein associations, including functional and inferred associations. To secure reliable PPIs, we collected PPIs that were experimentally validated. Moreover, we used PPIs whose scores were higher than 800 out of 1000, which resulted in about 1.28% of the PPIs available in STRING. In addition, we removed PPIs that were in the training set.
We collected the prediction set, whose effect types were previously unknown and were predicted by our method, from the Biological General Repository for Interaction Datasets (BioGRID) [17]. We collected multi-validated PPIs that were validated in at least two experimental systems or two publications, and removed PPIs that were already in the training set or the independent test set. As a result, we obtained 20,192 PPIs as a training set, 3420 PPIs as an independent test set, and 28,742 PPIs as a prediction set, as shown in Table 1.
Table 1 PPI dataset statistics
GO dataset
We collected the ontologies and GO annotations from GO [18]. We defined the concepts of 'regulator' and 'regulatee' for GO terms. A regulator is a GO term that regulates another GO term; if it positively regulates another GO term, it is a positive regulator, whereas it is a negative regulator if it negatively regulates one. Hereinafter, we collectively refer to regulates, positively regulates, and negatively regulates as (positively/negatively) regulates when any of them is applicable. A regulatee is a GO term that is (positively/negatively) regulated by another GO term. For example, 'chromatin silencing' negatively regulates 'transcription, DNA-templated'. Therefore, 'chromatin silencing' is a negative regulator whereas 'transcription, DNA-templated' is a regulatee.
To find all (positive/negative) regulators, we composed GO relations to form a composite relation, such that
$$ \mathrm{relation}\ 1 \circ \mathrm{relation}\ 2 \to \mathrm{composite\ relation} $$
For instance, composing is a with positively regulates yields positively regulates. Since 'actin nucleation' is a 'positive regulation of actin filament polymerization', which positively regulates 'actin filament polymerization', 'actin nucleation' becomes a positive regulator. This is called 'relation reasoning', and all possible composite relations are listed in Additional file 1: Table S1. We applied relation reasoning iteratively to find all (positive/negative) regulators, thereby increasing the coverage of our method. Hereinafter, we do not differentiate whether a regulator regulates a regulatee directly, or indirectly through a composite relation. The statistics of GO terms are shown in Fig. 2. Among 10,940 molecular function terms, 256 were regulators; among 29,584 biological process terms, 11,820 were regulators.
The statistics of GO terms. a Among 10,940 molecular function terms, 256 terms (2.3%) were regulators. b There were 11,820 (40.0%) regulators among 29,584 biological process terms, among which 67 terms were both positive and negative regulator
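The relation-reasoning step can be viewed as a fixed-point computation over GO edges. The following minimal Python sketch assumes GO is available as (child, relation, parent) triples and uses only a hand-picked subset of the composition rules (the full table is in Additional file 1: Table S1); it is illustrative, not the implementation used in the paper.

```python
from collections import defaultdict

# Subset of composition rules for chaining GO relations; e.g. composing
# "is_a" with "positively_regulates" yields "positively_regulates".
COMPOSE = {
    ("is_a", "regulates"): "regulates",
    ("is_a", "positively_regulates"): "positively_regulates",
    ("is_a", "negatively_regulates"): "negatively_regulates",
    ("regulates", "is_a"): "regulates",
    ("positively_regulates", "is_a"): "positively_regulates",
    ("negatively_regulates", "is_a"): "negatively_regulates",
}

def find_regulation_triples(edges):
    """Iteratively compose relations until a fixed point is reached and
    return all (regulator, relation, regulatee) triples."""
    triples = set(edges)
    while True:
        by_source = defaultdict(list)
        for src, rel, dst in triples:
            by_source[src].append((rel, dst))
        new = set()
        for src, rel1, mid in triples:
            for rel2, dst in by_source[mid]:
                comp = COMPOSE.get((rel1, rel2))
                if comp and (src, comp, dst) not in triples:
                    new.add((src, comp, dst))
        if not new:
            return {t for t in triples if "regulates" in t[1]}
        triples |= new

# Toy example from the text: 'actin nucleation' is_a 'positive regulation of
# actin filament polymerization', which positively_regulates
# 'actin filament polymerization'; composing the two makes 'actin nucleation'
# a positive regulator of 'actin filament polymerization'.
edges = [
    ("actin nucleation", "is_a",
     "positive regulation of actin filament polymerization"),
    ("positive regulation of actin filament polymerization",
     "positively_regulates", "actin filament polymerization"),
]
for triple in sorted(find_regulation_triples(edges)):
    print(triple)
```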
Feature generation for representing PPIs
To encode the directions and signs between GO terms, we defined the concept of a p1 → p2 (positive/negative) regulation pair. A p1 → p2 (positive/negative) regulation pair is a pair of a (positive/negative) regulator and a corresponding regulatee, in which protein p1 is annotated to the (positive/negative) regulator and protein p2 is annotated to the regulatee. For example, if protein p1 is annotated to 'chromatin silencing' and protein p2 is annotated to 'transcription, DNA-templated', then 'chromatin silencing' and 'transcription, DNA-templated' constitute a p1 → p2 negative regulation pair as depicted in Fig. 3a.
Feature generation for directions and signs. a 'chromatin silencing' negatively regulates 'transcription, DNA-templated'. Thus, 'chromatin silencing' is a negative regulator whereas 'transcription, DNA-templated' is a regulatee. If proteins p1 and p2 are annotated to 'chromatin silencing' and 'transcription, DNA-templated' respectively, the two GO terms compose a p1 → p2 negative regulation pair, which represents the direction and sign between the two GO terms. b Feature generation procedures are explained with a toy example related to the Wnt signaling pathway. There are four regulators (GO:0030177, GO:0030111, GO:0090263, GO:0090090), among which two are positive regulators (GO:0030177, GO:0090263) and one is a negative regulator (GO:0090090). Protein p1 is annotated to four GO terms (GO:0016055, GO:0030177, GO:0030111, GO:0090263), whereas protein p2 is annotated to three GO terms (GO:0030111, GO:0060070, GO:0090090). Protein p2 is not directly annotated to 'Wnt signaling pathway', but to 'canonical Wnt signaling pathway'. Nonetheless, since 'canonical Wnt signaling pathway' is a 'Wnt signaling pathway', protein p2 is related to 'Wnt signaling pathway'. c For GO1 and GO2 to which proteins p1 and p2 are annotated respectively, we determined whether GO1 (positively/negatively) regulates GO2. If it did, GO1 and GO2 became a p1 → p2 (positive/negative) regulation pair. If it did not, we determined whether GO1 (positively/negatively) regulates any ancestor of GO2; in that case, GO1 and the most specific ancestor of GO2 became a p1 → p2 (positive/negative) regulation pair. In this way, we found (positive/negative) regulation pairs for the p1 → p2 and p2 → p1 directions. To represent PPIs, we used regulation pairs as features. For directions, the directions of regulation pairs were encoded as feature values: p1 → p2 (positive/negative) regulation pairs had the value of 1, whereas p2 → p1 (positive/negative) regulation pairs had − 1. d For signs, the signs of regulation pairs were encoded as feature values: p1 → p2 positive regulation pairs had the value of 1, whereas p1 → p2 negative regulation pairs had − 1. Abbreviations: GO, gene ontology; WSP: Wnt signaling pathway
A p1 → p2 (positive/negative) regulation pair indicates the direction and sign between GO terms. We projected such directions and signs between GO terms onto PPIs by using each p1 → p2 (positive/negative) regulation pair as a feature for representing PPIs. We describe the feature generation procedure with a toy example illustrated in Fig. 3b, which is a subset of GO terms related to the Wnt signaling pathway.
Features were generated by the following procedure. Firstly, we collected all GO terms to which the proteins were annotated. In our toy example, protein p1 is annotated to four GO terms: 'Wnt signaling pathway', 'positive regulation of Wnt signaling pathway', 'regulation of Wnt signaling pathway', and 'positive regulation of canonical Wnt signaling pathway'. On the other hand, protein p2 is annotated to three GO terms: 'regulation of Wnt signaling pathway', 'canonical Wnt signaling pathway', and 'negative regulation of canonical Wnt signaling pathway'.
Secondly, for all possible pairs of GO1 and GO2 in which p1 is annotated to GO1 and p2 is annotated to GO2, we determined whether GO1 (positively/negatively) regulates GO2. If it did, we regarded GO1 and GO2 as a p1 → p2 (positive/negative) regulation pair. However, in many cases, GO1 did not (positively/negatively) regulate GO2 itself. In such cases, we assumed that if p2 is annotated to GO2, then p2 is also annotated to the ancestors of GO2 that have an is a or part of relation with GO2. For example, although p2 is not directly annotated to 'Wnt signaling pathway', we can say that p2 is related to 'Wnt signaling pathway' because p2 is annotated to 'canonical Wnt signaling pathway'. This kind of extension of a protein's GO annotations by using is a and part of relations in GO is called 'annotation grouping'. To increase our coverage, if GO1 did not (positively/negatively) regulate GO2 itself, we applied annotation grouping and determined whether GO1 (positively/negatively) regulates any ancestor of GO2. If it did, we found the most specific ancestor of GO2, i.e. the one with the highest information content, that is regulated by GO1 [19]. We then regarded GO1 and this most specific ancestor of GO2 as a p1 → p2 (positive/negative) regulation pair. Since excessive annotation grouping might result in excessively high-dimensional feature vectors with highly correlated features, we did not apply annotation grouping when GO1 regulates GO2 itself. For the same reason, we used only the most specific ancestor of GO2, not all ancestors.
For example, since p1 is annotated to 'positive regulation of canonical Wnt signaling pathway' and p2 is annotated to 'canonical Wnt signaling pathway', the two GO terms form a p1 → p2 positive regulation pair. On the other hand, 'positive regulation of Wnt signaling pathway' does not regulate 'canonical Wnt signaling pathway'. However, p2 is related to 'Wnt signaling pathway' when we apply annotation grouping. Thus, 'positive regulation of Wnt signaling pathway' and 'Wnt signaling pathway' constitute another p1 → p2 positive regulation pair.
We repeated the same procedure for determining whether GO2 (positively/negatively) regulates GO1 or its ancestors. As a result, we found six kinds of regulation pairs: p1 → p2 regulation pairs, p1 → p2 positive regulation pairs, p1 → p2 negative regulation pairs, p2 → p1 regulation pairs, p2 → p1 positive regulation pairs, and p2 → p1 negative regulation pairs. The six kinds of regulation pairs were used as features for predicting directions and signs of PPIs.
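A minimal sketch of how regulation pairs might be extracted for one protein pair is given below. It assumes three precomputed inputs that are not defined in the paper's text: a set of (regulator, sign, regulatee) triples such as those produced by relation reasoning, each protein's GO annotations, and an `ancestors` map listing each term's is a/part of ancestors ordered from most to least specific (so the first regulated ancestor found is the most specific one); information content is not recomputed here.

```python
def regulation_pairs(annots_p1, annots_p2, reg_triples, ancestors):
    """Return (regulator, regulatee, sign) pairs in the p1 -> p2 direction.

    annots_p1, annots_p2 : sets of GO terms annotated to p1 and p2
    reg_triples          : set of (regulator, sign, regulatee) triples with
                           sign in {"regulates", "positive", "negative"}
    ancestors            : dict term -> list of is_a/part_of ancestors,
                           ordered from most to least specific
    """
    pairs = []
    regulates = {(r, t): s for r, s, t in reg_triples}
    for go1 in annots_p1:
        for go2 in annots_p2:
            sign = regulates.get((go1, go2))
            if sign is not None:
                # direct regulation: no annotation grouping needed
                pairs.append((go1, go2, sign))
                continue
            # annotation grouping: fall back to the most specific ancestor
            # of GO2 that GO1 regulates
            for anc in ancestors.get(go2, []):
                sign = regulates.get((go1, anc))
                if sign is not None:
                    pairs.append((go1, anc, sign))
                    break
    return pairs

# Toy example from Fig. 3: p1 is annotated to 'pos. reg. of canonical WSP'
# and 'pos. reg. of WSP'; p2 to 'canonical WSP', which is_a 'WSP'.
reg_triples = {("pos. reg. of canonical WSP", "positive", "canonical WSP"),
               ("pos. reg. of WSP", "positive", "WSP")}
ancestors = {"canonical WSP": ["WSP"]}
print(regulation_pairs({"pos. reg. of canonical WSP", "pos. reg. of WSP"},
                       {"canonical WSP"}, reg_triples, ancestors))
```

Running the same function with the arguments swapped gives the p2 → p1 pairs, yielding the six kinds of regulation pairs described above.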
Feature generation for predicting directions of PPIs
For predicting directions of PPIs, we considered only the directions of (positive/negative) regulation pairs, i.e. whether a pair points from p1 to p2 or from p2 to p1; we did not differentiate between regulation pairs, positive regulation pairs, and negative regulation pairs. The value of a (positive/negative) regulation pair GO1-GO2 is defined as:
$$ f_{\mathrm{direction}}[\mathrm{GO1}\text{-}\mathrm{GO2}] = \begin{cases} 1 & \text{if GO1-GO2 is a } \mathrm{p1}\to\mathrm{p2} \text{ (positive/negative) regulation pair, exclusively} \\ -1 & \text{if GO1-GO2 is a } \mathrm{p2}\to\mathrm{p1} \text{ (positive/negative) regulation pair, exclusively} \\ 0 & \text{otherwise} \end{cases} $$
In our toy example, since 'positive regulation of canonical Wnt signaling pathway' and 'canonical Wnt signaling pathway' constitute a p1 → p2 positive regulation pair, but not a p2 → p1 positive regulation pair, the feature has the value of 1. On the other hand, 'negative regulation of canonical Wnt signaling pathway' and 'Wnt signaling pathway' form a p2 → p1 negative regulation pair exclusively, so the feature has the value of − 1. If the direction of a (positive/negative) regulation pair is both from p1 to p2 and from p2 to p1, the feature value is 0. In the toy example, since p1 → p2 (positive/negative) regulation pairs outnumber p2 → p1 (positive/negative) regulation pairs, the direction of the PPI is more likely to be from p1 to p2. We removed (positive/negative) regulation pairs that did not appear in the training set; the number of features for direction was 37,617.
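The f_direction encoding might be implemented as follows. The `feature_index` mapping from regulation pairs to column positions is an assumed bookkeeping structure built from the pairs seen in the training set; the pair sets are assumed to come from a routine like the one sketched earlier, run once per direction.

```python
import numpy as np

def direction_features(pairs_p1_to_p2, pairs_p2_to_p1, feature_index):
    """Encode f_direction: +1 for exclusively p1->p2 pairs, -1 for
    exclusively p2->p1 pairs, 0 otherwise (pairs unseen in training are
    simply dropped)."""
    x = np.zeros(len(feature_index))
    for pair in pairs_p1_to_p2 | pairs_p2_to_p1:
        if pair not in feature_index:
            continue
        in_fwd, in_bwd = pair in pairs_p1_to_p2, pair in pairs_p2_to_p1
        if in_fwd and not in_bwd:
            x[feature_index[pair]] = 1.0
        elif in_bwd and not in_fwd:
            x[feature_index[pair]] = -1.0
    return x

# Toy example: one pair is exclusively p1->p2, the other exclusively p2->p1.
feature_index = {("pos. reg. of canonical WSP", "canonical WSP"): 0,
                 ("neg. reg. of canonical WSP", "WSP"): 1}
fwd = {("pos. reg. of canonical WSP", "canonical WSP")}
bwd = {("neg. reg. of canonical WSP", "WSP")}
print(direction_features(fwd, bwd, feature_index))   # [ 1. -1.]
```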
Feature generation for predicting signs of PPIs
Feature generation for predicting signs of PPIs is similar to that for directions: we used each regulation pair as a feature. However, there are also some differences: (1) Since we predicted the sign of a directed PPI, we used regulation pairs whose directions were consistent with the direction of the PPI. (2) Since simple regulation pairs (with a plain regulates relation) are uninformative for predicting signs, we used positive regulation pairs and negative regulation pairs only. (3) We removed regulation pairs that are both positive and negative. For example, 'cell cycle switching, mitotic to meiotic cell cycle' positively regulates 'meiotic cell cycle' and negatively regulates 'mitotic cell cycle'. Thus, 'cell cycle switching, mitotic to meiotic cell cycle' both positively and negatively regulates 'cell cycle'. We removed such regulation pairs since they are uninformative. (4) Whereas a feature vector for direction encodes the directions between GO terms, a feature vector for sign encodes signs. For predicting the sign of a directed PPI p1 → p2, the value of a positive/negative regulation pair GO1-GO2 is defined as:
$$ f_{\mathrm{sign}}[\mathrm{GO1}\text{-}\mathrm{GO2}] = \begin{cases} 1 & \text{if GO1-GO2 is a } \mathrm{p1}\to\mathrm{p2} \text{ positive regulation pair, exclusively} \\ -1 & \text{if GO1-GO2 is a } \mathrm{p1}\to\mathrm{p2} \text{ negative regulation pair, exclusively} \\ 0 & \text{otherwise} \end{cases} $$
In our toy example, since 'positive regulation of canonical Wnt signaling pathway' and 'canonical Wnt signaling pathway' form a p1 → p2 positive regulation pair, the feature has the value of 1. On the other hand, since 'negative regulation of canonical Wnt signaling pathway' and 'Wnt signaling pathway' form a p2 → p1 negative regulation pair, not a p1 → p2 pair, the feature has the value of 0. In the toy example, since p1 → p2 positive regulation pairs outnumber p1 → p2 negative regulation pairs, p1 is more likely to activate p2 than to inhibit it. We removed regulation pairs that did not appear in the training set; the number of features for sign was 20,077.
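The f_sign encoding mirrors the direction encoding but is restricted to positive/negative regulation pairs whose direction matches the directed PPI being scored; again, `feature_index` is an assumed structure listing the sign features retained from the training set.

```python
import numpy as np

def sign_features(pos_pairs, neg_pairs, feature_index):
    """Encode f_sign for a directed PPI p1 -> p2: +1 for exclusively
    positive regulation pairs, -1 for exclusively negative ones, 0 for
    pairs that are both or that were unseen in the training set."""
    x = np.zeros(len(feature_index))
    for pair in pos_pairs | neg_pairs:
        if pair not in feature_index:
            continue
        if pair in pos_pairs and pair not in neg_pairs:
            x[feature_index[pair]] = 1.0
        elif pair in neg_pairs and pair not in pos_pairs:
            x[feature_index[pair]] = -1.0
    return x

# Toy example: only the p1 -> p2 positive pair contributes; the p2 -> p1
# negative pair from Fig. 3 is excluded before this step because its
# direction is inconsistent with the PPI being scored.
feature_index = {("pos. reg. of canonical WSP", "canonical WSP"): 0}
print(sign_features({("pos. reg. of canonical WSP", "canonical WSP")},
                    set(), feature_index))   # [1.]
```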
Model generation for performance evaluation
We used L2-regularized logistic regression for predicting directions and signs of PPIs. We chose logistic regression because it is interpretable [20], and L2 regularization reduces the overfitting that might be caused by the high dimensionality of the feature vectors. As shown in Table 1, we had many more activating PPIs than inhibiting ones. To overcome this imbalance, we adopted cost-sensitive learning in which the class weight was inversely proportional to the class frequency [21].
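The paper does not name a software package, so the following scikit-learn sketch is only one plausible realization of the model family described here: an L2-penalized logistic regression with class weights inversely proportional to class frequency. The synthetic `X` and `y` stand in for the regulation-pair features and a binary label such as "can act as activation".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for PPIs encoded with the +1/-1/0 regulation-pair
# features and binary labels.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(200, 50), p=[0.1, 0.8, 0.1])
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

# L2 is scikit-learn's default penalty; class_weight="balanced" reweights
# classes inversely proportional to their frequency, matching the
# cost-sensitive scheme described in the text.
clf = LogisticRegression(penalty="l2", C=1.0, class_weight="balanced",
                         max_iter=1000)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))
```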
Model generation for predicting directions of PPIs
We trained two L2-regularized logistic regression models that shared the same feature vectors for predicting directions of undirected, unsigned PPIs; they predicted whether a signal could flow in the left-to-right direction and in the right-to-left direction, respectively. By combining the outputs of the two models, we determined the final direction as one of the following: 'left-to-right', 'right-to-left', or 'bi-directional'. For example, if a signal is predicted to be able to flow in the 'left-to-right' direction but not in the 'right-to-left' direction, the final direction of the PPI is 'left-to-right'. If a signal can flow in both directions, the final direction of the PPI is 'bi-directional'. Instead of training one classifier that predicts three outcomes, we trained two classifiers separately because uni-directional PPIs highly outnumbered bi-directional ones, as shown in Table 1. During the training and test phases, we randomly divided uni-directional PPIs into two equal-sized sets: left-to-right PPIs and right-to-left PPIs.
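Combining the two direction classifiers might look like the sketch below; the 0.5 decision threshold and the fallback label for the case where neither classifier fires are illustrative assumptions, since the text does not specify them.

```python
def final_direction(p_left_to_right, p_right_to_left, threshold=0.5):
    """Combine the two binary direction classifiers into one label.
    Probabilities are assumed to come from the two logistic regression
    models; the 0.5 threshold is an illustrative choice."""
    ltr = p_left_to_right >= threshold
    rtl = p_right_to_left >= threshold
    if ltr and rtl:
        return "bi-directional"
    if ltr:
        return "left-to-right"
    if rtl:
        return "right-to-left"
    return "undetermined"   # neither model fires; not discussed in the text

print(final_direction(0.9, 0.2))   # left-to-right
print(final_direction(0.8, 0.7))   # bi-directional
```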
Model generation for predicting signs of PPIs
For each directed, unsigned PPI, we predicted its effect type as one of 'activation', 'inhibition', 'activation&inhibition', and 'affect'. Similar to directions, signs of PPIs were highly imbalanced; the numbers of PPIs in the 'activation&inhibition' and 'affect' classes were very low. Thus, we trained two classifiers rather than a single classifier that predicts four possible outcomes. The two classifiers shared the same feature vectors and predicted whether a directed PPI can act as activation and as inhibition, respectively. The final effect types were then determined by combining the outputs of the two classifiers. If a directed PPI can act as 'activation' but not as 'inhibition', its effect type was determined as 'activation', and vice versa.
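The analogous combination of the activation and inhibition classifiers maps their two binary outputs onto the four effect-type classes; the threshold is again an illustrative assumption.

```python
def effect_type(p_activation, p_inhibition, threshold=0.5):
    """Map the two binary sign classifiers onto the four effect-type
    classes used in the paper (the 0.5 threshold is illustrative)."""
    act = p_activation >= threshold
    inh = p_inhibition >= threshold
    if act and inh:
        return "activation&inhibition"
    if act:
        return "activation"
    if inh:
        return "inhibition"
    return "affect"

print(effect_type(0.92, 0.10))   # activation
print(effect_type(0.30, 0.25))   # affect
```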
Results and discussions
Cross-validation performances
We applied our method to the KEGG dataset and conducted 10-fold cross-validation. In 10-fold cross-validation, the KEGG dataset is split into ten disjoint subsets; we trained the logistic regression models on nine subsets and tested them on the remaining subset. This procedure was repeated such that the models could be evaluated on each subset. The performance was obtained for each subset, and the mean performance is listed in Table 2. For directions, the performances of the left-to-right and right-to-left classifiers were almost identical. The F1-score and accuracy were as high as 0.89 for both classifiers. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were 0.95 and 0.94, respectively, for both classifiers [see Additional file 1: Figure S1].
Table 2 Performance of classifiers for 10-fold cross validation
For signs, the F1-scores of the activation and inhibition classifiers were 0.91 and 0.80, respectively. The F1-score of the activation classifier was higher because activating PPIs outnumbered inhibiting ones. The accuracy of the two classifiers was identical at 0.88. The AUROCs were 0.94 and 0.93 for the activation and inhibition classifiers, respectively, and the AUPRCs were 0.96 and 0.89, respectively [see Additional file 1: Figure S1].
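A schematic version of this evaluation protocol, using scikit-learn's stratified 10-fold splitter on synthetic stand-in data, is shown below; the actual feature matrices and the exact scoring setup are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the KEGG-derived feature matrix and labels.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 0.0, 1.0], size=(300, 40), p=[0.1, 0.8, 0.1])
y = (X.sum(axis=1) > 0).astype(int)

clf = LogisticRegression(penalty="l2", class_weight="balanced", max_iter=1000)
# StratifiedKFold keeps the class ratio similar in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv, scoring="f1").mean())
```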
Independent test performances
To see how well our model generalizes to datasets from different sources, we conducted an independent test, in which we trained the logistic regression models on the KEGG dataset and tested them on the STRING dataset. The performances for predicting directions and signs are listed in Table 3. The performances of the left-to-right and right-to-left classifiers were similar: their accuracies were approximately 0.60 and 0.59, and their AUROCs were 0.64 and 0.63, respectively [see Additional file 1: Figure S2].
Table 3 Performance of classifiers for independent test
The performance for predicting signs was higher than that for predicting directions. The accuracies of the activation and inhibition classifiers were both 0.69. However, because the dataset was imbalanced, the F1-score was much higher for the activation classifier than for the inhibition classifier. The AUROCs of the activation and inhibition classifiers were 0.67 and 0.63, respectively [see Additional file 1: Figure S2].
Comparison with the previous GO-based method
To demonstrate the effectiveness of GO relations, we compared the performance of our method to that of the previous GO-based method, which did not use GO relations. Since the previous work did not consider directions of PPIs, we slightly modified our PPI dataset and feature values for the comparison. For the PPI datasets, we originally defined effect types for directed PPIs; therefore, bi-directional PPIs have two effect types, one for each direction. To compare performance with the previous work, we mapped effect types of directed PPIs to undirected PPIs by using an 'OR' operation so that a bi-directional PPI has only one effect type. For an undirected PPI p1-p2, if either of the directed PPIs p1 → p2 or p2 → p1 can act as activation, then p1-p2 has 'activation' as its effect type. Likewise, if either p1 → p2 or p2 → p1 can act as 'inhibition', then p1-p2 has 'inhibition' as its effect type. The statistics of the PPI datasets used for the performance comparison are listed in Table 4.
Table 4 Statistics of PPI datasets that were used for comparison with previous GO-based method
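The 'OR' mapping from per-direction effect types to a single undirected label might be sketched as follows; the naming of the combined classes reuses the four effect-type labels defined earlier and is our assumption for how mixed directions are reported.

```python
def undirected_effect(type_p1_to_p2, type_p2_to_p1):
    """Merge per-direction effect types onto the undirected PPI with an
    'OR': the undirected PPI can activate (inhibit) if either direction
    can. For a uni-directional PPI, pass 'affect' for the missing side."""
    merged = {type_p1_to_p2, type_p2_to_p1}
    can_activate = bool(merged & {"activation", "activation&inhibition"})
    can_inhibit = bool(merged & {"inhibition", "activation&inhibition"})
    if can_activate and can_inhibit:
        return "activation&inhibition"
    if can_activate:
        return "activation"
    if can_inhibit:
        return "inhibition"
    return "affect"

print(undirected_effect("activation", "inhibition"))  # activation&inhibition
print(undirected_effect("activation", "affect"))      # activation
```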
For feature generation, when we predict the effect type of a directed PPI p1 → p2, we originally considered p1 → p2 positive/negative regulation pairs only; p2 → p1 positive/negative regulation pairs were omitted because their directions between GO terms were inconsistent with the direction of the PPI. However, since directions were not considered in the performance comparison, we used both p1 → p2 positive/negative regulation pairs and p2 → p1 positive/negative regulation pairs.
We conducted an independent test for the previous work and our work, in which the logistic regression models were trained on the KEGG dataset and tested on the STRING dataset. The independent test results are shown in Fig. 4. For the activation classifier, the performance of our method was identical or slightly better. For the inhibition classifier, our method outperformed the previous work, especially in terms of recall. These results show that our method addresses the second and third limitations of the previous work mentioned above. The second limitation was that even if one protein inhibits another, the two proteins might share the same GO terms because they interact with each other. This may result in similar feature vectors for activating PPIs and inhibiting PPIs in the previous work, suggesting that simply considering whether two proteins share the same GO terms is not sufficient for predicting signs of PPIs. We solved this problem by using positively regulates and negatively regulates relations in GO, the omission of which was the third limitation. The enhanced performance demonstrates that those relations help to predict signs of PPIs, especially for inhibiting ones.
Performance comparison with the previous GO-based method. We conducted an independent test to compare performance with the previous GO-based method that did not consider GO relations. We compared the performance of the activation and inhibition classifiers in terms of AUROC, AUPRC, precision, recall, F1-score, and accuracy. Since the previous work did not consider directions, the performance for predicting signs was compared. a For predicting activation, our work performed equally well as, or slightly better than, the previous work. b For predicting inhibition, our work outperformed the previous work on all metrics. These results show that the overlap or difference between the GO annotations of two proteins is not a sufficient discriminating factor for signs of PPIs, given that inhibiting PPIs also participate in the same biological process. Moreover, positively regulates and negatively regulates relations in GO can be used for enhancing the performance of predicting signs of PPIs. Abbreviations: AUROC, area under receiver operating characteristics; AUPRC, area under precision-recall curve
Even though our method outperformed the previous GO-based method, it has one drawback: slightly lower coverage. Since we need at least one positive/negative regulation pair to predict effect types, we covered 89% of the PPIs that were covered by the previous method, as shown in Table 4.
Annotation of effect types to human PPIs
We applied our method to the prediction set. We validated the top five most confident predictions against the literature, for activation and inhibition each. We were able to find five publications supporting our predictions, as shown in Table 5.
Table 5 Literature validation of top five activating and inhibiting PPIs from effect type-annotated human PPIs
To demonstrate that our method can predict effect types of PPIs even when one direction is activation and the other direction is inhibition, we conducted a case study. HRas proto-oncogene, GTPase (HRAS) activates mitogen-activated protein kinase 1 (MAPK1) in axon guidance, a process in which the axon growth cone migrates in a specific direction. On the other hand, MAPK1 inhibits HRAS in the neurotrophin signaling pathway. In 10-fold cross-validation, our method correctly predicted the effect types of both directions, even though one direction is activation and the other is inhibition. Because we predict signs for each directed PPI, and use positive/negative regulation pairs only when their directions are consistent with the direction of the PPI, our method is able to predict effect types independently for both directions. The regulation pairs used for the prediction are illustrated in Fig. 5. For predicting that HRAS activates MAPK1, 'positive regulation of MAPK cascade' and 'positive regulation of MAP kinase activity' were used. On the other hand, 'negative regulation of cell differentiation' was used for predicting that MAPK1 inhibits HRAS, which is related to the function of the neurotrophin signaling pathway. This means that some regulation pairs reflect the context in which the PPI occurs.
A case study of a PPI where one direction is activation and the other is inhibition. HRAS activates MAPK1 in axon guidance, whereas MAPK1 inhibits HRAS in the neurotrophin signaling pathway. Both effect types were correctly predicted in 10-fold cross-validation, showing that our method can predict the sign of each direction independently. The positive/negative regulation pairs that contributed to the sign prediction are shown in the figure. Interestingly, 'negative regulation of cell differentiation' and 'cell differentiation' were used to predict the inhibition, which is related to the function of the neurotrophin signaling pathway. Abbreviations: HRAS, Hras proto-oncogene, GTPase; MAPK1, mitogen-activated protein kinase 1
In this work, we predicted the effect types of PPIs by using the regulates, positively regulates, and negatively regulates relations in GO. We hypothesized that the directions and signs between GO terms can be used to predict the directions and signs of PPIs. For an undirected, unsigned PPI, we predicted its direction first. We trained two logistic regression models to predict whether a signal can flow in the left-to-right and the right-to-left direction, respectively, with the directions of (positively/negatively) regulates relations encoded as features representing the direction of the PPI. Then, we predicted a sign for each directed PPI, thereby predicting the effect type of the PPI. We also trained two logistic regression models for predicting whether a directed PPI can act as activation and as inhibition, respectively, representing each directed PPI with features whose values were the signs of positively/negatively regulates relations. As a result, we annotated effect types to PPIs, thereby turning the PPI network into a directed, signed graph.
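A minimal sketch of this two-stage classifier setup is given below, using scikit-learn; the toy feature matrix, the labels, and the class_weight setting are illustrative assumptions, not the authors' actual feature construction or hyperparameters.

# Illustrative sketch (not the authors' code): logistic regression classifiers
# over binary features derived from GO (positively/negatively) regulates pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors for candidate directed PPIs (p1 -> p2);
# each column would encode the presence/sign of a regulation pair.
X_train = np.array([[1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 1, 0, 0],
                    [0, 0, 1, 1]])
y_activation = np.array([1, 0, 1, 0])  # 1 = acts as activation, 0 = does not

# class_weight='balanced' is one form of cost-sensitive learning for the
# activation/inhibition imbalance discussed in the text.
activation_clf = LogisticRegression(class_weight="balanced", max_iter=1000)
activation_clf.fit(X_train, y_activation)
print(activation_clf.predict_proba(X_train)[:, 1])

# A second, independent model would be trained analogously for inhibition,
# and two more (left-to-right, right-to-left) for direction prediction.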
Our contribution is two-fold. Firstly, we proposed the concept of a p1 → p2 (positive/negative) regulation pair, which is effective for predicting directions as well as signs of PPIs. This addresses the limitation of previous works, which were not able to predict directions of PPIs. Secondly, we demonstrated the usefulness of the (positively/negatively) regulates relations in GO. To date, most GO-related works have used only the is a and part of relations. In this work, we showed that the (positively/negatively) regulates relations are effective for predicting directions and signs of PPIs, suggesting their extension to other types of interactions. For example, those relations might be used for predicting signs of protein-DNA interactions, i.e., whether a transcription factor activates or represses the expression of a target gene.
Even though our work improved the performance for predicting signs of PPIs, it has some drawbacks. Since we need at least one positive/negative regulation pair for predicting signs, our method has lower coverage than the previous GO-based method. We applied relation reasoning and annotation grouping to compensate for the low coverage of (positively/negatively) regulates relations; nevertheless, some PPIs were not covered. In addition, our method does not yet consider the specificity of GO terms, even though more specific GO terms have clearer meanings; reflecting the specificity of GO terms might therefore improve our method. Finally, the performance of the inhibition classifier was much lower than that of the activation classifier because there were many more activating PPIs than inhibiting ones. This imbalance was so significant that it could not be fully corrected by cost-sensitive learning. We expect that the accumulation of more inhibiting PPIs will enhance the performance in the future.
To facilitate signaling pathway reconstruction and network biology research, we provided effect type-annotated human PPIs in Additional file 2. The annotated effect types turned the PPI network into a directed, signed graph, opening up opportunities for discovering new characteristics of the PPI network or signaling pathways. For example, signs of PPIs can be used for measuring the stability of the PPI network [10, 22]. In addition, effect types of PPIs can be used for discovering novel regulators of signaling pathways [10, 23], and for improving the performance of predicting drug efficacies [8].
AUPRC:
Area under precision-recall curve
AUROC:
Area under receiver operating characteristics
DVL:
Dishevelled segment polarity protein
HRAS:
Hras proto-oncogene, GTPase
MAPK1:
Mitogen-activated protein kinase 1
NKD:
Naked cuticle
PCP:
Planar cell polarity
PPI:
Protein-protein interaction
RNAi:
RNA interference
Anastas JN, Moon RT. WNT signalling pathways as therapeutic targets in cancer. Nat Rev Cancer. 2013;13(1):11–26.
Stewart DJ. Wnt signaling pathway in non–small cell lung cancer. Journal of the National Cancer Institute. 2014;106(1):djt356.
Takebe N, Miele L, Harris PJ, Jeong W, Bando H, Kahn M, et al. Targeting notch, hedgehog, and Wnt pathways in cancer stem cells: clinical update. Nat Rev Clin Oncol. 2015;12(8):445–64.
Yu H, Lee H, Herrmann A, Buettner R, Jove R. Revisiting STAT3 signalling in cancer: new and unexpected biological functions. Nat Rev Cancer. 2014;14(11):736.
Baron R, Kneissel M. WNT signaling in bone homeostasis and disease: from human mutations to treatments. Nat Med. 2013;19(2):179–92.
Snider J, Kotlyar M, Saraon P, Yao Z, Jurisica I, Stagljar I. Fundamentals of protein interaction network mapping. Mol Syst Biol. 2015;11(12) https://doi.org/10.15252/msb.20156351.
Gu H, Ma L, Ren Y, He W, Wang Y, Qiao Y. Exploration of the mechanism of pattern-specific treatments in coronary heart disease with network pharmacology approach. Comput Biol Med. 2014;51:198–204.
Yu H, Choo S, Park J, Jung J, Kang Y, Lee D, editors. Prediction of drugs having opposite effects on disease genes in a directed network. BMC systems biology. BioMed Central Ltd. 2016;10:17–25.
Yu L, Huang J, Ma Z, Zhang J, Zou Y, Gao L. Inferring drug-disease associations based on known protein complexes. BMC Med Genet. 2015;8(2):S2.
Vinayagam A, Zirin J, Roesel C, Hu Y, Yilmazel B, Samsonova AA, et al. Integrating protein-protein interaction networks with phenotypes reveals signs of interactions. Nat Methods. 2014;11(1):94–9.
Suratanee A, Schaefer MH, Betts MJ, Soons Z, Mannsperger H, Harder N, et al. Characterizing protein interactions employing a genome-wide siRNA cellular phenotyping screen. PLoS Comput Biol. 2014;10(9):e1003814.
Mei S, Zhang K. Multi-label ℓ2-regularized logistic regression for predicting activation/inhibition relationships in human protein-protein interaction networks. Scientific Reports. 2016;6:36453.
Mei S. Probability weighted ensemble transfer learning for predicting interactions between HIV-1 and human proteins. PLoS One. 2013;8(11):e79606.
Yan D, Wallingford JB, Sun T-Q, Nelson AM, Sakanaka C, Reinhard C, et al. Cell autonomous regulation of multiple Dishevelled-dependent pathways by mammalian Nkd. Proc Natl Acad Sci. 2001;98(7):3802–7.
Kanehisa M, Furumichi M, Tanabe M, Sato Y, Morishima K. KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2017;45(D1):D353–D61.
Szklarczyk D, Morris JH, Cook H, Kuhn M, Wyder S, Simonovic M, et al. The STRING database in 2017: quality-controlled protein–protein association networks, made broadly accessible. Nucleic Acids Res. 2017;45(D1):D362–D8.
Chatr-aryamontri A, Oughtred R, Boucher L, Rust J, Chang C, Kolas NK, et al. The BioGRID interaction database: 2017 update. Nucleic Acids Res. 2017;45(D1):D369–D79.
Consortium GO. Gene ontology consortium: going forward. Nucleic Acids Res. 2015;43(D1):D1049–D56.
Lord PW, Stevens RD, Brass A, Goble CA. Investigating semantic similarity measures across the gene ontology: the relationship between sequence and annotation. Bioinformatics. 2003;19(10):1275–83. https://doi.org/10.1093/bioinformatics/btg153.
Dreiseitl S, Ohno-Machado L. Logistic regression and artificial neural network classification models: a methodology review. J Biomed Inform. 2002;35(5):352–9.
He H, Garcia EA. Learning from imbalanced data. IEEE Trans Knowl Data Eng. 2009;21(9):1263–84.
Anchuri P, Magdon-Ismail M, editors. Communities and balance in signed networks: A spectral approach. IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). ACM; 2012.
Vinayagam A, Stelzl U, Foulle R, Plassmann S, Zenkner M, Timm J, et al. A directed protein interaction network for investigating intracellular signal transduction. Sci Signal. 2011;4(189):rs8–rs.
Chaix A, Lopez S, Voisset E, Gros L, Dubreuil P, De Sepulveda P. Mechanisms of STAT protein activation by oncogenic KIT mutants in neoplastic mast cells. J Biol Chem. 2011;286(8):5956–66.
Yin D-M, Chen Y-J, Lu Y-S, Bean JC, Sathyamurthy A, Shen C, et al. Reversal of behavioral deficits and synaptic dysfunction in mice overexpressing neuregulin 1. Neuron. 2013;78(4):644–57.
Maiani E, Diederich M, Gonfloni S. DNA damage response: the emerging role of c-Abl as a regulatory switch? Biochem Pharmacol. 2011;82(10):1269–76.
Liu M, Bai J, He S, Villarreal R, Hu D, Zhang C, et al. Grb10 promotes lipolysis and thermogenesis by phosphorylation-dependent feedback inhibition of mTORC1. Cell Metab. 2014;19(6):967–80.
Hanson AJ, Wallace HA, Freeman TJ, Beauchamp RD, Lee LA, Lee E. XIAP monoubiquitylates Groucho/TLE to promote canonical Wnt signaling. Mol Cell. 2012;45(5):619–28.
This work and publication of this article was sponsored by the Bio-Synergy Research Project (NRF-2012M3A9C4048758) of the Ministry of Science and ICT through the National Research Foundation.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
About this supplement
This article has been published as part of BMC Systems Biology Volume 12 Supplement 1, 2018: Selected articles from the 16th Asia Pacific Bioinformatics Conference (APBC 2018): systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-1 .
Department of Bio and Brain Engineering, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
Soorin Yim, Dongjin Jang & Doheon Lee
Bio-Synergy Research Center, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
Soorin Yim, Hasun Yu & Doheon Lee
Soorin Yim
Hasun Yu
Dongjin Jang
Doheon Lee
SY and DL designed this work. SY developed the proposed technique with HY and DJ under the supervision of DL. All authors read, wrote, and approved the manuscript.
Correspondence to Doheon Lee.
Additional file 1: Table S1.
Lists all possible combinations of GO relations where relation reasoning can be applied. Figures S1-S2 show the ROC and PRC curves, along with their areas under the curve, obtained from cross-validation and from the independent test. (PDF 799 kb)
Effect type-annotated human PPIs. This file contains effect type-annotated human PPIs in .xls format. Each row is a triplet of (protein 1, effect type, protein 2), in which protein 1 is an upstream protein whereas protein 2 is a downstream protein. (XLS 1547 kb)
Yim, S., Yu, H., Jang, D. et al. Annotating activation/inhibition relationships to protein-protein interactions using gene ontology relations. BMC Syst Biol 12 (Suppl 1), 9 (2018). https://doi.org/10.1186/s12918-018-0535-4
Please help with my geo hw
I need help... I did most of my hw, but these ones I just can't figure out. IDK, but...
Among all fractions $x$ that have a positive integer numerator and denominator and satisfy $$\frac{9}{11} \le x \le \frac{11}{13},$$ which fraction has the smallest denominator?
Evaluate: $\dfrac{(x+y)^2 - (x-y)^2}{y}$ for $x=6$, $y \not= 0$.
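For this second exercise, which the replies below do not address, here is a quick worked simplification (not from the original thread), using the given $y \not= 0$: $$\frac{(x+y)^2-(x-y)^2}{y}=\frac{(x^2+2xy+y^2)-(x^2-2xy+y^2)}{y}=\frac{4xy}{y}=4x=4\cdot 6=24.$$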
1. First, write the unknown fraction x as a/b. Then multiply all parts of the inequality by b, to get 9b/11 ≤ a ≤ 11b/13. Now, the trick is to find the smallest positive integer b for which an integer numerator a fits between these bounds (note the fraction must be less than 1, since 11/13 is less than 1).
tertre Dec 2, 2018
Rom Dec 2, 2018
edited by Rom Dec 3, 2018
Not quite, Rom. Continuing with my method and plugging the values of b, we get b=6, so the answer is \(\boxed{\frac{5}{6}}.\) Can someone verify this?
9/11 = .8181.....
5/6 = .8333....
11/13 = .846....
Mmmmm.....it looks like you could be correct, tertre.....
CPhill Dec 3, 2018
edited by CPhill Dec 3, 2018
Here might be another way to look at this
Notice that the fractions seem to have the form n / [ n + 2] where n is an integer
Suppose that there exists a fraction such that
9/11 < n / [n + 2] < 11/13
We can split this into two inequalities
Looking at the inequality on the left....we have
9[n + 2] < 11n
9n + 18 < 11n
18 < 2n
9 < n
Looking at the inequality on the right.... we have
13(n) < 11[n + 2]
13n < 11n + 22
2n < 22
n < 11
This implies that
9 < n < 11
So...n = 10 is the only integer that satisfies this
And n + 2 = 12
So....the fraction is 10/12 = 5/6 ......just as tertre found!!!
Good job, tertre !!!!
Thank you, CPhill! Great solution! | CommonCrawl |
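For completeness, a brute-force check of the 5/6 answer (a small Python verification sketch, not posted in the thread):

# Find the fraction with the smallest denominator in [9/11, 11/13].
import math
from fractions import Fraction

lo, hi = Fraction(9, 11), Fraction(11, 13)
b = 1
while True:
    a = math.ceil(lo * b)        # smallest integer numerator with a/b >= 9/11
    if Fraction(a, b) <= hi:     # does it also satisfy a/b <= 11/13?
        print(f"{a}/{b}")        # prints 5/6
        break
    b += 1

It stops at b = 6 with a = 5, matching the answer above.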
Gravitational Waves, Black Holes and Fundamental Physics
IFPU, Miramare campus, Trieste, Italy
Europe/Rome timezone
Trip and Accommodation
Lunch options
77. Opening
79. Cosmic Archaeology with black hole binaries
Raffaella Schneider
The existence of massive stellar black hole binaries, with primary black hole (BH) masses greater than 30-35 Msun, was proven by the detection of the gravitational wave (GW) event GW150914 during the first LIGO/Virgo observing run (O1), and subsequently confirmed by seven additional GW signals discovered by independent analyses of the O1 and O2 data. Recently reported O3 alerts suggest that...
70. Growth of supermassive black hole seeds in ETG star-forming progenitors via gaseous dynamical friction: perspectives for GW detections
Dr Lumen Boco (SISSA)
In this talk I will discuss a novel mechanism to grow supermassive black hole seeds in star-forming ETG progenitors at z >1. This envisages the migration and merging of stellar compact remnants, via gaseous dynamical friction, toward the central regions of such galaxies. I will show that this process can build up central BH masses of order 10^4 − 10^6 Msun in a timescale shorter than 10^8 yr,...
40. Merger rate of stellar black hole binaries above the pair-instability mass gap
Mr Alberto Mangiagli (University of Milan - Bicocca)
In current stellar evolutionary models, the occurrence of pair-instability supernovae plays a key role in shaping the resulting black hole (BH) mass population, preventing the formation of remnants between about $[60, \, 120] \rm M_\odot$.
We develop a simple approach to describe BHs beyond the pair-instability gap, by convolving the initial mass function and star formation rate with the...
27. Detection and parameter estimation for accreting stellar-origin black-hole binaries and their electromagnetic counterpart
Laura Sberna (Perimeter Institute)
We study the impact of mass accretion in the evolution of LIGO-like black hole binaries. Based on simulated catalogues of binary populations, we estimate that a fraction of the events will have a detectable imprint of Eddington-level accretion, when detected by LISA or by LISA and ground-based detectors (multiband). Accretion can also induce bias in the binary parameters, such as the masses...
50. Tidal Deformability of Black Holes Immersed in Matter
Francisco Duque (GRiT/CENTRA, Instituto Superior Técnico, Universidade de Lisboa)
The tidal deformability of compact objects by an external field has a detectable imprint in the gravitational waves emitted by a binary system, which is encoded in the so-called Tidal Love Numbers (TLNs). For a particular theory of gravity, the TLNs depend solely on the object's internal structure and, remarkably, they vanish for black holes in general relativity. This fact has gathered...
30. BMS with applications
Béatrice Bonga (Radboud University)
This will be an overview talk about the Bondi-Metzner-Sachs group, which is the symmetry group of asymptotically flat spacetimes. After having reviewed its main properties, I will discuss some applications such as the memory effect. Next, I will discuss the BMS algebra in other contexts such as higher dimensions and black hole horizons. The latter is conjectured to be key in solving the...
78. LISA Data Challenges: Status and future prospects
Nikos Karnesis
The LISA Data Challenges (LDC) were established as a common ground with the aim of engaging the community in the open LISA data analysis questions. Since the first LDC, a lot of experience has been gained, and significant progress has been achieved. In this talk I will review this progress, and I will present the purpose and individual goals of the current LDCs. The status and future...
22. Force-free electrodynamics near rotation axis of a Kerr black hole
Troels Harmark (Niels Bohr Institute)
Despite their potential importance for understanding astrophysical jets, physically realistic exact solutions for magnetospheres around Kerr black holes have not been found, even in the force-free approximation. Instead approximate analytical solutions such as the Blandford-Znajek (split-)monopole, as well as numerical solutions, have been constructed. In this talk we consider a new approach...
42. Constraints on an Effective Field Theory extension to gravity using gravitational-wave observations
Dr Richard Brito (Sapienza University of Rome)
Gravitational-wave observations of coalescing binary black holes allow for novel tests of the strong-field regime of gravity. Using the detections of the LIGO and Virgo collaborations, we place the first constraints on higher-order curvature corrections that arise in the effective-field-theory extension of general relativity where higher-order powers in the Riemann tensor are included in the...
20. Rotating black hole in a higher order scalar tensor theory
Christos Charmoussis (LPT-Orsay)
We will discuss an analytic hairy black hole in a subclass of scalar tensor theories
48. The sound of DHOST
Dr Antoine Lehébel (University of Nottingham)
In generic higher-order scalar-tensor theories which avoid the Ostrogradsky instability, the presence of a scalar field significantly modifies the propagation of matter perturbations, even in weakly curved backgrounds. This affects notably the speed of sound in the atmosphere of the Earth. It can also generate instabilities in homogeneous media. I will use this to constrain the viable...
51. Hearing the strength of gravity (with the Sun)
Dr Ippocratis Saltas (CEICO - Czech Academy of Sciences)
Generic extensions of General Relativity aiming to explain dark energy typically introduce fifth forces of gravitational origin. In this talk, I will explain how helioseismic observations can provide a powerful and novel tool towards precision constraints of fifth forces, as predicted by general theories for dark energy, and I will discuss the implications for cosmology.
71. The IR limit of Horava Gravity
Dr Mario Herrero-Valea (SISSA)
Horava Gravity is a renormalizable theory of Quantum Gravity which is expected to flow to GR in the low energy limit. This naive expectation is obstructed by a strongly coupled interaction when the parameters of the Lagrangian flow to the general relativistic values. However, when closely studied, only self-interactions of the extra scalar mode of the theory are strongly coupled. When matter...
80. Multi-messenger signals from merging neutron stars
Francois Foucart
The first detection of a binary neutron star system through gravitational waves and electromagnetic signals (gamma-ray burst, kilonova, radio) recently demonstrated the feasibility and usefulness of multi-messenger astronomy. In this talk, I will provide an overview of the physics of neutron star-neutron star and black hole-neutron star mergers, and of what we can learn from gravitational...
16. BMS flux-balance equations as constraints on the gravitational radiation
Dr Ali Seraj (ULB)
Asymptotically flat spacetimes admit infinite dimensional BMS symmetries which complete the Poincare symmetry algebra with super-translation and super-Lorentz generators. We show that each of these symmetries lead to a flux-balance equation at null infinity, which we compute to all orders in the post-Minkowskian expansion in terms of radiative multipole moments. The ten Poincare flux-balance...
23. Gauge-invariant approach to the parameterized post-Newtonian formalism
Manuel Hohmann (University of Tartu)
The parameterized post-Newtonian (PPN) formalism is an invaluable tool to assess the viability of gravity theories using a number of constant parameters. These parameters form a bridge between theory and experiment, as they have been measured in various solar system experiments and can be calculated for any given theory of gravity. The practical calculation, however, can become rather...
43. Modelling black hole binaries in the intermediate-mass-ratio regime.
Ms Mekhi Dhesi (University of Southampton)
We are working to provide accurate modelling of the dynamics and gravitational-wave signatures of black hole inspirals in the intermediate-mass-ratio regime (IMIRIs) (1:100-1:1000). In doing so we hope to bridge the gap between the accurate modelling of extreme-mass-ratio inspirals achieved through black hole perturbation theory, and that of comparable-mass inspirals using numerical...
61. Well-posedness of characteristic formulations of GR
Mr Thanasis Giannakopoulos (Instituto Superior Técnico)
Characteristic formulations of General Relativity (GR) have advantages over more standard spacelike foliations in a number of situations. For instance, the Bondi-Sachs formalism is at the base of codes that aim to produce gravitational waveforms of high accuracy, exploiting the fact that null hypersurfaces reach future null infinity and hence avoid systematic errors of extrapolation...
44. Scalarized black holes
Daniela Doneva (University of Tuebingen)
Spontaneous scalarization is a very interesting mechanism endowing compact objects with a nontrivial scalar field. This mechanism is designed to work only in the strong gravity regime while leaving the weak field regime practically unaltered. While scalarization was discussed mainly for neutron stars in the last few decades, it was recently discovered that black holes in Gauss-Bonnet...
81. The Black Hole Perturbation Toolkit
Niels Warburton
As we face the task of modelling small mass-ratio binaries for LISA we, as a community, need to spend more time developing waveform models and less time writing and re-writing codes. Currently there exist multiple, scattered black hole perturbation codes developed by a wide array of individuals or groups over a number of decades. This project brings together some of the core elements of these...
66. Teukolsky formalism for nonlinear Kerr perturbations
Dr Stephen Green (Albert Einstein Institute Potsdam)
We develop a formalism to treat higher order (nonlinear) metric perturbations of the Kerr spacetime in a Teukolsky framework. We first show that solutions to the linearized Einstein equation with nonvanishing stress tensor can be decomposed into a pure gauge part plus a zero mode (infinitesimal perturbation of the mass and spin) plus a perturbation arising from a certain scalar ("Debye-Hertz")...
6. Eikonal QNMs of black holes beyond GR
Dr Kostas Glampedakis
In this talk we study the quasi-normal modes of spherically symmetric black holes in modified theories of gravity, allowing for couplings between the tensorial and scalar field degrees of freedom. Using the eikonal approximation and a largely theory-agnostic approach, we obtain analytical results for the fundamental mode of such black holes.
19. On black hole spectroscopy using overtones
Dr Swetha Bhagwat (La Sapienza)
Validating the no-hair theorem with a gravitational wave observation from a compact binary coalescence presents a compelling argument that the remnant object is indeed a black hole described by the classical general theory of relativity. Validating this theorem relies on performing a spectroscopic analysis of the post-merger signal and recovering the frequencies of either different angular...
31. Spontaneous scalarization in generalised scalar-tensor theory
Nikolas Andreou (University of Nottingham)
Spontaneous scalarization is a mechanism that endows relativistic stars and black holes with a nontrivial configuration only when their spacetime curvature exceeds some threshold. The standard way to trigger spontaneous scalarization is via a tachyonic instability at the linear level, which is eventually quenched due to the effect of non-linear terms. In this work (Phys. Rev. D 99, 124022...
76. Numerical investigation of superradiant instabilities
Mr Alexandru Dima (SISSA)
We present a numerical investigation of the superradiant instability in spinning black holes surrounded by a plasma with density increasing when moving closer to the black hole. We try to understand whether superradiant instabilities are relevant or not for astrophysical black-holes surrounded by matter.
75. Causal structure of black holes in generalized scalar-tensor theories
Mr Nicola Franchini (Sissa)
A modified causal structure of black holes in theories beyond general relativity might have implications for the stability of such solutions. In this talk, we explore the horizon structure of black holes as perceived by scalar fields for generalized scalar-tensor theories, which exhibit derivative self-interactions. This means that the propagation of perturbations on nontrivial field...
82. Testing the no-hair theorem with LIGO and Virgo
Maximiliano Isi
Gravitational waves may allow us to experimentally probe the structure of black holes, with important implications for fundamental physics. One of the most promising ways to do so is by studying the spectrum of quasinormal modes emitted by the remnant from a binary black hole merger. This program, known as black hole spectroscopy, could allow us to test general relativity and the nature of...
74. Coalescence of Exotic Compact Objects
Miguel Bezares (SISSA)
The direct detection of gravitational waves (GWs) by the LIGO and VIRGO interferometric detectors has begun a new era of GW astronomy, allowing us to study the strong regime of gravity through GW signals produced by coalescence of compact objects. In this talk, I will present our numerical studies on coalescence of binary Exotic Compact Objects (ECOs) performed by solving the Einstein...
18. Quantum gravity predictions for black hole interior geometry
Daniele Pranzetti (Perimeter Institute)
In this talk I will show how to derive an effective Hamiltonian constraint for the Schwarzschild geometry starting from the full loop quantum gravity Hamiltonian constraint and computing its expectation value on coherent states sharply peaked around a spherically symmetric geometry. I then use this effective Hamiltonian to study the interior region of a Schwarzschild black hole, where a...
28. Post merger signal from black hole mimickers
Mr Alexandre Toubiana (APC/IAP)
Black hole mimickers, e.g. neutron stars or boson stars, are compact objects with properties similar to black holes. The gravitational wave signal emitted by a binary of such putative objects during the inspiral phase is difficult to distinguish from the one emitted by a black hole binary. Nevertheless, significant differences might appear in the post merger signal. Inspired by the known...
35. Importance of the tidal heating in binary coalescence
Sayak Datta (IUCAA)
With the observation of the multiple binary inspirals, we begin to question whether the components of the binary are black holes or some exotic compact objects (ECO). The black holeness or the deviation from it can be tested in several ways. The distinguishing feature of a black hole from other exotic compact objects is the presence of the horizon. This surface acts as a one-way membrane, that...
83. The first image of a black hole
Luciano Rezzolla
I will briefly discuss how the first image of a black hole was obtained
by the EHT collaboration. In particular, I will describe the theoretical
aspects that have allowed us to model the dynamics of the plasma
accreting onto the black hole and how such dynamics was used to generate
synthetic black-hole images. I will also illustrate how the comparison
between the theoretical images and the...
85. A VISual approach to science communication
Marcos Valdes
49. A Bifocal Coordinate System in General Relativity
Mr Nikolaos Chatzarakis (Aristotle University of Thessaloniki)
Coffee+Posters session
Following the generalised form of non-stationary axisymmetric space-time (Chandrasekhar 1983), we assume a bifocal elliptic symmetry and attempt to show that this particular metric can be a solution to Einstein's field equations. We discuss the form of the metric and the curvature implied, as well as its possible physical meaning and applications.
9. A covariant simultaneous action for branes
Mr Giovany Cruz (CINVESTAV)
A covariant simultaneous action for branes in an arbitrary curved background spacetime is considered. The term `simultaneous' is imported from variational calculus and refers to the fact that extremization of the action produces at once both the first and second variation of a given geometrical action for the brane. The action depends on a pair of independent field variables, the brane...
58. A parametrized ringdown approach for black-hole spectroscopy of spinning black holes
Dr Andrea Maselli (Sapienza University of Rome)
Black-hole spectroscopy is arguably the most promising tool to test gravity in extreme regimes and to probe the ultimate nature of black holes with unparalleled precision. These tests are currently limited by the lack of a ringdown parametrization that is both robust and accurate. We develop an observable-based parametrization of the ringdown of spinning black holes beyond general...
63. Beyond the Post-Newtonian expansion using Non-relativistic Gravity
Prof. Niels Obers (Nordita & Niels Bohr Institute)
I will discuss an action principle for non-relativistic gravity, as has recently been obtained from a covariant large speed of light expansion of Einstein's theory of gravity. This action reproduces Newtonian gravity as a special case, but goes beyond it by allowing for gravitational time dilation while retaining a non-relativistic causal structure. As a consequence, it can be shown that the...
21. Continuation of Schwarzschild exterior without a black hole in first order gravity
Prof. Sandipan Sengupta (IIT Kharagpur)
We present a smooth extension of the Schwarzschild exterior geometry, where the singular interior is superceded by a vacuum phase with vanishing metric determinant. Unlike the Kruskal-Szekeres continuation, this explicit solution to the first-order field equations in vacuum has no singularity in the curvature two-form fields, no horizon and no global time. The underlying non-analytic structure...
34. Dilatonic black holes and the weak gravity conjecture
Kunihito Uzawa (Kwansei Gakuin University)
We discuss the weak gravity conjecture (WGC) from black hole entropy in the Einstein-Maxwell-dilaton system or string theory. The WGC is strongly motivated by theorems forbidding global symmetries, which arise in the vanishing-charge limit, and implies that not only all non-BPS black holes but also extremal ones without supersymmetry should be able to decay. It is shown that the large...
41. Duality of conformally and kinetically coupled scalar-tensor theories
Dmitry Gal'tsov (Moscow state University)
We show that the non-minimal conformally-coupled (CC) scalar-tensor theory and the Palatini theory with kinetic coupling of the scalar to the Ricci tensor (PKC) are the same. This is demonstrated by showing that both theories coincide in the Einstein frame. Using this duality as generating technique, we construct the PKC counterpart to the BBMB black hole of the CC theory. It turns out to be...
53. Formation and Abundance of Primordial Black Holes
Ilia Musco (University of Geneva)
Primordial black holes can form in the early Universe from the collapse of cosmological perturbations after the cosmological horizon crossing. They are possible candidates for the dark matter as well as for the seeds of supermassive black holes observed today in the centre of galaxies. If the perturbation is larger than a certain threshold, depending on the equation of state and on the...
62. Formation and evolution of the first Supermassive Black Hole Seeds
Ms Federica Sassano (Sapienza University of Rome)
In the last decade, many observations of bright quasars at $z > 5$ have revealed the existence of Supermassive Black Holes (SMBHs), giants of a billion solar masses shining close to their Eddington limit. The mechanism of their formation at these early epochs currently represents an open problem in galaxy evolution. Several scenarios have been proposed to overcome this problem, such as the...
32. General Relativistic Study of the Structure of Highly Magnetized Neutron Stars
Dr Orlenys Troconis (International Centre for Theoretical Physics)
Neutron stars are one of the most compact and densest astrophysical objects known in nature, they result from the supernova explosion of a massive star. Many of the neutron stars have very strong magnetic fields, which lead to the emission of radio and X-ray radiation. This work is devoted to study the effects of strong magnetic fields in the structure of neutron stars, within the framework of...
29. Gravitational waves in Teleparallel theories of gravity
Viktor Gakis
The Teleparallel equivalent of General Relativity, where the connection is curvatureless, offers an alternative but equivalent way of describing gravity. In analogy with GR-based modified theories such as f(R), there are also torsion or non-metricity teleparallel modifications like f(T), where T is the torsion scalar, and f(Q), where Q is the non-metricity scalar, both of them playing a role...
36. Gravity-induced quantum anomalies through gravitational wave polarization
Adrian del Rio
Axial-type anomalies predicted by quantum field theory in curved spacetime are determined by the Chern-Pontryagin invariant of the spacetime background. I will show that this geometric quantity is non-zero in spacetimes admitting gravitational radiation that propagates to future null infinity with an excess of one polarization mode over the other. I will further argue that typical scenarios...
73. GWxLSS: chasing the progenitors of merging binary black holes.
Giulio Scelfo
Cross-correlations between galaxy catalogs and gravitational wave maps can provide useful information regarding open questions in both cosmology and astrophysics. The detection of binary black hole mergers through gravitational waves by the LIGO-Virgo instrument sparked the discussion on whether they have astrophysical or primordial origin. According to a model whose popularity revived after...
14. Holographic Bound on Remnant Boundary Area of Black Hole Merger
Prof. Partha Sarathi Majumdar (School of Physical Sciences, Indian Association for the Cultivation of Science)
Using concomitantly the Generalized Second Law of black hole thermodynamics and the holographic Bekenstein entropy bound embellished by Loop Quantum Gravity corrections to quantum black hole entropy, we show that the boundary area of the remnant from the binary black hole merger in GW150914 is bounded from below. This lower bound is more general than the bound from application of Hawking's...
68. Kicking Q-balls and boson stars: stimulated emission of radiation by confined structures
Lorenzo Annulli (Instituto Superior Tècnico)
Scalar fields can give rise to confined structures, such as Q-balls or boson stars, which can serve as interesting models for cold dark matter. The existence and stability of objects in a given theory is relevant for a wide range of topics, from planetary science to a description of fundamental particles. Taking as starting point a theory describing a time-dependent scalar field, in this...
5. Moving black holes: energy extraction, absorption cross-section and the ring of fire
Mr Rodrigo Vicente (CENTRA - Instituto Superior Técnico)
We consider the interaction between a plane wave and a (counter-moving) black hole. We show that energy is transferred from the black hole to the wave, giving rise to a negative absorption cross-section. Moving black holes absorb radiation and deposit energy in external radiation. Due to this effect, a black hole of mass $M$ moving at relativistic speeds in a cold medium will appear...
55. New frontiers in cosmology using gravitational waves
Suvodip Mukherjee (IAP)
Cosmic microwave background and large scale structure missions have played a crucial role in constructing the standard model of cosmology. The upcoming missions in astrophysics and cosmology are going to explore the Universe over a wide range of redshifts using both electromagnetic waves and gravitational waves. I will introduce a few new frontiers in cosmology which will open-up from the...
45. New solutions in tensor-multi-scalar theories of gravity
Stoytcho Yazadjiev (Sofia University)
In this talk I will present new solutions describing neutron stars and black holes in the tensor-multi-scalar theories of gravity. Some astrophysical implications of the solutions will be also discussed.
52. Novel Wormhole Solutions in Einstein-Scalar-Gauss-Bonnet Theories
Mr Georgios Antoniou ( University of Nottingham )
Novel wormholes are obtained in Einstein-scalar-Gauss-Bonnet theory for several coupling functions. The wormholes may feature a single-throat or a double-throat geometry. The scalar field may asymptotically vanish or be finite, and it may possess radial excitations. The domain of existence is fully mapped out for several forms of the coupling function.
13. Physics Beyond General Relativity: Theoretical and Observational Constraints.
Prof. Sudipta Sarkar (IIT Gandhinagar)
The study of the effects of higher curvature terms is a major research theme of contemporary gravitational physics. In this talk, I will present a comprehensive study of the higher curvature gravity and various observational & theoretical constraints. The inclusion of these terms leads to exciting new possibilities, e.g., gravitational and electromagnetic perturbations following different...
37. Probing black holes with X-rays and gravitational waves
Sourabh Nampalliwar (Eberhard Karls University of Tuebingen)
Einstein's theory has been the standard theory of gravity for nearly a century. Alternatives to and extensions of it have been proposed to address various issues. With advances in technology, these theories are becoming testable, especially in the strong field regime around black holes. In this talk, I will describe a theory agnostic approach to probe the nature of black holes. I will provide...
69. Quantum constitutive equations for finite temperature Dirac fermions under rotation
Victor Eugen Ambrus (West University of Timisoara)
The experimental confirmation of the polarization of the Lambda hyperons observed in relativistic heavy ion collisions experiments [1] has renewed the interest in anomalous transport of fermions due to the spin-orbit coupling (e.g., through the chiral vortical effect [2]). Using a non-perturbative technique [3], exact expressions are derived for the thermal expectation values of the...
7. Rotating and non rotating, non singular compact objects
Prof. Anupam Mazumdar
I will discuss the physics of non-singular compact objects which are primarily made up of gravitons and are as compact as the Buchdahl star. I will discuss how to construct such a system within higher derivative theories of gravity which incorporate nonlocal effects at the level of gravitational interactions. I will in fact construct both static and rotating solutions in this regard...
46. Rotating Solitonic Vacuum in TMST
Lucas Gardai Collodel
In the context of a special class of tensor-multi-scalar theories of gravity for which the target-space metric admits as a Killing field a generator of a one-parameter group of point transformations under which the theory is invariant, we present rotating vacuum solutions, namely with no matter fields. These objects behave like nontopological solitons, whose primary stability is due to the...
86. Searching tidally disrupted white dwarfs to find intermediate mass black holes
Mrs Martina Toscani
My poster is focused on two topics. The main one is related to intermediate mass black holes (IMBHs). Some recent observations suggest that IMBHs exist in our Universe. Yet, none of them has been confirmed so far. A possible way to prove the existence of these elusive objects can be the study of tidal disruption events of white dwarfs (WDs). Indeed, if a WD wanders too close to an IMBH, it...
72. Signatures of Unimodular Gravity
Raquel Santos Garcia
Unimodular Gravity is an infrared modification of General Relativity where the cosmological constant is replaced by an integration constant, thus free from radiative corrections and effectively solving one piece of the cosmological constant problem. Apart from this, both theories enjoy the same classical dynamics, dictated by Einstein equations, and they were thought to be equivalent for any...
56. Smarr formulas for Einstein-Maxwell-dilaton stationary spacetimes with line singularities
Igor Bogush (Lomonosov Moscow State University)
We generalize the recent derivation http://arxiv.org/abs/arXiv:1908.10617 of the Smarr formulas for Einstein-Maxwell stationary axisymmetric asymptotically locally flat spacetimes with line singularities to the Einstein-Maxwell-dilaton (EMD) theory with an arbitrary dilaton coupling constant. The line singularities include the Dirac and Misner strings for spacetimes with magnetic and NUT...
38. Stationary vector clouds around Kerr black holes
Mr Nuno M. Santos (CENTRA, Instituto Superior Técnico, Universidade de Lisboa)
Kerr black holes are known to support massive bosonic test fields whose phase angular velocity fulfills the synchronization condition, i.e. the threshold of superradiance. The presence of these real-frequency bound states at the linear level, commonly dubbed stationary clouds, is intimately linked to existence of Kerr black holes with bosonic hair at the non-linear level. These configurations...
59. Strong cosmic censorship in charged black holes with a positive cosmological constant
Kyriakos Destounis (Theoretical Astrophysics, University of Tübingen)
The strong cosmic censorship conjecture has recently regained a lot of attention in charged and rotating black holes immersed in de Sitter space. Such spacetimes possess Cauchy horizons in the internal region of the black hole. The stability of Cauchy horizons is intrinsically connected to the decay of small perturbations exterior to the event horizon. As such, the validity of strong cosmic...
65. The Ultra-relativistic Expansion of General Relativity
Dennis Hansen (ETH Zürich)
In this talk I will discuss the ultra-relativistic expansion of general relativity. The ultra-relativistic expansion in the speed of light captures very strong gravitational field effects and extreme astrophysical phenomena in a simplifying setting compared to full GR. Surprisingly it also turns out that the ultra-relativistic expansion is closely related to the non-relativistic expansion,...
15. Thermally Quasi-stable Radiant Black Holes
Partha Sarathi Majumdar (School of Physical Sciences, Indian Association for the Cultivation of Science)
We use loop quantum gravity inspired holographic thermal stability criteria to establish the existence of regions in parameter space of charged rotating black holes away from extremality, where partial fulfillment of the stability criteria is possible. Physical implications of our results will be discussed.
60. Tidal effect on scalar cloud:numerical simulations
Taishi Ikeda (Instituto Superior Tecnico)
Axion and axion-like particle are the candidates of the dark matter. Due to the super-radiant instability, these fields are amplified, and can localize around Kerr BH, as axion clouds. Since the axion clouds emit gravitational waves, it is important to analyze several properties of the axion cloud around BHs. Here, we study the axion cloud around binary BH (BBH). The axion cloud around a BH of...
39. Time-domain metric reconstruction using the Hertz potential
Mr Oliver Long (University of Southampton)
Historically the Teukolsky equation corresponding to gravitational perturbations is solved for the Weyl scalars. However, reconstructing the metric from these scalars involves solving a fourth order PDE to obtain the Hertz potential and then another second order PDE to construct the metric perturbation. Solving the (adjoint) Teukolsky equation for the Hertz potential directly simplifies the...
67. Total probability for fermion pair production in external fields on de Sitter space-time
Diana-Cristiana Popescu (West University of Timisoara)
We are illustrating a procedure for computing the total probabilities corresponding to the processes of fermion pair production in electric fields and in the field of a magnetic dipole on de Sitter space-time. The total probabilities are preserving the dependence on the expansion parameter, proving the fact that the results are consistent with the ones obtained for probability densities. The...
2. Wormholes in $R^2$-gravity
Pradyumn Sahoo (Birla Institute of Technology and Science-Pilani, Hyderabad Campus)
We propose, as a novelty in the literature, the modelling of wormholes within the particular case of the $f(R,T)$ gravity, namely $f(R,T)=R+\alpha R^{2}+\lambda T$, with $R$ and $T$ being the Ricci scalar and trace of the energy-momentum tensor, respectively, while $\alpha$ and $\lambda$ are constants. Although such a functional form application can be found in the literature, those concern to...
Journal of Environmental Science International
The Korean Environmental Sciences Society (한국환경과학회)
The society publishes and distributes journals, conference proceedings and books; holds academic conferences and promotes domestic and international academic exchange; organizes environmental science and technology seminars and workshops; pursues the convergence of environment-related technology and linkages with science and engineering research foundations; and carries out other research necessary to achieve the purpose of the society. Subject areas: Atmospheric environment - Biology/Ecology - Environmental chemistry - Fisheries/Marine - Water resource environment - Waste water/Waste - Green environment - Energy resources - Environmental management - Fusion environment - Agriculture, forestry, animal husbandry and food
http://submission.kenss.or.kr/
Ecological Studies on the Vegetation of Castanea crenata Community and Both Sides
Huh, Man-Kyu;Cho, Joo-Soo;Jang, Gi-Bong 1
https://doi.org/10.5322/JES.2008.17.1.001
The Castanea crenata community, which is associated with human activities, has recently extended around the fields of Saengbiryang-myeon, Sanseong-gun, Gyeongsangnam-do. The C. crenata community and its outskirts were investigated for several ecological parameters, and the results can be summarized as follows. C. crenata prevails in the plantation area, whereas Pinus densiflora and Quercus mongolica prevail in its outskirts. The mean species diversity of the plantation was lower than that of the natural forests. In the stratification of the investigated areas, the overstory tree layer was dominant in the plantation zone, whereas the dominant layers in the natural forest were the understory tree layer, shrubs, and herbs. Plant biomass and net production, estimated from the degree of green naturality, were much higher in the natural forests than in the plantation community. Least significant difference (LSD) post hoc analysis revealed that the P. densiflora and Q. mongolica communities had significantly greater densities than the C. crenata community.
Simultaneous Removal of H2S, NH3 and Toluene in a Biofilter Packed with Zeocarbon Carrier
Park, Byoung-Gi;Shin, Won-Sik;Jeong, Yong-Shik;Chung, Jong-Shik 7
Simultaneous removal of $NH_3,\;H_2S$ and toluene in a contaminated air stream was investigated over 185 days in a biofilter packed with Zeocarbon granules as the microbial support. In this study, multiple microorganisms, including Nitrosomonas and Nitrobacter for nitrogen removal, Thiobacillus thioparus (ATCC 23645) for $H_2S$ removal, and Pseudomonas aeruginosa (ATCC 15692), Pseudomonas putida (ATCC 17484) and Pseudomonas putida (ATCC 23973) for toluene removal, were used simultaneously. The empty bed residence time (EBRT) was 40-120 seconds and the feed (inlet) concentrations of $NH_3,\;H_2S$ and toluene were 0.02-0.11, 0.05-0.23 and 0.15-0.21 ppmv, respectively. The observed removal efficiency was 85%-99% for $NH_3$, 100% for $H_2S$, and 20-90% for toluene, respectively. The maximum elimination capacities were 9.3, 20.6 and $17g/m^3/hr\;for\;NH_3,\;H_2S$ and toluene, respectively. The results of the kinetic model analysis showed that there was no particular evidence of interactions or inhibitions among the microorganisms, and that the three biodegradation reactions took place independently within a finite area of the biofilm developed on the surface of the Zeocarbon carrier.
Total Phenolic Compounds and Flavonoids in the Parts of Artichoke (Cynara scolymus L.) in Viet Nam
Thi, Bui Ha Thu;Park, Moon-Ki 19
Artichoke extracts are widely used alone or in association with other herbs for embittering alcoholic and soft drinks and for preparing herbal teas or herbal medicinal products in Viet Nam. The objective of this paper was to screen the flavonoid and total phenolic compound contents in the parts of artichoke (Cynara scolymus L.), namely flowers, leaves, roots, trunks, and stumps. The total phenolic compounds and flavonoids in the parts of artichoke were extracted by three extraction methods: methanol extraction (EM1), a mixed methanol-water method (EM2), and water extraction (EM3). Total phenolic compounds and flavonoids were determined by UV/VIS and HPLC techniques. Apigenin 7-O-glucoside, cynarin, narirutin, gallic acid, and caffeic acid were found to be the main flavonoid constituents in all parts of artichoke. The values of total phenolic compounds and flavonoids obtained by EM3 were higher than those obtained by EM1 and EM2. Furthermore, the results of this study revealed that total phenolic compounds and flavonoids, obtained by these convenient extraction methods, may quickly show the efficacy of artichoke in terms of both quality and quantity.
Ecological Recovery of Contaminated Dredged Materials in Masan Bay, Korea
Lee, Chan-Won;Jeon, Hong-Pyo;Ha, Kyung-Ae 29
A large amount ($2.1{\times}10^6m^3$) of polluted sediment was dredged from Masan Bay and deposited in the Gapo confined area, Masan, Korea. Six representative sediments were obtained and analyzed for the components of concern. The data were discussed in relation to the benthic species and their distribution. The toxicological effects of the analyzed sediments were judged to range from ERL to ERM for copper and zinc, and to be at the ERL level for cadmium, chromium, lead and nickel, according to the Adverse Biological Effects guidelines. The dredging index (DI) of sediments stabilized for 10 years since dumping at the confined site was calculated and compared with the DI values of the dredged sediment itself. DI values decreased from 0.67 to $0.07{\sim}0.18$, which reflects that a DI value of less than 0.2 is good for benthos in the sediment, owing to the natural recovery of the dredged materials. This ecological recovery was confirmed in the confined area as a habitat for benthic organisms.
Decolorization of Rhodamine B by Electro Fenton-like Reaction
Kim, Dong-Seog;Park, Young-Seek 37
The electro-chemical decolorization of Rhodamine B (RhB) in water was carried out by an electro Fenton-like process. The effects of electrode distance, material and shape, NaCl concentration, current, electric power, $H_2O_2$ and pH were studied. The results showed that the decrease of RhB concentration in the Fe(+)-Fe(-) electrode system was greater than in the other electrode systems. The decrease of RhB concentration was not affected by electrode distance or shape. Decolorization by the electro Fenton-like reaction, in which $H_2O_2$ was added to the electrolysis, was higher than by electrolysis alone. Addition of NaCl decreased the electric consumption. The lower the pH, the faster the initial reaction rate and the earlier the reaction termination time observed.
Health Risk Assessment of Occupants in the Small-Scale Public Facilites for Aldehydes and VOCs
Yang, Ji-Yeon; Kim, Ho-Hyun;Shin, Dong-Chun;Kim, Yoon-Shin;Sohn, Jong-Ryeul;Lim, Jun-Hwan;Lim, Young-Wook 45
This study assessed the lifetime cancer and non-cancer risks of exposure of workers and users at public facilities in Korea to volatile organic compounds (VOCs). We measured the concentrations of two aldehydes and five VOCs in indoor air at 424 public buildings covering 8 kinds of public facilities (70 movie theaters, 86 offices, 86 restaurants, 70 academies, 22 auditoriums, 30 PC-rooms, 30 singing-rooms and 30 bars) all over the country. The human exposure doses and risks were estimated using the average use time and frequency for facility users and office workers, respectively. Lifetime excess cancer risks (ECRs) were estimated for the carcinogens (formaldehyde, acetaldehyde, and benzene), and hazard quotients (HQs) were estimated for the non-carcinogens (toluene, ethylbenzene, xylene, and styrene). The average ECRs of formaldehyde and benzene for facility workers and users were at the $1{\times}10^{-3}{\sim}1{\times}10^{-4}\;and\;1{\times}10^{-4}{\sim}1{\times}10^{-5}$ levels, respectively, in all facilities. The HQs of the four non-carcinogens did not exceed 1.0 for all subjects in all facilities. The estimated ECRs for restaurants and auditoriums were the highest, and PC-rooms and bars were the next highest facilities. Furthermore, people in smoking facilities had the highest cancer risk. Higher ECRs of formaldehyde and benzene were observed in indoor smoking facilities such as restaurants and auditoriums. Higher HQs of toluene and xylene were observed in restaurants and office buildings.
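For readers unfamiliar with the two metrics, the conventional definitions are roughly as follows (a generic formulation, not necessarily the authors' exact exposure model; the symbols are standard placeholders): $$\mathrm{ECR} = \mathrm{LADD}\times\mathrm{SF},\qquad \mathrm{HQ} = \frac{\mathrm{ADD}}{\mathrm{RfD}},$$ where LADD is the lifetime average daily dose of a carcinogen, SF its cancer slope factor, ADD the average daily dose of a non-carcinogen, and RfD its reference dose; an HQ below 1.0, as reported above, is read as no appreciable non-cancer risk.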
Distribution Properties of Heavy Metals in Goseong Cu Mine Area, Kyungsangnam-do, Korea and Their Pollution Criteria: Applicability of Frequency Analysis and Probability Plot
Na, Choon-Ki;Park, Hyun-Ju 57
The frequency analysis and the probability plot were applied to the heavy metal contents of soils collected from the Goseong Cu mine area as a statistical method for determining the threshold value able to partition a population of widely dispersed heavy metal contents into background and anomalous populations. Almost all the heavy metal contents of the soils showed positively skewed distributions, and their cumulative percentage frequencies plotted as curved lines on a logarithmic probability plot, which represent a mixture of two or more overlapping populations. Total Cu, Pb and Cd data and extractable Cu and Pb data could be partitioned into background and anomalous populations by using the inflection in each curve. The others showed a normally distributed population or largely overlapped populations. The threshold values obtained from the frequency distributions replotted with the partitioned populations were Cu 400 mg/kg, Pb 450 mg/kg and Cd 3.5 mg/kg in total contents and Cu 40 mg/kg and Pb 12 mg/kg in extractable contents, respectively. The thresholds for total contents are much higher than the tolerable level of soil pollution proposed by Kloke (Cu 100 mg/kg, Pb 100 mg/kg, Cd 3 mg/kg), but those for extractable contents did not exceed the concern level of soil pollution proposed by the Ministry of Environment (Cu 50 mg/kg, Pb 100 mg/kg). When the threshold values were used as the criteria of soil pollution in the study area, $9{\sim}19%$ of the investigated soil population was at a polluted level. The spatial distributions of heavy metal contents greater than the threshold values showed that soils polluted with heavy metals are restricted to the mountain soils in the vicinity of the abandoned mines.
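As an illustration of the probability-plot idea (synthetic data only; the thresholds above come from the authors' own soil measurements), one could plot the cumulative percentage frequency of log-transformed contents against standard normal quantiles and look for the inflection separating background from anomalous values:

# Illustrative lognormal probability plot for threshold screening
# (synthetic Cu data; not the paper's dataset).
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
background = rng.lognormal(mean=np.log(60), sigma=0.4, size=180)   # mg/kg
anomalous = rng.lognormal(mean=np.log(600), sigma=0.3, size=20)
cu = np.sort(np.concatenate([background, anomalous]))

# Cumulative percentage frequency (Hazen plotting position) vs normal quantiles.
cum_freq = (np.arange(1, cu.size + 1) - 0.5) / cu.size
quantiles = stats.norm.ppf(cum_freq)

plt.plot(quantiles, np.log10(cu), ".")
plt.xlabel("standard normal quantile")
plt.ylabel("log10 Cu content (mg/kg)")
plt.title("An inflection in the curve suggests a background/anomalous split")
plt.show()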
Effect of Vapor Pressure of Adsorbate on Adsorption Phenomena
Kim, Sang-Won;Kwon, Jun-Ho;Kang, Jeong-Hwa;Song, Seung-Koo 67
The adsorption process is largely influenced by the pore structures of adsorbents and the physical properties of adsorbates and adsorbents. Previous studies of this laboratory focused on the role of the pore structures of adsorbents, and found that pores with diameters larger than that of the adsorbate are filled easily. In this study, the effects of the physical and chemical properties of adsorbates and adsorbents, such as pore size distribution and vapor pressure, on adsorption were investigated more thoroughly at an adsorbate concentration of 1000 ppm. The adsorption in pores of $2{\sim}4$ times the adsorbate's diameter could be explained by the space-filling concept, but some condensation phenomena appeared at larger pore ranges. The errors between the adsorbed amounts of non-polar adsorbates and the amounts calculated by considering these factors were found to be 44.46% positively and -142% negatively. When vapor pressure is considered, the errors between the adsorbed amounts of non-polar adsorbates and the calculated amounts were in the range of $1.69%{\sim}32.25%$ positively and $-1.08%{\sim}-63.10%$ negatively.
The Development of Visitor Counting System Based on Ubiquitous Sensor Networks in National Park: Case Study of Nogodan Area in Chirisan National Park
Lee, Ju-Hee; Sim, Kyu-Won; Bae, Min-Ki
The purpose of this study was to develop a national park visitor counting system using a ubiquitous sensor network. The system is composed of sensor nodes, a sink node, gateways, a CDMA module, a server, and clients. The results of the study were: 1) stable data transmission was possible within 100 meters between sensor nodes, and 2) the developed counting sensor system showed a network communication stability level of 88.3 percent on 1.2 m wide trails. When installed in concentrated use areas or at trail forks in national parks, the visitor counting system will not only provide reliable visitor counts but also help improve the quality of national park visitor services, manage park facilities and natural resources more efficiently, and achieve an information-oriented national park system.
Temporal and Spatial Variability of Chlorophyll a in the Northern East China Sea using Ocean Color Images in Summer
Kim, Sang-Woo; Lim, Jin-Wook; Jang, Lee-Hyun
Temporal and spatial variability of chlorophyll a (Chl-a) in the northern East China Sea (ECS) is described using both 8-day composite images from SeaWiFS (Sea-viewing Wide Field-of-view Sensor) and in-situ data collected in August and September during 2000-2005. Ocean color imagery showed that Chl-a concentrations on the continental shelf shallower than 50 m in the ECS were more than 10 times higher than those of the Kuroshio area throughout the year. Higher yearly mean Chl-a concentrations (above $5mg/m^3$) were observed along the western part of the shelf near the coast of China. The standard deviation also showed greater spatial variability near $122-124^{\circ}E$, where the western region of the East China Sea exceeded the eastern region. In particular, a significant Chl-a concentration of up to $9mg/m^3$ was found west of $125^{\circ}E$ in the in-situ data of 2002. The higher in-situ Chl-a concentrations coincided with low-salinity waters of below 30 psu, indicating a close relationship between the horizontal distribution of Chl-a and low-salinity water.
The Study on the Remediation of Contaminated Soil as TPH using SVE and Bioremediation
Kim, Jung-Kwon
This study examined soils contaminated with TPH (total petroleum hydrocarbons, used as the indicator) treated using SVE (soil vapor extraction) and biological methods. The results are as follows. Water content in the polluted soils slowly decreased from 15% under the initial experimental conditions to 10% under the final conditions; purification of polluted soils by a bioventing system is therefore likely to hinder microbial activity because of the decrease in water content. By the 25th day of the experiment, the removal rate of TPH in the upper reaction chamber was half the initial removal rate, and the removal rate in the lower reaction chamber was 45%, at a concentration of 995.4 mg/kg. When bioventing was used, the removal rate at the 14th day of the experiment was 53%, a shortening of 7 days. Since the bioventing method limits microbial activity by dewatering the polluted soil, the SVE method is likely preferable for removing TPH in situ. The reactor that included microbes and nutrients showed a somewhat higher TPH removal rate than the reactor that included nutrients only during the experimental period. In general, the concentration showed two peaks and then decreased, followed by slight variation at low concentration levels. Hence, in contrast to the SVE treatment, the biological treatment tended to show continuous repetitive concentration peaks followed by concentration decreases.
Modification of EPDM Rubbers for Enhancement of Environmental Durability of Aerator Membrane
Ahn, Won-Sool
A study on enhancing the environmental durability of EPDM rubber materials for aerator membranes was performed using a butyl rubber as a modifier. A conventional EPDM rubber formulation was evaluated, on the basis of a chloroform immersion test, as having an oil content of about 26.0 wt% or more. These oils would be gradually and continuously depleted from the aerator membrane when it is directly exposed to wastewater or chemically corrosive fluids, making the membrane less flexible and its performance worse. To improve this, a butyl rubber (IIR) was utilized as the modifier for a low-ENB type EPDM rubber formulation with low oil content. The environmental durability of the IIR-modified EPDM rubber material was expected to be greatly enhanced compared to the conventional one, while mechanical and performance properties such as elongation, tensile strength, and air bubble size were still maintained as good as those of the conventional formulation. Furthermore, TGA analysis of the IIR-modified EPDM material showed that IIR and EPDM are partially compatible and that the initial degradation temperature of the IIR-modified EPDM could be somewhat increased, exhibiting enhanced compatibility among the components and, thereby, further enhanced environmental durability.
Improvement Effect of Water Quality along the Water Discharged Area by Water Dispersion from the Sewage Disposal Plant
Kim, Dong-Soo; Park, Jong-Tae; Kim, Yong-Gu; Park, Sung-Chun
A baseline monthly BOD concentration of $6{\sim}13mg/L$ has been observed at the Geukrak bridge point, the target of this research, indicating water quality below the established grade. Given this condition, ecological damage from upstream to downstream of the bridge can be conjectured; moreover, non-structural loss of the ecosystem appears to have worsened between the upper and lower reaches of the Yeoungsan River. In this research, an eco-corridor between the upper and lower reaches of the river should be secured, ecological damage should be cut off, and a dispersed discharge method, an enhancement of the existing method at the 1st sewage plant in Gwangju, should be introduced to secure a diverse aquatic ecosystem; the conditions of the scenarios suggested in this research were applied to a water quality model, and the improvement effect on water quality adjacent to the river was analyzed. From the results, the Case3-Type1 scenario is considered the best. Under Case3-Type1, with no concentrated discharge, the BOD concentration increased by 0.07 mg/L downstream at the Yeoungbon B point (Haksan Bridge), but a water quality improvement of $0.24{\sim}2.87mg/L$ is estimated to have occurred in the area of water quality deterioration.
Soil-Vapor Survey on Soil-Remediation by EMPLEX Collector
Kim, Jung-Sung
Laboratory analytical results of 22 sets of hydrophobic adsorbent coils containing surface soil vapor, together with two soil samples collected by conventional intrusive methods from each boring location at two active dry cleaning facilities in the State of Illinois, U.S.A., are presented to evaluate the performance of soil-vapor surveys. The most critical factor determining the effectiveness of a soil-vapor survey is the distance from the soil-vapor sampling device to the actual contamination, which is a function of soil porosity, permeability, primary lithology, and other geological and hydrogeological site-specific parameters; it can also be affected by the history of contaminant-generating operations. The laboratory analytical results in this study showed that the longer dry cleaning operation history (i.e., 50 years) and the presence of fine sand beneath Site B allowed the contaminants to migrate farther and deeper over a fixed time compared to Site A (i.e., 35 years and silty clay), so the soil-vapor survey alone is not likely the most effective environmental site investigation method for Site B. For Site A, however, the soil-vapor survey successfully screened the site and identified the location with the highest soil concentration of chlorinated solvents.
The Content of Heavy Metals in Manufactured Herbal Medicines
Jung, Dae-Hwa; Park, Moon-Ki
This study evaluates the safety, with respect to heavy metals, of medicines prescribed on the basis of the herbal medicinal system and oriental medical prescriptions that have recently been widely circulated. Three pill-type, four extract granule-type, and four liquid-type herbal medicines were purchased to compare and analyze their contents of heavy metals harmful to the human body, such as As, Pb, Cd, and Hg. The concentration of Pb was found to be 0.552 ppm in Sachiltang, 2.552 ppm in Anjungjogiwhan, and 1.735 ppm in Cheongsangbohwawhan among the pill-type herbal medicines; among the liquid types, Maekmundongtang was 0.002 ppm, Galgeuntang 0.003 ppm, Sangwhatang 0.004 ppm, and 20jeon Daebotang 0.0185 ppm. Among the granule types, the Pb concentration was 0.322 ppm in Banhasasimtang, 0.47 ppm in Eungjosan, 0.29 ppm in Yukmijihwangtang, and 0.64 ppm in Socheongryongtang. Of the pill, granule, and liquid types tested for heavy metal concentrations, the liquid-type herbal medicines were found to be relatively the safest. Careful control is considered necessary at the stages of raw material treatment, manufacturing, and packaging, because these herbal medicines are directly taken in and absorbed into the human body through the final treatment process. | CommonCrawl
Optimal contraception control for a nonlinear population model with size structure and a separable mortality
Global attracting set, exponential decay and stability in distribution of neutral SPDEs driven by additive $\alpha$-stable processes
Hopf bifurcation in a model of TGF-$\beta$ in regulation of the Th 17 phenotype
Jisun Lim 1, Seongwon Lee 2, and Yangjin Kim 3
School of Biological Sciences, Seoul National University, Seoul 08826, South Korea
Division of Mathematical Models, National Institute for Mathematical Sciences, Daejeon 34047, South Korea
Department of Mathematics, Konkuk University, Seoul, 05029, South Korea
Received September 2015 Revised September 2016 Published November 2016
Airway exposure to lipopolysaccharide (LPS) is shown to regulate type I and type II helper T cell induced asthma. While high doses of LPS drive Th1- or Th17-immune responses, low LPS levels lead to Th2 responses. In this paper, we analyze a mathematical model of Th1/Th2/Th17 asthma regulation suggested by Lee (S. Lee, H.J. Hwang, and Y. Kim, Modeling the role of TGF-$\beta$ in regulation of the Th17 phenotype in the LPS-driven immune system, Bull Math Biol., 76 (5), 1045-1080, 2014) and show that the system can undergo a Hopf bifurcation at a steady state of the Th17 phenotype for high LPS levels in the presence of time delays in inhibition pathways of two key regulators: IL-4/Th2 activities ($H$) and TGF-$\beta$ levels ($G$). The time delays affect the phenotypic switches among the Th1, Th2, and Th17 phenotypes in response to time-dependent LPS doses via nonlinear crosstalk between $H$ and $G$. An extended reaction-diffusion model also predicts coexistence of these phenotypes under various biochemical and bio-mechanical conditions in the heterogeneous microenvironment.
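The delay-induced Hopf bifurcation described in the abstract can be illustrated with a much simpler toy system than the Th1/Th2/Th17 model analyzed in the paper: for the scalar delayed negative-feedback equation $x'(t)=-\gamma x(t-\tau)$, the equilibrium loses stability and oscillations emerge once $\gamma\tau$ exceeds $\pi/2$. The sketch below integrates this toy equation with a basic Euler scheme; it is only a qualitative illustration of how increasing a delay can trigger oscillations, not the authors' model.

```python
# Toy illustration of delay-induced oscillations (not the paper's immune model):
# x'(t) = -gamma * x(t - tau). Oscillatory instability sets in when gamma*tau > pi/2.
import numpy as np

def simulate(gamma, tau, dt=0.01, t_end=60.0, history=1.0):
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    x = np.full(n_steps + n_delay, history)   # constant history on [-tau, 0]
    for k in range(n_delay, n_steps + n_delay - 1):
        x[k + 1] = x[k] + dt * (-gamma * x[k - n_delay])  # explicit Euler step
    return x[n_delay:]

short_delay = simulate(gamma=1.0, tau=0.5)   # gamma*tau = 0.5 < pi/2 -> decays to zero
long_delay = simulate(gamma=1.0, tau=2.0)    # gamma*tau = 2.0 > pi/2 -> growing oscillations

print("final amplitude, short delay:", abs(short_delay[-1]))
print("final amplitude, long delay: ", abs(long_delay[-1]))
```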
Keywords: Mathematical model, asthma, delay differential equation, TGF-$\beta$, Th1/Th2/Th17, Hopf bifurcation.
Mathematics Subject Classification: Primary: 92C45, 92C50; Secondary: 92B0.
Citation: Jisun Lim, Seongwon Lee, Yangjin Kim. Hopf bifurcation in a model of TGF-$\beta$ in regulation of the Th 17 phenotype. Discrete & Continuous Dynamical Systems - B, 2016, 21 (10) : 3575-3602. doi: 10.3934/dcdsb.2016111
S. Al-Muhsen, S. Letuve, A. Vazquez-Tello, M. A. Pureza, H. Al-Jahdali, A. S. Bahammam, Q. Hamid and R. Halwani, Th17 cytokines induce pro-fibrotic cytokines release from human eosinophils, Respir. Res., 14 (2013). doi: 10.1186/1465-9921-14-34. Google Scholar
T. Alarcón, H. M. Byrne and P. K. Maini, Towards whole-organ modelling of tumour growth, Prog. Biophys. Mol. Biol., 85 (2004), 451. Google Scholar
J. F. Alcorn, C. R. Crowe and J. K. Kolls, $T_H$17 cells in asthma and COPD, Annu. Rev. Physiol., 72 (2010), 495. Google Scholar
O. Arino, M. L. Hbid and E. Ait Dads, Delay Differential Equations and Applications, Springer Netherlands, (2006). doi: 10.1007/1-4020-3647-7. Google Scholar
K. J. Baek, J. Y. Cho, P. Rosenthal, L. E. C. Alexander, V. Nizet and D. H. Broide, Hypoxia potentiates allergen induction of HIF-1$\alpha$, chemokines, airway inflammation, TGF-$\beta$1, and airway remodeling in a mouse model, Clin. Immunol., 147 (2013), 27. Google Scholar
R. L. Bar-Or and L. A. Segel, On the role of a possible dialogue between cytokine and TCR-presentation mechanisms in the regulation of autoimmune disease, J. Theor. Biol., 190 (1998), 161. doi: 10.1006/jtbi.1997.0545. Google Scholar
U. Behn, H. Dambeck and G. Metzner, Modeling Th1-Th2 regulation, allergy, and hyposensitization, in Dynamical Modeling in Biotechnology, (2001), 227. doi: 10.1142/9789812813053_0011. Google Scholar
B. S. Bochner, B. J. Undem and L. M. Lichtenstein, Immunological aspects of allergic asthma, Annu. Rev. Immunol., 12 (1994), 295. doi: 10.1146/annurev.iy.12.040194.001455. Google Scholar
R. E. Callard and A. J. Yates, Immunology and mathematics: Crossing the divide, Immunology, 115 (2005), 21. doi: 10.1111/j.1365-2567.2005.02142.x. Google Scholar
J. Carneiro, J. Stewart, A. Coutinho and G. Coutinho, The ontogeny of class-regulation of CD4$^+$ T lymphocyte populations, Int. Immunol., 7 (1995), 1265. doi: 10.1093/intimm/7.8.1265. Google Scholar
C. Clemedson and A. Nelson, General biology: The adult organism, in Mechanisms in Radiobiology: Multicellular Organisms (eds. M. Errera and A. Forssberg), (1960), 95. doi: 10.1016/B978-1-4832-2829-7.50010-1. Google Scholar
L. Cosmi, F. Liotta, E. Maggi, S. Romagnani and F. Annunziato, Th17 cells: New players in asthma pathogenesis, Allergy, 66 (2011), 989. doi: 10.1111/j.1398-9995.2011.02576.x. Google Scholar
E. Cutz, H. Levison and D. M. Cooper, Ultrastructure of airways in children with asthma, Histopathology, 2 (1978), 407. doi: 10.1111/j.1365-2559.1978.tb01735.x. Google Scholar
C. Dong, Diversification of T-helper-cell lineages: Finding the family root of IL-17-producing cells, Nat. Rev. Immunol., 6 (2006), 329. doi: 10.1038/nri1807. Google Scholar
C. Dong, $T_H$17 cells in development: An updated view of their molecular identity and genetic programming, Nat. Rev. Immunol., 8 (2008), 337. Google Scholar
S. C. Eisenbarth, D. A. Piggott, J. W. Huleatt, I. Visintin, C. A. Herrick and K. Bottomly, Lipopolysaccharide-enhanced, toll-like receptor 4-dependent T helper cell type 2 responses to inhaled antigen, J. Exp. Med., 196 (2002), 1645. doi: 10.1084/jem.20021340. Google Scholar
R. L. Elliott and G. C. Blobe, Role of transforming growth factor beta in human cancer, J. Clin. Oncol., 23 (2005), 2078. Google Scholar
M. A. Fishman and A. S. Perelson, Th1/Th2 differentiation and cross-regulation, Bull. Math. Biol., 61 (1999), 403. doi: 10.1006/bulm.1998.0074. Google Scholar
J. E. Gereda, D. Y. M. Leung, A. Thatayatikom, J. E. Streib, M. R. Price, M. D. Klinnert and A. H. Liu, Relation between house-dust endotoxin exposure, type 1 T-cell development, and allergen sensitisation in infants at high risk of asthma, Lancet, 355 (2000), 1680. doi: 10.1016/S0140-6736(00)02239-X. Google Scholar
L. Gorelik, S. Constant and R. A. Flavell, Mechanism of transforming growth factor $\beta$-induced inhibition of T helper type 1 differentiation, J. Exp. Med., 195 (2002), 1499. Google Scholar
L. Gorelik and R. A. Flavell, Abrogation of TGF$\beta$ signaling in T cells leads to spontaneous T cell differentiation and autoimmune disease, Immunity, 12 (2000), 171. Google Scholar
F. Gross, G. Metznerb and U. Behn, Mathematical modelling of allergy and specific immunotherapy: Th1-Th2-Treg interactions, J. Theor. Biol., 269 (2011), 70. doi: 10.1016/j.jtbi.2010.10.013. Google Scholar
G. Grünig, M. Warnock, A. E. Wakil, R. Venkayya, F. Brombacher, D. M. Rennick, D. Sheppard, M. Mohrs, D. D. Donaldson, R. M. Locksley and D. B. Corry, Requirement for IL-13 independently of IL-4 in experimental asthma, Science, 282 (1998), 2261. Google Scholar
J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, vol. 42 of Applied Mathematical Sciences, 1st edition, (1983). doi: 10.1007/978-1-4612-1140-2. Google Scholar
I. Gutcher and B. Becher, APC-derived cytokines and T cell polarization in autoimmune inflammation, J. Clin. Invest., 117 (2007), 1119. doi: 10.1172/JCI31720. Google Scholar
J. K. Hale, Theory of Functional Differential Equations, vol. 3 of Applied Mathematical Sciences, Springer-Verlag New York, (1977). Google Scholar
Q. Hamid and M. Tulic, Immunobiology of asthma, Annu. Rev. Physiol., 71 (2009), 489. doi: 10.1146/annurev.physiol.010908.163200. Google Scholar
L. E. Harrington, R. D. Hatton, P. R. Mangan, H. Turner, T. L. Murphy, K. M. Murphy and C. T. Weaver, Interleukin 17-producing $CD4^+$ effector T cells develop via a lineage distinct from the T helper type 1 and 2 lineages, Nat. Immunol., 6 (2005), 1123. doi: 10.1038/ni1254. Google Scholar
L. E. Harrington, P. R. Mangan and C. T. Weaver, Expanding the effector CD4 T-cell repertoire: The Th17 lineage, Curr. Opin. Immunol., 18 (2006), 349. doi: 10.1016/j.coi.2006.03.017. Google Scholar
B. D. Hassard, N. D. Kazarinoff and Y.-H. Wan, Theory and Applications of Hopf Bifurcation, vol. 41 of London Mathematical Society Lecture Note Series, Cambridge University Press, (1981). Google Scholar
N. A. Hosken, K. Shibuya, A. W. Heath, K. M. Murphy and A. O'Garra, The effect of antigen dose on CD4$^+$ T helper cell phenotype development in a T cell receptor-$\alpha\beta$-transgenic model, J. Exp. Med., 182 (1995), 1579. doi: 10.1084/jem.182.5.1579. Google Scholar
H. Jiang and L. Chess, An integrated view of suppressor T cell subsets in immunoregulation, J. Clin. Invest., 114 (2004), 1198. doi: 10.1172/JCI23411. Google Scholar
Y. Kim, H. Lee, N. Dmitrieva, J. Kim, B. Kaur and A. Friedman, Choindroitinase ABC I-mediated enhancement of oncolytic virus spread and anti-tumor efficacy: A mathematical model, PLoS One, 9 (2014). doi: 10.1371/journal.pone.0102499. Google Scholar
Y. Kim, S. Lee, Y. Kim, Y. Kim, Y. Gho, H. Hwang and S. Lawler, Regulation of Th1/Th2 cells in asthma development: A mathematical model, Math. Bios. Eng, 10 (2013), 1095. doi: 10.3934/mbe.2013.10.1095. Google Scholar
Y. Kim and H. Othmer, A hybrid model of tumor-stromal interactions in breast cancer, Bull Math Biol, 75 (2013), 1304. doi: 10.1007/s11538-012-9787-0. Google Scholar
Y. Kim and S. Roh, A hybrid model for cell proliferation and migration in glioblastoma, Discrete and Continuous Dynamical Systems-B, 18 (2013), 969. doi: 10.3934/dcdsb.2013.18.969. Google Scholar
Y. Kim, M. Stolarska and H. G. Othmer, A hybrid model for tumor spheroid growth in vitro I: Theoretical development and early results, Math. Models Methods Appl. Sci., 17 (2007), 1773. doi: 10.1142/S0218202507002479. Google Scholar
Y. Kim, M. Stolarska and H. Othmer, The role of the microenvironment in tumor growth and invasion, Prog Biophys Mol Biol, 106 (2011), 353. doi: 10.1016/j.pbiomolbio.2011.06.006. Google Scholar
Y.-K. Kim, S.-Y. Oh, S. G. Jeon, H.-W. Park, S.-Y. Lee, E.-Y. Chun, B. Bang, H.-S. Lee, M.-H. Oh, Y.-S. Kim, J.-H. Kim, Y. S. Gho, S.-H. Cho, K.-U. Min, Y.-Y. Kim and Z. Zhu, Airway exposure levels of lipopolysaccharide determine type 1 versus type 2 experimental asthma, J. Immunol., 178 (2007), 5375. doi: 10.4049/jimmunol.178.8.5375. Google Scholar
Y.-S. Kim, S.-W. Hong, J.-P. Choi, T.-S. Shin, H.-G. Moon, E.-J. Choi, S. G. Jeon, S.-Y. Oh, Y. S. Gho, Z. Zhu and Y.-K. Kim, Vascular endothelial growth factor is a key mediator in the development of T cell priming and its polarization to type 1 and type 17 T helper cells in the airways, J. Immunol., 183 (2009), 5113. doi: 10.4049/jimmunol.0901566. Google Scholar
T. A. Krouskop, T. M. Wheeler, F. Kallel, B. S. Garra and T. Hall, Elastic moduli of breast and prostate tissues under compression, Ultrason. Imaging, 20 (1998), 260. doi: 10.1177/016173469802000403. Google Scholar
Y. Kuang, Delay Differential Equations: With Applications in Population Dynamics, Academic Press, (1993). Google Scholar
C. L. Langrish, Y. Chen, W. M. Blumenschein, J. Mattson, B. Basham, J. D. Sedgwick, T. McClanahan, R. A. Kastelein and D. J. Cua, IL-23 drives a pathogenic T cell population that induces autoimmune inflammation, J. Exp. Med., 201 (2005), 233. doi: 10.1084/jem.20041257. Google Scholar
S. Lee, H. Hwang and Y. Kim, Modeling the role of TGF-beta in regulation of the Th17 phenotype in the LPS-driven immune system, Bull. Math. Biol., 76 (2014), 1045. doi: 10.1007/s11538-014-9946-6. Google Scholar
Y. K. Lee, H. Turner, C. L. Maynard, J. R. Oliver, D. Chen, C. O. Elson and C. T. Weaver, Late developmental plasticity in the T helper 17 lineage, Immunity, 30 (2009), 92. doi: 10.1016/j.immuni.2008.11.005. Google Scholar
C. M. Lloyd and C. M. Hawrylowicz, Regulatory T cells in asthma, Immunity, 31 (2009), 438. doi: 10.1016/j.immuni.2009.08.007. Google Scholar
M. S. Maddur, P. Miossec, S. V. Kaveri and J. Bayry, Th17 cells: Biology, pathogenesis of autoimmune and inflammatory diseases, and therapeutic strategies, Am. J. Pathol., 181 (2012), 8. doi: 10.1016/j.ajpath.2012.03.044. Google Scholar
A. O. Magnan, L. G. Mély, C. A. Camilla, M. M. Badier, F. A. Montero-Julian, C. M. Guillot, B. B. Casano, S. J. Prato, V. Fert, P. Bongrand and D. Vervloet, Assessment of the Th1/Th2 paradigm in whole blood in atopy and asthma: Increased IFN-$\gamma$-producing CD8(+) T cells in asthma, Am. J. Respir. Crit. Care Med., 161 (2000), 1790. doi: 10.1164/ajrccm.161.6.9906130. Google Scholar
S. Marino, I. Hogue, C. Ray and D. Kirschner, A methodology for performing global uncertainty and sensitivity analysis in systems biology, Journal of Theoretical Biology, 254 (2008), 178. doi: 10.1016/j.jtbi.2008.04.011. Google Scholar
O. Michel, R. Ginanni, J. Duchateau, F. Vertongen, B. Bon and R. Sergysels, Domestic endotoxin exposure and clinical severity of asthma, Clin. Exp. Allergy, 21 (1991), 441. doi: 10.1111/j.1365-2222.1991.tb01684.x. Google Scholar
H.-G. Moon, Y.-M. Tae, Y.-S. Kim, S. G. Jeon, S.-Y. Oh, Y. S. Gho, Z. Zhu and Y.-K. Kim, Conversion of Th17-type into Th2-type inflammation by acetyl salicylic acid via the adenosine and uric acid pathway in the lung, Allergy, 65 (2010), 1093. doi: 10.1111/j.1398-9995.2010.02352.x. Google Scholar
B. F. Morel, J. Kalagnanam and P. A. Morel, Mathematical modeling of Th1-Th2 dynamics, in Theoretical and Experimental Insights into Immunology (eds. A. S. Perelson and G. Weisbuch), (1992), 171. doi: 10.1007/978-3-642-76977-1_11. Google Scholar
T. R. Mosmann and S. Sad, The expanding universe of T-cell subsets: Th1, Th2 and more, Immunol. Today, 17 (1996), 138. doi: 10.1016/0167-5699(96)80606-2. Google Scholar
T. R. Mosmann, H. Cherwinski, M. W. Bond, M. A. Giedlin and R. L. Coffman, Two types of murine helper T cell clone. I. definition according to profiles of lymphokine activities and secreted proteins, J. Immunol., 136 (1986), 2348. Google Scholar
T. R. Mosmann and R. L. Coffman, TH1 and TH2 cells: Different patterns of lymphokine secretion lead to different functional properties, Annu. Rev. Immunol., 7 (1989), 145. doi: 10.1146/annurev.iy.07.040189.001045. Google Scholar
E. Muraille, O. Leo and M. Kaufman, The role of antigen presentation in the regulation of class-specific (Th1/Th2) immune responses, J. Biol. Syst., 3 (1995), 397. doi: 10.1142/S021833909500037X. Google Scholar
K. M. Murphy, P. Travers and M. Walport, Janeway's Immunobiology, 7th edition, (2007). Google Scholar
T. Nakagiri, M. Inoue, M. Minami, Y. Shintani and M. Okumura, Immunology mini-review: the basics of $T_H$17 and interleukin-6 in transplantation, Transplant. Proc., 44 (2012), 1035. Google Scholar
M. F. Neurath, S. Finotto and L. H. Glimcher, The role of Th1/Th2 polarization in mucosal immunity, Nat. Med., 8 (2002), 567. doi: 10.1038/nm0602-567. Google Scholar
K. Oh, M. W. Seo, G. Y. Lee, O.-J. Byoun, H.-R. Kang, S.-H. Cho and D.-S. Lee, Airway epithelial cells initiate the allergen response through transglutaminase 2 by inducing IL-33 expression and a subsequent Th2 response, Respir. Res., 14 (2013), 35. doi: 10.1186/1465-9921-14-35. Google Scholar
M. J. Paszek and V. M. Weaver, The tension mounts: mechanics meets morphogenesis and malignancy, J. Mammary Gland Biol. Neoplasia, 9 (2004), 325. doi: 10.1007/s10911-004-1404-x. Google Scholar
A. Ray, A. Khare, N. Krishnamoorthy, Z. Qi and P. Ray, Regulatory T cells in many flavors control asthma, Mucosal Immunol., 3 (2010), 216. doi: 10.1038/mi.2010.4. Google Scholar
J. Richter, G. Metzner and U. Behn, Mathematical modelling of venom immunotherapy, J. Theor. Med., 4 (2002), 119. doi: 10.1080/10273660290022172. Google Scholar
D. S. Robinson, Regulatory T cells and asthma, Clin. Exp. Allergy, 39 (2009), 1314. doi: 10.1111/j.1365-2222.2009.03301.x. Google Scholar
S. Romagnani, Atopic allergy and other hypersensitivities interactions between genetic susceptibility, innocuous and/or microbial antigens and the immune system, Curr. Opin. Immunol., 9 (1997), 773. doi: 10.1016/S0952-7915(97)80176-8. Google Scholar
S. Sakaguchi, Regulatory T cells: Key controllers of immunologic self-tolerance, Cell, 101 (2000), 455. Google Scholar
R. A. Seder and W. E. Paul, Acquisition of lymphokine-producing phenotype by CD4$^+$ T cells, Annu. Rev. Immunol., 12 (1994), 635. Google Scholar
R. Vogel and U. Behn, Th1-Th2 regulation and allergy: Bifurcation analysis of the non-autonomous system, in Mathematical Modeling of Biological Systems, (2008), 145. Google Scholar
Y. Y. Wan, Multi-tasking of helper T cells, Immunology, 130 (2010), 166. doi: 10.1111/j.1365-2567.2010.03289.x. Google Scholar
M. Wills-Karp, J. Luyimbazi, X. Xu, B. Schofield, T. Y. Neben, C. L. Karp and D. D. Donaldson, Interleukin-13: Central mediator of allergic asthma, Science, 282 (1998), 2258. doi: 10.1126/science.282.5397.2258. Google Scholar
M. Wills-Karp, J. Santeliz and C. L. Karp, The germless theory of allergic disease: Revisiting the hygiene hypothesis, Nat. Rev. Immunol., 1 (2001), 69. doi: 10.1038/35095579. Google Scholar
Y. Yang, H.-L. Zhang and J. Wu, Role of T regulatory cells in the pathogenesis of asthma, Chest, 138 (2010), 1282. doi: 10.1378/chest.10-1440. Google Scholar
A. Yates, C. Bergmann, J. L. Van Hemmen, J. Stark and R. Callard, Cytokine-modulated regulation of helper T cell populations, J. Theor. Biol., 206 (2000), 539. doi: 10.1006/jtbi.2000.2147. Google Scholar
A. Yates, R. Callard and J. Stark, Combining cytokine signalling with T-bet and GATA-3 regulation in Th1 and Th2 differentiation: A model for cellular decision-making, J. Theor. Biol., 231 (2004), 181. doi: 10.1016/j.jtbi.2004.06.013. Google Scholar
M. Yazdanbakhsh, P. G. Kremsner and R. van Ree, Allergy, parasites, and the hygiene hypothesis, Science, 296 (2002), 490. doi: 10.1126/science.296.5567.490. Google Scholar
Y. Zhao, J. Yang, Y. dong Gao and W. Guo, Th17 immunity in patients with allergic asthma, Int. Arch. Allergy Immunol., 151 (2010), 297. doi: 10.1159/000250438. Google Scholar
L. Zhou, I. I. Ivanov, R. Spolski, R. Min, K. Shenderov, T. Egawa, D. E. Levy, W. J. Leonard and D. R. Littman, IL-6 programs $T_H$-17 cell differentiation by promoting sequential engagement of the IL-21 and IL-23 pathways, Nat. Immunol., 8 (2007), 967. Google Scholar
Jisun Lim Seongwon Lee Yangjin Kim | CommonCrawl |
Labeling food safety attributes: to inform or not to inform?
Kofi Britwum1 ORCID: orcid.org/0000-0002-3357-7506 &
Amalia Yiannaka2
We examine the impact of food labels that make unsupported claims of food safety and labels that provide information to support such claims on consumer choices and examine consumers' willingness to pay for beef products with these different food safety labeling cues. Empirical results from a survey of grocery shoppers in a Midwestern city in the USA show that more than two thirds of respondents who received a label with unsubstantiated food safety claims chose this option and were willing to pay the highest price premium for it, compared to the less preferred labeling options that provided information to support food safety claims.
Food labels have gradually evolved from simply conveying nutritional information to communicating the presence of desirable or the absence of undesirable food attributes and/or production technologies. The development of several niche food markets has been enabled by labels highlighting the existence of positive or the absence of "negative" food attributes and/or technologies, effectively targeting consumers valuing this type of information. Examples include the "All Natural," "No Growth Promoting Antibiotics," "No GMOs," "Cage-free," and "rBST-free" food labeling claims.
Evidence that consumers value and are willing to pay for such labels abounds. Wang et al. (1997) found that consumers concerned about rBST use in dairy production were also willing to pay more for the rBST-free label. Kanter et al. (2009) showed that having rBST-free milk reduced willingness to pay (WTP) for conventional milk by as much as 33%, after participants had been introduced to information about rBST-free milk.
There is also evidence that consumers are concerned about and are willing to pay price premiums for healthy, safe, and superior quality foods (Loureiro and McCluskey 2000). Verbeke and Ward (2006) reported that beef labeling cues that were rated as important by consumers were those related to perceived meat quality and safety. Dolgopolova and Teuber (2017), in a meta-analysis of consumer valuation of healthy food attributes, report positive WTP amounts for healthy food attributes and claims. Bimbo et al. (2016) showed that consumers were willing to pay price premiums for food attributes perceived to enhance health, such as "organic" and "natural". Syrengelas et al. (2017) found that consumers were willing to pay price premiums for the "natural" claim in steaks even though they did not know the United States Department of Agriculture (USDA) interpretation of this claim. In a study that examined British and German consumers' valuation of beef safety attributes, Lewis et al. (2017) reported that consumers were willing to pay a price premium for "hormone-free" beef, an attribute viewed as a beef safety cue. In addition, this study found that the country of origin was an important consideration among consumers who placed a high value on food safety, purportedly because it was perceived as another food safety signal (Lewis et al. 2017).
Several studies have also found that consumers are willing to pay for specific food safety technologies. Nayga Jr et al. (2006) examined consumer preferences for irradiated beef and found a WTP premium of 77 cents for a pound of irradiated ground beef, amounts considered adequate to cover the cost of the technology on a commercial scale. Huang et al. (2007) reported that consumers in the US state of Georgia were open to the use of irradiation in foods, with 65% of them expressing intent to purchase.
Despite consumer expectation of and preference for safer food,Footnote 1 foods produced with unique food safety enhancing interventions have been rather challenging to differentiate in the market. This challenge stems in part from consumer misapprehension of the technologies adopted to ensure safer food products, and in part from food labeling claims that are uninformative or ambiguous and use terms that do not have standardized interpretations (Palma et al. 2015). Thus, even though evidence from research studies shows that consumers are willing to pay for and are accepting of certain food safety enhancing technologies when they are provided with information about their potential beneficial effects, the challenge is how to effectively communicate such technologies on food labels and how much information to provide on a label to substantiate food safety claims. This is particularly so for technologies consumers may be unfamiliar with (e.g., nanotechnology) or technologies not yet introduced.
The primary goal of this study is to examine the impact of different ways of communicating food safety attributes on consumer choices and WTP for various food safety labeling cues on food products. Secondary goals include examining how factors such as demographic characteristics, personal health issues, knowledge and acceptance of food safety interventions, and views about the government's role in regulating and ensuring food safety influence consumer preferences and WTP for food safety labels.
The food labels used in this study include both vague, unsubstantiated claims of food safety and more precise descriptions of a food safety enhancing technology to test the hypothesis that uninformative or ambiguous food labels with a positive message may resonate more powerfully with consumers than labels that provide factual information to corroborate food safety claims. The food safety enhancing technology considered is cattle vaccines against virulent strains of E. coli, a technology that has not seen widespread adoption.
Focusing on a technology that the public may be unfamiliar with and/or apprehensive about, the case study contributes to the literature by exploring different ways by which health claims attributed to the food safety intervention may be presented on food labels. Specifically, previous literature elicited bidding behavior/WTP for food safety attributes by providing respondents with different types of information such as negative and/or positive information (Fox et al. 2002; Nayga Jr et al. 2006; Teisl and Roe 2010). Our study extends the literature by examining and comparing food safety labels that use vague food safety claims to labels that include more precise descriptions of a food safety intervention to substantiate these claims without providing any additional information about the nature of the food safety intervention. By tweaking the description of health claims on food product labels, the study closely gauges labeling preferences with a design that matches an actual food purchasing scenario between competing product choices.
The rest of the study consists of five sections. The "Case study and experimental design" section describes the case study and experimental design used in the survey, followed by a description of the empirical models in the "Empirical models" section. The "Results and discussion" section discusses the model findings, and the "Conclusions" section concludes the study.
Case study and experimental design
The case study investigates consumers' response to and their labeling preferences for beef products from cattle vaccinated against virulent strains of E. coli such as E. coli O157:H7. Vaccines against E. coli O157:H7 have been approved for use by the USDA, have been shown to be effective in reducing the incidence of the bacteria in cattle by as much as 80% (Hurd and Malladi 2012), and can potentially decrease human cases of E. coli infections by at least 85% (Matthews et al. 2013). Notwithstanding the evidence supporting their effectiveness, they have received only limited adoption by beef producers (Callaway et al. 2009). This is partly attributable to the cost of the recommended application of the vaccine intervention, which can potentially erode producer surpluses if not matched by an increase in demand (Tonsor and Schroeder 2015). For this reason, capturing a price premium for beef products produced with this food safety intervention makes their differentiation in the retail market particularly pertinent for producers and processors.
However, signaling food safety attributes through food labels, and more so in the case of vaccines against E. coli can be potentially difficult for two reasons. First, the word "vaccine" on a food label may elicit mixed reactions among consumers, from concerns about drug resistance to the skepticism surrounding the long-term effect of vaccinations held by some. The second challenge involves indicating the name of a contaminant such as E. coli on a beef label, which may be subject to diverse interpretations. In this context, an important consideration is whether various consumer segments would perceive food safety claims to be credible, depending on how the claims are presented on food labels. Strijbos et al. (2016) noted that although health claims on meat with low nitrate levels were viewed as credible, trust varied by consumers' level of education; those with lower educational backgrounds were more likely to believe the claims.
A hypothetical survey was developed to address the above issues and achieve study objectives. The hypothetical nature of the survey was dictated by the fact that beef products from cattle treated with vaccines against E. coli O157 are not widely available in the market. Shoppers at five different grocery stores in Lincoln, Nebraska, were recruited to participate in the survey between December 2016 and January 2017, yielding a total of 445 participants who were also beef consumers. The stores include three local grocery brands, a Midwestern chain, and a cooperative natural foods store. The different grocery store brands and locations were selected to capture responses from diverse backgrounds.Footnote 2 The survey, which was designed using the Qualtrics software, took participants about 7 min to complete on a laptop computer, and each store session lasted approximately 5 h. A session began by setting a table and laptop computers in a heavily trafficked part of the store and by asking shoppers whether they were beef consumers, and if so, whether they would be willing to participate in a brief survey and earn a $15 store gift card.
The main part of the survey involved asking participants to choose between ground beef with a "standard" label (i.e., found on a typical ground beef product) and one that, in addition to the standard label, had a second label with food safety information. Three versions of the food safety labels were designed. The first showed the phrase "Safer Choice" in a circle with a description below indicating that the product is "from cattle raised under strict health standards to ENHANCE beef safety". In this version, no evidence is provided to support the food safety claim. We refer to this version as the "uninformative" or "unsubstantiated claim" version (Safer Choice/Enhance hereafter). The second food safety label showed the same "Safer Choice" phrase with accompanying information describing the product as originating "from cattle vaccinated against E. coli to REDUCE the risk of illness" (Safer Choice/Vaccinated hereafter). The third label showed the word E. coli in a red circle with a diagonal strikethrough to buttress the product's safety from E. coli bacteria and with a description below identical to the second food safety label (E. coli/Vaccinated hereafter). The E. coli with the slash-through design for the third label was intended to mimic other "free of" labels such as "No Growth Promoting Antibiotics" and "No Hormones," without explicitly claiming, however, that the product is entirely free of E. coli bacteria.
Each of the food safety labels was displayed to the left of the standard label on the ground beef product. The survey was designed such that participants were randomly assigned to one of the three food safety labeling options, with approximately 150 participants in each group. Thus, each participant saw only one (of the three) food safety label and had to choose among three options: ground beef with the standard label (option A), ground beef with the standard plus a food safety label (option B), and a "will not purchase either" option. To reflect food labels in an actual retail store setting, and to solely examine consumers' response to the food labels, no additional information about E. coli or vaccines was included in the survey. In making the initial choice between the ground beef in options A and B, no price information was given. The goal here was to have respondents choose between these two ground beef labels without being influenced by their prices. The food labels used in the survey are shown in section A of the Appendix.
To determine WTP premiums for the ground beef with the additional food safety label, participants who chose option B (the standard label plus the food safety label) answered follow-up questions using the double-bounded contingent valuation (DBCV) elicitation format which presented two random premium bid amounts, with the second bid contingent on the first, following Hanemann et al. (1991). By asking the second question, the DBCV uses more information to determine WTP values, which improves the efficiency of the estimation (Hanemann et al. 1991).
Respondents who chose option B were assigned a random premium bid amount over the base price of conventional ground beef (option A) which was given as $4.30, and were asked whether they would be willing to pay this premium in addition to the original price for a pound of ground beef with the food safety label. If they answered Yes to the first premium bid, they were asked about their willingness to pay an amount greater than the initial bid. If they answered No to the first premium bid, the subsequent question posed a bid lower than the initial (still a premium over option A).
Respondents who chose option A (i.e., the ground beef with only the standard label) were asked whether they would be willing to purchase option B at a discount, if that were their only choice. For participants who answered Yes, a variant of the DBCV was used to determine the discount amounts these participants would be willing to accept to purchase the ground beef in option B. Those who were not willing to purchase option B at a discount were requested to provide a reason for this choice. Figure 1 depicts participants' labeling choice.
The labeling choice
Following findings in the literature about loss of information and efficiency when more than six bid values are used (Creel 1998; Hanemann and Kanninen 2001), five premium bid values were considered sufficient. The bid values used as premiums were $0.40, $0.80, $1.20, $2.00, and $3.00 while the bid values used as discounts were $0.20, $0.40, $0.80, $1.20, and $1.50. The premium bids chosen are shown in Table 1.
Table 1 Premium bids used
Participants who chose option A were assigned two discount bids, with the second discount amount conditioned on the first, similar to the premium bid assignments.Footnote 3 The discount amounts used are shown in Table 2.
Table 2 Discount bids used
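A small sketch of the contingent follow-up logic described above is shown below. The pairing of each initial bid with a higher and a lower follow-up is assumed for illustration, since the exact bid pairings behind Tables 1 and 2 are not reproduced in the text.

```python
# Hypothetical sketch of the double-bounded follow-up bid assignment:
# a "yes" to the first premium bid triggers a higher second bid, a "no" a lower one.
import random

# Assumed pairings for illustration only (the survey's actual pairings may differ):
# each entry maps an initial bid to its higher ("yes") and lower ("no") follow-up.
follow_up = {
    0.80: {"yes": 1.20, "no": 0.40},
    1.20: {"yes": 2.00, "no": 0.80},
    2.00: {"yes": 3.00, "no": 1.20},
}

def assign_bids(first_answer_yes: bool):
    """Draw a random initial premium bid and return it with its follow-up bid."""
    first_bid = random.choice(list(follow_up.keys()))
    second_bid = follow_up[first_bid]["yes" if first_answer_yes else "no"]
    return first_bid, second_bid

print(assign_bids(first_answer_yes=True))
print(assign_bids(first_answer_yes=False))
```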
Empirical models
Multinomial logit model
The multinomial logit was used to model individual choice among $J$ alternatives as a function of their characteristics (Hoffman and Duncan 1988). Participants are assumed to be utility maximizers and choose the option that yields the highest utility. Let $U_{ij}$ be an individual's indirect utility function for a given option, expressed as:
$$ {U}_{ij}={x}_i^{\prime }{\beta}_j+{\varepsilon}_i $$
where the subscript $i$ represents an individual and $j$ the alternative. The vector $x_i$ captures individual $i$'s characteristics, and $\varepsilon_i$ is the random error term, which consists of unidentified factors that influence a participant's choice and is independently and identically distributed with an extreme value type 1 distribution. Since an individual's true utility cannot be observed, the probability of a choice is used as a proxy in the estimation and is given as:
$$ Prob\left\{{y}_i=j\right\}= Prob\left\{{U}_{ij}=\max \left({U}_{i1},\dots ,{U}_{iJ}\right)\right\} $$
The probability that individual i chooses alternative j, as shown by McFadden (1974), is:
$$ Prob\left({y}_i=j|{x}_i\right)=\frac{e^{x_i^{\prime }{\beta}_j}}{\sum \limits_{k=1}^J{e}^{x_i^{\prime }{\beta}_k}} $$
Equation 3 is the multinomial logit model. For this study, the ground beef with the standard label (option A) was designated as the reference or base category, with its probability given as:
$$ Prob\left({y}_i=1|{x}_i\right)=\frac{1}{1+\sum \limits_{k=1}^J{e}^{x_i^{\prime }{\beta}_k}} $$
The odds ratio of individual i choosing alternative j is:
$$ \frac{Prob\left({y}_i=j\right)}{Prob\left({y}_i=1\right)}=\exp \left({x}_i^{\prime }{\beta}_j\right) $$
The multinomial logit model was estimated using the maximum likelihood procedure.
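As a concrete illustration of the estimation step just described, the sketch below fits a multinomial logit with Python's statsmodels on simulated data coded like the survey (0 = option A as the base category, 1 = option B, 2 = neither option); the covariates and data are hypothetical placeholders rather than the study's dataset.

```python
# Sketch of maximum-likelihood estimation of the multinomial logit in Eq. (3)
# using statsmodels; the simulated data and covariates are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 445
df = pd.DataFrame({
    "reads_labels": rng.integers(0, 2, n),
    "accepts_vaccines": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "income": rng.normal(57, 20, n),     # household income in $1,000s (illustrative)
    "choice": rng.integers(0, 3, n),     # 0 = option A (base), 1 = option B, 2 = neither
})

X = sm.add_constant(df[["reads_labels", "accepts_vaccines", "male", "income"]])
res = sm.MNLogit(df["choice"], X).fit(disp=False)

print(res.summary())
print(np.exp(res.params))   # odds ratios relative to the base category (option A)
```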
Double-bounded contingent valuation method
The contingent valuation method measures changes to an individual's expenditure function, or their indirect utility function. An individual faced with a well-behaved utility function subject to an income constraint maximizes their utility given as:
$$ v\left(p,q,y\right)=\max\ \left\{u\left(x,q\right)\ \right|\ px\le y\Big\} $$
where $x$ is a vector of private goods, such as ground beef; $q$ is an attribute associated with the quality/safety of the good; and $y$ is the individual's income. Using the compensating variation measure, we can determine the amount the individual is willing to pay for an improvement in the safety of the existing good from $q^0$ (ground beef without the food safety enhancing attribute) to $q^1$ (ground beef with the food safety enhancing attribute), defined as:
$$ v\left(p,{q}^1,y- WTP\right)=v\left(p,{q}^0,y\right) $$
where $q^1 > q^0$ such that $\partial v/\partial q > 0$. If the cost of the food safety attribute is $t$, the individual will agree to pay this amount only if their $WTP \geq t$. For the DBCV method, bivariate dichotomous choice valuation questions are asked, resulting in four outcomes. Responses may fall into one of these four categories:
Yes to both bids (yes, yes)
Yes to the first bid and no to the second (yes, no)
No to the first bid and yes to the second bid (no, yes), and
No to both bids (no, no).
Assume that $t^1$ and $t^2$ are the two bid amounts and $WTP_i$ represents participant $i$'s willingness to pay a premium price for ground beef with the additional food safety label. Following Hanemann et al. (1991) and Lopez-Feldman (2012), answers to the two valuation questions will result in the following outcomes:
$$ {D}_i=\left\{\begin{array}{c}{t}^2\le {WTP}_i<\infty, \kern0.5em \mathrm{if}\ \mathrm{yes}\ \mathrm{to}\ \mathrm{both}\ \mathrm{bids}\\ {}{t}^1\le {WTP}_i<{t}^2,\kern0.5em \mathrm{if}\ \mathrm{yes}\ \mathrm{to}\ \mathrm{first}\ \mathrm{bid}\ \mathrm{and}\ \mathrm{no}\ \mathrm{to}\ \mathrm{second}\\ {}{t}^2\le {WTP}_i<{t}^1,\kern0.5em \mathrm{if}\ \mathrm{no}\ \mathrm{to}\ \mathrm{first}\ \mathrm{bid}\ \mathrm{and}\ \mathrm{yes}\ \mathrm{to}\ \mathrm{second}\\ {}{WTP}_i<{t}^2,\kern0.5em \mathrm{if}\ \mathrm{no}\ \mathrm{to}\ \mathrm{both}\ \mathrm{bids}\end{array}\right. $$
Let a participant's WTP be defined as:
$$ {WTP}_i={z}_i^{\prime}\beta +{\varepsilon}_i $$
where $z_i$ is a vector of independent variables, $\beta$ is a vector of estimable parameters, and $\varepsilon_i$ is a random error term which is normally distributed with a constant variance, $\varepsilon_i \sim N(0,\sigma^2)$. The log likelihood function for $N$ participants is given as:
$$ lnL=\sum \limits_{i=1}^N\left[{I_i}^{\mathrm{yes},\mathrm{yes}}\mathit{\ln}\left(\Phi \left(\frac{z_i^{\prime}\beta -{t}^2}{\sigma}\right)\right)+{I_i}^{\mathrm{yes},\mathrm{no}}\mathit{\ln}\left(\Phi \left(\frac{z_i^{\prime}\beta -{t}^1}{\sigma}\right)-\Phi \left(\frac{z_i^{\prime}\beta -{t}^2}{\sigma}\right)\right)+{I_i}^{\mathrm{no},\mathrm{yes}}\mathit{\ln}\left(\Phi \left(\frac{z_i^{\prime}\beta -{t}^2}{\sigma}\right)-\Phi \left(\frac{z_i^{\prime}\beta -{t}^1}{\sigma}\right)\ \right)+{I_i}^{\mathrm{no},\kern0.5em \mathrm{no}}\mathit{\ln}\left(1-\Phi \left(\frac{z_i^{\prime}\beta -{t}^2}{\sigma}\right)\kern0.5em \right)\ \right] $$
$I_i^{\mathrm{yes},\mathrm{yes}}$, $I_i^{\mathrm{yes},\mathrm{no}}$, $I_i^{\mathrm{no},\mathrm{yes}}$, and $I_i^{\mathrm{no},\mathrm{no}}$ are indicator variables equal to 0 or 1, depending on the outcome for each participant.
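To make the estimation concrete, the sketch below maximizes a simplified version of the log-likelihood in Eq. (10) in which $z_i'\beta$ reduces to a single mean-WTP parameter; each respondent's two answers are converted into an interval for WTP and the normal interval likelihood is maximized with scipy. The simulated bids, follow-up rule, and answers are hypothetical.

```python
# Sketch of the double-bounded (interval-censored) ML estimator behind Eq. (10):
# each pair of answers brackets WTP, and we maximize the normal interval likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
true_mu, true_sigma = 1.2, 0.8
n = 300
wtp = rng.normal(true_mu, true_sigma, n)                  # latent willingness to pay

first_bid = rng.choice([0.8, 1.2, 2.0], n)
second_bid = np.where(wtp >= first_bid, first_bid + 0.8, first_bid - 0.6)  # assumed rule

# Interval bounds implied by the four (yes/no, yes/no) answer patterns
lower = np.where(wtp >= first_bid,
                 np.where(wtp >= second_bid, second_bid, first_bid),
                 np.where(wtp >= second_bid, second_bid, -np.inf))
upper = np.where(wtp >= first_bid,
                 np.where(wtp >= second_bid, np.inf, second_bid),
                 np.where(wtp >= second_bid, first_bid, second_bid))

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    prob = stats.norm.cdf(upper, mu, sigma) - stats.norm.cdf(lower, mu, sigma)
    return -np.sum(np.log(np.clip(prob, 1e-12, None)))

res = optimize.minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
print("estimated mean WTP:", res.x[0], " sigma:", np.exp(res.x[1]))
```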
Despite the advantages of the double-bounded model, starting point bias can reduce the efficiency of the WTP estimates, with implications for statistical inference (Herriges and Shogren 1996). When participants anchor their WTP to the starting point bid, the follow-up question becomes a weighted average of a respondent's prior WTP and the initial bid (Herriges and Shogren 1996), given as:
$$ {WTP}_2={WTP}_1\left(1-\gamma \right)+\gamma {t}^1 $$
where $0 \leq \gamma \leq 1$ is the anchoring weight placed on the initial bid $t^1$, $WTP_1$ is the prior WTP, and $WTP_2$ is the posterior WTP.
A second potential violation of the double-bounded model is the shift effect (Alberini et al. 1997; Whitehead 2002). As expounded by Alberini et al. (1997), shift effect occurs when a participant's WTP shifts between the two responses, which means the follow-up valuation questions do not induce subjects to reveal their true WTP. In the presence of a shift effect, a subject's true WTP is equal to their stated WTP with a shift (Whitehead 2002), given as:
$$ {WTP}_2={WTP}_1+\delta $$
where δ is the shift parameter. In the presence of both shift and anchoring effects (starting point bias), WTP for the follow-up question becomes:
$$ {WTP}_2={WTP}_1\left(1-\gamma \right)+\gamma {t}^1+\delta $$
Both starting point bias and the shift effect were accounted for in the empirical estimation. Starting point bias was controlled for using two approaches, the first proposed by Chien et al. (2005) and the second by Whitehead (2002). Following Chien et al. (2005), two bid set dummies were constructed and included in the model for the three premium bid sets shown in Table 1, with the last bid set ($2.00, $1.20, and $3.00) assigned as the reference dummy. Following Alberini et al. (1997) and Whitehead (2002), the shift effect was empirically determined as the coefficient of a dummy variable equal to 1 for the follow-up question and 0 otherwise, after transforming the data into a quasi-panel dataset. The starting point bias, which is determined by the anchoring weight $\gamma$, is captured by the coefficient of the interaction between the follow-up dummy and the starting point bid.
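A data-construction sketch for the anchoring and shift controls just described is given below: the two valuation responses are stacked into a quasi-panel, a follow-up dummy captures the shift effect $\delta$, and its interaction with the initial bid captures the anchoring weight $\gamma$. Column names and values are hypothetical, and the estimation step itself is omitted.

```python
# Sketch of the quasi-panel used to test for shift and anchoring (starting-point) effects:
# stack the first and follow-up responses, then add a follow-up dummy and its
# interaction with the initial bid; hypothetical column names and values throughout.
import pandas as pd

# Wide format: one row per respondent (illustrative values only)
wide = pd.DataFrame({
    "id": [1, 2, 3],
    "bid1": [0.80, 1.20, 2.00],        # initial premium bid
    "bid2": [1.20, 0.80, 3.00],        # contingent follow-up bid
    "answer1": [1, 0, 1],              # 1 = yes, 0 = no
    "answer2": [0, 1, 1],
})

long = pd.wide_to_long(wide, stubnames=["bid", "answer"], i="id", j="round").reset_index()
long["followup"] = (long["round"] == 2).astype(int)            # shift-effect dummy (delta)
first_bid = wide.set_index("id")["bid1"]
long["anchor"] = long["followup"] * long["id"].map(first_bid)  # interaction term for gamma

print(long.sort_values(["id", "round"]))
```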
Results and discussion
Table 3 displays descriptive statistics of participants' responses. Demographic variables show that 93% of respondents were principal household grocery shoppers, an outcome we anticipated given that the surveys were conducted in grocery stores. About 82% of respondents had at least some college background, higher than the 72% reported for the city of Lincoln (U.S. Census Bureau 2016). Average household income was approximately $57,000, indicating a slight right skew compared to the city's median household income of approximately $56,000. The majority of subjects were females, consistent with the observation that females are more likely to be principal grocery shoppers (Bureau of Labor Statistics 2017).
Table 3 Descriptive statistics and variable definition
Preferences for ground beef labels
Table 4 reports statistics of respondents' choices based on the type of labels they were exposed to. The most chosen food safety label was the "unsubstantiated version" that provided no support for the food safety claims made (Safer Choice/Enhance). Nearly 70% of participants in this group chose option B, with just about 15% of them choosing option A. A little over 60% of respondents who were exposed to the food safety label with the "Safer Choice" phrase and additional information describing that the ground beef originated from cattle vaccinated against E. coli bacteria (Safer Choice/Vaccinated) chose this option. The food safety label that was least preferred among the three was the version with the E. coli display with the diagonal strikethrough (E. coli/Vaccinated).
Table 4 Statistics of subjects' response to ground beef options
These findings are consistent with the positive opinions consumers sometimes associate with food labels without standardized interpretations or with ambiguous claims such as "All Natural" (Liu et al. 2017). What appears obvious, however, is that consumers' response may be stronger towards labels that highlight a contaminant they wish to avoid.
A chi-square test was used to test differences among the three options (i.e., option A, option B, will not purchase); the test result was significant at better than the 1% level (Table 4), indicating that differences in response among the food safety labels were not due to chance. Key demographic characteristics such as household income, age, and education were not statistically different from each other among participants in the three food safety label groups (see tests in section B of the Appendix). Consequently, it can be concluded that a participant's choice was influenced by the type of food safety label they were exposed to. Furthermore, the fact that the proportion of respondents who chose the not-purchase option was almost identical (approximately 15%) for each group provides additional support to the robustness of the experimental design.
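The chi-square comparison reported above can be reproduced in outline as follows, using a hypothetical 3 (label version) by 3 (choice) table of counts rather than the study's actual Table 4 cell frequencies.

```python
# Sketch of the chi-square test of independence between label version and choice;
# the counts below are hypothetical, not the study's Table 4 frequencies.
import numpy as np
from scipy.stats import chi2_contingency

# rows: Safer Choice/Enhance, Safer Choice/Vaccinated, E. coli/Vaccinated
# cols: option A, option B, will not purchase
counts = np.array([
    [23, 105, 22],
    [35,  92, 23],
    [52,  76, 22],
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```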
Multinomial logit results
Results from the multinomial logit model refer to choices for the ground beef with the standard label (option A), the standard label plus a food safety label (option B), and the option to purchase neither. Option A was designated as the reference category, with results displayed in Table 5 showing both the estimates for the regressors as well as the odds ratios.
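A minimal sketch of this specification is shown below, assuming a pandas data frame with a choice variable coded 0 (option A), 1 (option B), 2 (neither) and hypothetical regressor names. This is illustrative code, not the authors' estimation routine; statsmodels' MNLogit treats the lowest-coded category as the base, which makes option A the reference as in Table 5.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_mnl(df: pd.DataFrame, regressors: list):
    y = df["choice"]                    # 0 = option A (reference), 1 = option B, 2 = neither
    X = sm.add_constant(df[regressors])
    res = sm.MNLogit(y, X).fit(disp=False)
    odds_ratios = np.exp(res.params)    # odds ratios relative to option A
    return res, odds_ratios
```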
Table 5 Multinomial logit results for the labeling choices
Compared to the group who saw the E. coli/Vaccinated label version (the reference category), those in the Safer Choice/Enhance or Safer Choice/Vaccinated versions were more likely to choose option B relative to option A. Being in the Safer Choice/Enhance group, which provided no justification for the safety claims made, significantly increased the odds of participants choosing option B. Participants in this group were 4.41 times more likely to choose option B relative to option A and 2.45 times more likely to choose the neither option compared to option A, both significant at the 5% level or better. Participants in the Safer Choice/Vaccinated group were also more likely to choose option B, although the odds ratio for this group at 1.89 was lower than the Safer Choice/Enhance group. The fact that participants who received the "unsubstantiated" food safety label (Safer Choice/Enhance) without the words "vaccines" or "E. coli" were more likely to choose this version, compared to those who were exposed to the more informative labeling versions, suggests the importance of the nature of information on food labels in influencing consumer choice.
Participants who frequently read food labels were 1.75 times more likely to choose neither beef option, compared to option A, and 1.79 times more likely to choose option B. As expected, participants who are accepting of animal vaccines were more likely to choose option B, with a 51% increase in their odds. Although significant at the 10% level, the more participants preferred their beef burgers well cooked, the more likely they were to choose option B relative to option A. While it cannot be concluded that consumers who like their beef burgers well-cooked do so predominantly for safety reasons, this result indicates some level of association between such preferences and choosing the ground beef in option B.
Participants who wanted beef products from vaccinated cattle labeled as such were 1.40 times more likely to choose option B and had a 56% increase in their odds of choosing neither of the two options, relative to option A. It can thus be inferred that consumers in the latter group might prefer having the vaccine intervention indicated on a beef label to avoid it, likely the result of their concerns about the intervention. This result is similar to findings by Lusk and Fox (2002) who found a strong demand to mandatorily label beef products treated with hormones. Another interesting finding is that participants who wanted vaccines against E. coli to be mandatorily adopted had a 45% increase in their odds of choosing option B, relative to option A. This finding combined with the preference for labeling beef with the vaccine intervention among participants who chose option B or neither of the two options highlights the fact that consumers can have competing motivations when they demand food labeling. As argued by Messer et al. (2017), food process labels can lessen asymmetric information between producers and consumers. Thus, while food labels make it easier for consumers to purchase their preferred product, there are also consumer segments whose clamor for a label is to have a signal to avoid the product altogether (Liaukonyte et al. 2015). Regarding demographics, male and income were the two variables that emerged significant at the 5% level. Males were more likely to choose option B, and household income did not increase the odds of choosing option B over option A.
Finally, to examine the effect of grocery store location on preferences, the two grocery stores situated in the suburban neighborhoods were considered as one and assigned as the reference category. Relative to shoppers at the grocery stores in the suburban neighborhoods, shoppers in the natural foods store were 7.86 times more likely to choose neither of the ground beef options over option A. Even though this is not an entirely surprising finding, it suggests that these consumers would be more difficult to convince concerning food safety technologies. Similarly, shoppers in the store located in the urban center were also more likely to opt for neither of the two ground beef options, compared to shoppers in the suburban communities.
Double-bounded contingent valuation results
Responses from 263 participants who chose option B only (i.e., the ground beef with a standard label plus a food safety label) were analyzed using the DBCV method, results of which are shown in Table 6. Three variations of the model were estimated. First, a basic model (model I) which did not control for anchoring (starting point bias) and shift effects was estimated. The second model (model II) controls for starting point bias using Chien et al.'s (2005) approach with the bid set dummies, while the third model (model III) controls for both anchoring and shift effects following Alberini et al. (1997) and Whitehead (2002). The coefficients of the bid set dummies in model II are both statistically significant at better than the 1% level, an indication of starting point bias in the data. The coefficients of the anchoring weight (γ) and the shift parameter (δ) in model III are also statistically significant at better than the 1% level. The positive coefficient of the anchoring weight parameter suggests that response to the second bid was anchored to the first (Herriges and Shogren 1996; Whitehead 2002). The significant shift effect parameter also indicates that subjects' WTP shifted between the two valuation questions.
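For readers who wish to see the structure of the estimator, the interval likelihood underlying model I can be written down directly. The sketch below is a simplified illustration (not the authors' code) assuming WTP_i = X_i'β + ε_i with ε_i ~ N(0, σ²), arrays for the first and follow-up bids, and binary responses yes1 and yes2 for every respondent; models II and III would add the bid set dummies or the shift and anchoring terms to the index.

```python
import numpy as np
from scipy.stats import norm

def dbcv_neg_loglik(params, X, bid1, bid2, yes1, yes2):
    """Negative log-likelihood of the double-bounded model without anchoring or shift terms."""
    beta, sigma = params[:-1], np.exp(params[-1])   # log-sigma keeps sigma positive
    mu = X @ beta
    z1 = (bid1 - mu) / sigma
    z2 = (bid2 - mu) / sigma
    yy = (yes1 == 1) & (yes2 == 1)
    yn = (yes1 == 1) & (yes2 == 0)
    ny = (yes1 == 0) & (yes2 == 1)
    nn = (yes1 == 0) & (yes2 == 0)
    p = np.empty_like(mu)
    p[yy] = 1.0 - norm.cdf(z2[yy])                  # WTP at least the higher bid
    p[yn] = norm.cdf(z2[yn]) - norm.cdf(z1[yn])     # between the first and higher bid
    p[ny] = norm.cdf(z1[ny]) - norm.cdf(z2[ny])     # between the lower and first bid
    p[nn] = norm.cdf(z2[nn])                        # below the lower bid
    return -np.sum(np.log(np.clip(p, 1e-12, None)))
```

The parameters can then be obtained by passing this function to a numerical optimizer such as scipy.optimize.minimize, and mean WTP follows from evaluating X'β̂ at the covariate values of interest.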
Table 6 Results from double-bounded contingent valuation
Respondents randomly assigned to the Safer Choice/Enhance food label were willing to pay more, in both models II and III, compared to respondents who saw the E. coli/Vaccinated label (the reference category), a further indication that the food safety label with no justification for the food safety claims was more appealing. The coefficient of the Safer Choice/Vaccinated label was not statistically significant relative to the E. coli/Vaccinated label version in any of the three model variations.
In relation to respondents' attitudes, knowledge, and opinion, those who rated personal health issues as important in food purchasing decisions were also willing to pay more for the ground beef with a food safety label. Being more accepting of the use of animal vaccines in food production lowered marginal WTP, which was significant in all three models at the 10% significance level or better. This outcome is somewhat surprising and indicates that support for a production process or attribute may not necessarily translate into a higher WTP for that attribute. Lusk and Fox (2002), for example, found that while consumers favored mandatory labels for beef products from hormone-induced cattle as well as cattle fed GM corn, they were reluctant to pay more to have such products differentiated. In our study, however, support for labeling vaccines translated into higher WTP in all three models.
Among demographic variables, college was statistically significant in models I and III at the 10% and 5% level of significance, respectively. Remarkably, the coefficient of this variable is negative, suggesting that respondents with a college background or better were not willing to pay more for ground beef with a food safety label. Although this finding may require further investigation, the fact that more educated respondents would pay less does not necessarily indicate their aversion to the food safety label or the vaccine intervention. It could suggest that highly educated respondents were too skeptical about the E. coli reduction claim from vaccine use on the food safety label, or about the unsupported claim of enhanced safety from cattle raised under strict health standards, to warrant paying extra for it. Strijbos et al. (2016) reported that among their sample of Dutch residents who participated in a study examining health claims of reduced nitrate levels in meat, those with relatively lower educational backgrounds were more likely to perceive such claims to be credible. The statistically significant income variable in models I and III indicates a higher WTP among respondents with higher household incomes.
The grocery store location variables were significant in all three models at the 10% level or better, relative to the suburban locations designated as the reference category (see footnote 4). Shoppers sampled from the grocery store in the urban center had a higher marginal WTP for a food safety label relative to those in the suburban location. An interesting finding is that the marginal contribution to WTP of natural food shoppers surpassed that of shoppers sampled from the grocery stores in the shopping district and the urban center, relative to shoppers from stores located in the suburban neighborhoods. It is plausible to postulate that natural food shoppers are more concerned about healthy foods; thus, those in this group who chose the ground beef in option B were also willing to pay more for it.
Associated mean WTP estimates for each of the three food safety label versions given individual characteristics are displayed in Table 7, using results from model III only. The highest average price premium of $1.77 was recorded for the ground beef with the unsubstantiated food safety claim (Safer Choice/Enhance). Participants exposed to the Safer Choice/Vaccinated food safety label were willing to pay an average of $1.62 more for this option.
Table 7 Estimates of mean WTP for the food safety labels
A noteworthy finding is the response from participants in the group who saw the E. coli/Vaccinated food safety label version, who were willing to pay $1.44 as price premium for a pound of ground beef with this label, approximately 19% lower than the price premium for the food safety label without the words "vaccines" or "E. coli" (Safer Choice/Enhance). While the 95% confidence interval for the Safer Choice/Vaccinated food label overlaps with the Safer Choice/Enhance version, the latter overlaps only slightly with the confidence interval for the E. coli/Vaccinated food safety label.
Overall, our results show that food labels that make unsubstantiated claims of food safety could command higher premiums, compared to labels that offer factual and accurate information to substantiate food safety claims. As in Syrengelas et al. (2017), who found that beef consumers were willing to pay a price premium for steak with a "natural" labeling claim when they were uninformed about the USDA definition of natural, consumers in our study who faced unsupported, positive food safety claims may be overestimating what these claims promise. Consumers may be interpreting the unsupported claim on the Safer Choice/Enhance label to imply protection against a number of harmful diseases, not just against E. coli as with the other two labeling options (see footnote 5). Additionally, even though vaccine use in animal production has not attracted widespread public debate compared to other interventions and production processes, there are concerns about the health impacts of vaccinations in general, and this might have influenced respondents' perceptions of ground beef products from vaccinated cattle. As Liaukonyte et al. (2013) note, positive information about contested food production processes may not be enough to mitigate consumer biases.
Additional insights can be gleaned by examining the differences in WTP between the Safer Choice/Vaccinated and E. coli/Vaccinated label versions. With similar descriptions, the only difference between these labeling versions was the display of "Safer Choice" or the encircled E. coli with the strikethrough. The fact that the latter was the least preferred, both in terms of the share of respondents who chose this option and the average price premium they were willing to pay, could be suggestive of some form of a boomerang or reactance effect (Gifford and Bernard 2004). In this case, the E. coli display with the strikethrough, intended to buttress the safety of the beef product against E. coli bacteria, may have instead acted as a warning, emphasizing the risk from E. coli bacteria and crowding out the message that vaccines help lower this risk (see footnote 6).
Overall, consumers' willingness to pay a price premium for ground beef with a food safety label suggests that its presence in retail markets could potentially drive down the price of regular beef, similar to findings by Kanter et al. (2009), who showed in an experimental study that the presence of rBST-free milk reduced WTP for conventional milk.
Finally, there were 112 respondents who chose option A, representing 25.17% of the sample. Of this number, 65 participants were willing to purchase option B if they were offered a discount. The small number of these observations, however, did not allow for a very meaningful empirical analysis for this group. A table summarizing discount bids among participants is displayed in section C of the Appendix. Also included in the Appendix (section D) is a selection of comments from respondents explaining why they did not choose option B. It is important to note that the fact that approximately a quarter of respondents chose the ground beef with only the standard label underscores the challenge of labeling food safety attributes. Among these participants, the majority indicated a willingness to purchase the ground beef with a food safety label at a discount if that was their only choice. Reasons given by respondents who were completely opposed to ground beef with a food safety label, and would not purchase it even at a discount, echoed their aversion to vaccinations for a variety of reasons. In general, the remarks given by these respondents revealed doubts about the food safety labels and insufficient knowledge of vaccines. This suggests the possibility of some consumers misinterpreting the information on the labels (Messer et al. 2017).
Despite evidence that consumers value safe food products, communicating food safety enhancing attributes/technologies on food labels is challenging, partly due to consumer apprehension and insufficient understanding of food safety interventions. Extending previous studies that show that consumers are willing to pay for specific food safety interventions when they are provided with information about them (Fox et al. 2002; Nayga Jr et al. 2006; Teisl and Roe 2010), this study explored the effects of diverse ways of communicating food safety attributes through labeling cues on consumer choices and WTP for such attributes. The study also examined the influence of individual characteristics on preferences for food safety labels. These objectives were achieved through a survey that asked shoppers to choose between two types of ground beef: one with a standard/generic label and one that, in addition to the standard label, also had a food safety label. Three such food safety labels were designed, each providing different information about the food safety intervention, and randomly assigned to participants.
Results show that consumers were willing to pay a price premium for ground beef with a food safety label while the most preferred food safety label was the one that did not provide information about the intervention and its role in enhancing food safety. Results also show that preferences and WTP for safer foods are affected by demographic characteristics. For example, participants who had a high school education or less were willing to pay more for a food safety label, relative to those with higher educational backgrounds. We also found that some segments of consumers who chose the ground beef with a food safety label, such as natural food shoppers, were willing to pay a higher price for them, relative to shoppers in stores located in suburban neighborhoods. This suggests that having a good understanding of the demographic composition of the consumers that they target can help processors/retailers to more effectively use food labels to communicate food safety attributes.
For producers who may contemplate adopting the vaccine intervention or other costly food safety technologies, the prospect of commanding a price premium from identifying such interventions on food labels looks promising. Appealing to consumer segments who value these interventions will nevertheless require a tactful framing of information on food labels; one that simultaneously eases consumers' doubts (Kahan et al. 2007, 2008) and signals the enhanced safety of the product. Based on our results, labels with a positive but vague food safety message may appeal more to consumers than labels that emphasize the food safety risk that is mitigated, as the former are subject to a potential overestimation of the food safety benefits.
Finally, our findings suggest that even under a stricter labeling policy, one that would not allow unsupported claims on food labels but rather require explicit reference to the food safety interventions used to support these claims, producers adopting the vaccine intervention could effectively differentiate their products in the retail market and capture price premiums. Such a policy could also help inform and educate consumers about the technologies used in their food production. Our results should be interpreted with caution given the limited consumer pool used in the study, as well as its regional focus. Future research could further explore consumer attitudes towards different label designs and target a sample that better reflects the demographics of the USA.
According to the 2012 Food and Health Survey by the International Food Information Council, 78% of American consumers expressed confidence in the safety of foods in the United States (see http://www.foodinsight.org/Content/3848/FINAL%202012%20Food%20and%20Health%20Exec%20Summary.pdf).
The first grocery store was located in a suburban neighborhood, to help sample views from shoppers who live in the surrounding community. Shoppers in the second store were a demographically diverse mix, most likely a result of its location in a shopping district with adjoining shops and restaurants. The third and fourth grocery stores belonged to the same brand as the second, wherein the third store was situated in an urban center and the fourth was in a relatively new part of town surrounded by a shopping mall and suburban communities. The fifth store, a cooperative natural foods store, was chosen to represent consumers with preference for natural and organic food products.
This set-up is similar to McCluskey et al. (2003), who posed a second question to respondents willing to purchase a genetically modified (GM) food product at the same price as the non-GM version. Respondents who answered Yes were asked whether they were also willing to purchase the GM product at a percentage premium, otherwise, at a discount.
The location variables were interacted with the food safety label variables to investigate interaction effects between grocery store location and the type of food safety label shoppers chose. The interaction effects were not statistically significant in any of the three model variations, and a likelihood ratio test suggests that the interaction models were not significantly different from models without interaction.
We would like to thank an anonymous reviewer for suggesting this possible explanation of our findings.
A similar result is shown by Kahan et al. (2008) in a study that examines the effects of message framing on nanotechnology risk perceptions. The study found that messages that emphasized the potential of nanotechnology to mitigate alarming risks had the paradoxical effect of causing nanotechnology itself to be perceived as risky.
Alberini A, Kanninen B, Carson RT (1997) Modeling response incentive effects in dichotomous choice contingent valuation data. Land Econ 73:309–324
Bimbo F, Bonanno A, Viscecchia R (2016) Do health claims add value? The role of functionality, effectiveness and brand. Eur Rev Agric Econ 43:761–780
Bureau of Labor Statistics (2017) American time use survey summary. Available at https://www.bls.gov/news.release/atus.nr0.htm
Callaway TR, Carr MA, Edrington TS, Anderson RC, Nisbet DJ (2009) Diet, Escherichia coli O157: H7, and cattle: a review after 10 years. Current Issues in Mol Biol 11:67
Chien YL, Huang CJ, Shaw D (2005) A general model of starting point bias in double-bounded dichotomous contingent valuation surveys. J of Env Econ Manage 50:362–377
Creel M (1998) A note on consistent estimation of mean WTP using a misspecified logit contingent valuation model. J of Env Econ Manage 35:277–284
Dolgopolova I, Teuber R (2017) Consumers' willingness to pay for health benefits in food products: a meta-analysis. Appl Econ Perspect Policy 40:333–352
Fox JA, Hayes DJ, Shogren JF (2002) Consumer preferences for food irradiation: how favorable and unfavorable descriptions affect preferences for irradiated pork in experimental auctions. J Risk Uncertain 24:75–95
Gifford K, Bernard JC (2004) The impact of message framing on organic food purchase likelihood. J Food Distribution Res 35:19–28
Hanemann M, Kanninen B (1999) The statistical analysis of discrete-response CV data. Valuing environmental preferences: theory and practice of the contingent valuation method in the US, EU, and developing countries. I.J. Bateman and K.G. Willis, ed. Oxford University Press 302–441
Hanemann M, Loomis J, Kanninen B (1991) Statistical efficiency of double-bounded dichotomous choice contingent valuation. Am J Agric Econ 73:1255–1263
Herriges JA, Shogren JF (1996) Starting point bias in dichotomous choice valuation with follow-up questioning. J Env Econ Manage 30:112–131
Hoffman SD, Duncan GJ (1988) Multinomial and conditional logit discrete-choice models in demography. Demography 25:415–427
Huang CL, Wolfe K, McKissick J (2007) Consumers' willingness to pay for irradiated poultry products. J Int Food Agribusiness Mark 19:77–95
Hurd HS, Malladi S (2012) An outcomes model to evaluate risks and benefits of Escherichia coli vaccination in beef cattle. Foodborne Pathog Dis 9:952–961
Kahan DM, Kysar D, Braman D, Slovic P, Cohen G, Gastil J (2008) Cultural cognition of nanotechnology risk perceptions: an experimental investigation of message framing. Cultural cognition project, Available at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.630.9866&rep=rep1&type=pdf
Kahan DM, Slovic P, Braman D, Gastil J, Cohen GL (2007) Affect, values, and nanotechnology risk perceptions: an experimental investigation. Cultural cognition project working paper 22. Available at http://ssrn.com/abstract=968652
Kanter C, Messer KD, Kaiser HM (2009) Does production labeling stigmatize conventional milk? Am J Agric Econ 91:1097–1109
Lewis KE, Grebitus C, Colson G, Hu W (2017) German and British consumer willingness to pay for beef labeled with food safety attributes. J Agric Econ 68:451–470
Liaukonyte J, Streletskaya NA, Kaiser HM (2015) Noisy information signals and endogenous preferences for labeled attributes. J Agric Resour Econ 40(2):179–202
Liaukonyte J, Streletskaya NA, Kaiser HM, Rickard BJ (2013) Consumer response to "contains" and "free of" labeling: evidence from lab experiments. Appl Econ Perspect Policy 35:476–507
Liu R, Hooker NH, Parasidis E, Simons CT (2017) A natural experiment: using immersive technologies to study the impact of "all-natural" labeling on perceived food quality, nutritional content, and liking. J Food Sci 82:825–833
Lopez-Feldman A (2012) Introduction to contingent valuation using Stata. MPRA, p 41018. Available at http://mpra.ub.uni-muenchen.de/41018/
Loureiro ML, McCluskey JJ (2000) Consumer preferences and willingness to pay for food labeling: a discussion of empirical studies. J Food Distribution Res 34:95–102
Lusk JL, Fox JA (2002) Consumer demand for mandatory labeling of beef from cattle administered growth hormones or fed genetically modified corn. J Agric Appl Econ 34:27–38
Matthews L, Reeve R, Gally DL, Low JC, Woolhouse MEJ, McAteer SP, Locking ME, Chase-Topping ME, Haydon DT, Allison LJ, Hanson MF, Gunn GJ, Reid SWJ (2013) Predicting the public health benefit of vaccinating cattle against Escherichia coli O157. Proceedings of the National Academy of Sciences of the United States of America (PNAS)
McCluskey JJ, Curtis KR, Li Q, Wahl TI, Grimsrud KM (2003) A cross-country consumer attitudes and willingness to pay for genetically modified foods comparison. Biotechnol:117–123
McFadden D (1974) Conditional logit analysis of qualitative choice behavior. In: Zarembka P (ed) Frontiers in econometrics. Academic Press, New York
Messer KD, Costanigro M, Kaiser HM (2017) Labeling food processes: the good, the bad and the ugly. Appl Econ Perspect Policy 39:407–427
Nayga RM Jr, Woodward R, Aiew W (2006) Willingness to pay for reduced risk of foodborne illness: a non-hypothetical field experiment. Canadian J Agric Econ 54:461–475
Palma MA, Collart AJ, Chammoun CJ (2015) Information asymmetry in consumer perceptions of quality-differentiated food products. J Consum Aff 49:596–612
Strijbos C, Schluck M, Bisschop J, Bui T, De Jong I, Van Leeuwen M, von Tottleben M, van Breda SG (2016) Consumer awareness and credibility factors of health claims on innovative meat products in a cross-sectional population study in the Netherlands. Food Qual Prefer 54:13–22
Syrengelas KG, DeLong KL, Grebitus C, Nayga RM (2017) Is the natural label misleading? Examining consumer preferences for natural beef. Appl Econ Perspect Policy 40:445–460
Teisl MF, Roe BE (2010) Consumer willingness to pay to reduce the probability of retail foodborne pathogen contamination. Food Policy 35:521–530
Tonsor GT, Schroeder TC (2015) Market impacts of E. coli vaccination in US feedlot cattle. Agric Food Econ 3(1):7
U.S. Census Bureau (2016) American community survey 1-year estimates. Retrieved from Census Reporter Profile page for Lincoln, NE on February 17, 2018 https://censusreporter.org/profiles/16000US3128000-lincoln-ne/
Verbeke W, Ward RW (2006) Consumer interest in information cues denoting quality, traceability and origin: an application of ordered probit models to beef labels. Food Qual Prefer 17:453–467
Wang Q, Halbrendt C, Kolodinsky J, Schmidt F (1997) Willingness to pay for rBST-free milk: a two-limit tobit model analysis. Appl Econ Lett 4:619–621
Whitehead JC (2002) Incentive incompatibility and starting-point bias in iterative valuation questions. Land Econ 78:285–297
The authors are grateful to Stamatina Kotsakou for helping with the in-store surveys.
This research is based upon work that was supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award 2012-68003-30155.
Data and supporting materials for the study will be shared by the authors upon request.
Postdoctoral Research Associate, School of Economics, University of Maine, 5782 Winslow Hall, Orono, ME, 04469, USA
Kofi Britwum
Department of Agricultural Economics, University of Nebraska-Lincoln, 314B H.C. Filley Hall, Lincoln, NE, 68583-0922, USA
Amalia Yiannaka
KB and AY proposed and refined the research idea. KB conducted the in-store surveys and analyzed the data. AY was instrumental in the writing and editing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Kofi Britwum.
Both authors consent to the publication of this research.
A. Food safety label versions
First version of option B "Safer Choice/Enhance" provides no information about the food safety intervention used to support food safety claims.
Second version of option B "Safer Choice/Vaccinated" provides information about the food safety intervention to support food safety claims
Third version of option B "E. coli/Vaccinated" also provides information about the food safety intervention to support food safety claims
B. Demographic differences by food label version groups
Chi-square test—educational background
Food safety label  High school or less (%)  Some college or higher (%)  Total (%)
Safer Choice/Enhance 22.32 77.68 100
Safer Choice/Vaccinated 17.49 82.51 100
E. coli/Vaccinated 11.43 88.57 100
Total 17.75 82.25 100
Pearson chi-squared test: χ2(2) = 3.5309, Pr = 0.171
Analysis of variance—household income
Source  SS  df  MS  F  Prob > F
Between groups 1544.19 2 772.095 0.29 0.7451
Within groups 1,156,541 441 2622.543
Total 1,158,086 443 2614.189
Analysis of variance—age
Source  SS  df  MS  F  Prob > F
Between groups 93.29408 2 46.64704 0.17 0.8458
Within groups 123,106.4 442 278.5213
Total 123,199.7 444 277.4769
C. Willingness to accept a discount to choose option B
Table 8 in Appendix shows the count and frequency for the discount bids among participants who chose the ground beef with the standard label (option A), but indicated a willingness to purchase option B (standard label and a food safety label) only at a discounted price if that was their only choice. In total, there were 65 such respondents, which represents 15% of all participants, and 58% of those who chose option A.
Table 8 Count and frequency of discount response
D. Select comments concerning the food safety labels
Selection of comments from participants averse to the Safer Choice/Vaccinated label version
It looks scary
How do I know for sure what the cattle were vaccinated with?
I do not think E. coli vaccine prevents E. coli infections in meat
Only eat natural, grass fed, free to roam, farm raised beef with no antibiotics
I do not trust vaccinated meat
Selection of comments from participants averse to the E. coli/Vaccinated label version
It is not necessary to vaccinate for E. coli
Vaccines and medicinal treatments for animals are generally poor practices
E. coli can be killed using proper cooking and handling techniques
Just seeing the word E. coli turns me off
I do not like meat that is vaccinated
I only purchase "healthy" beef
Britwum, K., Yiannaka, A. Labeling food safety attributes: to inform or not to inform?. Agric Econ 7, 4 (2019). https://doi.org/10.1186/s40100-019-0123-y
DOI: https://doi.org/10.1186/s40100-019-0123-y
Vaccines against E. coli
Willingness to pay | CommonCrawl |
Why don't airliners use in-air refueling systems?
Right now if an airliner wants to fly a really long distance (e.g., a Boeing 787 flying from Seattle to Tokyo), it has to load itself down with lots and lots of fuel, which in turn weighs thousands and thousands of pounds. This, of course, makes the flight of the aircraft less efficient than it could be.1 Thus, if the craft could theoretically carry half as much fuel, that should increase the fuel efficiency of the craft, right?2
Mid-way refueling seems like it would be a Good Idea™ at that point. Of course, landing would add a heck of a lot of time to the flight, so it seems the better option would be mid-air refueling. It would allow for the aircraft to be more efficient, without the need for stopping on a long journey.
Boeing and Airbus both make a few airplanes that can do mid-air refueling; in fact, one of them is a highly modified 747-200 (properly called a VC-25) used as Air Force One:
(source: wordpress.com)
I assume that, because Airbus and Boeing's engineers and sales managers are really quite smart, they have a really good reason that they don't fit/sell this feature on any civilian transportation aircraft. But I'm not sure what that reason is.
Does anyone know why airlines do not use aircraft that are capable of mid-air refueling?
1 If I'm not mistaken, increased weight means an increased AoA to maintain level flight, which in turn increases induced drag from the wing. Less fuel would mean less induced drag or, if mid-air refueling were common practice, a wing that was designed to be more efficient because it was required to handle less weight.
2 I don't know by how much, if it's not that much, well, that might explain why nobody does this.
airliner airline-operations refueling mid-air-refueling
Jay Carr
$\begingroup$ Safety and logistics aside, I'm not sure the benefit of having another large aircraft burning fuel to refuel another aircraft burning fuel would offset the cost of just taking off with a larger fuel load to begin with. $\endgroup$ – Rhino Driver Aug 2 '15 at 19:48
$\begingroup$ @RhinoDriver And that'd be especially true if the refueling plane had to fly a long way out of it's way to get to a single airliner. Where as the way the military uses them...is quite different I assume. Actually if you have a moment to describe the difference in mission (ie., answering this from the "why the military uses mid air refueling" perspective) in an answer, that would be really cool. Especially since you know directly :). $\endgroup$ – Jay Carr Aug 2 '15 at 20:07
$\begingroup$ No problem Jay. Aerial refueling is a very expensive endeavor with very few practical uses. For the military, the increased price of fuel is well worth the increased tactical ability our aircraft receive. Whether that's playtime in a kill box, or blue water operations in the Navy (ie, no divert, where the carrier is the only option and being low on fuel requires a tanker), there are a myriad of tactical and administrative benefits the military pays for through aerial refueling. I'll post an actual answer later when I've got a bit more time. $\endgroup$ – Rhino Driver Aug 2 '15 at 21:00
$\begingroup$ I'm not sure safety can be set aside... ATC exists expressly to keep airliners apart, putting two very large aircraft in close proximity, (10's of feet separation), one carrying an enormous fuel load and one carrying human cargo, is a recipe for disastrous inattention. $\endgroup$ – CGCampbell Aug 3 '15 at 17:34
$\begingroup$ Given that the longest haul non-stop commercial flight is some 13,800km en.wikipedia.org/wiki/Non-stop_flight it would seem that airliners can pretty much fly as far as any commercial need would require. To put it another way, there is probably not enough economic benefit, and the solution to "not enough range" for commercial aircraft seems best addressed by increasing fuel capacity or making intermediate stops. Bear in mind that military and civilian aviation operate under different economic, safety, and logistical priorities, so what's good for one may not be for the other. $\endgroup$ – Anthony X Aug 13 '15 at 22:35
Fuel quantity: Unlike smaller fighter jets, you would need to offload a substantial quantity of fuel. For a B777, you're looking at something in the range of 60 tonnes of fuel for half a tank. The boom of a KC-135 (faster than a basket) can do around 3 tonnes a minute. The math then comes to about 20 minutes of aerial refuelling. The KC-46 can do perhaps 180 tonnes, so you might squeeze three refuelling operations out of one flight.
Risk: This is considerably more dangerous than the very conservative safety margins aircraft normally operate within. You would have one aircraft filled to the brim with fuel, and another aircraft with 300+ passengers. A quick search on YouTube is enough to suggest that this entire undertaking is very dangerous.
Furthermore, you need to consider that the aircraft needs safety margin to divert should refuelling not work, which cuts into the benefits you can expect.
Many intercontinental routes will be flown at night, making the manoeuvre more difficult.
Refuelling area: Looking at the route you proposed, the closest airfield would probably be in Alaska, at least 500 km away. Another relevant area would be Iceland/Greenland. Even if you did reroute, you'd move away from the jetstreams that aircraft over the Pacific use, which lie considerably further south, reducing efficiency. The same scenario applies to eastbound Atlantic flights.
Expensive: You would need to get another aircraft (~$200m for a KC-46) with special equipment and crew training. The receiving aircraft crew would also need special training. The airframe modifications would be complicated and require certification. Furthermore, you would need to get everybody to agree on some common standard.
Logistics: Planning is difficult. Each aircraft would have to be rerouted to intercept the tanker at a certain time and place. You would want to refuel them perfectly one after another, which is almost certainly not possible. There's simply not enough volume of aircraft movements to make this feasible, especially for the ultra-long flights where it may have the greatest benefit.
Other landing benefits: These include changing crews and possibly offloading passengers.
Applicability: The number of flights where this can be practically implemented is very limited, and for all intents and purposes you can just land the plane and refuel it. Even the route you propose is around 7,500 km, which is not a lot for a B777.
Thunderstrike
$\begingroup$ Even the Air Force, with lots of experience in this, says in their tanker manuals: "Because of the magnitude of interrelated aerodynamic effects flying two aircraft at close vertical proximity is unsafe." $\endgroup$ – cpast Aug 2 '15 at 18:53
$\begingroup$ @JayCarr Over the 10 years between 1978 and 1988, the USAF had 279 refueling mishaps causing at least $10,000 in damages or at least one injury. $\endgroup$ – cpast Aug 2 '15 at 19:11
$\begingroup$ Actually Iceland is a "pleasant place" for aircraft operations, being at low altitude, nice and cold all year round, and with no short-haul domestic routes! I can remember when all the records for "least unscheduled maintenance," "lowest IFSD rates," etc on the B757 were held by Icelandair. The only problem is the occasional volcanic ash cloud. Greenland, being mostly 9,000 ft mountains, is a different story, of course. $\endgroup$ – alephzero Aug 2 '15 at 22:42
$\begingroup$ In terms of relative risk levels, regardless of the frequency of mishaps the military have with in-flight refueling, note that civilian aviation is MUCH more risk-averse, and the length of a refueling boom is somewhat less than the usual separation that the FAA tend to operate under, which are 1000ft vertically and 3 miles horizontally. $\endgroup$ – anaximander Aug 3 '15 at 16:01
$\begingroup$ @MikeFoxtrot I think you can add one other benefit of landing over in-flight refueling: nice for the passengers to be able to exit the aircraft after a while. Eight hours is a long time (I would say too long) to be cooped up in an economy cabin, and there are airliners already in revenue service capable of significantly longer flights. IMHO a more passenger-friendly way to go further in a single hop would be to go faster rather than stay up longer. $\endgroup$ – Anthony X Aug 3 '15 at 17:46
Don't look at the fuel consumption of the airline flight in isolation. An airline would need to combine the fuel used by both the revenue-earning flight and the tanker, and then add the cost of operating it, too. Even if this could be shared by four or five revenue-earning flights, the total would still be worse.
To find out how big the fuel saving by in-air refueling is, the Breguet equation is your friend. Let's assume an L/D of 18, a thrust-specific fuel consumption of $b_f$ = 0.018 kg/kNs and a speed of Mach 0.82, which equates to $v$ = 279 m/s at 11,000 m altitude. Now look at the mass fractions which go with ranges of 8000 km and 2$\cdot$4000 km: $$\frac{m_1}{m_2} = e^{\frac{R\cdot g\cdot b_f}{v\cdot L/D}}$$ Flying the distance in one go needs the plane to start with a fuel load equivalent to 32.5% of the landing mass, while an air-refueled flight needs only 2$\cdot$15.1%. The saving (fuel equivalent to 2.3% of the airliner's landing mass) is real, but even if four others could benefit from the same tanker flight, it would need to cost less than the equivalent of the fuel price of 5$\cdot$2.3% = 11.5% of the airliner's landing mass.
Since the tanker needs to carry 5$\cdot$15.1% = 75.5% of the airliner's landing mass in fuel, it needs to be an airliner-sized aircraft itself. Add more if the tanker needs to do even as much as to take off, let alone fly to a refueling point and wait there. And the fuel saving needs not only to pay for the tanker's fuel needs, but also for its crew, maintenance and depreciation.
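If you want to check these numbers yourself, here is a quick Python sketch using the assumptions above (L/D = 18, b_f = 0.018 kg/kNs, v = 279 m/s, g = 9.81 m/s²):

```python
import math

g, b_f, v, L_D = 9.81, 0.018e-3, 279.0, 18.0      # b_f converted to kg/(N*s)

def fuel_fraction(range_m):
    """Fuel load as a fraction of landing mass, from the Breguet equation."""
    return math.exp(range_m * g * b_f / (v * L_D)) - 1.0

one_hop  = fuel_fraction(8_000_000)       # ~0.325 -> 32.5% of landing mass
two_hops = 2 * fuel_fraction(4_000_000)   # ~0.302 -> 2 x 15.1%
saving   = one_hop - two_hops             # ~0.023 -> 2.3% of landing mass
tanker   = 5 * fuel_fraction(4_000_000)   # ~0.755 if one tanker serves five airliners
print(one_hop, two_hops, saving, tanker)
```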
Aerial refueling is a great technology to make complex military scenarios possible, but is a very ressources-hungry beast.
$\begingroup$ It seems like the tanker just taking off and climbing up to the cruise altitude of an airliner would burn quite a lot of fuel... and that's if it would even be possible for a tanker loaded down with Jet-A to even climb up to the altitude of an airliner with light fuel load in the first place. If the airliner has to descend to meet the tanker, that's yet more wasted fuel. $\endgroup$ – reirab Aug 3 '15 at 2:57
$\begingroup$ Might it become slightly more feasible if they used smaller planes for the flights? Like a B738, or A319? Maybe even a CR-145? Then you could have one tanker carrying fuel for several planes. Granted this doesn't address a whole list of other problems but...theoretically would it help? $\endgroup$ – Jay Carr Aug 3 '15 at 19:10
$\begingroup$ @JayCarr: Using smaller passenger planes will make longer-range connections possible. Now we save not just some fuel, but make flights possible which could not be flown before. However, smaller planes are less efficient, and more cramped on the inside. The seat-mile fuel consumption would be larger, and I certainly will not look forward to flying transcontinental flights in a CR-145! $\endgroup$ – Peter Kämpf Aug 3 '15 at 19:59
$\begingroup$ Looks like we can save, in perfect world, around 9.3% of total fuel. Assuming fuel is \$5 a gallon, and a 50,000 gallon capacity (777 long range seems to be just under that) there's a potential for a $23k saving in fuel. Bigger than I would have expected, but I doubt big enough to justify the cost of a second aircraft. $\endgroup$ – NPSF3000 Aug 4 '15 at 3:38
$\begingroup$ @NPSF3000 MikeFoxtrot quoted \$200m for a KC-46, so let's go with that and your $23k savings per flight. In fact, let's make it \$25k/flight, and four refuelings per KC-46 flight, so each KC-46 takeoff saves \$100k. That's 2,000 tanker flights until break-even. But that's before you have any people to fly or care for the tanker. Even assuming a perfect world, that's many thousands of trips for the tanker before break-even. The world is not perfect. One mishap and that carrier might very well be facing bankruptcy. People understand that crashes happen, but deliberately putting that fuel there? $\endgroup$ – a CVn Aug 4 '15 at 11:20
2019 update: site is dead (now a scam service). See wayback machine here:
The people from Cruiser-Feeder http://www.cruiser-feeder.eu/ try to make it happen.
Here is part of the abstract from a paper describing their approach:
In this paper it will be described how the safety of air-to-air refuelling has been assessed, and how proposed new or amended regulations and acceptable means of compliance have been defined
They even created a conceptual design of a joint-wing tanker for civil operations: http://www.cruiser-feeder.eu/downloads/li-la-rocca---conceptual-design-of-a-joint-win.pdf
In their papers, you will find data on which flights mid-air refuelling is worth it, how they want to make it happen, and some interesting facts.
Antzi
jklingler
$\begingroup$ Interesting approach: Let the tanker fly behind and push the fuel up. This requires much less training of airliner crews. But when I see a joined wing proposal, I know this is from an academic ivory tower with little practical experience in aircraft design and construction. $\endgroup$ – Peter Kämpf Aug 3 '15 at 16:36
$\begingroup$ @PeterKämpf, you mean sometimes theories don't work so well in the real world? I'm shocked! $\endgroup$ – FreeMan Aug 3 '15 at 19:08
$\begingroup$ @FreeMan: I am even more shocked. The dishonesty (or is it cluelessness?) of the people behind the cruiser-feeder website is breathtaking. They are happy to gobble up EU research funds with unbelievable claims. A simple back-of-the-envelope calculation shows that the benefits are merely a tenth of what they claim - just for the isolated airliner. System fuel consumption will go up. $\endgroup$ – Peter Kämpf Aug 3 '15 at 20:13
$\begingroup$ @PeterKämpf unfortunately I don't have much experience in this field of the wide aviation world, I posted the answer because I think it's definitely worth a read. Why are you so shocked about the cruiser-feeder concept? I think the idea behind the concept is very good. Let the plane start with a little amount of fuel to reduce its start weight and then refuell it in the air on its way to the destination. Why do you find a joined wing proposal a bad idea? As I said, I'm not experienced in this field of aviatics but I'm interested in your opinion about the Cruiser Feeder Concept. $\endgroup$ – jklingler Aug 4 '15 at 17:02
$\begingroup$ @jklingler: This concept will never be certified - just think what might happen if the refueling fails. In the end, the airliner will need to be much like airliners today, and with the Breguet equation it is easy to see that the savings are modest. I encourage you to ask new questions, because the comments make it hard to explain all details. $\endgroup$ – Peter Kämpf Aug 4 '15 at 20:22
Quantum walks and Dirac cellular automata on a programmable trapped-ion quantum computer
C. Huerta Alderete (ORCID: orcid.org/0000-0003-3673-9985)1,2,
Shivani Singh3,4,
Nhung H. Nguyen1,
Daiwei Zhu (ORCID: orcid.org/0000-0003-0019-256X)1,
Radhakrishnan Balu (ORCID: orcid.org/0000-0003-1494-5681)5,6,
Christopher Monroe1,
C. M. Chandrashekar3,4 &
Norbert M. Linke1
Nature Communications volume 11, Article number: 3720 (2020)
The quantum walk formalism is a widely used and highly successful framework for modeling quantum systems, such as simulations of the Dirac equation, different dynamics in both the low and high energy regime, and for developing a wide range of quantum algorithms. Here we present the circuit-based implementation of a discrete-time quantum walk in position space on a five-qubit trapped-ion quantum processor. We encode the space of walker positions in particular multi-qubit states and program the system to operate with different quantum walk parameters, experimentally realizing a Dirac cellular automaton with tunable mass parameter. The quantum walk circuits and position state mapping scale favorably to a larger model and physical systems, allowing the implementation of any algorithm based on discrete-time quantum walks algorithm and the dynamics associated with the discretized version of the Dirac equation.
Quantum walks (QWs) are the quantum analog of classical random walks, in which the walker steps forward or backward along a line based on a coin flip. In a QW, the walker proceeds in a quantum superposition of paths, and the resulting interference forms the basis of a wide variety of quantum algorithms, such as quantum search1,2,3,4,5, graph isomorphism problems6,7,8, ranking nodes in a network9,10,11,12, and quantum simulations, which mimic different quantum systems at the low and high energy scale13,14,15,16,17,18,19,20,21,22. In the discrete-time QW (DQW)23,24, a quantum coin operation is introduced to prescribe the direction in which the particle moves in position space at each discrete step. In the continuous-time QW (CQW)25,26, one can directly define the walk evolution on position space itself using continuous-time evolution. We focus on DQWs and their implementation on gate-based quantum circuits in this work.
DQWs can be realized directly on lattice-based quantum systems where position space matches the discrete lattice sites. Such implementations have been reported with cold atoms27,28 and photonic systems29,30,31,32. In trapped ions, a DQW has been implemented by mapping position space to locations in phase space given by the degrees of freedom associated with the harmonic motion of the ion in the trap33,34,35. All these physical implementations have followed an analogue quantum simulation approach. However, implementing QWs on a circuit-based system is crucial for exploring algorithmic applications based on QWs. The implementation of a DQW on a three-qubit NMR system36, a CQW on a two-qubit photonic processor37 and a split-step QW on superconducting circuits38,39 are the circuit-based implementations reported to date. To implement DQWs on circuit-based quantum processors, it is necessary to map the position space to the available multi-qubit states. The range of the walk is set by the available qubit number and gate depth. The term Quantum Cellular Automaton (QCA) describes a unitary evolution of a particle on a discretized space40,41,42, as occurs with QWs. In this context, the one-dimensional Dirac cellular automaton (DCA) has been derived from the symmetries of the QCA, showing how the dynamics of the Dirac equation emerges40,41,42,43,44.
Here we implement efficient quantum circuits for a DQW in one-dimensional position space, which provide the time-evolution up to five steps. We report the experimental realization of a DQW on five qubits within a seven-qubit programmable trapped-ion quantum computer45. With a tunable walk probability at each step we also show the experimental realization of a DCA where the coin bias parameter mimics the mass term in the Dirac equation. This will be central for discrete-time quantum simulation of the dynamics associated with the relativistic motion of a spin-1/2 particle in position space.
Review of quantum walks and the connection to the Dirac equation
The DQW consists of two quantum mechanical systems, an effective coin and the position space of the walker, as well as an evolution operator, which is applied to both systems in discrete time-steps. The evolution is given by a unitary operator defined on a tensor product of two Hilbert spaces \({{\mathcal{H}}}_{{\rm{c}}}\otimes {{\mathcal{H}}}_{{\rm{p}}}\) where, \({{\mathcal{H}}}_{{\rm{c}}}\) is the coin Hilbert space spanned by the internal states \({\left|0\right\rangle }_{{\rm{c}}}\) and \({\left|1\right\rangle }_{{\rm{c}}}\) of a single qubit, while \({{\mathcal{H}}}_{p}\) represents the position Hilbert space given by the position states \(\left|x\right\rangle\) with \(x\in {\mathbb{Z}}\) encoded in several qubits as described below. Here, the unitary quantum coin toss operation, \({\hat{C}}_{\theta }\), is a unitary rotation operator that acts on the coin qubit space,
$${\hat{C}}_{\theta }=\left[\begin{array}{cc}\cos \theta &-i\sin \theta \\ -i\sin \theta &\cos \theta \end{array}\right]\otimes {\hat{I}}_{{\rm{p}}},$$
where θ is a coin bias parameter that can be varied at each step to modify the QW path superposition weights. The conditional position-shift operator, \(\hat{S}\), translates the particle to the left and right conditioned by the state of the coin qubit,
$$\hat{S}={\left|0\right\rangle }_{{\rm{c}}\,{\rm{c}}}\langle 0| \otimes \sum _{x\in {\mathbb{Z}}}| x-1\rangle {\langle x| +| 1\rangle }_{{\rm{c}}\,{\rm{c}}}\langle 1| \otimes \sum _{x\in {\mathbb{Z}}}| x+1\rangle \left\langle x\right|.$$
The state of the particle in position space after t steps of the walk, is accomplished by the repeated action of the operator \(\hat{W}=\hat{S}{\hat{C}}_{\theta }\) on the initial state of the particle \({\left|\psi \right\rangle }_{{\rm{c}}}=\alpha {\left|0\right\rangle }_{{\rm{c}}}+\beta {\left|1\right\rangle }_{{\rm{c}}}\) at position x = 0, as shown in Fig. 1,
$$\left|\Psi (x,t)\right\rangle ={\hat{W}}^{t}\left[{\left|\psi \right\rangle }_{c}\otimes \left|x=0\right\rangle \right]=\sum _{x}\left[\begin{array}{c}{\psi }_{x,t}^{0}\\ {\psi }_{x,t}^{1}\end{array}\right],$$
where \({\psi }_{x,t}^{0(1)}\) denotes the left(right) propagating component of the particle at time-step t. The probability of finding the particle at position x and time t will be \(P(x,t)=| {\psi }_{x,t}^{0}{| }^{2}+| {\psi }_{x,t}^{1}{| }^{2}\).
Fig. 1: Discrete-time quantum walk scheme.
Each step is composed of a quantum coin operation, \({\hat{C}}_{\theta }\), with tunable effective coin bias parameters, θi, followed by a shift operation, \(\hat{S}\).
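As an illustration of Eqs. (1)-(3), the walk can be simulated classically with a few lines of code. The following NumPy sketch is not the code used for the experiment; the symmetric initial coin state (|0⟩ + i|1⟩)/√2 and the fixed coin angle θ = π/4 in the example call are assumed choices.

```python
import numpy as np

def dqw(steps, thetas, alpha=1 / np.sqrt(2), beta=1j / np.sqrt(2)):
    """Return P(x, t = steps) for a walker starting at x = 0 with coin state (alpha, beta)."""
    n = 2 * steps + 1                          # positions -steps, ..., +steps
    psi = np.zeros((n, 2), dtype=complex)      # columns: coin |0> and |1> amplitudes
    psi[steps] = [alpha, beta]                 # x = 0 sits at array index `steps`
    for theta in thetas:                       # one coin angle per step, as in Fig. 1
        c, s = np.cos(theta), -1j * np.sin(theta)
        coin0 = c * psi[:, 0] + s * psi[:, 1]  # C_theta acting on the coin qubit
        coin1 = s * psi[:, 0] + c * psi[:, 1]
        psi = np.zeros_like(psi)
        psi[:-1, 0] = coin0[1:]                # coin |0>: shift x -> x - 1
        psi[1:, 1] = coin1[:-1]                # coin |1>: shift x -> x + 1
    return np.abs(psi[:, 0])**2 + np.abs(psi[:, 1])**2

P = dqw(steps=5, thetas=[np.pi / 4] * 5)       # five-step walk with a uniform coin
```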
Recent works have shown a relationship between DQWs and the Dirac equation14,15,16,17,18,43. Starting from a discrete-time evolution operator and then moving from position space to momentum space, Dirac kinematics can be recovered from the diagonal terms of the unitary evolution operator for small momenta in the small mass regime16,17,18. In contrast with these proposals in the Fourier frame, we focus our implementation on the probability distribution of the DQW, which is analogous to the spreading of a relativistic particle. To realize a DCA and recover the Dirac equation, a split-step quantum walk, one form of the DQW, is used40. Each step of a split-step quantum walk is a composition of two half-step evolutions with different coin biases and position-shift operators,
$${\hat{W}}_{{\rm{ss}}}={\hat{S}}_{+}{\hat{C}}_{{\theta }_{2}}{\hat{S}}_{-}{\hat{C}}_{{\theta }_{1}},$$
where the coin operation \({\hat{C}}_{{\theta }_{j}}\), with j = 1, 2, is given in Eq. (1). The split-step position-shift operators are,
$${\hat{S}}_{-}={\left|0\right\rangle }_{{\rm{c}}\,{\rm{c}}}\langle 0| \otimes \sum _{x\in {\mathbb{Z}}}| x-1\rangle {\langle x| +| 1\rangle }_{{\rm{c}}\,{\rm{c}}}\langle 1| \otimes \sum_{x\in {\mathbb{Z}}}| x\rangle \left\langle x\right|,$$
$${\hat{S}}_{+}={\left|0\right\rangle }_{{\rm{c}}\,{\rm{c}}}\langle 0| \otimes \sum_{x\in {\mathbb{Z}}}| x\rangle {\langle x| +| 1\rangle }_{{\rm{c}}\,{\rm{c}}}\langle 1| \otimes \sum_{x\in {\mathbb{Z}}}| x+1\rangle \left\langle x\right|.$$
Following Mallick40 and Kumar44, the particle state at time t and position x after the evolution operation \({\hat{W}}_{{\rm{ss}}}\) is described by the differential equation,
$$\frac{\partial }{\partial t}\left[\begin{array}{c}{\psi }_{x,t}^{0}\\ {\psi }_{x,t}^{1}\end{array}\right]= \cos {\theta }_{2}\left[\begin{array}{cc}\cos {\theta }_{1} & -i\sin {\theta }_{1}\\ i\sin {\theta }_{1} & -\cos {\theta }_{1}\end{array}\right]\left[\begin{array}{c}\frac{\partial {\psi }_{x,t}^{0}}{\partial x}\\ \frac{\partial {\psi }_{x,t}^{1}}{\partial x}\end{array}\right]\\ +\left[\begin{array}{cc}\cos ({\theta }_{1}+{\theta }_{2})-1 & -i\sin ({\theta }_{1}+{\theta }_{2})\\ -i\sin ({\theta }_{1}+{\theta }_{2}) & \cos ({\theta }_{1}+{\theta }_{2})-1\end{array}\right]\left[\begin{array}{c}{\psi }_{x,t}^{0}\\ {\psi }_{x,t}^{1}\end{array}\right].$$
The tunability of parameters θ1 and θ2 on the split-step QW permits the study of one-dimensional Dirac equations effectively, within the low momentum subspace, for spin-1/2 particles40,44. It is important to stress out that, the description of the Dirac equation used here corresponds to the 2 × 2 representation, i.e. no spin degree of freedom. For instance, the massless particle Dirac equation can be recovered for \(\cos ({\theta }_{1}+{\theta }_{2})=1\). Thereby, Eq. (7) becomes \(i\hslash [{\partial }_{t}-\cos {\theta }_{2}(\cos {\theta }_{1}{\sigma }_{z}+\sin {\theta }_{1}{\sigma }_{y}){\partial }_{x}]\Psi (x,t)=0\), which is identical to the Dirac equation of a massless particle in the relativistic limit46. In contrast, considering θ1 = 0 and a very small value of θ2 corresponds to the Dirac equation for particles with small mass35,46 in the form \(i\hslash [{\partial }_{t}-(1-{\theta }_{2}^{2}/2){\sigma }_{z}{\partial }_{x}+i{\theta }_{2}{\sigma }_{x}]\Psi (x,t)\approx 0\).
At the same time, by choosing θ1 = 0, the quantum walk operator \({\hat{W}}_{{\rm{ss}}}\) given in Eq. (4) takes the form of the unitary operator for a DCA40,
$${\hat{W}}_{{\rm{ss}}}=\left[\begin{array}{cc}\cos ({\theta }_{2}){S}_{-}&-i\sin ({\theta }_{2}){\mathbb{1}}\\ -i\sin ({\theta }_{2}){\mathbb{1}}&\cos ({\theta }_{2}){S}_{+}\end{array}\right]={U}_{{\rm{DCA}}}.$$
Within this framework, θ2 determines the mass of the Dirac particle. The split-step DQW described by the operator \({\hat{W}}_{{\rm{ss}}}\) is equivalent to the two-period DQW with alternating coin operations, θ1 and θ2, when the alternate points in position space with zero probability are ignored47. Therefore, all the dynamics of a DCA can be recovered from the DQW evolution using \(\hat{W}\) and alternating the two coin operations. See Methods for a comparison between the DCA and the explicit solution of the Dirac equation. Typical features of the Dirac equation in relativistic quantum mechanics, such as Zitterbewegung40 and the Klein paradox48, are also dynamical features of the DCA, as are the spreading of the probability distribution and the entanglement of localized positive-energy states. We note that these effects have also been shown in direct analog simulations of the Dirac equation with trapped ions35 and BECs49.
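A quick numerical check of this statement: running an ordinary DQW \(\hat{W}=\hat{S}{\hat{C}}_{\theta }\) while alternating the coin angle between θ1 and θ2 leaves every other lattice site with exactly zero probability, so those sites can indeed be ignored. The sketch below is self-contained and again uses our assumed coin and shift conventions rather than the experimental circuit.

```python
import numpy as np

def coin(theta):
    # Same assumed coin convention as in the previous sketch (not taken from Eq. (1))
    return np.array([[np.cos(theta), -1j * np.sin(theta)],
                     [-1j * np.sin(theta), np.cos(theta)]])

def dqw_step(psi, theta):
    """One ordinary DQW step W = S C_theta, assuming S shifts the |0> (|1>)
    coin component one site to the left (right), with periodic boundaries."""
    psi = coin(theta) @ psi
    psi[0] = np.roll(psi[0], -1)
    psi[1] = np.roll(psi[1], +1)
    return psi

N, theta1, theta2 = 41, 0.0, np.pi / 20
psi = np.zeros((2, N), dtype=complex)
psi[:, N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]    # illustrative initial coin state
for t in range(10):                                   # alternate theta1 and theta2 each step
    psi = dqw_step(psi, theta1 if t % 2 == 0 else theta2)
P = np.abs(psi[0])**2 + np.abs(psi[1])**2
odd_sites = (np.arange(N) - N // 2) % 2 == 1          # sites of the "wrong" parity after 10 steps
print(P[odd_sites].max())                             # prints 0.0: these sites carry no probability
```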
Experimental DQW implementation
To realize the DQW on a system of qubits, one must choose a mapping of the particle position onto the qubit space. As shown in ref. 50, there is no unique way to map position states to multi-qubit states, so each circuit decomposition depends on the configuration adopted. A direct mapping of each walker position to one qubit in the chain, mimicking the arrangement of the qubit array, is inefficient in terms of the number of qubits and gates required (the former grows linearly and the latter quadratically with the size of the modeled position space). In order to minimize resource use, we take advantage of a digital representation to map the position space into a multi-qubit state and re-order it in such a way that the state \(\left|0\right\rangle \,(\left|1\right\rangle )\) of the last qubit corresponds to even (odd) position numbers. This allows us to minimize the changes needed in the qubit-space configuration during each step of the walk (see Fig. 2). To implement a quantum walk in a one-dimensional position Hilbert space of size \(2^n\), (n + 1) qubits are required. One qubit acts as the coin and the other n qubits mimic the position Hilbert space with \(2^n-1\) positions of a symmetric walk about \(\left|x=0\right\rangle\). We note that the particle can be started from any point in the position space; however, setting the initial state reduces the gate count in the circuit and hence reduces the overall error. The coin operation is achieved by single-qubit rotations on the coin qubit, while the shift operators are realized by using the coin as a control qubit to change the position state during the walk. One possible relabelling of this kind is sketched below.
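One concrete (and deliberately simple) way to realize such a relabelling is the following: the last bit of the n-bit position label stores the parity of x, so even and odd sites differ only in the last qubit. This particular bit layout is our illustrative assumption; as noted above, the mapping is not unique and the hardware encoding may differ.

```python
def pos_to_label(x, n=4):
    """One possible relabelling for a symmetric walk on 2**n - 1 sites
    (here x = -7..7): the right-most bit stores the parity of x, so even and
    odd sites are distinguished by the last qubit alone.  This layout is an
    illustrative choice, not necessarily the one used on the hardware."""
    assert abs(x) <= 2**(n - 1) - 1
    parity = x % 2                      # 0 for even positions, 1 for odd ones
    block = (x + 2**(n - 1) - 1) // 2   # upper n-1 bits: which pair of sites x belongs to
    return format(block, f"0{n - 1}b") + str(parity)

labels = {x: pos_to_label(x) for x in range(-7, 8)}
# e.g. x = -1 -> '0111' and x = 0 -> '0110': neighbouring sites in the same
# pair share the upper bits and differ only in the last (parity) qubit.
```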
Fig. 2: Mapping of multi-qubit states to position states.
Multi-qubit states are re-ordered in such a way that the state \(\left|0\right\rangle \,(\left|1\right\rangle )\) of the last qubit corresponds to even (odd) position numbers; the correspondence with the position space is shown.
We realize the walk on a chain of seven individual 171Yb+ ions confined in a Paul trap and laser-cooled close to their motional ground state45,51. Five of these are used to encode qubits in their hyperfine-split 2S1/2 ground level. Single-qubit rotations, or R gates, and two-qubit entangling interactions, or XX gates, are achieved by applying two counter-propagating optical Raman beams to the chain, one of which features individual addressing (see Methods for experimental details). We can represent up to 15 positions of a symmetric QW, including the initial position \(\left|x=0\right\rangle\).
Based on this position representation, a circuit diagram for the DQW on five qubits with the initial state \({\left|0\right\rangle }_{c}\otimes \left|0000\right\rangle\) is composed for up to five steps, see Fig. 3. Each evolution step, \(\hat{W}\), starts with a rotation operation on the coin qubit, \({\hat{C}}_{{\theta }_{j}}\), followed by a set of controlled gates that change the position state of the particle under \(\hat{S}\). Owing to the choice of position representation, it is enough to perform a single-qubit rotation on the last qubit at every step, which could also be done by classical tracking50.
Fig. 3: Circuit implementation of quantum walks on a trapped-ion processor and its time evolution.
a Circuit diagram for a DQW and DCA. Each dashed block describes one step in the quantum walk. b Discrete-time quantum walk. Comparison of the experimental results (left) and the theoretical quantum-walk probability distribution (right) for the first five steps with initial particle state b i and b iv \({\left|\psi \right\rangle }_{{\rm{c}}}={\left|0\right\rangle }_{{\rm{c}}}\), b ii and b v \({\left|\psi \right\rangle }_{{\rm{c}}}={\left|1\right\rangle }_{{\rm{c}}}\), b iii and b vi \({\left|\psi \right\rangle }_{{\rm{c}}}={\left|0\right\rangle }_{{\rm{c}}}+i{\left|1\right\rangle }_{{\rm{c}}}\), and position state \(\left|x=0\right\rangle\). c Output of a step-5 Dirac cellular automaton for θ1 = 0 and, c i and c iv, θ2 = π/4, c ii and c v, θ2 = π/10, and c iii and c vi, θ2 = π/20, with the initial state \(\left|{\Psi }_{{\rm{in}}}\right\rangle =({\left|0\right\rangle }_{{\rm{c}}}+i{\left|1\right\rangle }_{{\rm{c}}})\otimes \left|x=0\right\rangle\).
Computational gates such as CNOT, Toffoli, and Toffoli-4 are generated by a compiler which breaks them down into constituent physical-level single- and two-qubit gates45. A circuit diagram detailing the compiled building blocks is shown in Methods. To prepare an initial particle state different from \({\left|0\right\rangle }_{{\rm{c}}}\), it is enough to perform a rotation on the coin qubit before the first step. In some cases this rotation can be absorbed into the first gates in step one. Table 1 summarizes the number of native gates needed per step for each initial state. To recover the evolution of the Dirac equation in a DQW after five steps, 81 single-qubit gates and 32 XX gates are required.
Table 1 Gate counting.
After evolving a number of steps, we sample the corresponding probability distribution 3000 times and correct the results for readout errors. For the DQW evolution up to five steps shown in Fig. 3, a balanced coin (θ1 = θ2 = π/4) is used, where the initial position is \(\left|x=0\right\rangle\), for different initial particle states: \({\left|0\right\rangle }_{{\rm{c}}}\) in Fig. 3b i, \({\left|1\right\rangle }_{{\rm{c}}}\) in Fig. 3b ii, and an equal superposition of both in Fig. 3b iii. In Fig. 3b iv, b v, and b vi we show the ideal output from a classical simulation of the circuit for comparison (see Methods for a plot of the difference). With a balanced coin the particle evolves into an equal superposition of the left and right positions at each time step, and upon measurement there is a 50/50 probability of finding the particle to the left or right of its previous position, just as in a classical walk. If we let the DQW evolve for more than three steps before we perform a position measurement, we find a very different probability distribution compared to the classical random walk52.
The same experimental setup can be used to recover a DCA with a two-period DQW. Here we set θ1 = 0 and varied θ2 to recover the Dirac equation for different mass values. In Fig. 3c, we show experimental results for θ2 = π/4, π/10, and π/20, corresponding to masses of 1.1357, 0.3305, and 0.159 in units of ℏ c⁻² s⁻¹, with the initial particle state in the superposition \({\left|0\right\rangle }_{c}+i{\left|1\right\rangle }_{c}\). The main signature of a DCA for small mass values is the presence of peaks moving outward and a flat distribution in the middle, as shown for the cases with small values of θ2, Fig. 3c ii-iii. This bimodal probability distribution in position space is the one-dimensional analog of an initially localized Dirac particle with positive energy evolving in time, which spreads in all directions in position space at speeds close to the speed of light53. In contrast, a DCA with θ2 = π/4, Fig. 3c i, corresponds to a massive particle, and hence there is a slow spread rather than a ballistic trajectory in position space.
We have shown how quantum walks form the basic elements for simulating the dynamics of a free Dirac particle with positive energy. Despite a population mismatch of 0.05–0.2 between the simulation and the experimental results after five steps, the final probability density exhibits the characteristic behavior of an initially localized Dirac particle. A key factor in the digitization of the DQW/DCA is the mapping of qubit states to position space. An adequate mapping is important to minimize the number of gates in the protocol and, as a consequence, the resource scaling of the evolution. With an increasing number of available qubits, these quantum circuits can be scaled to implement more steps and to simulate multi-particle DQWs. The number of gates grows polynomially with the number of steps54. The correspondence between DQWs and the dynamics of Dirac particles suggests that the QW formalism is a viable approach to reproduce a variety of phenomena underpinned by Dirac-particle dynamics in both the high- and low-energy regimes22,39,43. Quantum simulations of free quantum field theory43, of Yang-Mills gauge fields acting on fermionic matter55, and of the effect of mass and space-time curvature on entanglement between accelerated particles20,56,57 have been reported, and probing quantum field theory from the algorithmic perspective is an active field of research. However, the circuit complexity for the position-dependent coin operations needed to simulate these effects will increase with the complexity of the evolution, which means further improvements in quantum hardware will be necessary for their realization.
Experimental details
The experiments are performed on a chain of seven individual 171Yb+ ions confined in a Paul trap and laser-cooled close to their motional ground state45,51. In order to guarantee higher uniformity in the ion spacing, matching the equally spaced individual addressing beams, the middle five ions are used to encode qubits in their hyperfine-split 2S1/2 ground level, with an energy difference of 12.642821 GHz. The two edge ions are neither manipulated nor measured; however, their contribution to the collective motion is included when creating the entangling operations. The ions are initialized by an optical pumping scheme and are collectively read out using state-dependent fluorescence detection58, with each ion being mapped to a distinct photo-multiplier tube (PMT) channel. The system has two mechanisms for quantum control, which can be combined to implement any desired operation: single-qubit rotations, or R gates, and two-qubit entangling interactions, or XX gates. These quantum operations are achieved by applying two counter-propagating optical Raman beams from a single 355-nm mode-locked laser59. The first Raman beam is a global beam applied to the entire chain, while the second is split into individual addressing beams, each of which can be controlled independently and targets one qubit. Single-qubit gates are generated by driving resonant Rabi rotations of defined phase, amplitude, and duration. Two-qubit gates are realized by illuminating two ions with beat-note frequencies near the motional sidebands, creating an effective spin-spin (Ising) interaction via transient entanglement between the state of the two ions and all modes of motion60,61,62. The average fidelities of single- and two-qubit gates are 99.5(2)% and 98–99%, respectively. Rotations around the z-axis are achieved by phase advances on the classical control signals. Both the R and the XX angles can be varied continuously. State preparation and measurement (SPAM) errors are characterized and corrected by applying the inverse of an independently measured state-to-state error matrix63.
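The readout correction amounts to multiplying the measured outcome histogram by the inverse of a calibration matrix. The sketch below illustrates this post-processing step under an assumed uncorrelated per-qubit error model with made-up error rates; it is a generic stand-in, not the exact procedure of ref. 63.

```python
import numpy as np

# Per-qubit readout calibration: M1[i, j] = P(measure i | prepared j).
# eps0/eps1 are assumed (illustrative) misidentification probabilities.
eps0, eps1 = 0.01, 0.02
M1 = np.array([[1 - eps0, eps1],
               [eps0, 1 - eps1]])

n_qubits = 5
M = M1
for _ in range(n_qubits - 1):          # uncorrelated-qubit model: tensor product
    M = np.kron(M, M1)

def correct_readout(p_measured):
    """Apply the inverse calibration matrix, then clip and renormalize."""
    p = np.linalg.solve(M, p_measured)
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# Example: a histogram over the 2**5 outcomes (uniform here, standing in for 3000 shots).
p_raw = np.full(2**n_qubits, 1 / 2**n_qubits)
p_corr = correct_readout(p_raw)
```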
To illustrate how our experiment performs, we plot the absolute value of the difference between the measured and simulated position distributions in Fig. 4; the measurements match the theoretical expectation closely. These distributions are obtained after tracing out the coin information of the unitary evolution \(\hat{W}=\hat{S}{\hat{C}}_{\theta }\) at each time-step. In both instances, DQW and DCA, the number of gates, and hence the error incurred, grows with the number of steps.
Fig. 4: Experimental errors.
Experimental error distribution for DQW (left) and DCA (right).
Apart from this, the output of the walk, for both the DQW and the DCA, is designed to have zero probability at alternate positions; however, due to addressing crosstalk in the system, we see a small amount of population in these states. The same mechanism can populate the state \(\left|1000\right\rangle\) of the logical encoding, which is not included in our mapping. The average experimental population registered in this state is <2% for the deepest circuits and hence does not affect the results significantly.
Comparison between Dirac kinematics and DCA
We use the explicit time-dependent solution of the one-dimensional Dirac equation provided by Strauch18:
$$\Psi (x,t)=\frac{m{\mathcal{N}}}{\pi }\left(\begin{array}{c}{s}^{-1}{K}_{1}(ms)\left[a+i(t+x)\right]+{K}_{0}(ms)\\ {s}^{-1}{K}_{1}(ms)\left[a+i(t-x)\right]+{K}_{0}(ms)\end{array}\right),$$
where \(s={[{x}^{2}+{(a+it)}^{2}]}^{1/2}\), \({\mathcal{N}}=\sqrt{(\pi /2m)}{[{K}_{1}(2ma)+{K}_{0}(2ma)]}^{-1/2}\) is the normalization factor, and Kn is the modified Bessel function of order n, to show the probability density at time t corresponding to the DCA after time-step t, Fig. 5. The relationship between the mass in the Dirac equation and the coin bias parameter is given by,
$$m\approx \frac{{\theta }_{2}}{1-\frac{{\theta }_{2}^{2}}{2}}.$$
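Substituting the three coin angles used in Fig. 3c into this relation reproduces the masses quoted above; a minimal check:

```python
import numpy as np

for theta2 in (np.pi / 4, np.pi / 10, np.pi / 20):
    m = theta2 / (1 - theta2**2 / 2)          # mass-coin relation above
    print(f"theta2 = {theta2:.4f} rad  ->  m = {m:.4f}")
# prints m = 1.1357, 0.3305, 0.1590, matching the values quoted in the text
```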
Fig. 5: Dirac kinematics and DCA.
Numerical simulation of the explicit time-dependent solution of the one-dimensional Dirac equation (solid blue) and DCA (yellow bars) at (a) t = 3 and (b) t = 5 with a = 0.4 and θ2 = π/20.
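The solid curves of Fig. 5 can be reproduced by evaluating the spinor above on a grid of positions. The sketch below uses mpmath for the modified Bessel functions of complex argument, with a = 0.4 and θ2 = π/20 (so m ≈ 0.159) as in the caption; the position grid and precision settings are arbitrary choices, and this is an illustrative evaluation rather than the authors' plotting code.

```python
import numpy as np
from mpmath import besselk, sqrt, pi, mp

mp.dps = 30                                  # working precision for the Bessel evaluations

a, theta2, t = 0.4, np.pi / 20, 5.0
m = theta2 / (1 - theta2**2 / 2)             # mass-coin relation from above, m ~ 0.159
norm = sqrt(pi / (2 * m)) * (besselk(1, 2 * m * a) + besselk(0, 2 * m * a))**-0.5

def dirac_density(x):
    """|Psi(x, t)|^2 for the explicit solution quoted in the text."""
    s = sqrt(x**2 + (a + 1j * t)**2)         # s = [x^2 + (a + it)^2]^(1/2)
    pref = m * norm / pi
    psi0 = pref * (besselk(1, m * s) / s * (a + 1j * (t + x)) + besselk(0, m * s))
    psi1 = pref * (besselk(1, m * s) / s * (a + 1j * (t - x)) + besselk(0, m * s))
    return float(abs(psi0)**2 + abs(psi1)**2)

xs = np.linspace(-8, 8, 161)                 # arbitrary grid for comparison with the DCA bars
density = [dirac_density(float(x)) for x in xs]
```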
Gate block
The compiler breaks down the gate blocks shown in Fig. 3 (Toffoli-CNOT and Toffoli-Toffoli 4-CNOT) into native R and XX gates as given by the following circuits, which are optimal in the XX-gate count, Fig. 6. The sketch of the XX gate symbolizes the two-qubit entangling gate between the outer ions inside a square.
Fig. 6: Gate block.
a Toffoli-CNOT and b Toffoli-Toffoli 4-CNOT.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Childs, A. M. et al. Exponential algorithmic speedup by a quantum walk. In Proc. Thirty-fifth Annual ACM Symposium on Theory of Computing 59–68 (Association for Computing Machinery, 2003).
Ambainis, A. Quantum walks and their algorithmic applications. Int. J. Quantum Inf. 1, 507–518 (2003).
Shenvi, N., Kempe, J. & Whaley, K. B. Quantum random-walk search algorithm. Phys. Rev. A 67, 052307 (2003).
Ambainis, A. Quantum walk algorithm for element distinctness. SIAM J. Comput. 37, 210–239 (2007).
Magniez, F., Santha, M. & Szegedy, M. Quantum algorithms for the triangle problem. SIAM J. Comput. 37, 413–424 (2007).
Douglas, B. L. & Wang, J. B. A classical approach to the graph isomorphism problem using quantum walks. J. Phys. A: Math. Theor. 41, 075303 (2008).
Gamble, J. K., Friesen, M., Zhou, D., Joynt, R. & Coppersmith, S. N. Two-particle quantum walks applied to the graph isomorphism problem. Phys. Rev. A 81, 052313 (2010).
Berry, S. D. & Wang, J. B. Two-particle quantum walks: entanglement and graph isomorphism testing. Phys. Rev. A 83, 042317 (2011).
Paparo, G. & Martin-Delgado, M. Google in a quantum network. Sci. Rep. 2, 444 (2012).
Paparo, G., Müller, M., Comellas, F. & Martin-Delgado, M. A. Quantum Google in a complex network. Sci. Rep. 3, 2773 (2013).
Loke, T., Tang, J. W., Rodriguez, J., Small, M. & Wang, J. B. Comparing classical and quantum PageRanks. Quantum Inf. Process. 16, 25 (2017).
Chawla, P., Mangal, R. & Chandrashekar, C. M. Discrete-time quantum walk algorithm for ranking nodes on a network. Quantum Inf. Process. 19, 158 (2020).
DiMolfetta, G., Brachet, M. & Debbasch, F. Quantum walks in artificial electric and gravitational fields. Phys. A: Stat. Mech. its Appl. 397, 157–168 (2014).
Chandrashekar, C. M., Banerjee, S. & Srikanth, R. Relationship between quantum walks and relativistic quantum mechanics. Phys. Rev. A 81, 062340 (2010).
DiMolfetta, G., Brachet, M. & Debbasch, F. Quantum walks as massless Dirac fermions in curved space-time. Phys. Rev. A 88, 042301 (2013).
Arrighi, P., Facchini, S. & Forets, M. Quantum walking in curved spacetime. Quantum Inf. Process. 15, 3467–3486 (2016).
Chandrashekar, C. M. Two-component Dirac-like Hamiltonian for generating quantum walk on one-, two-and three-dimensional lattices. Sci. Rep. 3, 2829 (2013).
Strauch, F. W. Relativistic quantum walks. Phys. Rev. A 73, 054302 (2006).
DiMolfetta, G. & Pérez, A. Quantum walks as simulators of neutrino oscillations in a vacuum and matter. N. J. Phys. 18, 103038 (2016).
Mallick, A., Mandal, S., Karan, A. & Chandrashekar, C. M. Simulating Dirac Hamiltonian in curved space-time by split-step quantum walk. J. Phys. Commun. 3, 015012 (2019).
Chandrashekar, C. M. & Busch, T. Localized quantum walks as secured quantum memory. EPL 110, 10005 (2015).
Mallick, A., Mandal, S. & Chandrashekar, C. M. Neutrino oscillations in discrete-time quantum walk framework. Eur. Phys. J. C. 77, 85 (2017).
Aharonov, D., Ambainis, A., Kempe, J. & Vazirani, U. Quantum walks on graphs. In Proc. Thirty-third Annual ACM Symposium on Theory of Computing 50–59 (Association for Computing Machinery, 2001).
Tregenna, B., Flanagan, W., Maile, R. & Kendon, V. Controlling discrete quantum walks: coins and initial states. N. J. Phys. 5, 83 (2003).
Farhi, E. & Gutmann, S. Quantum computation and decision trees. Phys. Rev. A 58, 915–928 (1998).
Gerhardt, H. & Watrous, J. Continuous-time quantum walks on the symmetric group. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 290–301 (Springer Berlin Heidelberg, 2003).
Perets, H. B. et al. Realization of quantum walks with negligible decoherence in waveguide lattices. Phys. Rev. Lett. 100, 170506 (2008).
Karski, M. et al. Quantum walk in position space with single optically trapped atoms. Science 325, 174–177 (2009).
Peruzzo, A. et al. Quantum walks of correlated photons. Science 329, 1500–1503 (2010).
Schreiber, A. et al. Photons walking the line: a quantum walk with adjustable coin operations. Phys. Rev. Lett. 104, 050502 (2010).
Broome, M. A. et al. Discrete single-photon quantum walks with tunable decoherence. Phys. Rev. Lett. 104, 153602 (2010).
Tamura, M., Mukaiyama, T. & Toyoda, K. Quantum walks of a phonon in trapped ions. Phys. Rev. Lett. 124, 200501 (2020).
Schmitz, H. et al. Quantum walk of a trapped ion in phase space. Phys. Rev. Lett. 103, 090504 (2009).
Zähringer, F. et al. Realization of a quantum walk with one and two trapped ions. Phys. Rev. Lett. 104, 100503 (2010).
Gerritsma, R. et al. Quantum simulation of the Dirac equation. Nature 463, 68–71 (2010).
Ryan, C. A., Laforest, M., Boileau, J. C. & Laflamme, R. Experimental implementation of a discrete-time quantum random walk on an NMR quantum-information processor. Phys. Rev. A 72, 062317 (2005).
Qiang, X. et al. Efficient quantum walk on a quantum processor. Nat. Commun. 7, 11511 (2016).
Ramasesh, V. V., Flurin, E., Rudner, M., Siddiqi, I. & Yao, N. Y. Direct probe of topological invariants using Bloch oscillating quantum walks. Phys. Rev. Lett. 118, 130501 (2017).
Flurin, E., Ramasesh, V. V., Hacohen-Gourgy, S., Martin, L. S., Yao, N. Y. & Siddiqi, I. Observing topological invariants using quantum walks in superconducting circuits. Phys. Rev. X 7, 031023 (2017).
Mallick, A. & Chandrashekar, C. M. Dirac cellular automaton from split-step quantum walk. Sci. Rep. 6, 25779 (2016).
Pérez, A. Asymptotic properties of the Dirac quantum cellular automaton. Phys. Rev. A 93, 012328 (2016).
Meyer, D. A. From quantum cellular automata to quantum lattice gases. J. Stat. Phys. 85, 551–574 (1996).
Bisio, A., D'Ariano, G. M. & Tosini, A. Quantum field as a quantum cellular automaton: the Dirac free evolution in one dimension. Ann. Phys. 354, 244–264 (2015).
Kumar, N. P., Balu, R., Laflamme, R. & Chandrashekar, C. M. Bounds on the dynamics of periodic quantum walks and emergence of the gapless and gapped Dirac equation. Phys. Rev. A 97, 012116 (2018).
Debnath, S. et al. Demonstration of a small programmable quantum computer with atomic qubits. Nature 536, 63–66 (2016).
Thaller, B. The Dirac equation (Springer Science & Business Media, 2013).
Zhang, W.-W., Goyal, S. K., Simon, C. & Sanders, B. C. Decomposition of split-step quantum walks for simulating Majorana modes and edge states. Phys. Rev. A 95, 052351 (2017).
Bisio, A., D'Ariano, G. M. & Tosini, A. Dirac quantum cellular automaton in one dimension: Zitterbewegung and scattering from potential. Phys. Rev. A 88, 032301 (2013).
LeBlanc, L. J. et al. Direct observation of zitterbewegung in a Bose-Einstein condensate. N. J. Phys. 15, 073011 (2013).
Singh, S. et al. Universal one-dimensional discrete-time quantum walks and their implementation on near term quantum hardware. Preprint at https://arxiv.org/abs/2001.11197 (2020).
Landsman, K. A. et al. Verified quantum information scrambling. Nature 567, 61–65 (2019).
Omar, Y., Paunković, N., Sheridan, L. & Bose, S. Quantum walk on a line with two entangled particles. Phys. Rev. A 74, 042304 (2006).
Bracken, A. J., Ellinas, D. & Smyrnakis, I. Free-Dirac particle evolution as a quantum random walk. Phys. Rev. A 75, 022322 (2007).
Fillion-Gourdeau, F., MacLean, S. & Laflamme, R. Algorithm for the solution of the Dirac equation on digital quantum computers. Phys. Rev. A 95, 042343 (2017).
Arnault, P., DiMolfetta, G., Brachet, M. & Debbasch, F. Quantum walks and non-Abelian discrete gauge theory. Phys. Rev. A. 94, 012335 (2016).
Arrighi, P., DiMolfetta, G. & Facchini, S. Quantum walking in curved spacetime: discrete metric. Quantum 2, 84 (2018).
Singh, S., Balu, R., Laflamme, R. & Chandrashekar, C. M. Accelerated quantum walk, two-particle entanglement generation and localization. J. Phys. Commun. 3, 055008 (2019).
Olmschenk, S. et al. Manipulation and detection of a trapped Yb+ hyperfine qubit. Phys. Rev. A 76, 052314 (2007).
Islam, R. et al. Beat note stabilization of mode-locked lasers for quantum information processing. Opt. Lett. 39, 3238–3241 (2014).
Choi, T. et al. Optimal quantum control of multimode couplings between trapped ion qubits for scalable entanglement. Phys. Rev. Lett. 112, 190502 (2014).
Mølmer, K. & Sørensen, A. Multiparticle entanglement of hot trapped ions. Phys. Rev. Lett. 82, 1835–1838 (1999).
Solano, E., de Matos Filho, R. L. & Zagury, N. Deterministic Bell states and measurement of the motional state of two trapped ions. Phys. Rev. A 59, R2539–R2543 (1999).
Shen, C. & Duan, L. M. Correcting detection errors in quantum state engineering through data processing. N. J. Phys. 14, 053053 (2012).
The authors would like to thank Y. Nam and C. Figgatt for helpful discussions. C.H.A. acknowledges financial support from CONACYT doctoral grant no. 455378. C.M.C. acknowledges support from DST, Government of India, under Ramanujan Fellowship grant no. SB/S2/RJN-192/2014 and US Army ITC-PAC contract no. FA520919PA139. N.M.L. acknowledges financial support from NSF grant no. PHY-1430094 to the PFC@JQI.
Joint Quantum Institute, Department of Physics, University of Maryland, College Park, MD, 20742, USA
C. Huerta Alderete, Nhung H. Nguyen, Daiwei Zhu, Christopher Monroe & Norbert M. Linke
Instituto Nacional de Astrofísica, Óptica y Electrónica, Calle Luis Enrique Erro No. 1, 72840, Sta. Ma. Tonantzintla, PUE, Mexico
C. Huerta Alderete
The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai, 600113, India
Shivani Singh & C. M. Chandrashekar
Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai, 400094, India
U.S. Army Research Laboratory, Computational and Information Sciences Directorate, Adelphi, MD, 20783, USA
Radhakrishnan Balu
Department of Mathematics & Norbert Wiener Center for Harmonic Analysis and Applications, University of Maryland, College Park, MD, 20742, USA
Nhung H. Nguyen
Daiwei Zhu
Christopher Monroe
C. M. Chandrashekar
Norbert M. Linke
C.H.A., S.S., N.H.N., D.Z., R.B., C.M., C.M.C. and N.M.L. designed the research; C.H.A., N.H.N., D.Z., and N.M.L. collected and analyzed the data. All authors contributed to this manuscript.
Correspondence to C. Huerta Alderete.
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Huerta Alderete, C., Singh, S., Nguyen, N.H. et al. Quantum walks and Dirac cellular automata on a programmable trapped-ion quantum computer. Nat Commun 11, 3720 (2020). https://doi.org/10.1038/s41467-020-17519-4
Accepted: 02 July 2020
PIV investigation of the flow fields in subject-specific vertebro-basilar (VA-BA) junction
Guangyu Zhu (ORCID: 0000-0002-3469-3779)1,
Yuan Wei1,
Qi Yuan (ORCID: 0000-0002-7004-9688)1,
Jian Yang2 &
Joon Hock Yeo3
As the only arterial structure of which two main arteries merged into one, the vertebro-basilar (VA-BA) system is one of the favorite sites of cerebral atherosclerotic plaques. The aim of this study was to investigate the detailed hemodynamics characteristics in the VA-BA system.
A scaled-up subject-specific flow phantom of the VA-BA system was fabricated based on computed tomography angiography (CTA) scanning images of a healthy adult. Flow fields in eight axial planes and six radial planes were measured and analyzed by using particle image velocimetry (PIV) under steady flow conditions of \({Re}=300\) and \({Re}=500\). A water–glycerin mixture was used as the working fluid.
The flow in the current model exhibited highly three-dimensional characteristics. The confluence of the VA flows formed a bimodal velocity distribution near the confluence apex. Due to the asymmetrical structural configuration, the bimodal velocity profile skewed towards the left, and sharper peaks were observed under the higher Reynolds number. Secondary flow characterized by two vortices formed in the radial planes 10 mm downstream of the confluence apex and persisted along the BA under both Reynolds numbers. The strength of the secondary flow under \({Re}=500\) was around 8% higher than that under \({Re}=300\), and it decayed nonlinearly along the flow direction. In addition, a low-momentum recirculation region induced by boundary layer separation was observed near the confluence apex. The wall shear stress (WSS) in the recirculation area was found to be lower than 0.4 Pa. This region coincides well with the preferential site of vascular lesions in the VA-BA system.
This preliminary study verified that subject-specific in-vitro experiments are capable of reflecting the detailed flow features in the VA-BA system. The findings may help to expand the understanding of hemodynamics in the VA-BA system and to further clarify the mechanism underlying the localization of vascular lesions.
The vertebro-basilar (VA-BA) system is the only arterial structure in humans in which two large arteries merge into one: the VAs arise from the subclavian arteries and join into the BA. It provides a critical cerebral blood supply path that feeds the posterior circulation of the circle of Willis under normal conditions, and it is responsible for supplying compensatory blood flow to the anterior circulation when anatomical or pathological variations occur [1,2,3].
Clinical observations have shown that the VA-BA region is a preferential site of vascular lesions. The prevalence of plaques in this region is around 50% in the general population [4,5,6,7,8]. Moreover, approximately 25% of ischemic strokes are related to lesions in the VA-BA [9]. Compared with other causes of ischemic stroke, strokes caused by lesions in the VA-BA result in much higher in-hospital mortality (8% vs. 20%) and worse functional outcomes [10].
Hemodynamic characteristics have long been associated with the initiation and progression of vascular diseases [4, 11,12,13]. Fry et al. [14] first suggested that wall shear stress (WSS) in excess of 40 Pa could result in acute damage to the endothelial layer of vessels. In contrast, Caro et al. indicated that early lesions prefer to develop in low-WSS regions [15]. This observation is supported by in-vivo and in-vitro studies concerning hemodynamics at the carotid bifurcations [16, 17], coronary arteries [18,19,20,21], and descending thoracic aorta [22]. The results of these studies have shown that intimal thickening of vessels is strongly correlated with regions of low WSS. Further support for the atherogenic role of low WSS came from animal models [23, 24], computational fluid dynamics (CFD) simulations [10, 25], and studies at the molecular and cellular scales [26, 27]. Based on the above evidence, the low-WSS theory is currently considered more convincing than the high-WSS hypothesis. In addition to low WSS, oscillatory shear stress [16, 17, 22, 28] and the spatial wall shear stress gradient (SWSSG) [29] are also implicated as critical adverse factors in the disease process.
Most studies, however, concern the blood flow in bifurcations. Only limited attention has been paid to the flow characteristics in arterial confluences. Although the geometry of a confluence is similar to that of a bifurcation, the hemodynamic patterns in arterial confluences are quite different from those in bifurcations because of the reversed flow direction, especially near the apex region.
One of the first studies concerning the flow in the VA-BA system was conducted by McDonald et al. [30]. In their in-vivo animal experiment, flows in the BA were visualized by injecting ink into the vessels. The study showed that the streams from symmetrical VAs do not mix in the BA. Limited by the method itself, however, no quantitative data were provided. Thereafter, multiple research methods were applied to investigate the hemodynamic characteristics of the VA-BA. Numerically, simulations were performed using two-dimensional models [31,32,33], three-dimensional symmetric models [31, 34, 35], and patient-specific models [10, 36]. Clinically, the use of MRA made the in-vivo measurement of flow in the confluence area possible [36,37,38].
In addition to numerical and in-vivo methods, in-vitro experiments are another powerful research tool to explore the hemodynamics in vessels and to validate numerical results. Ravensbergen et al. experimentally investigated the impacts of the confluence angle [39] and merging flows [34] on the flow characteristics in a rectangular cross-section confluence phantom by using laser Doppler anemometry (LDA). The flow velocity profiles in the confluence area were measured, and the results validated the presence of secondary flow in the confluence area reported in an earlier numerical simulation [35]. Lutz et al. [40] visualized the confluence flow patterns in the VA-BA system by using dye injection, and quantified the mixing effect by introducing the concept of a mixing index based on the measurement of dye concentration. Kobayashi et al. [41] investigated the velocity profiles in VA-BA segments excised from elderly cadavers under different steady flow rates by using a high-speed camera. These studies provided unique and valuable in-vitro perspectives on the flow patterns in the VA-BA region. However, constrained by the measurement tools, certain hemodynamic characteristics in the confluence region, such as the shear stress and velocity fields, are yet to be fully investigated.
This study aims to investigate the detailed hemodynamic characteristics of the VA-BA system. To achieve this goal, a subject-specific 3D VA-BA flow phantom was fabricated based on CTA scanning images, and the detailed flow features in the VA-BA system were investigated through in-vitro experiments under different Reynolds numbers. Hemodynamic parameters in the confluence area, including velocity fields, shear stress distribution, and secondary flow, were analyzed to provide a better understanding of the role of hemodynamics in localized vascular lesions.
Image acquisition
The cerebral CTA data of a healthy adult were provided by the First Affiliated Hospital of Xi'an Jiaotong University (Xi'an, Shaanxi, China). The scanning was performed on a 64 detector spiral CT (Aquilion 64, Toshiba Medical Systems, California, USA). The field of view (FOV), number of slices, tube voltage, tube current, scan time, and slice thickness were \(265\,\text {mm}\times 265\,\text {mm}\), 967, 120 kV, 350 mA, 500 ms, and 0.5 mm, respectively (Fig. 1).
CTA image set of cerebral scanning
Fabrication of the flow phantom
First, a 3D digital model of the VA-BA region was reconstructed from the CTA images. The reconstruction process was introduced in detail in our previous paper [1]. To enable accurate measurement, the derived VA-BA model was scaled up to 5 times its original size in Solidworks and exported to STL format (Fig. 2). The STL file was then sent to the 3D printing system (SCPS350, Xi'an Jiaotong University, China), and a resin phantom of the vascular structure was printed at a resolution of 0.1 mm (Fig. 3a). Finally, the printed vascular model was fixed into a small tank filled with silicone gel (Sylgard 184, Dow Corning Inc., USA) (Fig. 3b). After curing of the silicone, the resin vascular model was melted out by heating, and a transparent phantom with an internal flow channel was obtained for the in-vitro PIV measurements (Fig. 3c).
Reconstructed digital model of the VA-BA junction
Physical phantom fabrication for experiment
The experimental setup was composed of a flow loop and a measurement system (Fig. 4).
The steady flow was supplied by an upstream overflow tank that provided constant pressure. According to the empirical equation (Eq. 1) [6], the length of the tubes connecting the tank was set to 1.6 m, and flow-regulating honeycombs were placed inside the tubes to ensure that the inlet flow was fully developed.
$$\begin{aligned} L_{\text{e}}=0.06 \times D \times Re \end{aligned}$$
The inlet flow rates of the bilateral VAs were controlled by adjusting valves placed upstream of the VA entrances and were monitored by electromagnetic flow meters. At the outlet of the BA, an adjustable flow resistor was connected into the flow loop. The outflow of the phantom was collected in a downstream tank and pumped back to the overflow tank. An adjustable thermostatic heater was placed in the overflow tank to control the temperature of the working fluid.
Working fluid and flow parameters
Blood is a multiphase fluid that exhibits shear-thinning behavior. However, the viscosity of blood is relatively constant at shear rates above 100 s−1 [7]. The non-Newtonian nature of blood was therefore neglected in the present study, because the shear rates in large arteries such as the ICA and VA are rarely lower than 100 s−1.
The working fluid selected for this experiment was a mixture of glycerin and water with a density of 1157 kg/m3, a viscosity of 10.6 cP at \(37\,^{\circ }\text {C}\), and a refractive index of 1.41. The refractive index of the working fluid matched well with that of the silicone phantom, and no distortion was observed at the fluid-solid interface (Fig. 5). Hollow glass spheres with an average diameter of \(10\,\upmu \text {m}\) were seeded into the working fluid for PIV data acquisition.
Image distortion after refractive matching
Matching the Reynolds number of the experimental condition with that of the in-vivo arterial flow is the key to obtaining meaningful results from a scaled flow phantom. As the Reynolds number ranges from 100 to 1000 in medium-size arteries, Reynolds numbers of 300 and 500 were chosen as the experimental conditions. The Reynolds number is defined as follows:
$$\begin{aligned} Re=\frac{\rho UD}{\mu } \end{aligned}$$
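Because the phantom is scaled up five-fold, the inlet settings follow directly from Eqs. (1) and (2) and the stated fluid properties. The short sketch below computes the mean inlet velocity required for each target Reynolds number and the corresponding entrance length, taking the scaled RVA diameter as the characteristic length (an assumption; the paper does not state which diameter defines Re):

```python
rho = 1157.0              # kg/m^3, working-fluid density quoted above
mu = 10.6e-3              # Pa.s (10.6 cP)
D = 5 * 2.75e-3           # m: scaled RVA diameter, assumed characteristic length

for Re in (300, 500):
    U = Re * mu / (rho * D)      # Eq. (2) solved for the mean inlet velocity
    Le = 0.06 * D * Re           # Eq. (1): laminar entrance length
    print(f"Re = {Re}: U = {U * 100:.1f} cm/s, Le = {Le * 100:.1f} cm")
# gives roughly U = 20 and 33 cm/s and Le = 25 and 41 cm, i.e. well below the
# 1.6 m of inlet tubing described above (assuming tubing of comparable bore).
```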
Flow visualization
A PIV system was used to capture the flow velocity in the phantom. The PIV system consisted of a CCD camera (LaVision Image Pro 4M CCD, \(2048\times 2048\) pix2), a 200 mJ Nd:YAG laser (Gemini 200, 532 nm), and an optical lens that produced a light sheet about 1 mm thick.
To investigate the hemodynamic characteristics of the VA-BA system in detail, the flow fields in eight planes parallel to the flow direction (axial planes) (Fig. 5a) and six planes orthogonal to the mainstream (radial planes) (Fig. 5b) were captured. The axial planes were evenly spaced from top to bottom with an interval of 1 mm. The radial planes were distributed from upstream to downstream with an interval of 5 mm.
The two-frame cross-correlation method was used for image acquisition. In each acquisition, 100 image pairs were recorded to diminish the error caused by random events. The acquired raw images in IM7 format were post-processed in the DaVis software to obtain the velocity vector files. The interrogation window size was set to \(32\times 32\) pixels with 50% overlap. A self-developed MATLAB algorithm was utilized to eliminate the noise outside the flow region.
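A Python sketch of the equivalent post-processing chain (ensemble averaging followed by masking of vectors outside the lumen) is given below; the original study used DaVis and a MATLAB routine, so this is only an illustrative stand-in with synthetic data.

```python
import numpy as np

def average_and_mask(u_stack, v_stack, lumen_mask):
    """Ensemble-average a stack of instantaneous PIV fields (shape: n, ny, nx)
    and zero out vectors outside the vessel lumen, mimicking the noise-removal
    step described above."""
    u_mean = np.nanmean(u_stack, axis=0)
    v_mean = np.nanmean(v_stack, axis=0)
    u_mean[~lumen_mask] = 0.0
    v_mean[~lumen_mask] = 0.0
    return u_mean, v_mean

# Toy example: 100 noisy realizations of a uniform flow inside a circular lumen.
ny = nx = 64
yy, xx = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
lumen = xx**2 + yy**2 < 0.8**2
u_stack = 0.2 + 0.02 * np.random.randn(100, ny, nx)   # axial component [m/s]
v_stack = 0.02 * np.random.randn(100, ny, nx)         # transverse component [m/s]
u, v = average_and_mask(u_stack, v_stack, lumen)
```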
Shear stress conversion
Shear stress is defined as follows:
$$\begin{aligned} \tau _{xy}=\tau _{yx}=\mu \left( \frac{ \partial v }{\partial x}+\frac{ \partial u }{\partial y}\right) \end{aligned}$$
Thus, the shear stress can be obtained from the velocity vector field. To compare the shear stress in this scaled phantom with that of the original size, a scale factor was applied. The scale factor is derived from the Buckingham Pi theorem [10, 42]:
$$\begin{aligned} \tau _{v}=\left( \frac{\rho _{b}}{\rho _{f}}\right) \left( \frac{v_{b}}{v_{f}}\right) \tau _{b} \end{aligned}$$
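In practice, Eq. (3) is evaluated on the regular PIV grid with finite differences and the result is rescaled with Eq. (4). The snippet below is a generic sketch of that conversion; the synthetic velocity field and the density/velocity ratios fed into Eq. (4) are placeholders rather than the study's data.

```python
import numpy as np

def shear_stress(u, v, dx, dy, mu):
    """tau_xy = mu * (dv/dx + du/dy) on a regular grid (Eq. 3).
    u, v: 2-D arrays of axial/transverse velocity [m/s]; dx, dy: grid spacing [m]."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return mu * (dv_dx + du_dy)

# Synthetic example standing in for one post-processed PIV plane.
ny, nx, dx = 64, 128, 0.5e-3
y = np.linspace(-1, 1, ny)[:, None]
u = 0.2 * (1 - y**2) * np.ones((ny, nx))   # quasi-parabolic axial profile [m/s]
v = np.zeros((ny, nx))                     # no transverse component in this toy field

tau_phantom = shear_stress(u, v, dx, dx, mu=10.6e-3)

# Eq. (4): scale phantom stresses to blood-flow values; the density and
# velocity(-scale) ratios below are placeholders for the study's actual ratios.
rho_ratio, v_ratio = 1060.0 / 1157.0, 1.0
tau_blood = rho_ratio * v_ratio * tau_phantom
```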
Geometrical structure of the VA-BA system
The current VA-BA model has an asymmetrical structure in which the sagittal plane of the body does not pass through the center of the junction apex. The original diameters of the left VA (LVA), right VA (RVA), and BA were 2.37 mm, 2.75 mm, and 2.83 mm, respectively. The diameter ratio of the bilateral VAs was 1.16 and the confluence angle was \(63^\circ\).
Axial flow patterns
Figure 6 showed the overall axial velocity distributions in the VA-BA region under different Reynolds numbers.
Schematic diagram of velocity fields in axial planes under \({Re} = 300\) (a) and \({Re} = 500\) (b)
Detailed axial velocity fields (solid contour) and velocity profiles (vector arrows) in each plane were illustrated in Fig. 7.
Velocity fields and velocity profiles in axial planes under \({Re}=300\) (left) and \({Re}=500\) (right)
As the figure shows, the velocity distributions in the bilateral VAs were quasi-parabolic. When the flows confronted each other, a bimodally distributed velocity profile with a trough at the arterial axis appeared in the BA. Under the flow condition of \({Re}=300\), it took 18 mm in the median plane for the bimodal velocity profile to restore to the quasi-parabolic distribution (Fig. 7d, e). In contrast, the bimodal velocity distribution pattern persisted all along the BA segment of the flow phantom under the flow condition of \({Re}=500\). The peaks of the bimodal velocity distribution were more biased to the left side under the higher Reynolds number. In addition, a triangular flow stagnation region was found at the confluence apex, where the flow velocity magnitude is below 0.05 m/s. The area of the flow stagnation region decreased with increasing Reynolds number, measuring 74.09 mm2 under \({Re}=300\) and 59.69 mm2 under \({Re}=500.\)
Streamlines over the whole VA-BA region under different Reynolds numbers are illustrated in Fig. 8.
Schematic diagram of streamlines in axial planes under \({Re} = 300\) (a) and \({Re} = 500\) (b)
Detailed streamlines in each plane are illustrated in Fig. 9. The overall streamline distributions were similar between the two Reynolds numbers. The flows from the bilateral VAs confronted each other at the beginning of the BA, about 10 mm downstream of the confluence apex. In the upper planes, the stream from the left side fully dominated the flow in the BA (Fig. 9a, b). In the lower planes of the flow phantom, the flow from the RVA gradually took dominance in the BA (Fig. 9c–h); this phenomenon became stronger as the Reynolds number increased. However, no signs of flow disturbance or mixing were observed in the entire flow region. The two streams from the bilateral VAs remained parallel in the BA, and a clear boundary between the bilateral streams could be observed. Moreover, it is interesting to note that stable recirculation zones formed near the confluence apex, while no shedding was observed under the experimental conditions.
The shear stress obtained from the experimental data was converted based on Eq. 4 (Fig. 10). Detailed shear stress distributions in each plane under the different Reynolds numbers are plotted in Fig. 11. Low shear stress \((<0.4 \; \text {Pa})\) was observed near the wall of the confluence apex and on the left side of the BA. High shear stress was observed along the confluence interface and the vessel walls of the VAs and BA.
Schematic diagram of shear stress in axial planes under \({Re} = 300\) (a) and \({Re} = 500\) (b)
Shear stress in axial planes under \({Re}=300\) (left) and \({Re}=500\) (right)
Radial flow patterns
Velocity fields in the radial planes are shown in Figs. 12 and 13. Flow from the left VA was wrapped by that from the right side and formed unique three-dimensional flow patterns downstream of the confluence apex. An obvious boundary between the flows from the bilateral VAs was observed in the BA (Fig. 13c–f). This phenomenon agrees with the observations in the axial planes.
Schematic diagram of velocity fields in radial planes under \({Re} = 300\) (a) and \({Re} = 500\) (b)
Velocity fields in radial planes under \({Re}=300\) (left) and \({Re}=500\) (right)
The velocity vectors in the radial planes, illustrated in Figs. 14 and 15, further clarify the radial flow characteristics in the VA-BA region. It can be seen that the secondary flows in the BA were characterized by two vortices rotating in opposite directions (Fig. 15c–f). To quantify the strength of the secondary flow, the dimensionless term "mean secondary velocity" introduced by Ravensbergen et al. [39] was utilized in this study. The mean secondary velocity is defined as the ratio of the cross-sectional mean of the secondary velocities to the cross-sectional mean of the axial velocities at each location. The impacts of the Reynolds number and the distance from the confluence apex on the strength of the secondary flow were quantitatively analyzed and are shown in Fig. 16, and the specific data are listed in Table 1.
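For reference, the sketch below computes the mean secondary velocity from a radial-plane vector field as we read the definition above: the in-plane (secondary) speed averaged over the lumen cross-section divided by the cross-sectional mean of the axial velocity. The toy two-vortex field is synthetic and purely illustrative.

```python
import numpy as np

def mean_secondary_velocity(u_axial, u_inplane_1, u_inplane_2, lumen_mask):
    """Cross-sectional mean of the secondary (in-plane) speed divided by the
    cross-sectional mean of the axial velocity, following the definition of
    Ravensbergen et al. as described in the text."""
    sec = np.sqrt(u_inplane_1**2 + u_inplane_2**2)
    return sec[lumen_mask].mean() / np.abs(u_axial[lumen_mask]).mean()

# Toy check: two counter-rotating in-plane vortices on a uniform axial flow.
ny = nx = 64
yy, xx = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
lumen = xx**2 + yy**2 < 1.0
w = np.full((ny, nx), 0.2)                        # axial velocity [m/s]
u = 0.05 * np.sign(xx) * yy                       # crude two-vortex in-plane pattern
v = -0.05 * np.sign(xx) * xx
ratio = mean_secondary_velocity(w, u, v, lumen)   # dimensionless secondary flow strength
```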
Schematic diagram of velocity vectors in radial planes under \({Re} = 300\) (a) and \({Re} = 500\) (b)
Velocity vectors in radial planes under \({Re} = 300\) (left) and \({Re} = 500\) (right)
Impacts of Reynolds number and distance from confluence apex on mean secondary velocity
Table 1 Mean secondary velocity under different Reynolds numbers
Shear stress distributions in the radial planes were calculated and are illustrated in Figs. 17 and 18. It was noticed that a low shear stress zone appeared at the apex site in the radial flow as well (Fig. 18b), with values lower than 0.4 Pa.
Schematic diagram of shear stress in radial planes under \({Re} = 300\) (a) and \({Re} = 500\) (b)
Shear stress in radial planes under \({Re}=300\) (left) and \({Re}=500\) (right)
In the present study, a subject-specific VA-BA phantom was reconstructed, and in-vitro investigations were conducted to elucidate the hemodynamic features of the VA-BA arterial system. Detailed flow patterns and shear stress distributions in the confluence region were studied and compared under different Reynolds numbers.
Structural characteristics
The geometrical structure of the VA-BA arterial system varies among populations. Previous clinical studies have reported that the confluence angle of the VAs ranges from \(10^{\circ }\) to \(160^{\circ }\,(60^{\circ }{\pm }30^{\circ })\) [10, 39]. Wake-Buck et al. [10] classified the structure of VA-BAs into three types according to the spatial configuration: the Walking type, the Tuning fork type, and the Lambda type. A recent retrospective clinical study showed that 64% of patients have a Lambda-type VA-BA, while the prevalence of the Walking-type and Tuning fork-type configurations is 17% and 19%, respectively [43]. In the present study, the confluence angle between the VAs was \(63^{\circ }\) and the diameter ratio of the RVA to the LVA was 1.16, which are within the reported physiological range. The angle between the LVA and BA was \(176^{\circ }\), and the RVA joins the BA in a pseudo-T-junction, so the model can be classified as a Lambda-type VA-BA.
Flow characteristics
Axial flow
As a unique arterial structure, the VA-BA system also exhibits distinctive flow characteristics. When the streams from the bilateral VAs confronted each other at the proximal end of the BA, they flowed on their own sides without mixing. Such patterns suggest that the flow in the BA is laminar. In addition, bimodal velocity profiles appeared immediately after the confluence. Due to the asymmetrical structure, the left peak of the velocity profile was sharper than the right one, which reflects the eccentric flow pattern in the BA. These findings are supported by several CFD [10], in-vitro [41], and in-vivo [30, 38, 44] studies.
To make the data suitable for quantitative comparison, the normalized velocity, defined as the maximum magnitude in each velocity profile (Umax) divided by the averaged cross-sectional velocity (Umean) in the BA, was calculated, and the data are listed in Table 2. According to Poiseuille's law, the ratio of Umax to Umean should be 2 in fully developed laminar flow inside a tube, and the length required for flow development is given in Eq. 1. Because of the limited length, it was impossible for the flow in the BA to become fully developed. Thus, the normalized velocity should lie between 1 and 2, and its magnitude should decrease with increasing Reynolds number. The results from the current study agree well with this theoretical analysis. Meanwhile, due to the eccentric flow induced by the asymmetrical structure, the normalized velocity under \({Re}=300\) was close to 2 near the confluence apex. A similar phenomenon was reported by Kobayashi et al. [41]. The decrease of the normalized velocity along the flow direction reveals that the flow development in the BA was affected by the confluence of the bilateral streams. Moreover, although the flows from the VAs are parallel along the BA in the median planes (Fig. 9c–f), helical flow was observed in the upper (Fig. 9a, b) and lower planes (Fig. 9g, h) near the distal end of the BA segment. This observation further suggests that the fluid downstream of the confluence flows in a layered pattern along the radial direction.
Table 2 Normalized velocity under different Reynolds numbers
Focusing on the confluence apex region, a zone of low shear stress coinciding with the low-momentum recirculation caused by boundary layer separation was observed in this study. The area of this region decreased with increasing Reynolds number. The WSS in the VA-BA was derived from the shear stress field one pixel away from the wall, and a low-WSS \((<0.4\,\text {Pa})\) region at the apex of the VA junction was identified. This finding agrees with the results from idealized and patient-specific models [8, 10, 45]. Several studies have suggested that WSS below the atheroprotective range could trigger an inflammatory-cell-mediated pathway associated with the growth of atherosclerotic plaques [14, 46,47,48]. The observation of localized low WSS provides additional hemodynamic evidence for the preferential localization of vascular plaques in the confluence apex region. Besides, the low-velocity recirculation zone may accelerate the progression of atherosclerosis by promoting the gathering and deposition of blood components.
Radial flow
Highly three-dimensional flows in the VA-BA system have been reported in the literature [8, 10, 34, 39]. Thus, the investigation of flow patterns in the radial direction is important for a more comprehensive understanding of the hemodynamic characteristics of the VA-BA system.
In previous studies, Ravensbergen's team investigated the radial flow in a generalized tuning-fork-type VA-BA system by using CFD simulations [34, 35, 39]. Their results showed that a secondary flow with a distinct four-vortex pattern appeared in the BA as a consequence of the flow confluence. Furthermore, the same team quantitatively analyzed the impact of flow conditions on the secondary flow strength by using the non-dimensional mean secondary velocity, and suggested that the secondary flow is stronger under higher Reynolds numbers and decays along the flow direction.
The present study is the first in-vitro attempt to investigate the secondary flow in a subject-specific phantom of the VA-BA system. Our results showed that the secondary flow was established around 10 mm downstream of the confluence apex due to boundary layer separation, and persisted along the BA. The secondary flow strength in the plane 10 mm downstream of the apex was 42.1% under \({Re}=300\) and 45.5% under \({Re}=500\), and it decreased along the flow direction in a non-linear pattern (Fig. 16). Although a direct quantitative comparison of secondary flow strength between studies is difficult owing to the use of different models and flow conditions, the distribution patterns observed in this study agree well with the results of previous ones [34, 35]. However, we only observed two oppositely rotating vortices in the secondary flow. This is mainly due to the skewed flow in the BA, which was induced by the lambda-type structural configuration.
Although the average shear stress level in the radial planes is low, relatively high shear stress areas appeared at the interface of the two streams and near the arterial wall of the BA. The secondary flow may repeatedly carry blood from a relatively low shear stress area to a high shear stress area. Blood components subjected to such high and low shear stress cycles may suffer damage due to fatigue [49]. Meanwhile, secondary flow in the artery could also play a key role in the erosion and endothelial response at the early stage of atherosclerosis [50].
Simplifications made in this study
Restricted by the feasible experimental conditions, it is difficult to fully mimic the physiological flow in the VA-BA system. Thus, several simplifications were made in the current study.
First of all, the inlet flow was assumed to be fully developed at the entrance. Although this assumption is not physiologically correct, it has been widely accepted in in-vitro experimental studies [51, 52] and provides a basis for comparison between studies.
Secondly, because the deformation of the vessels is small and blood behaves as a Newtonian fluid in arteries of the size of those in the VA-BA system, the arterial wall elasticity and the non-Newtonian properties of blood were neglected.
In addition, although the blood flow in the vascular system is pulsatile under physiological conditions, several numerical [16, 53] and in-vitro [41, 54, 55] studies have suggested that steady boundary conditions are able to predict the non-temporal cerebral flow characteristics at the corresponding point of the pulsatile flow profile. Among them, a comparison between pulsatile results [33] and steady results [31] in two-dimensional VA-BA models showed that the most important flow patterns are the same in both cases. Kobayashi et al. [41] further suggested that the VA-BA flow phenomena occurring in pulsatile flow are essentially the same as those found in steady flow. Thus, the steady flow boundary conditions adopted in this study are capable of revealing the flow characteristics in the region of interest, as no temporal hemodynamic terms were involved.
Moreover, the flow phantom utilized in this study was scaled up. To overcome spatial restrictions, the scaling of vascular replicas based on the principle of dynamic similarity, also known as dynamic scaling, has been widely used in in-vitro experiments [56,57,58,59]. Dynamic scaling is a well-established concept that ensures that the development of flow in the scaled phantom is identical to that in the original object. For incompressible steady flow within rigid domains, dynamic similarity can be ensured by matching the Reynolds number. In this study, a scaled-up VA-BA phantom was deployed because of the difficulty of flow visualization at the original size. As described in the Methods section, dynamic scaling was achieved by matching the Reynolds number between the scaled phantom and the in-vivo conditions. Thus, the measurements from the scaled phantom are capable of reflecting the flow characteristics in the non-scaled situation.
Limitations and future works
It is important to emphasize that the current study carries inherent limitations associated with the in-vitro modeling techniques. Firstly, the arteries that branch out from the BA were removed. To investigate the flow in the VA-BA system in more detail, these branch arteries should be included in future work. Secondly, the elasticity of the arterial wall was neglected. As discussed before, although the rigid-wall assumption is acceptable in the current study, it may still lead to a slight overestimation of the WSS [60]. In addition, more samples should be included in future investigations to systematically evaluate the impact of various spatial characteristics [21]. Finally, the three-dimensional flow field should be measured in future work to assess the complex spatial flow at the junction site.
In this study, in-vitro experiments were conducted to investigate the detailed hemodynamic characteristics in a subject-specific VA-BA flow phantom. Flow characteristics in the axial and radial planes were visualized and analyzed by using PIV. The preliminary results showed that the flow in the VA-BA system exhibits highly three-dimensional features, and that the flow patterns were affected by the spatial structure and the inflow Reynolds number. Furthermore, a low-WSS region coinciding well with the preferential region of atherosclerotic plaques was found near the confluence apex. The findings from this study could help to expand the understanding of the hemodynamics in the VA-BA system and further clarify the mechanism underlying the localization of vascular lesions.
BA:
basilar artery
CFD:
computational fluid dynamics
CoW:
circle of Willis
CTA:
computed tomography angiography
LDL:
low density lipoprotein
NURBS:
non-uniform rational B-splines
PIV:
particle image velocimetry
VA:
vertebral artery
Zhu GY, Yuan Q, Yang J, Yeo JH. The role of the circle of Willis in internal carotid artery stenosis and anatomical variations: a computational study based on a patient-specific three-dimensional model. Biomed Eng Online. 2015;14(1):107. https://doi.org/10.1186/s12938-015-0105-6.
Zhu GY, Yuan Q, Yang J, Yeo JH. Experimental study of hemodynamics in the circle of Willis. Biomed Eng Online. 2015;14(Suppl 1):10. https://doi.org/10.1186/1475-925X-14-S1-S10.
Liu X, Gao Z, Xiong H, Ghista D, Ren L, Zhang H, Wu W, Huang W, Hau WK. Three-dimensional hemodynamics analysis of the circle of Willis in the patient-specific nonintegral arterial structures. Biomech Model Mechanobiol. 2016;15(6):1439–56. https://doi.org/10.1007/s10237-016-0773-6.
Cecchi E, Giglioli C, Valente S, Lazzeri C, Gensini GF, Abbate R, Mannini L. Role of hemodynamic shear stress in cardiovascular disease. Atherosclerosis. 2011;214(2):249–56. https://doi.org/10.1016/j.atherosclerosis.2010.09.008.
Schaffer S, Schwartz C, Wagner W. A definition of the intima of human arteries and of its atherosclerosis-prone regions. Circulation. 1992;85(1):391–405. https://doi.org/10.1161/01.CIR.85.1.391.
Bamford J, Sandercock P, Dennis M, Burn J, Warlow C. Classification and natural history of clinical identifiable subtypes of cerebral infarction. Lancet. 1991;337(8756):1521–6. https://doi.org/10.1016/0140-6736(91)93206-O.
Graziano F, Ganau M, Iacopino DG, Boccardi E. Vertebro-basilar junction aneurysms: a single centre experience and meta-analysis of endovascular treatments. Neuroradiol J. 2014;27(6):732–41. https://doi.org/10.15274/NRJ-2014-10100.
Ravensbergen J, Ravensbergen JW, Krijger JK, Hillen B, Hoogstraten HW. Localizing role of hemodynamics in atherosclerosis in several human vertebrobasilar junction geometries. Arterioscler Thromb Vasc Biol. 1998;18(5):708–16. https://doi.org/10.1161/01.ATV.18.5.708.
Bogousslavsky J, Van Melle G, Regli F. The Lausanne stroke registry: analysis of 1,000 consecutive patients with first stroke. Stroke. 1988;19(9):1083–92. https://doi.org/10.1161/01.STR.19.9.1083.
Wake-Buck AK, Gatenby JC, Gore JC. Hemodynamic characteristics of the vertebrobasilar system analyzed using MRI-based models. PLoS ONE. 2012;7(12):1354–7. https://doi.org/10.1371/journal.pone.0051346.
Cebral JR, Mut F, Weir J, Putman CM. Association of hemodynamic characteristics and cerebral aneurysm rupture. Am J Neuroradiol. 2011;32(2):264–70. https://doi.org/10.3174/ajnr.A2274.
Kulcsár Z, Ugron Á, Marosfoi M, Berentei Z, Paál G, Szikora I. Hemodynamics of cerebral aneurysm initiation: the role of wall shear stress and spatial wall shear stress gradient. Am J Neuroradiol. 2011;32(3):587–94. https://doi.org/10.3174/ajnr.A2339.
Sforza DM, Kono K, Tateshima S, Vinuela F, Putman C, Cebral JR. Hemodynamics in growing and stable cerebral aneurysms. J Neurointerv Surg. 2015;. https://doi.org/10.1136/neurintsurg-2014-011339.
Fry DL. Acute vascular endothelial changes associated with increased blood velocity gradients. Circ Res. 1968;22(2):165–97. https://doi.org/10.1161/01.RES.22.2.165.
Caro CG, Fitz-Gerald JM, Schroter RC. Arterial wall shear and distribution of early atheroma in man. Nature. 1969;223(5211):1159–60. https://doi.org/10.1038/2231159a0.
Ku DN, Giddens DP, Zarins CK, Glagov S. Pulsatile flow and atherosclerosis in the human carotid bifurcation. Positive correlation between plaque location and low oscillating shear stress. Arterioscler Thromb Vasc Biol. 1985;5(3):293–302. https://doi.org/10.1161/01.ATV.5.3.293.
Friedman MH, Deters OJ, Bargeron CB, Hutchins GM, Mark FF. Shear-dependent thickening of the human arterial intima. Atherosclerosis. 1986;60(2):161–71. https://doi.org/10.1016/0021-9150(86)90008-0.
Perktold K, Hofer M, Rappitsch G, Loew M, Kuban BD, Friedman MH. Validated computation of physiologic flow in a realistic coronary artery branch. J Biomech. 1998;31(3):217–28. https://doi.org/10.1016/S0021-9290(97)00118-8.
Friedman MH, Deters OJ. Correlation among shear rate measures in vascular flows. J Biomech Eng. 1987;109(1):25–6. https://doi.org/10.1115/1.3138637.
Chatzizisis YS, Coskun AU, Jonas M, Edelman ER, Feldman CL, Stone PH. Role of endothelial shear stress in the natural history of coronary atherosclerosis and vascular remodeling: molecular, cellular, and vascular behavior. J Am Coll Cardiol. 2007;49(25):2379–93. https://doi.org/10.1016/j.jacc.2007.02.059.
Yang Y, Liu X, Xia Y, Liu X, Wu W, Xiong H, Zhang H, Xu L, Wong KKL, Ouyang H, Huang W. Impact of spatial characteristics in the left stenotic coronary artery on the hemodynamics and visualization of 3D replica models. Sci Rep. 2017;7(1):15452. https://doi.org/10.1038/s41598-017-15620-1.
Moore JE, Xu C, Glagov S, Zarins CK, Ku DN. Fluid wall shear stress measurements in a model of the human abdominal aorta: oscillatory behavior and relationship to atherosclerosis. Atherosclerosis. 1994;110(2):225–40. https://doi.org/10.1016/0021-9150(94)90207-0.
Gambillara V, Chambaz C, Montorzi G, Roy S, Stergiopulos N, Silacci P. Plaque-prone hemodynamics impair endothelial function in pig carotid arteries. Am J Physiol Heart Circ Physiol. 2006;290(6):2320–8. https://doi.org/10.1152/ajpheart.00486.2005.
Buchanan JR, Kleinstreuer C, Truskey GA, Lei M. Relation between non-uniform hemodynamics and sites of altered permeability and lesion growth at the rabbit aorto-celiac junction. Atherosclerosis. 1999;143(1):27–40. https://doi.org/10.1016/S0021-9150(98)00264-0.
Buchanan J. Hemodynamics simulation and identification of susceptible sites of atherosclerotic lesion formation in a model abdominal aorta. J Biomech. 2003;36(8):1185–96. https://doi.org/10.1016/S0021-9290(03)00088-5.
Cheng C, Van Haperen R, De Waard M, Van Damme LCa, Tempel D, Hanemaaijer L, Van Cappellen GWa, Bos J, Slager CJ, Duncker DJ, Van Der Steen AFW, De Crom R, Krams R. Shear stress affects the intracellular distribution of eNOS: direct demonstration by a novel in vivo technique. Blood. 2005;106(12):3691–8. https://doi.org/10.1182/blood-2005-06-2326.
Lutgens E, Gijbels M, Smook M, Heeringa P, Gotwals P, Koteliansky VE, Daemen MJaP. Transforming growth factor-beta mediates balance between inflammation and fibrosis during plaque progression. Arterioscler Thromb Vasc Biol. 2002;22(6):975–82. https://doi.org/10.1161/01.ATV.0000019729.39500.2F.
Ku DN. Blood flow in arteries. Annu Rev Fluid Mech. 1997;29(1):399–434. https://doi.org/10.1146/annurev.fluid.29.1.399.
Lei M, Kleinstreuer C, Truskey GA. Numerical investigation and prediction of atherogenic sites in branching arteries. J Biomech Eng. 1995;117(3):350–7. https://doi.org/10.1115/1.2794191.
McDonald DA, Potter JM. The distribution of blood to the brain. J Physiol. 1951;114(3):356–71. https://doi.org/10.1113/jphysiol.1951.sp004627.
Krijger JK, Hillen B, Hoogstraten HW. Mathematical models of the flow in the basilar artery. J Biomech. 1989;22(11–12):1193–202. https://doi.org/10.1016/0021-9290(89)90221-2.
Krijger JKB, Hillen B, Hoogstraten HW, van den Raadt MPMG. Steady two-dimensional merging flow from two channels into a single channel. Appl Sci Res. 1990;47(3):233–46. https://doi.org/10.1007/BF00418053.
Krijger JKB, Hillen B, Hoogstraten HW. A two-dimensional model of pulsating flow in the basilar artery. J Appl Math Phys. 1991;42(5):649–62. https://doi.org/10.1007/BF00944764.
Ravensbergen J, Krijger JKB, Hillen B, Hoogstraten HW. Merging flows in an arterial confluence: the vertebro-basilar junction. J Fluid Mech. 1995;304(11):119–41. https://doi.org/10.1017/S0022112095004368.
Krijger JKB, Heethaar RM, Ravensbergen J. Computation of steady three-dimensional flow in a model of the basilar artery. J Biomech. 1992;25(12):1451–65. https://doi.org/10.1016/0021-9290(92)90058-9.
Zhao X, Zhao M, Amin-Hanjani S, Du X, Ruland S, Charbel FT. Wall shear stress in major cerebral arteries as a function of age and gender-a study of 301 healthy volunteers. J Neuroimaging. 2015;25(3):403–7. https://doi.org/10.1111/jon.12133.
Bockman MD, Kansagra AP, Shadden SC, Wong EC, Marsden AL. Fluid mechanics of mixing in the vertebrobasilar system: comparison of simulation and MRI. Cardiovasc Eng Technol. 2012;3(4):450–61. https://doi.org/10.1007/s13239-012-0112-8.
Smith AS, Bellon JR. Parallel and spiral flow patterns of vertebral artery contributions to the basilar artery. Am J Neuroradiol. 1995;16(8):1587–91.
Ravensbergen J, Krijger JKB, Hillen B, Hoogstraten HW. The influence of the angle of confluence on the flow in a vertebro-basilar junction model. J Biomech. 1996;29(3):281–99. https://doi.org/10.1016/0021-9290(95)00064-X.
Lutz RJ, Warren K, Balis F, Patronas N, Dedrick RL. Mixing during intravertebral arterial infusions in an in vitro model. J Neurooncol. 2002;58(2):95–106. https://doi.org/10.1023/A:1016034910875.
Kobayashi N, Karino T. Flow patterns and velocity distributions in the human vertebrobasilar arterial system. Laboratory investigation. J Neurosurg. 2010;113(4):810–9. https://doi.org/10.3171/2010.1.JNS09575.
Bale-Glickman JM. Experimental studies of physiological flows in replicated diseased carotid bifurcations. Ph.D. thesis, University of California, Berkeley; 2005. https://books.google.co.jp/books?id=KELFNwAACAAJ.
Yu J, Zhang S, Li M-L, Ma Y, Dong Y-R, Lou M, Feng F, Gao S, Wu S-W, Xu W-H. Relationship between the geometry patterns of vertebrobasilar artery and atherosclerosis. BMC Neurol. 2018;18(1):83. https://doi.org/10.1186/s12883-018-1084-6.
Taveras JM, Wood EH. Diagnostic neuroradiology, vol. 1. Baltimore: Williams & Wilkins; 1976.
Chong BW, Kerber CW, Buxton RB, Frank LR, Hesselink JR. Blood flow dynamics in the vertebrobasilar system: correlation of a transparent elastic model and MR angiography. Am J Neuroradiol. 1994;15(4):733–45.
Lusis A. Atherosclerosis. Nature. 2000;407(6801):233–41. https://doi.org/10.1038/35025203.Atherosclerosis.
Dejana E, Valiron O, Navarro P, Lampugnani MG. Intercellular junctions in the endothelium and the control of vascular permeability. Ann N Y Acad Sci. 1997;811:36–44. https://doi.org/10.1111/j.1749-6632.1997.tb51986.x.
Meng H, Tutino VM, Xiang J, Siddiqui a. High WSS or low WSS? Complex interactions of hemodynamics with intracranial aneurysm initiation, growth, and rupture: toward a unifying hypothesis. Am J Neuroradiol. 2014;35(7):1254–62. https://doi.org/10.3174/ajnr.A3558.
Zhang JN, Bergeron AL, Yu Q, McBride L, Bray PF, Dong JF. Duration of exposure to high fluid shear stress is critical in shear-induced platelet activation-aggregation. Thromb Haemost. 2003;90(4):672–8. https://doi.org/10.1160/TH03-03-0145.
Mohamied Y, Rowland EM, Bailey EL, Sherwin SJ, Schwartz Ma, Weinberg PD. Change of direction in the biomechanics of atherosclerosis. Ann Biomed Eng. 2014;43(1):16–25. https://doi.org/10.1007/s10439-014-1095-4.
Katritsis D, Kaiktsis L, Chaniotis A, Pantos J, Efstathopoulos EP, Marmarelis V. Wall shear stress: theoretical considerations and methods of measurement. Architecture. 2007;49(5):307–29. https://doi.org/10.1016/j.pcad.2006.11.001.
Brunette J, Mongrain R, Laurier J, Galaz R, Tardif JC. 3D flow study in a mildly stenotic coronary artery phantom using a whole volume PIV method. Med Eng Phys. 2008;30(9):1193–200. https://doi.org/10.1016/j.medengphy.2008.02.012.
Hillen B, Drinkenburg BA, Hoogstraten HW, Post L. Analysis of flow and vascular resistance in a model of the circle of Willis. J Biomech. 1988;21(10):807–14. https://doi.org/10.1016/0021-9290(90)90307-O.
Chen L. Hemodynamics in the cerebral circulation: numerical studies and experimental investigation. Doctor of philosophy. Nanyang: Nanyang Technological University; 2005.
Fahy P, McCarthy P, Sultan S, Hynes N, Delassus P, Morris L. An experimental investigation of the hemodynamic variations due to aplastic vessels within three-dimensional phantom models of the circle of Willis. Ann Biomed Eng. 2014;42(1):123–38. https://doi.org/10.1007/s10439-013-0905-4.
Beier S, Ormiston J, Webster M, Cater J, Norris S, Medrano-Gracia P, Young A, Cowan B. Vascular hemodynamics with computational modeling and experimental studies. In: Computing and visualization for intravascular imaging and computer-assisted stenting. Amsterdam: Elsevier; 2017. p. 227–51. https://doi.org/10.1016/B978-0-12-811018-8.00009-6. https://linkinghub.elsevier.com/retrieve/pii/B9780128110188000096.
Beier S, Ormiston J, Webster M, Cater J, Norris S, Medrano-Gracia P, Young A, Cowan B. Impact of bifurcation angle and other anatomical characteristics on blood flow–a computational study of non-stented and stented coronary arteries. J Biomech. 2016;49(9):1570–82. https://doi.org/10.1016/j.jbiomech.2016.03.038.
Hasler D, Obrist D. Three-dimensional flow structures past a bio-prosthetic valve in an in-vitro model of the aortic root. PLoS ONE. 2018;13(3):0194384. https://doi.org/10.1371/journal.pone.0194384.
Friedman MH, Kuban BD, Schmalbrock P, Smith K, Altan T. Fabrication of vascular replicas from magnetic resonance images. J Biomech Eng. 1995;117(3):364. https://doi.org/10.1115/1.2794193.
Torii R, Oshima M, Kobayashi T, Takagi K, Tezduyar TE. Influence of wall thickness on fluid–structure interaction computations of cerebral aneurysms. Int J Numer Methods Biomed Eng. 2010. https://doi.org/10.1002/cnm.1289.
We would like to acknowledge the help of our technician, Mr. Zhenwei Ji, during the setup of the test rig.
This project is supported by grants from the National Natural Science Foundation of China (NSFC) (11802227), China postdoctoral science foundation grant (2016M600781), and the Fundamental Research Funds for the Central Universities (XJJ2017032).
School of Energy and Power Engineering, Xi'an Jiaotong University, No. 28 Xian Ning West Road, Xi'an, 710049, China
Guangyu Zhu
, Yuan Wei
& Qi Yuan
Department of Radiology and Medical Imaging, The First Affiliated Hospital of Xi'an Jiaotong University, 277 Yanta West Road, Xi'an, 710061, China
Jian Yang
School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
Joon Hock Yeo
GYZ carried out the experimental study, drafted the manuscript, and acquired the funding. YW participated in the manuscript drafting. QY participated in the design of this study. JHY participated in the design of the study and helped to revise the manuscript. All authors read and approved the final manuscript.
Correspondence to Qi Yuan.
Zhu, G., Wei, Y., Yuan, Q. et al. PIV investigation of the flow fields in subject-specific vertebro-basilar (VA-BA) junction. BioMed Eng OnLine 18, 93 (2019) doi:10.1186/s12938-019-0711-9
PIV
VA-BA
Operator Theory on Function Spaces
Session code: otf
Zeljko Cuckovic (University of Toledo)
Nikolai Vasilevski (CINVESTAV)
Nina Zorboska (University of Manitoba)
Tuesday, Jul 25 [McGill U., Arts Building, Room W-215]
11:45 Dmitry Khavinson (University of South Florida, USA), Vanishing of reproducing kernels in spaces of analytic functions
12:15 Raul Curto (University of Iowa, USA), A New Necessary Condition for the Hyponormality of Toeplitz Operators on the Bergman Space
14:15 Lewis Coburn (SUNY at Buffalo, USA), Toeplitz Quantization
14:45 Nikolai Vasilevski (CINVESTAV, Mexico), Toeplitz operators defined by sesquilinear forms
15:45 Trieu Le (University of Toledo, USA), Commutants of Separately Radial Toeplitz Operators on the Bergman Space
16:15 Nina Zorboska (University of Manitoba), Intrinsic operators on spaces of holomorphic functions
17:00 Javad Mashreghi (Universite Laval, Canada), The Gleason--Kahane--Zelazko theorem for modules
17:30 Ruhan Zhao (SUNY at Brockport, USA), Closures of Hardy and Hardy-Sobolev spaces in the Bloch type space on the unit ball
Wednesday, Jul 26 [McGill U., Arts Building, Room W-215]
11:15 Raphael Clouatre (University of Manitoba, Canada), Annihilating ideals and spectra for commuting row contractions
11:45 Michael Stessin (SUNY at Albany, USA), Spectral characterization of representations of symmetry groups
13:45 Thomas Ransford (Universite Laval), Cyclicity in the harmonic Dirichlet space
14:15 Catherine Beneteau (University of South Florida, USA), Zeros of optimal polynomial approximants in Dirichlet-type spaces
14:45 Raul Quiroga-Barranco (CIMAT, Mexico), Toeplitz operators, special symbols and moment maps
15:15 Yunus Zeytuncu (University of Michigan-Dearborn), Compactness of Hankel and Toeplitz operators on domains in $\mathbb{C}^n$
16:15 Maribel Loaiza (CINVESTAV, Mexico), On Toeplitz operators on the poly harmonic Bergman space
16:45 Armando Sanchez-Nungaray (Universidad Veracruzana, Mexico), Commutative algebras of Toeplitz operators on the Siegel domain
Dmitry Khavinson
University of South Florida, USA
Vanishing of reproducing kernels in spaces of analytic functions
In most situations we are accustomed to, e.g., Bergman and Hardy spaces in the disk, the reproducing kernels do not vanish. Neither do they if we consider the latter spaces with fairly general weights, for example those formed from moduli of analytic functions. Yet in the "cut-off spaces" formed by polynomials of degree less than or equal to n this is not necessarily true. We shall discuss what is known and the numerous compelling open problems that remain.
Location: McGill U., Arts Building, Room W-215
Raul Curto
University of Iowa, USA
A New Necessary Condition for the Hyponormality of Toeplitz Operators on the Bergman Space
A well known result of C. Cowen states that, for a symbol $\varphi \in L^{\infty }, \; \varphi \equiv \bar{f}+g \;\;(f,g\in H^{2})$, the Toeplitz operator $T_{\varphi }$ acting on the Hardy space of the unit circle is hyponormal if and only if $f=c+T_{\bar{h}}g,$ for some $c\in {\mathbb C}$, $h\in H^{\infty }$, $\left\| h\right\| _{\infty}\leq 1.$ In this talk we consider possible versions of this result in the Bergman space case. Concretely, we consider Toeplitz operators on the Bergman space of the unit disk, with symbols of the form $$\varphi \equiv \alpha z^n+\beta z^m +\gamma \overline z ^p + \delta \overline z ^q,$$ where $\alpha, \beta, \gamma, \delta \in \mathbb{C}$ and $m,n,p,q \in \mathbb{Z}_+$, $m < n$ and $p < q$. By studying the asymptotic behavior of the action of $T_{\varphi}$ on a particular sequence of vectors, we obtain a sharp inequality involving the above mentioned data. This inequality improves a number of existing results, and it is intended to be a precursor of basic necessary conditions for joint hyponormality of tuples of Toeplitz operators acting on Bergman spaces in one or several complex variables.
Lewis Coburn
SUNY at Buffalo, USA
Toeplitz Quantization
I discuss some recent work with Wolfram Bauer and Raffael Hagger. Here, $C^n$ is complex n-space and, for z in $C^n$, we consider the standard family of Gaussian measures $d\mu_{t} (z) = (4\pi t)^{-n} exp(-|z|^{2}/4t)dv(z), t > 0$ where $dv$ is Lebesgue measure. We consider the Hilbert space $L^{2}_{t}$ of all $\mu_{t}$-square integrable complex-valued measurable functions on $C^n$ and the closed subspace of all square-integrable entire functions, $H^{2}_{t}$. For $f$ measurable and $h$ in $H^{2}_{t}$ with $fh$ in $L^{2}_{t}$, we consider the Toeplitz operators $T^{(t)}_{f} h = P^{(t)} (fh)$ where $P^{(t)}$ is the orthogonal projection from $L^{2}_{t}$ onto $H^{2}_{t}$. For bounded $f$ ($f$ in $L^{\infty}$) and some unbounded $f$, these are bounded operators with norm $|| \cdot ||_{t}$. For $ f, g$ bounded, with ``sufficiently many" bounded derivatives, there are known deformation quantization conditions, including $(0) lim_{t \rightarrow 0}|| T^{(t)}_{f}||_{t} = ||f||_{\infty}$ and $(1) lim_{t \rightarrow 0} ||T^{(t)}_{f} T^{(t)}_{g} - T^{(t)}_{fg} ||_{t} = 0$. We exhibit a pair of bounded real-analytic functions $F, G$ so that $(1)$ fails. On the positive side, for the space $VMO$ of functions with vanishing mean oscillation, we show that $(1)$ holds for all $f$ in (the sup-norm closed algebra) $VMO \cap L^{\infty}$ and $g$ in $L^{\infty}$. $(1)$ also holds for all $f$ in $UC$ (uniformly continuous functions, bounded or not) while $(0)$ holds for all bounded continuous $f$.
Nikolai Vasilevski
CINVESTAV, Mexico
Toeplitz operators defined by sesquilinear forms
The classical theory of Toeplitz operators in spaces of analytic functions (Hardy, Bergman, Fock, etc spaces) deals usually with symbols that are bounded measurable functions on the domain in question. A further extension of the theory was made for symbols being unbounded functions, measures, and compactly supported distributions. For reproducing kernel Hilbert spaces we describe a certain common pattern, based on the language of sesquilinear forms, that permits us to introduce a further substantial extension of a class of admissible symbols that generate bounded Toeplitz operators. Although the approach is unified for all reproducing kernel Hilbert spaces, for concrete operator consideration in this talk we restrict ourselves to Toeplitz operators acting on the standard Fock and Bergman spaces, as well as, on the Herglotz space of solutions of the Helmholtz equation. The talk is based on a joint work with Grigori Rozenblum, Chalmers University of Technology, Gothenburg, Sweden.
Trieu Le
University of Toledo, USA
Commutants of Separately Radial Toeplitz Operators on the Bergman Space
If $\varphi$ is a bounded separately radial function on the unit ball, the Toeplitz operator $T_{\varphi}$ is diagonalizable with respect to the standard orthogonal basis of monomials on the Bergman space. Given such a function $\varphi$, we characterize bounded functions $\psi$ for which $T_{\psi}$ commutes with $T_{\varphi}$. Several examples will be discussed to illustrate our results.
Nina Zorboska
Intrinsic operators on spaces of holomorphic functions
I will talk about the boundedness and compactness of a large class of operators mapping from general Banach spaces of holomorphic functions into the so-called growth spaces. This class of operators contains some widely studied specific operators such as, for example, the composition, multiplication, and integral operators. I will present a few results which generalize the previously known specific cases, and which show that the boundedness and compactness of the class of intrinsic operators depend largely on the behaviour over the point evaluation functions.
Javad Mashreghi
Universite Laval, Canada
The Gleason--Kahane--Zelazko theorem for modules
Let $T: H^p \to H^p$ be a linear mapping (no continuity assumption). What can we say about $T$ if we assume that ``it preserves outer functions''? Another related question is to consider linear functionals $T: H^p \to \mathbb{C}$ (again, no continuity assumption) and ask about those functionals whose kernels do not include any outer function. We study such questions via an abstract result which can be interpreted as the generalized Gleason--Kahane--\.Zelazko theorem for modules. In particular, we see that continuity of endomorphisms and functionals is a part of the conclusion. This is a joint work with T. Ransford.
Ruhan Zhao
SUNY at Brockport, USA
Closures of Hardy and Hardy-Sobolev spaces in the Bloch type space on the unit ball
For $0<\alpha<\infty$, $0< p<\infty$ and $0< s<\infty$, we characterize the closures in the $\alpha$-Bloch norm of $\alpha$-Bloch functions that are in a Hardy space $H^p$ and in a Hardy-Sobolev space $H^p_s$ on the unit ball of $\mathbb C^n$. This is a joint work with Jasbir Singh Manhas.
Raphael Clouatre
University of Manitoba, Canada
Annihilating ideals and spectra for commuting row contractions
We investigate the relationship between an ideal of multipliers and the spectra of operators on Hilbert space annihilated by the ideal. This relationship is well-known and especially transparent in the single variable case, but we focus on the multivariate situation and the associated function theoretic framework of the Drury-Arveson space. Recent advances in the structure of multipliers are leveraged to obtain a description of the spectrum of a commuting row contraction annihilated by a given ideal.
Michael Stessin
SUNY at Albany, USA
Spectral characterization of representations of symmetry groups
Joint work with Z. Cuckovic. Joint projective spectra of operator tuples generalize projective determinantal hypersurfaces, which have been studied since the 1920s. It was shown in a recent paper of Stessin and Tchernev that the appearance of certain quadrics in two-dimensional sections of the joint spectrum of a tuple of self-adjoint operators implies the existence of a subspace, invariant for the tuple, such that the group generated by the restrictions of the operators to this subspace represents a Coxeter group. We further investigate the connection between determinantal hypersurfaces and representations of symmetry groups. Our main result is stated as follows. Theorem. Let $G$ be one of the groups $A_n$, $B_n$ or a dihedral group $I_p(n)$ and let $\rho_1$ and $\rho_2$ be two representations of $G$. If the joint projective spectra of the images of the Coxeter generators of $G$ under $\rho_1$ and $\rho_2$ are the same, $$\sigma(\rho_1(w_1),...,\rho_1(w_n),I)=\sigma(\rho_2(w_1),...,\rho_2(w_n),I),$$ then $\rho_1$ and $\rho_2$ are equivalent.
Thomas Ransford
Universite Laval
Cyclicity in the harmonic Dirichlet space
The harmonic Dirichlet space $\mathcal{D}(\mathbb{T})$ is the Hilbert space of functions $f\in L^2(\mathbb{T})$ such that $$ \|f\|_{\mathcal{D}(\mathbb{T})}^2:=\sum_{n\in\mathbb{Z}}(1+|n|)|\hat{f}(n)|^2<\infty. $$ We give sufficient conditions for $f$ to be cyclic in $\mathcal{D}(\mathbb{T})$, in other words, for $\{\zeta ^nf(\zeta):\ n\geq 0\}$ to span a dense subspace of $\mathcal{D}(\mathbb{T})$. (Joint work with E. Abakumov, O. El-Fallah and K. Kellay.)
Catherine Beneteau
Zeros of optimal polynomial approximants in Dirichlet-type spaces
In this talk, I will discuss certain polynomials that are optimal approximants of inverses of functions in growth restricted analytic function spaces of the unit disk. I will examine the structure of the zeros of these optimal approximants and in particular study an extremal problem whose solution is related to Jacobi matrices and real orthogonal polynomials on the real line.
Raul Quiroga-Barranco
CIMAT, Mexico
Toeplitz operators, special symbols and moment maps
Let us denote by $\mathbb{P}^n(\mathbb{C})$ the $n$-dimensional complex projective space. Our setup considers the weighted Bergman spaces over $\mathbb{P}^n(\mathbb{C})$ and their corresponding Toeplitz operators. Among the latter, we have special interest on those Toeplitz operators whose symbols are quasi-radial and quasi-homogeneous. Generally speaking, this means that, in a certain sense, the symbols depend only on the radial and spherical parts of subsets of the homogeneous coordinates. It turns out that such symbols, and thus their Toeplitz operators, can be related to the structure of toric manifold on $\mathbb{P}^n(\mathbb{C})$. We will describe such topological and geometric relationships. This is joint work with M. A. Morales-Ramos and A. Sanchez-Nungaray.
Yunus Zeytuncu
Compactness of Hankel and Toeplitz operators on domains in $\mathbb{C}^n$
In this talk, I will present various characterizations of compactness of some canonical operators on domains in $\mathbb{C}^n$. I will highlight how complex geometry of the boundary of the domain plays a role in these characterizations. In particular, I will prove that on smooth bounded pseudoconvex Hartogs domains in $\mathbb{C}^2$ compactness of the $\overline{\partial}$-Neumann operator is equivalent to compactness of all Hankel operators with symbols smooth on the closure of the domain. The talk is based on recent joint projects with \u{Z}eljko \u{C}u\u{c}kovi\'c and S\"{o}nmez \c{S}ahuto\u{g}lu.
Maribel Loaiza
On Toeplitz operators on the poly harmonic Bergman space
Consider the upper half plane $\Pi$ with the Lebesgue measure. Although the harmonic Bergman space $b^2(\Pi)$ is represented in terms of the Bergman and the anti-Bergman spaces, Toeplitz operators acting on $b^2(\Pi)$ behave differently from those acting on the Bergman space. For example, contrary to the case of the Bergman space, the C*-algebra generated by Toeplitz operators with homogeneous symbols acting on the harmonic Bergman space is not commutative. On the other hand, the harmonic Bergman space is contained in each poly harmonic Bergman space; thus, it is natural to study Toeplitz operators acting on the latter spaces. With this in mind, in this talk we study the C*-algebra generated by Toeplitz operators with homogeneous symbols acting on the poly harmonic Bergman space of the upper half plane.
Armando Sanchez-Nungaray
Universidad Veracruzana, Mexico
Commutative algebras of Toeplitz operators on the Siegel domain
We describe several ways of how the symbols, subordinated to the nilpotent group of biholomorphisms of the unit ball (i.e., invariant under the action of a subgroup of the nilpotent group), generate Banach (and even $C^*$) algebras that are commutative on each weighted Bergman space. Recall for completeness that the nilpotent group of biholomorphisms of the Siegel domain $D_n$, the unbounded realization of the unit ball in $\mathbb{C}^n$, is isomorphic to $\mathbb{R}^{n-1} \times \mathbb{R}_+$ with the following group action \begin{equation*} (b,h)\, : \ (z',z_n) \in D_n \ \longmapsto \ (z'+b, z_n + h + 2iz'\cdot b + i|b|^2) \in D_n, \end{equation*} for each $(b,h) \in \mathbb{R}^{n-1} \times \mathbb{R}_+$. The key role in our study is played by the direct integral decomposition of the isomorphic image of the Bergman space on the Siegel domain, which is direct integral where each component is a weighted Fock spaces. We describe the action of Toeplitz operators with certain symbols as a direct integral of scalar multiplication operators and a direct integral of Toeplitz operators with the same symbol on the weighted Fock spaces. Note that all the above symbols are invariant under the action of the subgroup $\mathbb{R}^{\ell} \times \mathbb{R}_+$ ($\ell < n-1$) of the nilpotent group. | CommonCrawl |
JeanMarieRamirez
Terms in this set (28)
Age Cohort
people born within the same 5-10-year time span
Agents of Socialization
arenas in which we interact and in which the socialization process happens (e.g., schools, neighborhood, families, etc.)
Age structure
the distribution of the number or proportion of people of various ages based upon the historic trends in birth and death rates
Antisocial behavior
behavior that is not conducive to societal expectations, especially behavior that is aggressive or disruptive
Back stage
the private places where we practice our performance, and the less guarded version of self
Degradation ceremony
a ceremony, ritual or encounter in which a total institution's resident is humiliated, often in front of the institution's other residents or officials
Dramaturgy
the idea that we can understand social interaction as if it were a theatrical performance.
Front Stage
our more public face where we deliver our performance
Gender
a social concept that refers to the social and cultural differences a society assigns to feminine and masculine characteristics based on biological sex
Gender Socialization
the process by which people learn gender role expectations, as deemed appropriate by their society
Hidden curriculum
what conflict theorists call the part of the schooling process that gets children to accept, without questioning, the cultural values of the society in which the schools are found
Impression Management
individual's routine attempts to convey a positive impression of themselves to the people with whom they interact
Life course
what are commonly referred to as stages of life (e.g., childhood, adolescence, adulthood, and old age)
Looking-glass self
a process of socialization described by Cooley through which we imagine how we appear to others and then imagine how they think of us
Mass media
communications media, such as television, radio, newspapers and social media, that reach a mass audience
Peer pressure
strong influence by peers in a group to affect the behavior of a member
Primary Socialization
socialization in which the individual learns the basic skills needed to function in society
Racial Socialization
the messages and practices concerning the nature of a person's racial or ethnic status as it relates to identity, interpersonal relationships and position in the social hierarchy
Radicalization
the process of developing extremist ideologies and beliefs
Religiosity
how often people practice rituals associated with religion (e.g., pray, go to service, etc.)
Resocialization
a process in which people learn new values, norms, etc. (e.g., military, going off to college, etc.)
Rite of Passage
events that mark an individual's transition from one status to another
Role of the other
when children pretend to be other people in their play and in so doing learn what these other people expect of them
Secondary Socialization
socialization which happens during and after childhood through interaction with other groups and organizations such as school
Self
one's identity, self-concept and self-image
Socialization
the process by which people learn their culture
Social media
computer-based technology that facilitates the sharing of thoughts, ideas and information
Total institutions
institutions that have total control over the lives of the people who live in them (e.g., prison, boot camps, etc.)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%2345678901234567890123456789012345678901234567890123456789012345678901234567890
% 1 2 3 4 5 6 7 8
\documentclass[letterpaper, 10 pt, conference]{ieeeconf} % Comment this line out
% if you need a4paper
%\documentclass[a4paper, 10pt, conference]{ieeeconf} % Use this line for a4
% paper
\usepackage{graphicx}
\IEEEoverridecommandlockouts % This command is only
% needed if you want to
% use the \thanks command
\overrideIEEEmargins
% See the \addtolength command later in the file to balance the column lengths
% on the last page of the document
% The following packages can be found on http:\\www.ctan.org
%\usepackage{graphics} % for pdf, bitmapped graphics files
%\usepackage{epsfig} % for postscript graphics files
%\usepackage{mathptmx} % assumes new font selection scheme installed
%\usepackage{times} % assumes new font selection scheme installed
%\usepackage{amsmath} % assumes amsmath package installed
%\usepackage{amssymb} % assumes amsmath package installed
\title{\LARGE \bf
Analyzing the Physical and Chemical Properties of Hydrogen Gas as an Alternative Energy Storage for the Future Transportation Sector*
%\author{ \parbox{3 in}{\centering Huibert Kwakernaak*
% \thanks{*Use the $\backslash$thanks command to put information here}\\
% Faculty of Electrical Engineering, Mathematics and Computer Science\\
% University of Twente\\
% 7500 AE Enschede, The Netherlands\\
% {\tt\small [email protected]}}
% \hspace*{ 0.5 in}
% \parbox{3 in}{ \centering Pradeep Misra**
% \thanks{**The footnote marks may be inserted manually}\\
% Department of Electrical Engineering \\
% Wright State University\\
% Dayton, OH 45435, USA\\
% {\tt\small [email protected]}}
\author{Jomardee Perkins
Undergraduate Physics major
University of Washington, Bothell
} % <-this % stops a space
\begin{document}
\maketitle
\thispagestyle{empty}
% \begin{abstract}
% This electronic document is a ÒliveÓ template. The various components of your paper [title, text, heads, etc.] are already defined on the style sheet, as illustrated by the portions given in this document.
% \end{abstract}
\section{INTRODUCTION}
The evolution of technology has spawned numerous opportunities that lead to energy-efficient transportation systems. Energy utilized by transportation systems throughout the globe has experienced tremendous growth over the past several decades. Passengers travel mainly by automobile for short and long distances. Many of these conventional vehicles run on fossil fuels, which are non-renewable natural sources of energy such as gasoline. Although fossil fuels are much cheaper to produce than other energy carriers, a major worldwide problem is the pollution from increasing fossil fuel emissions. Figure 1 shows the amount of energy consumed by different vehicles and sectors. About 75\% of the total energy is consumed on the highway, where over 40\% of the U.S. population owns a vehicle. The remaining 25\% of the energy is utilized for agricultural, industrial, and construction purposes. Thus, many scientists are seeking alternative methods to reduce this source of pollution.
\begin{figure}[thpb]
\framebox{\parbox{3.3in}{
\includegraphics[scale=.6]{fig1}
%\includegraphics[scale=1.0]{fig1}
\caption{U.S. Transportation energy consumption by mode and vehicle in 2003 [20]}
\label{figurelabel}
}}
\end{figure}
Mobility is a socio-economic reality and a necessity now and for the growing future. As an alternative, more fuel-efficient energy storage system, hydrogen comes to mind. Many studies have shown that hydrogen can be a good fit for the transportation industry because it is a common element that exists in abundance. The development of hydrogen fuel cell cars has attracted attention for the near future. Recent developments in hydrogen-powered electric vehicles are sustainable and environmentally friendly [5]. Like fossil fuels, hydrogen can come from various sources, renewable or non-renewable. The main attraction of this element as a fuel is that it gives off zero $CO_2$ emissions when driven. However, many hydrogen production pathways do emit greenhouse gases (GHG) into the atmosphere. One of the main production routes is natural gas reformation or gasification, in which natural gas reacts with high-temperature steam to yield a mixture of hydrogen, carbon monoxide, and carbon dioxide [24]. The carbon monoxide is then reacted with water to produce additional hydrogen. Figure 2 shows the route for hydrogen production from fossil sources while capturing $CO_2$ emissions [16].
\begin{figure}[thpb]
\centering
\includegraphics[scale=.57]{fig5}
\caption{The route for hydrogen production from fossil fuel sources with $CO_2$ capture. Solid black lines represent gasification or reformation processing. The dashed black lines represent other streams or processing, such as methane reformation processing [16].}
\end{figure}
Around 60\% of the $CO_2$ is captured from the overall process [16]. Another way to produce hydrogen is through electrolysis, in which an electric current splits water into hydrogen and oxygen. Hydrogen is then considered renewable as long as the electricity is produced from renewable sources such as solar or wind [24]. By examining the GHG emissions from various transportation sectors, it has been found that the production of hydrogen creates 33\% fewer GHG emissions than petrol [24].
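For reference, and to make the two production routes described above concrete, their overall chemistry can be summarized as follows (standard textbook reactions, included here for clarity rather than taken from the cited sources):
\begin{equation}
CH_4 + H_2O \rightarrow CO + 3H_2, \qquad CO + H_2O \rightarrow CO_2 + H_2
\end{equation}
for the steam reformation of natural gas, and
\begin{equation}
2H_2O \rightarrow 2H_2 + O_2
\end{equation}
for the electrolysis of water.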
Hydrogen fuel cell vehicles (FCVs) are driven by an electric motor powered by electricity generated on board. In the fuel cell, hydrogen reacts chemically with oxygen drawn in from the outside air to produce only water and heat, thus adding no $CO_2$ emissions to the atmosphere. Hydrogen is a clean option for storing energy in transportation systems because it reduces greenhouse gas emissions, creating only water as a discharge. Although hydrogen FCVs have many benefits for the economy and the environment, various aspects still need improvement before hydrogen FCVs can be deployed in the near future. This approach addresses a fundamental question in the research: ``Under what safety circumstances is compressed hydrogen gas a more sustainable energy storage for the near future?''
Before discussing the safety of hydrogen, the status of hydrogen needs to be examined by looking at the economic and environmental impact it will have in comparison with other energy-storage vehicles. These other energy-storage vehicles, which are common in the transportation industry, are conventional, hybrid and electric cars.
The future of the 21st century relies significantly on producing energy in a more eco-friendly way to replace today's energy storage. The amount of energy used for transportation in the world has grown tremendously over the past decade [20]. Due to the high rate of vehicle ownership, the global energy market is worth more than 1.5 trillion dollars and relies heavily on fossil fuels [5]. Thus, the case for a sustainable future transportation system is discussed by analyzing the issues of one of the most dominant transportation sectors in the world, that of the United States. These issues chiefly concern the increase in energy consumption due to the growth of vehicle ownership.
Because of its high intake of energy produced from fossil fuels, the transportation sector remains a major source of GHG emissions. A comparison between hydrogen and some fossil fuels, such as gasoline and methane, is presented to show that hydrogen is safer for the environment, since it does not emit any GHG when used.
Although hydrogen is considered one of the most promising fuels for replacing fossil fuels in the future transportation sector, certain safety circumstances must be considered before substantially moving forward. By examining the main hazards associated with storing hydrogen, in addition to a deeper analysis of the effects and conditions of hydrogen cells during refueling, hydrogen FCVs are more likely to be widely accepted in the near future as an alternative energy storage for the transportation sector.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Economic and Environmental Analysis of conventional, hybrid, electric and hydrogen fuel-cell vehicles}
Utilizing actual data, an economic and environmental comparison is performed for four types of vehicles: conventional, hybrid, electric and hydrogen fuel cell. One attraction of hydrogen FCVs is that hydrogen can be produced in many ways that do not always rely on conventional fuels such as oil or gas, thus reducing economic dependence on oil-producing countries [21, 26]. Using mathematical procedures, economic indicators such as the prices of the vehicles and their driving ranges are compared. Environmental indicators such as GHG and pollution emissions are also addressed, in addition to the vehicles' optimal relationships [26]. Nonetheless, it is safe to conclude from the data that, by a process of elimination, the statistics for the hydrogen FCV outweigh those of the conventional, hybrid and electric vehicles [14].
\subsection{Economic Characteristics of the four vehicles}
The economic characteristics of each vehicle focus on its price, fuel cost, and driving range. Figure 3 lists the economic characteristics of each of the four vehicles; a 40-L tank is assumed for the conventional and hybrid vehicles in order to calculate driving ranges [14, 17].
\begin{figure}[thpb]
\centering
\includegraphics[scale=.47]{table1}
\caption{Lists of the economic characteristics for four vehicles [20].}
\end{figure}
Although hydrogen cars are much more expensive than the others, they consume about 129.5 MJ/100km of fuel, which is much less than conventional (236.8 MJ/100km) and hybrid (137.6 MJ/100km) vehicles. The table also shows that the refueling cost for hydrogen (\$1.69) is lower than that of both hybrid (\$1.71) and conventional (\$2.94) vehicles, indicating that hydrogen is an inexpensive fuel to consume and refuel [18]. The main downside is hydrogen's driving range in comparison with conventional and hybrid vehicles: hydrogen's driving range is 355 km on a full tank, while conventional vehicles reach 540 km and hybrids lead with 930 km [25]. The electric vehicle is not competitive in this respect because it is charged with electricity rather than refueled with gas, which makes this type of mobility not directly comparable with the other three vehicles. In addition to the prices of the vehicles, Figure 4 shows the prices of various forms of energy over a range of years [17].
\begin{figure}[thpb]
\centering
\includegraphics[scale=4.6]{figure2}
\caption{Prices of selected energy carriers in MJ from 1999 to 2004 [17].}
\end{figure}
In the year 2000, the figure shows that the price of gasoline was about two times that of crude oil, while the price of hydrogen was likewise about two times that of natural gas. Thus, the efficiencies of producing gasoline from crude oil and hydrogen from natural gas are similar [17, 27]. However, to utilize hydrogen in vehicles, it must be compressed, liquefied or otherwise stored. More processing work must be done to utilize hydrogen in a FCV, so its price is slightly higher than that of gasoline, by about \$.01. The issue with the given data is that it is more than a decade old, and economic conditions have changed tremendously over that time; it is therefore uncertain how the relative cost of each energy source has shifted. Crude oil and natural gas are much cheaper than hydrogen, but they take a heavy toll on the environment by emitting carbon dioxide ($CO_2$). For future studies, a more recent comparison of the economic characteristics of the four vehicles, using data from the late 2010s, would be more relevant.
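As a rough consistency check of the driving ranges listed in Figure 3 (our own arithmetic, assuming a typical gasoline energy density of roughly 32 MJ/L and a hydrogen lower heating value of roughly 120 MJ/kg; these two values are textbook figures and are not taken from [14]):
\begin{equation}
540\ \mathrm{km} \times 2.368\ \mathrm{MJ/km} \approx 1279\ \mathrm{MJ} \approx 40\ \mathrm{L\ of\ gasoline},
\end{equation}
\begin{equation}
355\ \mathrm{km} \times 1.295\ \mathrm{MJ/km} \approx 460\ \mathrm{MJ} \approx 3.8\ \mathrm{kg\ of\ hydrogen},
\end{equation}
which agrees with the assumed 40-L tank and with the few kilograms of hydrogen that a compressed-gas tank typically holds.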
\subsection{Environmental Characteristics of the four vehicles}
The environmental impact of each vehicle is measured by examining the air pollution (AP) and GHG emissions during the production stage. The main gases among the GHG emissions are $CO_2$, $CH_4$, $N_2O$ and $SF_6$ [28]. Figure 5 shows the impact that each vehicle has on the environment under the assumption that GHG and AP are proportional to the vehicle mass [26].
\begin{figure}[thpb]
\caption{Environmental impact associated with vehicle production stages [14].}
\end{figure}
Hydrogen FCV, during production, is seen to be emitting the most GHG and AP emissions than all three of the other vehicles compared. That is because more energy is utilized when trying to produce various sources of hydrogen using natural gas, whether it be liquid, compressed or gas. Also, utilizing a mathematical approach,
For conventional vehicles:
\begin{equation}
AP = m_{car}\, AP_{m}
\end{equation}
\begin{equation}
GHG = m_{car}\, GHG_{m}
\end{equation}
For hybrid vehicles:
\begin{equation}
AP = (m_{car}-m_{bat})\, AP_{m} + m_{bat}\, AP_{bat}
\end{equation}
\begin{equation}
GHG = (m_{car}-m_{bat})\, GHG_{m} + m_{bat}\, GHG_{bat}
\end{equation}
For fuel cell vehicles:
\begin{equation}
AP = (m_{car}-m_{fc})\, AP_{m} + m_{fc}\, AP_{fc}
\end{equation}
\begin{equation}
GHG = (m_{car}-m_{fc})\, GHG_{m} + m_{fc}\, GHG_{fc}
\end{equation}
where $m_{car}$, $m_{bat}$, and $m_{fc}$ are the masses of the vehicle, the battery, and the fuel-cell stack, respectively; $AP_m$, $AP_{bat}$, and $AP_{fc}$ are the air pollution emissions per kilogram of vehicle, battery, and fuel cell; and $GHG_m$, $GHG_{bat}$, and $GHG_{fc}$ are the corresponding greenhouse gas emissions per kilogram. These equations were used to calculate the environmental impact associated with the production of each vehicle. The GHG emissions for hydrogen are 9832.4 kg, more than double those of conventional vehicles, which emit 3595.8 kg of GHG during the production stage. For AP emissions, hydrogen leads with 42.86 kg, followed by electric with 15.09 kg, hybrid with 10.10 kg, and conventional with the least at 8.74 kg [14].
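To make the structure of these relations concrete, consider a purely illustrative plug-in of the hybrid-vehicle AP relation above (the masses and per-kilogram emission factors below are hypothetical placeholders chosen for arithmetic convenience, not values from [14]): taking $m_{car}=1600$ kg, $m_{bat}=300$ kg, $AP_m=0.005$ kg/kg and $AP_{bat}=0.007$ kg/kg gives
\begin{equation}
AP = (1600-300)(0.005) + (300)(0.007) = 6.5 + 2.1 = 8.6\ \mathrm{kg},
\end{equation}
which is of the same order as the values reported in Figure 5.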
However, an interesting study published by the Pembina Institute compared the total carbon dioxide emissions of fuel cell vehicles using hydrogen produced by various methods, as shown in Figure 6 [30].
\begin{figure}[thpb]
\caption{Graph comparing carbon dioxide emissions in kilograms per 1000 km of cars, using different types of fuel sources [22].}
\end{figure}
These results clearly show that a fuel cell car using hydrogen from natural gas emits only about 75 kg/1000 km of $CO_2$, which is much less $CO_2$ released into the atmosphere than cars with an internal combustion engine (250 kg/1000 km), when the production of the fuel is taken into account [22, 29, 30]. A further investigation of the AP and GHG emissions of the four vehicles should also be considered in the future to obtain more accurate environmental data.
The analysis of the four types of vehicles shows that each can contribute to improving the economy and the environment. However, given the statistics gathered from various sources on the economic and environmental characteristics of the four vehicles, hydrogen emerges as the best alternative energy storage option for the future. Although its vehicle-production emissions are higher, it emits nothing during fuel cell usage and its fuel-cycle emissions are comparatively low, so fuel-cell technology offers a promising future in the transportation industry.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{An investigated approach between hydrogen and some fossil fuels}
A comparison between hydrogen and some fossil fuels is discussed further to demonstrate the negative effect the latter have on the environment. Fossil fuel usage and consumption in the transportation sector are high, which increases emissions of AP and GHG. Hydrogen can be chemically separated from a variety of sources, giving a nearly unlimited supply with no additional environmental impact cost [23]. Gasoline, by contrast, derives its energy from crude oil, of which only a finite supply remains in the world, and it carries additional environmental impact costs. In this context, it is important to focus on the environmental impact cost that is the main reason for the rise in the earth's average temperature [4]. $CO_2$ and other such compounds contribute to about 76\% of the greenhouse effect (GHE). As seen in Figure 7,
\begin{figure}[thpb]
\caption{Percent contribution provided by different greenhouse gases to the earth's average temperature rise [4, 32].}
\end{figure}
carbon dioxide ($CO_2$) contributes about 55\%, methane ($CH_4$) about 15\%, and nitrous oxide ($N_2O$) about 6\% of the GHG contribution to the earth's average temperature rise [31]. Figure 8 shows that about 73\% of $CO_2$ sources are fossil fuels, mainly emitted by the transportation sector [32].
\begin{figure}[thpb]
\caption{Main $CO_2$ sources [4, 32].}
\end{figure}
Figure 9 is a bar graph that compares the densities of hydrogen, natural gas, propane and gasoline vapor relative to air, taken as 1.0.
\begin{figure}[thpb]
\caption{Bar graph of hydrogen and some fossil fuels relative to air [21].}
\end{figure}
As shown, hydrogen is about 14 times lighter than air; this means that when hydrogen is released from its tank and exposed to air, it will typically rise and disperse rapidly. Compared with gasoline vapor at 4.0, hydrogen's vapor density is 0.07, which means that hydrogen is about 57 times lighter than gasoline vapor [21]. When gasoline vapor is released during the combustion of fossil fuels, it creates a more toxic odor upon discharge, since its density is much higher than that of most gases. In relation to density, the diffusion coefficients of the compounds are also important in determining the hazard posed when they are released. Figure 10 analyzes the physical and chemical properties of gasoline and hydrogen more closely, using data from various papers;
%figure 10
\begin{figure}[thpb]
\centering
\includegraphics[scale=.27]{fig11}
\caption{Physical and chemical properties of hydrogen and gasoline. The highlighted parts demonstrate hydrogen being the safest. [3, 13, 18, 33]}
\end{figure}
Low density and a high diffusion coefficient make hydrogen buoyant relative to air; hydrogen is therefore a safer compound as an alternative fuel because it disperses rapidly when exposed to air. A high specific heat also makes a fuel safer because it slows the temperature increase for a given heat input [3]. Each of these factors relates to the flame emissivity: if the density is low, the diffusion coefficient is high, and the specific heat is high, then the flame emissivity is low. Flame emissivity describes the strength with which the flame emits thermal radiation, and the thermal radiation of the flame increases if the diameter of the flame increases and the ignition limit is high. A safety factor was calculated to compare the safety aspects of the two fuels; it is a ratio expressing how reliable a vehicle is, given the calculated data on the properties of hydrogen and gasoline. A safety factor below one indicates that the fuel is unsafe for the environment, a factor of one means that it is safe, and a factor above one means that it is absolutely safe [3]. It was reported that hydrogen is the safest fuel, with a safety factor of 1.0, while gasoline has a safety factor of 0.53. It is assumed in this table that most of the properties are given at normal temperature and pressure. Also, the ignition energy and flame temperature values were taken from Sonal Singh's paper; the temperatures, reported in Celsius, were converted to kelvin here for absolute temperature measurements [3]. Nonetheless, given the data from various papers, it is safe to conclude that hydrogen is much safer for the environment. However, when it comes to hydrogen storage, such as during refueling, certain hazards need to be addressed.
\section{Direct dangers and situations in hydrogen use}
Although hydrogen is considered one of the most promising replacements for fossil fuels in the future transportation sector, certain circumstances must be considered before moving forward substantially. Some of these concerns, drawn from [1], are:
\begin{itemize}
\item Explosions
\item Hydrogen embrittlement
\item Hydrogen leakage
\item High pressurization during refueling
\end{itemize}
Figure 11 ranks the safety of hydrogen and some fossil fuels during ignition [3]. Hydrogen is safe for most of the listed characteristics, but is ranked unsafe for ignition limit, ignition energy, and flame temperature [3, 4].
% figure 11
\caption{Ranking of gasoline, methane and hydrogen. 3 - least safe, 2 - less safe, 1 - safe [3, 33]}
Hydrogen is not a particularly dangerous fuel; it is ranked ``unsafe'' in those areas because it has the widest explosion/ignition range in air of all gases [22]. Figure 12 compares some fossil fuels and hydrogen using calculated statistics.
\caption{Physical and chemical properties for safety consideration of three investigated fuels [4].}
The upper ignition limit in air for hydrogen is 75.0 vol\%, about 10 times higher than that of gasoline (7.6 vol\%) [3, 4, 33]. A higher ignition limit is dangerous because when hydrogen leaks from the storage tank it reacts readily with air over a wide concentration range. The minimum ignition energy for hydrogen is 0.02 mJ, about 12 times less than that of gasoline (0.24 mJ), so hydrogen takes less energy to ignite during a leak. Hydrogen also has a lower flame temperature in air: 2318 K, versus 2470 K for gasoline. Although the difference between the two fuels is a little over 150 K, a lower flame temperature causes less damage to the vehicle and the environment during an explosion.
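As a quick check on the ratios quoted above, the short Python sketch below recomputes them from the figures given in the text. The dictionary values simply restate those figures; the millijoule (mJ) unit for the minimum ignition energy is assumed.
\begin{verbatim}
# Recompute the safety-related ratios quoted in the text.
# Values restate the figures above; the mJ unit for minimum
# ignition energy is an assumption about the intended unit.
hydrogen = {"limit_vol_pct": 75.0, "ignition_mJ": 0.02, "flame_K": 2318}
gasoline = {"limit_vol_pct": 7.6, "ignition_mJ": 0.24, "flame_K": 2470}

limit_ratio = hydrogen["limit_vol_pct"] / gasoline["limit_vol_pct"]
energy_ratio = gasoline["ignition_mJ"] / hydrogen["ignition_mJ"]
flame_diff = gasoline["flame_K"] - hydrogen["flame_K"]

print(f"Ignition limit ratio (H2/gasoline): {limit_ratio:.1f}")   # ~9.9
print(f"Ignition energy ratio (gasoline/H2): {energy_ratio:.0f}") # 12
print(f"Flame temperature difference: {flame_diff} K")            # 152
\end{verbatim}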
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Explosions}
A Hydrogen and Fuel Cell Vehicles Safety Evaluation facility was used to compare vehicles with high-pressure mounted tanks to gasoline vehicles. Figures 13 to 15 show the states of the flame at the time of maximum strength for the gasoline and hydrogen fuel types [15]. The ignition of solid fuels was used as the fire source to represent a more natural vehicle fire. This scenario is mainly intended to represent what would happen if a gasoline vehicle and a hydrogen FCV collided and both fuels reached their maximum flammability limit and temperature in air.
\caption{Gasoline flame from a vehicle filled with 40-L tank. (a) back view, (b) side view, (c) graph of heat flux with respect to time [15].}
Figure 13 shows an ordinary steel gasoline tank filled with 40 L of gasoline. After 14 minutes of ignition, gasoline vapor leaking from the seals of the tank burned and caused erratic flames. Various values of heat radiation were therefore recorded, with a maximum of about 200 $kW/m^2$ at about 26 minutes. The flame was still being measured past 30 minutes and appeared to be growing in the images, but the literature does not specify how long it continued after that.
\caption{Upward hydrogen flame from a vehicle with two 35 MPa tanks installed. (a) back view, (b) side view, (c) heat flux with respect to time [15].}
Figure 14 shows the case of two 35 MPa high-pressure hydrogen tanks mounted in the trunk of the vehicle; hydrogen was released upward and emitted a roughly constant heat flux of about 25 $kW/m^2$, much less than that of gasoline. The flame lasted about 16 minutes, and no conspicuous peak of heat radiation was measured [15]. Compared against the physical properties relevant to safety, the spread of fire is more limited, the heat radiation to the surroundings (the flame emissivity) is small, and the heat flux in air is roughly constant relative to gasoline.
\caption{Downward hydrogen flame from a vehicle with two 35 MPa tanks installed. (a) back view, (b) side view, (c) heat flux with respect to time [15].}
Figure 15 shows the same hydrogen case, with two 35 MPa high-pressure tanks mounted, but with hydrogen released downward rather than upward. As a result, the heat flux after 17 minutes of ignition reached about 190 $kW/m^2$, close to gasoline's 200 $kW/m^2$. The data show that when the hydrogen flame ignites downward, its flame envelope in air covers a much larger volume than gasoline's because of hydrogen's reaction with air. However, its burn time is much shorter than gasoline's. The literature does not report the burn duration for gasoline, which prevents an accurate comparison between the two fuels.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Hydrogen embrittlement and leakage}
Hydrogen fuel-cell cars can suffer sudden failures in their parts and machinery because of the unexpected ways hydrogen reacts with the metals used in vehicles. Although hydrogen is viewed as one of the promising alternative energy storage options for transport, hydrogen causes unusual distortions when it reacts with metal fuel tanks made of steel, aluminum or magnesium, a deformation known as embrittlement that can cause leakage from the tank [35]. It has been shown that at low temperatures inside storage tanks, hydrogen can penetrate metal frameworks during corrosion, since mild steel and most iron alloys tend to lose tensile strength and risk mechanical failure [1]. To catch these deformations before hydrogen can leak, an SAE Technical Information Report was created to address the safety performance of hydrogen storage through performance testing [35]. A durability test of the compressed hydrogen storage system (CHSS) was conducted by subjecting a prototype containment vessel to pressure cycling with hydrogen gas [35], which measured the effects of hydrogen embrittlement on mechanical performance. Figure 16 shows a schematic of the CHSS protocols for (a) performance testing (pneumatic) and (b) durability testing (hydraulic) [35].
\caption{Schematics showing protocols for (a) the expected service performance test (pneumatic) and (b) the durability test (hydraulic) [35].}
The performance (pneumatic) test evaluates the effect of hydrogen embrittlement using hydrogen gas. If the results show that the metal has acceptable hydrogen embrittlement resistance, the durability of the vessel can then be evaluated with the hydraulic test [35]. Two temperatures, 223.15 K and 293.15 K, are specified because hydrogen embrittlement depends on temperature, and one of the two typically corresponds to maximum embrittlement in most metals [35, 36]. It is expected, though uncertain, that the operating temperature of a hydrogen tank can reach 223.15 K, the critical temperature for embrittlement. Thus, the tank materials, their performance during compressed hydrogen fueling, and their durability should be further tested for safety before hydrogen can be fully accepted as an alternative energy fuel.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{High pressurization during refueling}
Throughout this paper, hydrogen is concluded to be a clean alternative for replacing common fuels such as gasoline in the transportation sector. When it comes to hydrogen storage tanks, hydrogen fueling stations usually store hydrogen in either high-pressure buffer or cascade systems. Figure 17 shows a diagram of a typical hydrogen fueling station, in which hydrogen is compressed with a multi-stage compressor and collected in the storage system [37]. The storage system consists of several sizes of large cylinders, typically from 50 L to a little over 100 L in capacity [6].
\includegraphics[scale=1.0]{fig18}
\caption{A schematic diagram of a typical hydrogen fueling station [6].}
The buffer storage system, shown in figure 18, operates in the range of 37--70 MPa. In this type of storage, the reservoir temperature and pressure are assumed to be 300 K and 37 MPa [6].
\caption{A schematic diagram of the buffer storage system [6].}
The cascade storage system shown in figure 19 consists of three reservoirs divided into low-, medium- and high-pressure reservoirs, each containing large cylinders arranged in ascending pressure. During filling, the hydrogen cylinder is first connected to the low-pressure reservoir until it reaches a pre-set level, at which point the system switches to the medium-pressure reservoir and then to the high-pressure reservoir to complete the fill [6]; a minimal control sketch of this sequence is given after the figure.
\caption{A schematic diagram of the cascade storage system [6].}
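To make the cascade filling sequence concrete, the minimal Python sketch below steps a vehicle cylinder through the low-, medium- and high-pressure reservoirs, switching once a pre-set level is reached. The reservoir pressures, switch margin and target pressure are illustrative assumptions, not values taken from [6].
\begin{verbatim}
# Minimal sketch of cascade refueling logic: the cylinder draws from the
# lowest-pressure reservoir that can still raise its pressure, switching to
# the next reservoir once a pre-set level is reached. All numbers are
# illustrative assumptions, not values from the cited station design [6].
RESERVOIRS = [("low", 25.0), ("medium", 45.0), ("high", 70.0)]  # MPa
SWITCH_MARGIN = 2.0   # MPa below reservoir pressure at which we switch over
TARGET = 65.0         # MPa, desired final cylinder pressure
STEP = 1.0            # MPa gained per simulated filling step

def cascade_fill(cylinder_pressure: float) -> float:
    for name, reservoir_pressure in RESERVOIRS:
        # Fill from this reservoir until the cylinder approaches its pressure.
        while (cylinder_pressure < reservoir_pressure - SWITCH_MARGIN
               and cylinder_pressure < TARGET):
            cylinder_pressure += STEP
        print(f"{name} reservoir done: cylinder at {cylinder_pressure:.1f} MPa")
        if cylinder_pressure >= TARGET:
            break
    return cylinder_pressure

cascade_fill(2.0)
\end{verbatim}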
During refueling, dangers such as over-pressurization of the tank can occur, so the mass filling rate and the initial pressure of the cylinder were considered [9]. For a buffer system, in contrast to a cascade storage system, the fluid in the reservoir is delivered at an immediately and constantly high pressure during refueling. This can be dangerous for the tank material because over-pressurization can cause cracks and leaks around the circumference of the tank. Also, as pressure rises, so does the temperature of the tank. Figure 20 shows the temperature rise for several initial pressures [9].
\caption{Temperature rise with different initial pressure in the cylinder [9].}
The figure shows that an initial pressure of 25 MPa gives the smallest temperature rise over time. The temperature rise during filling was found to be determined by the mass filling rate, the temperature of the gas, the initial pressure in the cylinder and the initial temperature in the cylinder, while the ambient temperature has only a small effect. Although a high-pressure intake can deform the storage tank and cause leakage, the tank temperature takes much longer to rise when filling from a higher-pressure reservoir than from a low-pressure reservoir. Nonetheless, safety issues such as over-pressurization during refueling can cause serious dangers and fire hazards, more commonly than with gasoline, since hydrogen that leaks from the tank and is ignited releases water vapor and heat. Hydrogen can be, and has been, handled safely many times, just like any other fuel; hydrogen tanks have been put through a series of performance, durability, and pressure tests. Although hydrogen remains a dangerous chemical to handle, the gas usually leaks out and burns but hardly ever explodes [22]. Another issue raised in the literature is that pressure and temperature rise during refueling when the compressed hydrogen gas is not pre-cooled: the CHSS must be cooled to about $-30\,^{\circ}$C at the hydrogen station before refueling to prevent a temperature rise in the tank that could cause embrittlement, leakage, or even explosion [18].
\section{CONCLUSIONS}
The future of alternative energy storage for vehicles is bright, because using hydrogen in the economy offers solutions to a variety of environmental problems. Hydrogen is as flexible as electricity in that it can be produced from both renewable and non-renewable energy sources. This review has focused, using statistical analysis, on the conditions that must be accounted for when switching to hydrogen as an alternative energy store. From an economic and environmental standpoint, hydrogen has the lowest air-pollution emissions compared with conventional, hybrid and electric vehicles. Hydrogen fuel cell vehicles are also simpler in design, which accounts for their weight being much lighter than that of most vehicles. Hydrogen is therefore the preferred fuel for fuel cell vehicles because of an efficiency that can increase the potential for a sustainable climate. One disagreement in the literature is the claim that hydrogen is cost effective; across various research papers it is not, because refueling a hydrogen vehicle costs much more than refueling a traditional gasoline car. A comparison of hydrogen and fossil fuels was also presented to examine their environmental impacts. On the transition path from fossil to hydrogen fueling, the economy is expected to prosper. However, several gaps remain in the literature comparing the two fuel sources, since many scenarios are modeled rather than tested; this limits the scope of what data are significant and what is assumed, but it also provides a wide range of opportunities for future research. Further studies on replacing fossil fuels with hydrogen are required to establish true environmental statistics. Once that is done, accurate measurements of a more energy-efficient transportation system using hydrogen rather than traditional fossil fuels can be used in further research. This study should help frame the environmental considerations involved in introducing hydrogen into the economy as a modern transportation system.
Hydrogen is a clean alternative fuel to store on board automobiles because it has a positive impact on the environment, but it can lead to a series of hazards arising from hydrogen storage. These hazards can produce accidents such as hydrogen embrittlement, leakage, over-pressurization and explosions when hydrogen is not handled properly. For future work, embrittlement of storage tanks at various temperatures should be investigated further, during both refueling and non-refueling periods. The effects of refueling on hydrogen fuel cells should also be addressed before switching to hydrogen as a more eco-friendly path for future automobiles and the environment. No further disagreements within the literature were identified. Hydrogen is an important feedstock for progressing toward a more sustainable future. Overall, studying the safety conditions tied to hydrogen's physical and chemical properties gives the audience a much deeper perspective on accepting hydrogen as a replacement for fossil fuels for storing energy in automobiles.
\addtolength{\textheight}{-12cm} % This command serves to balance the column lengths
% on the last page of the document manually. It shortens
% the textheight of the last page by a suitable amount.
% This command does not take effect until the next page
% so it should come on the page before the last. Make
% sure that you do not shorten the textheight too much.
\begin{thebibliography}{99}
\bibitem{c1} Rigas, Fotis. "Evaluation of hazards associated with hydrogen storage facilities." International Journal of Hydrogen Energy, vol. 30, no. 13-14, 2005, pp. 1501–1510.
\bibitem{c2} Ogden, Joan M. "A comparison of hydrogen, methanol and gasoline as fuels for fuel cell vehicles: implications for vehicle design and infrastructure development." Journal of Power Sources, vol. 79, no. 2, June 1999, pp. 143–168.
\bibitem{c3} Singh, Sonal. "Hydrogen: A sustainable fuel for the future of the transport sector." Renewable and Sustainable Energy Reviews, vol. 51, Nov. 2015, pp. 623–633.
\bibitem{c4} Nicoletti, Giovanni. "A technical and environmental comparison between hydrogen and some fossil fuels." Energy Conversion and Management, vol. 89, 1 Jan. 2015, pp. 205–213.
\bibitem{c5} Ahmed, Adeel. "Hydrogen fuel and transport system: A sustainable and environmental future." International Journal of Hydrogen Energy, vol. 41, no. 3, 21 Jan. 2016, pp. 1369–1380.
\bibitem{c6} Farzaneh-Gord, Mahmood. "Effects of storage types and conditions on compressed hydrogen fueling stations performance." International Journal of Hydrogen Energy, vol. 37, no. 4, Feb. 2012, pp. 3500–3509.
\bibitem{c7} Yang, Jiann C. "A thermodynamic analysis of refueling of hydrogen tank." International Journal of Hydrogen Energy, vol. 34, no. 16, Aug. 2009, pp. 6712–6721.
\bibitem{c8} Zheng, Jinyang. "An optimized control method for a high utilization ratio and fast filling speed in hydrogen refueling stations." International Journal of Hydrogen Energy, vol. 35, no. 7, Apr. 2010, pp. 3011–3017.
\bibitem{c9} Liu, Yan-Lei, et al. "Experimental studies on temperature rise within a hydrogen cylinder during refueling." International Journal of Hydrogen Energy, vol. 35, no. 7, Apr. 2010, pp. 2627–2632.
\bibitem{c10} Schulte, Inga. "Issues affecting the acceptance of hydrogen fuel." International Journal of Hydrogen Energy, vol. 29, no. 7, Jan. 2004, pp. 677–685.
\bibitem{c11} Akansu, S.Orhan. "Internal combustion engines fueled by natural gas - hydrogen mixtures." International Journal of Hydrogen Energy, vol. 29, no. 14, Nov. 2004, pp. 1527–1539.
\bibitem{c12} Momirian, M. and Veziroglu, T. N. "Current status of hydrogen energy." Renewable and Sustainable Energy Reviews, vol. 6, no. 1-2, 2002, pp. 141–179.
\bibitem{c13} Adamson, Kerry-Ann. "Hydrogen and methanol: a comparison of safety, economics, efficiencies and emissions." Journal of Power Sources, vol. 86, no. 1-2, Mar. 2000, pp. 548–555.
\bibitem{c14} Granovskii, Mikhail. "Economic and environmental comparison of conventional, hybrid, electric and hydrogen fuel cell vehicles." Journal of Power Sources, vol. 159, no. 2, 22 Sept. 2006, pp. 1186–1193.
\bibitem{c15} Watanabe, S. "The new facility for hydrogen and fuel cell vehicle safety evaluation." International Journal of Hydrogen Energy, vol. 32, no. 13, Sept. 2007, pp. 2154–2161
\bibitem{c16} Voldsund, Mari. "Hydrogen production with CO2 capture." International Journal of Hydrogen Energy, vol. 41, no. 9, 9 Mar. 2016, pp. 4969–4992.
\bibitem{c17} Energy Information Administration, Official energy statistics from the U.S. Government, accessed on June 5, 2005.
\bibitem{c18} "Hydrogen Safety - Gasoline vs. Hydrogen." The 40 Fires Foundation, 2010.
\bibitem{c19} C.E.G Padro, V. Putsche, Survey of the Economics of Hydrogen Technologies, Report No. NREL/TP-570-27079, National Renewable Energy Laboratory, U.S. Department of Energy, 1999.
\bibitem{c20} "Energy Efficiency in Transportation." Real Prospects for Energy Efficiency in the United States, National Academy of Science, 2010, pp. 121–184.
\bibitem{c21} "Hydrogen Compared with Other Fuels." Hydrogen Compared with Other Fuels | Hydrogen Tools.
\bibitem{c22} Petinrin, Moses Omolayo, et al. "A Review on Hydrogen as a Fuel for Automotive Application." International Journal of Energy Engineering, Scientific \& Academic Publishing.
\bibitem{c23} "Hydrogen Fuel Cost vs Gasoline." HES Hydrogen, heshydrogen.com/hydrogen-fuel-cost-vs-gasoline/.
\bibitem{c24} "Hydrogen Production and Distribution." Alternative Fuels Data Center: Hydrogen Production and Distribution.
\bibitem{c25} United States Department of Energy, Energy Efficiency, and Renewable Energy. Accessed May 15, 2005.
\bibitem{c26} R. Dhingra, J. Overly, G. Davis, Life-Cycle Environmental Evaluation of Aluminum and Composite Intensive Vehicles, Report, University of Tennessee, Center for Clean Products and Technologies, 1999.
\bibitem{c27} M. Granovskii, I. Dincer, M.A. Rosen, Life cycle assessment of hydrogen fuel cell and gasoline vehicles, Int. J. Hydrogen Energy, 2006, in press.
\bibitem{c28} J.T. Houghton, L.G. Meira Filho, B.A. Callander, N. Harris, A. Kattenberg, K. Maskell. Climate Change, Cambridge University Press, New York (1996)
\bibitem{c29} Sossina M Haile, "Concern about possibility of hydrogen gas leakage", Fuel cell materials and component, Dept of material Science and of Chemical engineering, California Institute of Technology, Pasadena USA, 2003.
\bibitem{c30} B. Cook, "An introduction to fuel cells and hydrogen technology", Heliocentris, Vancouver, BC V6R-1S2, Canada, Dec; 2001.
\bibitem{c31} Nicoletti Gi, Alitto G, Anile F. Il Controllo della CO2: Misure e Strategie. In: Proceedings of Italian conference ATI, L'Aquila (ITALY); 2009.
\bibitem{c32} L. Bruzzi, V. Boragno, S. Verità. Sostenibilità ambientale dei sistemi energetici. Tecnologie e Normative, ENEA report (2007)
\bibitem{c33} Veziroğlu, T. N. "21st Century's energy: Hydrogen energy system." Energy Conversion and Management, vol. 49, 2008, pp. 1820–1831.
\bibitem{c34} Engineer, The. "Hydrogen embrittlement could lead to failure of fuel-Cell cars." The Engineer, 16 Dec. 2015.
\bibitem{c35} Somerday, B.P. "Addressing Hydrogen Embrittlement of Metals in the SAE J2579 Fuel Cell Vehicle Tank Standard." pp. 1–10.
\bibitem{c36} San Marchi, C. and Somerday, B.P. Technical Reference on Hydrogen Compatibility of Materials, SAND2008-1163, Sandia National Laboratories, Livermore, CA, USA, 2008.
\bibitem{c37} Baur, S., Sheffield, J. and Enke, D. "First annual university students' hydrogen design contest: Hydrogen fueling station." National Hydrogen Association, U.S. Department of Energy and Chevron Texaco, 2004.
\end{thebibliography} | CommonCrawl |
Estimating regional flood discharge during Palaeocene-Eocene global warming
Chen Chen1,
Laure Guerit1,2,
Brady Z. Foreman (ORCID: 0000-0002-4168-0618)3,
Hima J. Hassenruck-Gudipati4,
Thierry Adatte5,
Louis Honegger1,
Marc Perret1,
Appy Sluijs6 &
Sébastien Castelltort (ORCID: 0000-0002-6405-4038)1
Scientific Reports volume 8, Article number: 13391 (2018)
Among the most urgent challenges in future climate change scenarios is accurately predicting the magnitude to which precipitation extremes will intensify. Analogous changes have been reported for an episode of millennial-scale 5 °C warming, termed the Palaeocene-Eocene Thermal Maximum (PETM; 56 Ma), providing independent constraints on hydrological response to global warming. However, quantifying hydrologic extremes during geologic global warming analogs has proven difficult. Here we show that water discharge increased by at least 1.35 and potentially up to 14 times during the early phase of the PETM in northern Spain. We base these estimates on analyses of channel dimensions, sediment grain size, and palaeochannel gradients across the early PETM, which is regionally marked by an abrupt transition from overbank palaeosol deposits to conglomeratic fluvial sequences. We infer that extreme floods and channel mobility quickly denuded surrounding soil-mantled landscapes, plausibly enhanced by regional vegetation decline, and exported enormous quantities of terrigenous material towards the ocean. These results support hypotheses that extreme rainfall events and associated risks of flooding increase with global warming at similar, but potentially at much higher, magnitudes than currently predicted.
Alluvial deposits within the Tremp-Graus Basin of northern Spain (~35°N palaeolatitude) show a change from strata dominated by overbank palaeosols to an anomalously thick and widespread, conglomeratic fluvial unit that coincides with the early phase of the PETM1,2,3. This was interpreted to reflect the development of a vast braid plain due to an abrupt and dramatic increase in seasonal rainfall1. Late Palaeocene floodplain deposits near the town of Aren (Esplugafreda Formation; Fig. 1) are intercalated with coarse sandstones and clast-supported conglomerates filling isolated single- and multi-storey ribbon fluvial channels deposits4. Levels of gypsum, ubiquitous microcodium remains, abundant carbonate nodule horizons, and reddish palaeosols indicate deposition in generally semi-arid alluvial plains4,5.
Study area in palaeogeographic context (modified from ref.3) and simplified stratigraphic column with main formations, ages and carbon isotopic profile showing the negative δ13C excursion in soil carbonates1 (blue profile) and in organic carbon8 (green profile). IVF: Incised Valley Fill. PETM: Palaeocene-Eocene Thermal Maximum. ys and rs: yellowish and reddish soils. Arrows indicate main palaeoflow directions in the Late Palaeocene.
A member of the overlying Claret Formation that formed ~40 kyr prior to the PETM represents a 30 m thick incised valley fill (IVF) made of coarse- and fine-grained fluvial sediment, which displays an erosional base with maximum relief of ~30 m and maximum width of ~5 km (ref.2). The IVF member is overlaid by an extensive sheet-like pebbly calcarenite and clast-supported conglomerate unit, the Claret Conglomerate (CC), which has typical thicknesses of 1 to 4 m and locally up to 8 m (ref.1). Studies on organic carbon have demonstrated that this unit occurs after the onset of the carbon isotope excursion6,7,8 (CIE) and terminates prior to the peak of the CIE1,2,7 (Fig. 1), suggesting the Claret Conglomerate formed during the early phase of the PETM over a time span of ~10 kyrs1 (ref.1) or less. The CC ends abruptly and is overlaid by ~20 m of fine-grained yellowish soil mainly made up of silty mudstones with abundant small carbonate nodules and gypsum layers, which span the majority of the carbon isotope excursion and its recovery1. After the PETM, an interval of red soils marks the return to Palaeocene-like conditions. A suite of carbon isotope records using bulk organic, pedogenic carbonate nodules, and compound-specific proxies reinforce the correlation between the PETM and this unique sedimentologic interval from stratigraphic sections located in both proximal and distal portions of the Tremp-Graus Basin1,2,3,6,7,8,9. However, there is some lingering disagreement in the precise timing of the sedimentologic response in relation to the onset, body, and recovery portions of the PETM3,6,7,8. This uncertainty is likely related to correlation imprecision, the timescales of formation of different proxies, taphonomic preservation issues, and the inherent incompleteness of the terrestrial stratigraphic record on timescales shorter than 10 kyrs3,6,7,8,10.
It should be noted that there is clear evidence for tectonic and eustatic influences on deposition in the Tremp-Graus Basin throughout the Late Cretaceous and early Paleogene. Compressional tectonics between the European and Iberian plates instigated active thrusting within the Pyrenees and formation of the Tremp-Graus foreland basin11,12,13. Structural relationships, subsidence analyses, and changing basin sedimentation rates indicate that the compressional tectonic regime produced discrete intervals of active thrusting and tectonic quiescence14,15; However, these major episodes of thrusting are uncorrelated with the PETM and occurred during the late Santonian-late Maastrichtian (preceding the PETM) and the middle Illerdian-middle Lutetian intervals (post-dating the PETM)14. The intervening period, which includes the PETM interval, experienced slow, uniform subsidence rates12,15,16. The basin was also subjected to eustatic variability and the foreland basin inundated several times during the Late Cretaceous and early Paleogene3. Most pertinent is the sea level fall and subsequent rise documented by the IVF unit underlying the Claret Conglomerate, however, subsequent study has established this eustatic fluctuation preceded the PETM interval and was likely not the primary driver of the Claret Conglomerate3.
Thus, in the absence of compelling independent evidence for a tectonic or eustatic forcing on the Claret Conglomerate and yellow palaeosol interval, and its tight correlation with several isotope records we proceed under the inference that the observed change in stratigraphy was driven by climatic shifts associated with high atmospheric carbon dioxide levels during the early phase of the PETM. Previous studies have inferred a qualitative increase in seasonality, extreme events, and intra-annual humidity during the PETM based on the Claret Conglomerate1,2. Unfortunately, there is no detailed analysis of palaeosols in the Tremp-Graus Basin comparable to extensive studies of PETM paleosols in the Bighorn Basin of Wyoming, USA17,18,19. However, existing data suggest that the shift from red-to-yellow-to-red bed palaeosols implies these altered hydrologic conditions persisted throughout the PETM18.
To quantify the magnitude of change in water and sediment discharge recorded by the fluvial systems in the basin, we first reconstruct pre-PETM and PETM fluvial palaeoslopes and equilibrium flow velocities from field estimates of grain size and channel depth data. We then extract average channel widths from a published cross-section of the Claret Conglomerate and combine with flow velocities to obtain first-order estimates of volumetric discharge during channel forming events before and during the early PETM. Conspicuously, the Claret Conglomerate is temporally restricted to the early portion of the PETM (Fig. 1) and may only be representative of the climatic transition from baseline Paleocene conditions to the elevated pCO2 conditions of the PETM. Thus, our quantitative reconstructions of discharge and precipitation extremes must be conservatively restricted to the early phase of the PETM. However, there is also a possibility that the temporal brevity of the Claret Conglomerate is related to non-linear behavior of the geomorphic system to the PETM-forcing. Several modeling, experimental, and field studies suggest the propagation of environmental signals will be modified by alluvial systems20,21. This hypothesis has yet to be thoroughly vetted in the Tremp-Graus Basin and as such we restrict our inferences of hydrologic changes to the early PETM but note the possibility that they may be representative of the PETM as a whole.
We estimated channel depth from fining upward sequences and bar clinoforms22, and grain size from 26 Palaeocene (Esplugafreda and IVF) and 22 early PETM (CC) channel bodies (see Methods, Fig. 2 and Supplementary Fig. 1). At each location, the b-axes of between 94 and 405 clasts (median of 108), were measured near the base of individual channel deposits (Supplementary Material). D50 corresponds to the 50th percentile of the grain size distribution showing a normal cumulative density function. Channel heights are given in meters, with uncertainty of 35% due to incomplete preservation of original channel fill thickness23.
Outcrop panoramic view and line drawing with location of field grain size measurement stations. PETM Claret Conglomerate is in pink above blue IVF interval (colors as on Fig. 1). Green line is the Mid-Palaeocene Unconformity separating Lower Palaeocene Talarn and Upper Palaeocene Esplugafreda formations. Image data: Google, Digital Globe. See Supplementary Material for large version.
The mean D50 of the pre-PETM channel deposits is 21.2 ± 5 mm (1σ, N = 26) and the mean D50 of the CC deposits is 19.5 ± 4 mm (N = 22). Average channel depth is 1.1 ± 0.6 m and 1.4 ± 0.6 m, respectively, for the Palaeocene and the early PETM (Fig. 3a). The data are non-normally distributed, and non-parametric Kruskal-Wallis tests on the grain size (χ2 = 1.17, p = 0.2791) and channel depth (χ2 = 2.97, p = 0.085) data do not reject the null hypotheses that pre-PETM and early PETM channel deposits have the same median values (at the 5% confidence level).
Channel deposits characteristics before and during the PETM global warming. (a) D50 and bankfull channel depth Hbf (±SE) at pre-PETM (N = 26) and PETM (N = 22) field stations. Large circles indicate population mean (±1σ). (b) Calculated palaeoslopes at individual field stations indicated with standard error. Larger circles indicate formation average paleoslope (±1σ).
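The non-parametric comparison described above can be reproduced with a standard statistics library, as in the Python sketch below. The sample arrays are random placeholders drawn to mimic the reported means and standard deviations (the per-station measurements are in the Supplementary Material), so the test statistic and p-value will not match the published χ2 and p exactly.

# Sketch of the Kruskal-Wallis test used to compare pre-PETM and PETM channel
# deposits. The sample arrays are illustrative placeholders, not the measured
# station values, so the statistics will differ from the published ones.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
d50_pre_petm = rng.normal(21.2, 5.0, size=26)   # mm, mimicking mean +/- 1 sigma
d50_petm = rng.normal(19.5, 4.0, size=22)

stat, p = kruskal(d50_pre_petm, d50_petm)
print(f"Kruskal-Wallis chi2 = {stat:.2f}, p = {p:.3f}")
# p > 0.05 means equal medians cannot be rejected at the 5% level,
# which is the conclusion reported in the text.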
Paola and Mohrig24 proposed an estimator of river palaeoslope for coarse-grained braided channel fills
$$S_{est} = 0.094 \times \langle D_{50} \rangle / \langle h \rangle,$$
where 〈D50〉 and 〈h〉 are the channel-averaged median grain size and bankfull depth, respectively. Although the Claret Conglomerate appears to meet the specific criteria outlined by Paola and Mohrig24, the Esplugafreda channels, encased in cohesive floodplain banks and interpreted as sinuous ribbons4, likely do not. Thus, we employ a more generalized empirical relationship for alluvial rivers developed by Trampush et al.25:
$$\log S = \alpha_0 + \alpha_1 \log D_{50} + \alpha_2 \log H_{bf}$$
where S is the channel slope and Hbf the bankfull channel depth. The empirical coefficients α0, α1 and α2 used are −2.08 ± 0.0015 (mean ± standard error SE), 0.2540 ± 0.0007, and −1.0900 ± 0.0019, respectively25. Equation 2 is particularly amenable to palaeoslope estimation for both the Esplugafreda and Claret channel deposits because it is based on a broad range of channel patterns, grain sizes (sand and gravel) and modes of sediment transport. Calculations indicate a decrease in average channel slope from 0.0035 ± 0.0016 (mean ± 1σ, in m/m) in the Palaeocene to 0.0028 ± 0.0017 during the early PETM (Fig. 3b). However, the estimates are not normally distributed and a Kruskal-Wallis test (χ2 = 2.22, p = 0.136) cannot reject the null hypothesis that the population medians are the same.
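A minimal Python implementation of Equation 2 with the quoted coefficients is sketched below. Grain size and bankfull depth are assumed to be in metres, and the inputs are single illustrative station values; since the study averages slopes computed station by station, feeding in formation means will not reproduce the published formation-average slopes exactly.

# Palaeoslope from the empirical relation
# log10(S) = a0 + a1*log10(D50) + a2*log10(Hbf)
# (Eq. 2, coefficients as quoted in the text). D50 and Hbf are assumed to be
# in metres; the inputs below are illustrative station values.
import math

A0, A1, A2 = -2.08, 0.2540, -1.0900

def palaeoslope(d50_m: float, h_bf_m: float) -> float:
    return 10 ** (A0 + A1 * math.log10(d50_m) + A2 * math.log10(h_bf_m))

print(palaeoslope(0.0212, 1.1))  # pre-PETM-like station, ~0.0028
print(palaeoslope(0.0195, 1.4))  # PETM-like station,     ~0.0021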
We estimate volumetric fluxes of water by multiplying average equilibrium velocity, U, (using above derived height and slope data with Manning's equation
$$U = \frac{1}{n} R^{2/3} S^{1/2},$$
where n=0.03 ± 0.005 (±1σ) is Manning's coefficient, and R the hydraulic radius is approximated by 〈h〉 the channel height), average channel depth (Fig. 3) and formation averaged river width extracted from published data (Methods). Dreyer4 and Colombera et al.5 present comprehensive data sets of channel width and number of storeys of the Esplugafreda and Claret formations (average palaeoflows are perpendicular to the outcrop strike). Average individual storey width in the Palaeocene is 15 ± 7 m (1σ, N = 24, Fig. 4, Methods) and is interpreted to represent full flow width during channel forming events. In contrast, early PETM sandbodies display multi-lateral channels4,5 that represent belts of shallow interconnected streams with individual storey average width of 169 ± 36 m (1σ, N = 13, see Methods and Supplementary Material). Comparison with modern river data (Fig. 4) suggests that active flow widths within such channel belts were most likely near a central value of 95.5 meters, in a range of 22 m to 169 m. The fewer number of total channel bodies in the Claret Conglomerate is related to their larger width compared to the Esplugafreda Formation as the total basin width likely did not change spanning the PETM. Moreover, during the early PETM the extreme (close to 100%) channel density prohibits assessments of whether more than one of these braid-belts was active at any given time. In contrast, the very low channel density of ~5% during the Esplugafreda Formation (Fig. 2, and Suppl. Fig. 1) suggests only one active channel at a given time. We obtain a representative volumetric discharge estimate (±SE) of 31 ± 4.3 m3/s in the Palaeocene compared to 253 ± 102 m3/s during the early PETM. Propagating uncertainties, this amounts to 8.1 ± 3.5-fold increase (±SE) of volumetric peak channel-forming discharge during the early PETM, implying at least a 1.35-fold, and at most a 14.9-fold increase within a 95% confidence interval (±1.96xSE).
Channel width and depth data recorded before and during the PETM in the Esplugafreda sector. Ribbon channels (width/depth <15) dominate the pre-PETM deposits (blue dots). The range of possible active flow width during PETM braid-belt deposition is obtained from PETM single-story width estimates (orange dots) and modern river data (white and grey squares).
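The discharge estimate can be sketched as follows: Manning's equation gives an equilibrium velocity, which is then multiplied by the mean channel depth and the active flow width. The inputs below are the formation means quoted in the text; because the published figures are built from station-level estimates with propagated uncertainties, these outputs only approximate the reported ~31 and ~253 m3/s.

# Sketch of the volumetric discharge estimate Q = U * h * w, with U from
# Manning's equation U = (1/n) * R**(2/3) * S**(1/2) and R approximated by
# the channel depth h. Inputs are the formation means quoted in the text;
# the outputs only approximate the published ~31 and ~253 m3/s, which
# propagate station-level uncertainties.
N_MANNING = 0.03  # Manning roughness coefficient (SI units)

def discharge(depth_m: float, slope: float, width_m: float, n: float = N_MANNING) -> float:
    velocity = (1.0 / n) * depth_m ** (2.0 / 3.0) * slope ** 0.5
    return velocity * depth_m * width_m

q_pre_petm = discharge(depth_m=1.1, slope=0.0035, width_m=15.0)
q_petm = discharge(depth_m=1.4, slope=0.0028, width_m=95.5)
print(f"pre-PETM: {q_pre_petm:.0f} m3/s, PETM: {q_petm:.0f} m3/s, "
      f"ratio ~{q_petm / q_pre_petm:.1f}x")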
Channel-forming discharge in alluvial river systems is typically dictated by flood recurrence on timescales of 1.5–3 years24,26, and slopes adjusted to sediment flux and grain size distribution27,28. Therefore the parameters measured in this study unlikely relate to mean annual precipitation conditions, but rather to (inter-) annual rainfall variability and/or extreme precipitation events. These extreme events may be related to transport of the outsized clasts observed by Schmitz and Pujalte1. The observed minimal changes in flow depths and slopes, but increases in channel width spanning the early PETM are consistent with recent studies that suggest modern, coarse-grained rivers actively self-organize to slightly exceed critical shear velocity under a variety of discharges29. Larger floods and discharge events induce channel widening rather than deepening29.
Likely exacerbating this widening response is the observed vegetation decline in the region. Pollen records of correlative marine sections in western Spain30 document a change from permanent conifer forests prior to the PETM to sparse vegetation consistent with brief periods of rain in a warmer and drier climate during the PETM. Such a decline in vegetation would have enhanced erodibility of channel banks by decreasing their root-controlled cohesion inducing a more braided planform morphology and/or promoting channel lateral mobility31. This behavior would also have enhanced wholesale denudation of the entire landscape. Field studies of deforested/afforested catchments32 and numerical models of coupled vegetation-landscape evolution33 demonstrate that devegetated catchments respond quickly to rainfall events and produce narrower hydrographs and higher peak discharges, which result in more-than-linear increase in catchment sediment efflux. The motion of landslides can also be strongly accelerated by even negligible increases in rainfall34. Vegetation decline and extreme precipitation events both provide a positive feedback to increased bedload flux, which itself is a primary control on channel cross-sectional aspect ratio35.
In addition, the observed changes in stratigraphy (abrupt alluvial progradation) are broadly consistent with numerical models of fluvial response to increased mean precipitation rates27,36,37. However, since most river adjustment during the early PETM took place by enlargement of the braid belt, specific transport capacity does not evolve significantly and thus also implies only minor grain size evolution of the coarse fraction. This phenomenon is also observed in fluvial deposits within the northern Bighorn Basin of Wyoming (U.S.A.), where minimal changes in grain size and flow depths occur, but a combination of seasonal climate, increased sediment flux, and sparse floodplain vegetation generated an anomalously thick and laterally extensive fluvial sandbody17,38,39.
Overall our findings contribute to the growing evidence for substantial increases in runoff and continental erosion during the PETM40,41. Consistent evidence for hydrological change on land and continental margins further comes from biotic change recorded in fossils38,42,43, and the hydrogen isotopic composition of plant biomarkers44. It appears the PETM caused a number of 'system clearing' events45 within terrestrial geomorphic systems that flushed downstream fine-grained sediments, which were eventually exported into marginal marine settings20,39,40,46. A 6-fold and a 9-fold increases in clay abundance across the PETM have been reported in the distal portion of the Tremp-Graus Basin46 and in the northern margin of the Bay of Biscay47, respectively. Within error, this is consistent with the vast increase in discharge proposed herein despite the variety of other factors (e.g., marine currents, shelf storage) that control sediment delivery to deep-water48.
What implications do these results have for the future? Model simulations and observations suggest that anthropogenic climate warming will lead to pronounced changes in global hydrology. Specifically, changes in seasonality and the increased occurrence and intensity of extreme weather events are expected, but uncertainty remains in the magnitude of change49,50,51. Theoretical arguments indicate that precipitation extremes should scale with the water-holding capacity of the atmosphere, which increases at rates of ~7% C−1 according to the Clausius–Clapeyron equation52. Although this prediction is supported by global data on annual maximum daily rainfall53, subdaily precipitation extremes (hourly) seem to depart from it54 with some regions showing lower-than Clausius-Clapeyron scaling while others display "super" Clausius-Clapeyron dependence for temperatures above ~12 °C55 and decreasing rainfall intensity above ~24 °C56. These predictions, however, may differ significantly between dry and wet regions51,57, and depend on moisture availability, rainfall mechanism (convective versus stratiform58), and local topographic effects59 among others. This leads to little consensus on expected perturbations of precipitation patterns with global warming51.
If we proceed under the presumption that our estimates of river discharges document heavy rainfall events, the observed increase during the early PETM warming is at least close to a 7% C−1 Clausius-Clapeyron prediction of 1.4-fold increase for a +5 °C of warming (cumulating 7% of increase for 5 warming steps, Methods), but likely largely greater than even "super" Clausius-Clapeyron predictions whereby at double the 7% C−1 rate54,55, a +5 °C warming yields a 1.93-fold increase in precipitation. Proximity to water masses (Atlantic and Mediterranean) and moisture availability56, added to local convective and topographic effects in the piedmont of the nascent Pyrenean orogeny could explain such locally amplified response. Within uncertainties, our results suggest a possible "hyper" Clausius-Clapeyron scaling of precipitation extremes during the PETM, and hence support the likelihood that current global warming may intensify extreme rainfall events and associated floods at rates higher, perhaps unpredictably higher, than forecast by general circulation models54.
Grain size data collection
At each location, the b-axis of between 94 and 405 clasts (median of 108), were measured near the base of individual channel deposits following established methods60,61,62. The grid-by-number method63 was used on relatively large, easily accessed outcrops. A grid with regularly spaced nodes was marked over the vertical surface of the outcrop and clasts located under each node were measured. The spacing of the nodes was defined according to visual estimate of the D90 of the outcrop in order to avoid repeated sampling of identical clasts, and on average, nodes were spaced by at least 20 cm. The random method64 was performed on outcrops with limited extent. In this case, the measured clasts were randomly selected in a 1 × 1 m2 area. Finally, the grain-size distribution was also determined from pictures for outcrops with access issues65. Pictures were taken with a Nikon Coolpix S2700 camera with 16Mpixels resolution from a distance of ca. 1 meter, and a ruler was included on each picture for scale. The average resolution of the pictures thus obtained is ~0.12 mm/pixel. Excluding the edges of the pictures, all visible clasts were measured using JMicrovision software66. This method corresponds to an areal-by-number sample that must be converted to an equivalent grid-by-number sample to be comparable to other samples. A conversion factor of 2 was used in this study65,67,68.
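The percentile statistics behind the reported D50 (and the D90 used to set the grid spacing) reduce to a simple calculation, sketched below with a placeholder set of b-axis measurements rather than the field data.

# Minimal sketch of the grain-size statistics: D50 and D90 as percentiles of
# the measured b-axis distribution at one station. The measurements array is
# an illustrative placeholder, not field data from the paper.
import numpy as np

b_axis_mm = np.array([8, 12, 15, 18, 19, 21, 22, 24, 27, 30, 35, 42, 55])  # placeholder

d50 = np.percentile(b_axis_mm, 50)
d90 = np.percentile(b_axis_mm, 90)
print(f"D50 = {d50:.1f} mm, D90 = {d90:.1f} mm")
# D90 guides the grid spacing on the outcrop so that the same clast is not
# sampled twice under neighbouring nodes.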
Width-depth data
Esplugafreda formation
In the Esplugafreda formation, Dreyer4 described single- and multistorey ephemeral ribbon-bodies interpreted as arroyo-like channels entrenched into the floodplain, and filled during sporadic discharge episodes, and measured their widths and depths. The width and heights of individual storeys within multistorey sandbodies of the Esplugafreda bodies are not reported in Dreyer4 and thus not taken into account in our analyses. The heights of single storeys reported in Dreyer's study range from 0.4 to 5.6 meters. Given our own field measurements of channel heights, with average of 1.1 ± 0.6 m (1σ), we thus excluded Dreyer's storeys with heights exceeding our measured average by 2 standard deviations (i.e. exceeding 2.25 m), i.e. 6 out of 30 storeys, which we suspect could be multistoreys given their anomalous height. This minimizes slightly the mean channel width by approximately 10%, i.e. mean width of 15 ± 7 m (1σ, N = 24) instead of 17 ± 8 m, and thus yields a conservative estimate of water discharge.
Claret Conglomerate
Channel sandbodies of the Aren exposure drawn in Dreyer4 allows measuring individual storey dimensions. Dreyer identified single storey sandbodies based on the presence of major erosion surfaces and moderately well developed pedogenesis intervals (pause-planes) between separate bodies. Minor erosion surfaces found within the single storeys sandbodies are interpreted as surfaces separating smaller-scale elements within a braid-belt such as bars and individual channels4. In the present study, we measured width and depth on Dreyer's panorama4 with reference to Mohrig et al.22 methodological guidelines considering 1) the presence of wings, which can represent either a relatively wide topmost internal storey69, or a channel levee tapering out towards the overbank fines, and 2) the topographic relief above the lowest wing, which can represent either superelevation of the channel above the adjacent floodplain wings22, or be the result of lateral migration of the entire braid belt. According to Mohrig et al.22, natural channels become superelevated to the point where the riverbed approximately reaches the elevation of the adjacent floodplain. Accordingly, storeys displaying topographic relief (above the lowest wing) greater than incision depth (below the lowest wing) are considered as suspect multistorey channel sandbodies (even though they are identified as single-storey in Dreyer's study) and excluded from the analysis. This assumption may exclude some anomalously deep channels within the dataset, and yields more conservative estimates for discharge volume. Width and depth of sandbodies are therefore measured at the level of the lowest wing, or at the level of the lowest eroded sandbody margin (Supplementary Fig. 2), thus always yielding conservative width estimates. According to this approach, the average single-storey estimated width amounts to a conservative value of 169 ± 36 m (1σ, N = 13). By comparison, Colombera et al.5 recently described the entire multi-storey channel complexes of the Claret formation and measured a less conservative average width of 484 ± 508 m (1σ).
Church and Rood (1983) river data
Figure 4 shows the width and depth of modern rivers of the Church and Rood70 catalogue with median grain size in the same range as found in the Esplugafreda and Claret deposits (17.5 mm to 27 mm).
Clausius-Clapeyron changes in precipitation
Precipitation extremes are expected to scale with temperature change at a rate given by the Clausius-Clapeyron equation, which governs change in water-holding capacity of the atmosphere at a rate of 7% per degree52. Cumulating this rate 5 times to account for a 5 °C increase in temperature during the PETM amounts to a ~40% increase in precipitation, i.e. 1.4 times the initial pre-PETM value. The so-called "super" Clausius-Clapeyron scaling involves a doubling (i.e. 14%) of the above rate for average temperatures above 12 °C, which implies a 1.93-fold increase in precipitation from initial value for a 5 °C global warming.
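The compounded scaling described here amounts to the short calculation below, which simply restates the arithmetic in the text.

# Clausius-Clapeyron scaling of precipitation extremes compounded over a
# 5 degree C warming: ~7% per degree (standard) and ~14% per degree ("super").
warming_steps = 5
standard_cc = 1.07 ** warming_steps  # ~1.40-fold increase
super_cc = 1.14 ** warming_steps     # ~1.93-fold increase
print(f"standard: x{standard_cc:.2f}, super: x{super_cc:.2f}")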
Schmitz, B. & Pujalte, V. Abrupt increase in seasonal extreme precipitation at the Palaeocene-Eocene boundary. Geology 35, 215–218 (2007).
Schmitz, B. & Pujalte, V. Sea-level, humidity, and land-erosion records across the initial Eocene thermal maximum from a continental-marine transect in northern Spain. Geology 31, 689, https://doi.org/10.1130/g19527.1 (2003).
Pujalte, V., Schmitz, B. & Baceta, J. I. Sea-level changes across the Palaeocene–Eocene interval in the Spanish Pyrenees, and their possible relationship with North Atlantic magmatism. Palaeogeography, Palaeoclimatology, Palaeoecology 393, 45–60, https://doi.org/10.1016/j.palaeo.2013.10.016 (2014).
Dreyer, T. Quantified fluvial architecture in ephemeral stream deposits of the Esplugafreda Formation (Palaeocene), Tremp-Graus Basin, northern Spain. In Alluvial Sedimentation, edited by Marzo, M and Puigdefabregas, C. Spec. Publs Int. Ass. Sediment. 17, 337–362 (1993).
Colombera, L., Arévalo, O. J. & Mountney, N. P. Fluvial-system response to climate change: The Palaeocene-Eocene Tremp Group, Pyrenees, Spain. Global and Planetary Change 157, 1–17 (2017).
Domingo, L., López-Martínez, N., Leng, M. J. & Grimes, S. T. The Paleocene–Eocene Thermal Maximum record in the organic matter of the Claret and Tendruy continental sections (South-central Pyrenees, Lleida, Spain). Earth and Planetary Science Letters 281(3–4), 226–237 (2009).
Manners, H. R. et al. Magnitude and profile of organic carbon isotope records from the Paleocene–Eocene Thermal Maximum: Evidence from northern Spain. Earth and Planetary Science Letters. 376, 220–230 (2013).
Manners, H. R. A Multi-Proxy Study Of The Palaeocene-Eocene Thermal Maximum In Northern Spain. 238 (2014).
Pujalte, V. et al. Correlation of the Thanetian-Ilerdian turnover of larger foraminifera and the Paleocene-Eocene thermal maximum: confirming evidence from the Campo area (Pyrenees, Spain). Geologica Acta. 7(1–2) (2009).
Foreman, B. Z. & Straub, K. M. Autogenic geomorphic processes determine the resolution and fidelity of terrestrial paleoclimate records. Science advances. 3(9), (2017).
Roest, W. R. & Srivastava, S. P. Kinematics of the plate boundaries between Eurasia, Iberia, and Africa in the North Atlantic from the Late Cretaceous to the present. Geology 19(6), 613–616 (1991).
Muñoz, J. A. Evolution of a continental collision belt: ECORS-Pyrenees crustal balanced cross-section. In: McClay, K. R. & Buchanan, P. G. (Eds.), Thrust Tectonics. Chapman & Hall, London, 235–246 (1992).
Teixell, A. Estructura cortical de la Cordillera Pirenaica. Geologia de Espana, 320–321 (2004).
Puigdefàbregas, C., Muñoz, J. A. & Vergés, J. Thrusting and foreland basin evolution in the southern Pyrenees. In: McClay, K.R., Buchanan, P.G. (Eds.), Thrust Tectonics. Chapman & Hall, London, 247–254 (1992).
Dinarès‐Turell, J., Baceta, J. I., Pujalte, V., Orue‐Etxebarria, X. & Bernaola, G. Magnetostratigraphic and cyclostratigraphic calibration of a prospective Palaeocene/Eocene stratotype at Zumaia (Basque Basin, northern Spain). Terra Nova 14(5), 371–378 (2002).
Baceta, J. I., Pujalte, V., Serra-Kiel, J., Robador, A. & Orue-Etxebarria, X. El Maastrichtiense final, Paleoceno e Ilerdiense inferior de la Cordillera Pirenaica. Geología de España, 308–313 (2004).
Kraus, M. J., Woody, D. T., Smith, J. J. & Dukic, V. Alluvial response to the Palaeocene–Eocene Thermal Maximum climatic event, Polecat Bench, Wyoming (USA). Palaeogeography, Palaeoclimatology, Palaeoecology 435, 177–192 (2015).
Kraus, M. J. & Riggins, S. Transient drying during the Paleocene–Eocene Thermal Maximum (PETM): analysis of paleosols in the Bighorn Basin, Wyoming. Palaeogeography, Palaeoclimatology, Palaeoecology 245(3–4), 444–461 (2007).
Adams, J. S., Kraus, M. J. & Wing, S. L. Evaluating the use of weathering indices for determining mean annual precipitation in the ancient stratigraphic record. Palaeogeography, Palaeoclimatology, Palaeoecology 309(3–4), 358–366 (2011).
Foreman, B. Z., Heller, P. L. & Clementz, M. T. Fluvial response to abrupt global warming at the Palaeocene/Eocene boundary. Nature 491, 92–95 (2012).
Romans, B. W., Castelltort, S., Covault, J. A., Fildani, A. & Walsh, J. P. Environmental signal propagation in sedimentary systems across timescales. Earth-Science Reviews 153, 7–29 (2016).
This research was funded by Swiss National Science Foundation grant Earth Surface Signaling Systems to S.C. (No 200021-146822). A.S. acknowledges support from the Netherlands Earth System Sciences Centre (NESSC). We acknowledge Chris Paola, Fritz Schlunegger and David Mohrig for discussions.
Department of Earth Sciences, University of Geneva, Rue des Maraîchers 13, 1205, Geneva, Switzerland
Chen Chen, Laure Guerit, Louis Honegger, Marc Perret & Sébastien Castelltort
Géosciences Environnement Toulouse, 14 av. Edouard Belin, 31400, Toulouse, France
Laure Guerit
Department of Geology, Western Washington University, Bellingham, Washington, 98225, USA
Brady Z. Foreman
Jackson School of Geosciences, The University of Texas at Austin, 2305 Speedway Stop, C1160, Austin, Texas, USA
Hima J. Hassenruck-Gudipati
ISTE, Geopolis, University of Lausanne, 1015, Lausanne, Switzerland
Thierry Adatte
Department of Earth Sciences, Faculty of Geosciences, Utrecht University, Heidelberglaan 2, 3584CS, Utrecht, Netherlands
Appy Sluijs
C.C., L.G., B.Z.F., H.J.H., T.A., L.H., M.P. and S.C. collected field data. C.C., L.G., B.Z.F. and S.C. supervised field data collection, statistical analyses and palaeohydraulic estimates. S.C. wrote the manuscript with B.Z.F., A.S., C.C. and L.G. All authors contributed to data analysis, interpretation, manuscript editing and discussions.
Correspondence to Sébastien Castelltort.
Chen, C., Guerit, L., Foreman, B.Z. et al. Estimating regional flood discharge during Palaeocene-Eocene global warming. Sci Rep 8, 13391 (2018). https://doi.org/10.1038/s41598-018-31076-3
MRI Liver Image Assisted Diagnosis Based on Improved Faster R-CNN
Minjie Tao | Jianshe Lou | Li Wang*
Department of Hepatobiliary Surgery, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou 310006, China
Audit Office of Zhejiang Business College, Hangzhou 310053, China
https://doi.org/10.18280/ts.390428
Liver occupancies pose diagnostic challenges: they come in many types with varied manifestations, and benign and malignant lesions are difficult to differentiate. Taking contrast-enhanced MRI liver images as the research object, this paper targets the detection and identification of liver occupancy lesion areas and the determination of whether they are benign or malignant, and proposes an auxiliary diagnosis method for liver images that combines deep learning with MRI medical imaging. The first step is to establish a reusable standard dataset for MRI liver occupancy detection through pre-processing, image denoising, lesion annotation and data augmentation. The classical region-based convolutional neural network (R-CNN) algorithm Faster R-CNN is then improved by incorporating a CondenseNet feature extraction network, custom-designed anchor sizes and transfer learning pre-training, in order to further improve the detection accuracy and the benign/malignant classification performance for liver occupancies. Experiments show that the improved model can effectively identify and localise liver occupancies in MRI images, achieving a mean average precision (mAP) of 0.848 and an Area Under the Curve (AUC) of 0.926 on the MRI standard dataset. This study has important research significance and application value for reducing missed and erroneous diagnoses and improving the early clinical diagnosis rate of liver cancer.
MRI images, liver occupancy, image segmentation, deep learning, Faster R-CNN
1. Introduction

Liver cancer is a common malignancy in clinical practice and ranks fifth in mortality among malignant tumours worldwide. China accounts for about half of the world's liver cancer cases, with incidence and mortality rates significantly higher than the global level: it is the fourth most common cancer and the third leading cause of cancer-related death in the country [1-3]. Clinical data from hepatobiliary surgery indicate that liver cancer is often already advanced or at a mid-to-late stage when detected, leaving limited effective treatment options. At present, the only radical treatments with proven efficacy are liver resection and transplantation. Although surgery can effectively control disease progression, reports [4-6] indicate that the five-year recurrence rate after radical resection is as high as 40%-70%, so the prognosis remains unsatisfactory. Other studies have shown that the five-year survival rate of patients with small hepatocellular carcinoma (a single nodule less than 3 cm in diameter) can reach 80% after surgical resection or radiofrequency ablation [7-10]. Early detection of small intrahepatic occupancies is therefore of great significance for the treatment and prognosis of liver cancer.
Among liver occupancy diagnostic techniques, dynamic contrast-enhanced CT or MRI examinations are recommended by the American Association for the Study of Liver Diseases (AASLD) for the non-invasive diagnosis of hepatocellular carcinoma (HCC), and the American College of Radiology updated the Liver Imaging Reporting and Data System (LI-RADS v2018) in 2018 to unify the imaging signs and diagnostic workflow for HCC. For earlier-stage disease such as the small hepatocellular carcinoma (small HCC) mentioned above, a study [11] compared the diagnostic value of CT and MRI based on LI-RADS v2018 for HCC less than 3 cm in diameter and showed that MRI was overall superior to CT for diagnosing both obvious malignant liver occupancies and early small HCC, with higher sensitivity and greater saliency on enhanced imaging. In view of this, and to improve the diagnostic rate of liver occupancies, this paper uses contrast-enhanced MRI scans as the study data.
With the advent of big data in healthcare, there is an urgent need for computer-aided diagnosis (CAD) techniques that can quantitatively analyse medical images and provide proactive reference information. In recent years, deep learning algorithms (e.g., Fast R-CNN, deep convolutional neural networks and Faster R-CNN) have achieved good results in liver tumour detection and recognition. Meng et al. [12] proposed a 3D dual-path multi-scale convolutional neural network that uses the two paths to balance segmentation performance against computational resource requirements for robust segmentation of the liver and liver tumours. Tang et al. [13] used Faster R-CNN to detect the approximate location of the liver, which was then fed into DeepLab to segment the liver. Li et al. [14] proposed a Hybrid-DenseNet (H-DenseNet), which effectively aggregates the intra-slice features extracted by a 2D DenseUNet into a 3D DenseUNet to segment the liver and tumour simultaneously in 3D. Bousabarah et al. [15] used a deep convolutional neural network with radiomic capabilities to automatically detect and characterise hepatocellular carcinoma on contrast-enhanced MRI. Kim et al. [16] used a deep learning-based classifier to detect HCC on contrast-enhanced MRI. Zhao et al. [17] combined adversarial learning with Fast R-CNN, using a tripartite adversarial scheme to improve the network's detection capability. While these studies demonstrate the feasibility of deep learning for tumour target detection, there are fewer studies on image-aided diagnosis that classify liver occupancy lesions and determine whether they are benign or malignant. In addition, the uneven intensity, noise interference, weak contrast and irregular appearance and size of tumour lesions in MRI [18] pose challenges for deep learning-based CAD research on liver occupancy images.
To address these issues, this paper proposes an auxiliary diagnosis algorithm that uses image segmentation and deep learning to detect lesion types and identify benign and malignant liver occupancies. Through experiments on predicting benign and malignant liver lesions, the paper demonstrates the efficacy of the improved algorithm in distinguishing different categories of liver occupancies once a lesion has been confirmed, and explores its feasibility and application value for identifying and detecting liver lesions in MRI images, so as to assist physicians in analysing liver cancer MRI images and taking further diagnostic measures.
2. Construction of a Standard Dataset for MRI-Based Liver Occupancy Detection
Deep learning and CAD algorithms require a large amount of training data, but standard MRI image datasets for liver occupancy are lacking. We therefore construct a standard MRI dataset for liver occupancy detection, covering both benign and malignant occupancies, for training and testing deep learning algorithms. Depending on the timing of contrast agent injection, dynamic contrast-enhanced MRI (DCE-MRI) acquires multimodal liver data [19], including in-phase/opposed-phase T1WI, fat-suppressed/non-fat-suppressed T2WI, unenhanced scans, diffusion-weighted imaging (DWI) and enhanced scan sequences (arterial phase, portal phase, equilibrium phase, hepatobiliary phase); a hepatocyte-specific contrast agent is used for the hepatobiliary phase.
2.1 Image pre-processing
The main tasks in the data pre-processing stage were: (1) the raw MRI data in DICOM format acquired from the hospital PACS system were converted, using Matlab R2018a to transcode the DICOM files [20], into JPEG images suitable for deep learning analysis; (2) in collaboration with physicians with years of radiology experience at the partner hospital, the JPEG images of liver occupancy patients admitted over the past year were analysed, and patients were included if they met two criteria: (a) DCE-MRI performed within seven days before biopsy or treatment, and (b) a diagnosis of benign or malignant liver occupancy confirmed by surgery or puncture biopsy. A total of 93 liver occupancy patients aged 30 to 80, including 35 women and 58 men, were finally included in this study (Table 1).
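The conversion itself was done in Matlab R2018a in this study; purely as an illustration of the same step, the sketch below converts a folder of DICOM slices to 8-bit JPEGs in Python using pydicom and Pillow. The directory names and the simple min-max intensity windowing are assumptions for the example, not the paper's exact procedure.

```python
# Hypothetical illustration of the DICOM-to-JPEG conversion step described above.
# Requires: pip install pydicom pillow numpy
import os
import numpy as np
import pydicom
from PIL import Image

def dicom_folder_to_jpeg(src_dir, dst_dir):
    """Convert every DICOM file in src_dir to an 8-bit JPEG in dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if not name.lower().endswith(".dcm"):
            continue
        ds = pydicom.dcmread(os.path.join(src_dir, name))
        pixels = ds.pixel_array.astype(np.float32)
        # Window the high-bit-depth MRI intensities into the 0-255 range.
        lo, hi = pixels.min(), pixels.max()
        scaled = (pixels - lo) / max(hi - lo, 1e-6) * 255.0
        img = Image.fromarray(scaled.astype(np.uint8))
        img.save(os.path.join(dst_dir, name.rsplit(".", 1)[0] + ".jpg"))

# Example (paths are placeholders):
# dicom_folder_to_jpeg("raw_dicom/patient_001", "jpeg/patient_001")
```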
Table 1. The main benign and malignant liver occupancy types and their MRI signs

Benign liver occupancy:

Hepatic hemangioma: Prevalent in women aged 40-50. MRI shows moderate to high signal on T2WI and low signal on T1WI. Enhanced scans show nodular discontinuous enhancement and contrast agent retention at the edges. Central necrosis is seen in large hepatic haemangiomas. If a hepatocyte-specific contrast agent is used, there may be an artifact of contrast agent outflow.

Focal nodular hyperplasia (FNH): Prevalent in young women. MRI shows isosignal T1WI and mild high-signal T2WI. The central necrosis may have low signal on T1WI and moderate to high signal on T2WI. On enhancement, the arterial phase is homogeneous, the portal phase is isosignal to the liver parenchyma, and the central necrosis shows delayed enhancement. FNH shows no rapid contrast washout.

Hepatic adenoma (HCA): Prevalent in patients using oral estrogen. MRI shows mild/moderate high signal on T2WI with arterial-phase intensification. Pathologically, HCA is classified into three types with different imaging features: 1. Inflammatory type, with marked high signal at the margins on T2WI and delayed enhancement on enhanced scans. 2. HNF-1α-activated HCA with a diffuse fatty component, i.e., high signal on T1WI and an opposed-phase signal decrease. 3. Beta-catenin-activated type, with indistinct irregular margins and high signal on T2WI; this type tends to undergo malignant transformation. Note: HCA is sometimes not easily distinguished from FNH, but there is usually no necrosis within an HCA. A hepatocyte-specific contrast agent can help to differentiate them: FNH shows contrast uptake, but HCA generally does not.

Cystic lesions: Cysts are usually benign. MRI shows uniform low signal on T1WI and markedly high signal on T2WI, with clear margins and no intensification after enhancement.

Malignant liver occupancy:

Hepatocellular carcinoma (HCC): The main signs include an "envelope", significant non-rim arterial-phase enhancement, and non-rim "washout". MRI shows high signal in the arterial phase, contrast agent outflow in the portal phase, low signal on T1WI, mildly high signal on T2WI and high signal on DWI. The signal is heterogeneous in the early enhanced arterial phase, with contrast agent outflow and pseudo-capsule patterns seen in the late enhanced phase.

Intrahepatic bile duct cancer: MRI shows low signal on T1WI and high signal on T2WI, with heterogeneous continuous enhancement at the edges and retraction of the hepatic capsule.

Metastatic cancer of the liver: MRI shows multiple lesions of variable size with low signal on T1WI and high signal on T2WI. Most metastases have a rich blood supply, with circumferential enhancement seen in segment VII lesions and contrast agent outflow in the late enhancement of the lesions.
2.2 Image denoising
To build a well-performing CAD system, it is essential to improve the image quality of DCE-MRI through appropriate denoising. The noise in MRI images is mainly thermal and sometimes physiological [21], and many studies model it as Rician noise [22], which is strongly correlated with the signal [23]. Traditional denoising methods are suited only to certain noise types and do not filter Rician noise well, whereas the wavelet transform handles Rician noise better. This paper therefore adopts a wavelet transform-based denoising method [24]. The process has three steps: first, the noisy MRI liver image is taken as input, with the noise modelled as additive Gaussian noise on the original signal; second, a wavelet transform is applied to obtain the wavelet coefficient matrix; finally, the coefficient matrix is processed with hard and soft thresholding functions. After a threshold is set, coefficients above ϕ are shrunk and coefficients below ϕ are set to zero, and the denoised image is reconstructed from the new coefficients. The soft and hard thresholding functions apply this rule to the absolute values of the coefficients, as expressed by the following equations (soft and hard thresholding, respectively):
$\rho(\psi)= \begin{cases}\operatorname{sign}\left(\psi_{i, j}\right) \cdot\left(\left|\psi_{i, j}\right|-\phi\right), & \left|\psi_{i, j}\right| \geq \phi \\ 0, & \left|\psi_{i, j}\right|<\phi\end{cases}$
$\rho(\psi)= \begin{cases}\psi_{i, j}, & \left|\psi_{i, j}\right| \geq \phi \\ 0, & \left|\psi_{i, j}\right|<\phi\end{cases}$
where i is the decomposition level and j indexes the wavelet coefficients in the different orientations. After applying both soft and hard thresholding for wavelet denoising, we found that the images obtained with the soft threshold were smoother, whereas the hard threshold left more visible artefacts in the image texture; we therefore used the images denoised by soft thresholding.
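To make the soft-thresholding rule above concrete, the following sketch applies it to the detail coefficients of a 2-D wavelet decomposition using PyWavelets. The wavelet family, decomposition depth and threshold estimate are assumptions chosen for illustration, not the parameters used in the paper.

```python
# Minimal wavelet soft-threshold denoising sketch (assumed parameters).
# Requires: pip install PyWavelets numpy
import numpy as np
import pywt

def wavelet_soft_denoise(image, wavelet="db4", level=2, threshold=None):
    """Denoise a 2-D image by soft-thresholding its detail coefficients."""
    coeffs = pywt.wavedec2(image.astype(np.float32), wavelet, level=level)
    if threshold is None:
        # Universal threshold estimated from the finest diagonal detail band.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(image.size))
    new_coeffs = [coeffs[0]]  # keep the approximation band unchanged
    for detail_bands in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(band, threshold, mode="soft")
                                for band in detail_bands))
    # Reconstruction may differ from the input by a border pixel for odd sizes.
    return pywt.waverec2(new_coeffs, wavelet)
```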
Figure 1 below shows the denoising effect in this research (some areas are shown enlarged).

Figure 1. Denoising effect of MRI images through soft thresholding: (a) partial original image; (b) after denoising
2.3 Lesion labeling
The dataset was annotated under the guidance of a specialist radiologist at the partner hospital. The location of each liver occupancy and the nature of the case were determined after taking the patient's diagnostic history into account, and each occupancy in the dataset was labelled as benign or malignant. The lesion location was annotated with the smallest rectangle that completely covers the lesion, using the target detection annotation tool LabelImg [25]. After manual annotation, the software stores the annotation information in an XML file, which holds the location and category of each mass in a structured form that the deep learning algorithm reads during training. As shown in Figure 2, (a) is the original image, (b) is the physician's manual annotation of the lesion location, and (c) is the software representation as a minimum bounding rectangle.
Figure 2. Image annotation: (a) original MRI image; (b) annotation of lesion location by physician; (c) annotation by software
2.4 Image data augmentation
In deep learning, an adequate number of samples is required to ensure the effectiveness of training and the generalisation ability of the model [26]. To obtain sufficient training data, this study therefore enlarges the dataset by data augmentation, so that the image texture and pathological features of the limited base images are also expressed in the augmented images and the sample space is increased. We mainly adopt geometric transformations, rotating each image counterclockwise by 60°, 90°, 180° and 270° and applying horizontal and vertical flips. Rotating an image means rotating every pixel by the same angle about the same origin; the affine transformation is
$\left[\begin{array}{l}x \\ y \\ 1\end{array}\right]=\left[\begin{array}{lll}\cos \theta & \sin \theta & 0 \\ -\sin \theta & \cos \theta & 0 \\ 0 & 0 & 1\end{array}\right]\left[\begin{array}{l}x_0 \\ y_0 \\ 1\end{array}\right]$
Calculation formula for coordinates after horizontal flip:
$\left[\begin{array}{lll}x_1 & y_1 & 1\end{array}\right]=\left[\begin{array}{lll}x_0 & y_0 & 1\end{array}\right]\left[\begin{array}{lcl}-1 & 0 & 0 \\ 0 & 1 & 0 \\ \text { width } & 0 & 1\end{array}\right]$ $=\left[\begin{array}{lll}\text { width }-x_0 & y_0 & 1\end{array}\right]$
Calculation formula for coordinates after vertical flip:
$\left[\begin{array}{lll}x_1 & y_1 & 1\end{array}\right]=\left[\begin{array}{lll}x_0 & y_0 & 1\end{array}\right]\left[\begin{array}{ccc}1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & \text{height} & 1\end{array}\right]=\left[\begin{array}{lll}x_0 & \text{height}-y_0 & 1\end{array}\right]$
After counterclockwise rotation in the four angles and flips in the two directions, the original, rotated and flipped MRI images of the liver formed the training data for subsequent deep learning. Figure 3 below shows the sequence map of one case's T2 image after image augmentation: (a) is the original image, (b) is the image after rotating 60 degrees counterclockwise, (c) is the one after rotating 90 degrees counterclockwise, (d) is the one after rotating 180 degrees counterclockwise, (e) is the one after rotating 270 degrees counterclockwise, (f) is the image after horizontal flip, and (g) is the one after vertical flip.
Figure 3. MRI image data augmentation
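As a concrete illustration of the augmentation described above (four counterclockwise rotations plus horizontal and vertical flips), the sketch below uses Pillow; the file paths in the usage comment are placeholders.

```python
# Sketch of the rotation/flip augmentation described above (illustrative only).
# Requires: pip install pillow
from PIL import Image, ImageOps

ANGLES = [60, 90, 180, 270]  # counterclockwise rotations used in the paper

def augment(image):
    """Return the original image plus its rotated and flipped variants."""
    variants = [image]
    for angle in ANGLES:
        # expand=True keeps the whole rotated frame instead of cropping corners
        variants.append(image.rotate(angle, expand=True))
    variants.append(ImageOps.mirror(image))  # horizontal flip
    variants.append(ImageOps.flip(image))    # vertical flip
    return variants

# Example (path is a placeholder):
# for i, img in enumerate(augment(Image.open("slice_0001.jpg"))):
#     img.save(f"augmented/slice_0001_{i}.jpg")
```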
3. UNet for Liver Segmentation
MRI scans cover the abdominal region, so each image in the dataset contains four organs: the spleen, the liver, and the left and right kidneys. Our goal is to segment the liver area to facilitate subsequent classification and identification of the target of interest. Segmenting the liver is a prerequisite for subsequent feature extraction and accurate classification, and an important step in physicians' quantitative analysis of tumours.
UNet is widely used in medical image segmentation [27]. Its advantages are: (1) multi-scale information extraction: both fine details and coarser abstract information are effectively extracted and retained, and the gradient information of fuzzy boundaries is preserved as far as possible while the impact of noise is reduced; (2) skip connections: the more precise gradient, point and line information from the encoder at a given level is concatenated directly into the decoder at the same level, effectively adding detail to the rough localisation of the target and giving UNet more accurate segmentation results. Considering that UNet is both compact and accurate [28], this paper uses UNet to segment the liver from abdominal MRI data; the network architecture is shown in Figure 4.
In Figure 4, the left side consists of repeated downsampling and convolution, and the right side of repeated upsampling and convolution. The first part of the network performs feature extraction, with one additional scale for each pooling layer passed. In the upsampling part, each upsampled feature map is fused with the feature map of the same scale from the feature extraction path (labelled "copy and crop" in the figure); the feature map is cropped before fusion, and the fusion is a concatenation. The blue arrows represent 3x3 convolutions with stride 1 and valid padding, so the spatial size of the feature map shrinks slightly after each convolution. The red arrows represent 2x2 max pooling operations; since 2x2 max pooling suits images with even height and width, the input size must be chosen appropriately. The green arrows represent 2x2 up-convolutions (upsampling), which double the feature map size. The grey arrows represent the copy-and-crop operation; the last feature map on the left of a level has a slightly larger resolution than the first feature map on the right of the same level, so some cropping is needed to reuse the features from the shallower layers. The final output layer is a 1x1 convolution used for classification, whose two output channels correspond to foreground and background. A comparison of the experimental segmentation results with the physicians' annotations is shown in Figure 5.
Figure 4. UNet architecture
Figure 5. Results of liver segmentation: (a) original image; (b) segmentation result annotated by physicians; (c) experimental segmentation result
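As a compact illustration of the encoder-decoder structure with skip connections described above, the sketch below assembles a small UNet in PyTorch. The channel widths, the use of padded convolutions (which avoid the cropping step of the original UNet) and the input size are simplifying assumptions, not the exact configuration used in this paper.

```python
# Minimal UNet sketch (assumed channel widths; padded convolutions avoid cropping).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, base=32):
        super().__init__()
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.enc3 = double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)                                      # 2x2 max pooling
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)   # up-convolution
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, num_classes, 1)  # 1x1 conv: foreground/background

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: a 256x256 single-channel MRI slice -> 2-channel segmentation logits
# logits = SmallUNet()(torch.randn(1, 1, 256, 256))
```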
4. MRI Liver Image Assisted Diagnosis Based on Improved Faster R-CNN
Target detection in liver MRI refers to localising and diagnosing liver occupancy targets in MRI image data. Accurate localisation of liver occupancies is the fundamental basis for assisting physicians in surgical planning, interventional procedures, and tumour delineation, and the detection results are combined with patient age, clinical comorbidities, and biochemical results to guide post-operative treatment. Among deep learning detectors, the region-based algorithm Faster R-CNN integrates image feature extraction, proposal generation, bounding box regression and target classification into a single network, yielding an efficient, unified end-to-end detector [29] with superior detection speed and accuracy. Therefore, given the many types of liver occupancies and their complex, varied sizes and morphologies, this paper takes the Faster R-CNN framework as the basis and proposes an improved Faster R-CNN algorithm that integrates the recognition and benign/malignant classification of occupancies in MRI liver images.
4.1 Network model design
Faster R-CNN detects the input liver segmentation images in three steps. First, the target features in the input image are extracted by the pre-trained feature extraction network. Then, the region proposal network (RPN) uses the extracted features to generate a number of regions of interest (ROIs) that may contain lesions, estimating the class and location of each target. Finally, the image features and ROIs are fed into the ROI pooling unit of Faster R-CNN to extract per-region features; Softmax regression classifies each ROI to determine the class of liver occupancy, while bounding box regression fine-tunes the ROI positions to obtain the final, accurate detection box, i.e., to localise the lesion. The network architecture of Faster R-CNN is shown in Figure 6.
4.2 CondenseNet feature extraction network
Previous work has shown that, for small-target detection in medical images, using DenseNet as the feature extraction network of Faster R-CNN outperforms both the VGG16 backbone employed in the original Faster R-CNN and ResNet [30].
However, one of DenseNet's biggest drawbacks is its large GPU memory consumption, mainly due to the many extra feature maps it generates. To reduce memory consumption during training, Gao Huang's group at Cornell University [31] optimised DenseNet in 2018 using group convolutions and pruning during training to reduce memory and increase speed, making the network more computationally efficient with fewer stored parameters. Hasan and Linte [32] used CondenseUNet in 2020 for bi-ventricular blood pool and myocardium segmentation in cardiac cine MRI (CMR) imaging; their experiments on the Automated Cardiac Diagnosis Challenge (ACDC) dataset showed that CondenseUNet needs half (50%) of the memory of DenseNet and about one-twelfth (approximately 8%) of the memory of UNet, while maintaining excellent cardiac segmentation accuracy. Accordingly, this study uses the CondenseNet architecture for feature extraction, to obtain better network performance while keeping memory requirements suitable for MRI images. CondenseNet is characterised by: (1) group convolutions, extended to the 1x1 convolutions through learned group convolution; (2) pruning of weights during training, instead of pruning a trained model; (3) dense connectivity across blocks, on top of DenseNet's within-block connectivity. The CondenseNet configuration used for this dataset is shown in Table 2.
Figure 6. Faster R-CNN architecture
Table 2. CondenseNet network architecture

Feature map sizes and layers, in order:
112×112: 3×3 conv, stride 2
[1×1 L-conv; 3×3 G-conv] × 4 (k=8)
56×56: 2×2 average pool, stride 2
[1×1 L-conv; 3×3 G-conv] × 6 (k=16)
[1×1 L-conv; 3×3 G-conv] × 10 (k=64)
7×7: [1×1 L-conv; 3×3 G-conv] × 8 (k=128)
7×7 global average pool
1000D fully-connected, softmax
4.3 RPN and anchor design
The function of the RPN is to generate candidate regions for liver occupancy detection. For any input feature map, the RPN computes a set of candidate regions, each with a score between 0 and 1 indicating the confidence that the region contains a foreground target. To generate candidate regions, the RPN slides a 3x3 window over the shared feature map and, at each of the n×n window positions, places k anchors of different shapes to enrich the prediction range. Each anchor position yields k candidate regions, so for an input feature map of size W×H the RPN obtains W×H×k anchors with translation invariance. In this study, the lesion sizes of the 93 patients in the constructed dataset were analysed statistically; the occupancies range from 8 mm to 80 mm. Based on the proportion of these sizes on the original MRI images and the corresponding receptive field of the shared convolutional feature map output by CondenseNet, we designed three scales and three aspect ratios, combined into nine anchor shapes: areas of $72^2$, $288^2$ and $512^2$ pixels, with aspect ratios of 1:1, 2:1 and 1:2, respectively.
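A rough torchvision sketch of a detector configured with these anchor scales and aspect ratios is shown below. A MobileNetV2 feature extractor stands in for the CondenseNet backbone, since CondenseNet is not bundled with torchvision, so the block illustrates the anchor design rather than the paper's exact model.

```python
# Sketch of a Faster R-CNN with the paper's anchor design (stand-in backbone).
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Backbone: MobileNetV2 features as a stand-in for CondenseNet (assumption).
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # FasterRCNN needs the backbone's output channel count

# Anchors matching the paper: scales 72, 288, 512 and ratios 1:1, 2:1, 1:2.
anchor_generator = AnchorGenerator(sizes=((72, 288, 512),),
                                   aspect_ratios=((1.0, 2.0, 0.5),))

# ROI pooling over the single backbone feature map.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"],
                                                output_size=7,
                                                sampling_ratio=2)

# Three classes: background, benign occupancy, malignant occupancy.
model = FasterRCNN(backbone,
                   num_classes=3,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
```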
4.4 Transfer learning model training
To obtain good prediction performance, we also employed transfer learning to train the network model and address the problem of insufficient data. Although data augmentation was performed, the MRI liver detection dataset we constructed is still small relative to the number of neural network parameters, which tends to cause overfitting during training and poor recognition results. The improved network model is therefore first pre-trained on a large open dataset of natural images, so that the network learns general image texture patterns in advance; the resulting parameters initialise our model, which is then fine-tuned on the liver occupancy dataset.
The commonly used open datasets for natural images include ImageNet, an image classification dataset, and PascalVOC, a target detection dataset. ImageNet contains more than 1.5 million annotated natural images covering over 1000 item categories. Pascal VOC consists of Pascal VOC 2007 and Pascal VOC 2012, together containing a total of more than 30,000 images, 70,000 detection targets and 20 categories [33]. Transfer learning can be divided into partial transfer learning and full transfer learning. Partial transfer learning refers to loading some of the network architecture parameters from a pre-trained model, such as loading only a few specific convolutional layers; full transfer learning refers to loading the complete network parameters from the pre-trained model. In the Faster R-CNN training in this study, we used both.
(1) Pre-training CondenseNet on ImageNet by first performing partial transfer learning of the feature extraction network.
(2) Full transfer learning was then performed: the Faster R-CNN structure was pre-trained on the Pascal VOC 2007+2012 dataset, and the resulting network parameters were fine-tuned on the MRI liver occupancy dataset (see the sketch below).
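The two-stage strategy can be sketched with torchvision utilities as follows. The ResNet-50 FPN detector and COCO weights are stand-ins (torchvision ships neither a CondenseNet-based Faster R-CNN nor Pascal VOC detection weights), so the block only illustrates the idea of initialising from pre-trained weights and fine-tuning the detection head on the liver dataset.

```python
# Illustrative two-stage transfer learning sketch (stand-in models and weights).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Stage 1 analogue: start from a backbone pre-trained on ImageNet and a detector
# pre-trained on a large natural-image detection dataset (COCO here).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Stage 2: replace the box predictor for 3 classes (background, benign, malignant)
# and fine-tune on the MRI liver occupancy dataset.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=3)

# Fine-tune with a small learning rate; only parameters requiring gradients are updated.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=5e-4)
# for images, targets in liver_train_loader:   # hypothetical DataLoader
#     losses = model(images, targets)          # training mode returns a loss dict
#     loss = sum(losses.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```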
5. Experimental Results
To demonstrate the effectiveness of the Faster R-CNN optimisation and improvement in this paper, two evaluation metrics are used to assess the detection and classification performance of the Faster R-CNN model trained in this paper. The first evaluation metric is the Mean Average Precision (mAP), which is commonly used in target detection, and the other is the Free-response Receiver Operating Characteristic (FROC) curve. A comparison was made between the detection and classification performance of a model trained using the original Faster R-CNN network and the improved model in this paper. The original Faster R-CNN refers to the model obtained by using VGG16 as the backbone network and trained based on the original anchor size and without using transfer learning.
The experimental evaluation was performed on the MRI liver dataset constructed in this paper, derived from contrast-enhanced liver MRI scans of 93 patients, of whom 15 had benign occupancies and 78 had malignant occupancies. The dataset contains a total of 3,906 MRI liver images (original plus augmented), of which 558 are original images; among these 558 images, 90 show benign occupancies and 468 show malignant occupancies.
5.1 Mean average precision
The precision of the liver occupancy detection algorithm for a given category A is calculated as:

$\text{precision}_A=\frac{TP}{TP+FP}=\frac{N(\text{TruePositives})_A}{N(\text{GroundTruths})_A}$
The average precision (AP) of category A is defined, under the assumption that each MRI image in the test set carries ground-truth annotations for all categories, as the sum of the per-image precisions for category A divided by the number of test images that contain ground-truth annotations for category A:

$\text{AP}_A=\frac{\sum \text{precision}_A}{N(\text{total images})_A}$
The mean average precision is then the mean of the AP values over all categories:

$\text{mAP}=\frac{\sum_A \text{AP}_A}{N(\text{classes})}$
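A small numeric sketch of these formulas is given below. It follows the per-image averaging defined above rather than the area-under-PR-curve definition used by some benchmarks, and the example numbers are made up.

```python
# Toy sketch of per-class AP and mAP as defined above (made-up numbers).
import numpy as np

def average_precision(per_image_precisions, images_with_class):
    """AP for one class: sum of per-image precisions / #images containing the class."""
    return float(np.sum(per_image_precisions)) / max(images_with_class, 1)

def mean_average_precision(ap_per_class):
    """mAP: mean of the per-class AP values."""
    return float(np.mean(list(ap_per_class.values())))

# Example with two classes (benign, malignant):
ap = {
    "benign": average_precision([0.5, 0.8, 0.7], images_with_class=3),
    "malignant": average_precision([0.9, 0.8, 0.85, 0.8], images_with_class=4),
}
print(ap, mean_average_precision(ap))
```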
Figure 7. PR curves for evaluating the performance of Faster R-CNN on MRI liver datasets: (a) the original Faster R-CNN; (b) the improved model in this paper
In practice, the mAP value is usually computed by plotting the precision-recall (PR) curve. Figure 7 and Table 3 compare the evaluation results of the improved Faster R-CNN model and the original Faster R-CNN model on the constructed MRI liver occupancy dataset.
As shown in Figure 7(a), the original Faster R-CNN did not detect and classify benign occupancies well (AP = 0.648), because benign occupancies are under-represented in the dataset and, being small, are not easily identified. The original model nevertheless had high detection accuracy for the majority class of malignant tumours (AP = 0.842), suggesting that it was strongly affected by inter-class imbalance. The improved model (Figure 7(b)) used data augmentation to alleviate the class imbalance, CondenseNet to improve feature extraction, custom-designed anchors matched to the lesion sizes, and transfer learning pre-training. As a result, it achieved a more balanced detection and classification performance for benign and malignant tumours, with the mAP improving from 0.745 to 0.848 (see Table 3).
Table 3. Comparison of the mAP of the original Faster R-CNN model and the improved model in this paper (columns: benign occupancy (AP), malignant occupancy (AP))
Original Faster R-CNN model: benign AP = 0.648, malignant AP = 0.842, mAP = 0.745
Improved Faster R-CNN model: mAP = 0.848
5.2 Receiver operating characteristic (ROC) curve
Figure 8. FROC curves of the original Faster R-CNN model and the improved model in this paper: (a) original Faster R-CNN model; (b) improved Faster R-CNN model
The ROC Area Under the Curve (AUC) is an evaluation metric frequently used in detection and classification tasks. Owing to the nature of medical tasks, predictions must achieve high recall and sensitivity to avoid missing malignant cases, so a certain number of false positives can be tolerated. For target detection on medical images, the FROC curve, a variant of the ROC curve [34], is therefore commonly used to evaluate predictive performance. FROC replaces the false positive rate on the horizontal axis with the mean number of false positives per image, so the curve shows the sensitivity that can be obtained at a given level of false positives per image. Figure 8 shows the FROC curves and their AUC values for the original Faster R-CNN model and the improved model in this paper.
The FROC curves in Figure 8 and the results in Table 4 were obtained on our dataset using 100 threshold points uniformly distributed between 0 and 1 as IoU thresholds. Compared with the original Faster R-CNN model (Sen = 0.912 at FP = 0.432), the improved model reaches a higher sensitivity peak at a lower false positive rate (Sen = 0.948 at FP = 0.402), and its sensitivity is higher than that of the original model at the same false positive level. In addition, we extended the maximum value of the horizontal coordinate of the FROC curve to 1, kept the maximum value of the vertical coordinate unchanged, and computed the AUC of the FROC curve; the corresponding AUC values of the original Faster R-CNN model and the improved model are 0.848 and 0.926, respectively. These findings suggest that the proposed improvements help Faster R-CNN detect liver occupancies and classify them as benign or malignant on MRI liver images.
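The FROC evaluation can be sketched as follows: for each confidence threshold, detections above the threshold are counted as true or false positives, sensitivity is computed over all annotated lesions, and the mean number of false positives per image is recorded. The data layout and threshold grid are assumptions for illustration, not the evaluation code used in the paper.

```python
# Illustrative FROC computation: sensitivity vs. mean false positives per image.
import numpy as np

def froc_points(per_image_results, num_lesions, thresholds=np.linspace(0, 1, 100)):
    """per_image_results: list (one entry per image) of lists of (score, is_true_positive)."""
    points = []
    for t in thresholds:
        tp = fp = 0
        for detections in per_image_results:
            for score, is_tp in detections:
                if score >= t:
                    tp += int(is_tp)
                    fp += int(not is_tp)
        sensitivity = tp / max(num_lesions, 1)          # assumes each TP matches a distinct lesion
        mean_fp = fp / max(len(per_image_results), 1)   # false positives per image
        points.append((mean_fp, sensitivity))
    return points

# Made-up example: two images, three annotated lesions in total.
# results = [[(0.9, True), (0.4, False)], [(0.8, True), (0.3, True)]]
# print(froc_points(results, num_lesions=3)[:3])
```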
Table 4. Comparison of the sensitivity of the original Faster R-CNN model and the improved model (columns: sensitivity at FP = 0.125, sensitivity at FP = 0.25, highest sensitivity)
Original Faster R-CNN model: highest sensitivity 0.912 (FP = 0.432)
Improved Faster R-CNN model: highest sensitivity 0.948 (FP = 0.402)
6. Conclusion

With the development of medical big data and MRI imaging technology, liver MRI offers higher sensitivity and a higher cancer detection rate than CT, making early detection and diagnosis of liver cancer possible. Building on this, this paper first constructs, in collaboration with the partner hospital, a standard dataset for early detection and diagnosis of liver cancer, addressing the current lack of MRI liver datasets available for research on computer-aided detection and diagnosis of liver cancer. Wavelet-based soft-threshold denoising is used in the image pre-processing stage to remove thermal and physiological imaging noise from the MRI images. The dataset is then annotated, under the guidance of a specialist radiologist, with the location of each lesion and whether it is benign or malignant. In addition, to increase the data volume, the original data are augmented with geometric transformations, increasing the texture information available per image and the overall size of the dataset. The paper then proposes a computer-aided detection and diagnosis system based on the improved Faster R-CNN algorithm. Experimental comparison with the original Faster R-CNN model demonstrates that the proposed method achieves higher detection sensitivity on the constructed MRI standard dataset. This work can provide a second opinion that improves radiologists' efficiency, helps meet their image-reading workload, and supports physicians in the early diagnosis of liver cancer.
This paper was supported by the Construction Fund of Key medical disciplines of Hangzhou (Grant No.: OO20200265).
[1] Wong, R.J., Ahmed, A. (2020). Understanding gaps in the hepatocellular carcinoma cascade of care: opportunities to improve hepatocellular carcinoma outcomes. Journal of Clinical Gastroenterology, 54(10): 850-856. https://doi.org/10.1097/MCG.0000000000001422
[2] Heimbach, J.K., Kulik, L.M., Finn, R.S., Sirlin, C.B., Abecassis, M.M., Roberts, L.R., Marrero, J.A. (2018). AASLD guidelines for the treatment of hepatocellular carcinoma. Hepatology, 67(1): 358-380. https://doi.org/10.1002/hep.29086
[3] Zhang, C.H., Ni, X.C., Chen, B.Y., Qiu, S.J., Zhu, Y.M., Luo, M. (2019). Combined preoperative albumin-bilirubin (ALBI) and serum γ-glutamyl transpeptidase (GGT) predicts the outcome of hepatocellular carcinoma patients following hepatic resection. Journal of Cancer, 10(20): 4836-4845. https://doi.org/10.7150/jca.33877
[4] Medical Administration and Hospital Administration of the National Health Commission of the People's Republic of China. (2019). Guidelines for the diagnosis and treatment of primary liver cancer. Chinese Journal of Liver Diseases, 28(2): 112-128.
[5] Wang, H., Naghavi, M., Allen, C., Barber, R.M., Bhutta, Z.A., Carter, A., Bell, M.L. (2016). Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980-2015: A systematic analysis for the Global Burden of Disease Study 2015. The Lancet, 388(10053): 1459-1544. https://doi.org/10.1016/S0140-6736(16)31012-1
[6] Chen, W., Zheng, R., Baade, P.D., Zhang, S., Zeng, H., Bray, F., He, J. (2016). Cancer statistics in China, 2015. CA: A Cancer Journal for Clinicians, 66(2): 115-132. https://doi.org/10.3322/caac.21338
[7] Elsayes, K.M., Hooker, J.C., Agrons, M.M., Kielar, A.Z., Tang, A., Fowler, K.J., Sirlin, C.B. (2017). 2017 Version of LI-RADS for CT and MR Imaging: An Update. Radiographics: A Review Publication of the Radiological Society of North America, Inc, 37(7): 1994-2017. https://doi.org/10.1148/rg.2017170098
[8] Ayuso, C., Rimola, J., Vilana, R., Burrel, M., Darnell, A., García-Criado, Á., Brú, C. (2018). Diagnosis and staging of hepatocellular carcinoma (HCC): Current guidelines. European Journal of Radiology, 101: 72-81. https://doi.org/10.1016/j.ejrad.2018.01.025
[9] Marrero, J.A., Kulik, L.M., Sirlin, C.B., Zhu, A.X., Finn, R.S., Abecassis, M.M., Heimbach, J.K. (2018). Diagnosis, staging, and management of hepatocellular carcinoma: 2018 practice guidance by the American association for the study of liver diseases. Hepatology, 68(2): 723-750. https://doi.org/10.1002/hep.29913
[10] Choi, J.Y., Cho, H.C., Sun, M., Kim, H.C., Sirlin, C.B. (2013). Indeterminate observations (liver imaging reporting and data system category 3) on MRI in the cirrhotic liver: fate and clinical implications. American Journal of Roentgenology, 201(5): 993-1001. https://doi.org/10.2214/ajr.12.10007
[11] Jiang, J., Wang, W., Cui, Y.N., Zhang, M.W., Chen, D., Fang, X., Liu, A.L. (2021). Evaluation of the diagnostic value of CT and MRI for hepatocellular carcinoma less than or equal to 3 cm based on the 2018 version of the liver imaging report and data system. Magnetic Resonance Imaging, 12(9): 25-29, 44. https://doi.org/10.12015/issn.1674-8034.2021.09.006
[12] Meng, L., Tian, Y., Bu, S. (2020). Liver tumor segmentation based on 3D convolutional neural network with dual scale. Journal of Applied Clinical Medical Physics, 21(1): 144-157. https://doi.org/10.1002/acm2.12784
[13] Tang, W., Zou, D., Yang, S., Shi, J., Dan, J., Song, G. (2020). A two-stage approach for automatic liver segmentation with Faster R-CNN and DeepLab. Neural Computing and Applications, 32(11): 6769-6778. https://doi.org/10.1007/s00521-019-04700-0
[14] Li, X., Chen, H., Qi, X., Dou, Q., Fu, C.W., Heng, P.A. (2018). H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Transactions on Medical Imaging, 37(12): 2663-2674. https://doi.org/10.1109/TMI.2018.2845918
[15] Bousabarah, K., Letzen, B., Tefera, J., Savic, L., Schobert, I., Schlachter, T., Lin, M. (2021). Automated detection and delineation of hepatocellular carcinoma on multiphasic contrast-enhanced MRI using deep learning. Abdominal Radiology, 46(1): 216-225. https://doi.org/10.1007/s00261-020-02604-5
[16] Kim, J., Min, J.H., Kim, S.K., Shin, S.Y., Lee, M.W. (2020). Detection of hepatocellular carcinoma in contrast-enhanced magnetic resonance imaging using deep learning classifier: A multi-center retrospective study. Scientific Reports, 10(1): 1-11. https://doi.org/10.1038/s41598-020-65875-4
[17] Zhao, J., Li, D., Kassam, Z., Howey, J., Chong, J., Chen, B., Li, S. (2020). Tripartite-GAN: Synthesizing liver contrast-enhanced MRI to improve tumor detection. Medical Image Analysis, 63: 101667. https://doi.org/10.1016/j.media.2020.101667
[18] Li, C., Zhou, Y., Li, Y., Yang, S. (2021). A coarse-to-fine registration method for three-dimensional MR images. Medical & Biological Engineering & Computing, 59(2): 457-469. https://doi.org/10.1007/s11517-021-02317-x
[19] Yang, Z.H., Feng, F., Wang, X.Y. (2010). Guidelines for magnetic resonance imaging techniques: Examination norms, clinical strategies, and new technologies (Revised Edition). Chinese Journal of Medical Imaging, 2010(4): 312.
[20] Oladiran, O., Gichoya, J., Purkayastha, S. (2017). Conversion of JPG image into DICOM image format with one click tagging. In International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management, pp. 61-70. https://doi.org/10.1007/978-3-319-58466-9_6
[21] Hadiyoso, S., Zakaria, H., Ong, P.A., Mengko, T.L.E.R. (2021). Hemispheric coherence analysis of wide band EEG signals for characterization of post-stroke patients with dementia. Traitement du Signal, 38(4): 985-992. https://doi.org/10.18280/ts.380408
[22] Das, A., Agrawal, S., Samantaray, L., Panda, R., Abraham, A. (2020). State-of-the art optimal multilevel thresholding methods for brain MR image analysis. Revue d'Intelligence Artificielle, 34(3): 243-256. https://doi.org/10.18280/ria.340302
[23] Pal, C., Das, P., Chakrabarti, A., Ghosh, R. (2017). Rician noise removal in magnitude MRI images using efficient anisotropic diffusion filtering. International Journal of Imaging Systems and Technology, 27(3): 248-264. https://doi.org/10.1002/ima.22230
[24] Ismael, A.A., Baykara, M. (2021). Digital image denoising techniques based on multi-resolution wavelet domain with spatial filters: A review. Traitement du Signal, 38(3): 639-651. https://doi.org/10.18280/ts.380311
[25] Zhou, Y., Liu, W.P., Luo, Y.Q., Zong, S.X. (2021). Small object detection for infected trees based on the deep learning method. Scientia Silvae Sinicae, 57(3): 98-107.
[26] Ge, C., Gu, I.Y.H., Jakola, A.S., Yang, J. (2020). Enlarged training dataset by pairwise GANs for molecular-based brain tumor classification. IEEE Access, 8: 22560-22570. https://doi.org/10.1109/ACCESS.2020.2969805
[27] Yang, X., Liu, L., Li, T. (2022). MR-UNet: An UNet model using multi-scale and residual convolutions for retinal vessel segmentation. International Journal of Imaging Systems and Technology, 32(5): 1588-1603. https://doi.org/10.1002/ima.22728
[28] Cai, S., Wu, Y., Chen, G. (2022). A novel elastomeric UNet for medical image segmentation. Frontiers in Aging Neuroscience, 14: 841297. https://doi.org/10.3389/fnagi.2022.841297
[29] Ren, S., He, K., Girshick, R., Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6): 1137-1149. https://doi.org/10.1109/TPAMI.2016.2577031
[30] Uemura, T., Näppi, J.J., Hironaka, T., Kim, H., Yoshida, H. (2020). Comparative performance of 3D-DenseNet, 3D-ResNet, and 3D-VGG models in polyp detection for CT colonography. In Medical Imaging 2020: Computer-Aided Diagnosis, 11314: 736-741. https://doi.org/10.1117/12.2549103
[31] Huang, G., Liu, S., Van der Maaten, L., Weinberger, K.Q. (2018). Condensenet: An efficient densenet using learned group convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2752-2761. https://doi.org/10.1109/CVPR.2018.00291
[32] Hasan, S.K., Linte, C.A. (2020). CondenseUNet: a memory-efficient condensely-connected architecture for bi-ventricular blood pool and myocardium segmentation. In Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, 11315: 402-408. https://doi.org/10.1117/12.2550640
[33] Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A. (2010). The pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2): 303-338. https://doi.org/10.1007/s11263-009-0275-4
[34] Bandos, A.I., Obuchowski, N.A. (2019). Evaluation of diagnostic accuracy in free-response detection-localization tasks using ROC tools. Statistical Methods in Medical Research, 28(6): 1808-1825. https://doi.org/10.1177/0962280218776683 | CommonCrawl |
WALLABY Pilot Survey: Public release of HI kinematic models for more than 100 galaxies from phase 1 of ASKAP pilot observations
N. Deg, K. Spekkens, T. Westmeier, T. N. Reynolds, P. Venkataraman, S. Goliath, A. X. Shen, R. Halloran, A. Bosma, B Catinella, W. J. G. de Blok, H. Dénes, E. M. DiTeodoro, A. Elagali, B.-Q. For, C Howlett, G. I. G. Józsa, P. Kamphuis, D. Kleiner, B Koribalski, K. Lee-Waddell, F. Lelli, X. Lin, C. Murugeshan, S. Oh, J. Rhee, T. C. Scott, L. Staveley-Smith, J. M. van der Hulst, L. Verdes-Montenegro, J. Wang, O. I. Wong
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 15 November 2022, e059
We present the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY) Pilot Phase I Hi kinematic models. This first data release consists of Hi observations of three fields in the direction of the Hydra and Norma clusters, and the NGC 4636 galaxy group. In this paper, we describe how we generate and publicly release flat-disk tilted-ring kinematic models for 109/592 unique Hi detections in these fields. The modelling method adopted here, which we call the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) and for which the corresponding scripts are also publicly available, consists of combining results from the homogeneous application of the FAT and 3DBarolo algorithms to the subset of 209 detections with sufficient resolution and $S/N$ in order to generate optimised model parameters and uncertainties. The 109 models presented here tend to be gas-rich detections resolved by at least 3–4 synthesised beams across their major axes, but there is no obvious environmental bias in the modelling. The data release described here is the first step towards the derivation of similar products for thousands of spatially resolved WALLABY detections via a dedicated kinematic pipeline. Such a large publicly available and homogeneously analysed dataset will be a powerful legacy product that will enable a wide range of scientific studies.
WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations
T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said
We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$ . The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{{\odot}}$ . The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5 \sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{{\odot}}$ across 50 spectral channels ( ${\approx} 200\,\mathrm{km \, s}^{-1}$ ) and a $5 \sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels ( ${\approx} 20\,\mathrm{km \, s}^{-1}$ ) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts are still affecting the data quality. Most notably, there are systematic flux errors of up to several 10% caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey.
3-D interactive visualisation tools for Hi spectral line imaging
J. M. van der Hulst, D. Punzo, J. B. T. M. Roerdink
Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S325 / October 2016
Published online by Cambridge University Press: 30 May 2017, pp. 305-310
Print publication: October 2016
Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.
The Void Galaxy Survey: Galaxy Evolution and Gas Accretion in Voids
Kathryn Kreckel, Jacqueline H. van Gorkom, Burcu Beygu, Rien van de Weygaert, J. M. van der Hulst, Miguel A. Aragon-Calvo, Reynier F. Peletier
Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S308 / June 2014
Published online by Cambridge University Press: 12 October 2016, pp. 591-599
Voids represent a unique environment for the study of galaxy evolution, as the lower density environment is expected to result in shorter merger histories and slower evolution of galaxies. This provides an ideal opportunity to test theories of galaxy formation and evolution. Imaging of the neutral hydrogen, central in both driving and regulating star formation, directly traces the gas reservoir and can reveal interactions and signs of cold gas accretion. For a new Void Galaxy Survey (VGS), we have carefully selected a sample of 59 galaxies that reside in the deepest underdensities of geometrically identified voids within the SDSS at distances of ∼100 Mpc, and pursued deep UV, optical, Hα, IR, and HI imaging to study in detail the morphology and kinematics of both the stellar and gaseous components. This sample allows us to not only examine the global statistical properties of void galaxies, but also to explore the details of the dynamical properties. We present an overview of the VGS, and highlight key results on the HI content and individually interesting systems. In general, we find that the void galaxies are gas rich, low luminosity, blue disk galaxies, with optical and HI properties that are not unusual for their luminosity and morphology. We see evidence of both ongoing assembly, through the gas dynamics between interacting systems, and significant gas accretion, seen in extended gas disks and kinematic misalignments. The VGS establishes a local reference sample to be used in future HI surveys (CHILES, DINGO, LADUMA) that will directly observe the HI evolution of void galaxies over cosmic time.
Liver protein and glutamine metabolism during cachexia
Karel W. E. Hulsewé, Nicolaas E. P. Deutz, Ivo De Blaauw, Rene R. W. J. Van Der Hulst, Maarten M. F. Von Meyenfeldt, Peter B. Soeters
Journal: Proceedings of the Nutrition Society / Volume 56 / Issue 2 / July 1997
Published online by Cambridge University Press: 28 February 2007, pp. 801-806
IC 4200: an early-type galaxy formed via a major merger.
Paolo Serra, S. C. Trager, J. M. van der Hulst, T. A. Oosterloo, R. Morganti, J. H. van Gorkom
Journal: Proceedings of the International Astronomical Union / Volume 2 / Issue S241 / December 2006
Published online by Cambridge University Press: 01 December 2006, pp. 428-429
Print publication: December 2006
Recent observations have revealed a class of unusually HI-rich early-type galaxies. By combining observations of their morphology, stellar populations and neutral hydrogen we aim to understand how these galaxies fit into the hierarchical formation paradigm. Here we present the result of our radio and optical observations of a test case galaxy, the E/S0 IC 4200.
Local galaxies as damped Lyman-$\alpha$ absorber analogues
M. A. Zwaan, J. M. van der Hulst, F. H. Briggs, M. A. W. Verheijen, E. V. Ryan-Weber
Journal: Proceedings of the International Astronomical Union / Volume 1 / Issue C199 / March 2005
Print publication: March 2005
We calculate in detail the expected properties of low redshift DLAs under the assumption that they arise in the gaseous disks of galaxies like those in the $z\approx 0$ population. A sample of 355 nearby galaxies were analysed, for which high quality H I 21-cm emission line maps are available as part of an extensive survey with the Westerbork telescope (WHISP). We find that expected luminosities, impact parameters between quasars and DLA host galaxies, and metal abundances are in good agreement with the observed properties of DLAs and DLA galaxies. The measured redshift number density of $z=0$ gas above the DLA limit is $dN/dz=0.045\pm 0.006$, which compared to higher $z$ measurements implies that there is no evolution in the co-moving density of DLAs along a line of sight between $z\sim 1.5$ and $z=0$, and a decrease of only a factor of two from $z\sim 4$ to the present time. We conclude that the local galaxy population can explain all properties of low redshift DLAs.
The Interstellar Medium in Nearby Galaxies
J. M. Van Der Hulst
Journal: Highlights of Astronomy / Volume 9 / 1992
Published online by Cambridge University Press: 30 March 2016, pp. 101-107
Print publication: 1992
Recent observations of the several phases of the interstellar medium in galaxies reveal a wide range of structures and physical properties. The HI structure in nearby galaxies is very filamentary with a large number of shells and filaments suggestive of a close interaction between the star formation in the disk and the surrounding interstellar medium.
The HI surface density in low surface brightness galaxies
J. M. Van Der Hulst, E. D. Skillman, G. D. Bothun, T. R. Smith
Journal: Symposium - International Astronomical Union / Volume 149 / 1992
Published online by Cambridge University Press: 07 August 2017, p. 499
The HI surface density of 8 low surface brightness galaxies falls below the critical density for star formation. This may explain why these galaxies appear so unevolved and are generally deficient in molecular gas.
The Mass of the Binary Galaxies NGC 4038/39 (The "Antennae")
J. M. Mahoney, B. F. Burke, J.M. van der Hulst
Published online by Cambridge University Press: 04 August 2017, p. 94
The binary galaxies NGC 4038/39 have extended filamentary arms generated by tidal interactions (Toomre and Toomre, Ap. J. 178; 623, (1972)(TT)). The velocity field was determined by HI observations taken with the VLA (a facility operated by the NRAO under contract with the NSF), and the combined velocity and morphological information was used to constrain the allowed orbital parameters, halo characteristics, and dynamical friction. TT-type calculations were carried out with central masses and rings of test particles, and the calculated results compared with the data. Using disk orientations derived from optical data (Rubin et al., (1970) Ap. J. 160 81), and solving for the six remaining orbital parameters, central potential softening constant (representing the halo), and frictional relaxation time, a good fit between the model and the radio data was found. The best model is shown in Figure 1, and is superimposed on an HI column density map in Figure 2. The orbit is well-determined, and must be nearly parabolic; the pair are interacting for the first time, and if the galaxies have extensive massive halos much larger than their discs, then their tidal arms would be shorter and stubbier than observed. More limited halos are allowed; each galaxy could have up to 80% of its total mass in a halo, but the halos cannot be much larger than the discs. A halo several times larger than the disc, with 10 to 20 times the disc mass, is not permitted by the data.
Radio Supernovae
K. W. Weiler, R. A. Sramek, J. M. van der Hulst, N. Panagia
Published online by Cambridge University Press: 04 August 2017, pp. 171-176
Three supernovae have so far been detected in the radio range shortly after their optical outbursts. All are Type IIs. A fourth supernova, a Type I, is being monitored for radio emission but, at an age of approximately one year, has not yet been detected. For two of the supernovae, extensive data are presented on their "light curves" and spectra and models which have been suggested in the literature are discussed.
HI in the Barred Spiral Galaxies NGC 1365 and NGC 1097.
J. M. van der Hulst, M. P. Ondrechen, J. H. van Gorkom, E. Hummel
In this paper we present preliminary results from 21-cm line observations with the Very Large Array (VLA) of the southern barred spiral galaxies NGC 1365 and NGC 1097. Despite a wealth of theoretical models describing the gas flow in a non-axisymmetric bar potential (see Prendergast this volume), few observations of the HI distribution and motions in barred spiral galaxies exist. A notable exception is NGC 5383 (Sancisi et al. 1979). The observations we performed with the VLA are described below. The velocity resolution is 25 km sec−1. The angular resolution is 28″x20″, p.a. 20° for NGC 1365 and 30″x25″, p.a. 20° for NGC 1097. Velocities are heliocentric.
The Radio Emission of Interacting Galaxies
E. Hummel, J. M. van der Hulst, J. H. van Gorkom, C. G. Kotanyi
Journal: Symposium - International Astronomical Union / Volume 97 / 1982
Published online by Cambridge University Press: 14 August 2015, pp. 93-94
Gravitational interaction is a straightforward interpretation of some of the peculiar optical morphologies shown by galaxies. There have also been attempts to study the effects of a gravitational interaction on the radio continuum emission. Statistically, the central radio sources (inner 1 kpc) in interacting spiral galaxies are about three times stronger than in isolated spirals; on the other hand, the intensity of the extended emission does not seem to be affected (Stocke, 1978; Hummel, 1981). Peculiar radio morphologies are not a general property of interacting galaxies, since in the complete sample studied by Hummel (1981) of spirals with a probability ≥0.8 of being physically related to their companion, less than 5% have a peculiar radio morphology.
Detection of a Broad HI Absorption Feature at 5300 km Sec−1 Associated with NGC 1275 (3C84)
P. C. Crane, J. M. van der Hulst, A. D. Haschick
Observations of NGC 1275 at ∼ 1396 MHz with the NRAO line interferometer in 1974 and 1976 suggest the presence of a very broad, shallow HI absorption feature centered at ∼ 5300 km sec−1. These observations were repeated in 1981 June with the Very Large Array using a greater bandwidth to determine a satisfactory baseline.
Extragalactic Radio Supernovae in NGC 4321 and NGC 6946
R. A. Sramek, K. W. Weiler, J. M. van der Hulst
The supernovae SN1979c in NGC 4321 and SN1980k in NGC 6946 have both been detected at centimeter wavelengths at the VLA. The radio emission turns on very rapidly, but may be delayed by as much as a year with respect to the optical outburst. In both supernovae, the 20 cm radiation peaks after the 6 cm, and the radio emission has a very slow post-maximum decay.
Stephan's Quintet Revisited
J. M. van der Hulst, A. H. Rots
VLA observations at 1465 MHz of the Stephan's Quintet region reveal that the arc-shaped area of emission discussed by Allen and Hartsuiker (1972) breaks up into several components. The idea that NGC 7318b is a recent interloper in the group and that the interaction resulting from this event causes the enhanced activity at the east side of NGC 7318b is adopted as still the most reasonable explanation. The results are discussed in more detail in another paper (van der Hulst and Rots, 1981).
Radio Continuum Emission from the Nuclei of Normal Galaxies
During the last few years detailed and sensitive observations of the radio emission from the nuclei of many normal spiral galaxies have become available. Observations from the Very Large Array (VLA) of the National Radio Astronomy Observatory (NRAO), in particular, enable us to distinguish details on a scale of ≤100 pc for galaxies at distances less than 21 Mpc. The best studied nucleus, however, still is the center of our own Galaxy (see Oort 1977 and references therein). Its radio structure is complex. It consists of an extended non-thermal component 200 × 70 pc in size, with embedded therein several giant HII regions and the central source Sgr A (~9 pc in size). Sgr A itself consists of a thermal source, Sgr A West, located at the center of the Galaxy, and a weaker, non-thermal source, Sgr A East. Sgr A West moreover contains a weak, extremely compact (≤10 AU) source. The radio morphology of several other galactic nuclei is quite similar to that of the Galactic Center, as will be discussed in section 2. Recent reviews of the radio properties of the nuclei of normal galaxies have been given by Ekers (1978a,b) and De Bruyn (1978). The latter author, however, concentrates on galaxies with either active nuclei or an unusual radio morphology. In this paper I will describe recent results from the Westerbork Synthesis Radio Telescope (WSRT, Hummel 1979), the NRAO 3-element interferometer (Carlson, 1977; Condon and Dressel 1978), and the VLA (Heckman et al., 1979; Van der Hulst et al., 1979). I will discuss the nuclear radio morphology in section 2, the luminosities in section 3, and the spectra in section 4. In section 5 I will briefly comment upon the possible implications for the physical processes in the nuclei that are responsible for the radio emission.
Distinguishing probability measure, function and distribution
I have a bit of trouble distinguishing the following concepts:
probability measure
probability function (with special cases probability mass function and probability density function)
probability distribution
Are some of these interchangeable? Which of these are defined with respect to probability spaces and which with respect to random variables?
probability probability-theory probability-distributions
Marc
The difference between the terms "probability measure" and "probability distribution" is in some ways more of a difference between terms rather than a difference between the things that the terms refer to. It's more about the way the terms are used.
A probability distribution or a probability measure is a function assigning probabilities to measurable subsets of some set.
When the term "probability distribution" is used, the set is often $\mathbb R$ or $\mathbb R^n$ or $\{0,1,2,3,\ldots\}$ or some other very familiar set, and the actual values of members of that set are of interest. For example, one may speak of the temperature on December 15th in Chicago over the aeons, or the income of a randomly chosen member of the population, or the particular partition of the set of animals captured and tagged, where two animals are in the same part in the partition if they are of the same species.
When the term "probability measure" is used, often nobody cares just what the set $\Omega$ is, to whose subsets probabilities are assigned, and nobody cares about the nature of the members or which member is randomly chosen on any particular occasion. But one may care about the values of some function $X$ whose domain is $\Omega$, and about the resulting probability distribution of $X$.
"Probablity mass function", on the other hand, is precisely defined. A probability mass function $f$ assigns a probabilty to each subset containing just one point, of some specified set $S$, and we always have $\sum_{s\in S} f(s)=1$. The resulting probability distribution on $S$ is a discrete distribution. Discrete distributions are precisely those that can be defined in this way by a probability mass function.
"Probability density function" is also precisely defined. A probability density function $f$ on a set $S$ is a function specifies probabilities assigned to measurable subsets $A$ of $S$ as follows: $$ \Pr(A) = \int_A f\,d\mu $$ where $\mu$ is a "measure", a function assigning non-negative numbers to measurable subsets of $A$ in a way that is "additive" (i.e. $\mu\left(A_1\cup A_2\cup A_3\cup\cdots\right) = \mu(A_1)+\mu(A_2)+\mu(A_3)+\cdots$ if every two $A_i,A_j$ are mutually exclusive). The measure $\mu$ need not be a probability measure; for example, one could have $\mu(S)=\infty\ne 1$. For example, the function $$ f(x) = \begin{cases} e^{-x} & \text{if }x>0, \\ 0 & \text{if }x<0, \end{cases} $$ is a probability density on $\mathbb R$, where the underlying measure is one for which the measure of every interval $(a,b)$ is its length $b-a$.
Michael Hardy
$\begingroup$ The measure is not determined by the pdf. The standard normal density $x\mapsto\dfrac1{\sqrt{2\pi}} e^{-x^2/2}$ is also a probability density with respect to the SAME measure. Every time you see an expression like $\displaystyle\int_a^b f(x)\,dx$, you're talking about integrating with respect to that measure. ${}\qquad{}$ $\endgroup$ – Michael Hardy Dec 18 '14 at 22:05
$\begingroup$ One instance is when the measure of a set is simply the number of members of the set, and in that case a probability density is the same thing as a probability mass function. $\endgroup$ – Michael Hardy Dec 19 '14 at 1:56
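As a numerical illustration of the distinction drawn above between a probability mass function and a probability density, here is a small Python sketch; the geometric pmf and the exponential density are chosen only as convenient examples, not because they are singled out in the answer.

```python
import numpy as np
from scipy.integrate import quad

# A probability mass function on S = {0, 1, 2, ...}: geometric with parameter p.
p = 0.3
pmf = lambda k: p * (1 - p) ** k
print(sum(pmf(k) for k in range(2000)))   # ~1.0: probabilities of the singletons sum to 1

# A probability density on R with respect to Lebesgue measure: f(x) = e^{-x} for x > 0.
pdf = lambda x: np.exp(-x) if x > 0 else 0.0
total, _ = quad(pdf, 0, np.inf)
print(total)                              # ~1.0: integrates to 1 against dx

# Probability of a measurable set A = (a, b) under the density: Pr(A) = integral of f over A.
a, b = 0.5, 2.0
prob_A, _ = quad(pdf, a, b)
print(prob_A)                             # e^{-0.5} - e^{-2} ~ 0.47
```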
November 2002, Volume 2, Issue 4
Optimal control of treatments in a two-strain tuberculosis model
E. Jung, Suzanne Lenhart and Z. Feng
Optimal control theory is applied to a system of ordinary differential equations modeling a two-strain tuberculosis model. Seeking to reduce the latent and infectious groups with the resistant-strain tuberculosis, we use controls representing two types of treatments. The optimal controls are characterized in terms of the optimality system, which is solved numerically for several scenarios.
E. Jung, Suzanne Lenhart, Z. Feng. Optimal control of treatments in a two-strain tuberculosis model. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 473-482. doi: 10.3934/dcdsb.2002.2.473.
Stability of stationary solutions of the forced Navier-Stokes equations on the two-torus
Chuong V. Tran, Theodore G. Shepherd and Han-Ru Cho
We study the linear and nonlinear stability of stationary solutions of the forced two-dimensional Navier-Stokes equations on the domain $[0,2\pi]\times[0,2\pi/\alpha]$, where $\alpha\in(0,1]$, with doubly periodic boundary conditions. For the linear problem we employ the classical energy--enstrophy argument to derive some fundamental properties of unstable eigenmodes. From this it is shown that forces of pure $x_2$-modes having wavelengths greater than $2\pi$ do not give rise to linear instability of the corresponding primary stationary solutions. For the nonlinear problem, we prove the equivalence of nonlinear stability with respect to the energy and enstrophy norms. This equivalence is then applied to derive optimal conditions for nonlinear stability, including both the high- and low-Reynolds-number limits.
Chuong V. Tran, Theodore G. Shepherd, Han-Ru Cho. Stability of stationary solutions of the forced Navier-Stokes equations on the two-torus. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 483-494. doi: 10.3934/dcdsb.2002.2.483.
Analysis of a chemostat model for bacteria and virulent bacteriophage
Edoardo Beretta, Fortunata Solimano and Yanbin Tang
The purpose of this paper is to study the mathematical properties of the solutions of a model for a bacteria and virulent bacteriophage system in a chemostat. A general model was first proposed by Levin, Stewart and Chao [13] and then a specific one by Lenski and Levin [12]. The numerical simulations use the experimental data referred to in [12,13]. To our knowledge, the analysis presented here is the first mathematical attempt to analyse the model of bacteria and virulent bacteriophage, and it presents two fresh frontiers: 1) modelling the delay (latency period) while incorporating a realistic through-time death rate in the linear stability analysis leads to characteristic equations with delay-dependent parameters, for which Beretta and Kuang [5] only recently provided a geometric stability switch criterion whose application is presented throughout the paper; 2) modelling the dynamics through three full delay stages can be reduced to two by using the integral representation for the density of infected bacteria. The basic properties of the model that are investigated are the existence of equilibria, positive invariance and boundedness of solutions, and permanence results. Second, using the geometric stability switch criterion for the delay differential system with delay-dependent parameters, we present the local asymptotic stability of the equilibria by analysing the corresponding characteristic equation, whose coefficients depend on the time delay (the latency period). Numerical simulations are presented to illustrate the local stability results. Then we study the global asymptotic stability of the boundary equilibria via the Liapunov functional method. Finally, we give a discussion of the model.
Edoardo Beretta, Fortunata Solimano, Yanbin Tang. Analysis of a chemostat model for bacteria and virulent bacteriophage. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 495-520. doi: 10.3934/dcdsb.2002.2.495.
Regular and chaotic motions of the fast rotating rigid body: a numerical study
Giancarlo Benettin, Anna Maria Cherubini and Francesco Fassò
2002, 2(4): 521-540. doi: 10.3934/dcdsb.2002.2.521
We numerically investigate the dynamics of a symmetric rigid body with a fixed point in a small analytic external potential (equivalently, a fast rotating body in a given external field) in the light of previous theoretical investigations based on Nekhoroshev theory. Special attention is posed on "resonant" motions, for which the tip of the unit vector $\mu$ in the direction of the angular momentum vector can wander, for no matter how small $\varepsilon$, on an extended, essentially two-dimensional, region of the unit sphere, a phenomenon called "slow chaos". We produce numerical evidence that slow chaos actually takes place in simple cases, in agreement with the theoretical prediction. Chaos however disappears for motions near proper rotations around the symmetry axis, thus indicating that the theory of these phenomena still needs to be improved. An heuristic explanation is proposed.
Giancarlo Benettin, Anna Maria Cherubini, Francesco Fass\u00F2. Regular and chaotic motions of the fast rotating rigid body: a numerical study. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 521-540. doi: 10.3934/dcdsb.2002.2.521.
Global stability for differential equations with homogeneous nonlinearity and application to population dynamics
Pierre Magal
In this paper we investigate global stability for a differential equation containing a positively homogeneous nonlinearity. We first consider perturbations of the infinitesimal generator of a strongly continuous semigroup which has a simple dominant eigenvalue. We prove that for "small" perturbation by a positively homogeneous nonlinearity the qualitative properties of the linear semigroup persist. From this result, we deduce a global stability result when one adds a certain type of saturation term. We conclude the paper by an application to a phenotype structured population dynamic model.
Pierre Magal. Global stability for differential equations with homogeneous nonlinearity and application to population dynamics. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 541-560. doi: 10.3934/dcdsb.2002.2.541.
On the stability of two nematic liquid crystal configurations
Bagisa Mukherjee and Chun Liu
In this article we study the stability properties of two different configurations in nematic liquid crystals. One of them is the static configuration in the presence of magnetic fields. The other one is the Poiseuille flow under the model of Ericksen for liquid crystals with variable degree of orientation [E, 91]. In the first case, we show that the planar radial symmetry solution is stable with respect to the small external magnetic field. Such phenomenon illustrates the competition mechanism between the magnetic field and the strong anchoring boundary conditions. In the Poiseuille flow case, we show that the stationary configuration obtained from our previous works [C-L, 99] [C-M, 96] is stable when the velocity gradient is small.
Bagisa Mukherjee, Chun Liu. On the stability of two nematic liquid crystal configurations. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 561-574. doi: 10.3934/dcdsb.2002.2.561.
Linear and nonlinear stability in a diffusional ecotoxicological model with time delays
David Schley and S.A. Gourley
We propose a reaction-diffusion extension of a two species ecotoxicological model with time-delays proposed by Chattopadhyay et al (1997). Each species has the capacity to produce a substance toxic to its competitor, and a distributed time-delay is incorporated to model lags in the production of toxin. Additionally, nonlocal spatial effects are present because of the combination of delay and diffusion. The stability of the various uniform equilibria of the model are studied by using linearised analysis, on an infinite spatial domain. It is shown that simple exponentially decaying delay kernels cannot destabilise the coexistence equilibrium state. In the case of a finite spatial domain, with purely temporal delays, a nonlinear convergence result is proved using ideas of Lyapunov functionals together with invariant set theory. The result is also applicable to the purely temporal system studied by other investigators and, in fact, extends their results.
David Schley, S.A. Gourley. Linear and nonlinear stability in a diffusional ecotoxicological model with time delays. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 575-590. doi: 10.3934/dcdsb.2002.2.575.
Well-posedness of a kinetic model of dispersed two-phase flow with point-particles and stability of travelling waves
K. Domelevo
We study the existence, uniqueness and long time behaviour of a system consisting of the viscous Burgers' equation coupled to a kinetic equation. This system models the motion of a dispersed phase made of inertial particles immersed in a fluid modelled by the Burgers' equation. The initial conditions are in $L^\infty+W^{1,1}(\mathbb{R}_x)$ for the fluid and in the space $\mathcal {M}(\mathbb{R}_x\times\mathbb{R}_v\times\mathbb{R}_r)$ of bounded measures for the dispersed phase. This means that the limiting case where the particles are regarded as point particles is taken into account. First, we prove the existence and uniqueness of solutions to the system by using the regularizing properties of the viscous Burgers' equation. Then, we prove that the usual stability properties of travelling waves for the viscous Burgers' equation is not affected by the coupling with a small mass of inertial particles.
K. Domelevo. Well-posedness of a kinetic model of dispersed two-phase flow with point-particles and stability of travelling waves. Discrete & Continuous Dynamical Systems - B, 2002, 2(4): 591-607. doi: 10.3934/dcdsb.2002.2.591. | CommonCrawl |
Intersection cohomology of the Uhlenbeck compactification of the Calogero-Moser space
arxiv.org. math. Cornell University, 2015
Finkelberg M. V., Ginzburg V., Ionov A., Kuznetsov A. G.
We study the natural Gieseker and Uhlenbeck compactifications of the rational Calogero–Moser phase space. The Gieseker compactification is smooth and provides a small resolution of the Uhlenbeck compactification. This allows computing the IC stalks of the latter.
Priority areas: mathematics
Keywords: Calogero-Moser systems, Uhlenbeck compactification
Publication based on the results of: Algebraic Geometry and Its Applications (2015)
Dunkl operators at infinity and Calogero-Moser systems
Sergeev A. International Mathematical Research Notes. 2015.
Added: Sep 7, 2015
A finite analog of the AGT relation I: finite W-algebras and quasimaps' spaces
Braverman A., Rybnikov L. G., Feigin B. L. et al. Communications in Mathematical Physics. 2011. Vol. 308. No. 2. P. 457-478.
Recently Alday, Gaiotto and Tachikawa proposed a conjecture relating 4-dimensional super-symmetric gauge theory for a gauge group G with certain 2-dimensional conformal field theory. This conjecture implies the existence of certain structures on the (equivariant) intersection cohomology of the Uhlenbeck partial compactification of the moduli space of framed G-bundles on P^2. More precisely, it predicts the existence of an action of the corresponding W-algebra on the above cohomology, satisfying certain properties. We propose a "finite analog" of the (above corollary of the) AGT conjecture.
Finkelberg M. V., Ginzburg V., Ionov A. et al. Selecta Mathematica, New Series. 2016. Vol. 22. No. 4. P. 2491-2534.
We study the natural Gieseker and Uhlenbeck compactifications of the rational Calogero–Moser phase space. The Gieseker compactification is smooth and provides a small resolution of the Uhlenbeck compactification. We use the resolution to compute the stalks of the IC-sheaf of the Uhlenbeck compactification.
Instanton moduli spaces and $\mathscr W$-algebras
Braverman A., Finkelberg M. V., Nakajima H. arxiv.org. math. Cornell University, 2014. No. 2381.
We describe the (equivariant) intersection cohomology of certain moduli spaces ("framed Uhlenbeck spaces") together with some structures on them (such as the Poincaré pairing) in terms of representation theory of some vertex operator algebras ("W-algebras").
Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem
Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1.
We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one can not achieve the growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained first nontrivial results.
Justification of the adiabatic limit for hyperbolic Ginzburg-Landau equations
Palvelev R., Sergeev A. G. Proceedings of the V.A. Steklov Mathematical Institute of RAS. 2012. Vol. 277. P. 199-214.
Sabaean Studies
Korotayev A. V. Moscow: Vostochnaya Literatura, 1997.
The parametrix method for diffusions and Markov chains
Konakov V. D. STI. WP BRP. Publishing house of the Board of Trustees of the Faculty of Mechanics and Mathematics, MSU, 2012. No. 2012.
Added: Dec 5, 2012
Hypercommutative operad as a homotopy quotient of BV
Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749.
We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of Batalin-Vilkovisky operad by the BV-operator). In other words we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.
Added: Aug 29, 2012
Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action?
Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k- rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Cross-sections, quotients, and representation rings of semisimple algebraic groups
V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map T ⇢ G/T where T is a maximal torus of G and W the Weyl group.
Mathematical Modelling of Social Processes
Edited by: A. Mikhailov. Iss. 14. Moscow: Faculty of Sociology, MSU, 2012.
Dynamics of Information Systems: Mathematical Foundations
Iss. 20. NY: Springer, 2012.
This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems" which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
Gravity-induced coronal plane joint moments in adolescent idiopathic scoliosis
Bethany E. Keenan1,
Graeme J. Pettet2,
Maree T. Izatt1,
Geoffrey N. Askin1,
Robert D. Labrom1,
Mark J. Pearcy1 and
Clayton Adam1
Scoliosis 2015, 10:35
© Keenan et al. 2015
Accepted: 2 December 2015
Adolescent Idiopathic Scoliosis is the most common type of spinal deformity, and whilst the risk of progression appears to be biomechanically mediated (larger deformities are more likely to progress), the detailed biomechanical mechanisms driving progression are not well understood. Gravitational forces in the upright position are the primary sustained loads experienced by the spine. In scoliosis they are asymmetrical, generating moments about the spinal joints which may promote asymmetrical growth and deformity progression. Using 3D imaging modalities to estimate segmental torso masses allows the gravitational loading on the scoliotic spine to be determined. The resulting distribution of joint moments aids understanding of the mechanics of scoliosis progression.
Existing low-dose CT scans were used to estimate torso segment masses and joint moments for 20 female scoliosis patients. Intervertebral joint moments at each vertebral level were found by summing the moments of each of the torso segment masses above the required joint.
The patients' mean age was 15.3 years (SD 2.3; range 11.9–22.3 years); mean thoracic major Cobb angle 52° (SD 5.9°; range 42–63°) and mean weight 57.5 kg (SD 11.5 kg; range 41–84.7 kg). Joint moments of up to 7 Nm were estimated at the apical level. No significant correlation was found between the patients' major Cobb angles and apical joint moments.
Patients with larger Cobb angles do not necessarily have higher joint moments, and curve shape is an important determinant of joint moment distribution. These findings may help to explain the variations in progression between individual patients. This study suggests that substantial corrective forces are required of either internal instrumentation or orthoses to effectively counter the gravity-induced moments acting to deform the spinal joints of idiopathic scoliosis patients.
Adolescent idiopathic scoliosis (AIS)
Instantaneous centre of rotation (ICR)
Joint moments
Scoliosis progression
Adolescent Idiopathic Scoliosis (AIS) is a three-dimensional spinal deformity whose aetiology remains unclear [1–4]. Whilst the initial deformity may be due to a complex interplay of biomechanical, biochemical, and/or genetic factors, as well as growth asymmetries originating in the sagittal plane, it is widely accepted that scoliosis progression is predominantly a biomechanical process, whereby the spine undergoes asymmetric loading and alteration of vertebral growth in a "vicious cycle" [5, 6].
Supine imaging modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) have made possible 3D reconstructions of the spine, which allow detailed measurements of spinal anatomy not possible with standard radiographs. Segmental (vertebral level-by-level) torso masses can be calculated from these 3D reconstructions and used to determine the gravity-induced joint moments acting on the scoliotic spine. In the healthy, non-scoliotic spine there are negligible joint moments acting in the coronal plane, whereas, once a small lateral curvature presents, the weight of the torso segments superior to that curve generates a lateral bending moment that can potentially exacerbate the deformity during subsequent growth. For a patient with mild scoliosis, the moment created has been estimated to be in the order of 0.5 Nm [7]. However, it is not yet known whether there is a threshold beyond which joint moments drive deformity progression.
Previous studies suggest that gravitational forces in the standing position play an important role in scoliosis progression. Adam et al. (2008) reported that gravity-induced axial rotation torques may modulate intravertebral rotation in progressive idiopathic scoliosis (as gravitational forces acting on a curved spinal column generate torque about the column axis).
Torques as high as 7.5 Nm were found acting on scoliotic spines in the standing position, but further investigation of this area is required [8]. Spinal loading asymmetry in the lumbar spine with regard to muscle activation has also been extensively reviewed by Stokes [9–11]. However, to the best of our knowledge, there have not been any previous analyses of gravity-induced coronal plane joint moments in the thoracic and lumbar spines of AIS patients.
Given that joint moments in the transverse plane have previously been estimated by Adam et al (2008), the plane of primary interest in the current study is the coronal plane. This is also the plane in which routine scoliosis assessment is performed clinically through the use of standing postero-anterior plane radiographs. Whilst joint moments are also induced in the sagittal plane, the spine is adapted to resist these, since they are present as a consequence of the natural thoracic kyphosis and lumbar lordosis which is present in scoliotic and healthy spines alike.
The aims of the present study were: (i) to estimate torso segment masses in the thoracic and lumbar spines of AIS patients, using a series of existing low-dose supine CT scans, (ii) to calculate the resulting gravity-induced coronal plane joint moments in the adolescent scoliotic spine; and (iii) to assess the relationship between coronal plane joint moments and the severity of the deformity.
Patient cohort and ethical consideration
Existing low-dose supine CT scans taken between November 2002 and January 2008 for a group of female AIS patients were used retrospectively to determine the coronal plane joint moments acting on the thoracic scoliotic spine. All patients had right-sided thoracic curves with a Cobb angle greater than 40°.
The curve type was based on the Lenke classification system [12], and all patients were categorised as Lenke Type 1 (i.e. they had major thoracic curves). The Risser sign was used to categorise the skeletal maturity of each patient [13].
A single low-dose CT scan was part of the pre-operative clinical assessment process at the time, for patients scheduled to undergo thoracoscopic anterior spinal fusion to assist with safer screw sizing and positioning [14]. Ethical and institutional governance approvals were gained prior to commencement of the study.
CT data evaluation
Three different CT scanners were used over the 6 year period of the CT scan acquisition: (i) a 64 slice GE Lightspeed Plus (GE Healthcare, Chalfont St. Giles, UK); (ii) a 64-slice Philips Brilliance (Philips Healthcare, Andover, USA); and (iii) a 64 slice GE Lightspeed VCT (GE Healthcare, Chalfont St. Giles, UK). Dose reports were commissioned for all three scanners, and the highest estimated radiation dose of 3.0 mSv occurred with the oldest scanner (GE Lightspeed Plus), with uncertainties due to the dose model in the order of 20 % [15].
By comparison, the combined dose for a postero-anterior (PA) and lateral standing radiograph is in the order of 1.0 mSv, and the annual background radiation dose in Queensland, Australia is approximately 2.0–2.4 mSv [15, 16]. Estimated doses for the newer 64 slice scanners were substantially lower (in the order of 2 mSv). Subjects were in a supine position with the upper limbs positioned over the head during CT scanning. Scan coverage was from C7 to S1.
The CT scan in-plane resolution and slice thickness/spacing varied slightly over the course of the study. All raw scans were 16 bit axial stacks with 512 × 512 pixels in each slice of the stack. Pixel spacing in the (axial) plane varied between 1.7 and 1.8 pixels/mm with a slice thickness between 2.0 and 3.0 mm and slice spacing between 1.0 and 1.25 mm. Since the re-sliced coronal plane images (Fig. 1) derive their resolution from the original CT dataset, the pixel spacing in the plane (on the re-sliced stack) varied from 1.7–1.8 pixels/mm in both the lateral (left-right) and anterior-posterior directions, and 0.8–1.0 pixels/mm in the longitudinal (inferior-superior) direction.
a Axial slice through the T8 vertebra showing x and y co-ordinate axes relative to the centroid of T1 b Thresholded axial slice at the same level - where A4 is the area of the torso, minus that of the lungs A1 (right lung) and A2 (left lung), A3 is the area of the total slice and (Xc,Yc) denotes the centroid of the cross-section. c Torso segment thickness (h) calculated from the reconstructed coronal image, where the segment height was taken from the midpoint of the superior vertebral endplate of each vertebra to the midpoint of the superior vertebral endplate of the level below including the IV disc. The distance between these two points was measured using the ImageJ 'Segmented Line' measuring tool
The image processing software, ImageJ (v. 1.45, National Institutes of Health, USA) was used to create re-sliced coronal plane images from the axial CT slices and reconstruct vertebral level-by-level torso segments. The scoliotic vertebral level segmental parameters of height, volume, area and mass were measured, to enable estimation of the coronal plane joint moments acting at each level. All analyses were completed by a single observer.
Estimating vertebral level torso segment masses
A single axial CT slice located at the mid-height of each thoracic and lumbar vertebra was selected and used to determine the axial plane location of the torso segment centroid for that vertebral level as described below.
ImageJ's default thresholding method based on the IsoData algorithm described by Ridler and Calvard [17] was then used on the axial slice through the centre of each vertebral body to distinguish the external and lung airspaces from the trunk tissues (Fig. 1). The centroid co-ordinates of this thresholded slice (Xc, Yc) in the scanner bed coordinate system were then found using the First Moment of Area equation (Equations 1 and 2).
$$ Xc = \frac{(A3X3) - (A1X1) - (A2X2)}{A4} $$
$$ Yc = \frac{(A3Y3) - (A1Y1) - (A2Y2)}{A4} $$
Note: A4=A3-A1-A2
A3 is the total area of the axial slice enclosed by the skin boundary, A1 is the area of the right lung and A2 is the area of the left lung. The centroid co-ordinates of the whole slice are defined as (X3, Y3) with the centroid location of the right and left lungs denoted as (X1, Y1) and (X2, Y2) respectively.
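A minimal Python sketch of Equations 1 and 2 is given below; the areas and centroid coordinates are illustrative placeholders rather than values measured from any scan.

```python
def torso_centroid(A3, X3, Y3, A1, X1, Y1, A2, X2, Y2):
    """Centroid (Xc, Yc) of the torso cross-section, excluding the lungs (Equations 1 and 2).

    A3, (X3, Y3): area and centroid of the whole axial slice inside the skin boundary
    A1, (X1, Y1): area and centroid of the right lung
    A2, (X2, Y2): area and centroid of the left lung
    """
    A4 = A3 - A1 - A2                              # torso tissue area with the lungs removed
    Xc = (A3 * X3 - A1 * X1 - A2 * X2) / A4
    Yc = (A3 * Y3 - A1 * Y1 - A2 * Y2) / A4
    return Xc, Yc, A4

# Illustrative values only (areas in mm^2, coordinates in mm, scanner bed coordinate system)
Xc, Yc, A4 = torso_centroid(A3=60000.0, X3=0.0, Y3=110.0,
                            A1=9000.0, X1=-60.0, Y1=120.0,
                            A2=8500.0, X2=55.0, Y2=120.0)
print(round(Xc, 1), round(Yc, 1), A4)
```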
The 'z-project' function in ImageJ (standard deviation projection type) was used to create a pseudo-coronal plane radiograph to allow the whole thoracic and lumbar spine to be viewed on a single image.
In order to project a clear single image of the thoracic and lumbar spine (without the ribs), the start and end slices for the z-project function were selected at the anterior and posterior edges of the vertebral body. The thickness of each torso segment was measured using techniques described by Keenan et al. [18] where the segment height (or thickness) was taken from the midpoint of the superior vertebral endplate of each vertebra to the midpoint of the superior vertebral endplate of the level below including the IV disc. The volume, V, was then calculated by multiplying the area of the central slice (A4) in the axial plane by the thickness (h) of the vertebral body segment (Fig. 1) corresponding to the vertebra and disc in question.
A single density, ρ, of 1040 kg/m3 [18, 19] was used to estimate the torso segment masses, M, corresponding to each vertebral level (Equation 3).
$$ M=\rho \times V $$
The torso segment masses were then multiplied by 9.81 m/s2 to give torso segment weight vectors. These were plotted at the centroid co-ordinate positions on the patients' antero-posterior (AP) view images (Fig. 2). Note that although the torso segment weights were estimated based on CT scans performed in the supine position, the resulting gravity vectors were oriented perpendicular to the torso cross-section slices, in order to estimate joint moments in a simulated standing position.
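The segment volume, mass and weight calculation (the area-times-thickness approximation together with Equation 3) reduces to a few lines, as sketched below; the density is the value assumed in the study, while the cross-sectional area and thickness are placeholders.

```python
RHO = 1040.0   # soft-tissue density assumed in the study, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def segment_weight(A4_mm2, h_mm):
    """Weight (N) of one torso segment from its mid-slice area and its thickness."""
    volume_m3 = (A4_mm2 * 1e-6) * (h_mm * 1e-3)   # V = A4 * h, converted to m^3
    mass_kg = RHO * volume_m3                     # Equation 3: M = rho * V
    return mass_kg * G

# Illustrative segment: 42500 mm^2 mid-slice area, 25 mm segment thickness
print(round(segment_weight(42500.0, 25.0), 2), "N")   # ~10.8 N (~1.1 kg)
```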
Constructions to calculate segmental moments: a using ImageJ polygon tool to trace the outline of a disc space for calculation of the disc's coronal centroid for the joint's ICR location (blue dot); b coronal reformatted image of the entire thoracic and lumbar spine with overlaid plot of the torso segment weight vectors (indicated by red arrows), the estimated head and upper limb weight vectors (pink arrows) and the ICRs located at the centroids of the IVDs (blue dots)
Anthropometric data
As the CT scans only included the thoracic and lumbar spine and glenohumeral joint, the weight of other body segments above the apex (i.e. the head, neck, arms and hands) was estimated using anthropometric data [20]. Equations 4 and 5 were then used to determine patient-specific values for the mass of the head + neck, and for each arm + hand.
$$ 8.1\% \times patient\ body\ weight\ (kg) = mass\ of\ head+ neck $$
$$ 5.6\% \times patient\ body\ weight\ (kg) = mass\ of\ arm+ hand $$
where the head + neck weight vector was located at the centroid of the T1 superior endplate and the arm + hand weight vector was positioned on the glenohumeral joint (Fig. 2(b)). It is important to note however, that the present literature regarding body segment parameters is limited, particularly for female subjects or adolescents. As a result, the anthropometric body segment percentage values (reported by Winter) are based on measurements of eight male cadavers aged 61–83 years. Whilst we note that this introduces a limitation to the study, AIS patients tend to be leaner and have significantly lower body mass index (BMI) compared to healthy age-matched controls; and therefore are likely to have segment values as a percentage of body mass closer to those of elderly adults [21, 22].
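For reference, Equations 4 and 5 amount to two fixed fractions of total body mass; the brief sketch below uses the cohort mean mass purely as an example input.

```python
def head_neck_mass(body_mass_kg):
    return 0.081 * body_mass_kg    # Equation 4: 8.1% of body mass

def arm_hand_mass(body_mass_kg):
    return 0.056 * body_mass_kg    # Equation 5: 5.6% of body mass, per side

body_mass = 57.5                   # cohort mean mass (kg), used here only as an example
print(round(head_neck_mass(body_mass), 2), "kg head + neck")
print(round(arm_hand_mass(body_mass), 2), "kg per arm + hand")
```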
Locating the Instantaneous Centre of Rotation (ICR)
The next step was to estimate the Instantaneous Centre of Rotation (ICR) location at each intervertebral joint, about which the joint moments would be calculated. Since there is an absence of ICR measurements for lateral bending in scoliosis patients, the ICR was assumed to lie at the centroid of a coronal plane projection of the intervertebral disc.
The ImageJ 'Polygon' tool was used to trace the outline of each intervertebral disc (as shown by the blue lines in Fig. 2(a)). Note that the image has been thresholded in this Figure to only display bone, hence the disc is not visible. Once the boundaries of the intervertebral disc had been drawn, the ImageJ 'Centroid Measurement' tool was used to determine the centre of this region, which in turn allowed the ICRs to be located in the geometric centres of the discs in the pseudo-coronal plane projection of the patient's spine. Figure 2(b) shows a plot of the torso segment weight vectors (red arrows) together with the assumed ICRs located in the centre of each disc (blue dots). The lengths of the red arrows are not proportional to the weight of the torso segment, but simply illustrate the location of the vectors. In addition, the pink arrows located on the glenohumeral joint and the centroid of the T1 superior endplate represent the estimated torso segment weight vectors of the arm + hands (Fg2) and head + neck (Fg1). Note that the most caudal joint moment calculation was performed at the L4/L5 intervertebral joint because of the lack of clarity of the lumbo-sacral joint in a number of the CT scans.
Calculating gravity-induced coronal plane joint moments
The torso segment weight vectors, together with the assumed ICR locations allowed calculation of the coronal plane joint moments induced by simulated gravitational loading. Intervertebral joint moments, JM, at each vertebral level were found by summing the moment of each of the torso segment weight vectors (including the head, neck and arms) about the ICR of the joint in question (Force × perpendicular distance to the ICR).
For example, for the moment acting about the ICR of the T3/T4 joint, the moment contributions from the head and neck, left arm, right arm and T1, T2 and T3 segments were summed to obtain a value for the T3/T4 joint moment.
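A short Python sketch of this summation is given below; the weight magnitudes, centroid offsets and ICR position are illustrative placeholders rather than measured patient values, and a single lateral (x) coordinate is used for the coronal-plane lever arms.

```python
def coronal_joint_moment(icr_x_mm, weights_above):
    """Coronal plane joint moment (Nm) about one intervertebral joint.

    weights_above: list of (weight_N, centroid_x_mm) pairs, one per weight vector acting
    superior to the joint (head + neck, each arm + hand, and every torso segment above).
    """
    moment_n_mm = sum(w * (x - icr_x_mm) for w, x in weights_above)
    return moment_n_mm / 1000.0    # N*mm -> Nm

# Illustrative T3/T4 example (placeholder values, not patient data)
above_t3_t4 = [
    (45.7, 2.0),     # head + neck, applied at the centroid of the T1 superior endplate
    (31.6, -150.0),  # left arm + hand, applied at the glenohumeral joint
    (31.6, 148.0),   # right arm + hand
    (8.5, 3.0),      # T1 torso segment
    (9.0, 5.0),      # T2 torso segment
    (9.5, 8.0),      # T3 torso segment
]
print(round(coronal_joint_moment(1.0, above_t3_t4), 3), "Nm at T3/T4")
```

Repeating the same summation at each intervertebral joint, with the list of weight vectors truncated at that joint, gives the level-by-level moment distributions of Fig. 3.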
As the torso segment weight vectors were applied parallel to the z-axis of the CT scanner, it is important to consider the position of each patient on the scanner bed to identify any misalignment of patients during their supine scan. This was assessed by comparing the standing radiograph coronal plane (T1-S1) plumb line with the supine CT scan for the entire cohort.
The effect of shifting the location of the Instantaneous Centre of Rotation (ICR)
As just stated, we assumed that the coronal plane ICR was located at the centroid of a coronal plane projection of the intervertebral disc in question. Since there is uncertainty regarding the position of the ICR in the scoliotic spine, a sensitivity analysis was carried out to assess the effect of changing the ICR location on the estimated joint moments. In this sensitivity analysis, the assumed ICR location was shifted laterally by 10 mm in each direction (towards and away from) the convexity of the scoliotic curve.
The patient demographics and clinical data for each of the 20 AIS patients are presented in Table 1. The mean age of the group was 15.3 years (SD 2.3; range 11.9–22.3 years). All curves were right-sided major thoracic Lenke Type 1 with 11 patients further classified as lumbar spine modifier A, 4 as lumbar modifier B and 5 as lumbar modifier C. The mean thoracic Cobb angle was 52° (SD 5.9; range 42–63°). The mean mass was 57.5 kg (SD 11.5; range 41–84.7 kg). Five patients were Risser grade 0, one patient was Risser grade 3, five patients were Risser grade 4 and nine patients were Risser grade 5.
Table 1 Demographics and clinical data (from standing radiographs) for the patients, grouped by maximum apical coronal plane joint moment (JM). Columns: JM Group; Pt ID; Patient Mass (kg); BMI (kg/m2); Risser (0–5); Major Cobb Angle (°); Lumbar compensatory Cobb Angle (°); Lenke class; Apex location (e.g. T9/T10); Max JM (Nm). The mean data of each JM group is shown in bold.
Figure 3 shows a boxplot of joint moment vs vertebral level for the entire cohort. The joint moment distributions for each of the 20 patients in the study are given individually in the Additional file 1. As expected, the maximum joint moment for the major curve in each patient occurred at the joint closest to the apex of the curve, and the magnitudes of these peak moments for the entire patient group are compared in Fig. 4. It should be noted that there were in some patients, large joint moments in the lumbar spine. These larger moments were due in part to the cumulative weight of the body segments above the lumbar joints. The lumbar moments showed greater variability than those for the thoracic region, most probably due to the variability in the presence of a large compensatory lumbar curve.
Coronal plane joint moments for the 20 patients. The circles represent outliers (those data points between 1.5 and 3.0 times the interquartile range outside the first and third quartiles respectively) and the asterisks represent extreme outliers (more than 3.0 times the interquartile range outside the first and third quartiles respectively). The scale shows joint moments in Nm and + ve is a clockwise moment. Patient 5 is shown in orange, Patient 13 is shown in green and Patient 14 is shown in blue
Coronal plane joint moment (Nm) acting at the apex of the curve for the 20 patients. The grey bars are the estimated increase in the joint moment according to the increase in Cobb angle that occurs in standing relative to the supine position
A scatter plot of the maximum apical joint moment versus clinical major Cobb angle is shown in Fig. 5. Regression analysis found no statistically significant relationship between maximum joint moment at the apex and clinically measured major Cobb angle (P = 0.192). This plot also shows the maximum apical joint moment plotted against the horizontal offset distance between the apical disc ICR and the straight line joining the T1/T2 and L4/L5 disc ICRs. Again, no statistically significant relationship was found between maximum joint moment at the apex and the offset distance (P = 0.280). However, upon removal of Patient 13 (outlier patient of mass 84 kg with an apical joint moment of 7 Nm), we found a p-value close to statistical significance (p = 0.079) for the relationship between apex horizontal offset and apical joint moment, but not for Cobb (P = 0.364).
Scatter plot of coronal plane joint moment at the apical joint versus (i) clinically measured major cobb angle (blue dots) and (ii) lateral deviation of the ICR of the apical intervertebral joint (pink dots). Neither of the regressions are statistically significant, however when the outlier (patient 13 = 7.1 Nm moment) is removed from the regression, the correlation between apical joint moment and apical lateral deviation is near-significant (P < 0.10)
Visual inspection of the joint moment distributions for each patient shows significant variability according to spinal curve shape. This is highlighted in Fig. 6, where three patients from the study (Patients 8, 12 and 19) all having similar major Cobb angles (47, 54 and 52° with peak joint moments of 1.86, 2.29 and 3.06 Nm respectively) are compared.
Examples of coronal CT images (from Patients 4, 8 and 10) showing the variability of joint moment distribution for three patients (Anterior-Posterior view). The scale shows joint moments in Nm and + ve is a clockwise moment
Multi-linear regression found no statistically significant relationship between joint moment distribution and four independent variables: patient mass (p = 0.50), age (p = 0.35), Risser sign (p = 0.10) and Lenke modifier (p = 0.78), with an R-squared value of 18 % (for all four variables combined).
Because the foregoing analysis was performed on supine CT anatomy, we also include an estimated correction for supine to standing change in Cobb angle (grey bars in Fig. 4). This correction was performed by measuring the Cobb angle on both the supine CT image and the clinical standing X-ray as described in a previous study [23]. The difference between the two Cobb angle measures was divided by the patient's supine Cobb angle, and used to scale the joint moment at the apex to provide an estimate of the joint moment in standing. In this way, the estimated increase in joint moment varies by patient depending on the flexibility of the spine. Shifting the ICR by 10 mm towards or away from the convexity of the spine, changed the joint moment at that level by a mean of 9.0 %, showing that calculated joint moments were relatively insensitive to the assumed ICR location.
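Written out, one reading of the supine-to-standing correction described above is the scaling sketched below; the function name and the example numbers are ours, not values from the study.

```python
def standing_moment_estimate(jm_supine_nm, cobb_standing_deg, cobb_supine_deg):
    """Scale the supine apical joint moment by the relative supine-to-standing
    increase in Cobb angle, following the description in the text."""
    relative_increase = (cobb_standing_deg - cobb_supine_deg) / cobb_supine_deg
    increase_nm = jm_supine_nm * relative_increase   # the estimated increase (grey bar in Fig. 4)
    return jm_supine_nm + increase_nm

# e.g. a 5 Nm supine apical moment, Cobb 52 deg standing vs 45 deg supine
jm_standing = standing_moment_estimate(5.0, 52.0, 45.0)  # ~5.8 Nm
```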
With regard to patient positioning on the scanner bed, of the 20 patients analysed, only three of the plumb lines (Patients 2, 16 and 17) differed by 2 cm or more between standing radiograph and supine CT.
Intra-observer variability
Intra-observer variability for the torso segment thickness, mass and slice location has previously been reported [18]. With regard to the sensitivity of the ICR location, the abovementioned 9 % change in response to a 10 mm shift towards or away from the curve convexity suggests that a relative shift of the ICR by 2–3 mm (a typical value for intra-observer variability in ICR location selection) would have a negligible (< 3 %) effect on the resulting coronal plane joint moment.
Previous studies have suggested that gravity plays a key role in driving deformity progression in AIS. The primary aim of this study, therefore, was to take advantage of an existing 3D CT dataset to estimate joint moments in the standing scoliotic spine; to assess the magnitude and distribution of coronal plane joint moments occurring in AIS patients with moderate deformities; and to assess whether there is a relationship between joint moment magnitude and curve severity.
The results from this study have shown that there is no single consistent pattern of joint moments, even in this group of patients with the same type of deformity (i.e. right-sided Lenke Type 1, thoracic curves). Despite this, the maximum joint moment in the major curve always occurred at the apex of the thoracic curve, although some patients also displayed large joint moments in the lumbar compensatory curve.
When dividing the patient cohort into subgroups of patients with similar joint moments, no clear trends were observed with existing clinical measures (as shown in Table 1). Intuitively one would expect apical joint moments to increase with Cobb angle. However, Cobb angle is a limited measure from a biomechanical perspective because two scoliotic curves can have the same Cobb angle but differ widely in the apical moment due to differences in the offset of the apical vertebra. This is the reason that we included a scatter plot of apical joint moment vs apical offset in Fig. 5, where we found that whilst the relationship between Cobb angle and apical joint moment was not statistically significant, the relationship between apical offset and apical moment was close to statistical significance. Similarly, there was no clear relationship between patient age, mass, Lenke modifier or Risser sign and joint moment distribution, although it is worth noting that the heaviest patient in the study also had the highest apical joint moment by a substantial margin.
Because the CT scans were performed in the supine position and Cobb angle magnitudes in this position are known to be 7–10° smaller than those measured in standing [23, 24], the joint moments in actual standing (as opposed to the simulated standing analysis performed here) would be expected to be greater than those calculated. The effect of the supine vs standing position on joint moments was estimated in Fig. 4, with the apical joint moment increasing by an average of 1.08 Nm (range 0.43–2.34 Nm).
It is also important to consider patient alignment on the scanner bed because the gravity vectors were always assumed to act along the scanner bed coordinate system z-axis in this analysis. We note that only three of the patients had a difference in plumb line of more than 2 cm between standing clinical radiograph and supine CT scan, suggesting that perhaps these three patients were not ideally aligned for their supine scan. Two of these patients (16 and 17) do exhibit relatively high lumbar spine joint moments, which may be an artefact of their positioning on the scanner bed. The other patient (Patient 2) does not exhibit high lumbar joint moments, but this could be either due to the fact that they were positioned 'straight' on the scanner bed when in fact they have an appreciable lateral offset when standing, or that they were positioned slightly angled on the scanner bed in the opposite direction to Patients 16 and 17, thus cancelling rather than exacerbating the calculated lumbar moment. Thus a potential improvement to our methodology in future could be attempting to standardise patient positioning on the scanner bed, although we note that this standardisation would have to be careful not to 'correct' a coronal plane plumb line offset which is actually present in the patient.
The technique presented in the current study was performed by a single observer for research purposes and hence inter-observer variability in a clinical setting was not assessed. The authors therefore do not anticipate that the findings of the current study should be used to influence clinical decision making until further validation studies of this technique are performed.
Whilst the present study does not include any muscle loading, we believe that the use of static analysis to calculate the gravity-induced joint moments was an appropriate starting point, particularly for analysis of loading in the thoracic spine. Firstly, the rib cage has been shown to significantly increase stability of the thoracic spine (by 40 % in flexion/extension, 35 % in lateral bending and 31 % in axial rotation [25]). Secondly, lateral bending tests on cadaveric lumbar spine motion segments have shown that applied moments of 4.7–10.6 Nm of lateral bending result in rotations of 3.51–5.64° [26, 27]. Taken together, these studies suggest that the coronal plane moments (up to 7 Nm) estimated here could be resisted by passive osseoligamentous structures undergoing a few degrees of lateral wedging. The extent to which muscle activation is involved in resisting coronal plane moments in standing AIS patients is unclear. Reports in the literature regarding muscle activation patterns in AIS are limited, particularly for the thoracic spine and in the standing position. Whilst Finite Element models have been developed to assess muscle asymmetry, the focus is primarily on the lumbar spine [10, 28], which is inherently different to the thoracic spine (due to the absence of the ribcage).
In future, deformity progression could be assessed using sequential imaging techniques (as opposed to a single scan taken at one instant in time). Comparing the moments estimated at a particular time point to subsequent progression of a patient's curvature could provide valuable information on whether there is a threshold beyond which the joint moments are large enough to drive the deformity. It will also be important to extend the coronal plane analysis performed here to three dimensions, to allow a full accounting of the effect of coronal, sagittal and transverse plane moments and forces on deformity progression [29]. Such biomechanical understanding would provide useful insights into the effectiveness of bracing and other treatment strategies in individual patients.
Torso segment masses were used to estimate joint moments in the thoracic and lumbar spines of scoliosis patients. This study suggests that significant gravity-induced coronal plane joint moments act on the spines of scoliosis patients. Coronal plane joint moments of up to 7 Nm are present at the apical level of the major curve, increasing to an estimated 9 Nm in the upright position. There is substantial variation in joint moment distributions between patients with apparently similar curve type and magnitude. Although the relationship between magnitude of moment and deformity severity in individual patients remains unclear, gravity is a potential driving factor in coronal plane scoliosis progression, which may help to explain the mechanics of AIS. In terms of clinical implications for deformity correction, this study suggests that quite large forces for both internal instrumentation and external bracing are required to produce moments capable of countering those induced by gravity, and further (three dimensional) development of the approach used here may provide a quantitative foundation for treatments aimed at halting and correcting deformity progression.
'Segmental mass' here refers to the mass of an imaginary slice through the torso at the level of the vertebra in question. The thickness of the slice is shown in Fig. 1(c).
In Engineering Mechanics a moment is a twisting torque due to a force acting at a distance from a centre of rotation. Since the force is due to gravity acting on segments of body mass and the centres of rotation are spinal joints we refer to these moments as gravity-induced joint moments.
Additional file 1: Individual Coronal Plane Joint Moment Plots (Anterior-Posterior views on reformatted CT image). The scale shows joint moments in Nm and + ve is a clockwise moment. (DOCX 2095 kb)
All authors of the manuscript were fully involved in the study and preparation of the manuscript. The material within has not been and will not be submitted for publication elsewhere. Authors' contributions are as follows: CA designed the study. GA, RL and MI collected and maintained data. BK carried out torso segment reconstruction and calculation of joint moments, analysing the data and writing the manuscript. MP, CA and GP helped in analysing and interpreting data. MP and CA helped in writing the manuscript. MP and CA helped in reading and editing the manuscript. All authors read and approved the final manuscript.
Paediatric Spine Research Group, Institute of Health and Biomedical Innovation, Queensland University of Technology and Mater Health Services, Brisbane, 4101, Queensland, Australia
Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, QLD, Australia
1. Harrington PR. The etiology of idiopathic scoliosis. Clin Orthop Relat Res. 1977;(126):17–25.
2. Goldberg CJ, Moore DP, Fogarty EE, Dowling FE. Scoliosis: a review. Pediatr Surg Int. 2008;24:129–44.
3. Nachemson A, Sahlstrand T. Etiologic factors in adolescent idiopathic scoliosis. Spine. 1977;2:176–84.
4. Lowe T, Edgar M, Margulies J, Miller N, Raso V, Reinker K, et al. Etiology of idiopathic scoliosis: current trends in research. J Bone Joint Surg Am. 2000;82-A:1157–68.
5. Stokes I, Burwell RG, Dangerfield P. Biomechanical spinal growth modulation and progressive adolescent scoliosis - a test of the 'vicious cycle' pathogenetic hypothesis: summary of an electronic focus group debate of the IBSE. Scoliosis. 2006;1:16.
6. Villemure I, Aubin C, Dansereau J, Labelle H. Simulation of progressive deformities in adolescent idiopathic scoliosis using a biomechanical model integrating vertebral growth modulation. J Biomech Eng. 2002;124:784–90.
7. Schultz AB. Biomechanical factors in the progression of idiopathic scoliosis. Ann Biomed Eng. 1984;12:621–30.
8. Adam CJ, Askin GN, Pearcy MJ. Gravity-induced torque and intravertebral rotation in idiopathic scoliosis. Spine. 2008;33:E30–37.
9. Stokes IA. Analysis and simulation of progressive adolescent scoliosis by biomechanical growth modulation. Eur Spine J. 2007;16:1621–8.
10. Stokes IA, Gardner-Morse M. Muscle activation strategies and symmetry of spinal loading in the lumbar spine with scoliosis. Spine. 2004;29:2103–7.
11. Stokes IA. Analysis of symmetry of vertebral body loading consequent to lateral spinal curvature. Spine. 1997;22:2495–503.
12. Lenke LG, Betz RR, Harms J, Bridwell KH, Clements DH, Lowe TG, et al. Adolescent idiopathic scoliosis: a new classification to determine extent of spinal arthrodesis. J Bone Joint Surg Am. 2001;83-A:1169–81.
13. Risser JC. The Iliac apophysis; an invaluable sign in the management of scoliosis. Clin Orthop. 1958;11:111–9.
14. Kamimura M, Kinoshita T, Itoh H, Yuzawa Y, Takahashi J, Hirabayashi H, et al. Preoperative CT examination for accurate and safe anterior spinal instrumentation surgery with endoscopic approach. J Spinal Disord Tech. 2002;15:47–51. discussion 51–42.
15. Schick D. Computed tomography radiation doses for paediatric scoliosis scans. Brisbane: Internal report commissioned by Paediatric Spine Research Group from Queensland Health Biomedical Technology Services; 2004.
16. Pace N, Ricci L, Negrini S. A comparison approach to explain risks related to X-ray imaging for scoliosis, 2012 SOSORT award winner. Scoliosis. 2013;8:11.
17. Ridler TW, Calvard S. Picture thresholding using an iterative selection method. IEEE Trans Syst Man Cybern. 1978;8:630–2.
18. Keenan BE, Izatt MT, Askin GN, Labrom RD, Pettet GJ, Pearcy MJ, et al. Segmental torso masses in adolescent idiopathic scoliosis. Clin Biomech (Bristol, Avon). 2014;29:773–9.
19. Pearsall DJ, Reid JG, Livingston LA. Segmental inertial parameters of the human trunk as determined from computed tomography. Ann Biomed Eng. 1996;24:198–210.
20. Winter DA. Biomechanics and motor control of human movement. 4th ed. Canada: John Wiley & Sons, Inc.; 2009.
21. Barrios C, Cortes S, Perez-Encinas C, Escriva MD, Benet I, Burgos J, et al. Anthropometry and body composition profile of girls with nonsurgically treated adolescent idiopathic scoliosis. Spine. 2011;36:1470–7.
22. Ramirez M, Martinez-Llorens J, Sanchez JF, Bago J, Molina A, Gea J, et al. Body composition in adolescent idiopathic scoliosis. Eur Spine J. 2013;22:324–9.
23. Keenan BE, Izatt MT, Askin GN, Labrom RD, Pearcy MJ, Adam CJ. Supine to standing Cobb angle change in idiopathic scoliosis: the effect of endplate pre-selection. Scoliosis. 2014;9:16.
24. Torell G, Nachemson A, Haderspeck-Grib K, Schultz A. Standing and supine Cobb measures in girls with idiopathic scoliosis. Spine. 1985;10:425–7.
25. Watkins R, Watkins 3rd R, Williams L, Ahlbrand S, Garcia R, Karamanian A, et al. Stability provided by the sternum and rib cage in the thoracic spine. Spine. 2005;30:1283–6.
26. Nachemson AL, Schultz AB, Berkson MH. Mechanical properties of human lumbar spine motion segments. Influence of age, sex, disc level, and degeneration. Spine. 1979;4:1–8.
27. Kelly BP, Bennett CR. Design and validation of a novel Cartesian biomechanical testing system with coordinated 6DOF real-time load control: application to the lumbar spine (L1-S, L4-L5). J Biomech. 2013;46:1948–54.
28. Macintosh JE, Pearcy MJ, Bogduk N. The axial torque of the lumbar back muscles: torsion strength of the back muscles. Aust N Z J Surg. 1993;63:205–12.
29. Sangole AP, Aubin CE, Labelle H, Stokes IAF, Lenke LG, Jackson R, et al. Three-dimensional classification of thoracic scoliotic curves. Spine. 2008;34:91–9.
Sat, 22 Nov 2014
Within this instrument, resides the Universe
When opportunity permits, I have been trying to teach my ten-year-old daughter Katara rudiments of algebra and group theory. Last night I posed this problem:
Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, she will be twice as old as Sue. How old are they now?
I have tried to teach Katara that these problems have several phases. In the first phase you translate the problem into algebra, and then in the second phase you manipulate the symbols, almost mechanically, until the answer pops out as if by magic.
There is a third phase, which is pedagogically and practically essential. This is to check that the solution is correct by translating the results back to the context of the original problem. It's surprising how often teachers neglect this step; it is as if a magician who had made a rabbit vanish from behind a screen then forgot to take away the screen to show the audience that the rabbit had vanished.
Katara set up the equations, not as I would have done, but using four unknowns, to represent the two ages today and the two ages in the future:
$$\begin{align} MT & = 3ST \\ MY & = 2SY \\ \end{align} $$
(!!MT!! here is the name of a single variable, not a product of !!M!! and !!T!!; the others should be understood similarly.)
"Good so far," I said, "but you have four unknowns and only two equations. You need to find two more relationships between the unknowns." She thought a bit and then wrote down the other two relations:
$$\begin{align} MY & = MT + 2 \\ SY & = ST + 2 \end{align} $$
I would have written two equations in two unknowns:
$$\begin{align} M_T & = 3S_T\\ M_T+2 & = 2(S_T + 2) \end{align} $$
but one of the best things about mathematics is that there are many ways to solve each problem, and no method is privileged above any other except perhaps for reasons of practicality. Katara's translation is different from what I would have done, and it requires more work in phase 2, but it is correct, and I am not going to tell her to do it my way. The method works both ways; this is one of its best features. If the problem can be solved by thinking of it as a problem in two unknowns, then it can also be solved by thinking of it as a problem in four or in eleven unknowns. You need to find more relationships, but they must exist and they can be found.
Katara may eventually want to learn a technically easier way to do it, but to teach that right now would be what programmers call a premature optimization. If her formulation of the problem requires more symbol manipulation than what I would have done, that is all right; she needs practice manipulating the symbols anyway.
She went ahead with the manipulations, reducing the system of four equations to three, then two and then one, solving the one equation to find the value of the single remaining unknown, and then substituting that value back to find the other unknowns. One nice thing about these simple problems is that when the solution is correct you can see it at a glance: Mary is six years old and Sue is two, and in two years they will be eight and four. Katara loves picking values for the unknowns ahead of time, writing down a random set of relations among those values, and then working the method and seeing the correct answer pop out. I remember being endlessly delighted by almost the same thing when I was a little older than her. In The Dying Earth Jack Vance writes of a wizard who travels to an alternate universe to learn from the master "the secret of renewed youth, many spells of the ancients, and a strange abstract lore that Pandelume termed 'Mathematics.'"
"I find herein a wonderful beauty," he told Pandelume. "This is no science, this is art, where equations fall away to elements like resolving chords, and where always prevails a symmetry either explicit or multiplex, but always of a crystalline serenity."
After Katara had solved this problem, I asked if she was game for something a little weird, and she said she was, so I asked her:
Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, they will be the same age. How old are they now?
"WHAAAAAT?" she said. She has a good number sense, and immediately saw that this was a strange set of conditions. (If they aren't the same age now, how can they be the same age in two years?) She asked me what would happen. I said (truthfully) that I wasn't sure, and suggested she work through it to find out. So she set up the equations as before and worked out the solution, which is obvious once you see it: Both girls are zero years old today, and zero is three times as old as zero. Katara was thrilled and delighted, and shared her discovery with her mother and her aunt.
There are some powerful lessons here. One is that the method works even when the conditions seem to make no sense; often the results pop out just the same, and can sometimes make sense of problems that seem ill-posed or impossible. Once you have set up the equations, you can just push the symbols around and the answer will emerge, like a familiar building approached through a fog.
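For instance, here is one way the symbols fall out if you run her four-unknown setup on the second problem, substituting !!MY = MT + 2!! and !!SY = ST + 2!! into !!MY = SY!! and then using !!MT = 3ST!!:

$$\begin{align} MT + 2 & = ST + 2 \\ MT & = ST \\ 3ST & = ST \\ ST & = 0, \quad MT = 0 \end{align}$$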
But another lesson, only hinted at so far, is that mathematics has its own way of understanding things, and this is not always the way that humans understand them. Goethe famously said that whatever you say to mathematicians, they immediately translate it into their own language and then it is something different; I think this is exactly what he meant.
In this case it is not too much of a stretch to agree that Mary is three times as old as Sue when they are both zero years old. But in the future I plan to give Katara a problem that requires Mary and Sue to have negative ages—say that Mary is twice as old as Sue today, but in three years Sue will be twice as old—to demonstrate that the answer that pops out may not be a reasonable one, or that the original translation into mathematics can lose essential features of the original problem. The solution that says that !!M_T=-2, S_T=-1 !! is mathematically irreproachable, and if the original problem had been posed as "Find two numbers such that…" it would be perfectly correct. But translated back to the original context of a problem that asks about the ages of two sisters, the solution is unacceptable. This is the point of the joke about the spherical cow.
Metallurgical and Materials Transactions B
February 2020, Volume 51, Issue 1, pp 45–53
MnS Precipitation Behavior of High-Sulfur Microalloyed Steel Under Sub-rapid Solidification Process
Wanlin Wang
Chenyang Zhu
Jie Zeng
Cheng Lu
Peisheng Lyu
Hairui Qian
Hui Xu
First Online: 12 December 2019
A typical high-sulfur microalloyed steel was investigated by a sub-rapid solidification process for grain refinement of the as-cast microstructure. The size and distribution characteristics of the MnS precipitates were analyzed. The variations in the dendrite morphology and secondary dendrite arm spacing (SDAS) under different cooling rates have been studied, which strongly influence the precipitation behavior of MnS. The 3D-morphology of MnS precipitates was revealed by a novel saturated picric acid deep-etching method. Most MnS precipitates with a length smaller than 5 μm were columnar or equiaxed in the corresponding dendrite zones under sub-rapid solidification conditions at cooling rates of 261 to 2484 K/s. Furthermore, an area scan analysis of the precipitates showed the number of small MnS per square millimeter with lengths lower than 3 μm decreased from 200,537 to 110,067. The percentage of large MnS with a length over 5 μm increased from 2.6 to 6.2 pct as the solidification condition changed from sub-rapid to air cooling. In addition, the size of MnS precipitates was found to depend linearly on the SDAS.
Manuscript submitted August 8, 2019.
High-strength medium carbon sulfur-containing microalloyed steels have been widely used in hot forging parts of automobiles, such as crankshafts and connecting rods, due to the energy savings from eliminating traditional quenching and tempering processes.[1,2] During the continuous casting process of this type of steel, the precipitation behavior of MnS is crucial, as MnS precipitates are good lubricants for improving the cutting performance of microalloyed steels. MnS precipitates in as-cast steel slabs can be typically classified according to the morphology: globular MnS (Type I); fine rod-like MnS (Type II); and angular MnS (Type III).[3] It is well known that the mechanical properties of high-sulfur steels are closely related to the MnS precipitates' shape and distribution.[4] In traditional continuous casting of sulfurized steels, the size of MnS precipitates is generally larger than 10 μm.[5] In order to obtain better cutting performance, the blooms require prolonged heat treatments to decompose the MnS precipitates into finer rod-like shapes (Type II) having a mean length lower than 5 μm.[6] These extended heat treatment processes consume significant amounts of additional energy. Therefore, new, less energy-intensive production methods that ensure finely dispersed MnS inclusions in high sulfur-containing microalloyed steels are necessary. Since increasing the solidification cooling rate lowers sulfur segregation and refines the as-cast microstructure, it could reduce the precipitation and growth of sulfides.[7,8]
As the only industrialized sub-rapid solidification process, strip casting is an important technological revolution for the steel industry, which can produce thin strips directly from the liquid metal. Strip casting has the potential to greatly reduce operating and investment costs through the elimination of multiple rolling steps.[9,10] Strip casting has been known to provide solutions for steels with difficult casting issues including macro-segregation, precipitation of large inclusions, and coarse as-cast structures. Due to the rapid cooling experienced during strip casting, the morphology of the as-cast microstructure can be significantly refined.[11,12] Electrical steels,[13] TRIP steels,[14] dual phase (DP) steels,[15] and other special steels with complex and non-uniform morphologies have been identified as potential products applicable for strip casting. Some past publications have reported the formation of fine manganese sulfides during rapid solidification of low-sulfur steels, such as stainless steels and high-strength low-alloy steels.[8,16]
However, there has been limited research related to strip casting of high-sulfur microalloyed steels. In particular, the relationship between the secondary dendrite arm spacing (SDAS), cooling rate, and mean length of MnS precipitates has yet to be studied, to the knowledge of the present authors. The aim of this work is to present a novel method for controlling MnS precipitation via a sub-rapid solidification process and to reveal the relationship between the distribution characteristics and size of MnS precipitates, the cooling rate, and the SDAS of the as-cast structure.
Experimental Arrangement
The steel sample was in an as-cast condition produced from bloom continuous casting. The typical high-sulfur microalloyed steel samples were cut into cylinders of diameter 7 mm and height 1.6 mm weighing about 5.0 g (± 0.05 g). The chemical compositions are shown in Table I. The samples were polished with an abrasive paper of grit number 400 to remove the oxidized surface and then cleaned in ethanol with ultrasonic agitation prior to the droplet solidification test.
The Main Chemical Composition of the High-Sulfur Microalloyed Steel (in Mass Percent)
Experimental Apparatus and Procedure
The experimental apparatus for the in-situ observation of the solidification phenomena of the molten metal droplets impinging onto a water-cooled copper substrate was modified from past publications.[10,17, 18, 19, 20, 21] A schematic illustration of the system is shown in Figure 1.
Schematic illustration of the droplet solidification test
The droplet solidification testing system consists of two parts: droplet ejection and data acquisition. The liquid droplet is obtained by heating the metal specimen using induction coils. The ejection of molten droplets to the copper mold is conducted through a pulse of high-purity Ar (99.999 vol pct). The atmosphere control system allows the control of the oxygen partial pressure. This system allows the metal to be melted and dropped onto the substrate surface under controlled temperature and atmosphere.
Prior to the experiment, the copper mold substrate is cleaned and polished with an abrasive paper of grit number 3500 to ensure comparable surface roughness for each experiment. The oxygen partial pressure is lower than 10−5 atm through a stream of high-purity Ar (99.999 vol pct). The metal specimen is placed within a quartz tube that has a small hole on the bottom. The specimen lies in the middle of the induction coil installed inside the bell jar, where a controlled atmosphere can be ensured. The sample is heated and melted using the induction furnace, and the temperature was measured with a pyrometer placed above the tube. A PID controller receives the temperature signals from the pyrometer and controls the target temperature by adjusting the power of the induction furnace. When the desired temperature is reached, the liquid droplet is ejected through the small hole at the bottom of the tube with the help of a pulse of high-purity Ar (99.999 pct). The droplet subsequently impinges onto the water-cooled copper substrate and solidifies. A charge coupled device (CCD) camera is placed adjacent to the bell jar to record the entire melting and solidification process of the sample.
The target temperature of the liquid metal before ejection is 1550 °C. In order to obtain samples with different cooling rates, one melt sample was retained in the quartz tube and subject to air cooling, and the other melt sample was ejected onto the surface of the water-cooled copper substrate for sub-rapid cooling.
Analysis Method
The solidified samples were cut into halves along the longitudinal direction and prepared through standard metallographic procedures for cross-sectional analysis. In addition, a novel saturated picric acid deep-etching method was developed to reveal the 3D-morphology of MnS precipitates. The samples were subject to morphological examinations using optical microscopy (OM, Jiangnan MR5000, China), scanning electron microscopy (SEM, TESCAN MIRA 3 LMU, Czech) equipped with X-ray energy-dispersive spectrometer (EDS, Oxford X-Max20, England), and electron probe micro-analysis (EPMA, JEOL JXA-8530F, City, Japan) equipped with wavelength dispersive X-ray spectrum system (WDS, XM-86030). The hardness of the samples was identified using a microhardness tester (HMV-2T, Japan). The number and size of MnS precipitates were analyzed by an inclusion automatic detection scanning electron microscopy (ASPEX). The secondary dendrite arm spacing and mean size of MnS precipitates in the as-cast microstructures were analyzed by GetData software from the OM and SEM images.
Droplet Process
A typical experiment conducted for the high-sulfur microalloyed steel droplet from ejection to solidification is provided in Figure 2. The sample is initially heated in the quartz and the initiation of ejection occurs after about 25.9 seconds (Figures 2(a) and (b)). The molten droplet then forms a hemispherical shape on the substrate after ejection and rapidly solidified (Figures 2(b) through (c)). Finally, the temperature of the sample cools down rapidly in less than 10 seconds (Figures 2(b) through (d)).
The droplet solidification process: (a) the heating before ejection, (b) the start of ejection, (c) and (d) the cooling on the water-chilled copper
The microstructure of the solidified droplet is dependent on the heat transfer conditions, and the corresponding dendritic morphology has a significant impact on the internal quality including segregation and porosities of the as-cast ingot.[22] The microalloyed steel droplet solidifies at the bottom in the upward direction away from the copper substrate towards the top (Figure 3(a)). The as-cast microstructure can be divided into three sections of fine, columnar, and equiaxed grain zones due to the different heat transfer conditions. The heat transfer direction is perpendicular to the surface of the mold and a small amount of convective heat transfer is also occurring with the atmosphere along the perpendicular direction of the droplet/gas interface. The morphological characteristics identified in the sub-rapid solidification droplet were consistent with industrial strip cast samples that also show three solidification zones similar to the present work.[11] The other air-cooled sample was solidified from the periphery to the core of the body within the quartz tube (Figure 3(b)). Contrary to the sub-rapid solidified sample, it showed two sections of columnar and equiaxed grain zones due to the much lower cooling rate. It is obvious that the dendrites and grains in the water cooling sample are much finer than the air-cooled sample.
Typical dendritic structures of the solidified samples: (a) sub-rapid solidified droplet, (b) quartz tube solidified sample
Identification of the SDAS
To quantify the dendrite size, the secondary dendrite arm spacing (SDAS) is closely related to the cooling process and utilized to characterize the solidification behavior. In particular, it can be used to evaluate the grain refinement and segregation degree.[23] The SDAS of as-solidified droplets measured in multiple dendritic regions is provided in Figure 3. An average of the multiple measurements was taken and the standard deviations calculated. Position 1 is the bottom of the columnar grain zone, position 2 is the top of the columnar grain zone for the sub-rapid solidified sample. Position 3 is the center of the equiaxed grain zone in the sub-rapid solidified sample. Position 4 is the equiaxed grain zone in the air-cooled sample. The cooling rates were estimated from the secondary dendrite arm spacing in the as-cast structure according to the following equation[24]:
$$ \lambda = \begin{cases} (169.1 - 720.9\,w_{[C]}) \cdot C_{R}^{-0.4935}, & 0 < w_{[C]} \le 0.15 \\ 143.9 \cdot C_{R}^{-0.3616} \cdot w_{[C]}^{(0.5501 - 1.996\,w_{[C]})}, & w_{[C]} \ge 0.15 \end{cases} $$
where the symbol λ represents the measured data of secondary dendrite arm spacing, CR is the cooling rate (K/s), and w[C] is the carbon content (mass pct).
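As a rough cross-check of the cooling rates quoted below, the second branch of the correlation (valid for w[C] ≥ 0.15) can be inverted to give CR from a measured SDAS. The short sketch below does this in Python; the carbon content is set to an assumed placeholder value rather than the actual composition in Table I.

```python
# Estimate the cooling rate from a measured SDAS by inverting the second branch
# of the correlation above (valid for w_C >= 0.15). The carbon content here is
# an assumed placeholder, not the measured composition of the studied steel.

def cooling_rate_from_sdas(sdas_um, w_c_pct):
    """sdas_um: measured SDAS in micrometres; w_c_pct: carbon content in mass pct."""
    prefactor = 143.9 * w_c_pct ** (0.5501 - 1.996 * w_c_pct)
    return (prefactor / sdas_um) ** (1.0 / 0.3616)          # K/s

for sdas in (11.56, 26.11, 42.96):     # measured SDAS values quoted in the text
    print(sdas, round(cooling_rate_from_sdas(sdas, w_c_pct=0.48)))
```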
The measured SDAS and corresponding calculated cooling rates are shown in Table II. The results indicate that the SDAS of the droplet increases from the bottom (11.56 μm) to the top (26.11 μm) as the heat transfer varies in the dendrite zone. For comparison, the SDAS of the quartz tube solidified sample increases significantly, with a value of 42.96 μm due to a slow cooling rate of 66 K/s. In the sub-rapid cooled droplet, the corresponding cooling rate of Position 1 near the bottom of the droplet is close to 2500 K/s and Position 3 at the top is decreased to 261 K/s. The cooling rates in these positions correlate to the sub-rapid solidification range.[25]
The SDAS and Corresponding Calculated Cooling Rates
SDAS (μm)
Cooling Rates (K/s)
Distribution of MnS
The distribution characteristics of the MnS precipitates are investigated by EPMA analysis, as shown in Figure 4. Owing to the serious microsegregation of S and Mn elements in the interdendritic regions, the substantial precipitation of MnS begins at 1410 °C in the later stages of solidification.[26] It can be found from Figure 4(a) that the MnS is distributed along the columnar direction adjacent to the dendrites in the interdendritic region under fast directional cooling conditions. The distribution of MnS shows a dramatic change with the appearance of a columnar-to-equiaxed transition (CET) at lower cooling rates. For the center equiaxed grain zones, the MnS is also distributed in an equiaxed pattern and concentrated along the dendrite boundaries (Figure 4(b)). In the sub-rapid solidification process, elemental segregation could be minimized by higher cooling rates, which can also influence the distribution behavior of precipitates.
EPMA analysis of MnS precipitates in (a) positions 2 and (b) position 4 of Fig. 3
Size of MnS
To reveal the 3D-morphology of MnS precipitates, the specimens were etched using a saturated picric acid deep-etching method (with a temperature of 70 °C, duration of 20 minutes). The deep-etched surface was cleaned using a polishing cloth with water. The 3D-morphology of MnS can be observed by SEM under secondary electron mode at four typical positions of Figure 3, as shown in Figures 5(d), 6(c) and (d). The EDS mapping of the elements S and Mn confirms the precipitation of MnS in the as-cast structure (Figures 5(e), (f) and 6(e), (f)). It can be observed that the morphology of MnS has different shapes/types at different positions. As shown in Figure 5(a) through (c), the size of MnS decreases significantly along the final to the initial solidification position. The MnS is a large globular-form (Type I) in position 4 (Figures 6(c) and (d)) and becomes a fine rod-like form (Type II) in position 3 (Figure 5(d)). The variation of MnS size is closely related to the cooling rate and SDAS, which will be discussed in Section III–F.
Typical SEM and EDS mapping analysis of MnS precipitates in sub-rapid solidified droplet: (a) SEM of position 3 in Fig. 3, (b) SEM of position 2 in Fig. 3, (c) SEM of position 1 in Fig. 3, (d) 3D-Morphological of MnS in position 3, (e) Element S distribution in (d), (f) Element Mn distribution in (d)
Typical SEM and EDS mapping analysis of MnS precipitates in air-cooled sample: (a) and (b) SEM of position 4 in Fig. 3, (c) and (d) 3D-Morphological of MnS in position 4 of Fig. 3, (e) Element S distribution in (d), (f) Elements Mn distribution in (d)
The mean length of MnS precipitates is analyzed through the GetData software after measuring at least thirty different precipitates along one position of the SEM image, and the results are shown in Table III. It can be found that the mean length of MnS precipitates in position 4 of the air-cooled sample (5.32 μm) is larger than in the sub-rapid solidified droplet (1.98-4.51 μm). The mean length of MnS precipitates in the sub-rapid solidified sample also varies from 1.98 to 4.51 μm due to the different cooling rates from the bottom to the top of the molten droplet.
Table III
The Mean Length of MnS Precipitates
Mean Length of MnS (μm)
In order to further compare the length of MnS precipitates in the sub-rapid and air-cooled samples, a statistical analysis of the size and number of MnS inclusions is shown in Figure 7. The area scanning (1.49 × 10^4 μm²) of MnS in positions 2 and 4 represents the sub-rapid and air cooling, respectively. The size distribution of MnS is mainly concentrated in the length range below 3 μm, which accounts for more than 75 pct of the observed inclusions. In this size range, the number of MnS inclusions per square millimeter (number density) is 200,537 and 110,067 for the sub-rapid and air-cooled samples, respectively. The number of small MnS inclusions increases with sub-rapid solidification, which suggests higher cooling rates could generate smaller sized MnS inclusions. In addition, the number density of MnS larger than 5 μm is 6376 and 9128, which accounts for 2.6 and 6.2 pct of the total number of inclusions under sub-rapid and air cooling conditions. The decrease in the large sized MnS further confirms that the sub-rapid solidification process could reduce the size and thus refine the MnS precipitates.
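For clarity, the number densities in Fig. 7 are simply the raw ASPEX particle counts divided by the scanned area; a small sketch of that conversion is given below. The counts used here are invented placeholders, not the measured data.

```python
# Convert raw ASPEX particle counts over the scanned area into a number density
# per square millimetre and a percentage share, as reported in Fig. 7.
# The counts below are invented placeholders, not the measured data.

SCANNED_AREA_UM2 = 1.49e4                    # scanned area quoted in the text
AREA_MM2 = SCANNED_AREA_UM2 / 1e6            # 1 mm^2 = 1e6 um^2

counts = {"< 3 um": 2990, "3-5 um": 700, "> 5 um": 95}
total = sum(counts.values())
for size_bin, n in counts.items():
    density_per_mm2 = n / AREA_MM2
    share_pct = 100.0 * n / total
    print(size_bin, round(density_per_mm2), round(share_pct, 1))
```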
Size and number distribution of MnS inclusions for sub-rapid cooling and air cooling conditions: (a) Number density of MnS, (b) Ratio of MnS
Moreover, in order to confirm the mechanical properties of MnS, microhardness tests have also been conducted. The results indicate that the average Vickers microhardness value of MnS (~ 167 HV) is much smaller than the metal matrix (~ 712 HV), which further shows the reason behind the good lubrication ability of MnS to improve the cutting performance of microalloyed steels.
Control Mechanism of MnS Precipitates
In order to better indicate the effect of cooling rate on MnS precipitation, past results from a traditional continuous casting process[26] were compared with the present work. The interactive relationship between the secondary dendrite arm spacing (SDAS), cooling rate, and mean length of the precipitated MnS is shown in Figure 8. The cooling rate during traditional continuous casting is only 3 K/s, which is almost 1/1000 of the cooling rate of the sub-rapid solidification process of the present work. Obviously, the size of precipitated MnS with a mean length of 13.75 μm in traditional continuous casting is much bigger than in the sub-rapid solidification process (1.98-4.51 μm). A distinct reduction in the size of MnS with decreasing SDAS or increasing cooling rate is observed, indicating a shorter growth time for MnS precipitates.
Interactive relationship between the secondary dendrite arm spacing, cooling rate, and mean length of the precipitated MnS
The quantitative relations between the mean length of MnS and the SDAS are further analyzed according to the obtained data of Figure 8. The results suggest that the relationship between the measured mean length of MnS precipitates and the measured SDAS conforms to the following formula:
$$ L = 0.098\lambda + 1.455, $$
where the symbol λ represents the secondary dendrite arm spacing (μm), and L stands for the mean length (μm) of MnS precipitates. The trend in MnS size closely follows the trend in SDAS in the as-cast structure.
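Chaining this linear fit with the SDAS correlation given earlier provides a rough estimate of the mean MnS length directly from the cooling rate. The sketch below only illustrates how the two published fits combine, and again uses an assumed placeholder carbon content.

```python
# Combine the SDAS correlation given earlier with the linear MnS fit above to
# estimate mean MnS length from the cooling rate. The carbon content is an
# assumed placeholder value.

def sdas_from_cooling_rate(cooling_rate_k_per_s, w_c_pct):
    """SDAS in micrometres (branch valid for w_C >= 0.15)."""
    return (143.9 * cooling_rate_k_per_s ** (-0.3616)
            * w_c_pct ** (0.5501 - 1.996 * w_c_pct))

def mns_mean_length(sdas_um):
    """Mean MnS length in micrometres from L = 0.098*lambda + 1.455."""
    return 0.098 * sdas_um + 1.455

for rate in (3, 66, 261, 2484):                 # K/s, values quoted in the text
    lam = sdas_from_cooling_rate(rate, w_c_pct=0.48)
    print(rate, round(lam, 1), round(mns_mean_length(lam), 2))
```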
As illustrated in Figure 9, the smaller MnS precipitates are distributed in columnar and/or equiaxed patterns in the corresponding dendrite zones with shorter SDAS, due to the higher cooling rates during sub-rapid solidification. The reduction of the SDAS limits the time and space for further growth of MnS precipitates and causes a transition of the MnS morphology from a large globular form (Type I) to a fine rod-like form (Type II).
Schematic illustration of MnS precipitates control mechanism with droplet solidification process
It should be noted that the control of MnS precipitation in high-sulfur microalloyed steels using a sub-rapid solidification method has received little attention before, because existing works usually focused on optimizing the traditional continuous casting process and its subsequent heat treatment. The size of MnS precipitates in the present work is significantly decreased compared to the typical industrial process. Controlling MnS precipitates to a length smaller than 5 μm is difficult through the conventional continuous casting process.
A typical high-sulfur microalloyed steel has been experimentally investigated by a novel sub-rapid solidification method by analyzing the droplet process, microstructure, cooling rate, distribution, and size of MnS precipitates. The following are the main conclusions:
The 3D-morphology of MnS precipitate was revealed by a simple saturated picric acid deep-etching method. Most MnS precipitates with a length smaller than 5 μm were columnar or equiaxed in the corresponding dendrite zones under sub-rapid solidification conditions at cooling rates of 261-2484 K/s
As the solidification condition changed from sub-rapid to air cooling, the number of small MnS per square millimeter with a length lower than 3 μm decreased from 200,537 to 110,067, and the percentage of the large MnS with a length over 5 μm increased from 2.6 to 6.2 pct.
The length of MnS reduced from 4.51 to 1.98 μm as the SDAS decreased from 26.11 to 11.56 μm due to the different cooling rates from the top to the bottom of the molten droplet. A formula which can be used to predict the size of MnS precipitates was established.
A novel experimental droplet solidification apparatus was developed to simulate the process of strip casting. The sub-rapid solidification process is probably a viable method for obtaining small rod-like MnS precipitates and thus may have good prospects for industrial application.
This work is supported by the National Natural Science Foundation of China (U1760202), Hunan Provincial Key Research and Development Program (2018WK2051), Opening Foundation of the State Key Laboratory of Advanced Metallurgy (KF19-04), Hunan Provincial Innovation Foundation for Postgraduate (CX2018B089), and Fundamental Research Funds for the Central Universities of Central South University (2018zzts018).
1. M.J. Balart, C.L. Davis, and M. Strangwood: Mater. Sci. Eng. A, 2000, vol. 284, pp. 1-13.
2. N. Tsunekage and H. Tsubakino: ISIJ Int., 2001, vol. 41, pp. 498-505.
3. C.E. Sims: Trans. Am. Inst. Min. Metall. Eng., 1959, vol. 215, pp. 367-93.
4. M. Wu, W. Fang, R.M. Chen, B. Jiang, H.B. Wang, Y.Z. Liu, and H.L. Liang: Mater. Sci. Eng. A, 2019, vol. 744, pp. 324-34.
5. X.F. Zhang, W.J. Lu, and R.S. Qin: Scripta Mater., 2013, vol. 69, pp. 453-56.
6. X.J. Shao, X.H. Wang, M. Jiang, W.J. Wang, and F.X. Huang: ISIJ Int., 2011, vol. 51, pp. 1995-2001.
7. Y. Wang, L. Zhang, H. Zhang, X. Zhao, S. Wang, and W. Yang: Metall. Mater. Trans. B, 2017, vol. 48, pp. 1004-13.
8. S. Malekjani, I.B. Timokhina, J. Wang, P.D. Hodgson, and N.E. Stanford: Mater. Sci. Eng. A, 2013, vol. 581, pp. 39-47.
9. H. Jiao, Y. Xu, W. Xiong, Y. Zhang, G. Cao, C. Li, J. Niu, and R.D.K. Misra: Mater. Design, 2017, vol. 136, pp. 23-33.
10. P. Nolli: PhD Thesis, Carnegie Mellon University, 2007.
11. Z.J. Wang, X.M. Huang, Y.W. Li, G.D. Wang, and H.T. Liu: Mater. Sci. Eng. A, 2019, vol. 747, pp. 185-96.
12. C.Y. Zhu, W.L. Wang, J. Zeng, C. Lu, L.J. Zhou, and J. Chang: ISIJ Int., 2019, vol. 59, pp. 880-88.
13. F. Fang, Y.X. Zhang, X. Lu, Y. Wang, M.F. Lan, G. Yuan, R.D.K. Misra, and G.D. Wang: Scripta Mater., 2018, vol. 147, pp. 33-36.
14. Z.P. Xiong, A.A. Saleh, R.K.W. Marceau, A.S. Taylor, N.E. Stanford, A.G. Kostryzhev, and E.V. Pereloma: Acta Mater., 2017, vol. 134, pp. 1-15.
15. H.S. Wang, G. Yuan, J. Kang, G.M. Cao, C.G. Li, R.D.K. Misra, and G.D. Wang: Mater. Sci. Eng. A, 2017, vol. 703, pp. 486-95.
16. T. Dorin, K. Wood, A. Taylor, P. Hodgson, and N. Stanford: Mater. Charact., 2016, vol. 112, pp. 259-68.
17. P. Nolli and A.W. Cramb: Metall. Mater. Trans. B, 2008, vol. 39, pp. 56-65.
18. P. Nolli and A.W. Cramb: ISIJ Int., 2007, vol. 47, pp. 1284-93.
19. W.L. Wang, C.Y. Zhu, C. Lu, J. Yu, and L.J. Zhou: Metall. Mater. Trans. A, 2018, vol. 49, pp. 5524-34.
20. C.Y. Zhu, W.L. Wang, and C. Lu: J. Alloys Compd., 2019, vol. 770, pp. 631-39.
21. C. Lu, W.L. Wang, J. Zeng, C.Y. Zhu, and J. Chang: Metall. Mater. Trans. B, 2019, vol. 50, pp. 77-85.
22. J. Zeng, W. Chen, W. Yan, Y. Yang, and A. McLean: Mater. Design, 2016, vol. 108, pp. 364-73.
23. A. Wagner, B.A. Shollock, and M. McLean: Mater. Sci. Eng. A, 2004, vol. 374, pp. 270-79.
24. Y.M. Won and B.G. Thomas: Metall. Mater. Trans. A, 2001, vol. 32, pp. 1755-67.
25. Y.T. Dai, Z.S. Xu, Z.P. Luo, K. Han, Q.J. Zhai, and H.X. Zheng: J. Magn. Magn. Mater., 2018, vol. 454, pp. 356-61.
26. J. Zeng, W.Q. Chen, and H.G. Zheng: Ironmak. Steelmak., 2017, vol. 44, pp. 676-84.
27. C.J. Song, W.B. Xia, J. Zhang, Y.Y. Guo, and Q.J. Zhai: Mater. Design, 2013, vol. 51, pp. 262-67.
© The Minerals, Metals & Materials Society and ASM International 2019
1.School of Metallurgy and EnvironmentCentral South UniversityChangshaPeople's Republic of China
2.National Center for International Research of Clean MetallurgyCentral South UniversityChangshaPeople's Republic of China
Wang, W., Zhu, C., Zeng, J. et al. Metall and Materi Trans B (2020) 51: 45. https://doi.org/10.1007/s11663-019-01752-4
Erbium Doped Fiber (EDF) - INTERCONNECT Element
Erbium doped fiber
optical, bidirectional
port 1 Optical Signal
General Properties
Default unit
Defines the name of the element.
Erbium Doped Fiber - -
Defines whether or not to display annotations on the schematic editor.
true - [true, false]
Defines whether or not the element is enabled.
Defines the element unique type (read only).
A brief description of the elements functionality.
Defines the element name prefix.
EDF - -
Defines the element model name.
Defines the element location or source in the library (custom or design kit).
local path
Defines the local path or working folder $LOCAL for the element.
An optional URL address pointing to the element online help.
Standard Properties
Defines the bidirectional or unidirectional element configuration.
bidirectional - [bidirectional, unidirectional]
The length of the waveguide.
6 m [0, +∞)
Er density
The Erbium ions in the doped fiber region.
5e+024 m^-3 (0, +∞)
Er lifetime
The fluorescence lifetime of the Erbium metastable level.
ms*
*std. unit is s
(0, +∞)
Er core radius
The radius of the Erbium-doped fiber region.
um*
*std. unit is m
Waveguide/Mode 1 Properties
orthogonal identifier 1
The first identifier used to track an orthogonal mode of an optical waveguide. For most waveguide, two orthogonal identifiers '1' and '2' are available (with the default labels 'TE' and 'TM' respectively).
1 - [1, +∞)
The label corresponding to the first orthogonal identifier.
X - -
The second identifier used to track an orthogonal mode of an optical waveguide. For most waveguide, two orthogonal identifiers '1' and '2' are available (with the default labels 'TE' and 'TM' respectively).
Y - -
Waveguide/Cross Sections Properties
load Er cross section from file
Defines whether or not to load wavelength dependent cross-section parameters from an input file or to use the currently stored values.
false - [true, false]
Er cross section filename
The file containing the wavelength dependent cross section parameters. Refer to the Implementation Details section for the format expected.
Er cross section table
The table containing the wavelength dependent absorption and emission cross section parameters.
<199,3> [0.9336e-006, 0.937218e-006, 0.940837e-006,...] - -
Waveguide/Confinement Factor Properties
confinement factor parameter
Defines whether the confinement values are defined as a table with wavelength dependent values or calculated from fiber specifications.
fiber - [fiber, table]
confinement factor mode field
Defines the mode field model used to calculate the confinement factor from the fiber specifications.
LP01 - [LP01, Marcuse, Petermann II, Myslinki]
fiber core radius
The core radius of the fiber.
fiber numerical aperture
The numerical aperture of the fiber.
0.23 - (0, +∞)
load confinement factor from file
Defines whether or not to load wavelength dependent confinement factor values from an input file or to use the currently stored values.
confinement factor filename
The file containing the wavelength dependent confinement factor values. Refer to the Implementation Details section for the format expected.
confinement factor table
The table containing the wavelength dependent confinement factor values.
<2> [1.55e-006, 1] - -
Waveguide/Loss Properties
background loss parameter
Defines whether the loss values are defined as a table with wavelength dependent values or a constant value.
constant - [constant, table]
constant background loss
The background loss of the fiber.
0 dB/m [0, +∞)
load background loss from file
Defines whether or not to load wavelength dependent loss from an input file or to use the currently stored values.
background loss filename
The file containing the wavelength dependent loss values. Refer to the Implementation Details section for the format expected.
background loss table
The table containing the wavelength dependent loss values.
Waveguide/Concentration Quenching Properties
concentration quenching model
Defines the concentration quenching model.
none - [none, homogeneous, inhomogeneous, combined]
cooperative upconversion coefficient
The homogeneous model cooperative upconversion coefficient.
1.1e-024 m^3/s [0, +∞)
relative number of clusters
The inhomogeneous model relative number of clusters.
0.2 ratio [0, 1]
Numerical Properties
enable noise
Defines whether or not to add ASE noise to the output signal.
noise center frequency
The center frequency of the generated noise spectrum.
THz*
*std. unit is Hz
noise bandwidth
The ASE noise spectral range.
[0, +∞)
noise bin width
Defines the noise bins frequency spacing.
GHz*
convert noise bins
Defines if noise bins are incorporated into the signal waveform.
automatic seed
Defines whether or not to automatically create an unique seed value for each instance of this element. The seed will be the same for each simulation run.
The value of the seed for the random number generator. A value zero recreates an unique seed for each simulation run.
convergence tolerance
This determines the convergence tolerance.
0.001 - [0, +∞)
maximum number of iterations
This determines the maximum number of iterations required to reach converges.
100 - [2, +∞)
minimum number of sections
Defines the number of longitudinal sections.
The minimum detectable signal power level.
dBm*
*std. unit is W
(-∞, +∞)
Er ions per cubic meter to ppm wt
Defines the conversion between erbium concentration expressed in ions per cubic meter and in weight ppm units.
10e+021 ppm-wt [0, +∞)
Diagnostic Properties
run diagnostic
Enables running detailed analysis on the signal, pump and noise propagation (frequency and longitudinal dependencies).
pump frequency threshold
The frequency threshold above which a channel is treated as a pump rather than a signal.
(2.99792e-83, +∞)
Diagnostic/Pumps Properties
pump plot format
Defines the plot format for the results.
1D - [1D, 2D]
pump plot kind
This option allows users to choose whether to plot in units of frequency or wavelength.
wavelength - [frequency, wavelength]
pump power unit
Defines the power unit to plot the results.
W - [W, dBm]
Diagnostic/Signals Properties
signal plot format
signal plot kind
frequency - [frequency, wavelength]
signal power unit
dBm - [W, dBm]
Diagnostic/Noise Properties
noise plot format
noise plot kind
noise power unit
Implementation Details
Erbium-doped fiber amplifiers (EDFA) by far dominate as part of the backbone of long-haul optical fiber communications due to their low noise and high, broadband optical gain [1]. An EDFA is an optical repeater device that is used to boost the intensity of optical signals being carried through a fiber optic communication system. A typical setup of a simple dual-pump EDFA is shown below:
The core of the EDFA is an optical fiber doped with the rare earth element erbium so that the glass fiber can absorb light at one frequency and emit light at another frequency.
The energy diagram of the Er3+ doped system is presented in the following figure. The pumping process takes place between the ground level 4I15/2 and the excited level 4I13/2 (1480 pump) or 4I11/2 (980 pump) with respective fractional populations n1, n2 and n3.
The level 4I11/2 has a very short lifetime (0.1-10 μs) and relaxes nonradiatively to level 4I13/2; the laser transition takes place between 4I13/2 and 4I15/2. The behavior of rare-earth-ion doped devices can therefore be described in terms of two-level rate equations for the population inversion density (n2), the pump field (PP), the signal field (PS), and the amplified spontaneous noise (PA):
$$ n_{2} = \frac{\sum_{i} \frac{\sigma_{i}^{a} \Gamma_{i}^{P} P_{P,i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\sigma_{j}^{a} \Gamma_{j}^{S} P_{S, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\sigma_{k}^{a} \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A}}{\sum_{i} \frac{\left(\sigma_{i}^{a}+\sigma_{i}^{e}\right) \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\left(\sigma_{j}^{a}+\sigma_{j}^{e}\right) \Gamma_{j}^{S} P_{S, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\left(\sigma_{k}^{a}+\sigma_{k}^{e}\right) \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A}+\frac{1}{\tau_{Er}}} $$
$$ \frac{\partial P_{P, i}^{\pm}}{\partial z}=\pm \Gamma_{i}^{P} N_{E_{r}}\left(\sigma_{i}^{e} n_{2}-\sigma_{i}^{a} n_{1}\right) P_{P, i}^{\pm} \mp \alpha_{i} P_{P, i}^{\pm} $$
$$ \frac{\partial P_{S, j}^{\pm}}{\partial z}=\pm \Gamma_{j}^{S} N_{E r}\left(\sigma_{j}^{e} n_{2}-\sigma_{j}^{a} n_{1}\right) P_{S, j}^{\pm} \mp \alpha_{j} P_{S, j}^{\pm} $$
$$ \frac{\partial P_{A, k}^{\pm}}{\partial z}=\pm \Gamma_{k}^{A} N_{Er}\left(\sigma_{k}^{e} n_{2}-\sigma_{k}^{a} n_{1}\right) P_{A, k}^{\pm} \pm \Gamma_{k}^{A} 2 h \nu_{k} \Delta\nu_{k} \sigma_{k}^{e} N_{Er} n_{2} \mp \alpha_{k} P_{A, k}^{\pm} $$
$$ \Gamma_{i}=\frac{\int_{s} \rho_{Er}(s) \Psi_{i}(s) d s}{\int_{s} \rho_{E r}(s) d s}, 1=n_{1}+n_{2} $$
where the Γ are the overlap factors between the light-field modes and the erbium distribution, also known as the confinement factors; σa and σe are the absorption and emission cross sections, respectively; A is the effective area of the erbium distribution; ± represents the forward-travelling and backward-travelling directions. High-gain erbium-doped fiber amplifiers require high erbium concentrations (a typical concentration is 0.7×10^19 cm^-3), which breaks the assumption of isolated erbium ions and induces dissipative ion-ion interaction via energy transfer between neighboring erbium ions, resulting in a reduction in the pump efficiency. This quenching effect is added to the amplifier model via the homogeneous up-conversion process and pair-induced quenching (PIQ) [2]. The homogeneous model assumes that the ions are evenly distributed and the population inversion is given by [3]:
$$ n_{2}=\frac{\sum_{i} \frac{\sigma_{i}^{a} \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\sigma_{j}^{a} \Gamma_{j}^{P} P_{S, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\sigma_{k}^{a} \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A}}{ \sum_i \frac{\left(\sigma_{i}^{a}+\sigma_{i}^{e}\right) \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\left(\sigma_{j}^{a}+\sigma_{j}^{e}\right) \Gamma_{j}^{P} P_{S, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\left(\sigma_{k}^{a}+\sigma_{k}^{e}\right) \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A}+\frac{1}{\tau_{E r}}+C_{u p} n_{2} N_{t}} $$
where Cup is the concentration-independent and host-dependent two-particle up-conversion constant measured in m3/s.
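The two inversion formulas above lend themselves to a compact numerical treatment. The following sketch is not the product implementation, and all numerical values (cross sections, powers, effective area, total erbium density) are illustrative placeholders rather than vendor defaults; it evaluates the steady-state inversion from the two-level formula, then re-solves it with the homogeneous upconversion term, which makes the equation implicit in n2.

```python
# Minimal sketch of the steady-state inversion formulas above (illustrative values only).
import numpy as np
from scipy.optimize import brentq

h = 6.62607015e-34                     # Planck constant [J*s]

def rate_sums(P, nu, sigma_a, sigma_e, Gamma, A_eff):
    """Absorption and saturation sums over all channels (pumps, signals, ASE bins)."""
    num = np.sum(sigma_a * Gamma * P / (h * nu * A_eff))
    sat = np.sum((sigma_a + sigma_e) * Gamma * P / (h * nu * A_eff))
    return num, sat

def n2_two_level(num, sat, tau_Er):
    """Fractional inversion without concentration quenching."""
    return num / (sat + 1.0 / tau_Er)

def n2_homogeneous(num, sat, tau_Er, C_up, N_t):
    """With homogeneous upconversion n2 appears on both sides, so solve the implicit
    equation; f(0) < 0 and f(1) > 0 (since sat >= num), so the root lies in [0, 1]."""
    f = lambda n2: n2 * (sat + 1.0 / tau_Er + C_up * n2 * N_t) - num
    return brentq(f, 0.0, 1.0)

# Hypothetical 980 nm pump + 1550 nm signal (placeholder cross sections and powers)
nu = 3e8 / np.array([980e-9, 1550e-9])
P = np.array([100e-3, 1e-3])           # W
sigma_a = np.array([2.5e-25, 2.6e-25]) # m^2
sigma_e = np.array([0.0, 3.4e-25])     # m^2
Gamma = np.array([0.6, 0.7])
num, sat = rate_sums(P, nu, sigma_a, sigma_e, Gamma, A_eff=1.3e-11)
print(n2_two_level(num, sat, tau_Er=10e-3))
print(n2_homogeneous(num, sat, tau_Er=10e-3, C_up=1.1e-24, N_t=1e25))
```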
In the inhomogeneous model [4], it is assumed that the ions are not evenly distributed and there are two distinct species: clustered ions and single ions that cannot interact with each other. The total population inversion is the sum of the average population inversion of clustered ions, n2p, and the average population inversion of single ions, n2SI, given by
$$ n_{2} = n_{2}^{p} + n_{2}^{SI} $$
$$ n_{2}^{p}=\frac{2 R \Psi}{\sum_{i} \frac{\left(\sigma_{i}^{a}+\sigma_{i}^{e}\right) \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\left(\sigma_{j}^{a}+\sigma_{j}^{e}\right) \Gamma_{j}^{P} P_{S, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\left(\sigma_{k}^{a}+\sigma_{k}^{e}\right) \Gamma_{k}^{A} P_{A k}^{\pm}}{h \nu_{k} A}+\frac{1}{\tau_{E r}}} $$
$$ n_{2}^{\mathrm{SI}}=\frac{(1-2 R) \Psi}{\sum_{i} \frac{\left(\sigma_{i}^{a}+\sigma_{i}^{e}\right) \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\left(\sigma_{j}^{a}+\sigma_{j}^{e}\right) \Gamma_{j}^{P} P_{S,j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\left(\sigma_{k}^{a}+\sigma_{k}^{e}\right) \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A}+\frac{1}{\tau_{Er}}} $$
$$ \Psi=\sum_{i} \frac{\sigma_{i}^{a} \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\sigma_{j}^{a} \Gamma_{j}^{P} P_{S, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\sigma_{k}^{a} \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A} $$
where R is the relative number of clusters.
In fact, the interaction between erbium ions should include cooperative up-conversion and pair-ion quenching and the degradation of gain performance caused by concentration quenching should arise from two contributions: cooperative up-conversion between single ions and cluster-induced quenching. In the combined model, the total population inversion is the sum of the average population inversion of clustered ions, n2p, and the average population inversion of single ions, n2SC, given by
$$ n_{2} = n_{2}^{p} + n_{2}^{SC} $$
$$ n_{2}^{s c}=\frac{(1-2 R) \Psi}{\sum_{i} \frac{\left(\sigma_{i}^{a}+\sigma_{i}^{e}\right) \Gamma_{i}^{P} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\left(\sigma_{j}^{a}+\sigma_{j}^{e}\right) \Gamma_{j}^P P_{P, j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\left(\sigma_{k}^{a}+\sigma_{k}^{e}\right) \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A}+\frac{1}{\tau_{E r}}+C_{u p}(1-2 R) n_{2}^{s c} N_{t}} $$
$$ \Psi=\sum_{i} \frac{\sigma_{i}^{a} \Gamma_{i}^{p} P_{P, i}^{\pm}}{h \nu_{i} A}+\sum_{j} \frac{\sigma_{j}^{a} \Gamma_{j}^{p} P_{S,j}^{\pm}}{h \nu_{j} A}+\sum_{k} \frac{\sigma_{k}^{a} \Gamma_{k}^{A} P_{A, k}^{\pm}}{h \nu_{k} A} $$
To solve the coupled equations in the general scheme of bi-directional pumping, the computation involves a two-point boundary value problem for the system of differential equations, as shown in the following figure. The numerical solution of the coupled differential equations can be obtained using Runge-Kutta methods together with a relaxation method. The Runge-Kutta integration propagates the light fields forward and backward along the fiber, using the boundary conditions for the signal, pump, forward amplified spontaneous emission (ASE), and backward ASE. The presence of counter-propagating backward ASE creates the necessity for this back-and-forth simulation. The relaxation method is then used to iterate until the desired precision is reached.
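As a rough illustration of this back-and-forth scheme (a sketch only, not the element's actual solver), the loop below alternates a forward sweep for the co-propagating powers with a backward sweep for the counter-propagating ones, freezing the opposite direction at its previous iterate until the fiber output stops changing. The function `dPdz` is a hypothetical user-supplied callback implementing the propagation equations above; it is assumed to return the rate of change of the corresponding field along the sweep direction.

```python
# Illustrative relaxation loop for the bidirectional boundary value problem (not the product code).
import numpy as np

def relax(dPdz, P_fwd_in, P_bwd_in, L, nz=200, tol=1e-3, max_iter=100):
    """P_fwd_in: forward powers known at z=0; P_bwd_in: backward powers known at z=L.
    dPdz(z, P_fwd, P_bwd, direction) returns the derivative along the sweep direction."""
    z = np.linspace(0.0, L, nz)
    dz = z[1] - z[0]
    P_fwd = np.tile(np.asarray(P_fwd_in, dtype=float), (nz, 1))
    P_bwd = np.tile(np.asarray(P_bwd_in, dtype=float), (nz, 1))
    for _ in range(max_iter):
        prev_out = P_fwd[-1].copy()
        for k in range(nz - 1):                    # forward sweep (Euler for brevity; RK4 in practice)
            P_fwd[k + 1] = P_fwd[k] + dz * dPdz(z[k], P_fwd[k], P_bwd[k], +1)
        for k in range(nz - 1, 0, -1):             # backward sweep from z=L toward z=0
            P_bwd[k - 1] = P_bwd[k] + dz * dPdz(z[k], P_fwd[k], P_bwd[k], -1)
        if np.max(np.abs(P_fwd[-1] - prev_out)) <= tol * (np.max(np.abs(P_fwd[-1])) + 1e-30):
            break                                  # converged: output powers stopped changing
    return z, P_fwd, P_bwd
```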
1. E. Desurvire, Erbium-doped fiber amplifiers: principles and applications, Wiley-Interscience, 2002.
2. C. Jiang, W. Hu and Q. Zeng, "Numerical analysis of concentration quenching model of Er3+-doped phosphate fiber amplifier," IEEE Journal of Quantum Electronics, vol. 39, pp. 1266-1271, 2003.
3. P. Blixt et al., "Concentration-dependent upconversion in Er3+-doped fiber amplifiers: Experiments and modeling," IEEE Photon. Technol. Lett., vol. 3, p. 996, 1991.
4. E. Delevaque et al., "Modeling of pair-induced quenching in erbium doped silicate fibers," IEEE Photon. Technol. Lett., vol. 5, pp. 73–75, Jan. 1993
Erbium doped fiber amplifier
Amplifiers - List of INTERCONNECT Elements
Optical Amplifier (AMP) - INTERCONNECT Element
INTERCONNECT elements that require additional licenses
INTERCONNECT product reference manual
Making Indirect Interactions Explicit in Networks
After reading about Ehresmann & Vanbremeersch's Memory Evolutive Systems and Robert Rosen's Relational Biology, I tried to come up with some concrete takeaways. The main theme seemed to be that category theory provides a language for explicitly describing indirect relationships in graphs.
Networks as Graphs
Networks, or collections of interacting components, are ubiquitous: societies are networks of people, which are networks of cells, which are networks of molecules, which are networks of particles. Networks are often modeled as weighted graphs, collections of vertices connected by edges, where each edge is assigned a weight (usually a real number). Each vertex $A$ represents a network component, each edge $A \xleftarrow{f} B$ represents direct interaction in which $A$ receives $f$ from $B$ (or $B$ sends $f$ to $A$), and the weight $w(f)$ quantifies some property of $f$. For example, if $A$ and $B$ are dominoes, then $f$ could be the action of $A$ being toppled by $B$, and $w(f)$ could be the probability that $A$ will fall if $B$ falls.
However, many networks have properties that are not immediately obvious from their graphical representations. Perhaps $B$ can be toppled by another domino, $C$, which is too far away to topple $A$. When $C$ falls, it initiates a chain of topples that causes $A$ to fall: $C$ topples $B$, and $B$ topples $A.$ There is no arrow $A \leftarrow C,$ though, because $C$ cannot itself hit $A.$
In real-world networks, hidden properties can be meaningful. For example, the targeted dissemination of information through social networks relies on the existence of indirect connections. To this end, it can be helpful to explicitly represent the indirect interactions that graph-theoretic models leave implicit.
Weighted Categories
Category theory [2], an approach to mathematics that describes mathematical objects in terms of their relationships to other objects (rather than their internal specifics), has been used to describe properties of biological systems [1,3-5] and can be used to restructure graphs so that all interactions, both direct and indirect, are made explicit. Weighted categories are similar to weighted graphs, except that vertices are called objects and edges are called arrows, the weights of arrows are functions rather than real numbers, and any two arrows with the same source and target need not be equivalent. Intuitively, a weighted category $\mathcal{C}$ can be described as
a collection $Ob(\mathcal{C})$ of objects, together with
a collection $Hom(\mathcal{C})$ of arrows $A \xleftarrow{f} B$ between objects, where each arrow $f$ points from a single sending object $B$ to a single receiving object $A$ and has weight $w(f)$.
In mathematical settings, arrows are also called homomorphisms, hence the notation $Hom(\mathcal{C}).$ The arrows of a category must additionally satisfy the following criteria:
For any sequence $A_1 \xleftarrow{f_1} \cdots \xleftarrow{f_n} A_{n+1}$ of head-to-toe adjacent arrows, there is a single composite arrow $A_1 \xleftarrow{f_1f_2\cdots f_n} A_{n+1}.$
For each object $X,$ there is an identity arrow $X \xleftarrow{i_X} X,$ and composite arrows are the same regardless of whether identity arrows are included.
The weights satisfy some weight composition rule $(\cdot)$ so that $w(f_1f_2 \cdots f_n) = w(f_1) \cdot w(f_2) \cdot \cdots \cdot w(f_n).$
A careful reader will notice ambiguity in criterion (1):
the arrow obtained by composing $f_1,$ $f_2,$ and $f_3$ is called $f_1f_2f_3,$
the arrow obtained by composing $f_1$ and $f_2f_3$ is called $f_1f_2f_3,$ and
the arrow obtained by composing $f_1f_2$ and $f_3$ is called $f_1f_2f_3.$
This ambiguity is deliberate, because the actual mathematical definition of a category requires that all three "versions" of $f_1f_2f_3$ are indistinguishable and hence coincide.
A concrete example of criterion (2) is that the arrows $f_1 f_2,$ $i_1 f_1 f_2,$ $f_1 i_2 f_2,$ $f_1 f_2 i_3,$ $i_1 f_1 i_2 f_2,$ $i_1 f_1 f_2 i_3,$ and $i_1 f_1 i_2 f_2 i_3$ all coincide.
In criterion (3), simple weight rules include, but are not limited to, addition or multiplication. A straightforward consequence of (3) is that all identity arrows are assigned the same weight, called the identity weight.
Converting Graphs to Categories
Given a weighted graph network model, where vertices in a set $V$ represent network components and edges in a set $E$ represent direct interactions, one can generate a weighted category $\mathcal{C}$ whose arrows represent all communication, both direct and indirect, that occurs in the network. One does this by choosing $Ob(\mathcal{C}) = V$ and generating $Hom(\mathcal{C})$ according to the following steps:
Include edge arrows. For each edge $f$ in $E$, include $f$ in $Hom(\mathcal{C}).$
Include identity arrows. For each vertex $X$ in $V$, include $i_X$ in $Hom(\mathcal{C}).$ Identity edges and identity arrows must coincide.
Include composite arrows. For each sequence of head-to-toe adjacent arrows $f_1, f_2 ..., f_n$ in $Hom(\mathcal{C}),$ include the composite arrow $f_1 f_2 \cdots f_n$ in $Hom(\mathcal{C}).$
The weights are specified entirely by our choice of weight composition rule: since identity arrows are all assigned the identity weight and all other noncomposite arrows can be identified as edges, the rule $w(f_1f_2 \cdots f_n) = w(f_1) \cdot w(f_2) \cdot \cdots \cdot w(f_n)$ specifies the weight of every composite arrow.
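A small sketch of this construction (illustrative code, not taken from the cited references): starting from a weighted graph, it enumerates identity, edge and composite arrows up to a chosen path length and weights composites with a user-supplied composition rule. The domino example from earlier is used, so weights are toppling probabilities and the rule is multiplication (which tacitly assumes the topples are independent).

```python
def generate_arrows(vertices, edges, compose, identity_weight=1.0, max_length=4):
    """edges: list of (target, source, weight) triples, read as 'target <- source'.
    Returns identity, edge and composite arrows as (target, source, weight) triples."""
    arrows = [(v, v, identity_weight) for v in vertices]          # identity arrows
    paths = [((t, s), w) for (t, s, w) in edges]                  # length-1 arrows (the edges)
    arrows += [(t, s, w) for (t, s), w in paths]
    for _ in range(max_length - 1):                               # extend each path by one more edge
        paths = [((t1, s2), compose(w1, w2))
                 for (t1, s1), w1 in paths
                 for (t2, s2, w2) in edges if s1 == t2]
        arrows += [(t, s, w) for (t, s), w in paths]
    return arrows

# Dominoes: C topples B (prob. 0.8) and B topples A (prob. 0.9); composing by multiplication
# produces the indirect arrow A <- C with weight 0.9 * 0.8 = 0.72.
edges = [('A', 'B', 0.9), ('B', 'C', 0.8)]
print(generate_arrows(['A', 'B', 'C'], edges, compose=lambda w1, w2: w1 * w2))
```

In a graph with directed cycles the number of composite arrows grows without bound, which is why the sketch caps the path length.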
The choice of weight composition rule often depends on the modeling context. For example, suppose that in the category of economic entities, $A$ is a manufacturer, $B$ is a middleman, and $C$ is a customer who wants a product from the manufacturer $A.$ In this category, each arrow might represent a sale, and the weight of an arrow might be the amount spent by the buyer. In this case, the amount spent by the final buyer in a sequence of exchanges depends only on the final exchange, so the weight composition rule should be chosen as $w(f) \cdot w(g) = w(f)$ for composable arrows $f$ and $g,$ so that
$\begin{align*} w(f_1f_2 \cdots f_n) = w(f_1) \cdot w(f_2) \cdot \cdots \cdot w(f_n) = w(f_1). \end{align*}$
[1] Ehresmann, A. C., & Vanbremeersch, J. P. (2007). Memory evolutive systems; hierarchy, emergence, cognition (Vol. 4). Elsevier.
[2] Eilenberg, S., & MacLane, S. (1945). General theory of natural equivalences. Transactions of the American Mathematical Society, 58, 231-294.
[3] Rosen, R. (1958). The representation of biological systems from the standpoint of the theory of categories. The bulletin of mathematical biophysics, 20(4), 317-341.
[4] Rosen, R. (1958). A relational theory of biological systems. The bulletin of mathematical biophysics, 20(3), 245-260.
[5] Rosen, R. (1959). A relational theory of biological systems II. The bulletin of mathematical biophysics, 21(2), 109-128
Tags: Blog, Category Theory
Symmetry analysis of a Lane-Emden-Klein-Gordon-Fock system with central symmetry
DCDS-S Home
Closed-form solutions for the Lucas-Uzawa growth model with logarithmic utility preferences via the partial Hamiltonian approach
August 2018, 11(4): 655-666. doi: 10.3934/dcdss.2018040
Conditional symmetries of nonlinear third-order ordinary differential equations
Aeeman Fatima a, F. M. Mahomed b and Chaudry Masood Khalique a
International Institute for Symmetry Analysis and Mathematical Modeling, North-West University, Mafikeng Campus, P Bag X2046, Mafikeng, South Africa
School of Computer Science and Applied Mathematics DST-NRF Centre of Excellence in Mathematical and Statistical Sciences, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa
* Corresponding author: Aeeman Fatima.
Received December 2016 Revised April 2017 Published November 2017
In this work, we take as our base scalar second-order ordinary differential equations (ODEs) which have seven equivalence classes with each class possessing three Lie point symmetries. We show how one can calculate the conditional symmetries of third-order non-linear ODEs subject to root second-order nonlinear ODEs which admit three point symmetries. Moreover, we show when scalar second-order ODEs taken as first integrals or conditional first integrals are inherited as Lie point symmetries and as conditional symmetries of the derived third-order ODE. Furthermore, the derived scalar nonlinear third-order ODEs without substitution are considered for their conditional symmetries subject to root second-order ODEs having three symmetries.
Keywords: Lie point symmetries, conditional symmetries, first integrals, three symmetry equations.
Mathematics Subject Classification: 34A05, 22E05, 22E45, 22E60, 22E70.
Citation: Aeeman Fatima, F. M. Mahomed, Chaudry Masood Khalique. Conditional symmetries of nonlinear third-order ordinary differential equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 655-666. doi: 10.3934/dcdss.2018040
B. Abraham-Shrauner, K. S. Govinder and P. G. L. Leach, Integration of second order ordinary differential equations not possessing Lie point symmetries, Phys. Lett. A, 203 (1995), 169-174. doi: 10.1016/0375-9601(95)00426-4. Google Scholar
D. J. Arrigo and J. M. Hill, Nonclassical symmetries for nonlinear diffusion and absorption, Stud. Appl. Math., 94 (1995), 21-39. doi: 10.1002/sapm199594121. Google Scholar
G. W. Bluman and J. D. Cole, The general similarity solution of the heat equation, J. Math. Mech., 18 (1969), 1025-1042. Google Scholar
S. S. Chern, Sur la géométrie d'une équation différentielle du troisième ordre, CR Acad Sci Paris, 1937. Google Scholar
S. S. Chern, The geometry of the differential equation $ y'''=F(x, y, y', y'')$, Sci. Rep. Nat. Tsing Hua Univ., 4 (1940), 97-111. Google Scholar
R. Cherniha and M. Henkel, On non-linear partial differential equations with an infinite-dimensional conditional symmetry, J. Math. Anal. Appl., 298 (2004), 487-500. doi: 10.1016/j.jmaa.2004.05.038. Google Scholar
R. Cherniha and O. Pliukhin, New conditional symmetries and exact solutions of reaction-diffusion-convection equations with exponential nonlinearities, J. Math. Anal. Appl., 403 (2013), 23-37. doi: 10.1016/j.jmaa.2013.02.010. Google Scholar
P. A. Clarkson, Nonclassical symmetry reductions of the Boussinesq equation, Chaos Solitons Fractals, 5 (1995), 2261-2301. doi: 10.1016/0960-0779(94)E0099-B. Google Scholar
P. A. Clarkson, Nonclassical symmetry reductions of nonlinear partial differential equations, Math. Comput. Model., 18 (1993), 45-68. doi: 10.1016/0895-7177(93)90214-J. Google Scholar
P. L. Da Silva and I. L. Freire, Symmetry analysis of a class of autonomous even-order ordinary differential equations, IMA J. Appl. Math., 80 (2015), 1739-1758, arXiv: 1311.0313v2 [mathph] 7 march 2014. doi: 10.1093/imamat/hxv014. Google Scholar
A. Fatima and F. M. Mahomed, Conditional symmetries for ordinary differential equations and applications, Int. J. Non-Linear Mech., 67 (2014), 95-105. doi: 10.1016/j.ijnonlinmec.2014.08.013. Google Scholar
W. I. Fushchich, Conditional symmetry of equations of nonlinear mathematical physics, Ukrain. Math. Zh., 43 (1991), 1456-1470. doi: 10.1007/BF01067273. Google Scholar
G. Gaeta, Conditional symmetries and conditional constants of motion for dynamical systems, Report of the Centre de Physique Theorique Ecole Polytechnique, Palaiseau France, 1 (1993), 1-24. Google Scholar
A. Goriely, Integrability and Nonintegrability of Dynamical Systems, Advanced Series in Nonlinear Dynamics, 19. World Scientific Publishing Co. , Inc. , River Edge, NJ, 2001. doi: 10.1142/9789812811943. Google Scholar
G. Grebot, The characterization of third order ordinary differential equations admitting a transitive fibre-preserving point symmetry group, J. Math. Anal. Appl., 206 (1997), 364-388. doi: 10.1006/jmaa.1997.5219. Google Scholar
N. H. Ibragimov and S. V. Meleshko, Linearization of third order ordinary differential equations by point and contact transformations, J. Math. Anal. Appl., 308 (2005), 266-289. doi: 10.1016/j.jmaa.2005.01.025. Google Scholar
N. H. Ibragimov, S. V. Meleshko and S. Suksern, Linearization of fourth order ordinary differential equation by point transformations, J. Phys. A, 41 (2008), 235206, 19 pp. doi: 10.1088/1751-8113/41/23/235206. Google Scholar
A. H. Kara and F. M. Mahomed, A Basis of conservation laws for partial differential equations, J. Nonlinear Math. Phys., 9 (2002), 60-72. doi: 10.2991/jnmp.2002.9.s2.6. Google Scholar
M. Kunzinger and R. O. Popovych, Generalized conditional symmetries of evolution equations, J. Math. Anal. Appl., 379 (2011), 444-460. doi: 10.1016/j.jmaa.2011.01.027. Google Scholar
P. G. L. Leach, Equivalence classes of second-order ordinary differential equations with three-dimensional Lie algebras of point symmetries and linearisation, J. Math. Anal. Appl., 284 (2003), 31-48. doi: 10.1016/S0022-247X(03)00147-1. Google Scholar
S. Lie, Lectures on Differential Equations with Known Infinitesimal Transformations, Leipzig, Teubner, 1891 (in German; Lie's lectures prepared by G. Scheffers). Google Scholar
F. M. Mahomed, I. Naeem and A. Qadir, Conditional linearizability criteria for a system of third-order ordinary differential equations, Nonlinear Anal. B: Real World Appl., 10 (2009), 3404-3412. doi: 10.1016/j.nonrwa.2008.09.021. Google Scholar
F. M. Mahomed, Symmetry group classification of ordinary differential equations: Survey of some results, Math. Meth. Appl. Sci., 30 (2007), 1995-2012. doi: 10.1002/mma.934. Google Scholar
F. M. Mahomed and A. Qadir, Classification of ordinary differential equations by conditional linearizability and symmetry, Commun. Nonlinear Sci. Numer. Simulat., 17 (2012), 573-584. doi: 10.1016/j.cnsns.2011.06.012. Google Scholar
F. M. Mahomed and A. Qadir, Conditional linearizability criteria for third order ordinary differential equations, J. Nonlinear Math. Phys., 15 (2008), 124-133. doi: 10.2991/jnmp.2008.15.s1.11. Google Scholar
F. M. Mahomed and A. Qadir, Conditional linearizability of fourth-order semilinear ordinary differential equations, J. Nonlinear Math. Phys., 16 (2009), 165-178. doi: 10.1142/S140292510900039X. Google Scholar
F. M. Mahomed and P. G. L. Leach, Symmetry Lie algebras of $n$th order ordinary differential equations, J. Math. Anal. Appl., 151 (1990), 80-107. doi: 10.1016/0022-247X(90)90244-A. Google Scholar
S. V. Meleshko, On linearization of third order ordinary differential equation, J. Phys. A, 39 (2006), 15135-15145. doi: 10.1088/0305-4470/39/49/005. Google Scholar
S. Neut and M. Petitot, La géométrie de l'équation $ y'''=f(x,y,y',y'')$, CR Acad. Sci. Paris Sér. I., 335 (2002), 515-518. doi: 10.1016/S1631-073X(02)02507-4. Google Scholar
P. J. Olver and E. M. Vorob'ev, Nonclassical and conditional symmetries, in: N. H. Ibragiminov (Ed. ), CRC Handbook of Lie Group Analysis, vol. 3, CRC Press, Boca Raton, 1994. Google Scholar
E. Pucci and G. Saccomandi, Evolution equations, invariant surface conditions and functional separation of variables, Physica D: Nonlinear Phenomena, 139 (2000), 28-47. doi: 10.1016/S0167-2789(99)00224-9. Google Scholar
E. Pucci and G. Saccomandi, On the weak symmetry groups of partial differential equations, J. Math. Anal. Appl., 163 (1992), 588-598. doi: 10.1016/0022-247X(92)90269-J. Google Scholar
E. Pucci, Similarity reductions of partial differential equations, J. Phys. A, 25 (1992), 2631-2640. doi: 10.1088/0305-4470/25/9/032. Google Scholar
W. Sarlet, P. G. L. Leach and F. Cantrijn, First integrals versus configurational invariants and a weak form of complete integrability, Physica D, 17 (1985), 87-98. doi: 10.1016/0167-2789(85)90136-8. Google Scholar
S. Spichak and V. Stognii, Conditional symmetry and exact solutions of the Kramers equation, Nonlinear Math. Phys., 2 (1997), 450-454. Google Scholar
S. Suksern, N. H. Ibragimov and S. V. Meleshko, Criteria for the fourth order ordinary differential equations to be linearizable by contact transformations, Commun. Nonlinear Sci. Numer. Simul., 14 (2009), 2619-2628. doi: 10.1016/j.cnsns.2008.09.021. Google Scholar
C. Wafo Soh and F. M. Mahomed, Linearization criteria for a system of second-order ordinary differential equations, Int. J. Non-Linear Mech., 36 (2001), 671-677. doi: 10.1016/S0020-7462(00)00032-9. Google Scholar
Table Ⅰ. Lie group classification of scalar second-order equations in the real plane
$p=\partial /\partial x$ and $q=\partial /\partial y$
Algebra Canonical forms of generators Representative equations
$L_{1}$ $X_1=p$ $y''=g(y,y')$
$L_{2;1}^I$ $X_1=p,X_2=q$ $y''=g(y')$
$L_{2;1}^{II}$ $X_1=q,X_2=xp+yq$ $xy''=g(y')$
$L_{3;3}^I$ $X_1=p, X_2=q, X_3=xp+(x+y)q$ $y''=Ae^{-y'}$
$L_{3;6}^I$ $X_1=p, X_2=q, X_3=xp+ayq$ $y''=Ay'^{(a-2)/(a-1)}$
$L_{3;7}^I$ $X_1=p, X_2=q, X_3=(bx+y)p+(by-x)q$ $y''=A(1+y'^{2})^{\frac{3}{2}}e^{b\arctan y'}$
$L_{3;8}^I$ $X_1=q, X_2=xp+yq, X_3=2xyp+y^2q$ $xy''=Ay'^{3}-\frac{1}{2}y'$
$L_{3;8}^{II}$ $X_1=q, X_2=xp+yq, X_3=2xyp+(y^2+x^2)q$ $xy''=y'+y'^{3}+A(1+y'^{2})^{\frac{3}{2}}$
$L_{3;8}^{III}$ $X_1=q, X_2=xp+yq, X_3=2xyp+(y^2-x^2)q$ $xy''=y'-y'^{3}+A(1-y'^{2})^{\frac{3}{2}}$
$L_{3;9}$ $X_1=(1+x^2)p+xyq, X_2=xyp+(1+y^2)q, X_3=yp-xq$ $y''=A[\displaystyle{1+y'+(y-xy')^2\over 1+x^2+y^2}]^{3/2}$
Table Ⅱ. Inherited symmetries of derived scalar third-order equations
Representative 2nd-order ODE $A\ne0$ Derived 3rd-order ODE Inherited algebra
$y''=Ae^{-y'}$ $y'''+y''^{2}=0$ $L_{3;3}^{I}$
$y''=Ay'^{(a-2)/(a-1)}$ $y'y'''-\frac{a-2}{a-1}y''^2=0$ $L_{3;6}^I$
$y''=A(1+y'^{2})^{\frac{3}{2}}e^{b\arctan y'}$ $y'''-\frac{3y'+b}{1+y'^2}y''^2=0$ $L_{3;7}^I$
$xy''=Ay'^{3}-\frac{1}{2}y'$ $y'y'''-3y''^2=0$ $L_{3;8}^I$
$xy''=y'+y'^{3}+A(1+y'^{2})^{\frac{3}{2}}$ $y'''+y'''y'^2-3y'y''^2=0$ $L_{3;8}^{II}$
$xy''=y'-y'^{3}+A(1-y'^{2})^{\frac{3}{2}}$ $y'''-y'''y'^2-3y'y''^2=0$ $L_{3;8}^{III}$
$y''=AK^{3/2}$ $y'''=\frac32y''K^{-1}D_x K$ $L_{3;9}$
where $K=\displaystyle{1+y'+(y-xy')^2\over 1+x^2+y^2}$
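As a quick illustration of the symmetry condition underlying Tables I and II, the short sympy script below (a sketch, not the authors' code) verifies that the generator $X_3=xp+(x+y)q$ of $L_{3;3}^I$ satisfies the linearized point-symmetry condition for the representative equation $y''=Ae^{-y'}$, using the standard prolongation formulas.

```python
import sympy as sp

x, y, yp, A = sp.symbols('x y yp A')        # yp stands for y'
w = A * sp.exp(-yp)                         # representative equation written as y'' = w(x, y, y')

xi, eta = x, x + y                          # infinitesimals of X3 = x*p + (x + y)*q

def Dx(f):
    # total derivative along solutions: D_x f = f_x + y' f_y + y'' f_{y'}, with y'' replaced by w
    return sp.diff(f, x) + yp * sp.diff(f, y) + w * sp.diff(f, yp)

eta1 = Dx(eta) - yp * Dx(xi)                # first prolongation coefficient
eta2 = Dx(eta1) - w * Dx(xi)                # second prolongation coefficient (on solutions)

# linearized symmetry condition: eta2 - (xi*w_x + eta*w_y + eta1*w_{y'}) must vanish
cond = sp.simplify(eta2 - (xi * sp.diff(w, x) + eta * sp.diff(w, y) + eta1 * sp.diff(w, yp)))
print(cond)                                 # prints 0, so X3 is a point symmetry of y'' = A*exp(-y')
```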
Miriam Manoel, Patrícia Tempesta. Binary differential equations with symmetries. Discrete & Continuous Dynamical Systems - A, 2019, 39 (4) : 1957-1974. doi: 10.3934/dcds.2019082
Elena Celledoni, Brynjulf Owren. Preserving first integrals with symmetric Lie group methods. Discrete & Continuous Dynamical Systems - A, 2014, 34 (3) : 977-990. doi: 10.3934/dcds.2014.34.977
Juan Belmonte-Beitia, Víctor M. Pérez-García, Vadym Vekslerchik, Pedro J. Torres. Lie symmetries, qualitative analysis and exact solutions of nonlinear Schrödinger equations with inhomogeneous nonlinearities. Discrete & Continuous Dynamical Systems - B, 2008, 9 (2) : 221-233. doi: 10.3934/dcdsb.2008.9.221
Martin Oberlack, Andreas Rosteck. New statistical symmetries of the multi-point equations and its importance for turbulent scaling laws. Discrete & Continuous Dynamical Systems - S, 2010, 3 (3) : 451-471. doi: 10.3934/dcdss.2010.3.451
María Rosa, María de los Santos Bruzón, María de la Luz Gandarias. Lie symmetries and conservation laws of a Fisher equation with nonlinear convection term. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1331-1339. doi: 10.3934/dcdss.2015.8.1331
Carsten Collon, Joachim Rudolph, Frank Woittennek. Invariant feedback design for control systems with lie symmetries - A kinematic car example. Conference Publications, 2011, 2011 (Special) : 312-321. doi: 10.3934/proc.2011.2011.312
Wen-Xiu Ma. Conservation laws by symmetries and adjoint symmetries. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 707-721. doi: 10.3934/dcdss.2018044
María-Santos Bruzón, Elena Recio, Tamara-María Garrido, Rafael de la Rosa. Lie symmetries, conservation laws and exact solutions of a generalized quasilinear KdV equation with degenerate dispersion. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020222
Stephen Anco, Maria Rosa, Maria Luz Gandarias. Conservation laws and symmetries of time-dependent generalized KdV equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 607-615. doi: 10.3934/dcdss.2018035
M. Euler, N. Euler, M. C. Nucci. On nonlocal symmetries generated by recursion operators: Second-order evolution equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (8) : 4239-4247. doi: 10.3934/dcds.2017181
A. V. Bobylev, Vladimir Dorodnitsyn. Symmetries of evolution equations with non-local operators and applications to the Boltzmann equation. Discrete & Continuous Dynamical Systems - A, 2009, 24 (1) : 35-57. doi: 10.3934/dcds.2009.24.35
José F. Cariñena, Fernando Falceto, Manuel F. Rañada. Canonoid transformations and master symmetries. Journal of Geometric Mechanics, 2013, 5 (2) : 151-166. doi: 10.3934/jgm.2013.5.151
Olivier Brahic. Infinitesimal gauge symmetries of closed forms. Journal of Geometric Mechanics, 2011, 3 (3) : 277-312. doi: 10.3934/jgm.2011.3.277
L. Bakker, G. Conner. A class of generalized symmetries of smooth flows. Communications on Pure & Applied Analysis, 2004, 3 (2) : 183-195. doi: 10.3934/cpaa.2004.3.183
Michael Baake, John A. G. Roberts, Reem Yassawi. Reversing and extended symmetries of shift spaces. Discrete & Continuous Dynamical Systems - A, 2018, 38 (2) : 835-866. doi: 10.3934/dcds.2018036
Marin Kobilarov, Jerrold E. Marsden, Gaurav S. Sukhatme. Geometric discretization of nonholonomic systems with symmetries. Discrete & Continuous Dynamical Systems - S, 2010, 3 (1) : 61-84. doi: 10.3934/dcdss.2010.3.61
Júlio Cesar Santos Sampaio, Igor Leite Freire. Symmetries and solutions of a third order equation. Conference Publications, 2015, 2015 (special) : 981-989. doi: 10.3934/proc.2015.0981
Michael Hochman. Smooth symmetries of $\times a$-invariant sets. Journal of Modern Dynamics, 2018, 13: 187-197. doi: 10.3934/jmd.2018017
Giovanni De Matteis, Gianni Manno. Lie algebra symmetry analysis of the Helfrich and Willmore surface shape equations. Communications on Pure & Applied Analysis, 2014, 13 (1) : 453-481. doi: 10.3934/cpaa.2014.13.453
Michal Fečkan, Michal Pospíšil. Discretization of dynamical systems with first integrals. Discrete & Continuous Dynamical Systems - A, 2013, 33 (8) : 3543-3554. doi: 10.3934/dcds.2013.33.3543
Journal of Innovation and Entrepreneurship
A Systems View Across Time and Space
Innovation and service outsourcing: an empirical analysis based on data from Tunisian firms
Hanen Sdiri1 and
Mohamed Ayadi1
Journal of Innovation and Entrepreneurship - A Systems View Across Time and Space, 2016, 5:21
https://doi.org/10.1186/s13731-016-0050-z
© Sdiri and Ayadi. 2016
Received: 2 December 2015
Recently, outsourcing services has been an important component of the organizational strategy of service firms. However, most research studies mainly focus on analyzing the determining factors of outsourcing at the expense of its structural effects. The aim of this paper is to examine the extent to which outsourcing relationships can be a source of service innovation by using a sample of 108 Tunisian service firms. Specifically, we are interested in the domestic outsourcing of auxiliary activities. Our results support the evidence of positive effects of outsourcing service activities on the capacity of innovation. This suggests that outsourcing allows Tunisian service firms to create value, increase flexibility and improve the quality of their services.
Services sector
In a competitive context and in an uncertain economic environment, access to the best available technologies and the creation of value, among others, are two objectives that a service firm cannot always reach in-house by its own means. It is for this reason that many firms have resorted to new ways of managing the relationships with their environment. Indeed, the most frequently used organizational strategies are establishing new forms of collaboration with research centers or clients, using new methods of integration with suppliers and outsourcing an organization's own services (OECD 2005). Among these forms, the present paper focuses mainly on outsourcing as it represents an important potential source of innovation. Outsourcing allows access to the specialized technological competences of external organizations as well as sustaining research and development (R&D) activities more effectively in order to develop new products/services (i.e., by reducing costs, shrinking the time to market, increasing flexibility and enhancing quality) (Quinn 2000; Espino-Rodríguez and Padrón-Robaina 2004 and Carson 2007).
Therefore, after affecting industrial activities, the outsourcing approach now has an impact on the service sector. Indeed, this approach has grown with the development of technology-intensive sectors. Outsourcing is no longer new, as its forms are well developed in European countries. According to the Outsourcing Barometer published by Young (2010), 70 % of European firms resort to outsourcing. In the Tunisian context, for instance, outsourcing services has recently witnessed an outstanding expansion, with 77 % of Tunisian firms resorting to outsourcing (Outsourcing Barometer 2006). Engardio and Arndt (2006) indicate that 18.4 billion dollars of trade in information technologies and 11.4 billion dollars of company services were outsourced, representing 10 % of the potential market. In addition, the OECD 2005 report shows that the total number of positions that can be affected by outsourcing accounts for about 20 % of the employment in certain countries.
Rare are the empirical studies that analyze the relationship between outsourcing and performance, and these are restricted to the motivations that incentivize firms to outsource. Girma and Görg (2004) have shown that outsourcing is positively linked to labor productivity and total factor productivity. Also, Maskell et al. (2007) have concluded that outsourcing can offer firms not only lower costs but also better quality and access to innovation. Yet, this kind of analysis has never included emerging countries, Tunisia in particular. The present paper therefore aims at analyzing the effect of the domestic outsourcing of auxiliary activities on the development of innovations. This work is an attempt at showing the extent to which the domestic outsourcing of service activities leads to lower costs, higher production flexibility and better service quality for Tunisian service firms.
The remainder of the paper is organized as follows. 'Outsourcing service activities: related literature' section outlines a brief literature review. The 'Methods' section deals with the data, the variables measure and the models' economic specification. 'Results and discussion'. section analyzes the main econometric results. The last section is the 'Conclusions' section.
Context and definitions
Outsourcing implies the transfer of goods and services that were previously carried out in-house to an external and more specialized provider (Domberger 1998). Nowadays, the reinforcement of the outsourcing of industrial production arouses growing concern. This tendency has long existed, but it seems to become more marked and to expand beyond the manufacturing sector to encompass that of services. In the same context, Lankford and Parsa (1999) define outsourcing as the procurement of services from sources that are external to the firm. Thus, outsourcing means the allocation or reallocation of service activities from an internal source to an external one (Schniederjans et al. 2005).
The topic of outsourcing is a key concept for services and their innovation (Gallouj and Windrum 2009). In this setting, many authors have closely examined the determining factors of the decision to outsource. Bartel et al. (2008) have shown that outsourcing activities are more advantageous for a service firm when technological change is evolving rapidly. Windrum et al. (2009) have focused on the paradox of outsourcing productivity by examining the links between total outsourcing and operational innovation. They have shown that, in the short term, 'outsourcing' firms are able to decrease their marginal production costs. However, Espino-Rodríguez and Padrón-Robaina (2004) have revealed that outsourcing has a great potential not only through reducing costs but also in terms of other operational objectives such as enhancing quality and ensuring production flexibility. Miozzo and Soete (2001) show that the service activities that were historically internalized in large corporations (accounting, advertising, distribution, etc.) have been outsourced during the last decades, mainly in the developed economies.
Outsourcing motivations
Outsourcing is integrated into the essential elements of firms' strategy. Firms are constantly searching for an organization which provides a high level of service at lower cost. Therefore, diminishing costs represents the main factor urging firms to outsource (Pierre-Paul 2006). Indeed, outsourcing offers the advantage of transparency and allows better expenditure management. It is considered as a means of re-centring the company activity on its primary competence while allocating its secondary tasks to more specialized providers in order to generate higher added value. As they focus on their primary task, firms provide better solutions through constant technological watch and the upgrading of labor methods. The multiplicity and diversity of customers continually enrich firms' know-how and improve the efficiency of their operating methods. Furthermore, competition puts pressure on firms to seek constant efficiency. Indeed, the globalization of the economy, the shortening of product life cycles as well as the increase of uncertainties oblige companies to delegate the operations deemed to have lower added value for their activities to external providers. In other words, outsourcing enables companies to optimize their operational competitiveness and to adapt to the frequent changes and the constant evolution of their environment.
Data and variables measure
In this paper, the data used are from a survey conducted on Tunisian service firms.1 This survey belongs to the modified version of the third Community Innovation Survey (CIS) 3 and to the second European innovation survey 1997. However, to account for the role of integrating information technologies on the firms' performance, the 2002 survey on information and communication technologies (henceforth ICTs) and electronic trade has also been referred to.
The questionnaire2 used for the survey offers a wide range of data. Apart from the general information about the firm, the questionnaire is built on three major sections: outsourcing, innovation and the use of the ICT. First of all, the surveyed firms were asked about their firm's operational structure. More precisely, they were interviewed about outsourcing activities which were previously realized in-house. Secondly, the survey also questions whether the firm has introduced any innovations during the 2005–2007 period. Finally, a section is devoted to exploring the impact of integrating ICTs within firms. The respondents were asked to specify whether the firm has resorted to any new technologies during 2005–2007 and the extent to which these technologies contributed to sales growth.
Among the 200 questionnaires directly delivered to the firms, only 108 workable responses were obtained, that is to say a 54 % response rate. Yet, these observations are not adequately weighted. Nonetheless, to ensure the representativeness of the sample, this latter was stratified by workforce bracket using the NTA3 code of the National Institute of the Statistics (seven classes by number of employees: 1-6, 7-9, 10-19, 20-49, 50-99, 100-199, 200 and more). To each class, a weight was attributed, representing this bracket's weight at the national level, so as to obtain a more representative sample of the service firms in Tunisia.
Table 1 summarizes the determining factors of this operation. It shows that 85.18 % of the surveyed firms declared having resorted to outsourcing during 2005–2007. If the size of firms, in terms of the number of employees, is taken into account, it is noticed that the small firms outsource more than the large ones (22.82 %). Thus, this table shows that 22.22 % of the innovating firms declare having outsourced a part of their functions during the survey period.
Table 1. Distribution and weighting of the sample firms by workforce bracket (columns: INS' firms, corrected weight).
The survey provides information about the sector where a firm operates. We have classified the firms according to three main activities.4 First, ACT1 incarnates the enterprises that belong to sections H (transportation and storage), N (administrative and support service activities) and S (other service activities). Second, ACT2 incorporates the enterprises that are affiliated to sections M (professional, scientific and technical activities) and K (financial and insurance activities). Finally, ACT 3 consists of the enterprises that are in section J (Information and communication). Using the data collected from this survey, we also present the distribution of firms by business sector. We show that the largest number of companies is located in ACT3 (52.78 %), followed by companies operating in ACT2 (32.41 %) and 14.81 % are in ACT1 (see Appendix 1).
Variables measure
The literature includes many indicators to measure the output of innovation. Some use the patent portfolio (Mairesse and Mohnen 2003) while others use the R&D expenses as innovation indicators. This paper uses variables showing the main advantages urging firms to outsource. The choice is justified by the fact that outsourcing represents an important potential source of innovation, according to Quinn (2000) and Espino-Rodríguez and Padrón-Robaina (2004). To measure these advantages, an ordinal 5-point scale is used, showing the degree of importance that firms attribute to the following objectives: reducing costs (red_cout), enhancing services quality (qua_ser) and increasing flexibility (flex_pro).
The innovation output (inserv) is also measured by a binary variable taking the value 1 if the firm has innovated during the last 3 years and the value 0 otherwise. More precisely, the focus is on information stipulating whether a firm has already implemented any product or any new procedure or has even considerably improved any new marketing or operating method in its practices. In fact, we have enclosed with the questionnaire a supplementary explanatory guide where we have defined all the technical terms including service innovation. This latter has been taken from the third Community Innovation Survey (CIS3).5
Prior empirical studies have used different indicators in order to measure outsourcing. Gilley and Rasheed (2000) have measured outsourcing by the share of the total value of the firm's outsourced activities. Following Cusmano et al. (2009), we define outsourcing (exter) by a binary variable showing whether a firm has delegated a given task to a specialist outside the firm between 2005 and 2007. We have essentially focused on the outsourcing of the auxiliary activities that are not at the core of the main activity of the enterprise. We have asked the following question: 'During the three years 2005 to 2007, did your enterprise outsource auxiliary tasks?'
Use of ICTs
These are considered as sources of innovation in services. To measure this variable, some authors use investments in ICT as an appropriate indicator (Gago and Rubalcaba 2007). In this paper, however, as there are nominal variables in the survey, factorial analysis is the most suitable method to process the data and analyze the correlations existing between the different items. This method aims at summarizing the huge amount of data. Therefore, a first multiple correspondence analysis (MCA) is conducted on the items relative to the use of ICTs: 'local Internet network', 'Internet', 'Intranet', 'Exchange of computerized data on Internet', and 'Web site'. This MCA allows the retaining of only one factor, called 'tic'. The MCA results on these items (Table 2) show that, according to the Kaiser-Meyer-Olkin measure of sampling adequacy (0.753) as well as the significance of the Bartlett sphericity test, the items are sufficiently correlated to be factorized (χ 2 = 113.45). Thus, the retained dimension presents good reliability, given by the Cronbach α value (0.721).
Table 2. Matrix of components (use of ICTs). Items: local Internet network, Internet, Intranet, exchange of computerized data on Internet, Web site (their own or shared). Reported statistics: Cronbach's alpha, Kaiser-Meyer-Olkin (KMO), % variance, Bartlett sphericity test chi square.
Relationship with clients
Studies dealing with the innovation in services have focused on the services-specific characteristics that directly affect the development of innovations, such as the interaction between the service provider and the client. In this paper, the interaction with clients is measured by a constructed variable generated from the principal components analysis (PCA) so as to summarize the huge amount of data from the study of items that show the advantages of online business. These items are 'reducing costs', 'increasing the number of clients', 'better coordination with clients and suppliers', and 'shrinking the time to market'. The PCA results on these items and according to the statistic criterion of eigenvalues associated to the axis (λ > 1/4) show that only one dimension called 'intclt' can be retained (Table 3). The reliability of this measure is confirmed by the Bartlett sphericity test (χ 2 = 435.416) and by the Kaiser-Meyer-Olkin (KMO) test (0.846).
Table 3. Matrix of components (interaction with clients). Items: increasing the number of clients, better coordination with clients and suppliers, shrinking the time to market.
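A minimal sketch of how such a factor score can be extracted (synthetic Likert-type data and illustrative item names, not the survey data): the 'intclt' variable is taken as the first principal component of the four e-business advantage items.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
latent = rng.normal(size=(108, 1))                      # common 'interaction with clients' factor
items = latent + 0.5 * rng.normal(size=(108, 4))        # four advantage items: reducing costs,
                                                        # more clients, better coordination,
                                                        # shorter time to market
pca = PCA(n_components=1)
intclt = pca.fit_transform(StandardScaler().fit_transform(items))[:, 0]
print(pca.explained_variance_ratio_)                    # share of variance carried by the factor
```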
Another variable is also used to show the impact of using the Internet on the growth of firms' sales (inter). For this reason, one possibility is to ask firms to indicate whether their sales have changed after using the Internet. More precisely, an ordered categorical variable is used to show whether sales have increased, decreased or stabilized after using the Internet as a marketing tool.
Organizing R&D activities
Although the internal R&D activities are necessary to attract the external competences (Cohen and Levinthal 1990), the extramural R&D, if correctly planned and implemented, can help companies further innovate and hence improve their performances (Caudy 2001). Similar to Huang et al. (2009), a binary variable is used showing whether the company has acquired any external R&D services between 2005 and 2007 as a measure of organizing R&D activities (rd_ex).
Many studies have analyzed the role of the spatial concentration of innovating firms in particular territories (Cusmano et al. 2009 and Uzunidis 2010). These territories can be high-tech parks that include companies, research centers or universities. In this paper, to account for the impact of concentration on firms, a binary variable is used showing whether the respondent firm is located or not in the ICT high-tech park (concen). If a firm is located there, it generates synergy effects by developing interaction relationships, and it then benefits from the experiences and the competences of the neighboring firms.
National cooperation
It is widely known that cooperation is an important factor favoring innovation in services (Sdiri and Ayadi 2014). The national cooperation variable (cooperNat) is introduced as a binary variable showing whether a firm has signed, during the 2005–2007 period, any cooperation contracts with other companies in the same field, with consumers, with equipment and software suppliers, with competitors, with research and counseling firms and with universities situated in Tunisia. This variable is introduced in the models to show the extent to which the external relationships enable the development of innovations.
Qualification and age of the firm
In this paper, the level of qualification (Qual) is measured by the number of qualified6 employees divided by the total number of the firm's employees. The age of the firm (age) is determined by the date of its foundation. More precisely, this measure represents the experiences as well as the competences accumulated during the firm's history. Thus, the age is a source for creating innovations and provides more and more absorptive capacities.
Ordered probit model
The answers to the different motivations that allow the Tunisian service firms to outsource a part of their activities are classified according to a 6-point scale. The value zero indicates that the firm gives 'no importance' to the different listed motivations while the value 5 means that it accords a 'very large importance' to them. The discrete and ordered structure of this dependent variable allows the use of ordered response models. The values taken by the multinomial variable (y i = 0, 1, 2, 3, 4, 5) correspond to intervals of a single continuous unobservable latent variable \( {y}_i^{*} \). This kind of model assumes that the threshold values are identical for all observations. Indeed, the level of y * is parameterized by the threshold parameters c j , and a constant is therefore not introduced in the linear model. This model is written as follows7
$$ y_i=\begin{cases}0 & \text{if } y_i^{*}<c_1\\ 1 & \text{if } c_1\le y_i^{*}<c_2\\ \vdots & \\ 5 & \text{if } y_i^{*}>c_5\end{cases}\qquad \forall i=1,\ldots,108 $$
The threshold parameters c j are in ascending order (c j + 1 ≥ c j ) where the variable \( {y}_i^{*} \) is defined by:
$$ y_i^{*}=x_i\beta^{\prime}+\varepsilon_i,\qquad \varepsilon_i\sim N\left(0,1\right) $$
where x i represents the vector of the explanatory variables and ε i is an random error term assumed to have a normal distribution. The parameters β and c j , j = 1,...5 are estimated using the ordered probit model by maximizing the log-likelihood function. The implied probabilities are obtained by
$$ P_{ij}= \Pr \left(y_i=j\right),\qquad j=0,\dots,5 $$
$$ \Pr \left(y_i=j\right)=\Pr \left(c_j<y_i^{*}\le c_{j+1}\right)=\Pr \left(c_j-x_i\beta^{\prime}<\varepsilon_i\le c_{j+1}-x_i\beta^{\prime}\right)=F\left(c_{j+1}-x_i\beta^{\prime}\right)-F\left(c_j-x_i\beta^{\prime}\right) $$
where F(.) denotes the standard normal cumulative distribution function.
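For readers who want to reproduce this kind of estimation, the sketch below fits an ordered probit on synthetic data with statsmodels' OrderedModel; the variable names mirror some of the paper's regressors, but the data are simulated, so the estimates are not the paper's results.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 108
X = pd.DataFrame({
    'exter': rng.integers(0, 2, n).astype(float),   # outsourcing dummy
    'intclt': rng.normal(size=n),                   # interaction-with-clients factor score
    'tic': rng.normal(size=n),                      # ICT-use factor score
})
y_star = 2.0 * X['exter'] + 0.6 * X['intclt'] - 0.8 * X['tic'] + rng.normal(size=n)
y = np.digitize(y_star, bins=[-1.0, 0.0, 1.0, 2.0, 3.0])   # ordered response 0..5

res = OrderedModel(y, X, distr='probit').fit(method='bfgs', disp=False)
print(res.params)        # slope estimates followed by the threshold parameters
```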
Ordered probit model with selection
In the previous section, the contribution of outsourcing to the development of service innovation has been analyzed. This analysis was performed on the total sample of firms. Nonetheless, it may not be reasonable to assume that the innovating and non-innovating firms are randomly selected from the total population of firms. If they are not, we face a selection bias problem. Consequently, the maximum likelihood estimator can be unreliable since it does not account for the selection effects operating through the non-observables in the model.
Many methods can be used to control this bias, namely Heckman (1979)'s two-stage method. Yet, this procedure cannot be applied in the present case. Indeed, Heckman's selection models apply to continuous dependent variables in the equation of interest. In this paper, we have multinomial ordered data. For that reason, a De Luca and Perotti (2010)8 ordered probit model is used, taking into account the selection bias problem. This model includes two equations, one for the binary indicator of the sample selection (the selection equation) and another for the ordered variable. Accordingly, the observed variable y is determined by another latent variable, z *, which in turn determines whether the innovating effect exists or not. The observed selection indicator z takes the value 0 if the firm does not innovate and 1 if the firm innovates. Therefore, the variable y is observed only if the selection condition (z = 1) is met. The model is given as follows:
$$ y^{*}=\beta^{\prime}x+\varepsilon,\qquad z^{*}=\alpha^{\prime}v+u,\qquad \text{where } \left(\varepsilon,u\right)\sim N\left(0,0,\sigma_{\varepsilon}^2,\rho\right) $$
The variables z * and y * are not observed. On the other hand, the variable z is observed and given by:
$$ z=\begin{cases}1 & \text{if } z^{*}>0\\ 0 & \text{if } z^{*}\le 0\end{cases} $$
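To make the selection mechanism concrete, the sketch below writes down the probability of each observed outcome under the assumed normalization Var(ε) = Var(u) = 1 and Corr(ε, u) = ρ; this is only the likelihood building block, not the De Luca and Perotti (2010) estimator itself. The full log-likelihood is the sum of the logs of these probabilities over the sample.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def obs_probability(xb, va, cuts, rho, z, y=None):
    """xb = x*beta, va = v*alpha, cuts = thresholds c_1..c_5, z = selection indicator, y = outcome."""
    if z == 0:                                   # not selected: Pr(z* <= 0) = Phi(-v*alpha)
        return norm.cdf(-va)
    c = np.concatenate(([-np.inf], np.asarray(cuts, dtype=float), [np.inf]))
    biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, -rho], [-rho, 1.0]])
    # Pr(c_j < y* <= c_{j+1}, z* > 0) = Phi2(c_{j+1}-xb, va; -rho) - Phi2(c_j-xb, va; -rho)
    upper = norm.cdf(va) if np.isinf(c[y + 1]) else biv.cdf([c[y + 1] - xb, va])
    lower = 0.0 if np.isinf(c[y]) else biv.cdf([c[y] - xb, va])
    return upper - lower

print(obs_probability(xb=0.3, va=0.5, cuts=[-1, 0, 1, 2, 3], rho=0.4, z=1, y=2))
```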
Empirical validation
Table 4 presents the means and the standard deviations of each variable as well as the correlation matrix between the variables used in the models. The table also provides the test based on each coefficient's variance inflation factor (VIF). More precisely, it is noticed that the mean VIF is about 1.36, which is below 6, and that the VIF of each variable is below 10. According to this result, it is proved that there is no multicollinearity problem between the explanatory variables used in these models. The heteroscedasticity problem was addressed using White's correction. Hence, to check whether some variables are endogenous, Hausman's specification test was used as it allows the detection of any endogeneity bias. Indeed, the test confirms the absence of an endogeneity problem. This means that the residuals obtained from the equations of the first step are not correlated with the measure of innovation, which refutes the endogeneity hypothesis.
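The multicollinearity check reported above can be reproduced along the following lines (a sketch only: the survey data are not public, so a random data frame stands in for the firm-level variables).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
cols = ['exter', 'intclt', 'tic', 'inter', 'cooperNat', 'age', 'concen', 'rd_ex', 'Qual']
df = pd.DataFrame(rng.normal(size=(108, len(cols))), columns=cols)   # stand-in for the survey data

X = sm.add_constant(df[cols])
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns).drop('const')
print(vif.round(2), round(vif.mean(), 2))        # compare with the usual 10 and 6 thresholds
```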
Table 4. Correlation matrix between the variables: (1) exter, (2) intclt, (3) tic, (4) inter, (5) cooperNat, (6) age, (7) concen, (8) rd_ex, (9) Qual. *Significance at the level of 5 %.
The estimations relative to the models with or without selection lead to a quality of adjustment, given by the Wald χ 2 test and the likelihood ratio test LR, that is acceptable at 1 %. On the other hand, to choose the suitable model, the Akaike information criterion (Akaike 1974), AIC = − 2LL + 2k, is used as well as the Bayesian information criterion (Schwarz 1978), BIC = − 2LL + k log(n), where k is the number of parameters, LL is the maximum log-likelihood and n the number of observations. As indicated in Table 5 below, the ordered probit model with selection is the most relevant except for the 'qua_ser' dimension.
Table 5. Comparison of models (standard ordered probit versus ordered probit with selection) for the three dimensions (1) red_cout, (2) qua_ser and (3) flex_pro, reporting the LR test, Prob > χ 2 and Wald χ 2 (7) statistics. aShows the relevant model.
Impact of outsourcing and other innovation explaining variables
Based on the results of Table 6, it is noticed that the variable of interest (exter) in the model without selection has a positive coefficient and is statistically significant at 1 %, confirming the hypothesis that outsourcing services is positively correlated with innovation. This result suggests that resorting to outsourcing permits the Tunisian service firms to create value (by reducing costs). Outsourcing abates the marginal production costs and increases profits, thereby producing stronger stimuli for innovation (Glass and Saggi 2001). Moreover, it allows increasing flexibility and enhancing the quality of the firms' services. Likewise, the results obtained from the second model (with selection) confirm that outsourcing remains significant, also at the level of 1 %. The results of the ordered probit model with selection corroborate, to some extent, the conclusions drawn from the first model. This means that the outsourcing strategy is beneficial for the Tunisian service firms in terms of innovation. A similar effect was noticed by Cantone and Testa (2009). This result unveils that the outsourcing relationships contribute to the development of the firms' organizational capacities. In fact, the motivations impelling firms to outsource are not limited to diminishing costs, but have rather evolved to include other operational objectives such as quality and flexibility (Ehie 2001; Kremic et al. 2006).
Table 6 Standard ordered probit (M1) versus ordered probit with selection (M2): estimated coefficients of outsourcing (exter), interaction with clients (intclt), use of ICTs (tic), Internet business (inter), national cooperation (cooperNat), age of the firm (age), concentration (concen), extramural R&D (rd_ex) and qualification (Qual) for the dimensions red_cout, qua_ser and flex_pro, together with athrho, the log-likelihood and the pseudo R2
The values between parentheses are the robust standard errors corrected by the White method
Significance level: *10 %; **5 %; ***1 %
Table 6 also shows that the (tic) variable has no considerable impact on innovation, while concentration (concen) clearly affects innovation activity. Indeed, the findings of this paper show that introducing information technologies in a firm has no impact on reducing costs or on production flexibility. On the other hand, it has a significant, but negative, impact on enhancing the quality of service. This result contrasts with that of Gago and Rubalcaba (2007), who notice that introducing ICTs is propitious to innovation in services. Nevertheless, it can be said that service firms may introduce ICTs without necessarily being able to manage and valorize them to develop innovations (Omrane and Bouillon 2004).
As previously mentioned, the concentration of firms (concen) positively affects the development of innovations. Actually, establishing a firm in a technology-intensive area (for instance a science park) contributes to enhancing its new product/service development policy. Thanks to such a favorable technological infrastructure, parks favor the creation and marketing of new products/services. According to this finding, it would be better for service firms to get as close as possible to each other in order to take advantage of productivity and innovation returns. This proximity also allows firms to reap additional employment opportunities. Consequently, firms become capable of adapting to frequent changes and to the evolution of their environment. Again, the coefficient of the "interaction with clients" variable (intclt) bears a positive sign: intclt has a positive and statistically significant impact on the three dimensions of innovation. This implies that using an online marketing strategy to meet clients' needs allows firms to reduce costs, improve the quality of their services and increase the flexibility of their production.
This paper endeavored to analyze how the domestic outsourcing of service activities contributes to the development of innovations. To do so, a standard ordered probit model is first used to explain the relationship between outsourcing and innovation. Second, to account for the selection effect, an ordered probit model with selection is adopted. The findings of the two estimated models show that, in accordance with Glass and Saggi (2001) and Görg and Hanley (2011), outsourcing positively affects innovation by reducing costs, increasing flexibility and enhancing the quality of services. It is also found that corporate concentration positively affects innovation. If a firm is situated in a competence-intensive environment that includes activities such as IT, R&D, data management, architecture and engineering services, it is more likely to adapt to frequent changes and to the evolution of its environment. This advantageous technological infrastructure enables firms to access the neighboring firms' experiences and competences. Therefore, service firms would do better to establish themselves close to other ones so as to take advantage of productivity and innovation gains. Moreover, a firm established in a given area can have a fairly good idea about the surrounding firms and can therefore make a selection among the providers it will work with. Accordingly, it can entrust all or part of its information system to them in order to concentrate on its own core task while benefitting from adaptation, flexibility and competitiveness vis-à-vis the market demands and needs.
As there is emphasis on analyzing the impact of innovation in the services sector, the choice of the population was restricted to the firms that mainly provide value-added services: companies linked to ICT-based services according to the nomenclature published in 'The directory of ICT in Tunisia' that is edited by Symbols Media (2005), The Banks listed in the 'Tunisia's Professional Association of Banks and Financial Institutions (APTBEF)' and Insurance Companies that are listed in the 'Tunisian Federation of Insurance Companies (FTUSA)'.
A French version of the questionnaire and a data collection are available upon request.
National Institute of the Statistics (INS): distribution of companies by number of employees in 2007.
For more details, see National Institute of the Statistics (INS): distribution of companies by activities.
According to the CIS3, product (good or service) innovation is 'the market introduction of a new good or service or a significantly improved good or service with respect to its capabilities, such as improved software, user friendliness, components or sub-systems. The innovation (new or improved) must be new to your enterprise, but it does not need to be new to your sector or market. It does not matter if the innovation was originally developed by your enterprise or by other enterprises'.
The qualification variable is measured as the percentage of employees in the firm holding a high academic level (baccalaureate or more).
For further details, see Greene (2003).
De Luca and Perotti (2010) have developed a new opsel command on the STATA software. The opsel command uses a standard maximum likelihood (ML) approach to fit a parametric specification of the model where errors are assumed to follow a bivariate Gaussian distribution.
CIS: Community Innovation Survey
ICT: information and communication technology
KMO: Kaiser-Meyer-Olkin
MCA: multiple correspondence analysis
PCA: principal components analysis
R&D: research and development
VIF: variance inflation factor
The authors contributed equally to this work. All authors read and approved the final manuscript.
Hanen SDIRI is currently a teaching assistant at the University of Tunis, Higher Institute of Management. She received her Ph.D. from the same university. Her core research interests include Innovation Analyses, Applied Econometrics and Environmental Economics.
Mohamed AYADI is currently a Professor of Econometrics and Quantitative Economics at the University of Tunis, Higher Institute of Management. He received his Ph.D. in Mathematical Economics and Econometrics from the University of Toulouse, France. His core research interests include Innovation and New Products Analyses, Welfare and Poverty Analysis, and Consumer Behavior and Public Economic Policies.
Distribution of firms by main activity: total of firms, innovative firms and outsourcing firms, by activity group (ACT1, ...)
UAQUAP-Tunis Higher Institute of Management, 41 Rue de la liberté, 2000 Le Bardo, Tunis, Tunisia
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723.
Bartel, A., Lach, S., & Sicherman, N. (2008). Outsourcing and technological innovations: a firm-level analysis. Centre for Economic Policy Research (Great Britain).
Cantone, L., & Testa, P. (2009). The outsourcing of innovation activities in supply chains with high-intensity of research and development. Esperienze d'Impresa, 2, 199–221.
Carson, S. (2007). When to give up control of outsourced new product development. Journal of Marketing, 71(1), 49–66.
Caudy, D. (2001). Using R&D outsourcing as a competitive tool. Medical Device and Diagnostic Industry, 23(3), 115–126.
Cohen, W., & Levinthal, D. (1990). Absorptive capacity: a new perspective on learning and innovation. Administrative Science Quarterly, 35(1), 128–152.
Cusmano, L., Mancusi, M., & Morrison, A. (2009). Innovation and the geographical and organizational dimensions of outsourcing: evidence from Italian firm-level data. Structural Change and Economic Dynamics, 20(3), 183–195.
De Luca, G., & Perotti, V. (2010). Estimation of ordered response models with sample selection. CEIS Working Paper No. 168.
Domberger, S. (1998). The contracting organization: a strategic guide to outsourcing. USA: Oxford University Press.
Ehie, I. (2001). Determinants of success in manufacturing outsourcing decisions: a survey study. Production and Inventory Management Journal, 42(1), 31–39.
Engardio, P., & Arndt, M. (2006). The future of outsourcing. Business Week, 30, 50–64.
Espino-Rodríguez, T., & Padrón-Robaina, V. (2004). Outsourcing and its impact on operational objectives and performance: a study of hotels in the Canary Islands. International Journal of Hospitality Management, 23(3), 287–306.
Gago, D., & Rubalcaba, L. (2007). Innovation and ICT in service firms: towards a multidimensional approach for impact assessment. Journal of Evolutionary Economics, 17(1), 25–44.
Gallouj, F., & Windrum, P. (2009). Services and services innovation. Journal of Evolutionary Economics, 19(2), 141–148.
Gilley, K., & Rasheed, A. (2000). Making more by doing less: an analysis of outsourcing and its effects on firm performance. Journal of Management, 26(4), 763–790.
Girma, S., & Görg, H. (2004). Outsourcing, foreign ownership, and productivity: evidence from UK establishment-level data. Review of International Economics, 12(5), 817–832.
Glass, A., & Saggi, K. (2001). Innovation and wage effects of international outsourcing. European Economic Review, 45(1), 67–86.
Görg, H., & Hanley, A. (2011). Services outsourcing and innovation: an empirical investigation. Economic Inquiry, 49(2), 321–333.
Greene, W. H. (2003). Econometric analysis (5th ed.). Prentice Hall.
Heckman, J. (1979). Sample selection bias as a specification error. Econometrica, 47(1), 153–161.
Huang, Y., Chung, H., & Lin, C. (2009). R&D sourcing strategies: determinants and consequences. Technovation, 29(3), 155–169.
Kremic, T., Tukel, O., & Rom, W. (2006). Outsourcing decision support: a survey of benefits, risks, and decision factors. Supply Chain Management: An International Journal, 11(6), 467–482.
Lankford, W., & Parsa, F. (1999). Outsourcing: a primer. Management Decision, 37(4), 310–316.
Mairesse, J., & Mohnen, P. (2003). R&D and productivity: a reexamination in light of the innovation surveys. In DRUID Summer Conference, pp. 12–14.
Maskell, P., Pedersen, T., Petersen, B., & Dick-Nielsen, J. (2007). Learning paths to offshore outsourcing: from cost reduction to knowledge seeking. Industry & Innovation, 14(3), 239–257.
Miozzo, M., & Soete, L. (2001). Internationalization of services: a technological perspective. Technological Forecasting and Social Change, 67(2-3), 159–185.
OCDE (2005). Manuel d'Oslo : principes directeurs pour le recueil et l'interprétation des données sur l'innovation. OCDE.
Omrane, D., & Bouillon, J. (2004). Tic et relations de services dans une économie globalisée. Rapport francophone, LERASS-Université Paul Sabatier Toulouse 3.
Pierre-Paul, P. (2006). L'externalisation de la production de biens et services : contexte, définition et effets économiques sur le pays d'origine et d'accueil. Cahier de recherche, CEIM.
Quinn, J. (2000). Outsourcing innovation: the new engine of growth. Sloan Management Review, 41(4), 13–28.
Schniederjans, M., Schniederjans, A., & Schniederjans, D. (2005). Outsourcing and insourcing in an international context. M.E. Sharpe.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464.
Sdiri, H., & Ayadi, M. (2014). Innovation decision of Tunisian service firms: an empirical analysis. Working Papers 2014-092, Department of Research, Ipag Business School.
Uzunidis, D. (2010). Milieu innovateur, relations de proximité et entrepreneuriat. Analyse d'une alchimie féconde. Revue Canadienne de Science Régionale, 33, 91–106.
Windrum, P., Reinstaller, A., & Bull, C. (2009). The outsourcing productivity paradox: total outsourcing, organisational innovation, and long run productivity growth. Journal of Evolutionary Economics, 19(2), 197–229.
Young, E. (2006). Baromètre Outsourcing 2006 : pratiques et tendances de l'externalisation en Tunisie. Andersen.
Young, E. (2010). Baromètre Outsourcing 2010 : pratiques et tendances du marché de l'externalisation en France. Paris: Andersen.
Journal of Engineering and Applied Science
Polymer-based dampening layer application to improve the operating shock tolerance of hard disk drive
Djati Wibowo Djamari ORCID: orcid.org/0000-0001-7624-55121,
Fook Fah Yap2,
Bentang Arief Budiman3 &
Farid Triawan1
Journal of Engineering and Applied Science volume 69, Article number: 13 (2022) Cite this article
This paper discusses a passive vibration control method to improve the shock tolerance of hard disk drives (HDDs) in operating condition (op-shock tolerance). Past works on improving the HDDs' op-shock tolerance include (i) parking the head when a shock is detected, (ii) installing a lift-off limiter, (iii) structural modification of the suspension, and (iv) installing an external vibration isolation. Methods (i) and (iv) have practical issues, method (ii) works only for a single shock direction, and method (iii) requires major engineering design/manufacturing work. Compared to these works, this paper proposes a method which has no practical issues and does not require major engineering design/manufacturing work. The proposed method is to apply a polymer-based dampening layer on the backside of the baseplate with the purpose of increasing the damping ratio of the 1st bending mode of the baseplate. The location of the dampening layer on the baseplate is first determined by modal analysis and then fine-tuned by non-op-shock tests. The op-shock tolerance improvement is confirmed by op-shock tests, where a 2.5″ HDD with the dampening layer on the baseplate can withstand a 300 G 0.5-ms shock without failure while an unmodified HDD can only withstand a 250 G 0.5-ms shock without failure.
The demand for higher density hard disk drives (HDDs) pushes the requirements for the head–disk spacing. The greater the HDDs' density, the smaller the head–disk spacing required (see [1,2,3]). The head–disk spacing can be designed by setting the slider's flying height. Meanwhile, the flying height of the slider affects the stiffness of the air bearing, and more importantly, the shock response of the HDDs (see [4]). In operating condition, HDDs need to be protected from failures which are caused by external disturbance, i.e., external shock. Studies on HDDs' failure mechanism due to external shock can be found in [5, 6] and the references therein. HDDs fail when the head is touching the disk.
Studies on HDDs' failure mechanism show that HDDs have specific op-shock resistance (see [2, 5, 7]). For example, there exists a range of external shock input amplitude and duration for which the head is not touching the disk. A common practice by HDD's manufacturer is to mention the op-shock tolerance of their product for a certain shock duration. An HDD having op-shock tolerance of 350 G 2 ms (milliseconds), where 1 G = 9.81 m/s2, means that it can withstand external shock with a duration of 2 ms up to 350 G of amplitude without failure. A study on shock duration effect to the shock response of HDD can be found in [8]. Generally, HDDs are more prone to failures from short shock duration.
There are various methods for protecting operating HDDs from failure. In a recent work by Nicholson et al. [9], the HDD is protected from external shock by parking the head when the HDD is subjected to shock. In parking position, where the disk can vibrate without touching the head, the HDD has relatively higher shock tolerance. However, the read/write performance of the HDD is sacrificed since it cannot perform its task in parking position. In the work of Ng et al. [10], the HDD is protected by installing a lift-off limiter. When the positive shock is high enough, the slider will move away from the disk and be separated from the air bearing (lift-off). This separation breaks the air bearing, and the sudden return of the slider makes the head touch the disk. This phenomenon is commonly called head-slap. The lift-off limiter prevents the slider from moving away from the disk to sustain the air bearing. However, the lift-off limiter can only work for one side of the disk during positive shock and the other side of the disk during negative shock.
Another method to protect HDDs from failure focuses on modifying the HDD's structure to improve the op-shock tolerance. In [5], a stiffer suspension design is proposed. For short shock durations of less than 2 ms, the stiffer suspension design increases the op-shock tolerance of the HDD; however, for shock durations of 2 ms and longer, it has minimal effect. The work [2] focuses on HDDs with a secondary stage actuator, which is used for fine control of track following in high-density HDDs. An HDD with a secondary stage actuator has poor shock tolerance due to the large mass at the tip. The work [2] proposes a secondary actuator design that has a lower mass without sacrificing the stroke sensitivity of the actuator. With a lower mass of the secondary stage actuator, it is expected that the HDD has better op-shock tolerance. The work [11] proposes topology design optimization of the suspension to improve its dynamic characteristics. Although it is claimed that the optimized suspension design improves the dynamic response to shock inputs, that work does not study the op-shock tolerance improvement.
Another method to protect HDDs from failure relies on external shock isolation. The work [12] proposes a rubber mount design to isolate operating HDDs from shock and vibration. The rubber mounts work by reducing the shock energy transmitted to the HDD's baseplate, which could result in higher op-shock tolerance. However, as studied by Djamari [13], the improvement in op-shock tolerance obtained with rubber mounts is not significant when a minimum external footprint is used. Generally, an external shock isolation system needs a large footprint for it to work effectively [14, 15]. Effective external shock isolation must have a relatively low dominant frequency, which makes the isolated system vibrate with a large amplitude when subjected to a shock input. Recent work on external shock isolation of HDDs can be found in [7, 16, 17].
In summary, the op-shock tolerance of HDDs can be improved through the following: (i) intervention in HDD operation, (ii) design modification of the internal structure of HDDs, and (iii) installing an external shock isolation system. Method (i) improves the op-shock tolerance significantly, but it sacrifices HDD performance, which is not practical. Method (ii) could potentially improve the op-shock tolerance, but there is a significant cost in changing the internal HDD structure design. Meanwhile, method (iii) requires little to no change in HDD design, but it needs sufficient footprint, which may not be practical at some point. All three methods solve the problem but are impractical in ways that could limit the application of the HDDs. This paper proposes a method of external HDD shock isolation that neither changes the footprint of the standard HDD form factor nor changes the HDD's design. It also does not sacrifice the HDD's performance, and it works for both positive and negative shocks.
The proposed method in this paper is to apply a dampening layer (a damper in the form of a thin polymer layer) on the backside of the baseplate with the purpose of increasing the damping ratio of the 1st bending mode of the baseplate, thus reducing the shock transmissibility to the HAA (Head Actuator Assembly). The application of a damper to reduce the vibration of a structure is a common engineering solution; however, the damper location and how much damping must be applied depend on the problem at hand and are not obvious. Recent work by Sezgen and Tinkir [18] shows that damper application is effective in reducing the vibration of a structure, and a genetic algorithm is used to optimize the damper configuration. The work by Biglari et al. [19] shows that a frictional damper can be used to reduce the residual vibration of a flexible manipulator, and several optimization methods are utilized to obtain the optimum structure of the damper. While Sezgen and Tinkir use a mathematical model only to investigate the damper application, Biglari et al. use a mathematical model and perform experiments to test the optimized damper structure. Similar to these works, this paper also uses a mathematical model of the HDD to show the effectiveness of the damper, and, similar to Biglari et al., this work performs experiments (non-operating shock tests) to show the vibration reduction after the damper is applied to the structure being studied. However, in the HDD case, the vibration reduction of the structure (obtained through non-operating shock tests) needs to be verified with op-shock tests to show that the damper application improves the op-shock tolerance. Therefore, in this work, the op-shock tests are done in addition to the non-operating shock tests.
The rest of this paper is organized as follows. The "Methods" section discusses the problem statement and methodology. The "Results and discussion" section discusses theoretical background of the proposed method, MATLAB simulation of a simple HDD model, and experiment results for HDDs under non-operating and operating conditions. The "Conclusion" section concludes this paper.
The problem under consideration is a 2.5″ HDD in operating condition, with a single platter (see Fig. 1). The arm is positioned at the outer-disk position, which is the worst case because, when the HDD is subjected to shock, the displacement of the disk at the outer-disk position is larger than at arm positions closer to the disk spindle. The problem in this paper is to design an external shock isolator for the HDD so that it can withstand a higher op-shock without failure compared to the baseline HDD. In this context, failure means the head touches the disk. As a design constraint, the external shock isolator must not change the form factor of the HDD.
Illustration of 2.5″ HDD with single platter
We hypothesize that a reduction in the relative arm–disk displacement when the HDD is subjected to shock translates to the improvement of the op-shock tolerance. In other words, when the HDD structure is modified such that the relative arm–disk displacement is reduced, then the modified HDD can withstand higher op-shock without failure. To this end, let us consider Fig. 1 which illustrates the HDD structure. Let had be the vertical distance between the arm and the disk (the distance between point A and D, in Fig. 1). When the HDD is subjected to shock, i.e., the shock comes through the baseplate and then transmitted to the HAA and the disk, then had changes over time. If had becomes too small due to the shock (the arm becomes too close to the disk), the arm pushes the suspension towards the disk and the pushing force could break the air bearing which can make the head to touch the disk.
On the other hand, if had becomes too large due to the shock (the arm is moving away too far from the disk), the arm pulls the suspension away from the disk and the air bearing could also break due to too much force pulling the suspension away from the disk. The returning movement of the suspension can potentially result in head slap. Therefore, to reduce the risk of the head touching the disk, the changes in had must be kept as small as possible. The logic behind the above analysis is that when the head is not separated from the disk, the bending mode of the suspension is much higher than the bending mode of the arm. Thus, during the shock and before the separation between the head and the disk occurs, the suspension follows the movement of the arm. In conclusion, in this paper, we assume that the reduction in the changes of had when the HDD is subjected to shock implies the improvement in the op-shock tolerance of the HDD.
To reduce the changes in had when the HDD is subjected to shock, the proposed method in this paper is to apply a dampening layer to the backside of the baseplate, in between the baseplate and the PCB (Printed Circuit Board). The dampening layer is a thin polymer material that has high damping factor. This application increases the damping factor of the baseplate. The question is, how much and where we must apply the dampening layer?
Firstly, a theoretical analysis is done to mathematically show that increasing the damping factor of the baseplate can reduce the arm–disk relative displacement when the HDD is subjected to external shock. The theoretical analysis is done by modeling the baseplate, HAA, and disk using mass-spring-damper system. It will be shown that increasing the damping factor of the baseplate increases the damping ratio for all mode shapes and it reduces the changes in had when the HDD is subjected to shock.
The second step is to perform simulations on a simple model of HDD using MATLAB. The purpose is to find out the reduction in the changes of had for several shock input durations. It will also be shown that the application of damper will have a negative effect if too much damping factor is added to the baseplate. A comparison of reduction in the changes of had between application of damper on the baseplate and application of damper on the arm structure is also done to show that applying damper on the baseplate is more effective in reducing the changes in had.
The next step is to define the areas on the baseplate where the dampening layer will be applied. For this, the non-op-shock tests are carried out for HDDs with and without the dampening layer, and the arm–disk relative displacement is measured by using fiber optic interferometer. The best dampening layer configuration is then used in the op-shock tests to verify the op-shock tolerance improvement. In the tests, the dampening layer selection is not done based on the previous steps since the primary criterion for the dampening layer is its low outgassing property and it should be thin enough such that it does not affect the overall PCB assembly. Thus, for the tests, we use the available dampening layer product suitable for HDD application.
Theoretical analysis
A model of baseplate–HAA–disk under consideration is shown in Fig. 2. The mass, stiffness, and damping coefficient of HAA are denoted by ma, ka, and ca, respectively. The mass, stiffness, and damping coefficient of the disk are denoted by md, kd, and cd respectively. The mass, stiffness, and damping coefficient of the baseplate are denoted by mb, kb, and cb, respectively. Meanwhile, ms is the shaker mass and F is the external force applied to shaker mass (note that F is a function of time, t). The states xa, xd, xb, and xs are the displacement of the arm tip, disk tip, baseplate, and the shaker, respectively. In the discussion in this section, we will see the effect of changing cb to the changes in had. In our experiment that will be presented in subsections "Non-operating shock experiments" and "Operating shock experiments," cb is increased by applying a polymer-based dampening layer to the back of the baseplate. Due to relatively small mass and stiffness of the dampening layer compared to the mass and stiffness of the baseplate, in the analysis done in this section, we assume that the increase in cb does not affect the value of mb and kb.
A simple model of baseplate-HAA-Disk with shaker
The equation of motion of the model in Fig. 2 is the following:
$$ \left[\begin{array}{cccc}{m}_s& 0& 0& 0\\ {}0& {m}_b& 0& 0\\ {}0& 0& {m}_a& 0\\ {}0& 0& 0& {m}_d\end{array}\right]\left[\begin{array}{c}{\ddot{x}}_s\\ {}{\ddot{x}}_b\\ {}{\ddot{x}}_a\\ {}{\ddot{x}}_d\end{array}\right]+\left[\begin{array}{cccc}{c}_b& -{c}_b& 0& 0\\ {}-{c}_b& \overline{c}& -{c}_a& -{c}_d\\ {}0& -{c}_a& {c}_a& 0\\ {}0& -{c}_d& 0& {c}_d\end{array}\right]\left[\begin{array}{c}{\dot{x}}_s\\ {}{\dot{x}}_b\\ {}{\dot{x}}_a\\ {}{\dot{x}}_d\end{array}\right]+\left[\begin{array}{cccc}{k}_b& -{k}_b& 0& 0\\ {}-{k}_b& \overline{k}& -{k}_a& -{k}_d\\ {}0& -{k}_a& {k}_a& 0\\ {}0& -{k}_d& 0& {k}_d\end{array}\right]\left[\begin{array}{c}{x}_s\\ {}{x}_b\\ {}{x}_a\\ {}{x}_d\end{array}\right]=\left[\begin{array}{c}F\\ {}0\\ {}0\\ {}0\end{array}\right] $$
where \( \overline{c}={c}_a+{c}_b+{c}_d \) and \( \overline{k}={k}_a+{k}_b+{k}_d \). Let x = [xs xb xa xd]T, Eq. (1) can be compactly written as
$$ M\ddot{x}+C\dot{x}+ Kx={F}_v $$
where M, C, and K are the mass matrix, damping matrix, and stiffness matrix, respectively.
Let \( {\dot{x}}_s={v}_s \), \( {\dot{x}}_b={v}_b,{\dot{x}}_a={v}_a,{\dot{x}}_d={v}_d, \) and \( v={\left[{v}_s\kern0.5em {v}_b\kern0.5em {v}_a\kern0.5em {v}_d\right]}^T \), the state space equation of the simplified model is given by:
$$ \left[\begin{array}{c}\dot{x}\\ {}\dot{v}\end{array}\right]=\left[\begin{array}{cc}{0}_4& {I}_4\\ {}-{M}^{-1}K& -{M}^{-1}C\end{array}\right]\left[\begin{array}{c}x\\ {}v\end{array}\right]+\left[\begin{array}{c}0\\ {}{M}^{-1}\end{array}\right]{F}_v $$
which can simply be written as
$$ \left[\begin{array}{c}\dot{x}\\ {}\dot{v}\end{array}\right]=A\left[\begin{array}{c}x\\ {}v\end{array}\right]+B{F}_v $$
where A is the state matrix and B is the input matrix. Assuming nonzero damping with underdamped condition, then the eigenvalues of the state matrix can be expressed as \( \left\{0,0,-{p}_2\pm j{\omega}_{d_2},-{p}_3\pm j{\omega}_{d_3},-{p}_4\pm j{\omega}_{d_4}\right\} \), where \( {p}_4={\zeta}_4{\omega}_{n_4} \), \( {p}_3={\zeta}_3{\omega}_{n_3} \), and \( {p}_2={\zeta}_2{\omega}_{n_2} \) are the real part of the eigenvalues, \( {\omega}_{d_2}={\omega}_2\sqrt{1-{\zeta}_2^2} \), \( {\omega}_{d_3}={\omega}_3\sqrt{1-{\zeta}_3^2} \), and \( {\omega}_{d_4}={\omega}_4\sqrt{1-{\zeta}_4^2} \) are the imaginary part of the eigenvalues or the damped natural frequencies. The first two zero eigenvalues correspond to the rigid body mode of all masses or the 1st mode of the system. In this setting, ζ2, ζ3, and ζ4 are the damping ratio for the 2nd, 3rd, and 4th modes, respectively. The 2nd mode is the first bending mode of the baseplate, the 3rd mode is the first bending mode of the disk, and the 4th mode is the first bending mode of the arm. Considering only the flexible modes, we know that the simple model is a stable system, i.e., it will return to its equilibrium after it is disturbed temporarily (for example by knocking the arm tip). When the system is disturbed temporarily, the real part of the eigenvalues of matrix A determines how fast the simple model return to the equilibrium, and the imaginary part determines the oscillation frequency of the response. For vibrating system, the convergence speed is represented by ζωn, and the oscillation of the response is represented by the ωd.
It can be shown by parametric study, by inserting values and varying the variables (since the closed form solution of the eigenvalues of matrix A is not possible to be shown), that the damping factor of the baseplate, cb, affects the damping ratio of all nonzero modes. Meanwhile, the damping factor of the disk, cd, dominantly affects only the 3rd mode, and the damping factor of the arm, ca, dominantly affects only the damping ratio of the 4th mode. This is due to the coupling between the baseplate and the arm–disk. Meanwhile, the arm is not coupled to the disk.
The force F is a shock input which models the impact when HDD is dropped. To show the effectiveness of increasing the baseplate's damping factor in minimizing the changes of had, we assume that F is an impulse and thus to obtain the solution of (2), we assume initial state vs(0) > 0, while the rest of the initial states being zero and F is set to be zero. Let \( L=\left[{l}_1\kern0.5em \cdots \kern0.5em {l}_8\right] \) with \( {l}_i={\left[{l}_{i1}\kern0.5em \cdots \kern0.5em {l}_{i8}\right]}^T \) for i = 1, ⋯, 8 be the left eigenvector of the state matrix, where qT is the transpose of q, and \( R=\left[{r}_1\kern0.5em \cdots \kern0.5em {r}_8\right] \) with \( {r}_i={\left[{r}_{i1}\kern0.5em \cdots \kern0.5em {r}_{i8}\right]}^T \) for i = 1, ⋯, 8 be the right eigenvector of the state matrix, the solution to (2) can be expressed as
$$ \left[\begin{array}{c}x(t)\\ {}v(t)\end{array}\right]={e}^{At}\left[\begin{array}{c}x(0)\\ {}v(0)\end{array}\right]=R{e}^{\Lambda \mathrm{t}}L\left[\begin{array}{c}x(0)\\ {}v(0)\end{array}\right] $$
where Λ is a diagonal matrix of the eigenvalues of matrix A:
$$ \Lambda =\operatorname{diag}\left\{{\lambda}_1,{\lambda}_2,\cdots, {\lambda}_8\right\}=\operatorname{diag}\left\{0,0,-{p}_2+j{\omega}_{d_2},-{p}_2-j{\omega}_{d_2},-{p}_3+j{\omega}_{d_3},-{p}_3-j{\omega}_{d_3},-{p}_4+j{\omega}_{d_4},-{p}_4-j{\omega}_{d_4}\right\} $$
Let \( z={\left[{x}^T\kern0.5em {v}^T\right]}^T={\left[\begin{array}{ccc}{z}_1& \cdots & {z}_8\end{array}\right]}^T \), the solution to (2) can be written as follows:
$$ {z}_i(t)=\sum \limits_{j=1}^8{r}_{ji}{l}_{1j}{e}^{\lambda_jt}{z}_1(0)+\cdots +\sum \limits_{j=1}^8{r}_{ji}{l}_{8j}{e}^{\lambda_jt}{z}_8(0);i=1,\cdots, 8 $$
Since zi(0) = 0 for all i except for z5(0) = vs(0) > 0, we can write (5) as
$$ {z}_i(t)=\sum \limits_{j=1}^8{r}_{ji}{l}_{5j}{e}^{\lambda_jt}{z}_5(0);i=1,\cdots, 8 $$
The relative displacement between the arm and the disk, δad, can be expressed as
$$ {\delta}_{ad}={z}_3(t)-{z}_4(t)=\sum \limits_{j=1}^8{r}_{j3}{l}_{5j}{e}^{\lambda_jt}{z}_5(0)-\sum \limits_{j=1}^8{r}_{j4}{l}_{5j}{e}^{\lambda_jt}{z}_5(0)={z}_5(0)\sum \limits_{j=1}^8\left({r}_{j3}-{r}_{j4}\right){l}_{5j}{e}^{\lambda_jt} $$
If we want had to stay as constant as possible, then δad must be as small as possible. Clearly, if (rj3 − rj4) = 0 for all j, then δad would be zero; however, this cannot happen in practice. The most reasonable way to minimize δad is to minimize the terms \( {e}^{\lambda_jt} \) for all j. The first two λ's are zero due to the rigid body mode, so these cannot be changed. Meanwhile, the remaining six λ's can be changed by modifying the springs and dampers of the model. As noted earlier, changing cb allows us to change the damping ratio of all nonzero modes, so these six λ's are significantly affected by cb.
When we increase cb, assuming that ω2, ω3, and ω4 are unchanged because the change in mb and kb is not significant, ζ2, ζ3, and ζ4 become larger. This results in smaller values of \( {\omega}_{d_2},{\omega}_{d_3} \), and \( {\omega}_{d_4} \), which in turn reduces the oscillation frequency of the shock response. In addition, due to the larger ζ2, ζ3, and ζ4, the magnitudes ∣p2∣, ∣p3∣, and ∣p4∣ become larger, i.e., the real parts of the nonzero eigenvalues move further to the left of the imaginary axis. This results in a higher convergence rate of δad. On the other hand, if we only increase ca, i.e., only ζ4 is increased, then only \( {\omega}_{d_4} \) becomes smaller and only |p4| becomes larger. Thus, the convergence rates of \( {e}^{\lambda_3t} \) and \( {e}^{\lambda_4t} \) will still dominate the convergence of δad. This results in little to no improvement in the reduction of δad.
Simulation of the simplified baseplate–HAA–disk model
Typically for a 2.5″ HDD, the first bending mode of the baseplate is around 600–800 Hz, the first bending mode of the disk is around 1000–1100 Hz, and the first bending mode of the arm is around 1500–1600 Hz (see [20]). To simulate the model in Fig. 2, we first define mb = 35 g, ma = 2.8 g, md = 5 g, and ms = 1 × 10^6 g. These values are taken from the typical masses of a 2.5″ HDD, and the shaker mass is chosen for ease in defining the force input. Then, we tune \( {k}_b=6.7153\cdotp {10}^5\frac{N}{m},{k}_a=2.2409\cdotp {10}^5\frac{N}{m}, \) and \( {k}_d=1.6581\cdotp {10}^6\frac{N}{m} \) such that the undamped nonzero modes assume values around the typical modes of a 2.5″ HDD. The undamped eigenpairs of the model simulated in this paper are:
$$ \begin{array}{l}{\gamma}_1=0\ \mathrm{Hz},\kern0.5em {U}_1={s}_1{\left[\begin{array}{cccc}1& 1& 1& 1\end{array}\right]}^T\\ {}{\gamma}_2=600\ \mathrm{Hz},\kern0.5em {U}_2={s}_2{\left[\begin{array}{cccc}0& 1& 1.21& 1.77\end{array}\right]}^T\\ {}{\gamma}_3=1000\ \mathrm{Hz},\kern0.5em {U}_3={s}_3{\left[\begin{array}{cccc}0& 1& 1.97& -4.71\end{array}\right]}^T\\ {}{\gamma}_4=1500\ \mathrm{Hz},\kern0.5em {U}_4={s}_4{\left[\begin{array}{cccc}0& 1& -8.79& -0.57\end{array}\right]}^T\end{array} $$
where si for i = 1, 2, 3, 4 is a real number. The damping factors of the baseplate, arm, and disk are tuned such that the damping ratio of all modes is 0.01, a conservative damping ratio value for metals [21]. The damping factors of the baseplate, arm, and disk which result in this damping ratio are referred to as the unmodified configuration \( \left({c}_b^0=3.9584\frac{Ns}{m},{c}_a^0=0.4486\frac{Ns}{m},{c}_d^0=0.5089\frac{Ns}{m}\right) \). MATLAB is used to simulate the simplified model.
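As a cross-check of the eigenvalue analysis, a minimal Python sketch of this simplified model is given below (the paper's own simulations were done in MATLAB). It assembles the matrices of the equation of motion with the parameter values quoted above, forms the state matrix, and prints the flexible-mode frequencies and damping ratios; the conversion of the masses from grams to kilograms is our assumption about the intended units.

```python
# Minimal sketch of the simplified baseplate-HAA-disk model.
# Masses are converted from grams to kilograms (assumed unit interpretation).
import numpy as np

ms, mb, ma, md = 1.0e3, 0.035, 0.0028, 0.005      # shaker, baseplate, arm, disk mass [kg]
kb, ka, kd = 6.7153e5, 2.2409e5, 1.6581e6         # stiffnesses [N/m]
cb, ca, cd = 3.9584, 0.4486, 0.5089               # unmodified damping factors [N*s/m]

M = np.diag([ms, mb, ma, md])
C = np.array([[ cb, -cb,            0.0,  0.0],
              [-cb,  ca + cb + cd, -ca,  -cd],
              [0.0, -ca,            ca,   0.0],
              [0.0, -cd,            0.0,  cd]])
K = np.array([[ kb, -kb,            0.0,  0.0],
              [-kb,  ka + kb + kd, -ka,  -kd],
              [0.0, -ka,            ka,   0.0],
              [0.0, -kd,            0.0,  kd]])

# State matrix A, with state z = [x; v]
A = np.block([[np.zeros((4, 4)),        np.eye(4)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

eigvals = np.linalg.eigvals(A)
flex = eigvals[eigvals.imag > 1.0]                # one eigenvalue per underdamped flexible mode
wn = np.abs(flex)                                 # undamped natural frequencies [rad/s]
zeta = -flex.real / wn                            # modal damping ratios
for f_hz, z in sorted(zip(wn / (2 * np.pi), zeta)):
    print(f"flexible mode at {f_hz:8.1f} Hz, damping ratio {z:.4f}")
```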
The force, F, applied to the shaker is a half-sine input such that the peak acceleration of the shaker is 100 G. Three shock durations are used in the simulation: 0.5 ms, 1 ms, and 2 ms. The δad of the unmodified configuration is then compared with two other cases: (i) the case where the damping factor of the baseplate (cb) is increased and (ii) the case where the damping factor of the arm (ca) is increased. The case where the damping factor of the disk is increased is not considered, since in practice it is unlikely that the damping factor of the disk can be modified. The value of δad which defines failure of the HDD is not studied in this paper; this section focuses on reducing δad by increasing the damping factor of the baseplate and of the arm.
Figures 3, 4, and 5 show the arm–disk relative displacement response for three shock durations (0.5 ms, 1 ms, and 2 ms). We can see from Figs. 3, 4, and 5 that increasing the damping factor of the baseplate can effectively reduce the relative arm–disk displacement response compared to increasing the damping factor of the arm only. These results can be explained as follows:
δad response, HDD subjected to shock input 100 G 0.5 ms
δad response, HDD subjected to shock input 100 G 1 ms
As discussed in the subsection "Theoretical analysis", increasing ca alone only increases the damping ratio of the arm mode. Since the response of the disk is not affected by the increase of ca, the arm and disk responses can become more out of phase than in the unmodified configuration; out of phase here means that the arm and the disk move in different directions so that their relative displacement becomes larger. We can see from Figs. 3, 4, and 5 that at some points the arm–disk relative displacement response is then even larger than for the unmodified configuration. On the other hand, by increasing cb, the damping ratios of all modes are increased and δad is reduced, as expected from the discussion in the subsection "Theoretical analysis".
The optimum damping factor increase of the baseplate is also investigated. The damping factor of the baseplate is increased incrementally from \( {c}_b^0 \) up to \( 70{c}_b^0 \), and the range between the 1st maximum peak and the 1st minimum peak of the relative arm–disk displacement response over time is measured for the three shock duration cases. Let (δmax − δmin) denote the difference between the 1st maximum peak and the 1st minimum peak, and let (δmax − δmin)0 denote this difference when \( {c}_b={c}_b^0 \). Figure 6 plots \( \frac{\left({\delta}^{max}-{\delta}^{min}\right)}{{\left({\delta}^{max}-{\delta}^{min}\right)}^0}\times 100\% \) versus β, where β is the damping factor multiplier, i.e., \( {c}_b=\beta {c}_b^0 \).
Displacement range between the 1st maximum peak and the 1st minimum peak of the arm–disk displacement response with varied baseplate damping factor
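A sketch of this sweep is given below (Python with SciPy rather than the MATLAB used in the paper). It reuses the parameter values of the previous sketch, drives the shaker with a half-sine force whose peak corresponds to roughly 100 G of shaker acceleration, and, as a simplification of the "1st maximum/minimum peak" measure, takes the peak-to-peak range of δad over the whole simulated window; the simulation length and solver tolerances are our own choices.

```python
# Minimal sketch of the baseplate damping-factor sweep (cb = beta * cb0).
import numpy as np
from scipy.integrate import solve_ivp

ms, mb, ma, md = 1.0e3, 0.035, 0.0028, 0.005      # masses [kg] (grams -> kg assumed)
kb, ka, kd = 6.7153e5, 2.2409e5, 1.6581e6         # stiffnesses [N/m]
cb0, ca, cd = 3.9584, 0.4486, 0.5089              # unmodified damping factors [N*s/m]

def delta_ad_range(beta, pulse_ms, g_peak=100.0, t_end=0.05):
    cb = beta * cb0
    M = np.diag([ms, mb, ma, md])
    C = np.array([[cb, -cb, 0, 0], [-cb, ca + cb + cd, -ca, -cd],
                  [0, -ca, ca, 0], [0, -cd, 0, cd]], dtype=float)
    K = np.array([[kb, -kb, 0, 0], [-kb, ka + kb + kd, -ka, -kd],
                  [0, -ka, ka, 0], [0, -kd, 0, kd]], dtype=float)
    A = np.block([[np.zeros((4, 4)), np.eye(4)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    Minv = np.linalg.inv(M)
    T = pulse_ms * 1e-3
    F0 = ms * g_peak * 9.81                       # force giving ~100 G shaker peak acceleration

    def rhs(t, z):
        F = F0 * np.sin(np.pi * t / T) if t < T else 0.0
        dz = A @ z
        dz[4:] += Minv @ np.array([F, 0.0, 0.0, 0.0])
        return dz

    sol = solve_ivp(rhs, (0.0, t_end), np.zeros(8), max_step=1e-5, rtol=1e-6, atol=1e-9)
    delta = sol.y[2] - sol.y[3]                   # delta_ad = x_a - x_d
    return delta.max() - delta.min()

for pulse in (0.5, 1.0, 2.0):                     # shock durations [ms]
    base = delta_ad_range(1.0, pulse)
    for beta in (5, 25, 70):
        ratio = 100.0 * delta_ad_range(float(beta), pulse) / base
        print(f"{pulse} ms pulse, beta={beta:3d}: {ratio:5.1f}% of unmodified range")
```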
From Fig. 6, it can be observed that the optimum damping factor for the three shock duration cases is around 25 times the unmodified value. The 100 % displacement range corresponds to the baseplate damping factor being equal to \( {c}_b^0 \), or β = 1. From Fig. 6, if we use more than 25 times \( {c}_b^0 \), the improvement for the 0.5-ms shock duration starts to decrease, while for the 1-ms shock duration the improvement becomes less and less significant. The case is different for the 2-ms shock duration: the shock resistance improvement increases almost linearly with the increase of the baseplate damping factor.
The above phenomenon can be explained by examining the transmissibility curve for a single degree of freedom system with base excitation (see [22]). From the transmissibility curve, when the excitation frequency is close to the natural frequency of the system, a relatively high damping ratio is very effective in reducing the transmissibility. However, when the excitation frequency is higher than the natural frequency of the system, a system with a high damping ratio has higher transmissibility than a system with a low damping ratio. Thus, in a base-excitation problem, if we want to reduce the displacement when the excitation frequency is higher than the natural frequency, we should choose a relatively low damping ratio.
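For reference, a standard form of the displacement transmissibility of a single-degree-of-freedom system under base excitation is (the symbols below are ours, not taken from [22]):
$$ T\left(r,\zeta \right)=\sqrt{\frac{1+{\left(2\zeta r\right)}^2}{{\left(1-{r}^2\right)}^2+{\left(2\zeta r\right)}^2}},\kern1em r=\frac{\omega }{\omega_n} $$
For r near 1, increasing ζ lowers T, whereas for \( r>\sqrt{2} \) increasing ζ raises T, which is the trade-off used in the argument above.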
The baseplate's natural frequency is around 600–800 Hz, which makes the baseplate mode to be excited by all shock duration of 2 ms, 1 ms, and 0.5 ms (see the FFT of the shock inputs in Fig. 7).
FFT of shock inputs with duration of 0.5 ms, 1 ms, and 2 ms generated using MATLAB
For the 2-ms shock duration, the FFT shows that the dominant excitation frequency range of the shock input is close to the baseplate's natural frequency. This is the reason why the improvement curve for the 2-ms shock duration in Fig. 6 keeps increasing when we increase the damping value of the baseplate. For the 1-ms shock duration, the shock excitation frequency range is a little higher than the baseplate's natural frequency, and thus a high damping value of the baseplate (when β > 25 in Fig. 6) is not effective in reducing the arm–disk relative displacement response. Lastly, for the 0.5-ms shock duration, the shock excitation frequency range is much higher than the baseplate's natural frequency. As a result, a high damping value of the baseplate (when β > 25 in Fig. 6) yields a lower improvement in the arm–disk displacement response.
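Spectra like those in Fig. 7 can be reproduced with a short script. The sketch below (Python rather than the MATLAB used for Fig. 7) computes the spectrum of unit-amplitude half-sine pulses and reports the frequency below which most of the pulse energy lies; the 90 % energy measure and the sampling settings are our own choices.

```python
# Minimal sketch: spectral content of half-sine shock pulses of 0.5, 1 and 2 ms duration.
import numpy as np

fs = 100_000                                   # sampling frequency [Hz]
t = np.arange(0, 0.1, 1 / fs)                  # 100 ms record

for T_ms in (0.5, 1.0, 2.0):
    T = T_ms * 1e-3
    pulse = np.where(t < T, np.sin(np.pi * t / T), 0.0)   # unit-amplitude half-sine
    spec = np.abs(np.fft.rfft(pulse))
    freq = np.fft.rfftfreq(len(t), 1 / fs)
    cum = np.cumsum(spec**2)                   # cumulative spectral energy
    f90 = freq[np.searchsorted(cum, 0.9 * cum[-1])]
    print(f"{T_ms} ms pulse: ~90% of spectral energy below {f90:.0f} Hz")
```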
Remark: the labels "high" and "low" damping ratio in the above discussion are based solely on the results in Fig. 6.
Non-operating shock experiments
The subsection "Simulation of the simplified baseplate–HAA–disk model" indicates that increasing the damping factor of the baseplate results in the reduction of arm–disk relative displacement or δad when the HDD is subjected to shock input. In this subsection, we apply the same method as in the subsection "Simulation of the simplified baseplate–HAA–disk model" to a real 2.5″ HDD which is to increase the damping factor of the baseplate, and then, we perform the non-operating shock tests to find out how much reduction to δad that can be obtained on the real 2.5″ HDD. There are two specific things we first need to answer in performing the non-operating shock tests:
The method in increasing the damping factor of the baseplate.
The location on the baseplate where we should increase its damping factor.
To address the first point, we chose a dampening layer from the manufacturer 3M which has a low outgassing property. The dampening layer material is polymer based (see [23]). This property is important so that the dampening layer does not contaminate the internal environment of the HDD. The thickness of the dampening layer is 0.05 mm, and it is easy to apply to the baseplate since it works like a tape. The dampening layer used in the experiment is shown in Fig. 8.
Dampening layer
To answer the second point, non-op-shock tests were performed. Referring to the discussion in the subsection "Simulation of the simplified baseplate–HAA–disk model", the non-op-shock tests use the arm–disk relative displacement as the improvement indicator. Several configurations of dampening layer placement on the baseplate were tested and the arm–disk relative displacement was monitored. The best dampening layer configuration from the non-op-shock tests is then used in HDDs for the op-shock tests. In both the non-operating and op-shock tests, commercial 2.5″ HDDs are used. While the non-op-shock tests were done to find the optimum dampening layer configuration, the op-shock tests were done to find the op-shock tolerance improvement obtained with the optimum dampening layer configuration.
We note that the HDDs used in the experiments use single stage actuator. In the interest of the result from the subsection "Simulation of the simplified baseplate–HAA–disk model", where the shock duration 0.5 ms has an optimum point with the lowest β, the non-operating and op-shock tests are carried out using 0.5 ms shock duration.
The non-op-shock tests involve the use of a shock tower to simulate the HDD being dropped to the floor. The experiment setup is shown in Fig. 9. The shock tower has a guide-pole that holds the shock table such that the shock table can be dropped onto the base while keeping the HDD facing in one direction during the drop test. The drop height can be adjusted to set the shock magnitude (the G level), while a soft material such as Delrin (a kind of plastic) can be placed on the drop area to adjust the shock duration. The shock magnitude and duration are adjusted and confirmed using an accelerometer attached to the shock table. To measure δad, that is, the relative displacement between the arm and the disk, a laser Doppler interferometer is used. The interferometer has two probes: laser probe A is pointed at the arm tip and laser probe B at the outer disk point. The output from the laser Doppler interferometer is the relative displacement measured by the two probes, and the initial measurement is normalized to zero. The outputs from the accelerometer and the laser Doppler interferometer are routed to a dynamic signal analyzer for recording.
Experiment setup
As we can see in Fig. 9, the HDD is tested in parking condition. To sustain the structural stiffness of the HDD during shock tests, the top cover of the HDD is still used, but the top cover is partially cut on the area where the lasers are being pointed (not shown in Fig. 9). It is worth noting that a non-op-shock test by positioning the arm-tip on top of the outer-disk is not possible since the laser can only point to the arm tip in that situation. In addition, since the suspension's stiffness is relatively low, the arm bending mode is not affected significantly by the parking position.
To find suitable areas on the baseplate for applying the dampening layer, we refer to the discussion in the subsection "Theoretical analysis", where the damping factor increase of the baseplate is meant to increase the damping ratio of the baseplate's 1st bending mode, the disk's 1st bending mode, and the arm's 1st bending mode. We denote these modes as the low frequency modes. Thus, the dampening layer must be placed on baseplate areas that have high strain in these low frequency modes. To this end, we perform a modal analysis on the finite element model (FEM) of the HDD used in the experiment. The modal analysis is done using the finite element analysis package ANSYS.
Figure 10 shows the finite element model of the baseplate, where the four corners A, B, C, and D are constrained in all directions for the modal analysis. The modal analysis is done on the full finite element model of the HDD, but in Fig. 10 we only select the baseplate elements to show the strain measurement locations and the first bending mode of the baseplate. The first bending mode of the baseplate of our HDD model is 729 Hz, which is within the range of 600–800 Hz. The strain between two adjacent nodes is evaluated on 6 selected lines as shown in Fig. 10. To measure the strain, the eigenvector along the line is taken from the modal analysis result for several modes. Then, the strain is calculated as the difference between the eigenvector elements of two adjacent nodes. This strain is a relative strain, not to be confused with an absolute strain measurement related to stress when the structure is loaded. The strain measurements on the 6 lines are shown in Fig. 11.
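This "relative strain" is simply the difference of the mode-shape value between adjacent nodes along a measurement line. The sketch below illustrates the post-processing step with an illustrative eigenvector array; in practice the values would be exported from the ANSYS modal results.

```python
# Minimal sketch of the relative-strain measure: difference of the mode-shape value
# between adjacent nodes along a measurement line. `phi_line` is an illustrative array
# of eigenvector entries (out-of-plane displacement) along one line; real values would
# come from the exported modal analysis results.
import numpy as np

phi_line = np.array([0.00, 0.08, 0.21, 0.35, 0.42, 0.36, 0.22, 0.09, 0.00])  # example only

rel_strain = np.abs(np.diff(phi_line))        # |phi[i+1] - phi[i]| between adjacent nodes
peak_node = int(np.argmax(rel_strain))
print("relative strain along the line:", np.round(rel_strain, 3))
print(f"highest strain between nodes {peak_node} and {peak_node + 1}")
```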
Boundary condition on baseplate, strain measurement location, and first bending mode of the baseplate at 729 Hz
Strain measurement result on 6 lines of the baseplate
From Fig. 11, the three lowest frequencies are the 1st bending modes of the baseplate, disk, and arm, respectively. Whether the strain is high or low is determined by comparing the relative strain within a mode, not across modes. For example, at line 1, Fig. 11 shows that the mid points have high strain for the 729 Hz, 957 Hz, and 1551 Hz modes, whereas for the 1731 Hz mode the leftmost, mid, and rightmost points have comparable strain. Since the low frequency modes have the highest strain at the mid points, we mark the mid points of line 1 as having high strain for the low frequency modes, while the leftmost and rightmost points of line 1 have high strain for the high frequency modes. The information from Fig. 11 is used as an initial guess for applying the dampening layer on the back of the baseplate. The leftmost picture of Fig. 12 shows the first configuration that we tried; from there, we performed several trial-and-error iterations until we found the optimum dampening layer configuration.
Samples of dampening layer configurations tested on the non-op-shock tests; the white tape is the dampening layer
The dampening layer is applied to the backside of the baseplate by first removing the PCB. Figure 12 shows a sample of dampening layer placement configuration which was tested during the non-op-shock tests. Only one HDD is used in the non-op-shock tests, which means that the new damping layer is placed by first removing the previously tested dampening layer. This is done to eliminate differences between different HDD batches. For each HDD, 10 drop tests are done, and the results are averaged.
In this paper, we show the results from the optimum dampening layer configuration only. The optimum dampening layer configuration is shown in Fig. 13. By optimum, we mean the configuration which results in the highest reduction of the peak of the arm–disk relative displacement. The optimum dampening layer placement that we found on this HDD is on the edge of the disk-spindle area and below the HAA's pivot. It is noted that the same dampening layer configuration might not work on different types of HDDs, since they will have different 1st bending mode of the baseplate.
Optimum dampening layer placement during non-op-shock tests
Figure 14 shows the relative arm–disk displacement response over time obtained from the experiment. The initial value of the measurement is zero since the relative displacement measured by the laser Doppler interferometer is normalized to zero. The results for both the unmodified HDD and the HDD with the dampening layer are plotted in this figure. The shock level used in the experiment was 100 G 0.5 ms. From Fig. 14, the \( \frac{\left({\delta}^{max}-{\delta}^{min}\right)}{{\left({\delta}^{max}-{\delta}^{min}\right)}^0}\times 100\% \) is around 80%. This number is close to the result in the subsection "Simulation of the simplified baseplate–HAA–disk model" where the damping factor of the baseplate is set to 25 times the unmodified value. Meanwhile, the other dampening layer configurations shown in Fig. 12 have \( \frac{\left({\delta}^{max}-{\delta}^{min}\right)}{{\left({\delta}^{max}-{\delta}^{min}\right)}^0}\times 100\% \) of 88%, 92%, 94%, and 91%.
Relative arm–disk displacement response from experiment; HDD subjected to shock input of 100G 0.5 ms
Operating shock experiments
In the op-shock experiments, two sets of HDDs were prepared. The HDDs used are of similar type with the one tested in the non-operating tests, and they have similar number of platter and capacity. The first set is two (2) unmodified HDDs (HDDs without dampening layer), and the second set is three (3) HDDs with the dampening layer on the baseplate (the dampening layer placement configuration is the one shown in Fig. 13).
Like the non-op-shock test, a shock duration of 0.5 ms was used. As discussed in the previous subsection, a shock duration of 0.5 ms has the least room for improvement in terms of non-op-shock tolerance, and therefore, it will be our interest to find out the op-shock resistance improvement for 0.5-ms shock duration. The two sets of HDDs were shock-tested using the same shock table as used in the non-op-shock test. In this op-shock test, the G level is increased incrementally by 25 G from 200 G until all HDD fail. The shock duration is kept the same (0.5 ms) for all op-shock tests. The op-shock test results are given in Table 1.
Table 1 Op-shock test results of unmodified HDD and HDD with dampening layer on the baseplate (all shock is of 0.5-ms duration)
The failure indicator in the op-shock test is whether the read/write head touches/slaps the disk. To retain the same op-shock test condition for all HDDs, a routine was run in each op-shock test to position the actuator arm at the outer disk diameter (OD) position. A program was run to scan the disk for bad sectors before and after the op-shock test, and the numbers of bad sectors before and after the test are compared. If the number of bad sectors increased after the op-shock test, the disk was damaged in some places by head slap when the HDD was subjected to the shock, and thus the HDD fails the op-shock test.
From the results of Table 1, the unmodified HDDs start to fail at 275 G 0.5 ms, while the HDDs with the dampening layer all fail at 325 G 0.5 ms. This shows a 50 G advantage of the HDDs with the dampening layer over the unmodified HDDs. The 50 G advantage can be explained as follows. From the non-op-shock tests, we know that the HDD with the dampening layer has a lower arm–disk relative displacement (the experiments apply the same shock level to HDDs with and without the dampening layer). This also means that a higher shock level is needed for the HDD with the dampening layer to reach a similar arm–disk relative displacement. Here, we refer to the discussion in the "Methods" section, in which the arm–disk relative displacement is strongly related to the head–disk failure mechanism.
The op-shock resistance improvement from this experiment is 18.18% \( \left(\frac{50}{275}\times 100\%\right) \). Although this number is close to the improvement found in the non-op-shock test results (relative arm–disk displacement reduced to 80% of the unmodified value, i.e., an improvement of about 20%) and in the MATLAB simulation of the simplified model in the subsection "Simulation of the simplified baseplate–HAA–disk model", it is difficult to draw a precise conclusion on the percentage of improvement since the op-shock test level can only be increased in 25 G increments. However, we can conclude that, qualitatively, the improvement found in the non-op-shock tests is also found in the op-shock tests.
A strategy to apply dampening layer on the baseplate of the HDD to increase its op-shock tolerance has been presented in this paper. Technical analysis and simulation on the simplified model of baseplate-HAA-disk suggest that increasing the damping factor of the baseplate has a potential to improve the op-shock tolerance of the HDD. This improvement is implied by the reduction in the arm–disk relative displacement when the HDD is subjected to shock. The findings from the technical analysis and simulation on the simplified model are verified by the non-op-shock tests on the real HDD. Finally, op-shock tests were done to test the hypothesis, and the results from the tests show that HDDs with dampening layer have higher op-shock resistance compared to HDDs without the dampening layer.
In this work, we found that the dampening layer cannot be placed arbitrarily. It must be placed on areas such that the first bending modes of the baseplate, disk, and arm are damped out. The placement is initiated by modal analysis to obtain an initial guess of the area, after which non-op-shock tests are performed to fine-tune it.
One problem not addressed in this paper is how to generalize the solution to different HDD models. With the proposed method, each HDD model requires its own set of experiments, which is impractical for a mass production system. Another open issue is the thermal insulation introduced by the dampening layer, which has not yet been addressed in this study. These problems will be treated in our future work.
All data are available from the authors.
HAA: Head actuator assembly
FEM: Finite element model
OD: Outer disk
PCB: Printed circuit board
The authors would like to acknowledge the support from Indonesia Endowment Fund for Education (LPDP) and Center of Research and Community Service (CRCS) of Sampoerna University.
This research is funded by the Indonesia Endowment Fund for Education (LPDP) under Research and Innovation Program (RISPRO) for electric vehicle development with contract no. PRJ-85/LPDP/2020 and Center of Research and Community Service (CRCS) of Sampoerna University.
Mechanical Engineering Study Program, Sampoerna University, Jakarta, Indonesia
Djati Wibowo Djamari & Farid Triawan
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
Fook Fah Yap
Faculty of Mechanical and Aerospace Engineering, Institut Teknologi Bandung, Bandung, Indonesia
Bentang Arief Budiman
D.W.D. conceived and designed the simulation and experiments, performed the simulation and experiments, analyzed and interpreted the data, wrote the original paper, and wrote the revised manuscript. D.W.D., F.F.Y., B.A.B., and F.T. read and approved the final manuscript.
Correspondence to Djati Wibowo Djamari.
Djamari, D.W., Yap, F.F., Budiman, B.A. et al. Polymer-based dampening layer application to improve the operating shock tolerance of hard disk drive. J. Eng. Appl. Sci. 69, 13 (2022). https://doi.org/10.1186/s44147-021-00062-4
Shock tolerance
Baseplate mode | CommonCrawl |
A COMPREHENSIVE STUDY OF THE 14C SOURCE TERM IN THE 10 MW HIGH-TEMPERATURE GAS-COOLED REACTOR
X Liu, W Peng, L Wei, M Lou, F Xie, J Cao, J Tong, F Li, G Zheng
Journal: Radiocarbon , First View
Published online by Cambridge University Press: 24 June 2019, pp. 1-15
While assessing the environmental impact of nuclear power plants, researchers have focused their attention on radiocarbon (14C) owing to its high mobility in the environment and important radiological impact on human beings. The 10 MW high-temperature gas-cooled reactor (HTR-10) is the first pebble-bed gas-cooled test reactor in China that adopted helium as primary coolant and graphite spheres containing tristructural-isotropic (TRISO) coated particles as fuel elements. A series of experiments on the 14C source terms in HTR-10 was conducted: (1) measurement of the specific activity and distribution of typical nuclides in the irradiated graphite spheres from the core, (2) measurement of the activity concentration of 14C in the primary coolant, and (3) measurement of the amount of 14C discharged in the effluent from the stack. All experimental data on 14C available for HTR-10 were summarized and analyzed using theoretical calculations. A sensitivity study on the total porosity, open porosity, and percentage of closed pores that became open after irradiating the matrix graphite was performed to illustrate their effects on the activity concentration of 14C in the primary coolant and activity amount of 14C in various deduction routes.
Experimental Investigation of 14C in the Primary Coolant of the 10 MW High Temperature Gas-Cooled Reactor
F Xie, W Peng, J Cao, X Feng, L Wei, J Tong, F Li, K Sun
Journal: Radiocarbon / Volume 61 / Issue 3 / June 2019
Print publication: June 2019
The very high temperature reactor (VHTR) is a development of the high-temperature gas-cooled reactors (HTGRs) and one of the six proposed Generation IV reactor concept candidates. The 10 MW high temperature gas-cooled reactor (HTR-10) is the first pebble-bed gas-cooled test reactor in China. A sampling system for the measurement of carbon-14 (14C) was established in the helium purification system of the HTR-10 primary loop, which could sample 14C from the coolant at three locations. The results showed that the activity concentration of 14C in the HTR-10 primary coolant was 1.2(1) × 10² Bq/m³ (STP). The production mechanisms, distribution characteristics, reduction routes, and release types of 14C in HTR-10 were analyzed and discussed. A theoretical model was built to calculate the amount of 14C in the core of HTR-10 and its concentration in the primary coolant. The activation reaction of 13C has been identified to be the dominant 14C source in the core, whereas in the primary coolant, it is the activation of 14N. These results can supplement important information for the source term analysis of 14C in HTR-10 and promote the study of 14C in HTGRs.
Consistent improvements in soil biochemical properties and crop yields by organic fertilization for above-ground (rapeseed) and below-ground (sweet potato) crops
X. P. Li, C. L. Liu, H. Zhao, F. Gao, G. N. Ji, F. Hu, H. X. Li
Journal: The Journal of Agricultural Science / Volume 156 / Issue 10 / December 2018
Published online by Cambridge University Press: 19 March 2019, pp. 1186-1195
Although application of organic fertilizers has become a recommended way for developing sustainable agriculture, it is still unclear whether above-ground and below-ground crops have similar responses to chemical fertilizers (CF) and organic manure (OM) under the same farming conditions. The current study investigated soil quality and crop yield response to fertilization of a double-cropping system with rapeseed (above-ground) and sweet potato (below-ground) in an infertile red soil for 2 years (2014–16). Three fertilizer treatments were compared, including CF, OM and organic manure plus chemical fertilizer (MCF). Organic fertilizers (OM and MCF) increased the yield of both above- and below-ground crops and improved soil biochemical properties significantly. The current study also found that soil-chemical properties were the most important and direct factors in increasing crop yields. Also, crop yield was affected indirectly by soil-biological properties, because no significant effects of soil-biological activities on yield were detected after controlling the positive effects of soil-chemical properties. Since organic fertilizers could not only increase crop yield, but also improve soil nutrients and microbial activities efficiently and continuously, OM application is a reliable agricultural practice for both above- and below-ground crops in the red soils of China.
Accuracy of self-reported weight in the Women's Health Initiative
Juhua Luo, Cynthia A Thomson, Michael Hendryx, Lesley F Tinker, JoAnn E Manson, Yueyao Li, Dorothy A Nelson, Mara Z Vitolins, Rebecca A Seguin, Charles B Eaton, Jean Wactawski-Wende, Karen L Margolis
Journal: Public Health Nutrition / Volume 22 / Issue 6 / April 2019
Published online by Cambridge University Press: 19 November 2018, pp. 1019-1028
Print publication: April 2019
To assess the extent of error present in self-reported weight data in the Women's Health Initiative, variables that may be associated with error, and to develop methods to reduce any identified error.
Prospective cohort study.
Forty clinical centres in the USA.
Women (n 75 336) participating in the Women's Health Initiative Observational Study (WHI-OS) and women (n 6236) participating in the WHI Long Life Study (LLS) with self-reported and measured weight collected about 20 years later (2013–2014).
The correlation between self-reported and measured weights was 0·97. On average, women under-reported their weight by about 2 lb (0·91 kg). The discrepancies varied by age, race/ethnicity, education and BMI. Compared with normal-weight women, underweight women over-reported their weight by 3·86 lb (1·75 kg) and obese women under-reported their weight by 4·18 lb (1·90 kg) on average. The higher the degree of excess weight, the greater the under-reporting of weight. Adjusting self-reported weight for an individual's age, race/ethnicity and education yielded an identical average weight to that measured.
Correlations between self-reported and measured weights in the WHI are high. Discrepancies varied by different sociodemographic characteristics, especially an individual's BMI. Correction of self-reported weight for individual characteristics could improve the accuracy of assessment of obesity status in postmenopausal women.
Epidemiological survey and sequence information analysis of swine hepatitis E virus in Sichuan, China
Y. Y. Li, Z. W. Xu, X. J. Li, S. Y. Gong, Y. Cai, Y. Q. Chen, Y. M. Li, Y. F. Xu, X. G. Sun, L. Zhu
Journal: Epidemiology & Infection / Volume 147 / 2019
Published online by Cambridge University Press: 19 November 2018, e49
Hepatitis E is an important zoonosis that is prevalent in China. Hepatitis E virus (HEV) is a pathogen that affects humans and animals and endangers public health in China. In this study, the detection of HEV epidemics in swine in Sichuan Province, China, was carried out by nested real-time PCR. A total of 174 stool samples and 160 bile samples from swine in Sichuan Province were examined. In addition, software was used to analyse the biological evolution of HEV. The results showed that within 2 years of swine HEV (SHEV) infection in China, SHEV was first detected in Sichuan Province. HEV was endemic in Sichuan; the positive rate for pig farms was 11.1%, and the total positive sample rate was 10.5%. The age of swine with the highest positive rate (17.9%) was 5–9 weeks. The examined swine species in order of highest to lowest HEV infection rates were Chenghua pig, Large White, Duroc, Pietrain, Landrace and Hampshire. Nucleotide and amino acid sequence analysis showed that the HEV epidemic in swine in Sichuan Province was related to genotype IV, which had the highest homology to HEV in Beijing. Sichuan strains have greater variation than Chinese representative strains, which may indicate the presence of new HEV strains.
Improving the estimation and partitioning of plant nitrogen in the RiceGrow model
L. Tang, R. J. Chang, B. Basso, T. Li, F. X. Zhen, L. L. Liu, W. X. Cao, Y. Zhu
Journal: The Journal of Agricultural Science / Volume 156 / Issue 8 / October 2018
Published online by Cambridge University Press: 28 November 2018, pp. 959-970
Print publication: October 2018
Plant nitrogen (N) links with many physiological progresses of crop growth and yield formation. Accurate simulation is key to predict crop growth and yield correctly. The aim of the current study was to improve the estimation of N uptake and translocation processes in the whole rice plant as well as within plant organs in the RiceGrow model by using plant and organ maximum, critical and minimum N dilution curves. The maximum and critical N (Nc) demand (obtained from the maximum and critical curves) of shoot and root and Nc demand of organs (leaf, stem and panicle) are calculated by N concentration and biomass. Nitrogen distribution among organs is computed differently pre- and post-anthesis. Pre-anthesis distribution is determined by maximum N demand with no priority among organs. In post-anthesis distribution, panicle demands are met first and then the remaining N is allocated to other organs without priority. The amount of plant N uptake depends on plant N demand and N supplied by the soil. Calibration and validation of the established model were performed on field experiments conducted in China and the Philippines with varied N rates and N split applications; results showed that this improved model can simulate the processes of N uptake and translocation well.
Effect of stocking rate on grazing behaviour and diet selection of goats on cultivated pasture
L. Q. Wan, K. S. Liu, W. Wu, J. S. Li, T. C. Zhao, X. Q. Shao, F. He, H. Lv, X. L. Li
Journal: The Journal of Agricultural Science / Volume 156 / Issue 7 / September 2018
Published online by Cambridge University Press: 17 October 2018, pp. 914-921
Print publication: September 2018
Cultivated pastures in southern China are being used to improve forage productivity and animal performance, but studies on grazing behaviour of goats in these cultivated pastures are still rare. In the current study, the grazing behaviour of Yunling black goats under low (5 goats/ha) and high (15 goats/ha) stocking rates (SRs) was evaluated. Data showed that the proportion of time goats spent on activities was: eating (0.59–0.87), ruminating (0.05–0.35), walking (0.03–0.06) and resting (0.01–0.03). Compared with low SR, goats spent more time eating and walking, and less time ruminating and resting under high SR. Goats had similar diet preferences under both SR and preferred to eat grasses (ryegrass and cocksfoot) more than a legume (white clover). The distribution of eating time on each forage species was more uniform under high v. low SR. Bites/step, bite weight and daily intake were greater under low than high SR. Results suggest that the SR affects grazing behaviour of goats on cultivated pasture, and identifying an optimal SR is critical for increasing bite weight and intake.
The Maia Detector Journey: Development, Capabilities and Applications
C G Ryan, D P Siddons, R Kirkham, A J Kuczewski, P A Dunn, G De Geronimo, A. Dragone, Z Y Li, G F Moorhead, M Jensen, D J Paterson, M D de Jonge, D L Howard, R Dodanwela, G A Carini, R Beuttenmuller, D Pinelli, L Fisher, R M Hough, A Pagès, S A James, P Davey
Journal: Microscopy and Microanalysis / Volume 24 / Issue S1 / August 2018
Published online by Cambridge University Press: 01 August 2018, pp. 720-721
Print publication: August 2018
Experimental platform for the investigation of magnetized-reverse-shock dynamics in the context of POLAR
HPL Laboratory Astrophysics
B. Albertazzi, E. Falize, A. Pelka, F. Brack, F. Kroll, R. Yurchak, E. Brambrink, P. Mabey, N. Ozaki, S. Pikuz, L. Van Box Som, J. M. Bonnet-Bidaud, J. E. Cross, E. Filippov, G. Gregori, R. Kodama, M. Mouchet, T. Morita, Y. Sakawa, R. P. Drake, C. C. Kuranz, M. J.-E. Manuel, C. Li, P. Tzeferacos, D. Lamb, U. Schramm, M. Koenig
Journal: High Power Laser Science and Engineering / Volume 6 / 2018
Published online by Cambridge University Press: 16 July 2018, e43
The influence of a strong external magnetic field on the collimation of a high Mach number plasma flow and its collision with a solid obstacle is investigated experimentally and numerically. The laser irradiation ( $I\sim 2\times 10^{14}~\text{W}\cdot \text{cm}^{-2}$ ) of a multilayer target generates a shock wave that produces a rear side plasma expanding flow. Immersed in a homogeneous 10 T external magnetic field, this plasma flow propagates in vacuum and impacts an obstacle located a few mm from the main target. A reverse shock is then formed with typical velocities of the order of 15–20 $\pm$ 5 km/s. The experimental results are compared with 2D radiative magnetohydrodynamic simulations using the FLASH code. This platform allows investigating the dynamics of reverse shock, mimicking the processes occurring in a cataclysmic variable of polar type.
Rabbit SLC15A1, SLC7A1 and SLC1A1 genes are affected by site of digestion, stage of development and dietary protein content
L. Liu, H. Liu, L. Ning, F. Li
Journal: animal / Volume 13 / Issue 2 / February 2019
Published online by Cambridge University Press: 22 June 2018, pp. 326-332
Print publication: February 2019
Peptide transporter 1 (SLC15A1, PepT1), excitatory amino acid transporter 3 (SLC1A1, EAAT3) and cationic amino acid transporter 1 (SLC7A1, CAT1) were identified as genes responsible for the transport of small peptides and amino acids. The tissue expression pattern of rabbit SLC15A1, SLC7A1 and SLC1A1 across the digestive tract remains unclear. The present study investigated SLC15A1, SLC7A1 and SLC1A1 gene expression patterns across the digestive tract at different stages of development and in response to dietary protein levels. Real-time PCR results indicated that the SLC15A1, SLC7A1 and SLC1A1 genes were expressed throughout the rabbits' entire development and in all tested rabbit digestive sites, including the stomach, duodenum, jejunum, ileum, colon and cecum. Furthermore, SLC7A1 and SLC1A1 mRNA expression occurred in a tissue-specific and time-associated manner, suggesting the distinct transport ability of amino acids in different tissues and at different developmental stages. The most highly expressed levels of all three genes were in the duodenum, ileum and jejunum in all developmental stages. All increased after lactation. With increased dietary protein levels, SLC7A1 mRNA levels in the small intestine and SLC1A1 mRNA levels in the duodenum and ileum exhibited a significant decreasing trend. Moreover, rabbits fed a normal level of protein had the highest levels of SLC15A1 mRNA in the duodenum and jejunum (P<0.05). In conclusion, gene mRNA expression differed across sites and with development, suggesting time- and site-related differences in peptide and amino acid absorption in rabbits. The effects of dietary protein on expression of the three genes were also site specific.
Induction of nuclear factor-κB signal-mediated apoptosis and autophagy by reactive oxygen species is associated with hydrogen peroxide-impaired growth performance of broilers
X. Chen, R. Gu, L. Zhang, J. Li, Y. Jiang, G. Zhou, F. Gao
Journal: animal / Volume 12 / Issue 12 / December 2018
Published online by Cambridge University Press: 03 May 2018, pp. 2561-2570
The oxidative study has always been particularly topical in poultry science. However, little information about the occurrence of cellular apoptosis and autophagy through the reactive oxygen species (ROS) generation in nuclear factor-κB (NF-κB) signal pathway was reported in the liver of broilers exposed to hydrogen peroxide (H2O2). So we investigated the change of growth performance of broilers exposed to H2O2 and further explored the occurrence of apoptosis and autophagy, as well as the expression of NF-κB in these signaling pathways in the liver. A total of 320 1-day-old Arbor Acres male broiler chickens were raised on a basal diet and randomly divided into five treatments which were arranged as non-injected treatment (Control), physiological saline (0.75%) injected treatment (Saline) and H2O2 treatments (H2O2(0.74), H2O2(1.48) and H2O2(2.96)) received an intraperitoneal injection of H2O2 with 0.74, 1.48 and 2.96 mM/kg BW. The results showed that compared to those in the control and saline treatments, 2.96 mM/kg BW H2O2-treated broilers exhibited significantly higher feed/gain ratio at 22 to 42 days and 1 to 42 days, ROS formation, the contents of oxidation products, the mRNA expressions of caspases (3, 6, 8), microtubule-associated protein 1 light chain 3 (LC3)-II/LC3-I, autophagy-related gene 6, Bcl-2 associated X and protein expressions of total caspase-3 and total LC3-II, and significantly lower BW gain at 22 to 42 days and 1 to 42 days, the activities of total superoxide dismutase and glutathione peroxidase, the expression of NF-κB in the liver. Meanwhile, significantly higher feed/gain ratio at 1 to 42 days, ROS formation, the contents of protein carbonyl and malondialdehyde, the mRNA expression of caspase-3 and the protein expressions of total caspase-3 and total LC3-II, as well as significantly lower BW gain at 22 to 42 days and 1 to 42 days were observed in broilers received 1.48 mM/kg BW H2O2 treatment than those in control and saline treatments. These results indicated that oxidative stress induced by H2O2 had a negative effect on histomorphology and redox status in the liver of broilers, which was associated with a decline in growth performance of broilers. This may attribute to apoptosis and autophagy processes triggered by excessive ROS that suppress the NF-κB signaling pathway.
Germination vigour difference of superior and inferior rice grains revealed by physiological and gene expression studies
Y. F. Zhao, H. Z. Sun, H. L. Wen, Y. X. Du, J. Zhang, J. Z. Li, T. Peng, Q. Z. Zhao
Journal: The Journal of Agricultural Science / Volume 156 / Issue 3 / April 2018
Superior and inferior rice grains have different weights and are located on the upper primary branch and lower secondary branches of the panicle, respectively. To study differences in germination vigour of these two types of grain, a number of factors were investigated from 0 to 48 h of germination. The present study demonstrated that in inferior grains the starch granule structure was looser at 0 h, with full water absorption at 48 h, while in superior grains the structure was tight and dense. Relative water content increased, and dry matter decreased, more rapidly in inferior grains than in superior ones. Abscisic acid and gibberellin levels, as well as α-amylase activity, also changed more rapidly in inferior grains, while soluble sugar content and amylase coding gene expression increased more rapidly in inferior than superior grains during early germination. The expression of OsGAMYB was higher in inferior grains at 24 h but higher in superior grains at 48 h. The phenotypic index of seedlings was higher in seedlings from superior grains at the two-leaf stage. However, the thousand-grain weight and yield per plant in superior and inferior plants showed no significant difference at harvest. The present study indicates that inferior grains germinate faster than superior ones in the early germination stage. Although inferior grains produced weaker seedlings, it is worthwhile using them in rice production due to their comparative yield potential over that of superior grains.
Studies of the relationship between rice stem composition and lodging resistance
M. Y. Gui, D. Wang, H. H. Xiao, M. Tu, F. L. Li, W. C. Li, S. D. Ji, T. X. Wang, J. Y. Li
Published online by Cambridge University Press: 25 May 2018, pp. 387-395
Plant height and lodging resistance can affect rice yield significantly, but these traits have always conflicted in crop cultivation and breeding. The current study aimed to establish a rapid and accurate plant type evaluation mechanism to provide a basis for breeding tall but lodging-resistant super rice varieties. A comprehensive approach integrating plant anatomy and histochemistry was used to investigate variations in flexural strength (a material property, defined as the stress in a material just before it yields in a flexure test) of the rice stem and the lodging index of 15 rice accessions at different growth stages to understand trends in these parameters and the potential factors influencing them. Rice stem anatomical structure was observed and the lignin content the cell wall was determined at different developmental stages. Three rice lodging evaluation models were established using correlation analysis, multivariate regression and artificial radial basis function (RBF) neural network analysis, and the results were compared to identify the most suitable model for predicting optimal rice plant types. Among the three evaluation methods, the mean residual and relative prediction errors were lowest using the RBF network, indicating that it was highly accurate and robust and could be used to establish a mathematical model of the morphological characteristics and lodging resistance of rice to identify optimal varieties.
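As a rough illustration of the third modelling approach mentioned in this abstract, the sketch below fits a generic Gaussian radial-basis-function network (centres at the training samples, ridge-regularised least-squares output weights) to toy data. It is not the authors' implementation, and the trait names in the comments are placeholders.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF features: phi_ij = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf_network(X, y, sigma=1.0, ridge=1e-6):
    """Least-squares output weights for an RBF network with centres at X."""
    Phi = rbf_design(X, X, sigma)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(X)), Phi.T @ y)
    return X.copy(), w

def predict_rbf_network(centers, w, X_new, sigma=1.0):
    return rbf_design(X_new, centers, sigma) @ w

# Toy usage: predict a "lodging index" from two made-up stem traits
# (e.g. flexural strength and lignin content); the data are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=30)
centers, w = fit_rbf_network(X, y, sigma=1.5)
print(predict_rbf_network(centers, w, X[:3], sigma=1.5))
```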
Isolation, purification and identification of the active compound of turmeric and its potential application to control cucumber powdery mildew
W. J. Fu, J. Liu, M. Zhang, J. Q. Li, J. F. Hu, L. R. Xu, G. H. Dai
Cucumber powdery mildew is a destructive foliar disease caused by Podosphaera xanthii (formerly known as Sphaerotheca fuliginea) that substantially damages the yield and quality of crops. The control of this disease primarily involves the use of chemical pesticides that cause serious environmental problems. Currently, numerous studies have indicated that some plant extracts or products potentially have the ability to act as natural pesticides to control plant diseases. It has been reported that turmeric (Curcuma longa L.) and its extract can be used in agriculture due to their insecticidal and fungicidal properties. However, the most effective fungicidal component of this plant is still unknown. In the current study, the crude extract of C. longa L. was found to have a fungicidal effect against P. xanthii. Afterwards, eight fractions (Fr.1–Fr.8) were gradually separated from the crude extract by column chromatography. Fraction 1 had the highest fungicidal effect against this pathogen among the eight fractions. The active compound, (+)-(S)-ar-turmerone, was separated from Fr 1 by semi-preparative high-performance liquid chromatography and identified based on its 1H nuclear magnetic resonance (NMR) and 13C NMR spectrum data. The EC50 value of (+)-(S)-ar-turmerone was found to be 28.7 µg/ml. The compound also proved to have a curative effect. This is the first study to report that the compound (+)-(S)-ar-turmerone has an effect on controlling this disease. These results provide a basis for developing a new phytochemical fungicide from C. longa L. extract.
Effects of tributyrin supplementation on short-chain fatty acid concentration, fibrolytic enzyme activity, nutrient digestibility and methanogenesis in adult Small Tail ewes
Q. C. Ren, J. J. Xuan, Z. Z. Hu, L. K. Wang, Q. W. Zhan, S. F. Dai, S. H. Li, H. J. Yang, W. Zhang, L. S. Jiang
In vivo and in vitro trials were conducted to assess the effects of tributyrin (TB) supplementation on short-chain fatty acid (SFCA) concentrations, fibrolytic enzyme activity, nutrient digestibility and methanogenesis in adult sheep. Nine 12-month-old ruminally cannulated Small Tail ewes (initial body weight 55 ± 5.0 kg) without pregnancy were used for the in vitro trial. In vitro substrate made to offer TB at 0, 2, 4, 6 and 8 g/kg on a dry matter (DM) basis was incubated by ruminal microbes for 72 h at 39°C. Forty-five adult Small Tail ewes used for the in vivo trial were randomly assigned to five treatments with nine animals each for an 18-d period according to body weight (55 ± 5.0 kg). Total mixed ration fed to ewes was also used to offer TB at 0, 2, 4, 6 and 8 g/kg on a DM basis. The in vitro trial showed that TB supplementation linearly increased apparent digestibility of DM, crude protein, neutral detergent fibre and acid detergent fibre, and enhanced gas production and methane emissions. The in vivo trial showed that TB supplementation decreased DM intake, but enhanced ruminal fermentation efficiency. Both in vitro and in vivo trials showed that TB supplementation enhanced total SFCA concentrations and carboxymethyl cellulase activity. The results indicate that TB supplementation might exert advantage effects on rumen microbial metabolism, despite having an enhancing effect on methanogenesis.
Effects of in ovo feeding of l-arginine on breast muscle growth and protein deposition in post-hatch broilers
L. L. Yu, T. Gao, M. M. Zhao, P. A. Lv, L. Zhang, J. L. Li, Y. Jiang, F. Gao, G. H. Zhou
Journal: animal / Volume 12 / Issue 11 / November 2018
Published online by Cambridge University Press: 26 February 2018, pp. 2256-2263
Print publication: November 2018
In ovo feeding (IOF) of l-arginine (Arg) can affect growth performance of broilers, but the response of IOF of Arg on breast muscle growth is unclear, and the mechanism involved in protein deposition remains unknown. Hense, this experiment was conducted to evaluate the effects of IOF of Arg on breast muscle growth and protein-deposited signalling in post-hatch broilers. A total of 720 fertile eggs were collected from 34-week-old Arbor Acres breeder hens and distributed to three treatments: (1) non-injected control group; (2) 7.5 g/l (w/v) NaCl diluent-injected control group; (3) 0.6 mg Arg/egg solution-injected group. At 17.5 days of incubation, fertile eggs were injected 0.6 ml solutions into the amnion of the injected groups. Upon hatching, 80 male chicks were randomly assigned to eight replicates of 10 birds each and fed ad libitum for 21 days. The results indicated that IOF of Arg increased relative breast muscle weight compared with those of control groups at hatch, 3-, 7- and 21-day post-hatch (P<0.05). In the Arg-injected group, the plasma total protein and albumen concentrations were higher at 7- and 21-day post-hatch than those of control groups (P<0.05). The alanine aminotransferase activity in Arg group was higher at hatch than that of control groups (P<0.05). The levels of triiodothyronine at four time points and thyroxine hormones at hatch, 7- and 21-day post-hatch in Arg group were higher than those of control groups (P<0.05). In addition, IOF of Arg increased the amino acid concentrations of breast muscle at hatch, 7- and 21-day post-hatch (P<0.05). In ovo feeding of Arg also enhanced mammalian target of rapamycin, ribosomal protein S6 kinase-1 and eIF4E-bindingprotein-1 messenger RNA expression levels at hatch compared with those of control groups (P<0.05). It was concluded that IOF of Arg treatment improved breast muscle growth, which might be associated with the enhancement of protein deposition.
Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes
Gravitational Wave Astronomy
I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. Zovaro
Journal: Publications of the Astronomical Society of Australia / Volume 34 / 2017
Published online by Cambridge University Press: 20 December 2017, e069
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.
Acetate alters the process of lipid metabolism in rabbits
C. Fu, L. Liu, F. Li
Journal: animal / Volume 12 / Issue 9 / September 2018
Published online by Cambridge University Press: 04 December 2017, pp. 1895-1902
An experiment was conducted to investigate the effect of acetate treatment on lipid metabolism in rabbits. New Zealand Rabbits (30 days, n=80) randomly received a subcutaneous injection (2 ml/injection) of 0, 0.5, 1.0 or 2.0 g/kg per day body mass acetate (dissolved in saline) for 4 days. Our results showed that acetate induced a dose-dependent decrease in shoulder adipose (P<0.05). Although acetate injection did not alter the plasma leptin and glucose concentration (P>0.05), acetate treatment significantly decreased the plasma adiponectin, insulin and triglyceride concentrations (P<0.05). In adipose, acetate injection significantly up-regulated the gene expression of peroxisome proliferator-activated receptor gamma (PPARγ), CCAAT/enhancer-binding protein α (C/EBPα), differentiation-dependent factor 1 (ADD1), adipocyte protein 2 (aP2), carnitine palmitoyltransferase 1 (CPT1), CPT2, hormone-sensitive lipase (HSL), G protein-coupled receptor (GPR41), GPR43, adenosine monophosphate-activated protein kinase α1 (AMPKα1), adiponectin receptor (AdipoR1), AdipoR2 and leptin receptor. In addition, acetate treatment significantly increased the protein levels of phosphorylated AMPKα, extracellular signaling-regulated kinases 1 and 2 (ERK1/2), p38 mitogen-activated protein kinase (P38 MAPK) and c-jun amino-terminal kinase (JNK). In conclusion, acetate up-regulated the adipocyte-specific transcription factors (PPARγ, C/EBPα, aP2 and ADD1), which were associated with the activated GPR41/43 and MAPKs signaling. Meanwhile, acetate decreased fat content via the upregulation of the steatolysis-related factors (HSL, CPT1 and CPT2), and AMPK signaling may be involved in the process.
Impact of an intervention programme on knowledge, attitudes and practices of population regarding severe fever with thrombocytopenia syndrome in endemic areas of Lu'an, China
Y. LYU, C.-Y. HU, L. SUN, W. QIN, P.-P. XU, J. SUN, J.-Y. HU, Y. YANG, F.-L. LI, H.-W. CHANG, X.-D. LI, S.-Y. XIE, K.-C. LI, X.-X. HUANG, F. DING, X.-J. ZHANG
Journal: Epidemiology & Infection / Volume 146 / Issue 1 / January 2018
Knowledge, attitudes and practices (KAP) of the population regarding severe fever with thrombocytopenia syndrome (SFTS) in endemic areas of Lu'an in China were assessed before and after an intervention programme. The pre-intervention phase was conducted using a sample of 425 participants from the 12 selected villages with the highest rates of endemic SFTS infection. A predesigned interview questionnaire was used to assess KAP. Subsequently, an intervention programme was designed and applied in the selected villages. KAP was re-assessed for each population in the selected villages using the same interview questionnaire. Following 2 months of the programme, 339 participants had completed the re-assessed survey. The impact of the intervention programme was evaluated using suitable statistical methods. A significant increase in the KAP and total KAP scores was noted following the intervention programme, whereas the proportion of correct knowledge, the positive attitudes and the effective practices toward SFTS of respondents increased significantly. The intervention programme was effective in improving KAP level of SFTS in populations that were resident in endemic areas. | CommonCrawl |
Journal of Therapeutic Ultrasound
Theoretical investigation of transgastric and intraductal approaches for ultrasound-based thermal therapy of the pancreas
Serena J. Scott1,
Matthew S. Adams1,2,
Vasant Salgaonkar1,
F. Graham Sommer3 &
Chris J. Diederich1,2
Journal of Therapeutic Ultrasound volume 5, Article number: 10 (2017)
The goal of this study was to theoretically investigate the feasibility of intraductal and transgastric approaches to ultrasound-based thermal therapy of pancreatic tumors, and to evaluate possible treatment strategies.
This study considered ultrasound applicators with 1.2 mm outer diameter tubular transducers, which are inserted into the tissue to be treated by an endoscopic approach, either via insertion through the gastric wall (transgastric) or within the pancreatic duct lumen (intraductal). 8 patient-specific, 3D, transient, biothermal and acoustic finite element models were generated to model hyperthermia (n = 2) and ablation (n = 6), using sectored (210°–270°, n = 4) and 360° (n = 4) transducers for treatment of 3.3–17.0 cm3 tumors in the head (n = 5), body (n = 2), and tail (n = 1) of the pancreas. A parametric study was performed to determine appropriate treatment parameters as a function of tissue attenuation, blood perfusion rates, and distance to sensitive anatomy.
Parametric studies indicated that pancreatic tumors up to 2.5 or 2.7 cm diameter can be ablated within 10 min with the transgastric and intraductal approaches, respectively. Patient-specific simulations demonstrated that 67.1–83.3% of the volumes of four sample 3.3–11.4 cm3 tumors could be ablated within 3–10 min using transgastric or intraductal approaches. 55.3–60.0% of the volume of a large 17.0 cm3 tumor could be ablated using multiple applicator positions within 20–30 min with either transgastric or intraductal approaches. 89.9–94.7% of the volume of two 4.4–11.4 cm3 tumors could be treated with intraductal hyperthermia. Sectored applicators are effective in directing acoustic output away from and preserving sensitive structures. When acoustic energy is directed towards sensitive structures, applicators should be placed at least 13.9–14.8 mm from major vessels like the aorta, 9.4–12.0 mm from other vessels, depending on the vessel size and flow rate, and 14 mm from the duodenum.
This study demonstrated the feasibility of generating shaped or conformal ablative or hyperthermic temperature distributions within pancreatic tumors using transgastric or intraductal ultrasound.
Pancreatic cancer is a particularly severe disease, with a 5-year survival rate of about 6% in the United States [1]. It is the fourth most common cause of cancer-related deaths, causing 39,590 deaths per year in the United States [1]. Although surgery provides the best chance for cure [2, 3], most cases have progressed to advanced or locally advanced disease by the time of diagnosis [4, 5], and over 80% of patients are not candidates for resection [2, 5]. For patients who are not surgical candidates, prolongation of survival and palliative relief of symptoms are the major goals of medical treatment [3, 4, 6], with chemotherapy and radiotherapy as the most common interventions [2, 4]. Palliative care for advanced disease may include surgery, radiotherapy, biliary stenting, gastroduodenal stenting, analgesia, celiac plexus blockage, and prophylactic anticoagulants to care for conditions such as pain, jaundice, gastrointestinal obstruction, and venous embolism [2, 4, 6]. Thermal ablation has been shown to cause a reduction in tumor volume, to lower pain, and in some studies, to prolong survival [5, 7, 8].
Various ablative modalities have been considered for care of advanced pancreatic cancer, including RF ablation (RFA), microwave ablation, cryoablation, photodynamic therapy, high intensity focused ultrasound (HIFU), and irreversible electroporation, with HIFU and RFA receiving the most attention [5]. During thermal ablation, it is key to preserve sensitive tissues such as the duodenum and the peripancreatic vasculature, so a margin of tumor tissue is often left viable [7, 9]. In some studies, sensitive tissues were flushed with cold saline, and the vena cava could be covered with wet gauze during intraoperative interventions to prevent thermal injury [7, 9]. RFA of pancreatic cancer is usually performed during open surgery [9], but has also been performed using endoscopic approaches [10, 11]. The RF needle can be inserted through the working channel of an echoendoscope and into the pancreas through the wall of the stomach or duodenum under ultrasound guidance [10]. This endoscopic approach is less invasive than surgical and percutaneous approaches, and hence should result in fewer complications, but while the stomach and duodenum could potentially be water-cooled, this does not allow for active cooling of most sensitive tissues. Computed tomography (CT), magnetic resonance (MR), or ultrasound guidance and treatment monitoring may be applied to improve outcomes and reduce complications during thermal ablation [5, 8]. RF and other ablative therapies can be performed alongside the placement of biliary or duodenal stents as necessary to reduce complications and procedure time [5]. HIFU can be performed completely noninvasively with MR or ultrasound guidance, though limitations include bowel gas obstruction of the sonication path, which may preclude treatment of some targets, and the need to compensate for respiratory motion [8].
Endoscopic ultrasound applicators placed in the stomach or duodenum have previously been developed for thermal ablation of pancreatic tumors, applying either HIFU [12] or unfocused high intensity ultrasound ablation [13] techniques. Since transgastric and intraductal probes are advanced into the tumor itself, there are fewer concerns about breathing artifacts or motion of the applicator along the stomach wall than with other endoscopic ultrasound techniques. Transgastric and intraductal applicators can also access targets farther from the stomach wall than unfocused endoscopic probes placed within the stomach or duodenum. Catheter-based ultrasound ablation is also generally faster than HIFU, in which a small focal zone is scanned over the tumor, as the heated region covers a larger volume [14].
Small-diameter ultrasound applicators with tubular transducers could potentially allow for thermal therapy of pancreatic tumors with fewer limitations than HIFU or RF ablation. Such applicators have been applied in the past in the prostate, liver, bone, heart, and brain using catheter or balloon-based cooling, and are in clinical use for hyperthermia [14–21]. These flexible applicators are advanced directly into or adjacent to the tissue to be treated, and achieve 15–21 mm of thermal penetration within 5–10 min with spatial control along the length and angle of the applicator [15, 22–24]. For applications in the pancreas, the ultrasound applicator could be advanced through the working channel of an echoendoscope, then through the stomach wall, duodenal wall, pancreatic duct, or biliary duct, and directly into the tumor under ultrasound imaging guidance. Intraductal ablation can be done using techniques similar to those applied in prior treatments of the pancreas and biliary ducts using RFA [25–27] or planar ultrasound applicators [28]. Transgastric ablation could be performed using techniques similar to those of needle-based RF ablation [29, 30] and catheter-based cryotherm ablation, which uses a 1.8 mm probe [10, 31]. Sectored ultrasound transducers can be used for sparing of adjacent sensitive tissues, by providing directionality that RF, microwave, and laser ablation lack.
The goal of this study is to apply theoretical models to evaluate the feasibility of ultrasound-based thermal therapy using endoscopic approaches to deliver interstitial or catheter-based ultrasound devices specific for this application. Theoretical modeling techniques that have been previously tested and validated broadly in both bone and soft tissue [16, 17, 32, 33] are applied in this theoretical study to perform a preliminary investigation. Numerical models of several patient cases are created to assess various approaches, applicator configurations, and treatment parameters. Both ablation and hyperthermia, which has been shown to improve the effects of chemotherapy and radiation in the treatment of pancreatic cancer [34, 35], are considered. Parametric studies are performed to determine the necessary safety margins for preservation of major blood vessels and the duodenum, and to investigate the effects of the acoustic absorption coefficient and blood perfusion rates on treatment outcomes.
Endoscopic ultrasound devices
The ultrasound applicators considered in this study are to be deployed through the working channels of endoscopic probes, which are often up to 3.2 mm inner diameter (ID) [36]. Two different approaches are applied: direct insertion of an ultrasound applicator into the tumor through the wall of the stomach or duodenum, and insertion of a more flexible applicator through the pancreatic duct. This can be performed using routes commonly employed during endoscopic ultrasound-guided biopsy and pancreatic duct stenting, and techniques previously employed for transgastric ablation of the pancreas [10, 29–31]. Both hyperthermia and ablation are considered for the intraductal approach. Only ablation is considered for the transgastric approach, to balance the risks associated this slightly more invasive technique with a more thorough and direct treatment.
Both types of applicators consist of 2–3 tubular ultrasound transducers (7 MHz; 1.2 mm outer diameter (OD); 7.5, 10, or 15 mm long) mounted on the distal tip of an applicator. This device configuration is modeled after those commonly used in similar interstitial ultrasound applicators [14, 33, 37], with a slightly smaller diameter to minimize damage to the stomach wall and for easier maneuverability within the pancreatic duct. These transducers, which radiate radially outward, can be sectored to attain directional control of the acoustic output, such that the transducers radiate only to one side [37]. 210°, 270°, and 360° active sectors were considered in this study. The applicators are water-cooled for acoustic coupling and to prevent thermal damage to the transducers.
The transgastric applicators are inserted into the tumor through the stomach or duodenal wall, and are deployed within a water-cooled catheter (Fig. 1a), similar to catheter-based ultrasound applicators designed for percutaneous insertion [37, 38]. The plastic catheter (1.6 mm ID, 2.11 mm OD) around the transgastric applicator is modeled after the cooling flow lumen and wall thicknesses of catheters for interstitial applicators [39, 40], and is assumed to have an ultrasound attenuation coefficient of 43.9 Np/m/MHz [39]. Intraductal applicators would be advanced through the biliary or pancreatic duct in order to access the tumor. The transducers are surrounded by a single distensible water-cooled balloon (2.2 mm OD, 12.7 μm thick, Fig. 1b), similar to that used in transurethral prostate ablation [41]. The applicator is designed to be small enough to fit within the duct [42] while maintaining a layer of cooling water around the transducers. It is similar in size to pancreatic stents [43] and, when uninflated, intraductal ultrasound imaging probes [44]. The balloon around the intraductal applicator is assumed to be so thin as to cause negligible acoustic attenuation.
Diagram of transgastric (a) and intraductal (b) applicators for ultrasound-based thermal therapy of the pancreas. Transgastric applicators are operated from within a water-cooled catheter (1.6 mm ID, 2.11 mm OD), while intraductal applicators have a thin water-cooled balloon (2.2 mm OD) around the transducers. 2–3 transducers can be mounted on each applicator, with an outer diameter of 1.2 mm and a length of 7.5, 10, or 15 mm
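As a quick sanity check on the catheter parameters above, the snippet below estimates the fraction of acoustic intensity transmitted through one catheter wall at 7 MHz using the quoted 43.9 Np/m/MHz attenuation. The wall thickness is inferred from the stated ID and OD, so treat the result as an estimate rather than a value from the study.

```python
import math

f_mhz = 7.0
mu_catheter = 43.9 * f_mhz                  # Np/m at 7 MHz (amplitude attenuation)
wall_thickness = (2.11e-3 - 1.6e-3) / 2.0   # m, single wall from 1.6 mm ID / 2.11 mm OD
transmitted = math.exp(-2.0 * mu_catheter * wall_thickness)  # intensity factor
print(f"fraction of intensity transmitted through one catheter wall: {transmitted:.2f}")
```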
Acoustic and biothermal simulations
Heat transfer through physiological tissues was modeled using Pennes bioheat equation [45]:
$$ \rho c\frac{dT}{dt} = \nabla \cdot \left( k\nabla T \right) - \omega c_b \left( T-T_b \right) + Q $$
where ρ is density (kg/m3), c is specific heat (J/kg/°C), T is temperature (°C), t is time (s), k is thermal conductivity (W/m/°C), ω is the blood perfusion rate (kg/m3/s), Q is the heat deposition due to ultrasound (W/m3), the subscript b refers to blood, and capillary blood temperature T_b is assumed to be 37 °C. To approximate the effects of heating-induced microvascular stasis, which occurs at a thermal dose of around 300 EM43°C [16], blood perfusion rates in all tissues were assumed to reduce to zero at a temperature of 54 °C, which was found in this computational study to correspond to 300 EM43°C at the acoustic intensities and durations typically applied. The material properties of the various tissues considered are specified in Table 1.
Table 1 Material properties of tissues. ƒ represents frequency, in units of MHz
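The 300 EM43°C stasis point mentioned above refers to a cumulative-equivalent-minutes thermal dose. The helper below is a generic Sapareto–Dewey-style accumulation written for illustration (it is not code from the study); the 90-s ramp in the example is an arbitrary heating history chosen to show a dose of roughly that magnitude being reached near 54 °C.

```python
import numpy as np

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 °C for a sampled temperature history."""
    temps = np.asarray(temps_c, dtype=float)
    R = np.where(temps >= 43.0, 0.5, 0.25)     # standard breakpoint at 43 °C
    return float(np.sum(R ** (43.0 - temps)) * dt_s / 60.0)

# Example: linear ramp from 37 °C to 54 °C over 90 s, sampled once per second.
t = np.arange(0.0, 90.0, 1.0)
history = 37.0 + (54.0 - 37.0) * t / 90.0
print(f"thermal dose when 54 °C is reached: {cem43(history, 1.0):.0f} EM43°C")
```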
Tissues that reached 52 °C were considered to be ablated [46], and hyperthermia simulations were performed with an aim of maintaining temperatures of 40–47 °C throughout the target volume. Sensitive tissues, including the stomach wall, intestines, blood vessels, liver, kidneys, spleen, and bones, were to be kept below a safety threshold of 45 °C [47].
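To make the heat-transfer model concrete, below is a minimal one-dimensional explicit finite-difference sketch of Eq. (1) that includes the 54 °C perfusion shut-off and the 52 °C ablation threshold described above. It is a planar toy problem with placeholder property values, not the 3D finite element model or the Table 1 properties used in the study.

```python
import numpy as np

def pennes_1d(Q, dx=0.5e-3, dt=0.05, t_end=600.0,
              rho=1050.0, c=3600.0, k=0.5, w0=5.0, cb=3600.0, Tb=37.0):
    """Explicit FTCS update of rho*c*dT/dt = k*d2T/dx2 - w*cb*(T - Tb) + Q."""
    T = np.full(Q.size, 37.0)
    w = np.full(Q.size, w0)                 # perfusion, kg/m^3/s
    for _ in range(int(t_end / dt)):
        w[T >= 54.0] = 0.0                  # microvascular stasis above 54 °C
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        T = T + dt / (rho * c) * (k * lap - w * cb * (T - Tb) + Q)
        T[0], T[-1] = 37.0, 37.0            # far-field Dirichlet boundaries
    return T

x = np.arange(0.0, 0.04, 0.5e-3)            # 4 cm slab of tissue
Q = np.where((x > 0.01) & (x < 0.02), 5.0e5, 0.0)   # crude 1 cm heated zone, W/m^3
T = pennes_1d(Q)
print(f"nodes at or above the 52 °C ablation threshold: {(T >= 52.0).sum()} of {T.size}")
```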
The acoustic heat deposition from a tubular ultrasound source can be modeled as a radially radiating intensity profile well-collimated to the length and sector angle of the transducer [33, 41]:
$$ Q = 2\alpha I_s \frac{r_t}{r}\, e^{-2\int_{r_t}^{r}\mu \, dr'} $$
where α is the ultrasound absorption coefficient (Np/m), I_s is the acoustic intensity on the transducer surface (W/m2), r_t is the transducer radius (m), r is the radial distance from the central axis of the transducer (m), and μ is the ultrasound attenuation coefficient (Np/m). The ultrasound absorption coefficient is assumed to be equivalent to the ultrasound attenuation coefficient, with scattered energy locally absorbed.
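A direct implementation of Eq. (2) for a homogeneous medium (so the attenuation integral reduces to μ(r − r_t)) might look like the following. The transducer radius matches the 1.2 mm OD devices described above, while the intensity and attenuation values are illustrative assumptions rather than the study's settings.

```python
import numpy as np

def tubular_heat_deposition(r, I_s, r_t, alpha, mu):
    """Q(r) = 2*alpha*I_s*(r_t/r)*exp(-2*mu*(r - r_t)), homogeneous tissue."""
    r = np.asarray(r, dtype=float)
    Q = 2.0 * alpha * I_s * (r_t / r) * np.exp(-2.0 * mu * (r - r_t))
    return np.where(r >= r_t, Q, 0.0)       # no deposition inside the transducer

f_mhz = 7.0
alpha = 5.0 * f_mhz        # Np/m, assumed soft-tissue absorption of ~5 Np/m/MHz
mu = alpha                 # attenuation = absorption (scattering locally absorbed)
r_t = 0.6e-3               # 1.2 mm OD transducer -> 0.6 mm radius
I_s = 2.0e4                # assumed 2 W/cm^2 surface intensity, in W/m^2
r = np.linspace(r_t, 0.02, 200)
Q = tubular_heat_deposition(r, I_s, r_t, alpha, mu)
idx = int(np.argmin(np.abs(r - 5e-3)))
print(f"Q at ~5 mm from the applicator axis: {Q[idx]:.3e} W/m^3")
```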
Heat transfer in tissue was modeled using COMSOL Multiphysics 4.4 (COMSOL, Inc., Burlington, MA) in conjunction with MATLAB (Mathworks, Inc., Natick, MA). An initial tissue temperature of 37 °C was assumed for all simulations. A Dirichlet boundary condition constrained the outermost boundaries of the tissue volume, far from the heated region, to 37 °C. Blood flow through larger vessels [48], as well as water cooling of the catheter and balloon, are modeled using convective boundary conditions:
$$ -\widehat{n}\cdot\left(-k\nabla T\right) = h\left(T_f - T\right) $$
where \( \widehat{n} \) is the unit vector normal to the vessel, catheter, or balloon surface, h is the heat transfer coefficient (W/m2/°C), and T_f is the fluid temperature, as specified in Table 2. To simplify the modeling of the vessels in the patient-specific study, which had non-constant diameters and complex geometries, the vessels were assigned a heat transfer coefficient based on size and flow rate. Unique heat transfer coefficients were calculated for the aorta and vena cava. In the remaining vasculature the heat transfer coefficient was calculated to be approximately 750 W/m2/°C for large vessels such as the portal vein, superior mesenteric vein, etc., and 1000 W/m2/°C for smaller vessels such as the splenic artery, superior mesenteric artery, hepatic artery, etc.
Table 2 Heat transfer coefficients and fluid temperatures for convective flow boundaries
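One simple way to impose this convective (Robin) condition in a finite-difference discretization is a steady flux balance at the surface node. The sketch below is only illustrative (it is not how COMSOL treats the boundary internally), and the values of h and T_f are assumed stand-ins rather than specific Table 2 entries.

```python
# Sketch of a surface-node flux balance for the convective (Robin) boundary
# condition -n.(-k grad T) = h (T_f - T) at a water-cooled catheter surface.
# h, T_f, and the grid spacing are illustrative assumptions.
import numpy as np

k = 0.5        # tissue thermal conductivity (W/m/C)
h = 1000.0     # heat transfer coefficient at the cooled surface (W/m^2/C), assumed
T_f = 22.0     # circulating cooling-water temperature (C), assumed
dx = 1.0e-4    # grid spacing (m)

def apply_convective_boundary(T):
    """Set the surface node so that conduction from the tissue balances convective loss.

    Balance k (T[1] - T[0]) / dx = h (T[0] - T_f), solved for the surface node T[0].
    """
    T[0] = (k / dx * T[1] + h * T_f) / (k / dx + h)
    return T

T = np.full(50, 45.0)                 # heated tissue next to the cooled catheter
T = apply_convective_boundary(T)
print("surface node temperature: %.1f C" % T[0])
```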
Convergence tests were performed to select mesh sizes and time steps small enough for accuracy. The finite element mesh size was limited to a maximum of 0.45 mm on the applicator, and no more than 7–7.5 mm overall, with a finer mesh on highly heated volumes, a wider mesh on the outer tissue boundaries, and gradual transitions in mesh size over space. An implicit transient solver with variable time stepping was used in ablation simulations as necessary in order to track the dynamic spreading of high temperatures over short treatment intervals. Initial time steps were small (<1 s), and gradually increased automatically to no more than 15 s as the solution converged. When determining the amount of time it takes for sensitive tissue to heat to dangerous temperatures in parametric studies, maximum time steps were limited to 2–15 s at time points when the tissue was expected to approach the temperature threshold. Hyperthermia distributions were calculated using a steady-state solver under steady-state conditions, which assume that target temperatures are achieved within a short interval and maintained for a typical 30–60 min treatment session.
Patient-specific models
To investigate approaches for transgastric and intraductal thermal therapy using tubular ultrasound transducers, eight 3D patient-specific models were developed. The 3D models were made by segmenting CT scans and then creating 3D finite element meshes based on individual patient anatomies (Fig. 2). Individual organs and structures, including tumors and blood vessels, were identified. Applicator positions and treatment parameters were selected using an empirical, iterative approach to maximize tumor coverage without damaging sensitive anatomy. Finite element modeling was performed to calculate temperatures throughout the tissue volume. Ablation was modeled using time-dependent simulations, with a proportional integral (PI) controller to determine applied powers, while hyperthermia was modeled at steady state, with constant powers selected empirically.
Process used for creation of 3D patient-specific models
To create the models, CT scans of six patients with tumors (1.9–4.8 cm long along the applicator axis, 3.3–17.0 cm3, Table 3) in the head (n = 4), body (n = 1), or tail (n = 1) of the pancreas were segmented using a combination of manual and semi-automatic techniques. Note the UCSF Institutional Review Board (IRB) considers the use of de-identified data (images used herein) without a key back to the subject as "Not Human Subjects Research", and does not require approval. All organs near each target were selected for segmentation, and could include the tumor, pancreas, pancreatic duct (if visibly distended), duodenal wall, stomach wall, gastrointestinal contents, liver, kidney, spleen, vertebrae, and bowel. Thermally significant blood vessels, such as the aorta, vena cava, portal vein, superior mesenteric vein, splenic vein, renal veins, and superior mesenteric artery, were also segmented. Image segmentation and finite element mesh generation were performed using the Mimics Innovation Suite (Materialise NV, Leuven, Belgium), as illustrated in Fig. 2. The finite element mesh was imported into COMSOL Multiphysics, where acoustic and thermal modeling of ablation and/or hyperthermia was performed.
Table 3 Treatment parameters for patient-specific cases
An iterative approach was employed for each patient case to determine appropriate treatment parameters, such as applicator position, transducer length, transducer sector angle, power, and treatment time. Power levels for hyperthermia were empirically selected to raise a maximum volume of the tumor to 40–47 °C while maintaining critical anatomy, such as the duodenum and blood vessels, under 45 °C. In transient ablation simulations, a PI controller (kp = 0.375 W/°C, ki = 0.003 W/°C/s) was used to maintain maximum temperatures in the tumor volume of 80 or 85 °C, with a lower target temperature used in cases with longer treatment durations and in cases with sensitive tissues in proximity to the most heated regions. For clinical implementation of such a controller, temperature measurements from MR temperature imaging or needle-based temperature probes could be employed. Ablation treatments, which were simulated using a time-dependent model, were considered complete, and treatment was ended, when the full tumor volume reached a lethal temperature of 52 °C, or when sensitive anatomy to be preserved neared 45 °C, whichever came first.
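A minimal sketch of such a proportional-integral power controller, using the gains stated above, is given below. The power limit and the temperature-measurement hook (measure_max_tumor_temperature) are hypothetical placeholders rather than details of the authors' implementation; clinically, the measurement could come from MR thermometry or needle-based probes as noted above.

```python
# Minimal sketch of the proportional-integral power controller described above
# (kp = 0.375 W/C, ki = 0.003 W/C/s), regulating the maximum tumor temperature
# toward a setpoint. The power limit is an assumed value.
class PIPowerController:
    def __init__(self, kp=0.375, ki=0.003, setpoint_C=80.0, p_max=30.0):
        self.kp, self.ki, self.setpoint, self.p_max = kp, ki, setpoint_C, p_max
        self.integral = 0.0

    def update(self, max_temp_C, dt_s):
        error = self.setpoint - max_temp_C          # positive while below the setpoint
        self.integral += error * dt_s
        power = self.kp * error + self.ki * self.integral
        return min(max(power, 0.0), self.p_max)     # clamp to applicator power limits

# Hypothetical usage inside a time-stepping thermal simulation:
# controller = PIPowerController(setpoint_C=80.0)
# for t in time_steps:
#     p_applied = controller.update(measure_max_tumor_temperature(), dt_s)
```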
Transgastric approaches were modeled in cases with an available insertion path from the stomach or duodenum to the tumor that did not traverse any sensitive anatomy. Intraductal approaches were modeled in cases in which most of the tumor volume was within 1.5 cm of the pancreatic or bile duct. In cases in which the tumor is on one side of the bile duct, a transgastric applicator can be placed through the center of the tumor, and an intraductal applicator can use sectored transducers to heat to one side. However, the far side of a tumor adjacent to the duct may be too far from an intraductal applicator for it to be treated effectively, so for this reason, the tumors considered in Cases 6 and 8 were modeled only with a transgastric approach. To simulate realistic applicator positions as delivered through the working channel of an endoscope, the transgastric applicators were inserted through the stomach or duodenal wall at an acute angle to the tangent plane of the wall. In Cases 1, 2, and 8, in which the stomach or duodenum was close to the targeted volume and in danger of thermal damage, the stomach and duodenum were assumed to be filled with 22 °C cooling water 2–5 min before treatment began. This cooling was modeled using an initial temperature condition within the stomach and duodenum of 22 °C, and an initial temperature of 37 °C elsewhere.
Hyperthermia was considered only for tumors near the duodenum that could be treated intraductally using a single applicator position. To warrant the invasiveness of a transgastric approach, only the aggressive treatment of ablation was considered for such cases. The tail of the pancreas was judged to be too far from the duodenum to be accessed endoscopically through the pancreatic duct. To be treated with hyperthermia, which lasts on the order of an hour, the tumor also had to be small enough to treat the full volume using a single applicator position, as multiple positions would require excessive amounts of time.
Parametric studies
Parametric studies were performed to investigate necessary treatment parameters as a function of distance from critical blood vessels, distance from the duodenum, the acoustic absorption coefficient, and blood perfusion rates. Simple geometric models and meshes were created in COMSOL to represent the various tissues, with the same meshing and modeling parameters as the patient-specific models. The applicator was positioned in the center of a large cylinder of soft tissue. A PI controller (kp = 0.375 W/°C, ki = 0.003 W/°C/s) was used to maintain maximum tissue temperatures of 80 °C.
Parametric studies of preservation of blood vessels
A wide variety of critical blood vessels are in close proximity to the pancreas, including but not limited to the aorta, the vena cava, the portal vein, the superior mesenteric vein and artery, the splenic vein and artery, the renal veins and arteries, the common hepatic artery, the right gastroepiploic vein, the celiac artery, and the left gastric artery. A parametric study was performed to evaluate the effect of any errors in estimation of the heat transfer coefficients of the vessels and to determine the distance between the applicator and the vessels necessary for the vessels to be fully preserved (T < 45 °C).
In these 3D models, the blood vessels were represented by an 8 cm high tube oriented parallel to the applicator (Fig. 3a). The transgastric applicator considered had two 1 cm long, 360° transducers, which were positioned in the center of a 12 cm OD, 8 cm high cylinder of tissue surrounding both the applicator and the vessel (Fig. 3a). The aorta and vena cava were represented by 18 mm ID, 22 mm OD hollow cylinders; large vessels such as the portal vein by 11 mm ID, 13 mm OD hollow cylinders, and small vessels such as the superior mesenteric artery by 5 mm ID, 5.8 mm OD hollow cylinders. The distances between the vessels and the applicators reported in this study are measured from the outer surface of the vessel to the center of the applicator.
Diagram of geometries of parametric studies, showing the applicator in the center of a cylinder of tissue. Geometries for studies of blood vessel and duodenal heating include two concentric cylinders representing the inner and outer surfaces of a blood vessel (a) or duodenal wall (b). In 2D studies of the impacts of attenuation and perfusion, the temperature distribution in the axisymmetric geometry was rotated about the central axis to represent a 3D cylinder (c)
A wide variety of heat transfer coefficients, ranging from 46 or 184 W/m2/°C up to 1500 or 3000 W/m2/°C, were applied for each vessel size bracket in this parametric study. The range considered is far wider than the range of heat transfer coefficients calculated for the vessels in each size grouping. Such a wide range was modeled so that the effects of vessel size on temperature could be observed as a function of the heat transfer coefficient.
Parametric study of duodenal heating
Because the pancreas is in such close proximity to the stomach and duodenum, a parametric study was performed to assess techniques for preserving these tissues. As most tissue properties are similar for these two materials, and the majority of pancreatic tumors arise in the head of the pancreas [6] which is near the duodenum, heating of the duodenum was modeled in this study. The effects of the distance from the applicator to the duodenum, the use of sectored transducers, and water-cooling of the duodenum on the peak temperatures generated in the duodenum were evaluated. 360° and 270° sector angles were considered, with the 270° sectors directed away from the duodenum. The applicator was placed 2–30 mm from the duodenum, as measured from the center of the applicator to the outer surface of the duodenal wall. To water-cool the duodenum, it was assumed that the duodenal lumen was filled with 22 °C water immediately before heating began, and that the water was left in the duodenum and gradually warmed by the surrounding tissue. In cases without cooling, the initial temperature of the duodenal lumen was assumed to be 37 °C. In cases with cooling, the duodenal lumen was assumed to have an initial temperature of 22 °C, and all surrounding tissues were assumed to have an initial temperature of 37 °C.
In these 3D models, a transgastric applicator with two 1 cm long transducers was positioned in the center of a large cylinder of tissue (8 cm high, 12 cm diameter). Two concentric 8 cm high cylinders parallel to the applicator represented inner and outer surfaces of the duodenum, which was assumed to have an outer diameter of 25 mm (Fig. 3b) [49]. Duodenal wall thickness has been reported as 1.5–3 mm [49, 50], so a wall thickness of 3 mm was assumed to avoid underestimation of duodenal heating. The contents of the duodenum were modeled as water.
Parametric study of absorption and perfusion
As the acoustic properties and blood perfusion rates of pancreatic tumors have not been widely reported in the literature, a parametric study was performed to evaluate the effects of ranges of acoustic attenuation and blood perfusion rates on ablation outcomes. The maximum lesion diameter (T > 52 °C) was calculated for a variety of tissue attenuations (35–85 Np/m at 7 MHz) and perfusion rates (0–10 kg/m3/s), ranges chosen to encompass a variety of recorded tumor perfusion rates [51], necrotic tumor cores, and a variety of attenuation values common in healthy and cancerous soft tissues [52]. All other material properties for pancreatic tumors were set to those of pancreatic tissue, as shown in Table 1. An applicator with two 15 mm long 360° transducers was considered. It was placed with the transducers in the center of a 9 cm high, 5 cm radius cylinder of tissue (Fig. 3c), and heating was performed for 10 min, considering both transgastric and intraductal applicators.
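The structure of this sweep can be expressed as a simple loop over the attenuation and perfusion ranges quoted above. In the sketch below, ablated_lesion_diameter_mm is a hypothetical placeholder for the full acoustic and biothermal simulation (COMSOL/MATLAB in this study); the grid points chosen within the 35–85 Np/m and 0–10 kg/m3/s ranges are assumptions for illustration.

```python
# Sketch of the attenuation/perfusion parametric sweep structure.
# `ablated_lesion_diameter_mm` is a hypothetical placeholder for the full
# acoustic/biothermal model; only the sweep ranges come from the text.
import itertools

attenuations_Np_m = [35, 45, 55, 65, 75, 85]     # tissue attenuation at 7 MHz (Np/m)
perfusions_kg_m3_s = [0.0, 2.5, 5.0, 7.5, 10.0]  # blood perfusion rates (kg/m^3/s)

def ablated_lesion_diameter_mm(attenuation_Np_m, perfusion_kg_m3_s, heating_min=10):
    """Placeholder: run the thermal model and return the max lesion (T > 52 C) diameter."""
    raise NotImplementedError("attach the acoustic/biothermal simulation here")

lesion_table = {}
for mu, w in itertools.product(attenuations_Np_m, perfusions_kg_m3_s):
    try:
        lesion_table[(mu, w)] = ablated_lesion_diameter_mm(mu, w)
    except NotImplementedError:
        lesion_table[(mu, w)] = None   # filled in once the real model is attached
```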
Results

Hyperthermia and/or ablation treatments of six pancreatic tumors were simulated using intraductal and/or transgastric approaches. Four tumors were in the head of the pancreas, one was in the body, and one was in the tail. Eight treatments were simulated: four intraductal and four transgastric approaches, comprising six ablations and two hyperthermia treatments. The tumor sizes, treatment parameters, and treatment outcomes for the eight cases are summarized in Table 3. 67.1–83.3% of the volumes of four small and medium-sized tumors (3.3–7.9 cm3) could be ablated within 10 min using one applicator position. In contrast, only 55.3% or 60% of the volume of a larger (17.0 cm3) tumor that required multiple repositionings of the applicator could be ablated within 20 min using intraductal or transgastric approaches, respectively. Intraductal hyperthermia was able to treat relatively large portions (89.9–94.7%) of two small and medium-sized tumors (4.4 and 11.4 cm3). In all cases, sensitive anatomy was kept at temperatures of no more than 45 °C (Table 4).
Table 4 Maximum temperature (°C) in tumors and the sensitive anatomy modeled in the eight cases considered
Both 360° and sectored transducers were used. To protect sensitive anatomy while heating in all directions in order to treat larger volumes, 360° applicators were placed off-center within the tumors, away from sensitive tissues (Fig. 4), in Cases 4–6. In four cases, transducers with wide sector angles of 210° or 270°, carefully directed away from sensitive anatomy, were utilized.
3D images showing placement of applicator off-center within tumor to avoid heating of the duodenum (Case 6). a Map of temperatures on the tumor, duodenum, stomach, and blood vessels. b 52 °C contour shown relative to the positions of the tumor, duodenum, and stomach
In Case 3, the pancreatic tail tumor was surrounded by several critical organs, including the right kidney, the spleen, and the splenic vein. To protect the blood vessels, a 270° directional applicator was placed adjacent to the vessels closest to the tumor, and the acoustic energy was directed away from these vessels (Fig. 5). A small portion of the tumor in the direction of the non-active sector was heated through conduction, but there was little heating penetration in the non-active direction, and minimal heating in the direction of the nearby vessels. To avoid renal heating, the applicator was placed perpendicular to the kidney surface, though not deep enough to puncture it, so that the well-collimated acoustic beam would not enter the kidney.
a 3D rendering of the anatomy in Case 3, showing the position of the vessels, kidney, and spleen around the tumor. b Map of temperature on tumor and blood vessel surfaces, with the 52 °C contour (red) and the 270° applicator shown
In Case 7 the duodenum, renal vein, and vena cava were located in close proximity to a tumor in the head of the pancreas, with the duodenum on the right side of the tumor and these two vessels immediately posterior to the tumor (Fig. 6). As directional heating could not be applied to direct all heat away from both of these structures without leaving significant portions of the tumor untreated, this case was modeled using only hyperthermia and a 360° applicator. 97.3% of the tumor volume was heated to a target temperature of 40 °C or higher, with a maximum tumor temperature of 47.1 °C near the applicator. The maximum temperatures on nearby sensitive anatomy were 42.9, 39.7, 38.7, 37.8, and 37.3 °C in the duodenum, blood vessels, bone, liver, and kidney, respectively.
a 3D image of anatomy showing position of tumor between vena cava and duodenum (Case 7), with several organs not shown for clarity. b 2D map of temperatures during intraductal hyperthermia treatment. The superior mesenteric vein is abbreviated SMV
Both transgastric and intraductal approaches were considered to treat a large (4.8 cm long) tumor in the head of the pancreas. As the stomach and major vessels were very close to the tumor, 270° sectored applicators were employed to minimize energy directed towards these sensitive tissues. To cover this tumor, the applicator, which contained two 10 mm transducers, was inserted to two depths for both approaches, and rotated to 1–2 positions at each depth. There were a total of 3–4 applicator positions and rotations over the course of each of the transgastric and intraductal ablations. To provide additional cooling for both approaches, the stomach was assumed to have been filled with 22 °C water before the start of the first ablations, in which the applicator was placed closest to the stomach wall. Hyperthermia was not performed in this case because treatment at 4 discrete positions for 30–60 min each would take an unreasonable amount of time. The transgastric approach, employing 3 applicator positions, allowed for 10.2 cm3 of the 17.0 cm3 tumor to be ablated over 20 min, with maximum stomach and blood vessel temperatures of 44.7 and 44.6 °C, respectively (Fig. 7). The intraductal approach, employing 4 discrete applicator positions, allowed for 9.4 cm3 of the 17.0 cm3 tumor to be ablated over 30 min, with maximum stomach and blood vessel temperatures of 43.2 and 45.0 °C, respectively.
a 3D image of anatomy showing position of large tumor (Cases 1 & 2) between the stomach wall and several major blood vessels (white). b-d Axial 2D maps of temperatures at the end of ablations at each of three positions of the transgastric applicator (Case 1)
A simulated parametric study was performed to determine the effects of errors in the estimated heat transfer coefficient on calculated heating of blood vessels. Figure 8 shows the maximum temperatures on the inner and outer surfaces of blood vessels of various sizes after 10 min of heating, for a variety of heat transfer coefficients and distances between the transgastric applicator and the blood vessels. Allowing a wide margin of error, i.e., assuming only that the actual heat transfer coefficients are at least 25% of the estimated values, acoustic energy can be directed at blood vessels provided that the applicator is positioned at least 15.8 mm from the vena cava, 15.1 mm from the aorta, 14.4 mm from large blood vessels such as the portal vein, or 13.4 mm from small blood vessels such as the superior mesenteric artery, assuming that maximum tissue temperatures are limited to 80 °C. Using the estimated heat transfer coefficients without a margin of error, the applicator should be positioned at least 14.8 mm from the vena cava, 13.9 mm from the aorta, 12.0 mm from large blood vessels, and 9.4 mm from small blood vessels.
Maximum temperature on the outer (solid) and inner (dashed) surfaces of blood vessels after 10 min of heating, as a function of distance from the applicator to the outer surface of the blood vessel, for a variety of heat transfer coefficients. The temperature of the aorta is given in (a), the temperature of large 13.0 mm OD vessels is given in (b), and the temperature of small 5.8 mm OD vessels is given in (c)
Blood vessel temperatures were more sensitive to changes in the heat transfer coefficient when the applicator was closer to the vessels. Changes in heat transfer coefficients also had a larger effect on temperatures in smaller vessels than in larger vessels, and this effect was more pronounced on the outer surface of the vessels. As expected, vessel temperatures were higher when the applicator was closer to the vessel, with an exception in some cases with low heat transfer coefficients when the applicator was close enough (<4 mm) for catheter cooling to play a role.
A parametric study was performed to determine the treatment parameters necessary for preservation of the duodenum. The maximum temperature on the duodenum for 360° and 270° applicators 2–30 mm from the duodenum is plotted in Fig. 9, with treatment times of 5 and 10 min and initial temperatures of 22 and 37 °C.
The maximum temperature of the duodenum (°C) when heated by a 360° applicator or a 270° applicator directed away from the duodenum, as a function of distance from the applicator to the duodenum, for 5 and 10 min treatment times and for initial temperatures of the duodenal lumen of 22 °C and 37 °C
The distance from the applicator to the duodenum and the sector size of the applicator had more pronounced effects on the maximum duodenum temperature than the initial temperature or treatment time. Use of a 270° applicator aimed directly away from the duodenum decreased the duodenal temperatures by up to 21.1 °C when compared to a 360° applicator for a 10 min ablation. A 360° applicator parallel to the duodenum would have to be positioned at least 14 mm from the duodenum to avoid damage (<45 °C) after 10 min of ablation, assuming an initial temperature of 37 °C. A 270° applicator, however, could be placed as close as 7 mm from the duodenum under the same conditions.
Reducing the treatment time from 10 to 5 min had only a moderate effect on duodenal temperatures for 360° applicators, with maximum duodenal temperatures decreasing by at most 4.1 °C. For 270° applicators, this reduction in treatment time had even less effect, with a change in maximum duodenal temperatures of only 1.7 °C at most.
Filling of the duodenum with 22 °C cooling water prior to treatment had a negligible effect on the maximum duodenum temperature at the end of 10 min treatments with 360° applicators and treatments of 5 or 10 min with 270° applicators directed away from the duodenum. With short treatment times using 360° applicators, the initial cooling had a small effect. Cooling reduced the maximum duodenum temperature by 2.1 °C or less after 5 min treatments with 360° applicators, and only 1.3 °C or less after 10 min treatments with 360° applicators. For 270° applicators directed away from the duodenum, duodenal cooling reduced the maximum duodenum temperature by only 1.0 and 0.4 °C after 5 and 10 min treatments, respectively.
The thermal lesion diameter that can be attained after 10 min of heating with transgastric and intraductal applicators is plotted in Fig. 10 for a variety of acoustic attenuation coefficients and blood perfusion rates. Assuming a tumor attenuation of 68 Np/m and perfusion of 4.5 kg/m3/s (Table 1), this study indicated that pancreatic tumors up to 2.5 or 2.7 cm diameter can be ablated within 10 min using the transgastric and intraductal approaches, respectively. Intraductal applicators produced thermal lesions slightly larger (by 1.1–2.5 mm in diameter) than those of transgastric applicators, particularly in tissues with lower attenuations. Variations in blood perfusion rates generally had more pronounced effects on thermal lesion diameters than did variations in acoustic attenuation coefficients. Variations in acoustic attenuation had a greater effect on lesion sizes at lower blood perfusion rates than at higher rates.
The maximum thermal lesion diameter (T > 52 °C) after 10 min of heating as a function of tissue attenuation, for a variety of blood perfusion rates for both transgastric (solid lines) and intraductal (dashed lines) applicators
Discussion

This study was performed to provide a broad and preliminary investigation of the feasibility of transgastric and intraductal approaches to ultrasound-based thermal therapy of pancreatic tumors. Such an approach would provide localized directionality of heating, which RF, microwave, and laser ablation lack, without the acoustic window limitations faced by HIFU. Through patient-specific modeling and parametric studies, we have demonstrated that this technology would have the ability to treat tumors positioned > 4–5 cm from the gastrointestinal (GI) tract, which ultrasound applicators sonicating from within the tract cannot access. Patient-specific simulations herein have demonstrated the ability of transgastric and intraductal ultrasound to generate therapeutic temperature distributions within the majority of the volumes of pancreatic tumors without damage to sensitive structures. Further, parametric studies identified appropriate treatment parameters for a variety of settings to avoid thermal damage to the duodenum or major blood vessels, and delineated limitations of the technology.
Sensitive tissues can be protected through careful placement of the applicator and through the use of sectored applicators. Applicator placement may be guided clinically using MR, CT, ultrasound, or fluoroscopy. Needle-based temperature probes or MR temperature imaging could be applied to confirm correct placement and to ensure that sensitive tissues are not overheated. To prevent overheating in the direction of non-targeted anatomy, transducers with wide sector angles (210° or 270°) were employed in four patient-specific simulations. Wide sector angles allow for heating of large tumor volumes, and only limited heating dominated by conduction occurs in the direction of the sensitive structures. Sectored applicators can be positioned closer to sensitive anatomy than 360° applicators, with the emitted energy directed away from these organs. Filling the duodenum with cold water prior to treatment was not found to be a highly effective means of tissue protection, reducing the duodenal temperature by 2.1 °C or less (Fig. 9). Without active cooling of the duodenum, 360° applicators can be placed as close as 14 mm, and directional applicators as close as 7 mm, from the duodenal wall without damaging the tissue over the course of a 10 min treatment. The high blood perfusion rates in the duodenum, as well as the relatively high wall thickness considered in order to simulate a worst-case scenario, may have contributed to this result.
Applicator placement was determined based on the tumor shape and the location of sensitive structures. In most cases, the applicator was placed in the center of the target volume. If possible, the applicator axis was aligned with the long axis of the tumor, as the ablated zone has the shape of an ellipsoid with the long axis aligned with the applicator. In cases where sensitive anatomy is to one side of the tumor, the applicator could be placed either perpendicular to and outside the surface of the sensitive structure, such that the well-collimated acoustic output does not enter it (Case 8), or off-center within the tumor, similar to the 360° applicator positioned away from sensitive tissues in Case 6 (Fig. 4). An ultrasound applicator can be placed closer to blood vessels, which are self-cooling due to blood flow, than to organs, as can be seen by comparing Figs. 8 and 9. Our finding that a 360° ultrasound applicator should be placed at least 12 mm from large blood vessels like the portal vein is in accordance with a study by Wu et al. [53], who found that 5 mm is an insufficient safety margin between an RFA site and major peripancreatic vessels, namely the portal vein; in that study, three deaths were caused by portal vein thrombosis followed by massive gastrointestinal hemorrhage.
The selection of either a transgastric or an intraductal approach can be made based on the size and location of the tumor. A transgastric approach can be used in cases with an available route from the stomach or duodenum that does not traverse major blood vessels or sensitive tissues. Depending on the size of the duodenum and the constriction and distance from the pyloric sphincter, attaining the desired insertion angle may be easier from the stomach, where there is greater freedom of motion, than from the duodenum. An intraductal approach is more appropriate for tumors near the head of the pancreas that can be accessed from and are immediately adjacent to or encapsulating the pancreatic duct. The intraductal approach can ablate slightly larger volumes than the transgastric approach (26.8 mm vs. 25.4 mm diameter, assuming 4.5 kg/m3/s perfusion and 68 Np/m attenuation). For tumors partially encapsulating the duct, directional sectored applicators may be more appropriate, but they produce much smaller ablated volumes.
The intraductal approach is less invasive than the transgastric one, as it does not require any penetration through the stomach or duodenal wall. Although other transgastric interventions are being investigated with larger instruments than the transgastric ultrasound applicators presented herein [54], less invasive procedures are generally preferable when possible, thus making the intraductal approach more attractive when feasible. However, the full target volume must be accessible (<1.2 cm for ablation from applicator to outer target boundary) from the pancreatic or bile duct. The size of the duct may potentially limit some intraductal treatments to tumors near the head of the pancreas, which is acceptable, as about 70–85% of pancreatic tumors arise in the head [6]. Procedures commonly used for dilatation and placement of stents could be applied during the placement of the ultrasound device.
Hyperthermia for the treatment of pancreatic cancer, as an adjunct to radiation therapy, chemotherapy, immunotherapy, or nanoparticle drug delivery, has potential clinical utility. The feasibility of low temperature heating or hyperthermia delivery is largely dependent on the tumor size and location. Hyperthermia can be applied using the less invasive intraductal approach, following procedures commonly used for pancreatic stenting. Large tumors requiring multiple applicator positions are inappropriate for hyperthermia, as treatment at each position would require 30–60 min. Hyperthermia also has all the limitations associated with the intraductal approach, including less flexibility in selection of applicator positions than the transgastric approach. Hyperthermia may be preferred over ablation for moderately sized tumors with sensitive anatomy on multiple sides of the tumor, as in Case 7, where sensitive tissues could not be readily protected solely by the use of directional transducers.
Larger proportions of tumor volumes were treated with hyperthermia (89.9–94.7%) than with ablation (55.3–83.3%) in this study. This may be due in part to the lower risk of thermal injury to sensitive tissues during hyperthermia placing fewer limitations on treatment, and to the selection of tumors small enough for treatment with a single applicator position. Because hyperthermia uses inherently lower and safer temperatures than ablation, larger proportions of tumor volumes can be treated with less heating of nearby sensitive tissues.
When planning treatments of pancreatic tumors, the attenuation coefficients of the tumor tissue and desmoplastic stroma should be taken into consideration in order to optimize treatment parameters. Parametric studies demonstrate the significant effect tissue attenuation rates have on the size of the ablated region, especially in cases with low blood perfusion rates (Fig. 10). Pancreatic ductal adenocarcinoma, a common form of pancreatic cancer, is known to exhibit extensive and heterogeneous fibrosis [55, 56]. Fibrous tissues generally have higher attenuation coefficients than other soft tissues [52], which could result in preferential heating of the desmoplastic stroma nearest the applicator, and less acoustic propagation through the stroma into the tumor. This could possibly result in reduced heating of both the portion of the tumor further from the applicator, and any sensitive anatomy. A study to obtain measurements of tissue properties for pancreatic tumor samples, as related to their fibrous content and appearance under diagnostic medical imaging, would be extremely useful in informing further development of ultrasound-based therapeutic strategies for pancreatic cancer. Both attenuation coefficients, and the distribution of desmoplastic stroma in and around the targeted tumor, should be taken into account when planning treatments.
The favorable findings of this exploratory study, although preliminary, indicate that further investigation of these approaches for delivering ultrasonic or thermal therapies for the treatment of pancreatic cancer is warranted. The pathway for additional development toward clinical implementation could include the design, fabrication, and experimental evaluations of intraluminal and transgastric devices specific for the pancreas. Specific image guidance approaches, such as MRI with non-invasive temperature monitoring [57–59], or ultrasound or CT imaging with electromagnetic tracking [60–62], could be integrated as a means to precisely position and verify therapy delivery. Incorporation of optimization-based patient specific treatment planning [39, 63, 64] could be applied for a priori or real-time determination of ideal positioning, applicator selection, and applied power trajectories. This could include model-based feedback treatment control, to optimally determine parameters for conformal targeting and ensuring adequate safety zones. Detailed and extensive in vivo studies in large animals could be performed to characterize thermal dosimetry, to closely evaluate heating around ducts and vessels, to define limitations and safety information, to provide feedback to designs and monitoring approaches, and to validate guidance and planning as above. Given successful development within this framework, precise delivery of safe and conformal hyperthermia or thermal ablation of target regions within the pancreas can possibly be delivered in a minimally-invasive fashion, thereby providing a superior alternative to other invasive modalities in regions where extracorporeal HIFU may not be practicable.
Conclusions

This study has demonstrated the feasibility of ablation of 2.5 cm diameter targets in the pancreas using transgastric and intraductal ultrasound applicators, provided that there is sufficient separation of sonicated regions from sensitive organs and blood vessels. To preserve sensitive structures, 360° applicators should be placed at least 13.9–14.8 mm from major vessels like the aorta or vena cava, 9.4–12.0 mm from other sizable vessels, and 14 mm from the duodenum. Alternatively, sectored transducers can be positioned closer to sensitive anatomy, with the emitted energy directed away from these structures. In cases with tumors near or encapsulating the pancreatic duct, intraductal hyperthermia may provide a therapy option in conjunction with radiation or chemotherapy.
Abbreviations

3D: Three-dimensional

CT: Computed tomography

GI: Gastrointestinal

HIFU: High intensity focused ultrasound

MR: Magnetic resonance

PI: Proportional integral

RFA: Radiofrequency ablation

References
Siegel R, et al. Cancer statistics, 2014. CA Cancer J Clin. 2014;64(1):9–29.
Morganti AG, et al. A systematic review of resectability and survival after concurrent chemoradiation in primarily unresectable pancreatic cancer. Ann Surg Oncol. 2010;17(1):194–205.
Hidalgo M. Pancreatic cancer. N Engl J Med. 2010;362(17):1605–17.
Vincent A, et al. Pancreatic cancer. Lancet. 2011;378(9791):607–20.
Keane MG, et al. Systematic review of novel ablative methods in locally advanced pancreatic cancer. World J Gastroenterol. 2014;20(9):2267.
Brescia FJ. Palliative care in pancreatic cancer. Cancer Control. 2004;11(1):39–45.
Cantore M, et al. Combined modality treatment for patients with locally advanced pancreatic adenocarcinoma. Br J Surg. 2012;99(8):1083–8.
Khokhlova TD, Hwang JH. HIFU for palliative treatment of pancreatic cancer. J Gastrointest Oncol. 2011;2(3):175–84.
D'Onofrio M, et al. Radiofrequency ablation of locally advanced pancreatic adenocarcinoma: an overview. World J Gastroenterol. 2010;16(28):3478.
Arcidiacono PG, et al. Feasibility and safety of EUS-guided cryothermal ablation in patients with locally advanced pancreatic cancer. Gastrointest Endosc. 2012;76(6):1142–51.
Pai M, et al. Endoscopic Ultrasound Guided Radiofrequency Ablation (EUS-RFA) for Pancreatic Ductal Adenocarcinoma. Gut. 2013;62 Suppl 1:A153.
Li T, et al. Endoscopic high-intensity focused US: technical aspects and studies in an in vivo porcine model (with video). Gastrointest Endosc. 2015;81(5):1243–50.
Adams MS, et al. Thermal therapy of pancreatic tumours using endoluminal ultrasound: Parametric and patient-specific modelling. Int J Hyperth. 2016;32(2):97–111.
Salgaonkar VA, Diederich CJ. Catheter-based ultrasound technology for image-guided thermal therapy: Current technology and applications. Int J Hyperth. 2015;31(2):203–15.
Nau WH, et al. MRI-guided interstitial ultrasound thermal therapy of the prostate: A feasibility study in the canine model. Med Phys. 2005;32(3):733–43.
Prakash P, et al. Multiple applicator hepatic ablation with interstitial ultrasound devices: Theoretical and experimental investigation. Med Phys. 2012;39(12):7338–49.
Scott JS, Prakash P, Salgaonkar V, Jones PD, Cam RN, Han M, Rieke V, Burdette EC, Diederich CJ. Interstitial ultrasound ablation of tumors within or adjacent to bone: Contributions of preferential heating at the bone surface. Proc. SPIE 8584, Energy-based Treatment of Tissue and Assessment VII, 85840Z. 2013. doi:10.1117/12.2002632.
Scott SJ, et al. Interstitial ultrasound ablation of vertebral and paraspinal tumors: Parametric and patient specific simulations. Int J Hyperth. 2014;30(4):228–44.
Kangasniemi M, et al. Multiplanar MR temperature-sensitive imaging of cerebral thermal treatment using interstitial ultrasound applicators in a canine model. J Magn Reson Imaging. 2002;16(5):522–31.
Pauly KB, et al. Magnetic resonance-guided high-intensity ultrasound ablation of the prostate. Top Magn Reson Imaging. 2006;17(3):195–207.
Diederich CJ. Endocavity and catheter-based ultrasound devices. In: Moros E, editor. Physics of Thermal Therapy: Fundamentals and Clinical Applications. New York: CRC Press; 2012. p. 189–200.
Deardorff DL, Diederich CJ, Nau WH. Control of interstitial thermal coagulation: Comparative evaluation of microwave and ultrasound applicators. Med Phys. 2000;28(1):104–17.
Deardorff DL, Diederich CJ. Axial control of thermal coagulation using a multi-element interstitial ultrasound applicator with internal cooling. IEEE Trans Ultrason Ferroelectr Freq Control. 2000;47(1):170–8.
Nau WH, Diederich CJ, Stauffer PR. Directional power deposition from direct-coupled and catheter-cooled interstitial ultrasound applicators. Int J Hyperth. 2000;16(2):129–44.
Steel AW, et al. Endoscopically applied radiofrequency ablation appears to be safe in the treatment of malignant biliary obstruction. Gastrointest Endosc. 2011;73(1):149–53.
Wadsworth CA, Westaby D, Khan SA. Endoscopic radiofrequency ablation for cholangiocarcinoma. Curr Opin Gastroenterol. 2013;29(3):305–11.
Figueroa-Barojas P, et al. Safety and efficacy of radiofrequency ablation in the management of unresectable bile duct and pancreatic cancer: a novel palliation technique. J Oncol. 2013;2013:1–5.
Prat F, et al. Endoscopic treatment of cholangiocarcinoma and carcinoma of the duodenal papilla by intraductal high-intensity US: Results of a pilot study. Gastrointest Endosc. 2002;56(6):909–15.
Yoon WJ, Brugge WR. Endoscopic ultrasonography-guided tumor ablation. Gastrointest Endosc Clin N Am. 2012;22(2):359–69.
Pai M, et al. Endoscopic ultrasound guided radiofrequency ablation, for pancreatic cystic neoplasms and neuroendocrine tumors. World J Gastrointest Surg. 2015;7(4):52.
Carrara S, et al. Tumors and new endoscopic ultrasound-guided therapies. World J Gastrointest Endosc. 2013;5(4):141–7.
Prakash P, Salgaonkar VA, Diederich CJ. Modelling of endoluminal and interstitial ultrasound hyperthermia and thermal ablation: Applications for device design, feedback control and treatment planning. Int J Hyperth. 2013;29(4):296–307.
Diederich CJ, Hynynen K. Induction of hyperthermia using an intracavitary multielement ultrasonic applicator. IEEE Trans Biomed Eng. 1989;36(4):432–8.
Tschoep-Lechner KE, et al. Gemcitabine and cisplatin combined with regional hyperthermia as second-line treatment in patients with gemcitabine-refractory advanced pancreatic cancer. Int J Hyperthermia. 2013;29(1):8–16.
Ishikawa T, et al. Phase II trial of combined regional hyperthermia and gemcitabine for locally advanced or metastatic pancreatic cancer. Int J Hyperth. 2012;28(7):597–604.
Brugge WR. EUS-guided tumor ablation with heat, cold, microwave, or radiofrequency: will there be a winner? Gastrointest Endosc. 2009;69(2):S212–6.
Nau WH, Diederich CJ, Burdette EC. Evaluation of multielement catheter-cooled interstitial ultrasound applicators for high-temperature thermal therapy. Med Phys. 2001;28(7):1525–34.
Diederich CJ. Ultrasound applicators with integrated catheter-cooling for interstitial hyperthermia: Theory and preliminary experiments. Int J Hyperth. 1996;12(2):279–97.
Chen X, et al. Optimisation-based thermal treatment planning for catheter-based ultrasound hyperthermia. Int J Hyperth. 2010;26(1):39–55.
Scott SJ, et al. Approaches for modeling interstitial ultrasound ablation of tumors within or adjacent to bone: Theoretical and experimental evaluations. Int J Hyperth. 2013;29(7):629–42.
Ross AB, et al. Highly directional transurethral ultrasound applicators with rotational control for MRI-guided prostatic thermal therapy. Phys Med Biol. 2004;49(2):189.
Hadidi A. Pancreatic duct diameter: Sonographic measurement in normal subjects. J Clin Ultrasound. 1983;11(1):17–22.
Pfau PR, et al. Pancreatic and biliary stents. Gastrointest Endosc. 2013;77(3):319–27.
Tantau M, et al. Intraductal ultrasonography for the assessment of preoperative biliary and pancreatic strictures. J Gastrointest Liver Dis. 2008;17(2):217–22.
Pennes HH. Analysis of tissue and arterial blood temperatures in the resting human forearm. J Appl Physiol. 1948;1(2):93–122.
Kinsey AM, et al. Transurethral ultrasound applicators with dynamic multi-sector control for prostate thermal therapy: In vivo evaluation under MR guidance. Med Phys. 2008;35(5):2081–93.
Dewhirst MW, et al. Basic principles of thermal dosimetry and thermal thresholds for tissue damage from hyperthermia. Int J Hyperthermia. 2003;19(3):267–94.
Haemmerich D, et al. Hepatic bipolar radiofrequency ablation creates coagulation zones close to blood vessels: A finite element study. Med Biol Eng Comput. 2003;41(3):317–23.
Cronin CG, et al. Normal small bowel wall characteristics on MR enterography. Eur J Radiol. 2010;75(2):207–11.
Fleischer AC, Muhletaler CA, James Jr A. Sonographic assessment of the bowel wall. Am J Roentgenol. 1981;136(5):887–91.
Vaupel P, Kallinowski F, Okunieff P. Blood flow, oxygen and nutrient supply, and metabolic microenvironment of human tumors: A review. Cancer Res. 1989;49(23):6449–65.
Duck F. Physical Properties of Tissue: A Comprehensive Reference Book. London: Academic Press Limited; 1990.
Wu Y, et al. High operative risk of cool‐tip radiofrequency ablation for unresectable pancreatic head cancer. J Surg Oncol. 2006;94(5):392–5.
Shaikh SN, Thompson CC. Natural orifice translumenal surgery: Flexible platform. World J Gastrointest Surg. 2010;2(6):210–6.
Whatcott C, et al. Tumor-stromal interactions in pancreatic cancer. Crit Rev Oncog. 2013;18(1–2):135–51.
Verbeke C. Morphological heterogeneity in ductal adenocarcinoma of the pancreas - Does it matter? Pancreatology. 2016;16(3):295–301.
Bing C, et al. Drift correction for accurate PRF-shift MR thermometry during mild hyperthermia treatments with MR-HIFU. Int J Hyperthermia. 2016;32(6):673–87.
de Senneville BD, et al. MR thermometry for monitoring tumor ablation. Eur Radiol. 2007;17(9):2401–10.
Rieke V, Butts Pauly K. MR thermometry. J Magn Reson Imaging. 2008;27(2):376–90.
Ebbini ES, ter Haar G. Ultrasound-guided therapeutic focused ultrasound: current status and future directions. Int J Hyperthermia. 2015;31(2):77–89.
Sanchez Y et al. Navigational Guidance and Ablation Planning Tools for Interventional Radiology. Curr Probl Diagn Radiol. 2016;1–9.
Rajagopal M, Venkatesan AM. Image fusion and navigation platforms for percutaneous image-guided interventions. Abdom Radiol (NY). 2016;41(4):620–8.
Prakash P, Diederich CJ. Considerations for theoretical modelling of thermal ablation with catheter-based ultrasonic sources: Implications for treatment planning, monitoring and control. Int J Hyperth. 2012;28(1):69–86.
Paulides MM, et al. Simulation techniques in hyperthermia treatment planning. Int J Hyperth. 2013;29(4):346–57.
Bamber J, Hill C. Acoustic properties of normal and cancerous human liver—I. Dependence on pathological condition. Ultrasound Med Biol. 1981;7(2):121–33.
Kandel S, et al. Whole-organ perfusion of the pancreas using dynamic volume CT in patients with primary pancreas carcinoma: acquisition technique, post-processing and initial results. Eur Radiol. 2009;19(11):2641–6.
Delrue L, et al. Tissue perfusion in pathologies of the pancreas: assessment using 128-slice computed tomography. Abdom Imaging. 2012;37(4):595–601.
Gerweck LE. Hyperthermia in cancer therapy: The biological basis and unresolved questions. Cancer Res. 1985;45(8):3408–14.
Mcintosh RL, Anderson V. A comprehensive tissue properties database provided for the thermal assessment of a human at rest. Biophys Rev Lett. 2010;5(03):129–51.
Giering K, Lamprecht I, Minet O. Specific heat capacities of human and animal tissues. Proc SPIE 2624, Laser-Tissue Interaction and Tissue Optics, 188 (January 10, 1996). doi:10.1117/12.229547.
Phillips M, et al. Irreversible electroporation on the small intestine. Br J Cancer. 2012;106(3):490–5.
Mott HA. Chemists' manual: A practical treatise on chemistry, qualitative and quantitative analysis, stoichiometry, blowpipe analysis, mineralogy, assaying, toxicology, etc., etc., etc. New York: D. Van Nostrand; 1877.
Werner J, Buse M. Temperature profiles with respect to inhomogeneity and geometry of the human body. J Appl Physiol. 1988;65(3):1110–8.
Williams LR, Leggett RW. Reference values for resting blood flow to organs of man. Clin Phys Physiol Meas. 1989;10(3):187–217.
Wootton JH, et al. Implant strategies for endocervical and interstitial ultrasound hyperthermia adjunct to HDR brachytherapy for the treatment of cervical cancer. Phys Med Biol. 2011;56(13):3967.
Poutanen T, et al. Normal aortic dimensions and flow in 168 children and young adults. Clin Physiol Funct Imaging. 2003;23(4):224–9.
Gabe IT, et al. Measurement of instantaneous blood flow velocity and pressure in conscious man with a catheter-tip velocity probe. Circulation. 1969;40(5):603–14.
Wexler L, et al. Velocity of blood flow in normal human venae cavae. Circ Res. 1968;23(3):349–59.
Arienti V, et al. Doppler ultrasonographic evaluation of splanchnic blood flow in coeliac disease. Gut. 1996;39(3):369–73.
Zironi G, et al. Value of measurement of mean portal flow velocity by Doppler flowmetry in the diagnosis of portal hypertension. J Hepatol. 1992;16(3):298–303.
Perišić MD, Ćulafić DM, Kerkez M. Specificity of splenic blood flow in liver cirrhosis. Rom J Intern Med. 2005;43(1–2):141–51.
Sato S, et al. Splenic artery and superior mesenteric artery blood flow: nonsurgical Doppler US measurement in healthy subjects and patients with chronic liver disease. Radiology. 1987;164(2):347–52.
We gratefully acknowledge support by the National Institutes of Health grants R01CA122276, R01CA111981, and P01 CA159992.
Availability of data and material
Please contact corresponding author for data requests.
All authors helped design the study. SJS performed the majority of the modeling analysis herein, with assistance from MSA and guidance from VAS, FGS, and CJD. All authors read and approved the manuscript.
Department of Radiation Oncology, Thermal Therapy Research Group, University of California, San Francisco, 1600 Divisadero Street, Suite H1031, San Francisco, CA, 94143-1708, USA
Serena J. Scott, Matthew S. Adams, Vasant Salgaonkar & Chris J. Diederich
UC Berkeley – UC San Francisco Graduate Program in Bioengineering, California, USA
Matthew S. Adams & Chris J. Diederich
Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
F. Graham Sommer
Serena J. Scott
Matthew S. Adams
Vasant Salgaonkar
Chris J. Diederich
Correspondence to Chris J. Diederich.
Scott, S.J., Adams, M.S., Salgaonkar, V. et al. Theoretical investigation of transgastric and intraductal approaches for ultrasound-based thermal therapy of the pancreas. J Ther Ultrasound 5, 10 (2017). https://doi.org/10.1186/s40349-017-0090-2
Accepted: 07 February 2017
Thermal ablation
Theoretical model
Optimization and Engineering
A stochastic optimization formulation for the transition from open pit to underground mining
James A. L. MacNeil
Roussos G. Dimitrakopoulos
As open pit mining of a mineral deposit deepens, the cost of extraction may increase up to a threshold where transitioning to mining through underground methods is more profitable. This paper provides an approach to determine an optimal depth at which a mine should transition from open pit to underground mining, based on managing technical risk. The value of a set of candidate transition depths is calculated by optimizing the production schedules for each depth's unique open pit and underground operations, which provide yearly discounted cash flow projections. By considering the sum of the open pit and underground mining portions' values, the most profitable candidate transition depth is identified. The optimization model presented is based on a stochastic integer program that integrates geological uncertainty and manages technical risk. The proposed approach is tested on a gold deposit. Results show the benefits of managing geological uncertainty in long-term strategic decision-making frameworks. Additionally, the stochastic result produces a 9% net present value increase over a similar deterministic formulation. The risk-managing stochastic framework also produces operational schedules that reduce a mining project's susceptibility to geological risk. This work aims to improve on previous attempts to solve this problem by jointly considering geological uncertainty and describing the optimal transition depth effectively in three dimensions.
Keywords: Mine production scheduling · Stochastic optimization · Stochastic mine planning
The transition from open pit (OP) to underground (UG) methods requires a large capital cost for development and potential delays in production but can provide access to a large supply of reserves and subsequently extend a mine's life. Additionally, an operating mine may benefit from such a transition because of the opportunity to utilize existing infrastructure and equipment, particularly when in a remote location. Optimization approaches towards the open pit to underground transition decision (or OP-UG) may commence with discretizing the space above and below ground into selective units. For surface mining, material is typically discretized into mining blocks, while underground material is frequently grouped into stopes of varying size depending on the mining method chosen. From there and through production scheduling optimization, the interaction between the OP and UG components can be modeled to realistically value the asset under study.
Historically, operations research efforts in mine planning have been focused on open pits as opposed to underground operations. Most commonly, the open pit planning process begins by determining the ultimate pit limits, and the industry standard is the nested implementation of the Lerchs–Grossmann algorithm (Lerchs and Grossmann 1965; Whittle 1988, 1999). This algorithm utilizes a maximum closure concept to determine optimal pit limits, and a nested implementation facilitates economic discounting. For underground mine planning, optimization techniques are less advanced than those employed for open pit mines and heavily depend on the mining method used. In practice, long-term underground planning is divided into two phases: stope design and production sequencing. For stope design methods, the floating stope algorithm (Alford 1995) is the oldest computerized design tool available, although not an optimization algorithm. Mine optimization research has developed methods that schedule the extraction of discretized units in underground mines (e.g. Trout 1995; Nehring and Topal 2007) based on mixed integer programming (MIP) approaches. Nehring et al. (2009), Little and Topal (2011), and Musingwini (2016) extend MIP approaches to reduce solution times by combining decision variables and to broaden their application. More recent are the efforts to develop geological risk-based optimization approaches for stope design and production sequencing; these have been shown to provide substantial advantages, including more reliable forecasts, increased metal production and higher cash flows (Bootsma et al. 2014; Carpentier et al. 2016).
Some of the world's largest mines are expected to reach their ultimate pit in the next 15 years (Kjetland 2012). Despite the importance of the topic, there is no well-established algorithm to simultaneously generate an optimal mine plan that outlines the transition from open pit mining to underground (Fuentes and Caceres 2004) or approaches that can address the topic of technical risk management, similarly to approaches for open pit mining (e.g. Godoy and Dimitrakopoulos 2004; Montiel and Dimitrakopoulos 2015; Goodfellow and Dimitrakopoulos 2016; Montiel et al. 2016). The first attempt to address the OP to UG transition was made by Popov (1971), while more recently, a movement towards applying optimization techniques has been made, starting with Bakhtavar et al. (2008), who present a heuristic method that compares the economic value of mine blocks when extracted through OP versus their value when extracted by UG techniques. The method iterates progressively downwards through a deposit, concluding that the optimal transition is the depth reached when the value of a block mined by UG methods exceeds the corresponding OP mining value. A major drawback of this method is that it provides a transition depth only described in two dimensions, which is unrealistic from a practical standpoint. An effort is presented in Newman et al. (2013), where the transition depth is formulated as a longest-path network-flow problem. Each path within the network has a unique extraction sequence, a transition depth and a corresponding net present value (NPV). A major limitation of this development is again that it amounts to a 2D solution of what is a 3D problem, as the orebody is discretized into horizontal strata for the above and below ground mining components. At the same time, a worst-case bench-wise mining schedule is adopted for open pit production and a bottom-up schedule for the underground block caving component of the mine. These highly constrained bench-wise mining progressions have been demonstrated to be far from optimal (Whittle 1988) and are rarely implemented in practice. More realistic selective mining units and an optimized schedule can provide a more accurate representation of a mine's value, and this is the approach taken by Dagdelen and Traore (2014), who further extend this OP to UG transition idea to the context of a mining complex. In this work, the authors investigate the transition decision at a currently operating open pit mine that exists within the context of a mining complex comprising five producing open pits, four stockpiles and one processing plant. Dagdelen and Traore (2014) take an iterative approach by evaluating a set of selected transition depths through optimizing the life-of-mine production schedules of both the open pit and underground mines using mixed-integer linear programming techniques. The authors begin by using Geovia's Whittle software (Geovia 2012) to generate a series of pits which provide an ultimate pit contour. The crown pillar, a large portion of undisturbed host material that serves as protection between the lowest OP working and the highest UG levels, is located below the ultimate pit. The location of the ultimate pit and crown pillar provides a basis for the underground mine design. Optimized life-of-mine production schedules are then created to determine yearly cash flow and resulting NPV.
This procedure is repeated for progressively deeper transition depths until the NPV observed in the current iteration is less than what was seen for a previously considered transition depth, at which point the authors conclude that the previously considered depth, with a higher NPV, is optimal.
All the above-mentioned attempts to optimize the OP-UG transition depth fail to consider geological uncertainty, a major cause of failure in mining projects (Vallee 2000). Stochastic optimizers integrate and manage spatially dependent geological uncertainty (grades, material types, metal content, and pertinent rock properties) in the scheduling process, based on its quantification with geostatistical or stochastic simulation methods (e.g. Goovaerts 1997; Soares et al. 2017; Zagayevskiy and Deutsch 2016). Such scheduling optimizers have long been shown to increase the net present value of an operation, while providing a schedule that defers risk and has a high probability of meeting metal production and cash flow targets (Godoy 2003; Ramazan and Dimitrakopoulos 2005; Jewbali 2006; Kumral 2010; Albor and Dimitrakopoulos 2010; Goodfellow 2014; Montiel 2014; Gilani and Sattarvand 2016; and others). Implementing such frameworks is extremely valuable when making long-term strategic decisions because of their ability to accurately value assets.
In this paper, the financial viability of a set of candidate transition depths is evaluated in order to identify the most profitable transition depth. To generate an accurate projection of the yearly cash flows that each candidate transition depth is capable of generating, a yearly life-of-mine extraction schedule is produced for both the OP and UG components of the mine. A two-stage stochastic integer programming (SIP) formulation for production scheduling is presented, which is similar to the work developed by Ramazan and Dimitrakopoulos (2005, 2013). The proposed method improves upon previous developments related to the OP-UG transition problem by simultaneously incorporating geological uncertainty into the long-term decision-making while providing a transition depth described in three dimensions that can be implemented and understood by those who operate the mine.
In the following sections, the method of evaluating a set of pre-selected candidate transition depths to determine which is optimal is discussed. Then a stochastic integer programming formulation used to produce a long-term production schedule for each of the pre-selected candidate transition depths is presented. Finally, a field test of the proposed method is analyzed as the method is applied to a gold mine.
2 Method
2.1 The general set up: candidate transition depths
The method proposed herein to determine the transition depth from OP to UG mining is based on the discretization of the orebody space into different selective units and then accurately assessing the value of the OP and UG portions of the mine based on optimized yearly extraction sequences of these discretized units. More specifically, this leads to a set of several candidate transition depths being assessed in terms of value. The candidate depth that corresponds to the highest total discounted profit is then deemed optimal for the mine being considered. Stochastic integer programming (SIP) provides the required optimization framework to make an informed decision, as this optimizer considers stochastic representations of geological uncertainty while generating the OP and UG long-term production schedules that accurately predict discounted cash flows.
For each transition depth being considered, the OP optimization process begins by discretizing the OP orebody space into blocks, sized based on operational selectivity. Candidate transition depths can be primarily identified based on feasible crown pillar locations. A crown pillar envelope outlined by a geotechnical study delineates an area that the crown pillar can be safely located within. As the crown pillar location changes within this envelope, the extent of the OP and UG orebody also changes and the impact this has on yearly discounted cash flow can be investigated (Fig. 1). The year in which the transition is planned to occur varies across the candidate transition depths. Since the orebodies vary in size across the candidate transition depths, it is logical to allocate more years of open pit production for transition depths with a larger OP orebody and vice versa. In addition to a unique transition year, each candidate transition depth corresponds to a unique ultimate open pit limit, crown pillar location and underground orebody domain, all of which are described in the three-dimensional space.
The process of generating a set of candidate transition depths begins with a large potential open pit and underground orebody. From there, a series of crown pillar locations is identified, along with the correspondingly unique OP and UG orebodies for each candidate transition depth
An optimization solution outlining a long-term schedule that maximizes NPV is produced separately for the OP and UG operations at each of the candidate transition depths considered. Once optimal extraction sequences for the open pit and underground portions have been derived for each depth, the value of transitioning at a certain depth can be determined by summing the economic value of the OP and UG components. From here, the combined NPVs at each depth can be compared to easily identify the most favorable transition decision. This process is outlined in Fig. 2.
Schematic representation of the proposed optimization approach. The approach begins with identifying a set of candidate transition depths, then evaluating the economic viability of each through optimized production schedules that project cash flows under geological uncertainty. Comparisons can be made within the set of transition depths to determine the most profitable option
2.2 Stochastic integer programming: mine scheduling optimization
The proposed stochastic integer program (SIP) aims to maximize discounted cash flow and minimize deviations from key production targets while producing an extraction schedule that abides by the relevant constraints. The OP optimization produces a long-term schedule that outlines a yearly extraction sequence of mining blocks, while the UG optimization adopts the same two-stage stochastic programming approach for scheduling stope extraction. The formulations for OP and UG scheduling are extremely similar, so only the OP formulation is shown. The only difference in the UG formulation is that stopes are scheduled instead of blocks, and yearly metal is constrained instead of yearly waste as in the OP formulation.
2.3 Developing risk-management based life-of-mine plans: open pit optimization formulation
The objective function for the OP SIP model shown in Eq. (1) maximizes discounted cash flows and minimizes deviations from targets, and is similar to that presented by Ramazan and Dimitrakopoulos (2013). Part 1 of the objective function contains first-stage decision variables, \(b_{i}^{t}\) which govern what year a given block i is extracted within. These are scenario-independent decision variables and the metal content of each block is uncertain at the time this decision is made. The terms in Part 1 of Eq. (1) represent the profits generated as a result of extracting certain blocks in a year and these profits are appropriately discounted based on which period they are realized in.
Part 2 of Eq. (1) contains second-stage decision variables that are used to manage the uncertainty in the ore supply during the optimization. These recourse variables (d) are decision variables determined once the geological uncertainty associated with each scenario has been unveiled. At this time, the gap above or below the mine's annual ore and waste targets is known on a scenario-dependent basis and these deviations are discouraged throughout the life-of-mine. This component of the objective function is important because it is reasonable to suggest that if a schedule markedly deviates from the yearly ore and waste targets, then it is unlikely that the projected NPV of the schedule will be realized throughout a mine's life. Therefore, including these variables in the objective function and reducing deviations allows the SIP to produce a practical and feasible schedule along with cash flow projections that have a high probability of being achieved once production commences.
The following notation is used to formulate the first-stage of the OP SIP objective function:
i is the block identifier;
t is a scheduling time period;
\(b_{i}^{t} = \left\{ {\begin{array}{*{20}l} 1 \hfill & {{\text{Block}}\;i\;{\text{is}}\;{\text{mined}}\;{\text{through}}\;{\text{OP}}\;{\text{in}}\;{\text{period}}\;t;} \hfill \\ 0 \hfill & {\text{Otherwise}} \hfill \\ \end{array} } \right.\)
\(g_{i}^{s}\) grade of block i in orebody model s;
\(Rec\) is the mining and processing recovery of the operation;
T i is the weight of block i;
\(NR_{i} = T_{i} \times g_{i}^{s} \times Rec \times \left( {{\text{Price}} - {\text{Selling}}\;{\text{Cost}}} \right)\) is the net revenue generated by selling all the metal contained in block i in simulated orebody s;
MC i is the cost of mining block i;
PC i is the processing cost of block i;
\(E\left\{ {V_{i} } \right\} = \left\{ {\begin{array}{*{20}l} {NR_{i} - MC_{i} - PC_{i} } \hfill & {{\text{if}}\;NR_{i} > PC_{i} } \hfill \\ { - MC_{i} } \hfill & {{\text{if}}\;NR_{i} \le PC_{i} } \hfill \\ \end{array} } \right.\) is the economic value of a block i;
r is the discount rate;
\(E\left\{ {\left( {NPV_{i}^{t} } \right)} \right\} = \frac{{E\left\{ {V_{i}^{0} } \right\}}}{{\left( {1 + r} \right)^{t} }}\) is the expected NPV if the block i is mined in period t;
N is the number of selective mining units available for scheduling;
z is an identifier for the transition depth being considered;
P z is the number of production periods scheduled for candidate transition depth z.
The following notation is used to formulate the second-stage of the OP SIP objective function:
s is a simulated orebody model;
S is the number of simulated orebody models;
w and o are target parameters, or types of production targets; w is for the waste target; o is for the ore production target;
u is the maximum target (upper bound);
l is the minimum target (lower bound);
\(d_{su}^{to} ,d_{su}^{tw}\) are the excessive amounts for the target parameters produced;
\(d_{sl}^{to} , d_{sl}^{tw}\) are the deficient amounts for the target parameters produced;
\(c_{u}^{to} ,c_{l}^{to} ,c_{u}^{tw} ,c_{l}^{tw}\) are unit costs for \(d_{su}^{to} ,d_{sl}^{to} ,d_{su}^{tw} ,d_{sl}^{tw}\) respectively in the optimization's objective function.
OP Objective function
$$Max \underbrace {{\mathop \sum \limits_{t = 1}^{{P_{z} }} \mathop \sum \limits_{i = 1}^{N} E\left\{ {\left( {NPV_{i}^{t} } \right)} \right\}b_{i}^{t} }}_{{{\text{Part}}\,1}} - \underbrace {{\mathop \sum \limits_{s = 1}^{S} \mathop \sum \limits_{t = 1}^{{P_{z} }} \frac{1}{S}\left( {c_{u}^{to} d_{su}^{to} + c_{l}^{to} d_{sl}^{to} + c_{u}^{tw} d_{su}^{tw} + c_{l}^{tw} d_{sl}^{tw} } \right)}}_{{{\text{Part}}\,2}}$$
OP Constraints
The following notation is required for the constraints:
W tar is the targeted amount of waste material to be mined in a given period;
O tar is the targeted amount of ore material to be mined in a given period;
O si is the ore tonnage of block i in the orebody model s;
W si is the waste tonnage of block i in the orebody model s;
Q UG,tar is the yearly metal production target during underground mining;
MCap min is the minimum amount of material required to be mined in a given period;
MCap max is the maximum amount of material that can possibly be mined in a given period;
l i is the set of predecessors of block i.
Scenario-Dependent
Waste constraints for each time period t
$$\mathop \sum \limits_{i = 1}^{N} W_{si} b_{i}^{t} - d_{su}^{tw} + d_{sl}^{tw} = W_{tar} \quad s = 1,2, \ldots ,S;\;\;t = 1,2, \ldots ,P_{z}$$
Processing constraints
$$\mathop \sum \limits_{i = 1}^{N} O_{si} b_{i}^{t} - d_{su}^{to} + d_{sl}^{to} = O_{tar} \quad s = 1,2, \ldots ,S;\;\;t = 1,2, \ldots ,P_{z}$$
Scenario-Independent
Precedence constraints
$$b_{i}^{t} - \mathop \sum \limits_{k = 1}^{t} b_{h}^{k} \le 0\quad i = 1,2, \ldots ,N;\quad t = 1,2, \ldots ,P_{z} ;\;\;h \in l_{i}$$
Mining capacity constraints
$$MCap_{min} \le \mathop \sum \limits_{i = 1}^{N} T_{i} b_{i}^{t} \le MCap_{max} \quad t = 1,2, \ldots ,P_{z}$$
Reserve constraints
$$\mathop \sum \limits_{t = 1}^{{P_{z} }} b_{i}^{t} \le 1\quad i = 1,2, \ldots ,N$$
Constraints (2) and (3) are scenario-dependent constraints that quantify the magnitude of deviation within each scenario from the waste and ore targets based on first-stage decision variables (\(b_{i}^{t}\)). Constraints (4)–(6) contain only first-stage decision variables (\(b_{i}^{t}\)) and thus are scenario-independent. The precedence constraint (4) ensures that the optimizer mines the blocks overlying a specific block i before it can be considered for extraction. The reserve constraint (6) prevents the optimizer from mining a single block i more than once.
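To make the structure of the two-stage formulation concrete, the sketch below expresses Eq. (1) and constraints (2)–(6) with the open-source PuLP modeller. This is only an illustrative toy, not the authors' implementation: the block values, simulated ore and waste tonnages, targets, penalty costs and the linear precedence chain are all placeholder assumptions, and the problem is kept tiny so a generic solver can handle it.

```python
# Minimal sketch of the OP SIP (Eq. 1 with constraints 2-6) using PuLP.
# All data below (block values, simulated ore/waste tonnages, targets,
# penalty costs, precedences) are illustrative placeholders, not mine data.
import random
import pulp

random.seed(0)
N, P, S = 30, 3, 5                       # blocks, periods, simulated orebody models
r = 0.10                                 # discount rate
disc = [1.0 / (1 + r) ** t for t in range(1, P + 1)]   # period index 0..P-1 discounted as years 1..P

E_V = [random.uniform(-1.0, 5.0) for _ in range(N)]                       # expected block value E{V_i}
O = [[random.uniform(0.0, 10.0) for _ in range(N)] for _ in range(S)]     # ore tonnes O_si
W = [[random.uniform(0.0, 10.0) for _ in range(N)] for _ in range(S)]     # waste tonnes W_si
T = [O[0][i] + W[0][i] for i in range(N)]                                 # block tonnage T_i (placeholder)
pred = {i: [i - 1] for i in range(1, N)}                                  # toy precedence: block i-1 overlies i
O_tar, W_tar, MCap_min, MCap_max = 60.0, 60.0, 50.0, 160.0
c_o, c_w = 10.0, 5.0                                                      # unit penalty costs

m = pulp.LpProblem("OP_SIP", pulp.LpMaximize)
b = pulp.LpVariable.dicts("b", (range(N), range(P)), cat="Binary")
d_ou = pulp.LpVariable.dicts("d_ou", (range(S), range(P)), lowBound=0)    # ore surplus
d_ol = pulp.LpVariable.dicts("d_ol", (range(S), range(P)), lowBound=0)    # ore shortfall
d_wu = pulp.LpVariable.dicts("d_wu", (range(S), range(P)), lowBound=0)    # waste surplus
d_wl = pulp.LpVariable.dicts("d_wl", (range(S), range(P)), lowBound=0)    # waste shortfall

# Part 1: expected discounted value; Part 2: expected penalties on target deviations.
part1 = pulp.lpSum(disc[t] * E_V[i] * b[i][t] for i in range(N) for t in range(P))
part2 = pulp.lpSum((c_o * (d_ou[s][t] + d_ol[s][t]) + c_w * (d_wu[s][t] + d_wl[s][t])) / S
                   for s in range(S) for t in range(P))
m += part1 - part2

for s in range(S):
    for t in range(P):
        m += pulp.lpSum(O[s][i] * b[i][t] for i in range(N)) - d_ou[s][t] + d_ol[s][t] == O_tar  # (3)
        m += pulp.lpSum(W[s][i] * b[i][t] for i in range(N)) - d_wu[s][t] + d_wl[s][t] == W_tar  # (2)
for t in range(P):
    m += pulp.lpSum(T[i] * b[i][t] for i in range(N)) >= MCap_min                               # (5)
    m += pulp.lpSum(T[i] * b[i][t] for i in range(N)) <= MCap_max
for i in range(N):
    m += pulp.lpSum(b[i][t] for t in range(P)) <= 1                                             # (6)
    for h in pred.get(i, []):
        for t in range(P):
            m += b[i][t] - pulp.lpSum(b[h][k] for k in range(t + 1)) <= 0                       # (4)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("objective:", pulp.value(m.objective))
```

The case study itself involves far more blocks than this toy, which motivates the metaheuristic discussed next.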
The size of OP mine scheduling applications causes computational issues when using commercial solvers, since it can take long periods of time to arrive at or near an optimal solution, if a solution can be found at all (Lamghari et al. 2014). In order to overcome these issues, metaheuristics can be used. These are algorithms which efficiently search the solution space and have the proven ability to find high-quality solutions in relatively small amounts of time (Ferland et al. 2007; Lamghari and Dimitrakopoulos 2012; Lamghari et al. 2014). To be effective, these algorithms must be specifically tailored to match the nature of the problem being solved. In the context of mine production scheduling, the tabu search algorithm is well suited, and a parallel implementation is utilized here to schedule the open pit portion of the deposit for each transition depth that is considered (Lamghari and Dimitrakopoulos 2012; Senecal 2015). For more details on tabu search, the reader is referred to the Appendix.
2.4 Developing risk-managing life-of-mine plans: underground optimization formulation
The UG scheduling formulation is very similar to the OP formulation. Both have objective functions which aim to maximize discounted profits, while minimizing deviations from key production targets. The UG objective function is similar to that proposed for the OP scheduling function in Eq. (1), except the binary decision variables can be represented using \(a_{j}^{t}\) which designates the period in which extraction-related activities occur for each stope j. As well, recourse variables in the second portion of the objective function aim to limit deviations from the ore and metal targets, as opposed to the ore and waste targets in the OP objective function. Since UG mining methods have a higher level of selectivity than OP mining, waste is often not mined, but rather left in situ and only valuable material is produced. Therefore, it is more useful to constrain the amount of yearly metal produced in a UG optimization. Underground cost structure is viewed from a standpoint of cost per ton of material extracted. This standard figure contains expenses related to development, ventilation, drilling, blasting, extracting, backfilling and overhead. In terms of size and complexity, the UG scheduling model presented here is simpler than the OP model. The reduced size is due to only considering long-term extraction constraints and a small number of mining units that require scheduling. This allows for the schedule to be conveniently solved using IBM ILOG CPLEX 12.6 (IBM 2011), a commercially available software which relies on mathematical programming techniques to provide an exact solution.
UG Constraints
Scenario-Dependent:
Metal constraints for each time period t
$$\mathop \sum \limits_{j = 1}^{M} g_{sj} O_{sj} a_{j}^{t} - d_{su}^{tm} + d_{sl}^{tm} = Q_{UG,tar} \quad s = 1,2, \ldots ,S;\;\;t = 1, \ldots ,P_{z}$$
$$\mathop \sum \limits_{j = 1}^{M} O_{sj} a_{j}^{t} - d_{su}^{to} + d_{sl}^{to} = O_{tar} \quad s = 1,2, \ldots ,S;\;\;t = 1, \ldots ,P_{z}$$
$$a_{j}^{t} - \mathop \sum \limits_{k = 1}^{t} a_{h}^{k} \le 0\quad j = 1,2, \ldots ,M;\;\;t = 1, \ldots ,P_{z} ;\;\;h \in l_{j}$$
$$MCap_{min}^{UG} \le \mathop \sum \limits_{j = 1}^{M} T_{j} a_{j}^{t} \le MCap_{max}^{UG} \quad t = P_{OP} + 1, \ldots ,P$$
Equations (7)–(10) show the constraints included in the UG SIP formulation, where M denotes the number of stopes available for scheduling. In Eq. (9), the set of predecessors for each stope (l j ) is defined by considering the relevant geotechnical issues which constrain the sequencing optimization. These precedence relationships are created using the Enhanced Production Scheduler (EPS) software from Datamine (Datamine Software 2013). For the application presented in this paper, the precedence relationships implemented were passed along by industry-based collaborators who operate the mine. Once the optimization for both the OP and UG components is completed for each candidate transition depth, the optimal transition depth can then be identified as the depth z that leads to a maximum value of the expression below.
$$NPV_{z}^{OP} + NPV_{z}^{UG} \quad z = 1, \ldots ,D$$
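Once the OP and UG schedules have been optimized at every candidate depth, the final selection is simply the arg-max of the expression above; a trivial illustration with made-up NPV figures is:

```python
# Illustrative only: pick the candidate transition depth z maximising NPV_z^OP + NPV_z^UG.
# The NPV figures below are hypothetical placeholders, not case-study results.
npv_op = {1: 310e6, 2: 365e6, 3: 340e6, 4: 300e6}   # hypothetical discounted OP values ($)
npv_ug = {1: 190e6, 2: 175e6, 3: 150e6, 4: 180e6}   # hypothetical discounted UG values ($)

best_z = max(npv_op, key=lambda z: npv_op[z] + npv_ug[z])
print(f"optimal transition depth: TD {best_z}, "
      f"combined NPV = ${(npv_op[best_z] + npv_ug[best_z]) / 1e6:.0f} M")
```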
3 Application at a gold deposit
In order to evaluate the benefits of the proposed method, it is applied to a gold deposit that has been altered to suit an OP-UG transition scenario. In this case study, the optimal transition depth from open pit to underground mining of a gold operation is investigated. The mine's life begins with open pit mining and will transition to production through underground mining by implementing the underhand cut and fill method. Underground production is planned to commence immediately after open pit production ceases. On the mine site there is one mill processing stream with a fixed recovery curve. No stockpile is considered.
A crown pillar envelope for the deposit is identified a priori along with four crown pillar locations within this envelope, leading to four distinct candidate transition depths which are evaluated. The size of the crown pillar remains the same, although its location changes. Investigating the impact of the size of OP and UG mines on the dimensions of the crown pillar is a topic for future research. Each transition depth possesses a unique above and below ground orebody, dictated by a varying crown pillar location in the vertical plane. The year in which the transition between mining methods occurs varies throughout the candidate transition depths to accommodate increased reserves in the OP or UG orebody as the location of the crown pillar shifts. It should be noted that the capital investment required to ramp up UG mining is not considered in the application presented, and can be integrated into the results of the approach presented; as expected, the related capital investment would have an impact on overall project NPV. The combined OP and UG mine life is 14 years for all candidate transition depths tested. The discrepancy in orebody size and reserves that can be accessed by OP and UG methods for each candidate transition depth, along with the transition year, is shown in Table 1. As the open pit deepens and the number of OP blocks increases, the number of UG stopes within the accessible underground resource decreases. Despite the variation in the number of blocks and stopes in each OP and UG mine, the annual tonnage capacity remains the same. It is also important to note that the tonnage varies throughout the UG stopes targeted for production. A schematic of how the crown pillar location varies can be seen in Fig. 3. The relevant economic and technical parameters used to generate the optimization models are shown in Table 2.
Table 1 Size of orebodies and life-of-mine length at each transition depth (for each of Transition Depths 1–4: number of OP blocks, number of UG stopes, production years through OP and production years through UG)
Schematic of transition depths based on open pit orebodies and crown pillar location
Table 2 Economic and technical parameters: metal price $900/oz; OP mining rate 18,500,000 t/year; UG mining rate 350,000 t/year; together with the crown pillar height, economic discount rate, processing cost per ton, OP and UG mining costs per ton, and OP and UG mining recoveries used in the optimization models
3.1 Stochastic optimization results and risk analysis
The transition depth determined to be optimal by the proposed stochastic optimization framework is Transition Depth 2 (TD 2), as seen in Fig. 4. This transition depth is characterized by a crown pillar located at an elevation of 760 ft, and access to 72,585 open pit blocks and 356 stopes. The optimal transition depth in this case study provides a 5% higher NPV than the next best candidate transition depth and a 13% NPV improvement over the least optimal depth. Such a large impact on the financial outcome of a mine confirms that in-depth analysis before making this type of long-term strategic decision is beneficial.
Risk profile on NPV of stochastic schedules. Lines show the expected NPV for each transition depth while considering geological uncertainty. It should be noted that Transition Depth 1 makes the transition in year 7, Transition Depth 2 in year 8, Transition Depth 3 in year 9 and Transition Depth 4 in year 10. Transition Depth 2 is the most profitable decision of the set, with an expected NPV of $540 M
In order to evaluate the risk associated with stochastic decision making, a risk analysis is performed on the life-of-mine plans corresponding to the optimal transition depth stated above. Similar analysis has been done extensively on open pit case studies (Dimitrakopoulos et al. 2002; Godoy 2003; Jewbali 2006; Leite and Dimitrakopoulos 2014; Ramazan and Dimitrakopoulos 2005, 2013; Goodfellow 2014). To do so, a set of 20 simulated scenarios of the grades of the deposit are used and passed through the long-term production schedule determined for the optimal transition depth, which in this case is Transition Depth 2. This process provides the yearly figures for mill production tonnages, metal production and cash flow projections for each simulation if the schedule was implemented and the grades within a given simulation were realized.
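Conceptually, this risk analysis step amounts to freezing the schedule and re-pricing it under each simulated orebody model. A simplified sketch of that post-processing is shown below; the grades, tonnages, costs and the schedule itself are random placeholders standing in for the case-study data.

```python
# Sketch of risk analysis: price a fixed extraction schedule under each grade simulation.
# Grades, tonnages, costs and the schedule itself are placeholder values.
import numpy as np

rng = np.random.default_rng(1)
n_blocks, n_periods, n_sims = 200, 14, 20
schedule = rng.integers(1, n_periods + 1, size=n_blocks)     # period assigned to each block (fixed)
tonnes = rng.uniform(5e3, 2e4, size=n_blocks)
grades = rng.lognormal(mean=0.0, sigma=0.4, size=(n_sims, n_blocks))  # g/t, one row per simulation

price, sell_cost, rec = 900.0 / 31.103, 2.0, 0.90            # $/g, $/g, recovery (placeholders)
mine_cost, proc_cost, r = 2.5, 12.0, 0.10                    # $/t, $/t, discount rate (placeholders)

npv = np.zeros(n_sims)
for s in range(n_sims):
    metal = tonnes * grades[s] * rec                         # grams of recoverable metal per block
    value = metal * (price - sell_cost) - tonnes * (mine_cost + proc_cost)
    for t in range(1, n_periods + 1):
        npv[s] += value[schedule == t].sum() / (1 + r) ** t  # discount each period's cash flow

p10, p50, p90 = np.percentile(npv, [10, 50, 90])
print(f"NPV risk profile: P10 = {p10/1e6:.0f} M, P50 = {p50/1e6:.0f} M, P90 = {p90/1e6:.0f} M")
```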
Figure 5 shows that the stochastic schedule produced for Transition Depth 2 has a high probability of meeting mill input tonnage targets on a yearly basis. The ability to meet targets translates into a high level of certainty with regards to realizing yearly cash flow projections once production commences; this is expanded upon later. Stochastic schedules perform well during risk analysis because the inherent geological variability within the deposit is captured within the simulations and then considered while making scheduling decisions in a stochastic framework.
Performance of stochastic schedule in meeting yearly ore targets
In Fig. 5, there are large deviations from the yearly ore production targets in periods 7 and 8, before the Transition Depth 2 schedule shifts to underground production in period 9. This is because geological risk discounting (Ramazan and Dimitrakopoulos 2005, 2013) is utilized as a risk-management technique during OP scheduling, which penalizes deviation from the production targets more heavily in the early years of production. This is valuable in the capital-intensive mining sector to increase certainty within early-year project revenue and potentially decrease the length of a project's payback period. In addition to this, common long-term scheduling practices within the mining industry involve updating the schedule on a yearly basis as new information about the orebody is gathered, so the large deviations later in the open pit mine life are not a large cause for concern. After the transition is made to underground mining in year 9, a high penalty is incurred on deviations from ore targets to ensure that ore targets are met in the early years of the underground mine. This leads to a tight risk profile throughout the underground life of the mine (periods 9–14). Figure 6 shows the stochastic schedule's ability to produce metal at a steady rate throughout the entire life-of-mine.
Risk profile on cumulative metal produced by the stochastic schedule
3.2 Comparison to deterministic optimization result
To showcase the benefit of incorporating geological uncertainty into long-term strategic decision making, the SIP result is benchmarked against a deterministic optimization that uses the same formulation. The deterministic optimization process however receives an input of only a single orebody model containing estimated values for the grade of each block and stope. Yearly production scheduling decisions are made based on these definitive grade estimates, and from there yearly cash flows streams are projected. This procedure is followed for each of the four transition depths considered, as was done for the stochastic case. Geovia's Whittle software (Geovia 2012) is used to schedule the open-pit portion of the mine, while an MIP is used for the underground scheduling. This underground scheduling utilizes the deterministic equivalent of the stochastic underground schedule formulation seen earlier. The projected yearly discounted cash flows can be seen and suggest that Transition Depth 2 (TD 2) is also optimal from a deterministic perspective (Fig. 7).
Risk profile on NPV of deterministic schedules produced by considering a single estimated orebody model. Lines show the expected NPV for each transition depth. It should be noted that Transition Depth 1 makes the transition in year 7, Transition Depth 2 in year 8, Transition Depth 3 in year 9 and Transition Depth 4 in year 10. Transition Depth 2 is the most profitable decision of the set, with an expected NPV of $520 M
To assess the deterministic framework's ability to manage geological uncertainty, risk analysis is performed on the deterministic schedule for the optimal transition depth 2. The 20 geological (grade) simulations mentioned earlier are passed through the deterministic schedule produced for Transition Depth 2, and the yearly cash projections based on each simulation are summarized in Fig. 8. The results are compared to identical analysis on the stochastic schedule, also for Transition Depth 2. The P50 (median) NPV of the simulations when passed through the stochastic schedule is 9% or $42 M higher than the P50 observed for the deterministic case. Further to that point, this analysis suggests that there is a 90% chance that the deterministic schedule's NPV falls below the NPV of the stochastic schedule.
Risk analysis of projected deterministic NPV. The impact of geological uncertainty on the deterministic schedule can be quantified through risk analysis. The NPV of the deterministic schedule falls from $520 M to $497 M as the impact of geological uncertainty is considered. The stochastic schedule remains robust to uncertainty with an NPV of $540 M, 9% or $43 M higher than the projected deterministic value when considering geological uncertainty in the cash flow projections
In Fig. 8, the NPV projected by risk analysis is 5% below what the optimizer originally predicted. Along with this, there is a large variation in the yearly cash generated. Figure 8 also shows that there is a 70% chance that, once production commences, the realized NPV will be less than the original projection. Figure 8 further shows that the P50 of the stochastic risk profile for Transition Depth 2 is higher than the deterministic projected NPV and the P50 of the deterministic risk profile by 4% and 9%, respectively. This trend of increased value for the stochastic framework extends to other transition depths as well. Figure 9 shows that, in addition to the stochastic schedule at the optimal transition depth (TD 2) generating a higher NPV than the optimal deterministic result (also at TD 2), the next best transition depth in the stochastic case (TD 3) is only $17 M, or 3.4%, lower than the optimal deterministic result.
Comparison of NPV at different transition depths
The increased NPVs seen for the stochastic approach are due to the method's ability to consider multiple stochastically generated scenarios of the mineral deposit, so as to manage geological (metal grade) uncertainty and local variability while making scheduling decisions. Overall, the stochastic scheduler is more informed and motivated to mine lower risk, high grade areas early in the mine life and defer extraction of lower grade and risky materials to later periods.
Figure 10 shows the magnitude of deviation from a predetermined yearly mill tonnage for the schedules produced by both the stochastic and deterministic optimizers at Transition Depth 2. Specifically, it shows the median (P50) of deviations from the yearly mill tonnage targets for the stochastic and deterministic schedules with respect to the 20 simulated orebody models. Throughout the entire life of the mine, the stochastic schedule limits these deviations, while the deterministic schedule has no control over such risk. The deterministic schedule's inability to meet yearly mill input tonnage is a cause for concern and suggests that the mine is unlikely to meet important targets once production commences if such a schedule is implemented.
Magnitude of deviation from the yearly mill input tonnage target. Based on the deterministic and stochastic schedules produced for Transition Depth 2, yearly ore tonnage projections can be made, along with how these projections deviate from the yearly tonnage target. Shown here is the difference in the magnitude of deviations for a deterministic schedule created with no information regarding geological uncertainty
Figure 11 shows a visual comparison between the stochastic and deterministic schedules produced for Transition Depth 2. The shading in Fig. 11 describes which period a mining block is scheduled to be extracted in. Overall, the stochastic schedule appears to be smoother and more mineable than the deterministic schedule, meaning that large groups of nearby blocks are scheduled to be extracted within the same period. As well, both cross-sections reveal that the stochastic schedule mines more material than the deterministic schedule produced by Geovia's Whittle (Geovia 2012), resulting in a larger ultimate pit for the stochastic case. These differences stem from Whittle determining the ultimate pit before scheduling by utilizing a single estimated orebody model containing smoothed grade values. In the stochastic case, the task of determining the ultimate pit contour is done while having knowledge of 20 geological simulations which provide detailed information on the high and low grade areas within the deposit. In this case the stochastic scheduler identifies profitable deep-lying high-grade material that cannot be captured using traditional deterministic methods.
Two cross-sectional views of the schedule obtained by the proposed SIP (left) and the deterministic schedule produced by Whittle (right) for Transition Depth 2. The colored regions indicate the period in which a group of material is scheduled for extraction. (Color figure online)
4 Conclusions and future work
A new method for determining the optimal OP-UG transition depth is presented. The proposed method improves upon previously developed techniques by jointly taking a truly three-dimensional approach to determining the optimal OP-UG transition depth, through the optimization of extraction sequences for both OP and UG components while considering geological uncertainty and managing the related risk. The optimal transition decision is effectively described by a transition year, a three-dimensional optimal open pit contour, a crown pillar location and a clearly defined underground orebody. In the case study, it was determined that the second of four transition depths evaluated is optimal which involves transitioning to underground mining in period 9. Making the decision to transition at the second candidate transition depth evaluated results in a 13% increase in NPV over the worst-case decision, as predicted by the stochastic framework. Upon closer inspection through risk analysis procedures, the stochastic framework is shown to provide a more realistic valuation of both the OP and UG assets. In addition to this, the stochastic framework produces operationally implementable production schedules that lead to a 9% NPV increase and reduction in risk when compared to the deterministic result. It is shown that the yearly cash flow projections outlined by the deterministic optimizer for the underground mine life are unlikely to be met, resulting in misleading decision criteria. Overall, the proposed stochastic framework has proven to provide a robust approach to determining an optimal open pit to underground mining transition depth. Future studies should aim to improve on this method by considering more aspects of financial uncertainty such as inflation and mining costs.
The work in this paper was funded from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant 411270-10, and the COSMO consortium of mining companies—AngloGold Ashanti, Barrick Gold, BHP Billiton, De Beers, Newmont Mining, Kinross Gold and Vale. We thank the reviewers for their valuable comments.
In the presented work, a parallel implementation of tabu search is used to solve the large open pit mine scheduling problem in a reasonable time. This metaheuristic method takes advantage of the multi-core processing architecture in modern computers to effectively distribute tasks and find high-quality solutions. Essentially, the algorithm perturbs an initial feasible production schedule by changing the yearly scheduling decision for a given block; the impact of these perturbations is then evaluated, and they are accepted based on their ability to increase the value of the solution. As the algorithm accepts perturbations and progresses through the solution space, it prohibits itself from repeatedly visiting the same solution by labeling previously visited solutions as tabu (forbidden) for a certain amount of time. The tabu search procedure stops after a specified number of proposed perturbations have been evaluated which fail to improve the solution. In order to prevent the algorithm from getting trapped in a locally (as opposed to globally) optimal solution, a diversification strategy is included in the metaheuristic to generate new, unique starting solutions that can then be improved.
The specific implementation used in the work presented here is known as Parallel Independent tabu search (Senecal 2015) where the so termed master–slave (Hansen 1993) parallel algorithm design is used. In this scheme, a master thread delegates the task of performing tabu search to each available thread and provides them with a unique starting solution. These threads then operate independently to identify the best solution possible using tabu search. The solutions for each are then compared to identify the optimal solution. With this efficient implementation of tabu search, more instances of the algorithm can be run simultaneously to thoroughly cover the solution space in less time than a purely sequential and single threaded approach. More algorithmic details can be found in Lamghari and Dimitrakopoulos (2012).
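As a rough illustration of the search mechanics described above (and not the parallel implementation of Senecal 2015), the following stripped-down tabu search moves single blocks between periods under a toy separable objective; feasibility checks such as precedence and capacity are deliberately omitted, and all data are synthetic.

```python
# Toy tabu search over block-period assignments; objective, data and move rules are
# placeholders, and precedence/capacity feasibility checks are omitted for brevity.
import random

random.seed(2)
N, P = 50, 5
value = [[random.uniform(0, 10) / (1.1 ** t) for t in range(P)] for _ in range(N)]

def objective(sol):
    return sum(value[i][sol[i]] for i in range(N))

def tabu_search(start, tabu_tenure=7, n_candidates=30, max_no_improve=200):
    current = list(start)
    best, best_val = list(current), objective(current)
    tabu = {}                               # (block, new_period) -> iteration until which the move is tabu
    it = no_improve = 0
    while no_improve < max_no_improve:
        it += 1
        best_move, best_move_val = None, float("-inf")
        for _ in range(n_candidates):       # sample single-block moves, keep the best admissible one
            i, t = random.randrange(N), random.randrange(P)
            if t == current[i]:
                continue
            cand_val = objective(current) - value[i][current[i]] + value[i][t]
            if tabu.get((i, t), 0) > it and cand_val <= best_val:
                continue                    # tabu, and not good enough to trigger aspiration
            if cand_val > best_move_val:
                best_move, best_move_val = (i, t), cand_val
        if best_move is None:
            no_improve += 1
            continue
        i, t = best_move
        tabu[(i, current[i])] = it + tabu_tenure   # forbid moving block i straight back for a while
        current[i] = t
        if best_move_val > best_val:
            best, best_val = list(current), best_move_val
            no_improve = 0
        else:
            no_improve += 1
    return best, best_val

# Diversification: restart from several random solutions and keep the best result.
results = [tabu_search([random.randrange(P) for _ in range(N)]) for _ in range(4)]
print("best objective found:", max(v for _, v in results))
```

In the master-slave scheme described above, each restart in the final loop would instead be dispatched to its own thread.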
Albor F, Dimitrakopoulos D (2010) Algorithmic approach to pushback design based on stochastic programming: method, application and comparisons. Min Technol 199:88–101
Alford C (1995) Optimisation in underground mine design. In: Proceedings of the application of computers and operations research in the mineral industry (APCOM) XXV, Brisbane, pp 213–218
Bakhtavar E, Shahriar K, Oraee K (2008) A model for determining the optimal transition depth over from open pit to underground mining. In: Proceedings of the 5th international conference and exhibition on mass mining, Luleå, pp 393–400
Bootsma MT, Alford C, Benndorf J, Buxton MWN (2014) Cut-off grade-based sublevel stope mine optimisation—introduction and evaluation of an optimisation approach and method for grade risk quantification. In: Proceedings of the orebody modelling and strategic mine planning, AusIMM, Perth, pp 281–290
Carpentier S, Gamache M, Dimitrakopoulos R (2016) Underground long-term mine production scheduling with integrated geological risk management. Min Technol 125(2):93–102
Dagdelen K, Traore I (2014) Open pit transition depth determination through global analysis of open pit and underground mine production scheduling. In: Proceedings of the orebody modelling and strategic mine planning, AusIMM, Perth, pp 195–200
Datamine Software (2013) Studio 5 manual. www.dataminesoftware.com/
Dimitrakopoulos R, Farrelly C, Godoy MC (2002) Moving forward from traditional optimisation: grade uncertainty and risk effects in open pit mine design. Trans IMM Min Technol 111:82–89
Ferland JA, Amaya J, Djuimo MS (2007) Application of a particle swarm algorithm to the capacitated open pit mining problem. Stud Comput Intell (SCI) 76:127–133
Fuentes S, Caceres J (2004) Block/panel caving pressing final open pit limit. CIM Bull 97:33–34
Geovia (2012) Geovia Whittle. www.3ds.com/products-services/geovia/products/whittle
Gilani S-O, Sattarvand J (2016) Integrating geological uncertainty in long-term open pit mine: production planning by ant colony optimization. Comput Geosci 87:31–40
Godoy M (2003) The effective management of geological risk in long-term production scheduling of open-pit mines. Ph.D. Thesis, University of Queensland, Brisbane
Godoy M, Dimitrakopoulos R (2004) Managing risk and waste mining in long-term production scheduling. SME Trans 316:43–50
Goodfellow R (2014) Unified modeling and simultaneous optimization of open pit mining complexes with supply uncertainty. Ph.D. Thesis, McGill University, Montreal
Goodfellow R, Dimitrakopoulos R (2016) Global optimization of open pit mining complexes with uncertainty. Appl Soft Comput J 40(C):292–304. doi: 10.1016/j.asoc.2015.11.038
Goovaerts P (1997) Geostatistics for natural resources evaluation. Oxford University Press, Oxford
Hansen P (1993) Model programs for computational science: a programming methodology for multicomputers. Concurr Comput 5(5):407–423
IBM (2011) IBM ILOG CPLEX optimization studio, CPLEX user's manual 12. IBM Corporation, pp 1–148
Jewbali A (2006) Modelling geological uncertainty for stochastic short term production scheduling in open pit metal mines. Ph.D. Thesis, University of Queensland, Brisbane, Qld
Kjetland R (2012) Chuquicamata's life underground will cost a fortune, but is likely to pay off for Codelco. Copper Investing News. Retrieved from: http://copperinvestingnews.com/12788-chuquicamata-underground-mining-codelco-chile-open-pit.html
Kumral M (2010) Robust stochastic mine production scheduling. Eng Optim 42:567–579
Lamghari A, Dimitrakopoulos R (2012) A diversified Tabu search for the open-pit mine production scheduling problem with metal uncertainty. Eur J Oper Res 222(3):642–652
Lamghari A, Dimitrakopoulos R, Ferland AJ (2014) A variable neighborhood descent algorithm for the open-pit mine production scheduling problem with metal uncertainty. J Oper Res Soc 65:1305–1314
Leite A, Dimitrakopoulos R (2014) Mine scheduling with stochastic programming in a copper deposit: application and value of the stochastic solution. Min Sci Technol 24(6):255–262
Lerchs H, Grossmann IF (1965) Optimum design of open pit mines. Can Inst Min Metall Bull 58:17–24
Little J, Topal E (2011) Strategies to assist in obtaining an optimal solution for an underground mine planning problem using Mixed Integer Programming. Int J Min Miner Eng 3(2):152–172
Montiel L (2014) Globally optimizing a mining complex under uncertainty: integrating components from deposits to transportation systems. Ph.D. Thesis, McGill University, Montreal
Montiel L, Dimitrakopoulos R (2015) Optimizing mining complexes with multiple processing and transportation alternatives: an uncertainty-based approach. Eur J Oper Res 247:166–178
Montiel L, Dimitrakopoulos R, Kawahata K (2016) Globally optimising open-pit and underground mining operations under geological uncertainty. Min Technol 125(1):2–14
Musingwini C (2016) Optimization in underground mine planning-developments and opportunities. J South Afr Inst Min Metall 116(9):809–820
Nehring M, Topal E (2007) Production schedule optimisation in underground hard rock mining using mixed integer programming. In: Project evaluation conference, pp 169–175
Nehring M, Topal E, Little J (2009) A new mathematical programming model for production schedule optimisation in underground mining operations. J S Afr Inst Min Metall 110:437–446
Newman A, Yano C, Rubio E (2013) Mining above and below ground: timing the transition. IIE Trans 45(8):865–882
Popov G (1971) The working of mineral deposits. Mir Publishers, Moscow
Ramazan S, Dimitrakopoulos R (2005) Stochastic optimisation of long-term production scheduling for open pit mines with a new integer programming formulation. Orebody Modell Strateg Mine Plann AusIMM Spectr Ser 14:385–391
Ramazan S, Dimitrakopoulos R (2013) Production scheduling with uncertain supply: a new solution to the open pit mining problem. Optim Eng 14:361–380
Senecal R (2015) Parallel implementation of a tabu search procedure for stochastic mine scheduling. M.E. Thesis, McGill University, Montreal
Soares A, Nunes R, Azevedo L (2017) Integration of uncertain data in geostatistical modelling. Math Geosci 49(2):253–273
Trout P (1995) Underground mine production scheduling using mixed integer programming. In: Proceedings of the application of computers and operations research in the mineral industry (APCOM) XXV, Brisbane, pp 395–400
Vallee M (2000) Mineral resource + engineering, economic and legal feasibility = ore reserve. Can Min Metall Soc Bull 93:53–61
Whittle J (1988) Beyond optimisation in open pit design. In: Proceedings Canadian conference on computer applications in the mineral industries, pp 331–337
Whittle J (1999) A decade of open pit mine planning and optimization—the craft of turning algorithms into packages. In: Proceeding of 28th computer applications in the mineral industries, pp 15–24
Zagayevskiy Y, Deutsch CV (2016) Multivariate geostatistical grid-free simulation of natural phenomena. Math Geosci 48(8):891–920
1.COSMO—Stochastic Mine Planning Laboratory, Department of Mining and Materials EngineeringMcGill UniversityMontrealCanada
2.Group for Research in Decision Analysis (GERAD)MontrealCanada
MacNeil, J.A.L. & Dimitrakopoulos, R.G. Optim Eng (2017) 18: 793. https://doi.org/10.1007/s11081-017-9361-6
Received 27 December 2015
Diagram of an RTG used on the Cassini probe
A radioisotope thermoelectric generator (RTG, RITEG) is an electrical generator that uses an array of thermocouples to convert the heat released by the decay of a suitable radioactive material into electricity by the Seebeck effect.
RTGs have been used as power sources in satellites, space probes, and unmanned remote facilities such as a series of lighthouses built by the former Soviet Union inside the Arctic Circle. RTGs are usually the most desirable power source for robotic or unmaintained situations that need a few hundred watts (or less) of power for durations too long for fuel cells, batteries, or generators to provide economically and in places where solar cells are not practical. Safe use of RTGs requires containment of the radioisotopes long after the productive life of the unit.
A pellet of 238PuO2 to be used in an RTG for either the Cassini or Galileo mission. The initial output is 62 watts. The pellet glows red hot because of the heat generated by the radioactive decay (primarily α). This photo was taken after insulating the pellet under a graphite blanket for several minutes and then removing the blanket.
In the same brief letter where he introduced the communications satellite, Arthur C. Clarke suggested that, with respect to spacecraft, "the operating period might be indefinitely prolonged by the use of thermocouples."[1][2]
RTGs were developed in the US during the late 1950s by Mound Laboratories in Miamisburg, Ohio under contract with the United States Atomic Energy Commission. The project was led by Dr. Bertram C. Blanke.[3]
The first RTG launched into space by the United States was SNAP 3 in 1961, aboard the Navy Transit 4A spacecraft. One of the first terrestrial uses of RTGs was in 1966 by the US Navy at uninhabited Fairway Rock in Alaska. RTGs were used at that site until 1995.
A common RTG application is spacecraft power supply. Systems for Nuclear Auxiliary Power (SNAP) units were used for probes that traveled far from the Sun rendering solar panels impractical. As such, they were used with Pioneer 10, Pioneer 11, Voyager 1, Voyager 2, Galileo, Ulysses, Cassini, New Horizons and the Mars Science Laboratory. RTGs were used to power the two Viking landers and for the scientific experiments left on the Moon by the crews of Apollo 12 through 17 (SNAP 27s). Because the Apollo 13 moon landing was aborted, its RTG rests in the South Pacific ocean, in the vicinity of the Tonga Trench.[4] RTGs were also used for the Nimbus, Transit and LES satellites. By comparison, only a few space vehicles have been launched using full-fledged nuclear reactors: the Soviet RORSAT series and the American SNAP-10A.
In addition to spacecraft, the Soviet Union constructed many unmanned lighthouses and navigation beacons powered by RTGs.[5] Powered by strontium-90 (90Sr), they are very reliable and provide a steady source of power. Critics argue that they could cause environmental and security problems as leakage or theft of the radioactive material could pass unnoticed for years, particularly as the locations of some of these lighthouses are no longer known due to poor record keeping. In one instance, the radioactive compartments were opened by a thief.[5] In another case, three woodsmen in Georgia came across two ceramic RTG heat sources that had been stripped of their shielding. Two of the three were later hospitalized with severe radiation burns after carrying the sources on their backs. The units were eventually recovered and isolated.[6]
There are approximately 1,000 such RTGs in Russia. All of them have long exhausted their 10-year engineered life spans. They are likely no longer functional, and may be in need of dismantling. Some of them have become the prey of metal hunters, who strip the RTGs' metal casings, regardless of the risk of radioactive contamination.[7]
The United States Air Force uses RTGs to power remote sensing stations for Top-ROCC and Save-Igloo radar systems predominantly located in Alaska.[8]
In the past, small "plutonium cells" (very small 238Pu-powered RTGs) were used in implanted heart pacemakers to ensure a very long "battery life".[9] About 90 were reported to still be in use.
The design of an RTG is simple by the standards of nuclear technology: the main component is a sturdy container of a radioactive material (the fuel). Thermocouples are placed in the walls of the container, with the outer end of each thermocouple connected to a heat sink. Radioactive decay of the fuel produces heat which flows through the thermocouples to the heat sink, generating electricity in the process.
A thermocouple is a thermoelectric device that converts thermal energy directly into electrical energy using the Seebeck effect. It is made of two kinds of metal (or semiconductors) that can both conduct electricity. They are connected to each other in a closed loop. If the two junctions are at different temperatures, an electric current will flow in the loop.
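For a sense of scale, the open-circuit voltage of a single couple is roughly the Seebeck coefficient times the temperature difference across it. The back-of-the-envelope estimate below uses an assumed, order-of-magnitude coefficient for a SiGe-class couple rather than any quoted specification:

```python
# Rough open-circuit voltage of one thermoelectric couple: V ~ S * (T_hot - T_cold).
# The Seebeck coefficient is an assumed order-of-magnitude value, not a datasheet figure.
seebeck_v_per_k = 300e-6          # roughly hundreds of microvolts per kelvin (assumption)
t_hot, t_cold = 1273.0, 573.0     # K, illustrative hot-junction and heat-sink temperatures

voltage = seebeck_v_per_k * (t_hot - t_cold)
print(f"~{voltage * 1000:.0f} mV per couple; many couples are wired in series for a useful voltage")
```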
Inspection of Cassini spacecraft RTGs before launch
New Horizons in assembly hall
Criteria for selection of isotopes
The radioactive material used in RTGs must have several characteristics:
Its half-life must be long enough so that it will release energy at a relatively constant rate for a reasonable amount of time. The amount of energy released per time (power) of a given quantity is inversely proportional to half-life. An isotope with twice the half-life and the same energy per decay will release power at half the rate per mole. Typical half-lives for radioisotopes used in RTGs are therefore several decades, although isotopes with shorter half-lives could be used for specialized applications.
For spaceflight use, the fuel must produce a large amount of power per mass and volume (density). Density and weight are not as important for terrestrial use, unless there are size restrictions. The decay energy can be calculated if the energy of radioactive radiation or the mass loss before and after radioactive decay is known. Energy release per decay is proportional to power production per mole. Alpha decays in general release about 10 times as much energy as the beta decay of strontium-90 or caesium-137.
Radiation must be of a type easily absorbed and transformed into thermal radiation, preferably alpha radiation. Beta radiation can emit considerable gamma/X-ray radiation through bremsstrahlung secondary radiation production and therefore requires heavy shielding. Isotopes must not produce significant amounts of gamma, neutron radiation or penetrating radiation in general through other decay modes or decay chain products.
The first two criteria limit the number of possible fuels to fewer than 30 atomic isotopes[10] within the entire table of nuclides. Plutonium-238, curium-244 and strontium-90 are the most often cited candidate isotopes, but other isotopes such as polonium-210, promethium-147, caesium-137, cerium-144, ruthenium-106, cobalt-60, curium-242, americium-241 and thulium isotopes have also been studied.
238Pu
Plutonium-238 has a half-life of 87.7 years, a reasonable power density of 0.54 kilowatts per kilogram, and exceptionally low gamma and neutron radiation levels. 238Pu has the lowest shielding requirements; only three candidate isotopes meet the last criterion (not all are listed above) and need less than 25 mm of lead shielding to block the radiation. 238Pu (the best of these three) needs less than 2.5 mm, and in many cases no shielding is needed in a 238Pu RTG, as the casing itself is adequate. 238Pu has become the most widely used fuel for RTGs, in the form of plutonium(IV) oxide (PuO2). Unlike the other candidate RTG fuels, 238Pu must be specifically synthesized and is not abundant as a nuclear waste product. At present only Russia has maintained consistent 238Pu production, while the USA restarted production at ~1.5 kg a year in 2013 after a ~25-year hiatus. At present these are the only countries with declared production of 238Pu in quantities useful for RTGs. 238Pu is produced at typically 85% purity and its purity decreases over time.[11]
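The quoted power density follows directly from the half-life and the decay energy. The quick check below uses commonly cited approximate values (about 5.6 MeV per decay for 238Pu) and should not be read as authoritative data:

```python
# Approximate specific thermal power of a pure alpha emitter from its half-life and decay energy.
# Input values are commonly cited approximations, not authoritative data.
import math

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
YEAR_S = 365.25 * 24 * 3600

def specific_power_w_per_g(half_life_years, decay_energy_mev, molar_mass_g):
    decay_const = math.log(2) / (half_life_years * YEAR_S)               # decays per second per atom
    atoms_per_gram = AVOGADRO / molar_mass_g
    return decay_const * atoms_per_gram * decay_energy_mev * MEV_TO_J    # W/g

print(f"Pu-238: ~{specific_power_w_per_g(87.7, 5.6, 238):.2f} W/g thermal")
# Prints roughly 0.57 W/g, consistent with the ~0.5 kW/kg figure quoted above
# (published values vary slightly depending on whether the metal or the oxide is considered).
```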
90Sr
Strontium-90 has been used by the Soviet Union in terrestrial RTGs. Strontium-90 decays by β emission, with minor γ emission. While its half-life of 28.8 years is much shorter than that of 238Pu, it also has a much lower decay energy: thus its power density is only 0.46 kilowatts per kilogram. Because the energy output is lower, it reaches lower temperatures than 238Pu, which results in lower RTG efficiency. 90Sr is a high-yield waste product of nuclear fission and is available in large quantities at a low price.[12]
210Po
Some prototype RTGs, first built in 1958 by the US Atomic Energy Commission, have used polonium-210. This isotope provides phenomenal power density because of its high radioactive activity, but has limited use because of its very short half-life of 138 days. A kilogram of pure 210Po in the form of a cube would be about 48 mm (about 2 inches) on a side and emit about 140 kW: achieving temperatures beyond 1200 K and becoming hot enough to vaporize itself.
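The 140 kW and 48 mm figures can likewise be checked from the half-life, decay energy and density; the inputs below are commonly cited approximations rather than precise data:

```python
# Check of the polonium-210 figures above from half-life, decay energy and density
# (all inputs are commonly cited approximations).
import math

half_life_s = 138.4 * 86400            # s
decay_energy_j = 5.3 * 1.602e-13       # J per alpha decay (approx.)
atoms_per_kg = 6.022e23 / 210 * 1000
density_kg_m3 = 9200.0                 # approximate density of polonium

power_w = math.log(2) / half_life_s * atoms_per_kg * decay_energy_j
side_mm = (1.0 / density_kg_m3) ** (1 / 3) * 1000
print(f"1 kg of Po-210: ~{power_w / 1000:.0f} kW thermal, cube side ~{side_mm:.0f} mm")
# Roughly 140 kW and ~48 mm, matching the figures quoted above.
```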
241Am
Americium-241 is a potential candidate isotope with a longer half-life than 238Pu: 241Am has a half-life of 432 years and could hypothetically power a device for centuries. However, the power density of 241Am is only 1/4 that of 238Pu, and 241Am produces more penetrating radiation through decay chain products than 238Pu and needs more shielding. Even so, its shielding requirements in an RTG are the second lowest of all possible isotopes: only 238Pu requires less. With a current global shortage[9] of 238Pu, 241Am is being studied as RTG fuel by the ESA.[13] The advantage of 241Am over 238Pu is that it is produced as nuclear waste and is nearly isotopically pure. Prototype designs of 241Am RTGs expect 2–2.2 We/kg for 5–50 We RTG designs, putting 241Am RTGs at parity with 238Pu RTGs within that power range.[14]
90Sr-powered Soviet RTGs in dilapidated and vandalized condition.
Most RTGs use 238Pu, which decays with a half-life of 87.7 years. RTGs using this material will therefore diminish in power output by a factor of 1 − 0.5^(1/87.7), i.e. about 0.787%, per year.
One example is the RTG used by the Voyager probes—23 years after production, the radioactive material inside the RTG will have decreased in power by 16.6%, i.e. providing 83.4% of its initial output; starting with a capacity of 470 W, after this length of time it would have a capacity of only 392 W. A related (and unexpected) loss of power in the Voyager RTGs is the degrading properties of the bi-metallic thermocouples used to convert thermal energy into electrical energy; as a result, the RTGs were working at about 67% of their total original capacity instead of the expected 83.4%. By the beginning of 2001, the power generated by the Voyager RTGs had dropped to 315 W for Voyager 1 and to 319 W for Voyager 2.[15]
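The decay component of these figures follows from the half-life alone; the short sketch below reproduces the 23-year numbers and backs out the additional thermocouple-degradation factor implied by the reported 67% capacity:

```python
# Radioactive-decay component of RTG power loss, plus the reported extra thermocouple degradation.
HALF_LIFE_PU238 = 87.7   # years

def decay_fraction(years, half_life=HALF_LIFE_PU238):
    return 0.5 ** (years / half_life)

initial_power_w = 470.0
after_23_years = initial_power_w * decay_fraction(23)          # decay alone
print(f"decay only after 23 years: {after_23_years:.0f} W "
      f"({decay_fraction(23):.1%} of initial)")                # ~392 W, ~83.4%

# The text reports the Voyager RTGs were actually at ~67% of original capacity,
# i.e. thermocouple degradation removed a further factor of roughly 0.67 / 0.834.
extra_degradation = 0.67 / decay_fraction(23)
print(f"implied thermocouple degradation factor: {extra_degradation:.2f}")
```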
This life span of an RTG was of particular importance during the Galileo mission. Originally intended to launch in 1986, it was delayed by the Space Shuttle Challenger accident. Because of this unforeseen event, the probe had to sit in storage for 4 years before launching in 1989. Subsequently, its RTGs had decayed somewhat, necessitating replanning the power budget for the mission.
RTGs use thermoelectric couples or "thermocouples" to convert heat from the radioactive material into electricity. Thermocouples, though very reliable and long-lasting, are very inefficient; efficiencies above 10% have never been achieved and most RTGs have efficiencies between 3–7%. Thermoelectric materials in space missions to date have included silicon–germanium alloys, lead telluride and tellurides of antimony, germanium and silver (TAGS). Studies have been done on improving efficiency by using other technologies to generate electricity from heat. Achieving higher efficiency would mean less radioactive fuel is needed to produce the same amount of power, and therefore a lighter overall weight for the generator. This is a critically important factor in spaceflight launch cost considerations.
A thermionic converter—an energy conversion device which relies on the principle of thermionic emission—can achieve efficiencies between 10–20%, but requires higher temperatures than those at which standard RTGs run. Some prototype 210Po RTGs have used thermionics, and potentially other extremely radioactive isotopes could also provide power by this means, but short half-lives make these unfeasible. Several space-bound nuclear reactors have used thermionics, but nuclear reactors are usually too heavy to use on most space probes.
Thermophotovoltaic cells work by the same principles as a photovoltaic cell, except that they convert infrared light emitted by a hot surface rather than visible light into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermocouples and can be overlaid on top of thermocouples, potentially doubling efficiency. Systems with radioisotope generators simulated by electric heaters have demonstrated efficiencies of 20%,[16] but have not been tested with actual radioisotopes. Some theoretical thermophotovoltaic cell designs have efficiencies up to 30%, but these have yet to be built or confirmed. Thermophotovoltaic cells and silicon thermocouples degrade faster than thermocouples, especially in the presence of ionizing radiation.
Dynamic generators can provide power at more than 4 times the conversion efficiency of RTGs. NASA and DOE have been developing a next-generation radioisotope-fueled power source called the Stirling Radioisotope Generator (SRG) that uses free-piston Stirling engines coupled to linear alternators to convert heat to electricity. SRG prototypes demonstrated an average efficiency of 23%. Greater efficiency can be achieved by increasing the temperature ratio between the hot and cold ends of the generator. The use of non-contacting moving parts, non-degrading flexural bearings, and a lubrication-free and hermetically sealed environment have, in test units, demonstrated no appreciable degradation over years of operation. Experimental results demonstrate that an SRG could continue running for decades without maintenance. Vibration can be eliminated as a concern by implementation of dynamic balancing or use of dual-opposed piston movement. Potential applications of a Stirling radioisotope power system include exploration and science missions to deep-space, Mars, and the Moon.
The increased efficiency of the SRG may be demonstrated by a theoretical comparison of thermodynamic properties, as follows. These calculations are simplified and do not account for the slow decay of thermal power input over the long half-life of the radioisotopes used in these generators. The assumptions for this analysis include that both systems are operating at steady state under the conditions observed in experimental procedures (see table below for values used). Both generators can be simplified to heat engines to be able to compare their current efficiencies to their corresponding Carnot efficiencies. The system is assumed to be the components, apart from the heat source and heat sink.[17][18][19]
The thermal efficiency, denoted ηth, is given by:
$$ \eta_{th}=\frac{\text{Desired Output}}{\text{Required Input}}=\frac{W'_{out}}{Q'_{in}} $$
Where primes ( ' ) denote the time derivative.
From a general form of the First Law of Thermodynamics, in rate form:
$$ \Delta E'_{sys}=Q'_{in}+W'_{in}-Q'_{out}-W'_{out} $$
Assuming the system is operating at steady state and \( W'_{in}=0 \),
$$ W'_{out}=Q'_{in}-Q'_{out} $$
ηth, then, can be calculated to be 110 W / 2000 W = 5.5% (or 140 W / 500 W = 28% for the SRG). Additionally, the Second Law efficiency, denoted ηII, is given by:
$$ \eta_{II}=\frac{\eta_{th}}{\eta_{th,rev}} $$
Where ηth,rev is the Carnot efficiency, given by:
$$ \eta_{th,rev}=1-\frac{T_{heat\,sink}}{T_{heat\,source}} $$
In which Theat sink is the external temperature (measured to be 510 K for the MMRTG (Multi-Mission RTG) and 363 K for the SRG) and Theat source is the hot-side temperature, assumed to be 823 K for the MMRTG (1123 K for the SRG). This yields a Second Law efficiency of 14.46% for the MMRTG (or 41.37% for the SRG).
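The figures quoted above follow directly from the definitions. The short Python sketch below simply re-derives them from the stated operating values; it is a worked check, not an independent analysis.

```python
# Reproduce the efficiency figures quoted in the text.
def thermal_efficiency(w_out, q_in):
    return w_out / q_in

def carnot_efficiency(t_sink, t_source):
    return 1.0 - t_sink / t_source

for name, w_out, q_in, t_sink, t_source in [
    ("MMRTG", 110.0, 2000.0, 510.0, 823.0),
    ("SRG",   140.0,  500.0, 363.0, 1123.0),
]:
    eta_th = thermal_efficiency(w_out, q_in)
    eta_rev = carnot_efficiency(t_sink, t_source)
    eta_ii = eta_th / eta_rev            # Second Law efficiency
    print(f"{name}: eta_th={eta_th:.1%}, Carnot={eta_rev:.1%}, eta_II={eta_ii:.2%}")
# MMRTG: eta_th=5.5%, Carnot=38.0%, eta_II=14.46%
# SRG:   eta_th=28.0%, Carnot=67.7%, eta_II=41.37%
```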
Diagram of a stack of general purpose heat source modules as used in RTGs
RTGs pose a risk of radioactive contamination: if the container holding the fuel leaks, the radioactive material may contaminate the environment.
For spacecraft, the main concern is that if an accident were to occur during launch or a subsequent passage of a spacecraft close to Earth, harmful material could be released into the atmosphere; therefore their use in spacecraft and elsewhere has attracted controversy.[20][21]
However, this event is not considered likely with current RTG cask designs. For instance, the environmental impact study for the Cassini–Huygens probe launched in 1997 estimated the probability of contamination accidents at various stages in the mission. The probability of an accident occurring which caused radioactive release from one or more of its 3 RTGs (or from its 129 radioisotope heater units) during the first 3.5 minutes following launch was estimated at 1 in 1,400; the chances of a release later in the ascent into orbit were 1 in 476; after that the likelihood of an accidental release fell off sharply to less than 1 in a million.[22] If an accident which had the potential to cause contamination occurred during the launch phases (such as the spacecraft failing to reach orbit), the probability of contamination actually being caused by the RTGs was estimated at about 1 in 10.[23] In any event, the launch was successful and Cassini–Huygens reached Saturn.
The plutonium-238 used in these RTGs has a half-life of 87.74 years, in contrast to the 24,110 year half-life of plutonium-239 used in nuclear weapons and reactors. A consequence of the shorter half-life is that plutonium-238 is about 275 times more radioactive than plutonium-239 by mass (i.e. its specific activity is roughly 275 times higher[24]). For instance, 3.6 kg of plutonium-238 undergoes the same number of radioactive decays per second as 1 tonne of plutonium-239. Since the morbidity of the two isotopes in terms of absorbed radioactivity is almost exactly the same,[25] plutonium-238 is around 275 times more toxic by weight than plutonium-239.
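The "about 275 times" figure follows from the two half-lives and atomic masses alone. A minimal check, using only the values quoted in this paragraph:

```python
import math

N_A = 6.022e23  # Avogadro's number

def specific_activity(half_life_years, atomic_mass):
    """Decays per second per gram (Bq/g)."""
    half_life_s = half_life_years * 365.25 * 24 * 3600
    lam = math.log(2) / half_life_s          # decay constant, 1/s
    return lam * N_A / atomic_mass

a238 = specific_activity(87.74, 238)
a239 = specific_activity(24110, 239)
print(round(a238 / a239))                    # ~276, the ~275x figure above

# 3.6 kg of Pu-238 vs 1 tonne of Pu-239 give comparable total decay rates (Bq):
print(3.6e3 * a238, 1e6 * a239)
```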
The alpha radiation emitted by either isotope will not penetrate the skin, but it can irradiate internal organs if plutonium is inhaled or ingested. Particularly at risk is the skeleton, the surface of which is likely to absorb the isotope, and the liver, where the isotope will collect and become concentrated.
There have been several known accidents involving RTG-powered spacecraft:
The first one was a launch failure on 21 April 1964 in which the U.S. Transit-5BN-3 navigation satellite failed to achieve orbit and burned up on re-entry north of Madagascar.[26] The plutonium metal fuel in its SNAP-9A RTG was injected into the atmosphere over the Southern Hemisphere where it burned up, and traces of plutonium-238 were detected in the area a few months later.
The second was the Nimbus B-1 weather satellite whose launch vehicle was deliberately destroyed shortly after launch on 21 May 1968 because of erratic trajectory. Launched from the Vandenberg Air Force Base, its SNAP-19 RTG containing relatively inert plutonium dioxide was recovered intact from the seabed in the Santa Barbara Channel five months later and no environmental contamination was detected.[27]
In 1969 the launch of the first Lunokhod lunar rover mission failed, spreading polonium-210 over a large area of Russia.[28]
The failure of the Apollo 13 mission in April 1970 meant that the Lunar Module reentered the atmosphere carrying an RTG and burned up over Fiji. It carried a SNAP-27 RTG containing plutonium dioxide which survived reentry into the Earth's atmosphere intact, as it was designed to do, the trajectory being arranged so that it would plunge into 6–9 kilometers of water in the Tonga trench in the Pacific Ocean. The absence of plutonium-238 contamination in atmospheric and seawater sampling confirmed the assumption that the cask is intact on the seabed. The cask is expected to contain the fuel for at least 10 half-lives (i.e. 870 years). The US Department of Energy has conducted seawater tests and determined that the graphite casing, which was designed to withstand reentry, is stable and no release of plutonium should occur. Subsequent investigations have found no increase in the natural background radiation in the area. The Apollo 13 accident represents an extreme scenario because of the high re-entry velocities of the craft returning from cis-lunar space (the region between Earth's atmosphere and the Moon). This accident has served to validate the design of later-generation RTGs as highly safe.
Mars 96 launched in 1996, but failed to leave Earth orbit, and re-entered the atmosphere a few hours later. The two RTGs onboard carried in total 200 g of plutonium and are assumed to have survived reentry as they were designed to do. They are thought to now lie somewhere in a northeast-southwest running oval 320 km long by 80 km wide which is centred 32 km east of Iquique, Chile.[29]
A SNAP-27 RTG deployed by the astronauts of Apollo 14 identical to the one lost in the reentry of Apollo 13
To minimize the risk of the radioactive material being released, the fuel is stored in individual modular units with their own heat shielding. They are surrounded by a layer of iridium metal and encased in high-strength graphite blocks. These two materials are corrosion- and heat-resistant. Surrounding the graphite blocks is an aeroshell, designed to protect the entire assembly against the heat of reentering the Earth's atmosphere. The plutonium fuel is also stored in a ceramic form that is heat-resistant, minimising the risk of vaporization and aerosolization. The ceramic is also highly insoluble.
Many Beta-M RTGs produced by the Soviet Union to power lighthouses and beacons have become orphaned sources of radiation. Several of these units have been illegally dismantled for scrap metal (resulting in the complete exposure of the Sr-90 source), fallen into the ocean, or have defective shielding due to poor design or physical damage. The US Department of Defense cooperative threat reduction program has expressed concern that material from the Beta-M RTGs can be used by terrorists to construct a dirty bomb.[5]
28 U.S. space missions have safely flown radioisotope energy sources since 1961.[30]
RTGs and nuclear power reactors use very different nuclear reactions. Nuclear power reactors use controlled nuclear fission. When an atom of U-235 or Pu-239 fuel fissions, neutrons are released that trigger additional fissions in a chain reaction at a rate that can be controlled with neutron absorbers. This is an advantage in that power can be varied with demand or shut off entirely for maintenance. It is also a disadvantage in that care is needed to avoid uncontrolled operation at dangerously high power levels.
Chain reactions do not occur in RTGs, so heat is produced at an unchangeable, though steadily decreasing rate that depends only on the amount of fuel isotope and its half-life. An accidental power excursion is impossible. However, if a launch or re-entry accident occurs and the fuel is dispersed, the combined power output of the now-dispersed radionuclides does not drop. In an RTG, heat generation cannot be varied with demand or shut off when not needed. Therefore, auxiliary power supplies (such as rechargeable batteries) may be needed to meet peak demand, and adequate cooling must be provided at all times including the pre-launch and early flight phases of a space mission.
Subcritical multiplicator RTG
Because of the shortage of plutonium-238, a new kind of RTG assisted by subcritical reactions has been proposed.[31] In this kind of RTG, the alpha decay from the radioisotope is also used in alpha-neutron reactions with a suitable element such as beryllium. This way a long-lived neutron source is produced. Because the system is working with a criticality close to but less than 1, i.e. Keff < 1, a subcritical multiplication is achieved which increases the neutron background and produces energy from fission reactions. Although the number of fissions produced in the RTG is very small (making their gamma radiation negligible), because each fission reaction releases roughly 30 times more energy than each alpha decay (200 MeV compared to 6 MeV), up to a 10% energy gain is attainable, which translates into a reduction of the 238Pu needed per mission. A subcritical multiplicator RTG was investigated at the Idaho National Laboratory at the Center for Space Nuclear Research (CSNR) in 2013.[32]
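A rough sense of how such an energy gain arises can be sketched numerically. The (α,n) neutron yield, the neutrons per fission, and the k_eff value below are illustrative assumptions, not figures from the cited proposal.

```python
# Rough, illustrative estimate of the energy gain from subcritical multiplication.
E_ALPHA_MEV = 6.0        # energy per alpha decay (value quoted in the text)
E_FISSION_MEV = 200.0    # energy per fission (value quoted in the text)
ALPHA_N_YIELD = 6.5e-5   # assumed neutrons produced per alpha via (alpha,n) on Be
NU = 3.0                 # assumed neutrons released per fission
K_EFF = 0.99             # assumed subcritical multiplication factor

# In a subcritical assembly, each source neutron induces ~k/(nu*(1-k)) fissions.
fissions_per_alpha = ALPHA_N_YIELD * K_EFF / (NU * (1.0 - K_EFF))
gain = fissions_per_alpha * E_FISSION_MEV / E_ALPHA_MEV
print(f"extra energy per alpha decay: {gain:.1%}")
# ~7% for these assumptions; the gain approaches the ~10% figure as k_eff -> 1
```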
RTG for interstellar probes
RTGs have been proposed for use on realistic interstellar precursor missions and interstellar probes.[33] An example of this is the Innovative Interstellar Explorer (2003–current) proposal from NASA.[34] An RTG using 241Am was proposed for this type of mission in 2002.[33] This could support mission extensions up to 1000 years on the interstellar probe, because the power output would be more stable in the long term than plutonium.[33] Other isotopes for RTGs were also examined in the study, looking at traits such as watts/gram, half-life, and decay products.[33] An interstellar probe proposal from 1999 suggested using three advanced radioisotope power sources (ARPS).[35]
The RTG electricity can be used for powering scientific instruments and communication to Earth on the probes.[33] One mission proposed using the electricity to power ion engines, calling this method radioisotope electric propulsion (REP).[33]
A typical RTG is powered by radioactive decay and generates electricity by thermoelectric conversion, but for completeness, some systems with variations on that concept are included here:
Name & Model | Used On (# of RTGs per User) | Electrical (W) | Heat (W) | Radioisotope | Max fuel used (kg) | Mass (kg) | Power/Mass (W/kg)
ASRG* | prototype design (not launched), Discovery Program | ~140 (2x70) | ~500 | 238Pu | ~1 | ~34 | 4.1
MMRTG | MSL/Curiosity rover | ~110 | ~2000 | 238Pu | ~4 | <45 | 2.4
GPHS-RTG | Cassini (3), New Horizons (1), Galileo (2), Ulysses (1) | 300 | 4400 | 238Pu | 7.8 | 55.9–57.8[36] | 5.2–5.4
MHW-RTG | LES-8/9, Voyager 1 (3), Voyager 2 (3) | 160[36] | 2400[37] | 238Pu | ~4.5 | 37.7[36] | 4.2
SNAP-3B | Transit-4A (1) | 2.7[36] | 52.5 | 238Pu | ? | 2.1[36] | 1.3
SNAP-9A | Transit 5BN1/2 (1) | 25[36] | 525[37] | 238Pu | ~1 | 12.3[36] | 2.0
SNAP-19 | Nimbus-3 (2), Pioneer 10 (4), Pioneer 11 (4) | 40.3[36] | 525 | 238Pu | ~1 | 13.6[36] | 2.9
modified SNAP-19 | Viking 1 (2), Viking 2 (2) | 42.7[36] | 525 | 238Pu | ~1 | 15.2[36] | 2.8
SNAP-27 | Apollo 12–17 ALSEP (1) | 73 | 1,480 | 238Pu[38] | 3.8 | 20 | 3.65
Buk (BES-5)** | US-As (1) | 3000 | 100,000 | 235U | 30 | ~1000 | 3.0
SNAP-10A*** | SNAP-10A (1) | 600[39] | 30,000 | Enriched uranium | ? | 431 | 1.4
* The ASRG is not strictly an RTG; it uses a Stirling power conversion device that runs on radioisotope heat (see Stirling radioisotope generator).
** The BES-5 Buk (БЭС-5) reactor was a fast breeder reactor which used thermocouples based on semiconductors to convert heat directly into electricity.[40][41]
*** The SNAP-10A used enriched uranium fuel, zirconium hydride as a moderator, liquid sodium potassium alloy coolant, and was activated or deactivated with beryllium reflectors.[39] Reactor heat fed a thermoelectric conversion system for electrical production.[39]
Name & Model | Used for | Electrical (W) | Heat (W) | Radioisotope | Max fuel used (kg) | Mass (kg)
Beta-M | Obsolete Soviet unmanned lighthouses & beacons | 10 | 230 | 90Sr | 0.26 | 560
Efir-MA | | 30 | 720 | ? | ? | 1250
IEU-1 | | 80 | 2200 | ? | ? | 2500
IEU-2 | | 14 | 580 | ? | ? | 600
Gong | | 18 | 315 | ? | ? | 600
Gorn | | 60 | 1100 | 90Sr | ? | 1050
IEU-2M | | 20 | 690 | ? | ? | 600
IEU-1M | | 120 (180) | 2200 (3300) | ? | ? | 2(3) × 1050
Sentinel 25[42] | Remote U.S. arctic monitoring sites | 9–20 | | SrTiO3 | 0.54 | 907–1814
Sentinel 100F[42] | | 53 | | Sr2TiO4 | 1.77 | 1234
Nuclear power systems in space
Known spacecraft/nuclear power systems and their fate. Systems face a variety of fates; for example, Apollo's SNAP-27 units were left on the Moon.[43] Some other spacecraft also have small radioisotope heaters, for example each of the Mars Exploration Rovers has a 1 watt radioisotope heater. Spacecraft use different amounts of material, for example MSL Curiosity has 4.8 kg of plutonium-238 dioxide,[44] while the Cassini spacecraft has 32.7 kg.[45]
Name and/or model | Power system | Launched | Fate/location
MSL/Curiosity rover | MMRTG (1) | 2011 | Mars surface
Apollo 12 | SNAP-27 ALSEP | 1969 | Lunar surface (Ocean of Storms)[43]
Apollo 13 | SNAP-27 ALSEP | 1970 | Earth re-entry (over Pacific near Fiji)
Apollo 14 | SNAP-27 ALSEP | 1971 | Lunar surface (Fra Mauro)
Apollo 15 | SNAP-27 ALSEP | 1971 | Lunar surface (Hadley–Apennine)
Apollo 16 | SNAP-27 ALSEP | 1972 | Lunar surface (Descartes Highlands)
Apollo 17 | SNAP-27 ALSEP | 1972 | Lunar surface (Taurus–Littrow)
Transit-4A | SNAP-3B? (1) | 1961 | Earth orbit
Transit 5A3 | SNAP-3 (1) | 1963 | Earth orbit
Transit 5BN-1 | SNAP-3 (1) | 1963 | Earth orbit
Transit 5BN-2 | SNAP-9A (1) | 1963 | Earth orbit
Transit 9 | | 1964 | Earth orbit
Transit 5B4 | | 1964 | Earth orbit
Transit 5BN-3 | SNAP-9A (1) | 1964 | Failed to reach orbit[46]
Nimbus-B | SNAP-19 (2) | 1968 | Recovered after crash
Nimbus-3 | SNAP-19 (2) | 1969 | Earth re-entry 1972
Pioneer 10 | SNAP-19 (4) | 1972 | Ejected from Solar System
Viking 1 lander | modified SNAP-19 | 1976 | Mars surface (Chryse Planitia)
Viking 2 lander | modified SNAP-19 | 1976 | Mars surface
Cassini | GPHS-RTG (3) | 1997 | Orbiting Saturn
New Horizons | GPHS-RTG (1) | 2006 | Leaving the Solar System
Galileo | GPHS-RTG (2) | 1989 | Jupiter atmospheric entry
Ulysses | GPHS-RTG (1) | 1990 | Heliocentric orbit
LES-8 | MHW-RTG | 1976 | Near geostationary orbit
Voyager 1 | MHW-RTG (3) | 1977 | Ejected from Solar System
Alaska fire threatens air force nukes, WISE
Nuclear-Powered Cardiac Pacemakers, LANL
NPE chapter 3, Radioisotope Power Generation
Alexandra Witze, "Nuclear power: Desperately seeking plutonium. NASA has 35 kg of 238Pu to power its deep-space missions - but that will not get it very far," Nature, 25 Nov 2014
Rod Adams, "RTG Heat Sources: Two Proven Materials," 1 Sep 1996, retrieved 20 Jan 2012
Dr Major S. Chahal, [1], UK Space Agency, 9 Feb 2012, retrieved 13 Nov 2014
R.M. Ambrosi, et al., [2], Nuclear and Emerging Technologies for Space (2012), retrieved 23 Nov 2014
An Overview and Status of NASA's Radioisotope Power Conversion Technology NRA, NASA, November 2005
http://large.stanford.edu/courses/2011/ph241/chenw1/docs/TM-2005-213981.pdf
http://solarsystem.nasa.gov/rps/docs/ASRGfacts2_10rev3_21.pdf
Nuclear-powered NASA craft to zoom by Earth on Tuesday, CNN news report, 16 August 1999
Valley says pee-eww to plutonium plan, Idaho Mountain Express and Guide, 22 July 2005
Cassini Final Supplemental Environmental Impact Statement, Chapter 4, NASA, September 1997 (links to other chapters and associated documents)
Cassini Final Supplemental Environmental Impact Statement, Appendix D, Summary of tables of safety analysis results, Table D-1 on page D-4; see conditional probability column for GPHS-RTG
Physical, Nuclear, and Chemical Properties of Plutonium, IEER Factsheet
Mortality and Morbidity Risk Coefficients for Selected Radionuclides, Argonne National Laboratory
Mars 96 timeline, NASA
Design of a high power (1 kWe), subcritical, power source, http://csnrstg.usra.edu/public/default.cfm?content=330&child=345
Ralph L. McNutt, et al., Interstellar Explorer (2002), Johns Hopkins University (PDF)
"Space Nuclear Power," G.L. Bennett, 2006
http://www.totse.com/en/technology/space_astronomy_nasa/spacnuke.html
Ruslan Krivobok: Russia to develop nuclear-powered spacecraft for Mars mission. Ria Novosti, 11 November 2009, retrieved 2 January 2011
Safety discussion of the RTGs used on the Cassini-Huygens mission.
Nuclear Power in Space (PDF)
Detailed report on Cassini RTG (PDF)
Detailed lecture on RTG fuels (PDF)
Detailed chart of all radioisotopes
Stirling Thermoelectric Generator
Toxicity profile for plutonium, Agency for Toxic Substances and Disease Registry, U.S. Public Health Service, December 1990
Environmental Impact of Cassini-Huygens Mission.
Expanding Frontiers with Radioisotope Power Systems (PDF)
NASA Radioisotope Power Systems website – RTG page
NASA JPL briefing, Expanding Frontiers with Radioisotope Power Systems – gives RTG information and a link to a longer presentation
SpaceViews: The Cassini RTG Debate
Stirling Radioisotope Generator
DOE contributions – good links
Idaho National Laboratory – Producer of RTGs
Idaho National Laboratory MMRTG page with photo-based "virtual tour"
An improved clear cell renal cell carcinoma stage prediction model based on gene sets
Fangjun Li1,
Mu Yang2,
Yunhe Li1,
Mingqiang Zhang1,
Wenjuan Wang2,
Dongfeng Yuan1 &
Dongqi Tang2
BMC Bioinformatics volume 21, Article number: 232 (2020) Cite this article
Clear cell renal cell carcinoma (ccRCC) is the most common subtype of renal cell carcinoma and accounts for the majority of kidney cancer-related deaths. Survival rates are very low when the tumor is discovered at a late stage. Thus, developing an efficient strategy to stratify patients by cancer stage, and to identify the mechanisms that drive the development and progression of these cancers, is critical for early prevention and treatment.
In this study, we developed new strategies to extract important gene features and trained machine learning-based classifiers to predict the stage of ccRCC samples. The novelty of our approach is that (i) we improved the feature preprocessing procedure by binning and coding, which increased the stability of the data and the robustness of the classification model; (ii) we proposed a joint gene selection algorithm combining the Fast-Correlation-Based Filter (FCBF) search with the information value, the linear correlation coefficient, and the variance inflation factor, and removed irrelevant/redundant features; a logistic regression-based feature selection method was then used to determine influencing factors; (iii) classification models were developed using machine learning algorithms. The method was evaluated on RNA expression values of clear cell renal cell carcinoma derived from The Cancer Genome Atlas (TCGA). The results on the testing set (accuracy of 81.15% and AUC of 0.86) outperformed state-of-the-art models (accuracy of 72.64% and AUC of 0.81), and a gene set, FJL-set, was developed, which contains 23 genes, far fewer than 64. Furthermore, a gene function analysis was used to explore molecular mechanisms that might affect cancer development.
The results suggested that our model can extract more prognostic information, and is worthy of further investigation and validation in order to understand the progression mechanism.
Clear cell renal cell carcinoma (ccRCC) accounts for 60–85% of RCC [1, 2], which represents 2–3% of all cancers with a general annual increase of 5% [3, 4]. ccRCC is usually asymptomatic in the early stages, with about 25–30% of patients having metastasis by the time of diagnosis [5]. Moreover, patients who have had localized ccRCCs removed by nephrectomy have a high risk of metastatic relapse [6]. ccRCC has high resistance to chemotherapy and radiotherapy, leading to poor prognosis [7, 8]. Detecting ccRCC early can aid prevention and treatment, and understanding the key genetic drivers of progression can help to develop new treatments.
Gene expression profiling has the potential for the classification of different tumor types since they play an important role in tumor development and metastasis. Machine learning-based methods which make use of gene expression profiling have been developed for discriminating stages in various cancers [9], including ccRCC [10, 11]. Rahimi [9] recommended using a multiple kernel learning (MKL) formulation on pathways/gene sets to learn an early- and late-stage cancer classification model. Jagga [10] and Bhalla [11] trained different machine learning models using genes selected by Weka and achieved a maximum AUROC of 0.8 and 0.81 on ccRCC respectively. Although some researchers have distinguished early and advanced stages of ccRCC using the classification models, the stability of the classification model is not guaranteed and there is still room for improvement in model performance.
This work aimed to extract significant features from high-dimensional gene data using data mining techniques and make more accurate and reliable predictions of ccRCC tumor stages with machine learning algorithms. For data preprocessing, we used the Chi-merge binning and WOE encoding algorithm to accomplish data discretization, thus reducing the impact of statistical noise and increasing the stability of the classification model. For gene selection, a joint selection strategy to remove irrelevant/redundant features was proposed, and the final FJL-set with 23 genes was derived as an aggregated result. Specifically, we aggregate Fast-Correlation-Based Filter search (FCBFSearch), joint statistical measures (the information value, the linear correlation coefficient, and variance inflation factor) and logistic regression-based feature selection. For the classification model, five different supervised machine learning algorithms were evaluated on an independent testing set. Finally, a simple and comprehensible SVM based prediction model using 23 selected genes performed best with an accuracy of 81.15% and AUC 0.86 — higher than the state-of-the-art method with fewer genes.
The RNAseq expression data along with the clinical information for Kidney Renal Clear Cell Carcinoma (KIRC) samples from The Cancer Genome Atlas (TCGA) project were used to distinguish between early- and late-stage ccRCC. RSEM values of KIRC used as gene expression values and clinical annotations for cancer patients were derived from UCSC Xena (https://xenabrowser.net/datapages/). FPKM values of KIRC were derived from TCGA for comparison with RSEM.
Samples with Stage I and II were considered as early-stage (i.e. localized cancers) and the remaining samples with Stage III and IV were labeled as late-stage cancers. After this processing, 604 early- and late-stage samples were retained. 80% of the samples (482 samples) were picked randomly as the training set and the remaining 20% (122 samples) were used as the independent test set. Table 1 shows the datasets used in this study.
Table 1 Summary of TCGA - KIRC that was used in the training and test set
Feature selection and classification algorithms with preprocessed gene expression profiles were used to detect early- and late-stage samples. Due to the wide range and highly correlated nature of gene expression data, the performance of classification models with raw features was not robust. Therefore, feature selection was conducted before classification, and only on the training set. Five supervised machine learning algorithms were used on gene sets to predict their pathological stages. Figure 1 demonstrates the overall algorithm framework used in this work.
The overall algorithm framework
Feature preprocessing
To increase the stability and robustness of the classification model, Chi-merge binning and WOE encoding were conducted to discretize the genetic features. The range of each numeric RSEM attribute can be very wide for different genes. While some extremely large values seldom appear, they can impair prediction through rare reversal patterns and extreme values. Grouping values with similar properties and similar predictive strength increases the stability of models and allows the logical trend of the "early-/late-stage" bias of each feature to be understood.
Chi-merge binning
Binning and encoding are techniques intended to reduce the impact of statistical noise. They are widely used in credit risk prediction and other applications. However, no prior works apply this method to cancer classification problems; instead, they put the normalized genetic features into machine learning models directly.
Chi-merge is the most widely used automatic discretization algorithm. It partitions the values in such a way that the early-stage and late-stage samples differ as much as possible in proportion between adjacent bins. The disadvantage of Chi-merge is that it requires mass computation, so it may not be a good choice for selecting features from all genes.
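A minimal sketch of the idea is shown below. It is not the exact implementation used in the paper: the initial equal-frequency binning, the stopping rule (a fixed number of final bins), and the 0/1 label coding are assumptions made for illustration.

```python
import numpy as np

def chi2_adjacent(counts_a, counts_b):
    """Chi-square statistic of the 2x2 table formed by two adjacent bins.
    counts_* = [early_count, late_count]."""
    table = np.array([counts_a, counts_b], dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    expected[expected == 0] = 1e-12          # guard against empty classes
    return float(((table - expected) ** 2 / expected).sum())

def chi_merge(values, labels, max_bins=5, n_initial=20):
    """Merge initial equal-frequency bins until max_bins remain.
    values: 1D numpy array of one gene's expression; labels: 0/1 numpy array."""
    edges = list(np.unique(np.quantile(values, np.linspace(0, 1, n_initial + 1))))
    while len(edges) - 1 > max_bins:
        idx = np.clip(np.searchsorted(edges, values, side="right") - 1,
                      0, len(edges) - 2)
        counts = [[np.sum((idx == b) & (labels == c)) for c in (0, 1)]
                  for b in range(len(edges) - 1)]
        # merge the adjacent pair whose class proportions are most similar
        chis = [chi2_adjacent(counts[b], counts[b + 1]) for b in range(len(counts) - 1)]
        b = int(np.argmin(chis))
        del edges[b + 1]
    return edges  # bin boundaries
```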
WOE encoding
After binning, the original numeric characteristics are transformed into categorical ones, and it is impossible to put the discretized variables directly into the model. Therefore, variables of discrete type need to be coded. WOE encoding was used in our experiments to encode these categorical variables.
Weight of evidence (WOE) is based on the ratio of early-stage to late-stage samples at each level. It weighs the strength of feature attributes to distinguish between early- and late-stage accounts.
$$ {WOE}_i=\ln\left(\frac{E_i/E}{L_i/L}\right)=\ln\left(\frac{E_i/L_i}{E/L}\right)=\ln\left(\frac{E_i}{L_i}\right)-\ln\left(\frac{E}{L}\right) $$
Here Ei is the number of early-stage samples in bin i, Li is the number of late-stage samples in bin i, E is the total number of early-stage samples, and L is the total number of late-stage samples.
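A compact WOE computation over pre-binned data might look like the following sketch. The coding of the label as 1 for early-stage and 0 for late-stage, and the small smoothing constant for empty bins, are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def woe_encode(binned, label, eps=0.5):
    """Return a mapping bin -> WOE value. binned: bin index per sample;
    label: 1 = early-stage, 0 = late-stage; eps smooths empty bins."""
    df = pd.DataFrame({"bin": binned, "y": label})
    grouped = df.groupby("bin")["y"]
    early = grouped.sum() + eps                    # early-stage count per bin
    late = grouped.count() - grouped.sum() + eps   # late-stage count per bin
    woe = np.log((early / early.sum()) / (late / late.sum()))
    return woe.to_dict()

# Usage sketch: replace each bin index of a gene with its WOE value.
# woe_map = woe_encode(binned_gene, y_train)
# encoded = pd.Series(binned_gene).map(woe_map)
```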
In the second set of experiments, the RSEM values were transformed using log2 after adding 1.0. Then the log2 transformed values were normalized. The following equations were used for computing the transformation and normalization:
$$ x={\log}_2\left( RSEM+1\right) $$
$$ z=\frac{x-\overline{x}}{s} $$
Where x is the log-transformed gene expression, \( \overline{x} \) is the mean of training samples, and s is the standard deviation of the training samples.
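These two steps correspond to a log transform followed by standardization fit on the training samples only. In this sketch, X_train_rsem and X_test_rsem stand for the RSEM expression matrices (samples x genes); the names are placeholders, not variables from the paper's code.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train_log = np.log2(X_train_rsem + 1.0)   # RSEM -> log2(RSEM + 1)
X_test_log = np.log2(X_test_rsem + 1.0)

scaler = StandardScaler().fit(X_train_log)  # mean and std from training set only
X_train_norm = scaler.transform(X_train_log)
X_test_norm = scaler.transform(X_test_log)
```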
Feature selection
A hybrid feature selection method was developed which aimed to produce a feature subset from aggregated feature selection algorithms. All these algorithms were conducted on the training set. The feature selection method was composed of three parts: (1) FCBFSearch, (2) joint statistical measures, and (3) logistic regression-based feature selection. In this way, irrelevant/redundant attributes in data sets can be removed, the instability and perturbation issues of single feature selection algorithms can be alleviated, and the subsequent learning task can be enhanced.
Fast correlation-based filter search
When there are many variables, there is often strong relevance/redundancy among them. If all the variables are put into the classification model together, the significance of important variables is reduced, and in extreme cases, sign distortion occurs. The Fast Correlation-Based Filter (FCBF) Search algorithm is a feature selection algorithm based on information theory [12], which takes into account both feature correlation and feature redundancy. It uses dominant correlation to distinguish related features in high-dimensional datasets.
FCBFSearch was performed on the original training data without data preprocessing. In addition, a random sampling method was used to select robust features. FCBFSearch was conducted 10 times, with random-sampling 10-fold cross-validation each time on the training dataset, after which 10 subsets of features were obtained. The features with an overlap number of more than 8 were selected for data preprocessing and the following joint statistical measures procedures.
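The stability-oriented aggregation step can be sketched as follows. The paper ran FCBF inside Weka 3.8, so fcbf_select below is a hypothetical stand-in for that call, and the way each repeat combines its folds (a union) as well as the >= 8 cut-off are assumptions about the exact procedure.

```python
from collections import Counter
from sklearn.model_selection import StratifiedKFold

def stable_fcbf_features(X, y, n_repeats=10, n_splits=10, min_hits=8, seed=0):
    """Count how often each gene is selected across repeated CV runs of FCBF.
    X: numpy array (samples x genes); y: 0/1 labels."""
    hits = Counter()
    for rep in range(n_repeats):
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed + rep)
        selected_this_rep = set()
        for train_idx, _ in cv.split(X, y):
            # fcbf_select is a hypothetical helper returning selected gene indices
            selected_this_rep |= set(fcbf_select(X[train_idx], y[train_idx]))
        hits.update(selected_this_rep)
    return [g for g, c in hits.items() if c >= min_hits]
```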
Joint statistical measures
Joint statistical feature selection was done on preprocessed FCBFSearch features. The method combines various statistical measures to assess feature importance and relevance and filter out redundant features.
Univariate Analysis
The information value (IV) is used to assess the overall predictive power of the feature, i.e. the ability of the feature to separate early-and late-stage samples. It expresses the amount of information of the predictor in separating early- from late-stage in the target variable.
$$ IV=\sum \left(\frac{G_i}{G}-\frac{B_i}{B}\right)\ln \left(\frac{G_i/G}{B_i/B}\right) \qquad (4) $$
Where Gi is the proportion of early-stage samples of bin i in all early-stage samples and Bi is the proportion of late-stage samples of bin i in all late-stage samples.
IV < 0.02 represents an unpredicted variable, 0.02–0.10 is weakly predictive, 0.10–0.30 is moderately predictive, and > 0.30 is strongly predictive. In the experiment, we rejected variables whose IV was lower than 0.1.
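Computed on the binned data, the IV of one gene is a sum over its bins. A compact version, using the same early/late label convention and smoothing assumption as the WOE sketch above:

```python
import numpy as np
import pandas as pd

def information_value(binned, label, eps=0.5):
    """binned: bin index per sample; label: 1 = early-stage, 0 = late-stage."""
    df = pd.DataFrame({"bin": binned, "y": label})
    grouped = df.groupby("bin")["y"]
    early = grouped.sum() + eps
    late = grouped.count() - grouped.sum() + eps
    p_early = early / early.sum()
    p_late = late / late.sum()
    return float(((p_early - p_late) * np.log(p_early / p_late)).sum())

# Keep genes with IV >= 0.1, as described in the text:
# kept = [g for g in genes if information_value(bins[g], y_train) >= 0.1]
```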
The linear correlation coefficient was used to measure the correlation between two variables. The larger the absolute value of the linear correlation coefficient is, the more likely it is to be a linear expression for another variable. Linear correlation has two meanings: positive correlation and negative correlation. It is desirable to avoid both of these situations because it is hoped that the correlation between the two variables is as small as possible. In the present study, 0.7 was chosen as the baseline. If the absolute value of the correlation coefficient was greater than 0.7, the one with lower IV score was selected.
After this, collinearity analysis was performed since the collinearity problem tends to reduce the significance of a variable. The Variance Inflation Factor (VIF) was used to evaluate multivariate linear correlation.
$$ {VIF}_i=\frac{1}{1-{R}_i^2} $$
Where \( R_i^2 \) is the R2 value obtained by regressing xi on {x1, x2, …, xi − 1, xi + 1, xi + 2, …, xN}. When the calculated VIF is far less than 10, there is no collinearity problem.
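One way to compute this per feature (not necessarily the paper's own implementation) is via statsmodels, which regresses each column on all the others internally:

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X):
    """X: 2D numpy array (samples x features). Returns one VIF per feature."""
    return np.array([variance_inflation_factor(X, i) for i in range(X.shape[1])])

# Features whose VIF is far below 10 are kept (no collinearity problem).
```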
Logistic regression-based feature selection
In the present study, logistic regression (LR) was used as the classification model in the feature selection process in order to find which factors were influential in discriminating early- and late-stage samples, and how these factors quantitatively affect the model.
To guarantee the validity and significance of the variables sent to the logistic regression model, we checked the coefficients and p values of the input variables, which indicate the influence of each independent variable on the dependent variable and whether its expression changes significantly between early- and late-stage samples. Before this check, some variables had p values higher than 0.1, meaning there is no obvious correlation between the variable and the outcome. In our study, we filtered out variables whose p-value exceeded the threshold of 0.1 or whose coefficients were positive.
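A backward-elimination sketch of this step with statsmodels is shown below, dropping the worst offending variable until all remaining p-values are below 0.1 and all coefficients are negative. The stepwise order of removal is an assumption; the paper does not specify it.

```python
import statsmodels.api as sm

def backward_select(X_df, y, p_threshold=0.1):
    """X_df: pandas DataFrame of WOE-encoded features; y: 0/1 labels."""
    cols = list(X_df.columns)
    while cols:
        model = sm.Logit(y, sm.add_constant(X_df[cols])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        coefs = model.params.drop("const")
        # variables violating either the significance or the sign requirement
        bad = pvals[(pvals > p_threshold) | (coefs > 0)]
        if bad.empty:
            return cols, model
        cols.remove(bad.idxmax())   # drop the least significant offender
    return cols, None
```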
Classification algorithm
Five machine learning algorithms: Support Vector Machine (SVM), Logistic Regression, Multi-Layer Perceptron (MLP), Random Forest (RF) and Naive Bayes (NB) were used for generating the classification models. The RBF kernel of SVM was tuned over different parameters, gamma ∈ [10^−9, 10^−7, ..., 10, 10^3], c ∈ [−5, −3, ..., 13, 15], to optimize the SVM performance. SVM, MLP, RF, and NB were implemented using the Sklearn package in Python.
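The RBF-SVM grid search can be reproduced with scikit-learn. The gamma values are read as powers of 10 and the c values as exponents of 2 (the usual libsvm convention); that interpretation of the grid is an assumption, not something the text states explicitly.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold

param_grid = {
    "gamma": [10.0 ** e for e in range(-9, 4, 2)],   # 1e-9, 1e-7, ..., 1e3
    "C": [2.0 ** e for e in range(-5, 16, 2)],       # 2^-5, 2^-3, ..., 2^15
}
search = GridSearchCV(
    SVC(kernel="rbf", probability=True),
    param_grid,
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
)
# Usage sketch: search.fit(X_train_woe, y_train); the tuned model is
# available afterwards as search.best_estimator_.
```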
10-fold cross-validation
The five supervised machine learning algorithms were trained on the subset features from feature selection and further validated by 10-fold cross-validation.
Independent dataset test
An independent testing set is used to exclude "memorization" effects or bias in the trained classification models. This testing set was not used for feature selection or model training; it was used only to evaluate the performance of the classification model, which was trained on the training set.
Analysis of selected genes
The Database for Annotation, Visualization and Integrated Discovery (DAVID, version 6.7) [13] and the KEGG [14] database were used to interpret gene function at the molecular and higher levels and to associate the genes with related pathways. As a main bioinformatics database for analyzing gene function and understanding biological functions, GO is integrated with other databases in DAVID [15]. A meaningful biological explanation of the selected genes through enrichment analysis, correlating the genes mechanistically with disease, is needed. P < 0.05 was considered statistically significant.
Experiments were performed on the TCGA-KIRC dataset that was constructed with the labeling strategies shown in Table 1. The results of each feature selection procedure and the performance of the classification algorithms are shown.
Experiment settings
The feature selection process and classification models were conducted on the training set, while the performance of models was evaluated using 10-fold cross-validation on the training set as well as on the independent testing set. We implemented the initial FCBFSearch in Weka 3.8, and the attribute evaluator 'SymmetricalUncertAttributeSetEval' with the search method 'FCBFSearch' was used to accomplish this process. All data preprocessing, feature extraction, joint statistical feature selection measures, and classification algorithms were implemented in the Python programming language, and the related code is publicly available on github (https://github.com/lfj95/FJL-model). The details of the experimental settings of the compared methods are described in the Supplementary Methods.
Data preprocessing results
Binning and encoding deal with the long-tailed data distribution
To show the role of binning and encoding, the data distributions of 3 representative genes were plotted. The expression values of these 3 genes (Fig. 2) show that the original dataset had long-tailed distributions, and the probability of occurrence of the maximum value was very small. This kind of data distribution can greatly interfere with the classification procedure and make it unstable. After Chi-merge binning and WOE encoding, the training data were discretized and mapped to values between − 3 and 3. These results indicate that binning and encoding can normalize variables to similar scales and reduce the effect of the data distribution.
Comparison of data distribution of 3 representative genes before and after binning and encoding
Feature selection results
In this section, the results of each feature selection step: (1) FCBFSearch, (2) joint statistical measures, and (3) logistic regression-based feature selection are shown.
FCBFSearch
The selection frequencies of genes selected by FCBFSearch are shown in Table S2. The 101 genes that were selected more than 8 times are marked in bold. FCBFSearch was conducted on gene data without preprocessing; the subsequent discretization step eliminated 6 genes whose largest bin contained more than 90% of the samples. So only 95 genes went on to the joint statistical measures.
The information value was employed to quantify the importance of genes, and the linear correlation coefficient and the variance inflation factor were used to discover associations among genes. Thirty genes whose IV score was lower than 0.1 were removed (Table S3), since such predictors are not useful for modeling. After this process, there were 65 genes left, and gene MFSD2A had the highest IV of 0.455. In addition, 27 genes reached an IV score of 0.2, as shown in Fig. 3A. Therefore, the prediction ability of the individual variables collected was strong, and selecting an appropriate feature combination with good prediction ability was feasible.
Performance of feature selection algorithms. (a) IV scores of 95 genes (higher than 0.1 in blue, lower than 0.1 in red). (b) Validity and significance test of variables. The coefficients of all selected variables are negative, but the p values of some genes are higher than 0.1. After the phase-out, the significance of the remaining variables is guaranteed
Correlation coefficients between genes were all lower than the threshold value 0.7 and the calculated VIF were all far less than 10. So, no genes were removed in this step, indicating that genes included in the classification model all had high importance and low correlation.
To guarantee the correctness and significance of the variables sent to the logistic regression model, the coefficients and p values of the input variables were checked to eliminate variables that were not valid and not significant, respectively. Figure 3B shows variables before and after filtering, the coefficients and p values which indicate the influence of the independent variable on the dependent variable and whether early- and late-stage genetic expression significantly changed. As can be seen, some variables' p values were higher than 0.1 before checking. This means that there is no obvious correlation between the two parameters. The variable size was reduced from 65 to 23 after stepwise iteration removed insignificant variables, while the remaining p-values did not exceed the threshold 0.1 and the values of coefficients were all negative.
Classification results
In this section, the classification results of the model and the baseline models are shown. Prediction models were evaluated on the independent test set of 122 samples in terms of the area under the receiver operating characteristic curve (AUC), accuracy, Matthews Correlation Coefficient (MCC), specificity, and sensitivity. The generalization ability of the algorithm was also assessed by a 10-fold cross-validation experiment. For each fold, separate classifiers were trained, and the result finally obtained was the average over the 10 folds.
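All five reported metrics are available in scikit-learn; specificity falls out of the confusion matrix. The sketch below assumes early-stage is coded as the positive class (1), which is a labeling assumption rather than something stated in the text.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             matthews_corrcoef, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """y_pred: predicted 0/1 labels; y_score: probability of the positive class."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_score),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```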
FJL-set-based models
Twenty-three genes in the FJL-set, with the preprocessing method shown in 3.1.1, were used to classify "early- and late-stage" with the five machine learning algorithms -- SVM, Logistic Regression, MLP, Random Forest, and Naive Bayes (Table 2).
Table 2 The performance of machine learning based-models developed using FLJ-set of 23 selected features on the training set with 10-fold cross-validation set and independent testing set for gene data without discretization
Sensitivities of all the models were in the range of 0.612–0.776, with the highest sensitivity of 0.776 for MLP. Specificities of the models varied, with the lowest of 0.767 for Logistic Regression and the highest of 0.877 for SVM. The best sensitivity-specificity trade-off was observed for the SVM classifier, with a sensitivity of 0.714 and specificity of 0.877. The classification accuracy of the generated prediction models ranged from 76.23% for Random Forest to 81.15% for SVM, and the AUC score ranged from 0.819 for Naive Bayes to 0.860 for SVM. Based on accuracy and AUC, we inferred that the SVM-based prediction model outperformed the other four machine learning algorithms implemented in the study. The MCC of the models developed in the study was between 0.496 and 0.609. It is notable that among the five evaluated prediction models, the model based on SVM had the highest specificity, accuracy, and AUC.
The ROC curve (Fig. 4) was plotted to summarize the performance of the different models in discriminating early- and late-stage ccRCC in the preprocessed test data sets. One hundred and twenty-two test samples were used to evaluate the prediction power of the five classifiers with the two preprocessing methods. Among the prediction models, SVM and Logistic Regression achieved the maximum AUC of 0.860. Naive Bayes had the lowest AUC of 0.819, about 0.04 lower than SVM. In real-world applications, logistic regression is also a good choice.
Receiver Operating Characteristic (ROC) curves for all five classifiers with discretization
No feature selection based models
We first conducted experiments without feature selection to establish the baseline performance of models developed using machine learning techniques. We used 20,530 gene features with the preprocessing method as shown in 3.1.2. The classification results on the testing set are shown in Table 3.
Table 3 The performance of machine learning-based models developed using different sets of selected features, which include whole gene sets without feature selection, RCSP-set-Weka-Hall, FCBF-set, and FJL-set
The AUC on the testing set was 0.806 for SVM and 0.768 for LR. The results of traditional machine learning algorithms before feature selection were not high, especially for logistic regression, whose performance was strongly affected by the wide range and highly correlated nature of the gene expression data. Therefore, feature selection is essential to improve prediction accuracy.
RCSP-set-Weka-Hall based models
Our best results were compared with those of Bhalla et al. [11], who selected a subset of genes that are components of cancer hallmark processes and obtained good model performance. We conducted experiments with these 38 genes both on the training set with 10-fold cross-validation and on the test set. The preprocessing method used is as described in 3.1.2, the same as that used in their study. The classification results on the testing set are shown in Table 3.
As reported in their paper, they achieved an accuracy of 77.7% with an AUC of 0.83 on their training data and an accuracy of 72.64% with an AUC of 0.78 on their validation data with 104 test samples. In the present experiment, their method was repeated in Python, and an accuracy of 77.87% with an AUC of 0.844 was obtained with SVM on our test data with 122 test samples, while the results on the training set using 10-fold cross-validation were 70.35% in accuracy and 0.769 in AUC (Table 3).
FCBF-set-based models
In this section, feature selection was performed by Weka on data preprocessed with the method described in 3.1.2, and the number of features was reduced from 20,530 to 101 (FCBF-set). LR-based models did not perform well with these 101 genes, with an accuracy of 72.95% and an AUC of 0.789 on the test set. SVM-based models gave the best performance, with an accuracy of 74.23% and AUC of 0.793 on the training data using 10-fold cross-validation and an accuracy of 75.41% with an AUC of 0.826 on the testing set (Table 3), which were higher than the results of the RCSP-set-Weka-Hall based model. To confirm the robustness of the results, we also generated 100 random sets from 60% of the validation samples and tested the biomarkers on these random sets; the mean of the randomized experiments is shown in Table 3.
It can be seen that the FJL-set-based models perform best, which confirms that the genes selected with our method are meaningful for the division of pathological stages. There is also consistency between the results of 10-fold cross-validation and the results on the testing set.
In addition, FPKM values were run through the same process as RSEM. Accuracy and AUC were again better than for the RCSP-set-Weka-Hall set, as shown in Table S5, indicating that the experimental method is also applicable to FPKM and can achieve a good classification result.
Biological mechanisms identified by selected genes
Many filtered genes in our method were confirmed to associate with tumor in the previous literature. UFSP2 combined with the nuclear receptor coactivator ASC1 is involved in the development of breast cancer [16]. GPR68 is a mediator interacting with pancreatic cancer-associated fibroblasts and tumor cells [17]. RXRA mutation drives about a quarter of bladder cancer [18]. CACNA1D mutation causes increased Ca2+ influx, further stimulating aldosterone production and cell proliferation in adrenal glomerulosa [19]. CASP9 expression has an apoptosis-inducing and anti-proliferative effect in breast cancer [20]. High expression of PLA2G2A can cause short survival in human rectal cancer [21]. KIAA0652 (ATG13) mediates the inhibition of autophagy in DNA damage via the mTOR pathway [22]. CTSG (Cathepsin G) is thought to be an effective therapeutic target in acute myeloid leukemia patients [23] and could rapidly enhance NK cytotoxicity [24]. HUS1b is confirmed to have the function of checkpoint activation in the response to DNA damage, and its overexpression induces cell death [25]. Saitohin polymorphism is associated with the susceptibility of late-onset Alzheimer's disease [26] and does not associate with the cancer. RNF115 is broadly overexpressed in ERα-positive breast tumors [27]. Wintergerst L et al. [28] reported that CENPBD1 can predict clinical outcomes of head and neck squamous cell carcinoma patients. Tumor cells produce IL-31, and IL-31 and its receptor are confirmed to affect the tumor microenvironment [28].
Functional roles of the 23 hub genes are shown in Table S4. The GO analysis showed that the enriched biological processes (BP) were proteolysis, the G-protein coupled receptor signaling pathway, and regulation of insulin secretion (Fig. 5). G-protein coupled receptor signaling mediates kidney dysfunction [29]. Also, elevated circulating levels of urea in chronic kidney disease can cause dysfunction of insulin secretion [30]. For molecular function (MF), the enriched terms included protein kinase binding and peptidase activity. The most enriched cellular component (CC) term was the extracellular region. KEGG analysis found that the selected genes were mostly enriched in neuroactive ligand-receptor interaction.
GO and KEGG pathway enrichment analysis of selected genes
In this study, we presented an effective computational framework with a higher capability to discriminate the stage of ccRCC tumor samples. Previous work identified a gene panel that can use gene expression data to effectively distinguish between early- and late-stage ccRCC patients [11]. Different machine learning algorithms have also been applied [9, 11]. However, given the selected gene set, we speculated that the prediction performance could be improved with better feature processing methods. The major contributions of the proposed method are (1) an improved feature preprocessing method that discretizes gene expression data through Chi-merge binning and WOE encoding; (2) gene panel selection through FCBFSearch, joint statistical measures (IV, the linear correlation coefficient and VIF), and logistic regression-based feature selection, during which we eliminated noisy and extraneous genetic features and finally obtained a hub gene set (FJL-set) consisting of 23 genes; (3) validation of the performance of machine learning algorithms, where our model achieved higher predictive accuracy than baseline models while using fewer selected genes; and (4) analysis of the genes' functions, which showed that many of the selected genes have been associated with cancer in existing research.
There are two main directions of our future work. We will first try other basic feature selection methods other than FCBFSearch on the whole gene set, leading to more accurate classifiers. Then this discrimination algorithm will be applied to other diseases and datasets. By doing so, we will be able to validate the generalization ability of our model.
All code is available at https://github.com/lfj95/FJL-model.
Hakimi Ari A, Pham CG, Hsieh JJ. A clear picture of renal cell carcinoma. Nat Genet. 2013;45(8):849.
Cancer Genome Atlas Research Network. Comprehensive molecular characterization of clear cell renal cell carcinoma. Nature. 2013;499(7456):43.
Ljungberg B, et al. Guidelines on renal cell carcinoma. European Association of Urology (2013): 5–56.
Fitzmaurice C, et al. Global, regional, and national cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life-years for 32 cancer groups, 1990 to 2015: a systematic analysis for the global burden of disease study. JAMA Oncol. 2017;3(4):524–48.
Karakiewicz PI, et al. Multi-institutional validation of a new renal cancer-specific survival nomogram. J Clin Oncol. 2007;25:1316.
Pantuck AJ, Zisman A, Belldegrun AS. The changing natural history of renal cell carcinoma. J Urol. 2001;166(5):1611–23.
Wood CG. Multimodal approaches in the management of locally advanced and metastatic renal cell carcinoma: combining surgery and systemic therapies to improve patient outcome. Clin Cancer Res. 2007;13:697s–702s. Singh NP, Bapi RS, Vinod PK. Machine learning models to predict the progression from early to late stages of papillary renal cell carcinoma. Comput Biol Med. 2018;100:92–9.
Muselaers CHJ, et al. Indium-111-labeled girentuximab immunoSPECT as a diagnostic tool in clear cell renal cell carcinoma. Eur Urol. 2013;63:1101–6.
Rahimi A, Gönen M. Discriminating early-and late-stage cancers using multiple kernel learning on gene sets. Bioinformatics. 2018;34(13):i412–21.
Jagga Z, Gupta D. Classification models for clear cell renal carcinoma stage progression, based on tumor RNAseq expression trained supervised machine learning algorithms. BMC Proceedings. Vol. 8. No. 6. BioMed Central, 2014.
Bhalla, Sherry, et al. Gene expression-based biomarkers for discriminating early and late stage of clear cell renal cancer. Sci Rep. 2017;7:44997.
Hubbard T, et al. The Ensembl genome database project. Nucleic Acids Res. 2002;30(1):38–41.
Safran M, et al. GeneCards version 3: the human gene integrator. Database. 2010;2010.
Kanehisa M. The KEGG database. In silico simulation of biological processes. 2002;247(914):91–103.
Ashburner M, et al. Gene ontology: tool for the unification of biology. Nat Genet. 2000;25(1):25.
Yoo HM, Kang SH, Kim JY, Lee JE, Seong MW, Lee SW, Ka SH, Sou YS, Komatsu M, Tanaka K, Lee ST, Noh DY, Baek SH, Jeon YJ, Chung CH. Modification of ASC1 by UFM1 is crucial for ERα transactivation and breast cancer development. Mol Cell. 2014 Oct 23;56(2):261–74. https://doi.org/10.1016/j.molcel.2014.08.007.
Wiley SZ, Sriram K, Liang W, Chang SE, French R, McCann T, Sicklick J, Nishihara H, Lowy AM, Insel PA. GPR68, a proton-sensing GPCR, mediates interaction of cancer-associated fibroblasts and cancer cells. FASEB J. 2018 Mar;32(3):1170–83. https://doi.org/10.1096/fj.201700834R.
Halstead AM, Kapadia CD, Bolzenius J, Chu CE, Schriefer A, Wartman LD, Bowman GR, Arora VK. Bladder-cancer-associated mutations in RXRA activate peroxisome proliferator-activated receptors to drive urothelial proliferation. Elife. 2017 Nov 16;6. doi: https://doi.org/10.7554/eLife.30862.
Scholl UI, Goh G, Stölting G, de Oliveira RC, Choi M, Overton JD, et al. Somatic and germline CACNA1D calcium channel mutations in aldosterone-producing adenomas and primary aldosteronism. Nat Genet. 2013;45(9):1050–4. https://doi.org/10.1038/ng.2695.
Sharifi M, Moridnia A. Apoptosis-inducing and antiproliferative effect by inhibition of miR-182-5p through the regulation of CASP9 expression in human breast cancer. Cancer Gene Ther. 2017;24(2):75–82. https://doi.org/10.1038/cgt.2016.79.
He HL, Lee YE, Shiue YL, Lee SW, Lin LC, Chen TJ, et al. PLA2G2A overexpression is associated with poor therapeutic response and inferior outcome in rectal cancer patients receiving neoadjuvant concurrent chemoradiotherapy. Histopathology. 2015 Jun;66(7):991–1002. https://doi.org/10.1111/his.12613.
Czarny P, Pawlowska E, Bialkowska-Warzecha J, Kaarniranta K, Blasiak J. Autophagy in DNA damage response. Int J Mol Sci. 2015;16(2):2641–62. https://doi.org/10.3390/ijms16022641.
Alatrash G, Garber HR, Zhang M, Sukhumalchandra P, Qiu Y, Jakher H, et al. Cathepsin G is broadly expressed in acute myeloid leukemia and is an effective immunotherapeutic target. Leukemia. 2017;31(1):234–7. https://doi.org/10.1038/leu.2016.249.
Yamazaki T, Aoki Y. Cathepsin G enhances human natural killer cytotoxicity. Immunology. 1998;93(1):115–21. https://doi.org/10.1046/j.1365-2567.1998.00397.x.
Rumbajan JM, et al. The HUS1B promoter is hypomethylated in the placentas of low-birth-weight infants. Gene. 2016;583(2):141–146. https://doi.org/10.1016/j.gene.2016.02.025.
Huang R, Tian S, Cai R, Sun J, Xia W, Dong X, et al. Saitohin Q7R polymorphism is associated with late-onset Alzheimer's disease susceptibility among caucasian populations: a meta-analysis. J Cell Mol Med. 2017;21(8):1448–56. https://doi.org/10.1111/jcmm.13079.
Wang Z, Nie Z, Chen W, Zhou Z, Kong Q, Seth AK, et al. RNF115/BCA2 E3 ubiquitin ligase promotes breast cancer cell proliferation through targeting p21Waf1/Cip1 for ubiquitin-mediated degradation. Neoplasia. 2013;15(9):1028–35.
Ferretti E, Corcione A, Pistoia V. The IL-31/IL-31 receptor axis: general features and role in tumor microenvironment. J Leukoc Biol. 2017;102(3):711–17. https://doi.org/10.1189/jlb.3MR0117-033R.
Kamal FA, Travers JG, Schafer AE, Ma Q, Devarajan P, Blaxall BC. G protein-coupled receptor-G-protein βγ-subunit signaling mediates renal dysfunction and fibrosis in heart failure. J Am Soc Nephrol. 2017;28(1):197–208. https://doi.org/10.1681/ASN.2015080852.
Koppe L, Nyam E, Vivot K, Manning Fox JE, Dai XQ, Nguyen BN, et al. Urea impairs β cell glycolysis and insulin secretion in chronic kidney disease. J Clin Invest. 2016;126(9):3598–612. https://doi.org/10.1172/JCI86181.
The work presented in this paper was supported in part by the Key Research and Development Program of Shandong Province (2016CYJS01A04), the Major Science and Technology Innovation Project of Shandong Province (2018YFJH0503) and the National Natural Science Foundation of China under Grant 61671278, 81570407 and 81970743.
School of Information Science and Engineering, Shandong University, supported by Shandong Provincial Key Laboratory of Wireless Communication Technologies, Jinan, 250100, China
Fangjun Li, Yunhe Li, Mingqiang Zhang & Dongfeng Yuan
Center for Gene and Immunothererapy, The Second Hospital of Shandong University, Jinan, 250033, China
Mu Yang, Wenjuan Wang & Dongqi Tang
Fangjun Li
Mu Yang
Yunhe Li
Mingqiang Zhang
Wenjuan Wang
Dongfeng Yuan
Dongqi Tang
FL and YL filtered the features and built the predictive model. MY and WW acquired the expression files and clinical data from the public database. MZ performed the statistical calculations in the article. DY and DT designed the experiments and analyzed the results of the model. FL and MY are the major contributors in writing the draft. The author(s) read and approved the final manuscript.
Correspondence to Dongfeng Yuan or Dongqi Tang.
All authors declare that they have no competing interests.
Fangjun Li and Mu Yang are co-first authors.
Table S1. The differences in experimental settings between the compared method in the reference and in this article. Table S2. Gene selection results of FCBFSearch with 10 repetitions of 10-fold cross-validation in the training set. Table S3. Gene selection results of the joint statistical measures; the following 30 genes were removed during this process. Table S4. Functional roles of the 23 hub genes selected ≥ 8 times. Table S5. The performance of the machine learning-based models using FPKM and RSEM values, respectively.
Li, F., Yang, M., Li, Y. et al. An improved clear cell renal cell carcinoma stage prediction model based on gene sets. BMC Bioinformatics 21, 232 (2020). https://doi.org/10.1186/s12859-020-03543-0
Clear cell renal cell carcinoma
Cancer stage
Machine Learning and Artificial Intelligence in Bioinformatics
Cost of electricity by source | Wikipedia audio article
In electrical power generation, the distinct ways of generating electricity incur significantly different costs. Calculations of these costs can be made at the point of connection to a load or to the electricity grid. The cost is typically given per kilowatt-hour or megawatt-hour. It includes the initial capital, discount rate, as well as the costs of continuous operation, fuel, and maintenance. This type of calculation assists policymakers, researchers and others to guide discussions and decision making. The levelized cost of energy (LCOE) is a measure of a power source that allows comparison of different methods of electricity generation on a consistent basis. It is an economic assessment of the average total cost to build and operate a power-generating asset over its lifetime divided by the total energy output of the asset over that lifetime. The LCOE can also be regarded as the average minimum price at which electricity must be sold in order to break even over the lifetime of the project. == Cost factors == While calculating costs, several internal cost factors have to be considered. Note the use of "costs," which is not the actual selling price, since this can be affected by a variety of factors such as subsidies and taxes: Capital costs (including waste disposal and decommissioning costs for nuclear energy) – tend to be low for fossil fuel power stations; high for wind turbines, solar PV (photovoltaics); very high for waste to energy, wave and tidal, solar thermal, and nuclear. Fuel costs – high for fossil fuel and biomass sources, low for nuclear, and zero for many renewables. Fuel costs can vary somewhat unpredictably over the life of the generating equipment, due to political and other factors. Factors such as the costs of waste (and associated issues) and different insurance costs are not included in the following: Works power, own use or parasitic load – that is, the portion of generated power actually used to run the station's pumps and fans – has to be allowed for. To evaluate the total cost of production of electricity, the streams of costs are converted to a net present value using the time value of money. These costs are all brought together using discounted cash flow. === Levelized cost of electricity === The levelized cost of electricity (LCOE), also known as Levelized Energy Cost (LEC), is the net present value of the unit-cost of electricity over the lifetime of a generating asset. It is often taken as a proxy for the average price that the generating asset must receive in a market to break even over its lifetime. It is a first-order economic assessment of the cost competitiveness of an electricity-generating system that incorporates all costs over its lifetime: initial investment, operations and maintenance, cost of fuel, cost of capital. The levelized cost is that value for which an equal-valued fixed revenue delivered over the life of the asset's generating profile would cause the project to break even. This can be roughly calculated as the net present value of all costs over the lifetime of the asset divided by the total electrical energy output of the asset. The levelized cost of electricity (LCOE) is given by:

$$\mathrm{LCOE} = \frac{\text{sum of costs over lifetime}}{\text{sum of electrical energy produced over lifetime}} = \frac{\sum_{t=1}^{n} \dfrac{I_t + M_t + F_t}{(1+r)^t}}{\sum_{t=1}^{n} \dfrac{E_t}{(1+r)^t}}$$

Typically the LCOE is calculated over the design lifetime of a plant, which is usually 20 to 40 years, and given in the units of currency per kilowatt-hour, for example AUD/kWh or EUR/kWh, or per megawatt-hour, for example AUD/MWh (as tabulated below). However, care should be taken in comparing different LCOE studies and the sources of the information, as the LCOE for a given energy source is highly dependent on the assumptions, financing terms and technological deployment analyzed. In particular, the assumption of capacity factor has a significant impact on the calculation of LCOE. Thus, a key requirement for the analysis is a clear statement of the applicability of the analysis based on justified assumptions. Many scholars, such as Paul Joskow, have described limits to the "levelized cost of electricity" metric for comparing new generating sources. In particular, LCOE ignores time effects associated with matching production to demand. This happens at two levels: Dispatchability, the ability of a generating system to come online, go offline, or ramp up or down, quickly as demand swings. The extent to which the availability profile matches or conflicts with the market demand profile. Thermally lethargic technologies like coal and nuclear are physically incapable of fast ramping. Capital intensive technologies such as wind, solar, and nuclear are economically disadvantaged unless generating at maximum availability since the LCOE is nearly all sunk-cost capital investment. Intermittent power sources, such as wind and solar, may incur extra costs associated with needing to have storage or backup generation available. At the same time, intermittent sources can be competitive if they are available to produce when demand and prices are highest, such as solar during summertime mid-day peaks seen in hot countries where air conditioning is a major consumer. Despite these time limitations, leveling costs is often a necessary prerequisite for making comparisons on an equal footing before demand profiles are considered, and the levelized-cost metric is widely used for comparing technologies at the margin, where grid implications of new generation can be neglected. Another limitation of the LCOE metric is the influence of energy efficiency and conservation (EEC). EEC has caused the electricity demand of many countries to remain flat or decline. Considering only the LCOE for utility scale plants will tend to maximise generation and risks overestimating required generation due to efficiency, thus "lowballing" their LCOE. For solar systems installed at the point of end use, it is more economical to invest in EEC first, then solar (resulting in a smaller required solar system than what would be needed without the EEC measures). However, designing a solar system on the basis of LCOE would cause the smaller system LCOE to increase (as the energy generation [measured in kWh] drops faster than the system cost [$]). The whole of system life cycle cost should be considered, not just the LCOE of the energy source. LCOE is not as relevant to end-users as other financial considerations such as income, cashflow, mortgage, leases,
rent, and electricity bills. Comparing solar investments in relation to these can make it easier for end-users to make a decision, or using cost-benefit calculations "and/or an asset's capacity value or contribution to peak on a system or circuit level" === Avoided cost === The US Energy Information Administration has recommended that levelized costs of non-dispatchable sources such as wind or solar may be better compared to the avoided energy cost rather than to the LCOE of dispatchable sources such as fossil fuels or geothermal. This is because introduction of fluctuating power sources may or may not avoid capital and maintenance costs of backup dispatchable sources. Levelized Avoided Cost of Energy (LACE) is the avoided costs from other sources divided by the annual yearly output of the non-dispatchable source However, the avoided cost is much harder to calculate accurately === Marginal cost of electricity === A more accurate economic assessment might be the marginal cost of electricity. This value works by comparing the added system cost of increasing electricity generation from one source versus that from other sources of electricity generation (see Merit Order) === External costs of energy sources === Typically pricing of electricity from various energy sources may not include all external costs – that is, the costs indirectly borne by society as a whole as a consequence of using that energy source. These may include enabling costs, environmental impacts, usage lifespans, energy storage, recycling costs, or beyond-insurance accident effects The US Energy Information Administration predicts that coal and gas are set to be continually used to deliver the majority of the world's electricity. This is expected to result in the evacuation of millions of homes in low-lying areas, and an annual cost of hundreds of billions of dollars' worth of property damage.Furthermore, with a number of island nations becoming slowly submerged underwater due to rising sea levels, massive international climate litigation lawsuits against fossil fuel users are currently beginning in the International Court of Justice.An EU funded research study known as ExternE, or Externalities of Energy, undertaken over the period of 1995 to 2005 found that the cost of producing electricity from coal or oil would double over its present value, and the cost of electricity production from gas would increase by 30% if external costs such as damage to the environment and to human health, from the particulate matter, nitrogen oxides, chromium VI, river water alkalinity, mercury poisoning and arsenic emissions produced by these sources, were taken into account. It was estimated in the study that these external, downstream, fossil fuel costs amount up to 1%–2% of the EU's entire Gross Domestic Product (GDP), and this was before the external cost of global warming from these sources was even included. Coal has the highest external cost in the EU, and global warming is the largest part of that cost.A means to address a part of the external costs of fossil fuel generation is carbon pricing — the method most favored by economics for reducing global-warming emissions. 
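To make the LCOE formula defined earlier concrete, the following minimal Python sketch discounts yearly cost and energy streams and takes their ratio. All input figures (investment I_t, operations and maintenance M_t, fuel F_t, energy output E_t, discount rate r and lifetime n) are illustrative assumptions, not values from the article:

```python
# Minimal LCOE sketch: discount yearly cost and energy streams, then take the ratio.
# I_t = investment, M_t = O&M, F_t = fuel, E_t = energy output, r = discount rate.
# All numbers below are illustrative assumptions, not data from the article.

def lcoe(investment, om, fuel, energy, rate):
    """Return levelized cost (currency per MWh) for yearly streams of equal length."""
    n = len(energy)
    costs = sum((investment[t] + om[t] + fuel[t]) / (1 + rate) ** (t + 1) for t in range(n))
    output = sum(energy[t] / (1 + rate) ** (t + 1) for t in range(n))
    return costs / output

years = 25                                  # assumed plant lifetime
capex = [50_000_000] + [0] * (years - 1)    # up-front build cost booked in year 1
om    = [1_000_000] * years                 # fixed O&M per year
fuel  = [0] * years                         # e.g. wind/solar: no fuel cost
energy = [120_000] * years                  # MWh generated per year

print(f"LCOE ~ {lcoe(capex, om, fuel, energy, 0.06):.1f} currency units/MWh")
```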
Carbon pricing charges those who emit carbon dioxide (CO2) for their emissions. That charge, called a 'carbon price', is the amount that must be paid for the right to emit one tonne of CO2 into the atmosphere. Carbon pricing usually takes the form of a carbon tax or a requirement to purchase permits to emit (also called "allowances"). Depending on the assumptions about possible accidents and their probabilities, external costs for nuclear power vary significantly and can reach between 0.2 and 200 ct/kWh. Furthermore, nuclear power is working under an insurance framework
that limits or structures accident liabilities in accordance with the Paris convention on nuclear third-party liability, the Brussels supplementary convention, and the Vienna convention on civil liability for nuclear damage and in the U.S. the Price-Anderson Act. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity; but the cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a CBO study.These beyond-insurance costs for worst-case scenarios are not unique to nuclear power, as hydroelectric power plants are similarly not fully insured against a catastrophic event such as the Banqiao Dam disaster, where 11 million people lost their homes and from 30,000 to 200,000 people died, or large dam failures in general. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state.Because externalities are diffuse in their effect, external costs can not be measured directly, but must be estimated. One approach estimate external costs of environmental impact of electricity is the Methodological Convention of Federal Environment Agency of Germany That method arrives at external costs of electricity from lignite at 10.75 Eurocent/kWh, from hard coal 8.94 Eurocent/kWh, from natural gas 4.91 Eurocent/kWh, from photovoltaic 1.18 Eurocent/kWh, from wind 0.26 Eurocent/kWh and from hydro 0.18 Eurocent/kWh. For nuclear the Federal Environment Agency indicates no value, as different studies have results that vary by a factor of 1,000. It recommends the nuclear given the huge uncertainty, with the cost of the next inferior energy source to evaluate Based on this recommendation the Federal Environment Agency, and with their own method, the Forum Ecological-social market economy, arrive at external environmental costs of nuclear energy at 10.7 to 34 ct/kWh === Additional cost factors === Calculations often do not include wider system costs associated with each type of plant, such as long distance transmission connections to grids, or balancing and reserve costs Calculations do not include externalities such as health damage by coal plants, nor the effect of CO2 emissions on the climate change, ocean acidification and eutrophication, ocean current shifts. Decommissioning costs of nuclear plants are usually not included (The USA is an exception, because the cost of decommissioning is included in the price of electricity, per the Nuclear Waste Policy Act), is therefore not full cost accounting These types of items can be explicitly added as necessary depending on the purpose of the calculation. It has little relation to actual price of power, but assists policy makers and others to guide discussions and decision making.These are not minor factors but very significantly affect all responsible power decisions: Comparisons of life-cycle greenhouse gas emissions show coal, for instance, to be radically higher in terms of GHGs than any alternative. Accordingly, in the analysis below, carbon captured coal is generally treated as a separate source rather than being averaged in with other coal Other environmental concerns with electricity generation include acid rain, ocean acidification and effect of coal extraction on watersheds Various human health concerns with electricity generation, including asthma and smog, now dominate decisions in developed nations that incur health care costs publicly. 
A Harvard University Medical School study estimates the US health costs of coal alone at between 300 and 500 billion US dollars annually
While cost per kWh of transmission varies drastically with distance, the long complex projects required to clear or even upgrade transmission routes make even attractive new supplies often uncompetitive with conservation measures (see below), because the timing of payoff must take the transmission upgrade into account == Current global studies == === Lazard (2018) === In November, 2018, Lazard found that not only are utility-scale solar and wind cheaper than fossil fuels, "[i]n some scenarios, alternative energy costs have decreased to the point that they are now at or below the marginal cost of conventional generation." Overall, Lazard found "The low end levelized cost of onshore wind-generated energy is $29/MWh, compared to an average illustrative marginal cost of $36/MWh for coal. The levelized cost of utility-scale solar is nearly identical to the illustrative marginal cost of coal, at $36/MWh. This comparison is accentuated when subsidizing onshore wind and solar, which results in levelized costs of energy of $14/MWh and $32/MWh, respectively. … The mean levelized cost of energy of utility-scale PV technologies is down approximately 13% from last year and the mean levelized cost of energy of onshore wind has declined almost 7%." === Bloomberg (2018) === Bloomberg New Energy Finance estimates a "global LCOE for onshore wind [of] $55 per megawatt-hour, down 18% from the first six months of [2017], while the equivalent for solar PV without tracking systems is $70 per MWh, also down 18%." Bloomberg does not provide its global public LCOEs for fossil fuels, but it notes in India they are significantly more expensive: "BNEF is now showing benchmark LCOEs for onshore wind of just $39 per MWh, down 46% on a year ago, and for solar PV at $41, down 45%. By comparison, coal comes in at $68 per MWh, and combined-cycle gas at $93." === IRENA (2018) === The International Renewable Energy Agency (IRENA) released a study based on comprehensive international datasets in January 2018 which projects the fall by 2020 of the kilowatt cost of electricity from utility scale renewable projects such as onshore wind farms to a point equal or below that of electricity from conventional sources === Banks (2018) === The European Bank for Reconstruction and Development (EBRD) says that "renewables are now cheapest energy source", elaborating: "the Bank believes that renewable energy markets in many of the countries where it invests have reached a stage where the introduction of competitive auctions will lead both to a steep drop in electricity prices and an increase in investment." The World Bank (World Bank) President Jim Yong Kim agreed on 10 October 2018: "We are required by our by-laws to go with the lowest cost option, and renewables have now come below the cost of [fossil fuels]." == Regional and historical studies == === Australia ===
According to various studies, the cost for wind and solar has dramatically reduced since 2006. For example, the Australian Climate Council states that over the 5 years between 2009–2014 solar costs fell by 75% making them comparable to coal, and are expected to continue dropping over the next 5 years by another 45% from 2014 prices. They also found that wind has been cheaper than coal since 2013, and that coal and gas will become less viable as subsidies are withdrawn and there is the expectation that they will eventually have to pay the costs of pollution.A CO2CRC report, printed on the 27th of November 2015, titled "Wind, solar, coal and gas to reach similar costs by 2030:", provides the following updated situation in Australia. "The updated LCOE analysis finds that in 2015 natural gas combined cycle and supercritical pulverised coal (both black and brown) plants have the lowest LCOEs of the technologies covered in the study. Wind is the lowest cost large-scale renewable energy source, while rooftop solar panels are competitive with retail electricity prices. By 2030 the LCOE ranges of both conventional coal and gas technologies as well as wind and large-scale solar converge to a common range of A$50 to A$100 per megawatt hour." An updated report, posted on the 27th of September 2017, titled "Renewables will be cheaper than coal in the future. Here are the numbers", indicated that a 100% renewables system is competitive with new-build supercritical (ultrasupercritical) coal, which, according to the Jacobs calculations in the report link above, would come in at around A$75(80) per MWh between 2020 and 2050 This projection for supercritical coal is consistent with other studies by the CO2CRC in 2015 (A$80 per MWh) and used by CSIRO in 2017 (A$65-80 per MWh) === France === The International Energy Agency and EDF have estimated for 2011 the following costs. For nuclear power, they include the costs due to new safety investments to upgrade the French nuclear plant after the Fukushima Daiichi nuclear disaster; the cost for those investments is estimated at 4 €/MWh. Concerning solar power, the estimate of 293 €/MWh is for a large plant capable of producing in the range of 50–100 GWh/year located in a favorable location (such as in Southern Europe). For a small household plant that can produce around 3 MWh/year, the cost is between 400 and 700 €/MWh, depending on location. Solar power was by far the most expensive renewable source of electricity among the technologies studied, although increasing efficiency and longer lifespan of photovoltaic panels together with reduced production costs have made this source of energy more competitive since 2011. By 2017, the cost of photovoltaic solar power had decreased to less than 50 €/MWh === Germany === In November 2013, the Fraunhofer Institute for Solar Energy Systems ISE assessed the levelised generation costs for newly built power plants in the German electricity sector PV systems reached LCOE between 0.078 and 0.142 Euro/kWh in the third quarter of 2013,
depending on the type of power plant (ground-mounted utility-scale or small rooftop solar PV) and average German insolation of 1000 to 1200 kWh/m² per year (GHI). There are no LCOE-figures available for electricity generated by recently built German nuclear power plants as none have been constructed since the late 1980s An update of the ISE study was published in March 2018 === Japan === A 2010 study by the Japanese government (pre-Fukushima disaster), called the Energy White Paper, concluded the cost for kilowatt hour was ¥49 for solar, ¥10 to ¥14 for wind, and ¥5 or ¥6 for nuclear power. Masayoshi Son, an advocate for renewable energy, however, has pointed out that the government estimates for nuclear power did not include the costs for reprocessing the fuel or disaster insurance liability. Son estimated that if these costs were included, the cost of nuclear power was about the same as wind power === United Kingdom === The Institution of Engineers and Shipbuilders in Scotland commissioned a former Director of Operations of the British National Grid, Colin Gibson, to produce a report on generation levelised costs that for the first time would include some of the transmission costs as well as the generation costs. This was published in December 2011. The institution seeks to encourage debate of the issue, and has taken the unusual step among compilers of such studies of publishing a spreadsheet.On 27 February 2015 Vattenfall Vindkraft AS agreed to build the Horns Rev 3 offshore wind farm at a price of 10.31 Eurocent per kWh. This has been quoted as below £100 per MWh In 2013 in the United Kingdom for a new-to-build nuclear power plant (Hinkley Point C: completion 2023), a feed-in tariff of £92.50/MWh (around 142 USD/MWh) plus compensation for inflation with a running time of 35 years was agreed.The Department for Business, Energy and Industrial Strategy (BEIS) publishes regular estimates of the costs of different electricity generation sources, following on the estimates of the merged Department of Energy and Climate Change (DECC). Levelised cost estimates for new generation projects begun in 2015 are listed in the table below === United States === ==== Energy Information Administration ==== The following data are from the Energy Information Administration's (EIA) Annual Energy Outlook released in 2015 (AEO2015). They are in dollars per megawatt-hour (2013 USD/MWh). These figures are estimates for plants going into service in 2020. The LCOE below is calculated based off a 30-year recovery period using a real after tax weighted average cost of capital (WACC) of 6.1%. For carbon intensive technologies 3 percentage points are added to the WACC (This is approximately equivalent fee of $15 per metric ton of carbon dioxide CO2) Since 2010, the US Energy Information Administration (EIA) has published the Annual Energy Outlook (AEO), with yearly LCOE-projections for future utility-scale facilities to be commissioned in about five years' time. In 2015, EIA has been criticized by the Advanced Energy Economy (AEE) Institute after its release of the AEO 2015-report to "consistently underestimate
the growth rate of renewable energy, leading to 'misperceptions' about the performance of these resources in the marketplace". AEE points out that the average power purchase agreement (PPA) for wind power was already at $24/MWh in 2013. Likewise, PPA for utility-scale solar PV are seen at current levels of $50–$75/MWh These figures contrast strongly with EIA's estimated LCOE of $125/MWh (or $114/MWh including subsidies) for solar PV in 2020 The electricity sources which had the most decrease in estimated costs over the period 2010 to 2017 were solar photovoltaic (down 81%), onshore wind (down 63%) and advanced natural gas combined cycle (down 32%) For utility-scale generation put into service in 2040, the EIA estimated in 2015 that there would be further reductions in the constant-dollar cost of concentrated solar power (CSP) (down 18%), solar photovoltaic (down 15%), offshore wind (down 11%), and advanced nuclear (down 7%). The cost of onshore wind was expected to rise slightly (up 2%) by 2040, while natural gas combined cycle electricity was expected to increase 9% to 10% over the period ==== NREL OpenEI (2015) ==== OpenEI, sponsored jointly by the US DOE and the National Renewable Energy Laboratory (NREL), has compiled a historical cost-of-generation database covering a wide variety of generation sources. Because the data is open source it may be subject to frequent revision Note: Only Median value = only one data point Only Max + Min value = Only two data points ==== California Energy Commission (2014) ==== LCOE data from the California Energy Commission report titled "Estimated Cost of New Renewable and Fossil Generation in California". The model data was calculated for all three classes of developers: merchant, investor-owned utility (IOU), and publicly owned utility (POU) ==== Lazard (2015) ==== In November 2015, the investment bank Lazard headquartered in New York, published its ninth annual study on the current electricity production costs of photovoltaics in the US compared to conventional power generators. The best large-scale photovoltaic power plants can produce electricity at 50 USD per MWh. The upper limit at 60 USD per MWh. In comparison, coal-fired plants are between 65 USD and $150 per MWh, nuclear power at 97 USD per MWh Small photovoltaic power plants on roofs of houses are still at 184–300 USD per MWh, but which can do without electricity transport costs. Onshore wind turbines are 32–77 USD per MWh. One drawback is the intermittency of solar and wind power. The study suggests a solution in batteries as a storage, but these are still expensive so far.Lazard's long standing Levelized Cost of Energy (LCOE) report is widely considered and industry benchmark In 2015 Lazard published its inaugural Levelized Cost of Storage (LCOS) report, which was developed by the investment bank Lazard in collaboration
with the energy consulting firm, Enovation.Below is the complete list of LCOEs by source from the investment bank Lazard NOTE: ** Battery Storage is no longer include in this report (2015). It has been rolled into its own separate report LCOS 1.0, developed in consultation with Enovation Partners (See charts below) Below are the LCOSs for different battery technologies. This category has traditionally been filled by Diesel Engines. These are "Behind the meter" applications Below are the LCOSs for different battery technologies. This category has traditionally been filled by Natural Gas Engines. These are "In front of the meter" applications ==== Lazard (2016) ==== On December 15, 2016 Lazard released version 10 of their LCOE report and version 2 of their LCOS report ==== Lazard (2017) ==== On November 2, 2017 the investment bank Lazard released version 11 of their LCOE report and version 3 of their LCOS report Below are the unsubsidized LCOSs for different battery technologies for "Behind the Meter" (BTM) applications Below are the Unsubsidized LCOSs for different battery technologies "Front of the Meter" (FTM) applications Note: Flow battery value range estimates === Global === ==== IEA and NEA (2015) ==== The International Energy Agency and the Nuclear Energy Agency published a joint study in 2015 on LCOE data internationally === Other studies and analysis === ==== Buffett Contract (2015) ==== In a power purchase agreement in the United States in July 2015 for a period of 20 years of solar power will be paid 3.87 UScent per kilowatt hour (38.7 USD/MWh). The solar system, which produces this solar power, is in Nevada (USA) and has 100 MW capacity ==== Sheikh Mohammed Bin Rashid solar farm (2016) ==== In the spring of 2016 a winning bid of 2.99 US cents per kilowatt-hour of photovoltaic solar energy was achieved for the next (800MW capacity) phase of the Sheikh Mohammed Bin Rashid solar farm in Dubai ==== Brookings Institution (2014) ==== In 2014, the Brookings Institution published The Net Benefits of Low and No-Carbon Electricity Technologies which states, after performing an energy and emissions cost analysis, that "The net benefits of new nuclear, hydro, and natural gas combined cycle plants far outweigh the net benefits of new wind or solar plants", with the most cost effective low carbon power technology being determined to be nuclear power ==== Brazilian electricity mix: the Renewable and Non-renewable Exergetic Cost (2014) ====
As long as exergy stands for the useful energy required for an economic activity to be accomplished, it is reasonable to evaluate the cost of the energy on the basis of its exergy content Besides, as exergy can be considered as measure of the departure of the environmental conditions, it also serves as an indicator of environmental impact, taking into account both the efficiency of supply chain (from primary exergy inputs) and the efficiency of the production processes In this way, exergoeconomy can be used to rationally distribute the exergy costs and CO2 emission cost among the products and by-products of a highly integrated Brazilian electricity mix. Based on the thermoeconomy methodologies, some authors have shown that exergoeconomy provides an opportunity to quantify the renewable and non-renewable specific exergy consumption; to properly allocate the associated CO2 emissions among the streams of a given production route; as well as to determine the overall exergy conversion efficiency of the production processes Accordingly, the non-renewable unit exergy cost (cNR) [kJ/kJ] is defined as the rate of non-renewable exergy necessary to produce one unit of exergy rate/flow rate of a substance, fuel, electricity, work or heat flow, whereas the Total Unit Exergy Cost (cT) includes the Renewable (cR) and Non-Renewable Unit Exergy Costs. Analogously, the CO2 emission cost (cCO2) [gCO2/kJ] is defined as the rate of CO2 emitted to obtain one unit of exergy rate/flow rate == Renewables == === Photovoltaics === Photovoltaic prices have fallen from $76.67 per watt in 1977 to nearly $0.23 per watt in August 2017, for crystalline silicon solar cells. This is seen as evidence supporting Swanson's law, which states that solar cell prices fall 20% for every doubling of cumulative shipments. The famous Moore's law calls for a doubling of transistor count every two years By 2011, the price of PV modules per MW had fallen by 60% since 2008, according to Bloomberg New Energy Finance estimates, putting solar power for the first time on a competitive footing with the retail price of electricity in some sunny countries; an alternative and consistent price decline figure of 75% from 2007 to 2012 has also been published, though it is unclear whether these figures are specific to the United States or generally global The levelised cost of electricity (LCOE) from PV is competitive with conventional electricity sources in an expanding list of geographic regions, particularly when the time of generation is included, as electricity is worth more during the day than at night. There has been fierce competition in the supply chain, and further improvements in the levelised cost of energy for solar lie ahead, posing a growing threat to the dominance of fossil fuel generation sources in the next few years. As time progresses, renewable energy technologies generally get cheaper, while fossil fuels generally get more expensive: The less solar power costs, the more favorably it compares to conventional power, and the more attractive it becomes to utilities and energy users around the globe. Utility-scale solar power [could in 2011] be delivered in California at prices well below $100/MWh ($0.10/kWh) less than most other peak generators, even those running on low-cost natural gas. 
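As a rough illustration of the Swanson's law relationship mentioned above (module prices falling about 20% for every doubling of cumulative shipments), the sketch below projects prices under that assumption; the starting price and shipment volumes are placeholders, not data from the article:

```python
# Illustrative Swanson's-law projection: price falls ~20% per doubling of
# cumulative shipments. Starting price and volumes are assumed placeholders.
import math

def swanson_price(p0, v0, v, learning_rate=0.20):
    """Project module price at cumulative volume v, given price p0 at volume v0."""
    doublings = math.log2(v / v0)
    return p0 * (1 - learning_rate) ** doublings

p0, v0 = 0.75, 100.0   # assumed $/W at an assumed 100 GW cumulative shipments
for v in (200, 400, 800):
    print(f"at {v:>3.0f} GW cumulative: ~${swanson_price(p0, v0, v):.2f}/W")
```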
Lower solar module costs also stimulate demand from consumer markets where the cost of solar compares very favourably to retail electric rates In the year 2015, First Solar agreed to supply solar power at 3.87 cents/kWh levelised price from its 100 MW Playa Solar 2 project which
is far cheaper than the electricity sale price from conventional electricity generation plants From January 2015 through May 2016, records have continued to fall quickly, and solar electricity prices, which have reached levels below 3 cents/kWh, continue to fall. In August 2016, Chile announced a new record low contract price to provide solar power for $29.10 per megawatt-hour (MWh). In September 2016, Abu Dhabi announced a new record breaking bid price, promising to provide solar power for $24.2 per MWh In October 2017, Saudi Arabia announced a further low contract price to provide solar power for $17.90 per MWh.With a carbon price of $50/ton (which would raise the price of coal-fired power by 5c/kWh), solar PV is cost-competitive in most locations The declining price of PV has been reflected in rapidly growing installations, totaling a worldwide cumulative capacity of 297 GW by end 2016. According to some estimates total investment in renewables for 2011 exceeded investment in carbon-based electricity generation.In the case of self consumption, payback time is calculated based on how much electricity is not brought from the grid. Additionally, using PV solar power to charge DC batteries, as used in Plug-in Hybrid Electric Vehicles and Electric Vehicles, leads to greater efficiencies, but higher costs. Traditionally, DC generated electricity from solar PV must be converted to AC for buildings, at an average 10% loss during the conversion. Inverter technology is rapidly improving and current equipment has reached 99% efficiency for small scale residential, while commercial scale three-phase equipment can reach well above 98% efficiency However, an additional efficiency loss occurs in the transition back to DC for battery driven devices and vehicles, and using various interest rates and energy price changes were calculated to find present values that range from $2,057.13 to $8,213.64 (analysis from 2009).It is also possible to combine solar PV with other technologies to make hybrid systems, which enable more stand alone systems. The calculation of LCOEs becomes more complex, but can be done by aggregating the costs and the energy produced by each component. As for example, PV and cogen and batteries while reducing energy- and electricity-related greenhouse gas emissions as compared to conventional sources === Solar thermal === LCOE of solar thermal power with energy storage which can operate round the clock on demand, has fallen to AU$78/MWh (US$61/MWh) in August 2017. Though solar thermal plants with energy storage can work as stand alone systems, combination with solar PV power can deliver further cheaper power. Cheaper and dispatchable solar thermal storage power need not depend on costly or polluting coal/gas/oil/nuclear based power generation for ensuring stable grid operation.When a solar thermal storage plant is forced to idle due to lack of sunlight locally during cloudy days, it is possible to consume the cheap excess infirm power from solar PV, wind and hydro power plants (similar to a lesser efficient, huge capacity and low cost battery storage system) by heating the hot molten salt to higher temperature for converting the stored thermal energy in to electricity during the peak demand hours when the electricity sale price is profitable === Wind power === Current land-based windIn the windy great plains expanse of the central United States
new-construction wind power costs in 2017 are compellingly below costs of continued use of existing coal burning plants. Wind power can be contracted via a power purchase agreement at two cents per kilowatt hour while the operating costs for power generation in existing coal-burning plants remain above three cents Current offshore windIn 2016 the Norwegian Wind Energy Association (NORWEA) estimated the LCoE of a typical Norwegian wind farm at 44 €/MWh, assuming a weighted average cost of capital of 8% and an annual 3,500 full load hours, i.e. a capacity factor of 40%. NORWEA went on to estimate the LCoE of the 1 GW Fosen Vind onshore wind farm which is expected to be operational by 2020 to be as low as 35 €/MWh to 40 €/MWh. In November 2016, Vattenfall won a tender to develop the Kriegers Flak windpark in the Baltic Sea for 49.9 €/MWh, and similar levels were agreed for the Borssele offshore wind farms. As of 2016, this is the lowest projected price for electricity produced using offshore wind Historic levelsIn 2004, wind energy cost a fifth of what it did in the 1980s, and some expected that downward trend to continue as larger multi-megawatt turbines were mass-produced As of 2012 capital costs for wind turbines are substantially lower than 2008–2010 but are still above 2002 levels. A 2011 report from the American Wind Energy Association stated, "Wind's costs have dropped over the past two years, in the range of 5 to 6 cents per kilowatt-hour recently…. about 2 cents cheaper than coal-fired electricity, and more projects were financed through debt arrangements than tax equity structures last year…. winning more mainstream acceptance from Wall Street's banks…. Equipment makers can also deliver products in the same year that they are ordered instead of waiting up to three years as was the case in previous cycles…. 5,600 MW of new installed capacity is under construction in the United States, more than double the number at this point in 2010. 35% of all new power generation built in the United States since 2005 has come from wind, more than new gas and coal plants combined, as power providers are increasingly enticed to wind as a convenient hedge against unpredictable commodity price moves."This cost has additionally reduced as wind turbine technology has improved. There are now longer and lighter wind turbine blades, improvements in turbine performance and increased power generation efficiency. Also, wind project capital and maintenance costs have continued to decline. For example, the wind industry in the USA in 2014 was able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This opened up new opportunities in Indiana, Michigan, and Ohio. The price of power from wind turbines built 300 to 400 ft (91 to 122 m) above the ground can now compete with conventional fossil fuels like coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount of wind energy in their portfolio, saying it is their cheapest option == See also == == Further reading == Economic Value of U.S. Fossil Fuel Electricity Health Impacts. United States Environmental Protection Agency The Hidden Costs of Electricity: Comparing
the Hidden Costs of Power Generation Fuels. Civil Society Institute. Lazard's Levelized Cost of Energy Analysis – Version 11.0 (Nov. 2017).
Circular Motion
Some important terms in Circular Motion
A circle can be thought of as a polygon with an infinite number of sides. So, when a particle moves in a circular path, it changes its direction at each point, i.e. an infinite number of times.
The motion of a point particle along a circle.
Angular displacement: It is the angle described by the radius when a particle moves from one point to another in a circular path.
Here, θ is the angular displacement.
The relation between angular displacement and linear displacement
\(\theta = \frac Sr\), where S is the arc length and r is the radius of the circular path.
Angular velocity in a circular motion.
Angular velocity: The rate of change of angular displacement is known as angular velocity.
$$ \omega = \frac {\Delta \theta}{\Delta t} $$
The unit of angular velocity is radian per second (rad/s).
The relation between linear velocity and angular velocity in the circular motion.
Relation between angular velocity and linear velocity
$$\text {As} \: \theta =\frac Sr, \quad \Delta \theta = \frac {\Delta S}{r} $$
$$ \text {or,}\:\frac {\Delta \theta}{\Delta t} = \frac {\Delta S}{\Delta t} \times \frac 1r $$
$$ \text {or,}\:\omega = V \times \frac 1r $$
$$ \boxed {\therefore\: V = \omega r} $$
Instantaneous angular velocity: It is the angular velocity at a particular instant of time.
$$\omega = \lim \limits_{\Delta t \to 0} \frac {\Delta \theta}{\Delta t} = \frac {d \theta}{dt}$$
Angular Acceleration: The rate of change of angular velocity with respect to time.
$$\alpha = \frac {\Delta \omega}{\Delta t}$$
The unit of angular acceleration is rad/s\(^2\).
Relation between angular acceleration and linear acceleration
$$V = \omega \times r$$
$$ \text {or,} \Delta V = \Delta \omega \times r $$
$$ \text {or,} \frac {\Delta V} {\Delta t} = \frac {\Delta \omega}{\Delta t} \times r $$
$$ \boxed {\therefore a = \alpha r}$$
Frequency: Number of rotation completed in one second is known as frequency. Its notation is 'f' and the unit is 'Hertz'.
$$ f =\frac {\text {number of rotations}}{\text {time taken (in seconds)}} $$
Time period: Time taken to complete one rotation is known as time period. It is denoted by 'T' and its unit is second.
Relation between f and T
f rotations are completed in 1 second
1 rotation is completed in \(\frac 1f\) second
$$\therefore T = \frac 1f$$
Expression for Centripetal Acceleration
Consider a particle moving with constant speed v in a circular path of radius r with centre at O. At a point A on its path, its velocity is \(\overrightarrow {V_A}\), directed along the tangent AC. After a short interval of time \(\Delta t\), the body is at B, where its velocity is \(\overrightarrow {V_B}\), directed along the tangent BD. The direction of the velocity at B is different from that at A, so there is a change in velocity from A to B, which we now have to find.
(a) Acceleration in a circle (b) Change in velocity
$$\Delta V = \overrightarrow {V_B} - \overrightarrow {V_A} = \overrightarrow {V_B} + (-\overrightarrow {V_A})$$
The magnitude and direction of this change in velocity are found using the triangle law of vector addition: \(\vec V_B\) is represented in magnitude and direction by the side PQ, \(-\vec V_A\) by the side QR, and the resultant by the side PR of triangle PQR, as shown in figure (b).
$$\text {i.e} \overrightarrow {PR} = \overrightarrow {PQ} + \overrightarrow {QR}$$
$$\Delta V = \overrightarrow {V_B} - \overrightarrow {V_A}$$
Since the vector PR is directed towards the centre of the circle, the change in velocity, and hence the acceleration, is directed towards the centre.
Using triangle law of vector addition,
$$R = \sqrt {P^2 +Q^2 + 2PQ\cos\theta }$$
$$\Delta V = \sqrt {V_B^2 + V_A^2 + 2V_AV_B \cos (180 - \Delta \theta)}$$
$$ = \sqrt {V^2 + V^2 - 2V^2\cos \Delta \theta } $$
$$= \sqrt {2V^2 (1 - \cos\Delta\theta)}$$
$$=\sqrt {2V^2 \times 2\sin^2 \frac {\Delta \theta}{2}}$$
$$\Delta V = 2V\sin \frac {\Delta \theta}{2}\dots (i)$$
Now centripetal acceleration
$$\frac {\Delta V}{\Delta t} = \frac {2V\sin \frac {\Delta \theta}{2}}{\Delta t} $$
$$= 2V \cdot \frac {\sin \frac {\Delta \theta}{2}}{\frac {\Delta \theta}{2}} \cdot \frac {\frac {\Delta \theta}{2}}{\Delta t}$$
Since A and B are neighbouring points,
$$\Delta t \rightarrow 0, \Delta \theta \rightarrow 0, \frac {\Delta \theta}{2} \rightarrow 0$$
$$\therefore a = \lim \limits_{\Delta t \to 0} \frac {2 V \Delta \theta}{2 \times \Delta t }$$
$$a = V\omega $$
$$= V \times\frac {V}{r}$$
$$= \frac {V^2}{r} \dots (ii)$$
For \(\Delta t \rightarrow 0\), B is very close to A and \(\Delta \theta\) becomes very small, so that \(\sin \frac {\Delta \theta}{2} \approx \frac {\Delta \theta}{2}\).
Since \(v = r\omega\), the centripetal acceleration can also be written as
$$\boxed {\therefore a = \omega ^2 r}$$
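As a quick illustrative example (the numbers here are chosen arbitrarily, not taken from the text above): for a particle moving with a constant speed of 4 m/s on a circle of radius 2 m,

$$\omega = \frac vr = \frac 42 = 2\ \text{rad/s}, \qquad a = \frac {v^2}{r} = \omega^2 r = \frac {4^2}{2} = 8\ \text{m/s}^2,$$

and this acceleration is directed towards the centre of the circle.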
Circular Motion: Short Questions and Numericals
If the earth stops rotating, the apparent value of 'g' on its surface will ______.
decrease everywhere
remain the same everywhere
increase everywhere
increase at some places and remain the same at some other places
In periodic motion, if the angular velocity increases, the time period ______.
does not exist
remains constant
An object of mass 1 kg is rotating in a circle of radius 2 m with a constant speed of 3 m/s. The centripetal force required is ______.
A small boy whirls a stone in a vertical circle of radius 1.5 m. The angular speed of the stone when it moves upwards is ______.
none of the answers are correct
An object of mass 1 kg is rotating in a circle of radius 1 m such that its speed changes from 2 m/s to 4 m/s in 2 seconds. The tangential force acting on the body is ______.
An object of mass 1 kg is rotating in a circle of radius 2 m. Find the change in angular speed of the object if its speed changes from 2 m/s to 4 m/s. ______
1 rad/s
A stone of mass 1 kg is rotating in a vertical circle of radius 2 m with a speed of 20 m/s at its lowest point. Find the tension in the string when the string makes a 60° angle with the vertical while ascending. (g = 10 m/s2; a worked solution is sketched after this question set.)
Two objects of equal mass move in circles such that the ratios of their radii and speeds are 1:2 and 2:1 respectively. The ratio of their centripetal forces is ______.
Let V be the speed of a planet in its orbit and V' be its speed when its distance from the sun is halved. Then ______.
$$V' =V $$
$$V' = \sqrt 2\, V $$
$$V' = \frac V2$$
$$V' = 2V$$
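As an illustration of how the above relations are used, here is one possible worked solution to the vertical-circle tension question above (taking g = 10 m/s2 as stated; this is a sketch, not an official answer key). The height gained when the string has turned 60° from the lowest point is

$$h = r(1 - \cos 60^\circ) = 2 \times (1 - 0.5) = 1\ \text{m}$$

By conservation of energy, the speed v at that point satisfies

$$v^2 = v_0^2 - 2gh = 20^2 - 2 \times 10 \times 1 = 380\ \text{m}^2/\text{s}^2$$

Along the string, the net centripetal force is \(T - mg\cos 60^\circ = \frac{mv^2}{r}\), so

$$T = \frac{mv^2}{r} + mg\cos 60^\circ = \frac{1 \times 380}{2} + 1 \times 10 \times 0.5 = 195\ \text{N}$$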
Anoxic metabolism and biochemical production in Pseudomonas putida F1 driven by a bioelectrochemical system
Bin Lai1,2,
Shiqin Yu1,2,
Paul V. Bernhardt3,
Korneel Rabaey4,
Bernardino Virdis1,2 &
Jens O. Krömer ORCID: orcid.org/0000-0001-5006-08191,2
Biotechnology for Biofuels volume 9, Article number: 39 (2016)
The Erratum to this article has been published in Biotechnology for Biofuels 2017 10:155
Pseudomonas putida is a promising host for the bioproduction of chemicals, but its industrial applications are significantly limited by its obligate aerobic character. The aim of this paper is to empower the anoxic metabolism of wild-type Pseudomonas putida to enable bioproduction anaerobically, with the redox power from a bioelectrochemical system (BES).
The obligate aerobe Pseudomonas putida F1 was able to survive and produce almost exclusively 2–Keto-gluconate from glucose under anoxic conditions due to redox balancing with electron mediators in a BES. 2-Keto-gluconate, a precursor for industrial anti-oxidant production, was produced at an overall carbon yield of over 90 % based on glucose. Seven different mediator compounds were tested, and only those with redox potential above 0.207 V (vs standard hydrogen electrode) showed interaction with the cells. The productivity increased with the increasing redox potential of the mediator, indicating this was a key factor affecting the anoxic production process. P. putida cells survived under anaerobic conditions, and limited biofilm formation could be observed on the anode's surface. Analysis of the intracellular pools of ATP, ADP and AMP showed that cells had an increased adenylate energy charge suggesting that cells were able to generate energy using the anode as terminal electron acceptor. The analysis of NAD(H) and NADP(H) showed that in the presence of specific extracellular electron acceptors, the NADP(H) pool was more oxidised, while the NAD(H) pool was unchanged. This implies a growth limitation under anaerobic conditions due to a shortage of NADPH and provides a way to limit biomass formation, while allowing cell maintenance and catalysis at high purity and yield.
For the first time, this study proved the principle that a BES-driven bioconversion of glucose can be achieved for a wild-type obligate aerobe. This non-growth bioconversion was in high yields, high purity and also could deliver the necessary metabolic energy for cell maintenance. By combining this approach with metabolic engineering strategies, this could prove to be a powerful new way to produce bio-chemicals and fuels from renewables in both high yield and high purity.
Contrary to renewable fuels, which have steadily increased their share in the energy sector [1], bulk and speciality chemicals are still mainly derived from petroleum and natural gas. However, industrial biotechnology has significantly developed over the recent decades, and it now offers more solutions for the sustainable production of chemicals from renewable resources than ever before [2]. A range of products are currently produced in biotechnological processes [3], including enzymes, amino acids, antibiotics, alcohols, organic acids and vitamins using ever-expanding range of evolved and genetically engineered microorganisms. Such biotechnological processes, however, often face limitations based on redox balance, carbon yields or product toxicity.
A class of microbes that was recently recognised as a promising new platform for the production of chemical feedstocks (often toxic even to the microbial production strain) are the pseudomonads [4]. They have been used to produce antimicrobial aromatics such as phenol [5] and show, in comparison with other industrial organisms such as Escherichia coli or baker's yeast, particular advantages in solvent tolerance [6, 7]. This allows higher product concentrations of compounds with solvent properties [8–11]. In this family, Pseudomonas putida (P. putida) is regarded a model strain for studying the catabolism and synthesis of toxic aromatics. A range of aromatic compounds, such as 3-methylcatechol [12] and p-hydroxybenzoic acid [13], have been produced with P. putida.
Pseudomonas putida is a gram-negative, rod-shaped, flagellated, saprotrophic soil bacterium that is frequently isolated from soil contaminated with petrochemicals. It relies on oxygen as terminal electron acceptor and does not ferment [4, 14]. Pseudomonas catabolism is efficient in the supply of redox power [15], but with a low cellular energy demand needed for cell maintenance; in other words, it has a high net NAD(P)H generation for enzymatic reactions [16]. In P. putida under aerobic conditions on glucose, the most important sources for NADPH are the pentose-phosphate pathway (PPP), and Entner-Doudoroff (ED) pathway enzymes glucose-6-phosphate dehydrogenase and phosphogluconate dehydrogenase [17, 18]. Because NADPH is an important co-factor for metabolite biosynthesis [19] and dealing with toxicity [15], P. putida has been described as one of the promising new platform organisms adaptable for biotechnology and synthetic biology applications [4].
The strictly aerobic metabolism of P. putida, however, may also lead to complications when it comes to industrial applications. On the one hand, it increases the capital cost, as scaling-up of aerobic process is significantly limited by the oxygen transfer rate [20], and due to this, both the maximum and average scales of commercial aerobic fermenters are much smaller in comparison with anaerobic fermenters [21]. On the other hand, aerobic production is also inseparable from substrate loss in the form of CO2, while some anaerobic processes can achieve carbon yields close to 100 % [22]. To overcome these limitations, a range of studies of P. putida under oxygen-limiting conditions have been conducted aiming at developing an anaerobic mutant of P. putida [23]. However, the success to date has been limited, and so far only a reduced death rate of P. putida cells in anaerobic conditions could be achieved while limited catabolic activity was observed [18, 24].
To solve the problem of electron balancing and allow efficient anaerobic metabolism of P. putida, we proposed to culture the organism in the anodic compartment of a bioelectrochemical system (BES). BES were firstly proposed as technology for electricity production from biodegradable waste [25–27] and then extended to other applications including hydrogen production [28], water desalination [29], nutrient removal and recovery [30–33] as well as bio-production [30] including the conversion of CO2 to acetate [34] and methane [35]. BESs use electrodes as electron sink or donor to drive the anoxic metabolism of microbial cells. In a recently reported study, electroactivity under different oxygen-limited conditions was observed for a recombinant P. putida KT2440 [36]. This bacterium was engineered to produce pyocyanin, an electron shuttle (or mediator molecule) from Pseudomonas aeruginosa [37].
Under the rationale that an electrode can act as electron acceptor (sink) in order to balance the intracellular energy and redox co-factors, in the present study, we investigate the mediated electron transport in a non-genetically modified P. putida F1 under strictly anaerobic conditions with glucose as a carbon source. We observed electrochemical activity and high yield production of 2-Keto-gluconate (2KGA), which is an industrial precursor for the production of antioxidant iso-ascorbic acid [38]. We used various mediators with broad electrochemical midpoint potentials and tested for electrochemical activity in the presence of the microbial cells. Using quantitative metabolite analysis, we show the impact of the presence of an extracellular electron acceptor on intracellular energy balance as well as the co-factor ratios of NAD+/NADH and NADP+/NADPH.
Strain and cultivation conditions
The strain used in this study was wild-type P. putida F1. Cells were cultivated in a defined mineral medium (DM9) that contained per litre: 6 g Na2HPO4, 3 g KH2PO4, 0.1 g NH4Cl, 0.1 g MgSO4·7H2O, 15 mg CaCl2·2H2O and 1 ml trace element solution, containing per litre: 1.5 g FeCl3·6H2O, 0.15 g H3BO3, 30 mg CuSO4·5H2O, 0.18 g KI, 0.12 g MnCl2·4H2O, 60 mg Na2MoO4·2H2O, 0.12 g ZnSO4·7H2O, 0.15 g CoCl2·6H2O, 10 g EDTA (acid), and 23 mg NiCl2·6H2O. Glucose was used as the sole metabolic carbon source in all tests, and the initial medium pH was 7. Cultivation temperature was 30 °C [39]. Cell density was analysed photometrically using absorbance at 600 nm (OD600) and converted to cell dry weight (CDW) by the following empirically determined conversion factor: CDW [g/L] = 0.476 × OD600.
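A minimal sketch of the OD600-to-biomass conversion described above; only the 0.476 factor comes from the text, while the example readings are placeholders:

```python
# Convert OD600 readings to cell dry weight (CDW) using the empirical factor
# reported in the text (CDW [g/L] = 0.476 x OD600). Example readings are placeholders.
def od600_to_cdw(od600, factor=0.476):
    """Return cell dry weight in g/L for a given OD600 reading."""
    return factor * od600

for od in (0.5, 1.0, 2.0):
    print(f"OD600 = {od:.1f}  ->  CDW = {od600_to_cdw(od):.3f} g/L")
```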
Pre-cultures were prepared by picking and transferring of a single colony from a LB plate into baffled shake flasks for aerobic overnight cultivation (~16 h) in an orbital shaking incubator (2.5 cm orbit, Multitron, Infors, Bottmingen, Switzerland) at 200 rpm and 30 °C. When cell density reached OD600 = 0.5 (in log phase) cells were harvested by centrifugation (7000g, room temperature, 10 min), washed, resuspended in fresh DM9 medium, and then transferred into the main cultivations vessels. Anaerobic tests were done in 150 mL anaerobic culture bottles or BES reactors. Anaerobic conditions throughout the experiments were assured by sparging the culture medium with nitrogen. The anaerobic bottles were inoculated in an anaerobic chamber.
Bioelectrochemical system setup
The double-chamber BESs consisted of a double-jacketed sterilisable glass vessel with a net liquid volume of 350 mL serving as the anodic compartment. The cathodic compartment consisted of a 15 mL glass cannula inserted directly into the anode chamber. A circular cation exchange membrane (diameter 9 mm, CMI-7000, Membranes International INC., USA) was mounted at the bottom of the cannula to guarantee ionic connection between anodic and cathodic electrolytes. Additional file 1: Fig. S1 in the supporting information depicts a schematic of the BESs. Carbon cloth (projected surface area of ~25 cm2) was used as the anode electrode after pre-treatment with a modified cetyltrimethylammoniumbromide (CTAB) soaking method [40]. In brief, the carbon cloth electrodes were soaked in 2 mM CTAB solution and incubated in a shaking incubator at 40 °C, 200 rpm for 16 h. The pre-treatment was necessary to clean the carbon cloth and to improve the hydrophilicity. A titanium mesh (Kaian Metal Wire Mesh, Anping, P.R. China) was used as cathode electrode. An Ag/AgCl electrode in saturated KCl (+0.197 V vs standard hydrogen electrode) was used as reference electrode (Cat. 013457, RC-1CP, Als, Tokyo, Japan), and titanium wire (T555518, Advent Research Materials, Oxford, United Kingdom) was used as electric wire for all connections. For ease of comparison, all potentials herein are reported with respect to the standard hydrogen electrode (SHE). The working electrode potential was controlled to a set potential relatively to the reference electrode using a potentiostat (Potentiostat/Galvanostat VSP, BioLogic Science Instruments, France).The potentials were chosen based on the electrochemical characterisation of the mediators (vide infra). Riboflavin, [Co(Sep)]Cl3, Fe(EDTA), thionine chloride, Co(bpy)3](ClO4)2, K3[Fe(CN)6] were added (1 mM) as redox mediators to the culture medium. These mediators cover a wide range of electrochemical midpoint potentials and were tested for electrochemical activity in the presence of the microbial cells. Measurements of current production over time (chronoamperometry) were used to monitor the electrocatalytic activity of the microbes. Strictly anaerobic conditions were maintained by flushing sterile nitrogen gas through the reactor headspace at a flow rate of around 30 mL/min. Dissolved oxygen concentration was confirmed to be below detection limit (<15 ppb) using an optical oxygen sensor (OXY-4 mini, PreSens, Regensburg, Germany) in preliminary tests. A condenser cooled with 4 °C H2O was used to reduce water evaporation through the headspace due to the flushing.
Analytics and sampling
The concentrations of glucose, gluconic acid, 2KGA and acetic acid were analysed using an Agilent 1200 high performance liquid chromatography (HPLC) system and an Agilent Hiplex H column (300 × 7.7 mm, PL1170–6830) with guard column (SecurityGuard Carbo-H, Phenomenex PN: AJO-4490). In brief, the column temperature was set to 40 °C, and analytes were eluted isocratically with 14 mM H2SO4 at a flow rate of 0.4 mL/min. Glucose was detected using a refractive index detector, while the carboxylates were detected by absorbance at 210 nm. Prior to injection, cells were removed from the culture broth by centrifugation at 16,000g, 4 °C for 10 min, and the supernatant was used undiluted for HPLC analysis.
Cell extraction for intracellular metabolite analysis
Samples were taken from the electrochemical reactors between 73 and 100 h, when the current reached its peak value. In the controls, samples were taken at a similar time point, i.e., when the mediators were fully reduced.
For the analysis of intracellular ATP/ADP/AMP, cell pellets representing between 0.2 and 0.4 mg CDW were harvested by centrifugation at 12,000g, 4 °C for 2 min. Cell extractions for NAD(P)+/NAD(P)H were performed with a modified fast filtration and freeze-drying process [41, 42]. In brief, around 5 mg CDW were harvested by fast filtration (GVWP04700, 0.22-µm pore size, Millipore, Australia), quickly soaked in cold methanol solution (60 % v/v, −48 °C) and incubated at −48 °C for 20 min. The precise amount of CDW harvested was calculated from the optical density of the sampled volume (as explained above).
Afterwards, the extract was centrifuged at 10,000g, −4 °C for 10 min (5810R, Eppendorf, Hamburg, Germany). Supernatants were collected, diluted with high-purity water (R > 18 MΩ) to a final methanol concentration of ≤20 % (v/v), and frozen at −80 °C. The frozen samples were lyophilised in a freeze-dryer at −20 °C and then resuspended in the reaction buffers provided by the respective enzymatic assay kits.
For ATP/ADP/AMP analysis, cell pellets were resuspended in 250 µL of ice-cold phosphate buffer (pH 7.75). Cells were then extracted by the cold trichloroacetic acid (TCA) method [17, 43]: addition of 250 μL ice-cold 5 % (w/v) TCA with 4 mM EDTA, mixing (vortex, 20 s), and then incubation on ice for 20 min. Cell debris was removed by centrifugation (12,000g, 4 °C, 10 min), and the supernatant was transferred to a new tube and kept on ice until subsequent analysis. Quantification of ATP content was conducted with a commercial bioluminescence assay kit (LBR-S010, Biaffin, Germany) according to the manufacturer's instructions. The bioluminescence signal was quantified using a microplate reader (M200, Tecan, Switzerland) with a white 96-well plate (655075, Greiner Bio-one, Germany). Prior to the assay, samples were diluted between 5- and 40-fold in 20 mM Tris-H2SO4 buffer (pH 7.75) containing 2 mM EDTA, for two reasons: (1) to achieve readings within the calibration curve of the assay and (2) to dilute the TCA present in the extracts. TCA interferes with the optical signal; diluting it reduces the inhibition of the assay, and the remaining inhibitory effect can be corrected numerically (Additional file 1: Fig. S2). ADP and AMP were quantified indirectly by enzymatically converting them to ATP as described previously [44]. ADP was converted to ATP using pyruvate kinase (P9136, Sigma), while adenylate kinase (M5520, Sigma) was added in addition to also convert AMP. The reaction mixture was incubated at 37 °C for 15 min. Concentrations of ADP and AMP were calculated based on the difference in the luminescence readings of samples with and without enzymatic conversion. The adenylate energy charge (AEC) was then calculated according to the formula:
$$\text{AEC} = \frac{\text{ATP} + 0.5 \times \text{ADP}}{\text{ATP} + \text{ADP} + \text{AMP}}$$
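A minimal Python sketch of the indirect ADP/AMP quantification and the AEC calculation described above; the luminescence-derived concentrations used here are hypothetical placeholders, not measured values:

```python
# Hypothetical ATP-equivalent concentrations (µM) read from the luminescence assay:
atp_only = 1.2       # sample without enzymatic conversion -> ATP
atp_plus_adp = 1.7   # + pyruvate kinase -> ATP + ADP
atp_adp_amp = 1.9    # + pyruvate kinase and adenylate kinase -> ATP + ADP + AMP

adp = atp_plus_adp - atp_only        # ADP by difference
amp = atp_adp_amp - atp_plus_adp     # AMP by difference

# Adenylate energy charge as defined in the text
aec = (atp_only + 0.5 * adp) / (atp_only + adp + amp)
print(f"ADP = {adp:.2f} µM, AMP = {amp:.2f} µM, AEC = {aec:.2f}")
```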
NADP+/NADPH were quantified using a commercial colorimetric assay kit (MAR038, Sigma-Aldrich, USA), and NAD+/NADH were determined using a fluorimetric assay kit (PicoProbe™, K338-100, BioVision, USA), according to the manufacturers' instructions.
Electrochemical analysis and calculations
The midpoint redox potentials of the soluble redox mediators were determined by cyclic voltammetry (CV) in a two-chambered electrochemical cell filled with medium containing 0.1 M KCl (counter electrode chamber) and 0.1 M phosphate buffer, pH 7.0 (working electrode chamber). During the measurements, the potential of the working electrode (a 2 cm2 CTAB pre-treated carbon cloth) was swept between a low and a high limit (within a potential window between 0.9 and 1.0 V vs SHE) at a scan rate of 50 mV s−1. Measurements were repeated for at least 100 cycles to guarantee reproducibility. A graphite rod was used as the counter electrode, while a reference electrode was placed into the anode chamber. During the tests, individual mediators were added at a concentration of 1 mM. The midpoint potential values (Em) were determined as the arithmetic averages of the anodic (Epa) and cathodic (Epc) peak potentials determined during the forward and backward scans of the CVs, respectively, according to the equation \(E_{\text{m}} = (E_{\text{pa}}+E_{\text{pc}})/2\).
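For illustration, a short Python sketch of the two potential calculations used here: the midpoint potential from the CV peak potentials, and the conversion of potentials measured against the Ag/AgCl (sat. KCl) reference to the SHE scale by adding +0.197 V. The peak potentials in the example are hypothetical, and the function names are ours:

```python
AG_AGCL_VS_SHE = 0.197  # V, offset of the Ag/AgCl (sat. KCl) reference vs SHE

def midpoint_potential(e_pa: float, e_pc: float) -> float:
    """Midpoint potential Em = (Epa + Epc) / 2 from the anodic/cathodic peak potentials."""
    return (e_pa + e_pc) / 2

def vs_she(e_vs_agagcl: float) -> float:
    """Convert a potential measured vs Ag/AgCl (sat. KCl) to the SHE scale."""
    return e_vs_agagcl + AG_AGCL_VS_SHE

# Hypothetical CV peak potentials read vs Ag/AgCl (V):
e_pa, e_pc = 0.26, 0.18
print(f"Em = {vs_she(midpoint_potential(e_pa, e_pc)):.3f} V vs SHE")
```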
In order to provide sufficient driving force for the oxidation of the redox mediator during normal reactor operation (chronoamperometry), the working electrode was set to a potential about 0.3 V more positive than the midpoint potentials determined by CV. Control experiments with mediator but without cells did not lead to oxidation of glucose (Additional file 1: Fig. S7).
The coulombic efficiency (CE), i.e. the efficiency of electric charge transfer during the conversions, was determined using the equation:
$$\text{CE}\,[\%] = \frac{Y_{\text{electrons}}}{Y_{\text{2KGA}} \times 4 + Y_{\text{acetic acid}} \times 4 + Y_{\text{gluconic acid}} \times 2} \times 100$$
where Y is the molar yield coefficient of the respective product on a glucose basis (electrons, 2KGA, acetic acid, gluconic acid, etc.), each multiplied by the number of electrons released during the formation of that product from glucose. Per mol of glucose, 4 mol electrons are generated when 2KGA is formed, 2 mol electrons are generated when gluconic acid is formed, and 4 mol electrons are generated when glucose is converted to CO2 and acetic acid. The molar yield coefficients were determined as the slope of a plot of mol product versus mol substrate converted (Additional file 1: Fig. S3). The reactor volume was corrected for withdrawn samples and for water evaporation, which was determined to be 0.09 mL/h at the reported headspace flushing rate.
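A minimal Python sketch of the coulombic efficiency calculation defined above; the yield coefficients are hypothetical placeholders (mol product per mol glucose), not the values from Table 2:

```python
# Hypothetical molar yield coefficients (mol per mol glucose consumed)
y_electrons = 3.4
y_2kga = 0.90
y_acetic = 0.05
y_gluconic = 0.02

# Electrons expected per mol glucose converted to each product (from the text)
electrons_expected = y_2kga * 4 + y_acetic * 4 + y_gluconic * 2

ce = y_electrons / electrons_expected * 100
print(f"Coulombic efficiency = {ce:.1f} %")
```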
The carbon balance was determined using the following equation:
$$\text{CB}\,[\%] = \frac{r_{\text{2KGA}} \times 6 + r_{\text{acetic acid}} \times 2 + r_{\text{gluconic acid}} \times 6 + r_{\text{CO}_2}}{r_{\text{glucose}} \times 6} \times 100$$
where r represents the specific uptake rate (glucose) or production rates (2KGA, acetic acid, gluconic acid and CO2) in mmol/(gCDW h), each multiplied by the number of carbon atoms in the respective compound.
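As a sketch only, the carbon balance computed in Python with hypothetical specific rates in mmol/(gCDW h); following the assumption explained below, the CO2 rate is set equal to the acetate rate:

```python
# Hypothetical specific rates in mmol/(gCDW h)
r_glucose  = 0.50   # uptake
r_2kga     = 0.45
r_acetic   = 0.03
r_gluconic = 0.01
r_co2      = r_acetic  # assumed equal to the acetate rate (see below)

carbon_out = r_2kga * 6 + r_acetic * 2 + r_gluconic * 6 + r_co2 * 1
carbon_in  = r_glucose * 6

cb = carbon_out / carbon_in * 100
print(f"Carbon balance = {cb:.1f} %")
```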
The production of CO2 could not be quantified because CO2 evolution was minuscule compared to the rate of N2 flushing of the reactor headspace. However, the formation of acetate from pyruvate by P. putida will coincide with CO2 formation, either due to the activity of the membrane-bound decarboxylating pyruvate dehydrogenase/oxidase (EC 1.2.5.1) or due to the activities of other metabolic reactions such as the formation of acetyl-CoA, which can be converted to acetate either through catabolic pathways or through biosynthesis processes. The ratio of CO2 to acetate in all these circumstances will be 1:1. Therefore, the CO2 yield was assumed to be equal to the acetate yield; major shortcomings of the carbon balance would then point to high activity of pathways producing CO2, such as the TCA cycle or the PPP. All product yield coefficients were converted to specific rates using the average planktonic biomass concentration in the systems; the contribution of biofilm-forming cells to the yields was considered negligible.
Redox mediators with high midpoint potential enable anoxic metabolism of P. putida F1
Like many of its family members, P. putida F1 is an obligate aerobe, and no growth is observed in glucose-based mineral medium in the absence of oxygen (Fig. 1a). It was previously shown that ferricyanide could serve as electron acceptor during the oxidation of nicotinic acid by Pseudomonas fluorescens [45]. In fact, when the glucose-based mineral medium (DM9) was supplemented with 1 mM potassium ferricyanide as electron acceptor, P. putida F1 anaerobically reduced ferricyanide (oxidised form, [Fe(CN)6]3−) to ferrocyanide (reduced form, [Fe(CN)6]4−) within 55–90 h, as indicated by a colour change of the solution from yellow ([Fe(CN)6]3−) via green to colourless ([Fe(CN)6]4−). The ability to continuously utilise [Fe(CN)6]3− as an electron acceptor was further tested in the BESs, where electrochemical oxidation of [Fe(CN)6]4− at the anode allowed constant regeneration of [Fe(CN)6]3− for microbial metabolism. In the absence of a mediator, P. putida F1 did not transfer electrons to the anode: no catalytic current was detected when no mediator was added (Additional file 1: Fig. S4).
Change of biomass (triangles, a), pH (squares, b) and electron production (circles, b) in the anode compartment of a BES reactor of P. putida F1 with K3[Fe(CN)6] as electron acceptor in control (black symbols) and closed circuit with the anode potential poised at +0.697 V (white symbols). Data have been averaged from ten (closed circuit) and three (control) biological replicates with a total of 79 and 30 samples, respectively. Means and standard deviations (X and Y error bars) are given [average sample size n = 7 (closed circuit); exact sample size n = 3 (control)]
The combination of the BES and [Fe(CN)6]3− as electron acceptor proved to be a feasible solution to the problem of electron transport. P. putida F1 was able to perform detectable anoxic metabolic conversions in the BES with an applied potential of +0.697 V vs SHE compared to the control, as indicated by a drop in medium pH (Fig. 1b). The total concentration of [Fe(CN)6]3− and [Fe(CN)6]4−, quantified by an optical method developed for this purpose (Additional file 1: Fig. S5), was constant during the whole operating batch, confirming that this chemical only served as an electron mediator. The pH drop from 7.06 to 5.84 pointed towards the production of acids due to the metabolism of glucose [46]. The measured catalytic current (maximum current 0.066 mA cm−2) confirmed that electrons were released from P. putida F1 to the anode. This electron flux is indicative of the anoxic catabolism of glucose, since this was the only electron donor in the system at a concentration sufficiently high to sustain a transfer of charge of over 850 C in 218 h. The anode chamber contained 2.55 mmol glucose, and around 10 mmol electrons were released over the course of the experiment. Providing an external electron sink to the cell in the form of an anode and a mediator molecule enabled the wild-type P. putida F1 to stay metabolically active for over 300 h under anaerobic conditions. The concentration of planktonic cells decreased over time similar to the controls lacking the anode (Fig. 1a), but a non-homogeneous biofilm formation (unquantified) on the carbon cloth electrode could be observed during the electrochemical experiments (Additional file 1: Fig. S6). This not only explains the drop in planktonic cells but also indicates that anaerobic growth may have been possible to a limited extent. Metabolic engineering was previously used to adapt P. putida to anaerobic conditions by aiming at balancing the energy and redox couples [18], but in that study only the death rate could be reduced, while metabolic turnover remained low.
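For orientation, a small Python calculation converting the transferred charge into moles of electrons via the Faraday constant, using the figures quoted above (~850 C over 218 h); this is a back-of-the-envelope check, not part of the original analysis:

```python
FARADAY = 96485  # C per mol of electrons

charge_c = 850     # cumulative charge transferred (C), from the text
duration_h = 218   # duration over which that charge was measured (h)

mol_electrons = charge_c / FARADAY
print(f"Electrons transferred = {mol_electrons * 1000:.1f} mmol")          # ~8.8 mmol
print(f"Average current = {charge_c / (duration_h * 3600) * 1000:.2f} mA")  # ~1.08 mA
```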
Biochemical production in the presence of different mediators
After confirming that providing an extracellular electron sink to P. putida F1 in the form of electrodes and redox mediators results in production of organic acids, additional experiments were performed with the aims (1) to determine the complete product spectrum and (2) to describe the fermentation kinetics in relation to the redox potential of the mediator, through testing of compounds covering a broad range of redox potentials between approximately −0.4 and +0.4 V.
Cyclic voltammetry was used to determine the average midpoint potential values (Em) of the mediators used in this study. Characteristic CV traces at a scan rate of 50 mV s−1 are reported in Fig. 2a, while the Em values are listed in Table 1. The measurements confirmed the relatively low midpoint potentials of riboflavin and [Co(Sep)]Cl3, centred at around −0.365 and −0.349 V, respectively, whereas Fe(EDTA), thionine chloride, [Co(bpy)3](ClO4)2 and K3[Fe(CN)6] displayed more positive Em, centred at around 0.078, 0.208, 0.310, and 0.416 V, respectively. These values are in good agreement with those reported previously for the same compounds (e.g., see references [47–52]), thus confirming that the electrode material used in this study (carbon cloth) was suitable for the electrochemical conversion of the redox mediators tested.
Electrochemical characterisation of the redox mediators used in this study by cyclic voltammetry (CV) (a); anodic current (solid line) and charge (dashed line) production measured in the presence of mediators that show activity with P. putida (b). *I [mA]: each cyclic voltammogram is shown in its optimum scale to give a clear appearance for all compounds
Table 1 Formal redox potential of tested mediator molecules and their interaction with P. putida F1
Neutral red [53, 54] and riboflavin [55, 56] have previously been used successfully in combination with organisms such as E. coli and Shewanella oneidensis to shuttle electrons extracellularly by coupling with the respiratory chain, using NADH as the electron donor. However, when we added these mediators to cultures of P. putida F1, no significant current production was observed (Additional file 1: Fig. S4), as was the case for [Co(Sep)]Cl3 and Na[Fe(EDTA)]. In contrast, mediators with redox potentials above 0.207 V, that is thionine chloride, [Co(bpy)3](ClO4)2 and K3[Fe(CN)6], demonstrated the ability to accept electrons from P. putida F1 cells (Fig. 2b). In fact, current output (Fig. 2b), glucose consumption and a drop in pH (Fig. 3) were observed when these mediators were present in the culturing solutions.
Total metabolite levels and pH in the anode compartment of the BES reactors with K3[Fe(CN)6] (a) and [Co(bpy)3](ClO4)2 (b) as mediators, respectively. The cumulative amount of electrons produced during the conversions is also indicated. Data have been averaged from ten (a) and four (b) biological replicates with a total of 79 and 36 samples, respectively. Means and standard deviations are given [average sample size at each point n = 7 (a); n = 3 (b)]
The catalytic current increased over time (Fig. 2b), even though the planktonic biomass concentration was stable after the drop during the initial 24 h. We hypothesise that the observed slow formation of a biofilm on the anode is the likely explanation for this increase, along with the possibility of a change in the expression of relevant membrane proteins involved in the mediated electron transport. Depending on the available energy, both could be very slow processes.
HPLC analysis also showed the consumption of glucose by P. putida in the presence of each of the three mediators. However, in the case of thionine chloride, conversion rates were very low and no full substrate conversion could be reached within 400 h (Fig. 2b). Therefore, only production with [Co(bpy)3](ClO4)2 and K3[Fe(CN)6] was analysed in detail. For easier comparison with the produced electrons, concentrations were converted to absolute moles using the respective reactor volumes at each time point. Glucose was converted into three detectable products: 2KGA, gluconic acid and acetic acid, with the former being the dominant product (Fig. 3). Gluconic acid accumulated during the first 100 h of the cultivations and was then consumed. This phenotype was much more pronounced in the case of [Co(bpy)3](ClO4)2, where not only was a higher transient amount of gluconic acid observed, but a net production also remained at the end of the experiment. In the case of the K3[Fe(CN)6]-mediated electron transport, all gluconic acid was consumed in the second half of the fermentation, and acetate production was also higher in this process (Fig. 3).
The carbon balances closed in both cases (Table 2) under the assumption that per mol of acetic acid one mol of CO2 would be released in metabolism (see M&M section). For both mediators, it was found that around 90 % of the glucose was converted to 2KGA, while twice as much acetic acid accumulated with K3[Fe(CN)6] compared to [Co(bpy)3](ClO4)2 (Table 2). The yields of electrons produced from glucose were comparable for both mediators, but the current profiles (Fig. 2b) indicated that the conversion rates were higher in the case of K3[Fe(CN)6]. While it is not possible to quantify the exact amount of cells growing on the electrodes, the carbon balance indicated that growth would be minimal, which is also in agreement with our visual observations. Using the average planktonic biomass concentration over the course of the processes, it was possible to calculate specific rates (Table 2). These show that the bioconversion in the presence of K3[Fe(CN)6] was indeed much faster, characterised by a glucose consumption rate 36 % higher than with [Co(bpy)3](ClO4)2; the same holds true for the rate of electron production. This rate was positively correlated with the redox potential of the mediators used: the more positive the potential, the faster the production (Table 2, Fig. 2b). Qualitatively this is expected according to Marcus' theory of electron transfer kinetics [57–59]. These results indicate that the anoxic metabolism was driven by the capability of the redox mediators to scavenge electrons from the intracellular metabolism of P. putida F1, since increasing conversion rates could be observed with mediators of higher potentials. Interestingly, the current density recorded immediately after the inoculation of the microbes showed a different trend from the production rates of the products: thionine chloride and [Co(bpy)3](ClO4)2 could trigger electron transfer to the anode more rapidly than K3[Fe(CN)6], in spite of having a lower redox potential than the latter (0.208 V and 0.31 V, respectively, vs 0.416 V for ferricyanide) (Fig. 2b). A similar trend was also observed for the coulombic efficiency, as the reactor with [Co(bpy)3](ClO4)2 gave a higher value than when K3[Fe(CN)6] was used (Table 2), showing that more electrons are lost in the faster process using K3[Fe(CN)6]. Despite these losses, however, it emerges that a BES can be used to produce oxidised products under oxygen-free conditions at high yield, high purity and with minimal production of biomass, and (as is the case here) without genetic modifications.
Table 2 Key process parameters of anaerobic glucose conversion of P. putida F1 in the anode compartment of a BES using [Co(bpy)3]3+/2+ or [Fe(CN)6]3−/4− as electron acceptors with the anode potential poised at +0.697 V vs SHE
Intracellular electron and energy carriers during BES-driven glucose oxidation
The previous sections showed that only redox molecules whose electrochemical potentials were above 0.207 V could successfully shuttle electrons from the microbes to the anode (Fig. 2a; Table 1). This potential is positive enough to oxidise a wide range of cellular redox carriers and proteins involved in the electron transport chain of P. aeruginosa [60], which has high similarity to that of P. putida. This should enable further oxidation of carbonaceous matter; instead, our results show that 90 % of the carbon provided accumulated as 2KGA in the BES, indicating that there is still a metabolic constraint in the cells that prevents the full utilisation of the carbon source under anaerobic conditions. In fact, no obvious growth could be observed. This is somewhat surprising, since the accumulation of acetic acid also indicated that some carbon was processed through glycolysis. This imbalance could potentially be explained by a limitation of ATP generation or by imbalances of the intracellular redox couples NAD(P)+/NAD(P)H.
To shed light on these possible limitations, analyses of the intracellular concentrations of NADH, NAD+, NADPH, NADP+, ATP, AMP and ADP were performed for the experiments using the best-performing mediator, K3[Fe(CN)6] (Fig. 3a). Intracellular ATP concentrations were compared to those of cells incubated in identical medium, with and without K3[Fe(CN)6], in an anaerobic chamber. The intracellular ATP concentration in the BES was much higher than under anaerobic conditions without an anode provided as electron acceptor (Fig. 4a). While we could not find data on anaerobic cultures of P. putida, the determined intracellular ATP concentrations are below 10 % of the ranges published for aerobically growing P. putida strains [17, 61], but well in agreement with the concentrations observed during carbon starvation [62] (note that concentrations were converted assuming 1.19 × 10^12 cfu/gCDW [63]). When comparing the adenylate energy charge (AEC), which uses the ratio of ATP, ADP and AMP to estimate the relative amount of energy-rich phosphate bonds, it could be observed that providing the anode and the mediator in the BES helped the cells to restore the AEC to 0.9 (Fig. 4b). The AEC is normally maintained above 0.8 in growing microbes [44, 64], and for P. putida under aerobic conditions and exponential growth, values between 0.75 and 0.95 have been described [61, 65]. These findings indicate that the BES could potentially provide enough energy for growth, but carbon turnover seems to be limiting.
a Specific intracellular ATP concentration in P. putida F1 under anaerobic conditions in the absence or presence of K3[Fe(CN)6] and electrodes. b Adenylate energy charge under the same conditions (AEC = (ATP + 0.5 × ADP)/(ATP + ADP + AMP))
When analysing the intracellular concentrations of the redox cofactors, it was observed that the ratios of NAD+/NADH and NADP+/NADPH were shifted towards the oxidised species under all conditions (Fig. 5). The ratio of NAD+/NADH remained, however, similar for the three tested conditions. The observed NAD+/NADH ratios are highly similar to those observed for aerobic P. putida KT2440 [65], which is somewhat surprising since one would expect that in the absence of an electron acceptor (anaerobic control, Fig. 5a) the NADH pool would be more reduced (it is important to note that the inverse ratio is given in the cited reference). These data show, however, that adding (electro)chemical oxidants did not alter the overall NAD+/NADH balance. The situation was quite different for the couple NADP+/NADPH. In fact, the presence of oxidised mediators increased the ratio significantly (Fig. 5b), showing that the NADPH pool became more oxidised. The anaerobic control already exhibited a NADP+/NADPH ratio more oxidised than in the case of aerobically growing P. putida KT2440 [65]. This could point to a limitation in the activity of the PPP for NADPH regeneration, an imbalance that is aggravated by the presence of the mediator and by the use of a BES. This imbalance may hamper the uptake and processing of carbon at the level of 2KGA and could explain the ATP concentrations that point towards carbon starvation.
Determination of intracellular pyridine nucleotide cofactors in P. putida under anaerobic conditions in the absence or presence of K3[Fe(CN)6] and electrodes. Analytical samples were taken using the same procedures as those for ATP determination
Flux balance analysis
Using existing knowledge of the underlying pathway stoichiometry and the measured rates from the previous sections, it is possible to conduct a flux balance analysis using a simplified model (zero growth; conversion of glucose to gluconic acid, 2KGA, acetic acid, CO2 and electrons) (Fig. 6). The estimation of fluxes around the uptake of sugar or sugar acids is complicated by the fact that, in P. putida, transporters for the uptake of glucose, gluconic acid and 2KGA are present. We assume that the 2KGA present in the periplasm was the main C6 molecule imported into the cytoplasm and base this assumption on the observed shift in the NADPH ratio: the reduction of phosphorylated 2KGA (2K6PG) to 6-phosphogluconate in the cytoplasm consumes NADPH (Fig. 6). This unbalanced consumption of NADPH could have resulted in the high NADP+/NADPH ratio determined under the BES condition (Fig. 5b). This could also be the reason for the changed ratio in the case of the anaerobic cultivation with added K3[Fe(CN)6], because the small NADPH pool can experience a redox shift due to the mediator accepting electrons, without a measurable difference in extracellular substrate concentrations.
Estimated flux distribution in P. putida F1 during glucose oxidation in the anode compartment of the BES reactors with K3[Fe(CN)6] (numbers on top) and [Co(bpy)3](ClO4)2 (numbers on bottom) as mediators, respectively. Solid lines represent measured fluxes and fluxes derived from mass balancing. Dashed lines highlight assumed fluxes, not directly deducible from mass balancing. PQQ pyrroloquinoline quinone, FAD flavin adenine dinucleotide, UQH2 reduced ubiquinones, Cyt C cytochromes C, ADP adenosine diphosphate, ATP adenosine triphosphate, NADP+/NADPH nicotinamide adenine dinucleotide phosphate (oxidised/reduced), NAD+/NADH nicotinamide adenine dinucleotide (oxidised/reduced), 2KGA 2-keto-gluconic acid, 2K6PG 2-keto-gluconic acid-6-phosphate, 6PGNT 6-phosphogluconic acid, KDPG 2-keto-3-deoxy-phosphogluconic acid, GAP glyceraldehyde-3-phosphate, F16BP fructose-1,6-bisphosphate, F6P fructose-6-phosphate, G6P glucose-6-phosphate, 1,3BPG 1,3-bisphosphoglyceric acid, 3PG 3-phosphoglyceric acid, 2PG 2-phosphoglyceric acid, PEP phosphoenolpyruvic acid, PYR pyruvic acid, Medox oxidised mediator, Medred reduced mediator
The observed increase in AEC (Fig. 4) raises the question of whether the energy is generated through a proton gradient-driven ATP synthase and/or through substrate-level phosphorylation. In the case of both mediators, acetic acid could be observed as a minor by-product derived from glycolytic pathways, through which ATP could be generated on the level of phosphoglycerate kinase (Fig. 6). The non-growth-associated maintenance (NGAM) demand of P. putida KT2440 has been reported to be between 0.92 mmolATP/(gCDW h) [16] and 3.96 mmolATP/(gCDW h) [66]. Since the estimation of NGAM requires growth [67], it is currently not feasible to determine it in the BES. Independent of the assumed C6 uptake system, one mol ATP will be required for kinase reactions, while two mol ATP will be generated in glycolysis per mol C6 substrate taken up (Fig. 6). Based on the rates of acetate production (Table 2) and the fact that P. putida does not possess an acetate kinase [18], this equates to 0.06 and 0.16 mmolATP/(gCDW h) for [Co(bpy)3](ClO4)2 and K3[Fe(CN)6], respectively, or 1.5–6.5 % and 4–17 % of NGAM, depending on which literature value is assumed. Even if the lower value for NGAM is still an overestimation, it seems that an additional source of energy production must be present to explain the ongoing metabolic activity of the cells. Since the carbon balances were closed and no other products were detectable, this indicates that the cells must be able to generate energy through a process coupled to the electron transport. Under the common assumption that three protons move through the ATP synthase to generate one ATP [68], and further respecting charge balance (i.e., each electron donated to the anode, measured as current, has to be accompanied by a proton leaving the periplasmic space), it is possible to balance the periplasmic protons available to the ATP synthase (EC 3.6.3.14). For simplicity, and due to the lack of knowledge, we assume that all electron transport is mediated via the quinone pool and that the mediator interacts with the terminal oxidase (EC 1.10.2.2) containing cytochrome c. This is thermodynamically feasible, based on the observed redox potentials (Table 1) [60]. The fact that the NAD+/NADH ratio remains on the oxidised side (Fig. 5) also implies that the NADH generated through acetic acid synthesis could be re-oxidised through the NADH dehydrogenase complex (EC 1.6.5.3) (Fig. 6), and hence that the quinone pool must be re-oxidised through the interaction with the mediator.
Using the flux analysis to estimate the rate of ATP synthesis through the ATP synthase that could be derived from the available proton gradient shows that 1.2 and 1.3 mmolATP/(gCDW h) could have been generated with [Co(bpy)3](ClO4)2 and K3[Fe(CN)6], respectively. This would bring the total energy available within the range of NGAM, making this scenario much more likely than energy production through acetic acid formation alone. While this is only an estimation, it supports the idea that anodic oxidation of carbohydrates in a BES could also lead to ATP generation in the cells, which in our opinion is a prerequisite for maintaining active bio-catalysts over long periods of time and crucial for the viability of BES applications for bio-production.
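A rough Python sketch of the proton-balance reasoning above; the specific electron transfer rate is a hypothetical placeholder (the actual rates are reported in Table 2), and the stoichiometric assumptions (1 H+ exported per electron for charge balance, 3 H+ per ATP at the ATP synthase) are those stated in the text:

```python
H_PER_ELECTRON = 1          # protons leaving the periplasm per electron donated to the anode
H_PER_ATP = 3               # protons required by the ATP synthase per ATP [68]
NGAM_RANGE = (0.92, 3.96)   # reported NGAM for P. putida KT2440, mmolATP/(gCDW h) [16, 66]

def atp_from_electron_rate(electron_rate: float) -> float:
    """Specific ATP synthesis rate (mmolATP/(gCDW h)) supportable by the proton
    gradient, given a specific electron transfer rate in mmol e-/(gCDW h)."""
    return electron_rate * H_PER_ELECTRON / H_PER_ATP

# Hypothetical specific electron transfer rate:
r_electrons = 3.8  # mmol e-/(gCDW h)
r_atp = atp_from_electron_rate(r_electrons)
print(f"ATP synthase could supply about {r_atp:.2f} mmolATP/(gCDW h)")
print(f"Within reported NGAM range? {NGAM_RANGE[0] <= r_atp <= NGAM_RANGE[1]}")
```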
By providing an electrode and redox chemicals as extracellular electron sinks, wild-type P. putida F1 was able to perform anoxic metabolism, without the need for metabolic engineering and without the formation of biomass as a substrate-draining by-product. The redox power from the electrode and redox chemicals drove the carbon flux from glucose to 2-keto-gluconate with a high yield of over 90 %. A survey of different redox chemicals showed that a redox potential above 0.207 V was crucial and that reaction rates increased with increasing redox potential. Energy was generated in metabolism, but the cells remained largely unable to fully oxidise the substrates to CO2, despite the intracellular redox co-factors being mainly oxidised. However, the study provides a proof of principle that a BES-driven bioconversion of glucose can achieve high yields and high purity, deliver the energy necessary for cell maintenance, and enable a strict aerobe to catalyse production under oxygen-free conditions for over a week. This opens the route to bi-phasic bio-processes, where the catalyst is grown under aerobic conditions and then used for anaerobic catalysis over long periods of time, without observable growth and hence without drain of substrate. Combining this with metabolic engineering strategies could prove to be a powerful new way to produce bio-chemicals from renewable materials.
ADP:
adenosine diphosphate
AEC:
adenylate energy charge
AMP:
adenosine monophosphate
ATP:
adenosine triphosphate
BES:
bioelectrochemical system
CB:
carbon balance
CDW:
cell dry weight
CE:
coulombic efficiency
CTAB:
cetrimonium bromide
Cyt C:
cytochromes C
DM9:
defined mineral medium
EDTA:
ethylenediaminetetraacetic acid
FAD/FADH2:
flavin adenine dinucleotide
F16BP:
fructose-1,6-bisphosphate
F6P:
fructose-6-phosphate
GAP:
glyceraldehyde-3-phosphate
G6P:
glucose-6-phosphate
HPLC:
high performance liquid chromatography
KDPG:
2-keto-3-deoxy-phosphogluconic acid
Medox:
oxidised mediator
Medred:
reduced mediator
NAD+/NADH:
nicotinamide adenine dinucleotide (oxidised/reduced)
NADP+/NADPH:
nicotinamide adenine dinucleotide phosphate (oxidised/reduced)
NGAM:
non-growth-associated maintenance
PEP:
phosphoenolpyruvic acid
PQQ/PQQH2:
pyrroloquinoline quinone
PYR:
pyruvic acid
SHE:
standard hydrogen electrode
TCA:
trichloroacetic acid
UQH2:
reduced ubiquinones
1,3BPG:
1,3-bisphosphoglyceric acid
2KGA:
2-keto-gluconic acid
2K6PG:
2-keto-gluconic acid-6-phosphate
2PG:
2-phosphoglyceric acid
6PGNT:
6-phosphogluconic acid
World Energy Outlook, 2014 http://www.worldenergyoutlook.org/publications/weo-2014/.
Erickson B, Nelson J, Winters B. Perspective on opportunities in industrial biotechnology in renewable chemicals. Biotechnol J. 2012;7:176–85.
Gavrilescu M, Chisti Y. Biotechnology—a sustainable alternative for chemical industry. Biotechnol Adv. 2005;23(7–8):471–99.
Nikel PI, Martinez-Garcia E, de Lorenzo V. Biotechnological domestication of pseudomonads using synthetic biology. Nat Rev Micro. 2014;12(5):368–79.
Wierckx N, Ruijssenaars HJ, de Winde JH, Schmid A, Blank LM. Metabolic flux analysis of a phenol producing mutant of Pseudomonas putida S12: verification and complementation of hypotheses derived from transcriptomics. J Biotechnol. 2009;143(2):124–9.
Kuhn D, Buhler B, Schmid A. Production host selection for asymmetric styrene epoxidation: Escherichia coli vs. solvent-tolerant Pseudomonas. J Ind Microbiol Biotechnol. 2012;39:1125–33.
Ramos JL, Duque E, Gallegos MT, Godoy P, Ramos-Gonzalez MI, Rojas A, Teran W, Segura A. Mechanisms of solvent tolerance in gram-negative bacteria. Annu Rev Microbiol. 2002;56:743–68.
Nielsen D, Leonard E, Yoon S, Tseng H, Yuan C, Prather K. Engineering alternative butanol production platforms in heterologous bacteria. Metab Eng. 2009;11:262–73.
Verhoef S, Wierckx N, Westerhof RG, de Winde JH, Ruijssenaars HJ. Bioproduction of p-hydroxystyrene from glucose by the solvent-tolerant bacterium Pseudomonas putida S12 in a two-phase water-decanol fermentation. Appl Environ Microbiol. 2009;75(4):931–6.
Nijkamp K, Westerhof R, Ballerstedt H, de Bont J, Wery J. Optimization of the solvent-tolerant Pseudomonas putida S12 as host for the production of p-coumarate from glucose. Appl Microbiol Biotechnol. 2007;74:617–24.
Wierckx N, Ballerstedt H, de Bont J, Wery J. Engineering of solvent-tolerant Pseudomonas putida S12 for bioproduction of phenol from glucose. Appl Environ Microbiol. 2005;71:8221–7.
Husken L, Beeftink R, de Bont J, Wery J. High-rate 3-methylcatechol production in Pseudomonas putida strains by means of a novel expression system. Appl Microbiol Biotechnol. 2001;55:571–7.
Meijnen JP, Verhoef S, Briedjlal AA, de Winde JH, Ruijssenaars HJ. Improved p-hydroxybenzoate production by engineered Pseudomonas putida S12 by using a mixed-substrate feeding strategy. Appl Microbiol Biotechnol. 2011;90(3):885–93.
Escapa I, Garcia J, Buhler B, Blank L, Prieto M. The polyhydroxyalkanoate metabolism controls carbon and energy spillage in Pseudomonas putida. Environ Microbiol. 2012;14:1049–63.
Blank L, Ionidis G, Ebert B, Buhler B, Schmid A. Metabolic response of Pseudomonas putida during redox biocatalysis in the presence of a second octanol phase. FEBS J. 2008;275:5173–90.
Ebert BE, Kurth F, Grund M, Blank LM, Schmid A. Response of Pseudomonas putida KT2440 to increased NADH and ATP demand. Appl Environ Microbiol. 2011;77(18):6597–605.
Chavarría M, Nikel PI, Pérez-Pantoja D, de Lorenzo V. The Entner–Doudoroff pathway empowers Pseudomonas putida KT2440 with a high tolerance to oxidative stress. Environ Microbiol. 2013;15(6):1772–85.
Nikel PI, de Lorenzo V. Engineering an anaerobic metabolic regime in Pseudomonas putida KT2440 for the anoxic biodegradation of 1,3-dichloroprop-1-ene. Metab Eng. 2013;15:98–112.
Knaggs AR. The biosynthesis of shikimate metabolites. Nat Prod Rep. 2003;20(1):119–36.
Garcia-Ochoa F, Gomez E. Bioreactor scale-up and oxygen transfer rate in microbial processes: an overview. Biotechnol Adv. 2009;27(2):153–76.
Hannon J, Bakker A, Lynd L, Wyman C. Comparing the scale-up of anaerobic and aerobic processes. In: Annual Meeting of the American Institute of Chemical Engineers: Salt Lake City, 2007.
Shukla VB, Zhou S, Yomano LP, Shanmugam KT, Preston JF, Ingram LO. Production of d(−)-lactate from sucrose and molasses. Biotechol Lett. 2004;26(9):689–93.
Costura RK, Alvarez PJJ. Expression and longevity of toluene dioxygenase in Pseudomonas putida F1 induced at different dissolved oxygen concentrations. Water Res. 2000;34(11):3014–8.
Steen A, Ütkür FÖ, Borrero-de Acuña JM, Bunk B, Roselius L, Bühler B, Jahn D, Schobert M. Construction and characterization of nitrate and nitrite respiring Pseudomonas putida KT2440 strains for anoxic biotechnical applications. J Biotechnol. 2013;163(2):155–65.
Du ZW, Li HR, Gu TY. A state of the art review on microbial fuel cells: a promising technology for wastewater treatment and bioenergy. Biotechnol Adv. 2007;25(5):464–82.
Lai B, Tang X, Li H, Du Z, Liu X, Zhang Q. Power production enhancement with a polyaniline modified anode in microbial fuel cells. Biosens Bioelectron. 2011;28(1):373–7.
Logan BE, Rabaey K. Conversion of wastes into bioelectricity and chemicals by using microbial electrochemical technologies. Science. 2012;337(6095):686–90.
Liu H, Grot S, Logan BE. Electrochemically assisted microbial production of hydrogen from acetate. Environ Sci Technol. 2005;39(11):4317–20.
Cao XX, Huang X, Liang P, Xiao K, Zhou YJ, Zhang XY, Logan BE. A new method for water desalination using microbial desalination cells. Environ Sci Technol. 2009;43(18):7148–52.
Rabaey K, Rozendal RA. Microbial electrosynthesis—revisiting the electrical route for microbial production. Nat Rev Microbiol. 2010;8(10):706–16.
Jourdin L, Freguia S, Donose BC, Chen J, Wallace GG, Keller J, Flexer V. A novel carbon nanotube modified scaffold as an efficient biocathode material for improved microbial electrosynthesis. J Mater Chem A. 2014;2(32):13093–102.
Nie H, Zhang T, Cui M, Lu H, Lovley DR, Russell TP. Improved cathode for high efficient microbial-catalyzed reduction in microbial electrosynthesis cells. Phys Chem Chem Phys. 2013;15(34):14290–4.
Virdis B, Read ST, Rabaey K, Rozendal RA, Yuan Z, Keller J. Biofilm stratification during simultaneous nitrification and denitrification (SND) at a biocathode. Bioresour Technol. 2011;102(1):334–41.
Nevin KP, Hensley SA, Franks AE, Summers ZM, Ou J, Woodard TL, Snoeyenbos-West OL, Lovley DR. Electrosynthesis of organic compounds from carbon dioxide catalyzed by a diversity of acetogenic microorganisms. Appl Environ Microbiol. 2011;77(9):2882–6.
Villano M, Aulenta F, Ciucci C, Ferri T, Giuliano A, Majone M. Bioelectrochemical reduction of CO2 to CH4 via direct and indirect extracellular electron transfer by a hydrogenophilic methanogenic culture. Bioresour Technol. 2010;101(9):3085–90.
Schmitz S, Nies S, Wierckx N, Blank LM, Rosenbaum MA. Engineering mediator-based electroactivity in the obligate aerobic bacterium Pseudomonas putida KT2440. Front Microbiol. 2015;6:284.
Rabaey K, Boon N, Hofte M, Verstraete W. Microbial phenazine production enhances electron transfer in biofuel cells. Environ Sci Technol. 2005;39(9):3401–8.
Zorn H, Czermak P, Lipinski G-WvR. Biotechnology of food and feed additives, vol. 143. Heidelberg: Springer; 2014.
Alagappan G, Cowan RM. Effect of temperature and dissolved oxygen on the growth kinetics of Pseudomonas putida F1 growing on benzene and toluene. Chemosphere. 2004;54(8):1255–65.
Guo K, Soeriyadi AH, Patil SA, Prévoteau A, Freguia S, Gooding JJ, Rabaey K. Surfactant treatment of carbon felt enhances anodic microbial electrocatalysis in bioelectrochemical systems. Electrochem Commun. 2014;39:1–4.
Bolten CJ, Kiefer P, Letisse F, Portais JC, Wittmann C. Sampling for metabolome analysis of microorganisms. Anal Chem. 2007;79(10):3843–9.
Moritz B, Striegel K, De Graaf AA, Sahm H. Kinetic properties of the glucose-6-phosphate and 6-phosphogluconate dehydrogenases from Corynebacterium glutamicum and their application for predicting pentose phosphate pathway flux in vivo. Eur J Biochem. 2000;267(12):3442–52.
Lundin A, Thore A. Comparison of methods for extraction of bacterial adenine nucleotides determined by firefly assay. Appl Microbiol. 1975;30(5):713–21.
Chapman AG, Fall L, Atkinson DE. Adenylate energy charge in Escherichia coli during growth and starvation. J Bacteriol. 1971;108(3):1072–86.
Ikeda T, Kurosaki T, Takayama K, Kano K, Miki K. Measurements of oxidoreductase-like activity of intact bacterial cells by an amperometric method using a membrane-coated electrode. Anal Chem. 1996;68(1):192–8.
Nikel PI, Chavarria M, Fuhrer T, Sauer U, de Lorenzo V. Pseudomonas putida KT2440 metabolizes glucose through a cycle formed by enzymes of the Entner-Doudoroff, Embden-Meyerhof-Parnas, and pentose phosphate pathways. J Biol Chem. 2015;290(43):25920–32.
Malinauskas A. Electrochemical study of riboflavin adsorbed on a graphite electrode. Chemija. 2008;19(2):1–3.
Bernhardt PV, Chen KI, Sharpe PC. Transition metal complexes as mediator-titrants in protein redox potentiometry. J Biol Inorg Chem. 2006;11(7):930–6.
Wang ZM, Liu CX, Wang XL, Marshall MJ, Zachara JM, Rosso KM, Dupuis M, Fredrickson JK, Heald S, Shi L. Kinetics of reduction of Fe(III) complexes by outer membrane cytochromes MtrC and OmcA of Shewanella oneidensis MR-1. Appl Environ Microbiol. 2008;74(21):6746–55.
Mcquillan AJ, Reid MR. Cyclic voltammetric studies of a thionine coated pyrolytic-graphite electrode. J Electroanal Chem. 1985;194(2):237–45.
O'Reilly JE. Oxidation-reduction potential of the ferro-ferricyanide system in buffer solutions. Biochim Biophys Acta. 1973;292(3):509–15.
Carter MT, Rodriguez M, Bard AJ. Voltammetric studies of the interaction of metal-chelates with DNA.2. Tris-chelated complexes of cobalt(Iii) and iron(Ii) with 1,10-phenanthroline and 2,2′-bipyridine. J Am Chem Soc. 1989;111(24):8901–11.
Park DH, Zeikus JG. Electricity generation in microbial fuel cells using neutral red as an electronophore. Appl Environ Microbiol. 2000;66(4):1292–7.
Park DH, Zeikus JG. Utilization of electrically reduced neutral red by Actinobacillus succinogenes: physiological function of neutral red in membrane-driven fumarate reduction and energy conservation. J Bacteriol. 1999;181(8):2403–10.
Marsili E, Baron DB, Shikhare ID, Coursolle D, Gralnick JA, Bond DR. Shewanella secretes flavins that mediate extracellular electron transfer. P Natl Acad Sci USA. 2008;105(10):3968–73.
Yong YC, Cai Z, Yu YY, Chen P, Jiang R, Cao B, Sun JZ, Wang JY, Song H. Increase of riboflavin biosynthesis underlies enhancement of extracellular electron transfer of Shewanella in alkaline microbial fuel cells. Bioresour Technol. 2013;130:763–8.
Marcus RA. Electron transfer at electrodes and in solution: comparison of theory and experiment. Electrochim Acta. 1968;13(5):995–1004.
Marcus RA. On the theory of oxidation-reduction reactions involving electron transfer. I J Chem Phys. 1956;24(5):966–78.
Bard AJ, Faulkner LR. Electrochemical methods: fundamentals and applications, vol. 2nd. New York: John Wiley; 2001.
Kracke F, Vassilev I, Krömer JO. Microbial electron transport and energy conservation—the foundation for optimizing bioelectrochemical systems. Front Microbiol. 2015;6:575.
Neumann G, Cornelissen S, van Breukelen F, Hunger S, Lippold H, Loffhagen N, Wick LY, Heipieper HJ. Energetics and surface properties of Pseudomonas putida DOT-T1E in a two-phase fermentation system with 1-decanol as second phase. Appl Environ Microbiol. 2006;72(6):4232–8.
Eberl L, Givskov M, Sternberg C, Moller S, Christiansen G, Molin S. Physiological responses of Pseudomonas putida KT2442 to phosphate starvation. Microbiol Uk. 1996;142:155–63.
Fakhruddin ANM, Quilty B. Measurement of the growth of a floc forming bacterium Pseudomonas putida CP1. Biodegradation. 2007;18(2):189–97.
Khlyntseva SV, Bazel' YR, Vishnikin AB, Andruch V. Methods for the determination of adenosine triphosphate and other adenine nucleotides. J Anal Chem. 2009;64(7):657–73.
Martínez-García E, Nikel PI, Aparicio T, de Lorenzo V. Pseudomonas 2.0: genetic upgrading of P. putida KT2440 as an enhanced host for heterologous gene expression. Microb Cell Fact. 2014;13(1):1–15.
van Duuren J, Puchalka J, Mars A, Bucker R, Eggink G, Wittmann C, dos Santos VA. Reconciling in vivo and in silico key biological parameters of Pseudomonas putida KT2440 during growth on glucose under carbon-limited condition. BMC Biotechnol. 2013;13(1):93.
Pirt SJ. The maintenance energy of bacteria in growing cultures. Proc R Soc Lond B Biol Sci. 1965;163(991):224–31.
Berg JM, Tymoczko JL, Stryer L. Biochemistry. 5th ed. New York: W. H. Freeman and Company; 2002.
BL performed experiments and contributed to the design, acquisition and analysis of data. SY contributed to the cell extraction and enzymatic assay development. PVB provided expertise on the organic–metal complex redox chemicals and provided the chemicals [Co(Sep)]3+/2+, [Fe(EDTA)]−/2− and [Co(bpy)3]3+/2+. KR and BV contributed to design of the study and analysis of data. JOK developed the concept of the study, contributed to data analysis and preparation of the manuscript. All the authors were involved in the drafting and editing of the manuscript, read and approved the final manuscript.
The authors thank Dr. Nicholas Coleman (University of Sydney, Australia) for providing the P. putida F1 strain, and Dr. Manuel Plan (Metabolomics Australia, University of Queensland) for metabolite analysis. The authors acknowledge strategic research and scholarship support by the University of Queensland.
Centre for Microbial Electrochemical Systems (CEMES), The University of Queensland, Office 618, Gehrmann Building (60), St. Lucia, Brisbane, QLD, 4072, Australia
Bin Lai, Shiqin Yu, Bernardino Virdis & Jens O. Krömer
Advanced Water Management Centre (AWMC), The University of Queensland, Brisbane, Australia
School of Chemistry and Molecular Biosciences, The University of Queensland, Brisbane, Australia
Paul V. Bernhardt
Laboratory of Microbial Ecology and Technology (LabMET), Ghent University, Ghent, Belgium
Korneel Rabaey
Bin Lai
Shiqin Yu
Bernardino Virdis
Jens O. Krömer
Correspondence to Jens O. Krömer.
An erratum to this article is available at http://dx.doi.org/10.1186/s13068-017-0843-8.
Additional file 1: Fig. S1.
Picture and the schematic drawing of BES reactor used. Fig. S2. Inhibition of Trichloroacetic acid (TCA) on the bioluminescent assay for ATP determination. Fig. S3. Regression analysis for the determination of product/glucose yield coefficients. Fig. S4. Current–time curve for BES reactors with/without mediators. Fig. S5. Determination and quantification of K3[Fe(CN)6] and K4[Fe(CN)6] by UV–vis spectroscopy. Fig. S6. Biofilm observed on the anode of BES reactor. Fig. S7. Abiotic control with mediator and full medium under BES conditions.
Lai, B., Yu, S., Bernhardt, P.V. et al. Anoxic metabolism and biochemical production in Pseudomonas putida F1 driven by a bioelectrochemical system. Biotechnol Biofuels 9, 39 (2016). https://doi.org/10.1186/s13068-016-0452-y
Anoxic metabolism
Pseudomonas putida F1
Redox mediators
Extracellular electron transfer
Bio-production
Chemical feedstocks | CommonCrawl |
Congruences modulo squares of primes for FU'S k dots bracelet partitions
Cristian Silviu Radu, James A. Sellers
In 2007, Andrews and Paule introduced the family of functions Δk(n) which enumerate the number of broken k-diamond partitions for a fixed positive integer k. In that paper, Andrews and Paule proved that, for all n ≥ 0, Δ1(2n+1) ≡ 0 (mod 3) using a standard generating function argument. Soon after, Shishuo Fu provided a combinatorial proof of this same congruence. Fu also utilized this combinatorial approach to naturally define a generalization of broken k-diamond partitions which he called k dots bracelet partitions. He denoted the number of k dots bracelet partitions of n by $\mathfrak{B}_k(n)$ and proved various congruence properties for these functions modulo primes and modulo powers of 2. In this note, we extend the set of congruences proven by Fu by proving the following congruences: For all n ≥ 0, $$\begin{array}{rcl}\mathfrak{B}_5(10n+7) &\equiv& 0 \pmod{5^2},\\ \mathfrak{B}_7(14n+11) &\equiv& 0 \pmod{7^2}, \quad \text{and}\\ \mathfrak{B}_{11}(22n+21) &\equiv& 0 \pmod{11^2}.\end{array}$$ We also conjecture an infinite family of congruences modulo powers of 7 which are satisfied by the function $\mathfrak{B}_7$.
International Journal of Number Theory
C.-S. Radu was funded by the Austrian Science Fund (FWF), W1214-N15, project DK6 and by grant P2016-N18. The research was supported by the strategic program "Innovatives OÖ 2010 plus" by the Upper Austrian Government.
Broken k-diamonds
congruences
k dots bracelet partitions
Radu, C. S., & Sellers, J. A. (2013). Congruences modulo squares of primes for FU'S k dots bracelet partitions. International Journal of Number Theory, 9(4), 939-943. https://doi.org/10.1142/S1793042113500073
December 2021, 41(12): 5659-5705. doi: 10.3934/dcds.2021092
The nonlinear fractional relativistic Schrödinger equation: Existence, multiplicity, decay and concentration results
Vincenzo Ambrosio
Dipartimento di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, Via Brecce Bianche, 12, 60131 Ancona, Italy
Received December 2020 Revised April 2021 Published December 2021 Early access June 2021
In this paper we study the following class of fractional relativistic Schrödinger equations:
$ \begin{equation*} \left\{ \begin{array}{ll} (-\Delta+m^{2})^{s}u + V(\varepsilon x) u = f(u) &\text{ in } \mathbb{R}^{N}, \\ u\in H^{s}( \mathbb{R}^{N}), \quad u>0 &\text{ in } \mathbb{R}^{N}, \end{array} \right. \end{equation*} $
where $ \varepsilon >0 $ is a small parameter, $ s\in (0, 1) $, $ m>0 $, $ N> 2s $, $ (-\Delta+m^{2})^{s} $ is the fractional relativistic Schrödinger operator, $ V: \mathbb{R}^{N} \rightarrow \mathbb{R} $ is a continuous potential satisfying a local condition, and $ f: \mathbb{R} \rightarrow \mathbb{R} $ is a continuous subcritical nonlinearity. By using a variant of the extension method and a penalization technique, we first prove that, for $ \varepsilon>0 $ small enough, the above problem admits a weak solution $ u_{\varepsilon } $ which concentrates around a local minimum point of $ V $ as $ \varepsilon \rightarrow 0 $. We also show that $ u_{\varepsilon } $ has an exponential decay at infinity by constructing a suitable comparison function and by performing some refined estimates. Secondly, by combining the generalized Nehari manifold method and Ljusternik-Schnirelman theory, we relate the number of positive solutions with the topology of the set where the potential $ V $ attains its minimum value.
Keywords: fractional relativistic Schrödinger operator, extension method, variational methods, Ljusternik-Schnirelman theory.
Mathematics Subject Classification: Primary: 35R11, 35J10, 35J20; Secondary: 35J60, 35B09, 58E05.
Citation: Vincenzo Ambrosio. The nonlinear fractional relativistic Schrödinger equation: Existence, multiplicity, decay and concentration results. Discrete & Continuous Dynamical Systems, 2021, 41 (12) : 5659-5705. doi: 10.3934/dcds.2021092
R. A. Adams, Sobolev Spaces, Pure and Applied Mathematics, Vol. 65 Academic Press, New York-London, 1975. Google Scholar
C. O. Alves and O. H. Miyagaki, Existence and concentration of solution for a class of fractional elliptic equation in $ \mathbb{R}^{N}$ via penalization method, Calc. Var. Partial Differential Equations, 55 (2016), art. 47, 19 pp. doi: 10.1007/s00526-016-0983-x. Google Scholar
A. Ambrosetti and P. H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Funct. Anal., 14 (1973), 349-381. doi: 10.1016/0022-1236(73)90051-7. Google Scholar
V. Ambrosio, Ground states solutions for a non-linear equation involving a pseudo-relativistic Schrödinger operator, J. Math. Phys., 57 (2016), 051502, 18 pp. doi: 10.1063/1.4949352. Google Scholar
V. Ambrosio, Multiplicity of positive solutions for a class of fractional Schrödinger equations via penalization method, Ann. Mat. Pura Appl. (4), 196 (2017), 2043-2062. doi: 10.1007/s10231-017-0652-5. Google Scholar
V. Ambrosio, Concentrating solutions for a class of nonlinear fractional Schrödinger equations in $ \mathbb{R}^{N}$, Rev. Mat. Iberoam., 35 (2019), 1367-1414. doi: 10.4171/rmi/1086. Google Scholar
V. Ambrosio, Concentration phenomena for a class of fractional Kirchhoff equations in $ \mathbb{R}^{N}$ with general nonlinearities, Nonlinear Anal., 195 (2020), 111761, 39 pp. doi: 10.1016/j.na.2020.111761. Google Scholar
N. Aronszajn and K. T. Smith, Theory of Bessel potentials. I, Ann. Inst. Fourier (Grenoble), 11 (1961), 385-475. doi: 10.5802/aif.116. Google Scholar
C. Bucur and E. Valdinoci, Nonlocal Diffusion and Applications, Lecture Notes of the Unione Matematica Italiana, 20. Springer, Unione Matematica Italiana, Bologna, 2016. xii+155 pp. doi: 10.1007/978-3-319-28739-3. Google Scholar
H. Bueno, O. H. Miyagaki and G. A. Pereira, Remarks about a generalized pseudo-relativistic Hartree equation, J. Differential Equations, 266 (2019), 876-909. doi: 10.1016/j.jde.2018.07.058. Google Scholar
T. Byczkowski, J. Malecki and M. Ryznar, Bessel potentials, hitting distributions and Green functions, Trans. Amer. Math. Soc., 361 (2009), 4871-4900. doi: 10.1090/S0002-9947-09-04657-1. Google Scholar
L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. Partial Differential Equations, 32 (2007), 1245-1260. doi: 10.1080/03605300600987306. Google Scholar
A.-P. Calderón, Lebesgue spaces of differentiable functions and distributions, Proc. Sympos. Pure Math., American Mathematical Society, Providence, R.I., 4 (1961), 33-49. Google Scholar
R. Carmona, W. C. Masters and B. Simon, Relativistic Schrödinger operators: Asymptotic behavior of the eigenfunctions, J. Func. Anal., 91 (1990), 117-142. doi: 10.1016/0022-1236(90)90049-Q. Google Scholar
S. Cingolani and S. Secchi, Semiclassical analysis for pseudo-relativistic Hartree equations, J. Differential Equations, 258 (2015), 4156-4179. doi: 10.1016/j.jde.2015.01.029. Google Scholar
V. Coti Zelati and M. Nolasco, Existence of ground states for nonlinear, pseudo-relativistic Schrödinger equations, Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl., 22 (2011), 51-72. doi: 10.4171/RLM/587. Google Scholar
V. Coti Zelati and M. Nolasco, Ground states for pseudo-relativistic Hartree equations of critical type, Rev. Mat. Iberoam., 29 (2013), 1421-1436. doi: 10.4171/RMI/763. Google Scholar
J. Dávila, M. del Pino, S. Dipierro and E. Valdinoci, Concentration phenomena for the nonlocal Schrödinger equation with Dirichlet datum, Anal. PDE, 8 (2015), 1165-1235. doi: 10.2140/apde.2015.8.1165. Google Scholar
J. Dávila, M. del Pino and J. Wei, Concentrating standing waves for the fractional nonlinear Schrödinger equation, J. Differential Equations, 256 (2014), 858-892. doi: 10.1016/j.jde.2013.10.006. Google Scholar
M. del Pino and P. L. Felmer, Local mountain passes for semilinear elliptic problems in unbounded domains, Calc. Var. Partial Differential Equations, 4 (1996), 121-137. doi: 10.1007/BF01189950. Google Scholar
E. Di Nezza, G. Palatucci and E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces, Bull. Sci. math., 136 (2012), 521-573. doi: 10.1016/j.bulsci.2011.12.004. Google Scholar
S. Dipierro, M. Medina and E. Valdinoci, Fractional Elliptic Problems with Critical Growth in the Whole of $ \mathbb{R}^{n}$, Appunti. Scuola Normale Superiore di Pisa (Nuova Serie) [Lecture Notes. Scuola Normale Superiore di Pisa (New Series)], 15. Edizioni della Normale, Pisa, 2017. doi: 10.1007/978-88-7642-601-8. Google Scholar
A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions. Vol. II, Based on notes left by Harry Bateman. Reprint of the 1953 original. Robert E. Krieger Publishing Co., Inc., Melbourne, Fla., 1981. Google Scholar
E. B. Fabes, C. E. Kenig and R. P. Serapioni, The local regularity of solutions of degenerate elliptic equations, Comm. Partial Differential Equations, 7 (1982), 77-116. doi: 10.1080/03605308208820218. Google Scholar
M. M. Fall and V. Felli, Unique continuation properties for relativistic Schrödinger operators with a singular potential, Discrete Contin. Dyn. Syst., 35 (2015), 5827-5867. doi: 10.3934/dcds.2015.35.5827. Google Scholar
P. Felmer, A. Quaas and J. Tan, Positive solutions of the nonlinear Schrödinger equation with the fractional Laplacian, Proc. Roy. Soc. Edinburgh Sect. A, 142 (2012), 1237-1262. doi: 10.1017/S0308210511000746. Google Scholar
P. Felmer and I. Vergara, Scalar field equation with non-local diffusion, NoDEA Nonlinear Differential Equations Appl., 22 (2015), 1411-1428. doi: 10.1007/s00030-015-0328-z. Google Scholar
G. M. Figueiredo and J. R. Santos, Multiplicity and concentration behavior of positive solutions for a Schrödinger-Kirchhoff type problem via penalization method, ESAIM Control Optim. Calc. Var., 20 (2014), 389-415. doi: 10.1051/cocv/2013068. Google Scholar
G. M. Figueiredo and G. Siciliano, A multiplicity result via Ljusternick-Schnirelmann category and Morse theory for a fractional Schrödinger equation in $ \mathbb{R}^{N}$, NoDEA Nonlinear Differential Equations Appl., 23 (2016), art. 12, 22 pp. doi: 10.1007/s00030-016-0355-4. Google Scholar
L. Grafakos, Modern Fourier analysis, Third edition. Graduate Texts in Mathematics, 250. Springer, New York, 2014. doi: 10.1007/978-1-4939-1230-8. Google Scholar
T. Grzywny and M. Ryznar, Two-sided optimal bounds for Green functions of half-spaces for relativistic $\alpha$-stable process, Potential Anal., 28 (2008), 201-239. doi: 10.1007/s11118-007-9071-3. Google Scholar
I. W. Herbst, Spectral theory of the operator $(p^{2}+m^{2})^{1/2}-Ze^{2}/r$, Comm. Math. Phys., 53 (1977), 285-294. Google Scholar
L. Hörmander, Lectures on Nonlinear Hyperbolic Differential Equations, Mathématiques & Applications (Berlin) [Mathematics & Applications], 26. Springer-Verlag, Berlin, 1997. Google Scholar
T. Jin, Y. Li and J. Xiong, On a fractional Nirenberg problem, part I: Blow up analysis and compactness of solutions, J. Eur. Math. Soc. (JEMS), 16 (2014), 1111-1171. doi: 10.4171/JEMS/456. Google Scholar
E. H. Lieb and M. Loss, Analysis, Graduate Studies in Mathematics, 14. American Mathematical Society, Providence, RI, 1997. Google Scholar
E. H. Lieb and H. T. Yau, The Chandrasekhar theory of stellar collapse as the limit of quantum mechanics, Comm. Math. Phys., 112 (1987), 147-174. doi: 10.1007/BF01217684. Google Scholar
P.-L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case. II, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), 223-283. doi: 10.1016/S0294-1449(16)30422-X. Google Scholar
G. Molica Bisci, V. Rǎdulescu and R. Servadei, Variational Methods for Nonlocal Fractional Problems, Cambridge University Press, 162 Cambridge, 2016. doi: 10.1017/CBO9781316282397. Google Scholar
J. Moser, A new proof of De Giorgi's theorem concerning the regularity problem for elliptic differential equations, Comm. Pure Appl. Math., 13 (1960), 457–468. doi: 10.1002/cpa.3160130308. Google Scholar
D. Mugnai, Pseudorelativistic Hartree equation with general nonlinearity: existence, non-existence and variational identities, Adv. Nonlinear Stud., 13 (2013), 799-823. doi: 10.1515/ans-2013-0403. Google Scholar
M. Ryznar, Estimate of Green function for relativistic $\alpha$-stable processes, Potential Analysis, 17 (2002), 1-23. doi: 10.1023/A:1015231913916. Google Scholar
S. Secchi, On some nonlinear fractional equations involving the Bessel potential, J. Dynam. Differential Equations, 29 (2017), 1173-1193. doi: 10.1007/s10884-016-9521-y. Google Scholar
E. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton Mathematical Series, No. 30 Princeton University Press, Princeton, N.J. 1970. Google Scholar
P. R. Stinga, User's guide to the fractional Laplacian and the method of semigroups, Handbook of Fractional Calculus with Applications, De Gruyter, Berlin, 2 (2019), 235–265. Google Scholar
P. R. Stinga and J. L. Torrea, Extension problem and Harnack's inequality for some fractional operators, Comm. Partial Differential Equations, 35 (2010), 2092-2122. doi: 10.1080/03605301003735680. Google Scholar
A. Szulkin and T. Weth, The method of Nehari manifold, Handbook of Nonconvex Analysis and Applications, Int. Press, Somerville, MA, 2010,597–632. Google Scholar
M. H. Taibleson, On the theory of Lipschitz spaces of distributions on Euclidean $n$-space. I. Principal properties, J. Math. Mech., 13 (1964), 407-479. Google Scholar
R. A. Weder, Spectral properties of one-body relativistic spin-zero Hamiltonians, Ann. Inst. H. Poincaré Sect. A (N.S.), 20 (1974), 211-220. Google Scholar
R. A. Weder, Spectral analysis of pseudodifferential operators, J. Functional Analysis, 20 (1975), 319-337. doi: 10.1016/0022-1236(75)90038-5. Google Scholar
M. Willem, Minimax Theorems, Progress in Nonlinear Differential Equations and their Applications 24, Birkhäuser Boston, Inc., Boston, MA, 1996. doi: 10.1007/978-1-4612-4146-1. Google Scholar
Xing-Bin Pan. Variational and operator methods for Maxwell-Stokes system. Discrete & Continuous Dynamical Systems, 2020, 40 (6) : 3909-3955. doi: 10.3934/dcds.2020036
Toshiyuki Suzuki. Scattering theory for semilinear Schrödinger equations with an inverse-square potential via energy methods. Evolution Equations & Control Theory, 2019, 8 (2) : 447-471. doi: 10.3934/eect.2019022
Umberto Biccari. Internal control for a non-local Schrödinger equation involving the fractional Laplace operator. Evolution Equations & Control Theory, 2022, 11 (1) : 301-324. doi: 10.3934/eect.2021014
Kaimin Teng, Xian Wu. Concentration of bound states for fractional Schrödinger-Poisson system via penalization methods. Communications on Pure & Applied Analysis, doi: 10.3934/cpaa.2022014
Chenglin Wang, Jian Zhang. Cross-constrained variational method and nonlinear Schrödinger equation with partial confinement. Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021036
Mouhamed Moustapha Fall, Veronica Felli. Unique continuation properties for relativistic Schrödinger operators with a singular potential. Discrete & Continuous Dynamical Systems, 2015, 35 (12) : 5827-5867. doi: 10.3934/dcds.2015.35.5827
Nguyen Dinh Cong, Roberta Fabbri. On the spectrum of the one-dimensional Schrödinger operator. Discrete & Continuous Dynamical Systems - B, 2008, 9 (3&4, May) : 541-554. doi: 10.3934/dcdsb.2008.9.541
Masoumeh Hosseininia, Mohammad Hossein Heydari, Carlo Cattani. A wavelet method for nonlinear variable-order time fractional 2D Schrödinger equation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (7) : 2273-2295. doi: 10.3934/dcdss.2020295
Noboru Okazawa, Toshiyuki Suzuki, Tomomi Yokota. Energy methods for abstract nonlinear Schrödinger equations. Evolution Equations & Control Theory, 2012, 1 (2) : 337-354. doi: 10.3934/eect.2012.1.337
Augusto Visintin. An extension of the Fitzpatrick theory. Communications on Pure & Applied Analysis, 2014, 13 (5) : 2039-2058. doi: 10.3934/cpaa.2014.13.2039
Robert M. Strain. Coordinates in the relativistic Boltzmann theory. Kinetic & Related Models, 2011, 4 (1) : 345-359. doi: 10.3934/krm.2011.4.345
Zhili Ge, Gang Qian, Deren Han. Global convergence of an inexact operator splitting method for monotone variational inequalities. Journal of Industrial & Management Optimization, 2011, 7 (4) : 1013-1026. doi: 10.3934/jimo.2011.7.1013
Amina-Aicha Khennaoui, A. Othman Almatroud, Adel Ouannas, M. Mossa Al-sawalha, Giuseppe Grassi, Viet-Thanh Pham. The effect of caputo fractional difference operator on a novel game theory model. Discrete & Continuous Dynamical Systems - B, 2021, 26 (8) : 4549-4565. doi: 10.3934/dcdsb.2020302
Hengguang Li, Jeffrey S. Ovall. A posteriori eigenvalue error estimation for a Schrödinger operator with inverse square potential. Discrete & Continuous Dynamical Systems - B, 2015, 20 (5) : 1377-1391. doi: 10.3934/dcdsb.2015.20.1377
Xing-Bin Pan. An eigenvalue variation problem of magnetic Schrödinger operator in three dimensions. Discrete & Continuous Dynamical Systems, 2009, 24 (3) : 933-978. doi: 10.3934/dcds.2009.24.933
Ihyeok Seo. Carleman estimates for the Schrödinger operator and applications to unique continuation. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1013-1036. doi: 10.3934/cpaa.2012.11.1013
Valter Pohjola. An inverse problem for the magnetic Schrödinger operator on a half space with partial data. Inverse Problems & Imaging, 2014, 8 (4) : 1169-1189. doi: 10.3934/ipi.2014.8.1169
Joel Andersson, Leo Tzou. Stability for a magnetic Schrödinger operator on a Riemann surface with boundary. Inverse Problems & Imaging, 2018, 12 (1) : 1-28. doi: 10.3934/ipi.2018001
Ru-Yu Lai. Global uniqueness for an inverse problem for the magnetic Schrödinger operator. Inverse Problems & Imaging, 2011, 5 (1) : 59-73. doi: 10.3934/ipi.2011.5.59
Leyter Potenciano-Machado, Alberto Ruiz. Stability estimates for a magnetic Schrödinger operator with partial data. Inverse Problems & Imaging, 2018, 12 (6) : 1309-1342. doi: 10.3934/ipi.2018055
Colloquium Mathematicum
Sets of lengths in atomic unit-cancellative finitely presented monoids
Volume 151 / 2018
Alfred Geroldinger, Emil Daniel Schwab Colloquium Mathematicum 151 (2018), 171-187 MSC: Primary 20M13, 20M05; Secondary 13A05. DOI: 10.4064/cm7242-6-2017 Published online: 20 November 2017
For an element $a$ of a monoid $H$, its set of lengths $\mathsf L (a) \subset \mathbb N$ is the set of all positive integers $k$ for which there is a factorization $a=u_1 \cdot \ldots \cdot u_k$ into $k$ atoms. We study the system $\mathcal L (H) = \{\mathsf L (a) \mid a \in H \}$ with a focus on the unions $\mathcal U_k (H) \subset \mathbb N$ of all sets of lengths containing a given $k \in \mathbb N$. The Structure Theorem for Unions—stating that for all sufficiently large $k$, the sets $\mathcal U_k (H)$ are almost arithmetical progressions with the same difference and global bound—has attracted much attention for commutative monoids and domains. We show that it holds true for the not necessarily commutative monoids in the title satisfying suitable algebraic finiteness conditions. Furthermore, we give an explicit description of the system of sets of lengths of the monoids $B_{n} = \langle a,b \mid ba=b^{n} \rangle $ for $n \in \mathbb N_{\ge 2}$. Based on this description, we show that the monoids $B_n$ are not transfer Krull.
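As a toy illustration of sets of lengths and the unions $\mathcal U_k$ (in the much simpler commutative setting of a numerical monoid, not the monoids $B_n$ studied above), one can brute-force $\mathsf L(a)$ for elements of $\langle 3,5\rangle$. The generators, search bound and function names below are my own illustrative choices.

```python
# Toy sketch: sets of lengths L(n) and unions U_k in the numerical monoid <3, 5>.
# Generators, bound and names are my own illustrative choices, not from the paper.
GENS, BOUND = (3, 5), 60

def sets_of_lengths(gens=GENS, bound=BOUND):
    """L[n] = set of factorization lengths of n as a sum of the given atoms."""
    L = {0: {0}}
    for n in range(1, bound + 1):
        L[n] = set()
        for g in gens:
            if n >= g:
                L[n] |= {k + 1 for k in L[n - g]}
    return L

L = sets_of_lengths()
print(L[15])                    # {3, 5}: three 5's versus five 3's

def union_of_lengths(k, L=L):
    """U_k: union of all sets of lengths (up to the bound) that contain k."""
    out = set()
    for lengths in L.values():
        if k in lengths:
            out |= lengths
    return out

print(sorted(union_of_lengths(3)))
```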
Alfred Geroldinger, Institute for Mathematics and Scientific Computing
NAWI Graz
Heinrichstraße 36
http://imsc.uni-graz.at/geroldinge
Emil Daniel Schwab, Department of Mathematical Sciences
500 W. University Ave
El Paso, TX 79968-0514, U.S.A.
http://www.math.utep.edu/Faculty/schwab/
A particle executes linear simple harmonic motion with an amplitude of $$3\ cm$$. When the particle is at $$2\ cm$$ from the mean position, the magnitude of its velocity is equal to that of its acceleration. Then its time period in seconds is
(A) $\sqrt{5}\,\pi$
(B) $\dfrac{\sqrt{5}}{2}\pi$
(C) $\dfrac{4\pi}{\sqrt{5}}$
The correct option is C $$\dfrac {4\pi}{\sqrt {5}}$$
Velocity of a particle executing S.H.M.: $$v=\omega \sqrt { A^{ 2 }-x^{ 2 } }$$ and acceleration: $$a=-\omega^{ 2 }x$$, where
$$x$$ = displacement at any instant
$$A$$ = amplitude $$=3\ cm$$
$$\omega$$ = angular frequency $$=\cfrac { 2\pi }{ T }$$
$$T$$ = time period
Now, at $$x=2\ cm$$,
$$v=|a|$$
$$\Rightarrow { \omega }^{ 2 }\times 2=\omega \sqrt { { 3 }^{ 2 }-{ 2 }^{ 2 } } $$
$$\Rightarrow { \omega }=\cfrac { \sqrt { 5 } }{ 2 } $$
$$\Rightarrow \cfrac { 2\pi }{ T } =\cfrac { \sqrt { 5 } }{ 2 } $$
$$\Rightarrow T=\cfrac { 4\pi }{ \sqrt { 5 } } $$ | CommonCrawl |
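A short numerical cross-check of this result (my own sketch, reusing the condition $|a|=v$ at $x=2\ cm$ with amplitude $A=3\ cm$):

```python
import math

# Sketch: re-derive T numerically from |a| = v at x = 2 cm, A = 3 cm.
A, x = 3.0, 2.0
omega = math.sqrt(A**2 - x**2) / x          # from omega**2 * x = omega * sqrt(A**2 - x**2)
T = 2 * math.pi / omega
print(omega, T, 4 * math.pi / math.sqrt(5)) # T and 4*pi/sqrt(5) both ≈ 5.62 s
```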
Term (Formalized Language)
2010 Mathematics Subject Classification: Primary: 68P05 [MSN][ZBL]
This entry discusses terms as syntactically correct expressions in a formalized language defined over a signature $\Sigma =(S,F)$ and a set of variables. For terms as informal objects in mathematical expressions, see the entry term. For expressions similar to terms but representing a truth value instead of a type $s\in S$, see the entry formulas.
Let $\Sigma =(S,F)$ be a signature. Let $X_s$ be a set of variables of sort $s\in S$ with $X_s\cap F=\emptyset$ and $X_s\cap S=\emptyset$. Furthermore, let the set of variables be defined as disjoint union $X:= \bigcup_{s\in S} X_s$. Then the set $T_s(\Sigma,X)$ of terms of sort $s$ is defined inductively as the smallest set containing all
$x\in X_s$
$f\in F$ being constants with range $s$ (i.e. type($f$) $=\,\, \rightarrow s$)
$f(t_1,\ldots,t_n)$ for $f\in F$ with type$(f)= s_1\times\cdots\times s_{ar(f)} \longrightarrow s$ and $t_i\in T_{s_i}(\Sigma,X)$
The set $T(\Sigma,X)$ of terms is defined as $T(\Sigma,X):= \bigcup\limits_{s\in S} T_s(\Sigma,X)$.
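A small executable rendering of this inductive definition may help. In the sketch below, sorts are plain strings; the example signature (natural-number symbols) and the names Var, App and sort_of are all illustrative choices of mine, not part of the entry.

```python
# Illustrative sketch only: a possible machine representation of many-sorted terms.
from dataclasses import dataclass
from typing import Tuple

SIG = {                       # symbol -> (argument sorts, result sort), i.e. type(f)
    "zero": ((), "nat"),      # a constant: type(zero) = -> nat
    "succ": (("nat",), "nat"),
    "plus": (("nat", "nat"), "nat"),
}

@dataclass(frozen=True)
class Var:                    # a variable x in X_s carries its sort s
    name: str
    sort: str

@dataclass(frozen=True)
class App:                    # f(t_1, ..., t_n); constants are App with no arguments
    symbol: str
    args: Tuple = ()

def sort_of(t, sig=SIG):
    """Return the sort of a term, checking argument sorts against the signature."""
    if isinstance(t, Var):
        return t.sort
    arg_sorts, result = sig[t.symbol]
    actual = tuple(sort_of(a, sig) for a in t.args)
    assert actual == arg_sorts, f"ill-sorted term {t}"
    return result

t = App("plus", (App("zero"), Var("x", "nat")))
print(sort_of(t))             # -> "nat", so t lies in T_nat(Sigma, X)
```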
The terms $t\in T(\Sigma,X)$ are the elements of the formalized language given by the signature $\Sigma$ and the set $X$ of variables. Supplementing the function symbols $F$ of the signature $\Sigma$ by variables serves several purposes.
Using variables $X$, it is possible to construct well-defined language elements even if no constants belong to the signature. Representing terms as trees, only constants and variables can serve as leaves.
A single term $t\in T(\Sigma,X)$ containing a variable $x\in X_s$ of sort $s\in S$ can be used for representing the infinite collection of terms resulting from the substitution (see below) of $x$ by terms $t'\in T_s(\Sigma,X)$ of sort $s$.
Sometimes, a subterm $t_1\in T_s(\Sigma,X)$, $s\in S$ of a term $t\in T(\Sigma,X)$ is replaced by a term $t_2$ equivalent to $t_1$ (e.g. in the case of formula manipulation). In this case, the term $t$ with subterm $t_1$ replaced by a variable $x\in X_s$ not contained in $t$ defines the so-called context of the manipulation.
Identifying and Manipulating Free Variables
For the purposes listed above, it may be necessary to identify the free variables of a term. This is done using a mapping $V\colon T(\Sigma,X) \longrightarrow 2^X$, which is inductively defined as follows:
For $x\in X$, it holds $V(x)=\{x\}$
For constants, i.e. $c\in F$ with ar($c$) $=0$, it holds $V(c)=\emptyset$
For a term $f(t_1,\ldots,t_n)$ with $f\in F$ of type$(f)= s_1\times\cdots\times s_{ar(f)} \longrightarrow s$ and terms $t_i\in T_{s_i}(\Sigma,X)$, it holds $V(f(t_1,\ldots,t_n)) := V(t_1)\cup\cdots\cup V(t_n)$
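Continuing the toy Var/App representation from the earlier sketch, the mapping $V$ translates directly into a structural recursion (again purely illustrative):

```python
# Illustrative sketch, reusing the Var/App classes from the earlier example.
def free_vars(t):
    """V(t): the set of variables occurring in the term t (mirrors the clauses above)."""
    if isinstance(t, Var):
        return {t}                      # V(x) = {x}
    out = set()                         # constants contribute nothing: V(c) is empty
    for a in t.args:
        out |= free_vars(a)             # V(f(t_1,...,t_n)) = V(t_1) ∪ ... ∪ V(t_n)
    return out

print(free_vars(App("plus", (Var("x", "nat"), App("succ", (Var("y", "nat"),))))))
```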
Let $t,w\in T(\Sigma,X)$ be terms and $x\in X$ be a variable. The substitution $t[x\leftarrow w]$ of $x$ with $w$ is inductively defined as follows:
$x[x\leftarrow w]:= w$
$y[x\leftarrow w]:= y$ for $y\in X$ with $x\neq y$
$c[x\leftarrow w]:= c$ for $c\in F$ with ar($c$) $=0$
$f(t_1,\ldots,t_n)[x\leftarrow w] := f(t_1[x\leftarrow w],\ldots,t_n[x\leftarrow w])$ for a term $f(t_1,\ldots,t_n)$ with $f\in F$ of type$(f)= s_1\times\cdots\times s_{ar(f)} \longrightarrow s$ and terms $t_i\in T_{s_i}(\Sigma,X)$
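With the same toy representation, the substitution $t[x\leftarrow w]$ is one more structural recursion (illustrative sketch, building on the earlier Var/App classes):

```python
# Illustrative sketch, again building on the Var/App classes from the first example.
def substitute(t, x, w):
    """t[x <- w]: replace every occurrence of the variable x in t by the term w."""
    if isinstance(t, Var):
        return w if t == x else t       # the first two clauses above
    # constants have an empty argument tuple, so they pass through unchanged
    return App(t.symbol, tuple(substitute(a, x, w) for a in t.args))

x = Var("x", "nat")
t = App("plus", (x, App("succ", (x,))))
print(substitute(t, x, App("zero")))    # both occurrences of x are replaced
```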
Ground Terms and Morphisms
Terms $t$ without variables, i.e. $t\in T(\Sigma,\emptyset)=:T(\Sigma)$, are called ground terms. The ground terms of sort $s\in S$ are designated as $T_s(\Sigma):= T_s(\Sigma,\emptyset)$. For all sets $X$ of variables and for all sorts $s\in S$, it holds $T_s(\Sigma) \subseteq T_s(\Sigma,X)$. For all sets $X$ of variables, it holds $T(\Sigma) \subseteq T(\Sigma,X)$. A term $t\in T(\Sigma,X)$ is called closed, if $V(t)= \emptyset$. It is closed, iff it is a ground term (i.e. $t\in T(\Sigma)$).
Every signature morphism $m\colon \Sigma_1\longrightarrow \Sigma_2$ for signatures $\Sigma_1=(S_1,F_1), \Sigma_2=(S_2,F_2)$ can be extended to a morphism $m'\colon T(\Sigma_1)\longrightarrow T(\Sigma_2)$ between ground terms. If the morphism $m$ can be extended to a mapping which is defined for sets $X = \bigcup_{s\in S_1} X_s$, $X'= \bigcup_{s\in S_2} X'_s$ of variables as well, with $m(x)\in X'_{m(s)}$ for $x\in X_s$, $s\in S_1$, the signature morphism $m$ can also be extended to a morphism $m^\ast\colon T(\Sigma_1,X)\longrightarrow T(\Sigma_2,X')$ between terms. Such an extension is called a translation. It replaces every function symbol $f\in F_1$ by the corresponding function symbol $m(f)\in F_2$.
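The extension of a signature morphism to terms, $m^\ast$, can be sketched in the same style as one more recursion that renames function symbols (and, optionally, variables); the renaming dictionary below is hypothetical:

```python
# Illustrative sketch of extending a symbol (and variable) renaming to whole terms.
def translate(t, sym_map, var_map=lambda v: v):
    """A toy m*: rename function symbols via sym_map, variables via var_map."""
    if isinstance(t, Var):
        return var_map(t)
    return App(sym_map.get(t.symbol, t.symbol),
               tuple(translate(a, sym_map, var_map) for a in t.args))

# e.g. a hypothetical morphism sending "plus" to "add" and leaving the rest alone
print(translate(App("plus", (App("zero"), Var("x", "nat"))), {"plus": "add"}))
```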
[EM85] H. Ehrig, B. Mahr: "Fundamentals of Algebraic Specifications", Volume 1, Springer 1985
[M89] B. Möller: "Algorithmische Sprachen und Methodik des Programmierens I", lecture notes, Technical University Munich 1989
[W90] M. Wirsing: "Algebraic Specification", in J. van Leeuwen: "Handbook of Theoretical Computer Science", Elsevier 1990
Term (Formalized Language). Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Term_(Formalized_Language)&oldid=29363
Plato said, "Look to the perfection of the heavens for truth," while Aristotle said, "Look around you at what is, if you would know the truth." To Remember: Eskesthai
Oligarchy- A Historical Look from Plato's Dialogues
Backreaction: A little less conversation, a little more science please
An Oligarchy (Greek Ὀλιγαρχία, Oligarkhía) is a form of government in which power effectively rests with a small elite segment of society distinguished by royal, wealth, intellectual, family, military or religious hegemony. The word oligarchy is from the Greek words for "few" (ὀλίγος olígos) and "rule" (ἀρχή arkhē). Such states are often controlled by politically powerful families whose children are heavily conditioned and mentored to be heirs of the power of the oligarchy.[citation needed] Oligarchies have been tyrannical throughout history, being completely reliant on public servitude to exist. Although Aristotle pioneered the use of the term as a synonym for rule by the rich, for which the exact term is plutocracy, oligarchy is not always a rule by wealth, as oligarchs can simply be a privileged group. Some city-states from Ancient Greece were oligarchies.( bold added by me for emphasis)
Change "public servitude" to "consumerism" and this exemplifies for many what democracies have become?
Not only from a historical perspective do I introduce this material, but to indicate that I, along with many, have become disillusioned with the current political structures.
A desire, then, for a more "introspective look" would be accorded in a search for the political ideal. It was done for an "economic ideal," so it must be mustered in the same vein as the Economic Manhattan Project, where scientists gathered for perspective. A desire, then, for a "21st Century View" toward a "just society."
See: Search function for concept here.
A search function listed by percentage of importance was then done in respect of Plato's commentary to be revealed in the Dialogues to offer perspective.
Plato : LAWS
Persons of the dialogue: An Athenian stranger - Cleinias, a Cretan
- Megillus, a Lacedaemonian
Translated by Benjamin Jowett - 60 Pages (Part 2)
Laws-Part 2 Page 23
Ath. I will do as you suggest. There is a tradition of the happy life of mankind in days when all things were spontaneous and abundant. And of this the reason is said to have been as follows: - Cronos knew what we ourselves were declaring, that no human nature invested with supreme power is able to order human affairs and not overflow with insolence and wrong. Which reflection led him to appoint not men but demigods, who are of a higher and more divine race, to be the kings and rulers of our cities; he did as we do with flocks of sheep and other tame animals. For we do not appoint oxen to be the lords of oxen, or goats of goats; but we ourselves are a superior race, and rule over them. In like manner God, in his love of mankind, placed over us the demons, who are a superior race, and they with great ease and pleasure to themselves, and no less to us, taking care of us and giving us peace and reverence and order and justice never failing, made the tribes of men happy and united. And this tradition, which is true, declares that cities of which some mortal man and not God is the ruler, have no escape from evils and toils. Still we must do all that we can to imitate the life which is said to have existed in the days of Cronos, and, as far as the principle of immortality dwells in us, to that we must hearken, both in private and public life, and regulate our cities and houses according to law, meaning by the very term "law," the distribution of mind. But if either a single person or an oligarchy or a democracy has a soul eager after pleasures and desires - wanting to be filled with them, yet retaining none of them, and perpetually afflicted with an endless and insatiable disorder; and this evil spirit, having first trampled the laws under foot, becomes the master either of a state or of an individual - then, as I was saying, salvation is hopeless. And now, Cleinias, we have to consider whether you will or will not accept this tale of mine.
Cle. Certainly we will.
Ath. You are aware - are you not? - that there are said to be as many forms of laws as there are of governments, and of the latter we have already mentioned all those which are commonly recognized. Now you must regard this as a matter of first-rate importance. For what is to be the standard of just and unjust, is once more the point at issue. Men say that the law ought not to regard either military virtue, or virtue in general, but only the interests and power and preservation of the established form of government; this is thought by them to be the best way of expressing the natural definition of justice.
Cle. How?
Ath. Justice is said by them to be the interest of the stronger.
Cle. Speak plainer.
Plato:POLITEIA
Persons of the dialogue: Socrates - Glaucon - Polemarchus
- Adeimantus - Cephalus - Thrasymachus - Cleitophon
Translated by Benjamin Jowett - 71 Pages (Part 4) - Greek fonts
The ruin of oligarchy is the ruin of democracy; the same disease magnified and intensified by liberty overmasters democracy —the truth being that the excessive increase of anything often causes a reaction in the opposite direction; and this is the case not only in the seasons and in vegetable and animal life, but above all in forms of government.
The excess of liberty, whether in states or individuals, seems only to pass into excess of slavery.
Yes, the natural order.
And so tyranny naturally arises out of democracy, and the most aggravated form of tyranny and slavery out of the most extreme form of liberty?
How Do You Stop an Oligarchy?
This was introduced from a "consumerism point of view." It was directed toward the idea of governments who "gather money" to operate "their ideal structures of government" while depleting the resources of the "private citizen" for that government operation.
Tax grabs to support that positions?
I am trying to orientate myself amidst this political terrain and would like nothing better than to be set straight, to remove wrong thinking directed toward finding a "just government." How do you define a "just government?"
Searching for the "Ideal Government for the 21st Century?"
Then is Now?
The ruin of oligarchy is the ruin of democracy; the same disease magnified and intensified by liberty overmasters democracy-Plato:POLITEIA
Can we recognize the decay in the processes of democratization?
It is when we are left with this "feeling of the electorate," seeing that they have lost control of the government, that one senses this alienation from the process of democracy. One recognizes "agendas that are being played out" that were not part of the party stance with which they promised to govern before an election.
So what recourse then to see that the processes of legitimacy are recognized and drawn out, that it will become part of the rule of law and implemented, that there was really nothing that could have been done, citing petitions and initiatives toward recall.
Not so much now is there, as to what party and their allegiance, but to the recognition of democracy in decay that we all can now recognize.
The incapacitated people
The people feel disenfranchised. And it is disenfranchised. But the parliament) is directly legitimized by the electorate (at country level, the provincial assemblies. All other constitutional bodies, President and Chancellor (in progress), derive their legitimacy from it. Shift the political choices but from the circles of power in parliament and coalition rounds, who knows the Constitution does not, therefore, is sidelined by the Bundestag and abused only later to formally rubber-stamp, is a de facto policy demokratiefrei.
Sure I may point to "another country" to demonstrate the social construct of the democracy in question, but it is "not far" from what we can identify within our own, that we see the signs of the time?
Latex rendering update
Using equations on this blog requires that you put a $ sign at the beginning and a $ sign at the end, substituting these for the bracketed [tex] ... [/tex] tags.
Here is the site language that will help blog developers with their LaTeX and give them an alternative to having to shift over to WordPress.
$\odot$ $\oplus$ $\pi$ $\omega$
$\LARGE U=\frac{-GMm}{r}=\frac{-GMh\nu_o}{rc^2}$
$\large h\nu=h\nu_o\left[1-\frac{GM}{rc^2}\right] \qquad \nu=\nu_o\left[1-\frac{GM}{rc^2}\right] \qquad \frac{\Delta \nu}{\nu_o}=-\frac{GM}{rc^2}$
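For instance, the last identity above can be written inline with plain dollar delimiters (a generic MathJax-style example of the substitution described, not necessarily this platform's exact markup):

```latex
% inline form: wrap the formula in single dollar signs instead of [tex]...[/tex]
The gravitational redshift is $\frac{\Delta\nu}{\nu_o}=-\frac{GM}{rc^2}$.
```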
See:Help:Latex Symbols-Mathematics
The aim of dialogue in the academy is not merely to just state or assert one's opinion. Simply asserting one's opinion to one another has more in common with two dogs barking at each another than two people engaging in dialogue. The aim of dialogue is rather to defend or argue for one's opinion against any and all objections. The ideal here is the type of dialogues fictionalized in the early writings of Plato. An opinion that cannot be defended against objections has no place in the contemporary academy, just as it had no place in the ancient Greek academy.See: Against Anonymity(bold added for emphasis by me)
from discussion at Backreaction: Anonymity in Science
Jeffery did not understand the quality being exposed through the Dialogue, so as to be accurate in the description he used when identifying Plato's (what's in a name) Academy to critique against. For the names used were there to coordinate an exchange of perspectives, to reveal the conclusion Plato wanted set forth.
Clearly, things that are raised in discussion highlight further response and the extensions of ideas as they are presented will become manifest. I was historically touched by the subject of Being and Names to have this discussion highlighted by my own opinion as it is expressed in terms of what Anonymity means to me and what the subject of the story presents of itself. What then shall we always remember? Awards of distinction according to others by name, or by what we take into ourselves?
One can gain a sense of anonymity not by name but by the quality of truth in the words chosen. These reflect the essence of, and do become recognizable, not by name again, but by that quality of character. Is this quality of character that one sees destined to come into this world by some trait recognizable by a name? A name "only solidifies the world experience," but does it really reveal the essence of the person?
What ways can one imagine that such essence is descriptive, again, to have it mentioned as something that appears distinct from another person, who currently stands beside and reacts to a situation, unique and rightfully different than another does? To reveal that essence, hidden by quality as essence in this nature of character, was designed by its entrance into the world.
Plato:CRATYLUS Persons of the dialogue: Socrates - Hermogenes - Cratylus
SOC. But if this is a battle of names, some of them asserting that they are like the truth, others contending that they are, how or by what criterion are we to decide between them? For there are no other names to which appeal can be made, but obviously recourse must be had to another standard which, without employing names, will make clear which of the two are right; and this must be a standard which shows the truth of things.
Crat. I agree.
Soc. But if that is true, Cratylus, then I suppose that things may be known without names?
Crat. Clearly.
Soc. But how would you expect to know them? What other way can there be of knowing them, except the true and natural way, through their affinities, when they are akin to each other, and through themselves? For that which is other and different from them must signify something other and different from them.
Crat. What you are saying is, I think, true.
Soc. Well, but reflect; have we not several times acknowledged that names rightly given are the likenesses and images of the things which they name?
Crat. Yes.
Soc. Let us suppose that to any extent you please you can learn things through the medium of names, and suppose also that you can learn them from the things themselves- which is likely to be the nobler and clearer way to learn of the image, whether the image and the truth of which the image is the expression have been rightly conceived, or to learn of the truth whether the truth and the image of it have been duly executed?
Crat. I should say that we must learn of the truth.
Soc. How real existence is to be studied or discovered is, I suspect, beyond you and me. But we may admit so much, that the knowledge of things is not to be derived from names. No; they must be studied and investigated in themselves.
Crat. Clearly, Socrates.
Raphael's Dissertation on Age and Youth?
School of Athens by Raphael
In the centre, Plato - with the philosophy of ideas and theoretical models - points to the sky, while Aristotle - considered the father of science, with the philosophy of forms and the observation of nature - points to the Earth. Many art historians agree on the facial correspondence of Plato with Leonardo, of Heraclitus with Michelangelo, and of Euclid with Bramante.
In a reflective occasion, drawn to the center of Raphael's picture, I am struck by the distinction of "age and youth" as I look at Plato and Aristotle - of what has yet to descend into the minds of innovative and genuine science thinkers, to know that the old man/woman works in concert with the science of youth; and this is something that has still to unfold.
This drawing in red chalk is widely (though not universally) accepted as an original self-portrait. The main reason for hesitation in accepting it as a portrait of Leonardo is that the subject is apparently of a greater age than Leonardo ever achieved. But it is possible that he drew this picture of himself deliberately aged, specifically for Raphael's portrait of him in The School of Athens.See:Leonardo da Vinci
Why, when it is understood that Leonardo da Vinci's face is emblazoned on the likes of Plato by Raphael? It is to call attention to Leonardo's inventiveness, so that one might speculate as to what "descends into any mind" that has been prepped by, and stands in concert, side by side with, the Aristotle of science.
Plato said, "Look to the perfection of the heavens for truth," while Aristotle said, "Look around you at what is, if you would know the truth." To Remember: Eskesthai
The ole wo/man represents all the possibilities of ingenuity as one moves to place their question. When it sinks deep into the vast reservoir of the "quantum descriptive world," it will then make sense that all things follow what has been put before the mind.
Probability of Information Becoming?
I extend this blog posting from a comment here as a means to further understand what it is that geometers can do, in my mind, as well as to point toward the evolution of General Relativity with the aid of the geometer in mind. It was only by Einstein opening up that such a success was accomplished. See also: Backreaction: This and That.
A map of how blogs are linked; but will blogging ever be seen as a genuine way to contribute to science? (Credit: Matthew Hurst/Science Photo Library)Doing Science in the Open
What about the second task, achieving cultural change? As any revolutionary can attest, that is a tall order. Let me describe two strategies that have been successful in the past, and that offer a template for future success. The first is a top-down strategy that has been successfully used by the open-access (OA) movement. The goal of the OA movement is to make scientific research freely available online to everyone in the world. It is an inspiring goal, and the OA movement has achieved some amazing successes. Perhaps most notably, in April 2008 the US National Institutes of Health (NIH) mandated that every paper written with the support of their grants must eventually be made open access. The NIH is the world's largest grant agency; this decision is the scientific equivalent of successfully storming the Bastille. See: Doing science in the open
While this picture above may seem complex, imagine the information behind it?
If you are going to be secretive or unsure of yourself as a scientist, then what happens psychologically to scientists who work in the world so cautiously? I am not talking about being careful in what you might consider presenting, but of what this stance could do to the mind if it had not wanted to share, putting prestige ahead of being open to the public. What about its own perception of failure? Not doing it just right and being perceived as such? This is counterproductive to whatever boldness you might want to engender, no matter the basis of it, so that it might seek new ways to bring about change and revolution in our thinking by providing new opportunities for growth.
Wikipedia contains numerous entries about science; the links between which are shown here; but scientists still seem reluctant to contribute to the site. (Credit-Chris Harrison, Carnegie Mellon University)Doing Science in the Open
There is something truly honorable in my eyes about "service to humanity" when such an opportunity is set forth to provide information not just to the public, but of the willingness to provide succession of experimental possibility, by creating the opportunity for insight and experimental testing methods of abstract notion toward working out the "probability of outcome" for new science to emerge?
A mind map is a diagram used to represent words, ideas, tasks or other items linked to and arranged radially around a central key word or idea. It is used to generate, visualize, structure and classify ideas, and as an aid in study, organization, problem solving, decision making, and writing.
It is an image-centered diagram that represents semantic or other connections between portions of information. By presenting these connections in a radial, non-linear graphical manner, it encourages a brainstorming approach to any given organizational task, eliminating the hurdle of initially establishing an intrinsically appropriate or relevant conceptual framework to work within.
A mind map is similar to a semantic network or cognitive map but there are no formal restrictions on the kinds of links used.
The elements are arranged intuitively according to the importance of the concepts and they are organized into groupings, branches, or areas. The uniform graphic formulation of the semantic structure of information on the method of gathering knowledge, may aid recall of existing memories. See: Mind Map
The fish is "soul food." The water the unconscious, all possible facets of the sensorium. The hook and worm, aspects of the "focus held" while you are fishing.
....versus....
Think about this for a moment. You have a vast library of information. This imaginary figure of mind moves into it and, along with it, the possibilities of many pathways merging neurologically together. It can only do this, of course, once the framework had already been established in mind (a soul in choosing to accept this responsibility), that a position is adopted. It is like fishing. You drop this bait/line into a vast reservoir for some knowledge (soul food) to emerge, as the next step in your own evolution. A becoming, as in the emergence of thought-forming apparatus, exposed to that which was not previously viewed before, yet had always existed here in that possibility.
This type of growth is unprecedented in this way by supplying information toward such service for humanity, is, as if for every life on earth there is this opportunity for it to succeed in what it had chosen to come forward with in mind. To learn this time around. Only by "increasing the probability of outcome" can one achieve in my mind the possibility of any new science to emerge. Successful attempts toward growth and meaning to accomplish, what any soul had set out to do.
A concept map is a diagram showing the relationships among concepts. They are graphical tools for organizing and representing knowledge.
Concepts, usually represented as boxes or circles, are connected with labeled arrows in a downward-branching hierarchical structure. The relationship between concepts can be articulated in linking phrases such as "gives rise to", "results in", "is required by," or "contributes to". [1]
The technique for visualizing these relationships among different concepts is called "Concept mapping".
Pushing Back Time
Credit: X-ray: NASA/CXC/PSU/S.Park & D.Burrows.; Optical: NASA/STScI/CfA/P.Challis
February 24, 2007 marks the 20th anniversary of one of the most spectacular events observed by astronomers in modern times, Supernova 1987A. The destruction of a massive star in the Large Magellanic Cloud, a nearby galaxy, spawned detailed observations by many different telescopes, including NASA's Chandra X-ray Observatory and Hubble Space Telescope. The outburst was visible to the naked eye, and is the brightest known supernova in almost 400 years.
This composite image shows the effects of a powerful shock wave moving away from the explosion. Bright spots of X-ray and optical emission arise where the shock collides with structures in the surrounding gas. These structures were carved out by the wind from the destroyed star. Hot-spots in the Hubble image (pink-white) now encircle Supernova 1987A like a necklace of incandescent diamonds. The Chandra data (blue-purple) reveals multimillion-degree gas at the location of the optical hot-spots. These data give valuable insight into the behavior of the doomed star in the years before it exploded.See:Supernova 1987A:
Twenty Years Since a Spectacular Explosion
(Bold added by me for emphasis)
Supernova Starting Gun: Neutrinos
Next they independently estimated how the hypothetical neutrinos would be picked up in a detector as massive as Super-Kamiokande in Japan, which contains 50,000 tons of water. The detector would only see a small fraction of the neutrinos. So the team outlined a method for matching the observed neutrinos to the supernova's expected luminosity curve to figure out the moment in time--to within about 10 milliseconds--when the sputtering star would have begun emitting neutrinos. In their supernova model, the bounce, the time of the first gravitational waves, occurs about 5 milliseconds before neutrino emission. So looking back at their data, gravitational wave hunters should focus on that point in time.
(again bold added for emphasis)
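As a rough illustration of the matching idea described in that quote (entirely my own toy, not the authors' analysis), one can slide an assumed emission template over synthetic event times and pick the onset that maximizes a simple Poisson log-likelihood:

```python
import numpy as np

# Toy illustration only: estimate the neutrino onset time t0 by sliding an
# ASSUMED emission template over synthetic event times and maximizing a
# Poisson log-likelihood.  Template shape and all numbers are my own choices.
rng = np.random.default_rng(0)

def template(t):
    """Assumed emission rate after onset: sharp rise, ~0.1 s decay (arbitrary)."""
    return np.where(t < 0, 0.0, 2000.0 * t * np.exp(-t / 0.1))

true_t0 = 0.123
grid = np.linspace(0.0, 1.0, 2001)
dt = grid[1] - grid[0]
rate = template(grid - true_t0)
events = grid[rng.random(grid.size) < rate * dt]     # ~20 synthetic "events"

def log_like(t0):
    r = template(events - t0)
    return np.sum(np.log(r + 1e-12)) - np.trapz(template(grid - t0), grid)

candidates = np.linspace(0.0, 0.5, 501)
best = candidates[np.argmax([log_like(t0) for t0 in candidates])]
print(best)   # should land in the vicinity of true_t0, within the toy's statistics
```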
See Also:SciDAC Computational Astrophysics Consortium
Publications by Authors
Active Faculty
Nurit Argov-Argaman
Lior David
Oren Forkush
Orna Halevy
Berta Sivan
Sameer Mabjeesh
Rina Meidan
Erez Mills
Zvi Roth
Israel Rozenboim
Sharon Schlesinger
Zehava Uni
Jaap van Rijn
Amiel Berman
Aharon Friedman
Dan Heller
David Wolfenson
Oocyte maturation in plasma or follicular fluid obtained from lipopolysaccharide-treated cows disrupts its developmental competence
Characterization of gonadotropin receptors Fshr and Lhr in Japanese medaka, Oryzias latipes
Changes in lipid droplets morphometric features in mammary epithelial cells upon exposure to non-esterified free fatty acids compared with VLDL
Early posthatch thermal stress causes long-term adverse effects on pectoralis muscle development in broilers
Cyclic-di-GMP regulation promotes survival of a slow-replicating subpopulation of intracellular Salmonella Typhimurium
Mono(2-ethylhexyl) phthalate (MEHP) induces transcriptomic alterations in oocytes and their derived blastocysts
Basavaraja, R. ; Madusanka, S. T. ; Drum, J. N. ; Shrestha, K. ; Farberov, S. ; Wiltbank, M. C. ; Sartori, R. ; Meidan, R. Interferon-Tau Exerts Direct Prosurvival and Antiapoptotic Actions in Luteinized Bovine Granulosa Cells. Scientific Reports 2019, 9. Publisher's Version Abstract
Interferon-tau (IFNT), serves as a signal to maintain the corpus luteum (CL) during early pregnancy in domestic ruminants. We investigated here whether IFNT directly affects the function of luteinized bovine granulosa cells (LGCs), a model for large-luteal cells. Recombinant ovine IFNT (roIFNT) induced the IFN-stimulated genes (ISGs; MX2, ISG15, and OAS1Y). IFNT induced a rapid and transient (15–45 min) phosphorylation of STAT1, while total STAT1 protein was higher only after 24 h. IFNT treatment elevated viable LGCs numbers and decreased dead/apoptotic cell counts. Consistent with these effects on cell viability, IFNT upregulated cell survival proteins (MCL1, BCL-xL, and XIAP) and reduced the levels of gamma-H2AX, cleaved caspase-3, and thrombospondin-2 (THBS2) implicated in apoptosis. Notably, IFNT reversed the actions of THBS1 on cell viability, XIAP, and cleaved caspase-3. Furthermore, roIFNT stimulated proangiogenic genes, including FGF2, PDGFB, and PDGFAR. Corroborating the in vitro observations, CL collected from day 18 pregnant cows comprised higher ISGs together with elevated FGF2, PDGFB, and XIAP, compared with CL derived from day 18 cyclic cows. This study reveals that IFNT activates diverse pathways in LGCs, promoting survival and blood vessel stabilization while suppressing cell death signals. These mechanisms might contribute to CL maintenance during early pregnancy. © 2019, The Author(s).
Shrestha, K. ; Rodler, D. ; Sinowatz, F. ; Meidan, R. Chapter 16 - Corpus Luteum Formation. In The Ovary (Third Edition); Leung, P. C. K. ; Adashi, E. Y., Ed. The Ovary (Third Edition); Academic Press, 2019; pp. 255 - 267. Publisher's Version
Farberov, S. ; Meidan, R. Fibroblast growth factor-2 and transforming growth factor-beta1 oppositely regulate miR-221 that targets thrombospondin-1 in bovine luteal endothelial cells. Biology of Reproduction 2018, 98, 366-375. Publisher's VersionAbstract
Thrombospondin-1 (THBS1) affects corpus luteum (CL) regression. Highly induced during luteolysis, it acts as a natural anti-angiogenic, proapoptotic compound. THBS1 expression is regulated in bovine luteal endothelial cells (LECs) by fibroblast growth factor-2 (FGF2) and transforming growth factor-beta1 (TGFB1) acting in an opposite manner. Here we sought to identify specific microRNAs (miRNAs) targeting THBS1 and investigate their possible involvement in FGF2 and TGFB1-mediated THBS1 expression. Several miRNAs predicted to target THBS1 mRNA (miR-1, miR-18a, miR-144, miR-194, and miR-221) were experimentally tested. Of these, miR-221 was shown to efficiently target THBS1 expression and function in LECs. We found that this miRNA is highly expressed in luteal cells and in mid-cycle CL. Consistent with the inhibition of THBS1 function, miR-221 also reduced Serpin Family E Member 1 [SERPINE1] in LECs and promoted angiogenic characteristics of LECs. Plasminogen activator inhibitor-1 (PAI-1), the gene product of SERPINE1, inhibited cell adhesion, suggesting that PAI-1, like THBS1, has anti-angiogenic properties. Importantly, FGF2, which negatively regulates THBS1, elevates miR-221. Conversely, TGFB1 that stimulates THBS1, significantly reduces miR-221. Furthermore, FGF2 enhances the suppression of THBS1 caused by miR-221 mimic, and prevents the increase in THBS1 induced by miR-221 inhibitor. In contrast, TGFB1 reverses the inhibitory effect of miR-221 mimic on THBS1, and enhances the upregulation of THBS1 induced by miR-221 inhibitor. These data support the contention that FGF2 and TGFB1 modulate THBS1 via miR-221. These in vitro data propose that dynamic regulation of miR-221 throughout the cycle, affecting THBS1 and SERPINE1, can modulate vascular function in the CL. © The Author(s) 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved.
Ochoa, J. C. ; Peñagaricano, F. ; Baez, G. M. ; Melo, L. F. ; Motta, J. C. L. ; Garcia-Guerra, A. ; Meidan, R. ; Pinheiro Ferreira, J. C. ; Sartori, R. ; Wiltbank, M. C. Mechanisms for rescue of corpus luteum during pregnancy: Gene expression in bovine corpus luteum following intrauterine pulses of prostaglandins e 1 and F 2α. Biology of Reproduction 2018, 98, 465-479. Publisher's VersionAbstract
In ruminants, uterine pulses of prostaglandin (PG) F 2α characterize luteolysis, while increased PGE 2 /PGE 1 distinguish early pregnancy. This study evaluated intrauterine (IU) infusions of PGF 2α and PGE 1 pulses on corpus luteum (CL) function and gene expression. Cows on day 10 of estrous cycle received 4 IU infusions (every 6 h; n = 5/treatment) of saline, PGE 1 (2 mg PGE 1), PGF 2α (0.25 mg PGF 2α), or PGE 1 + PGF 2α. A luteal biopsy was collected at 30 min after third infusion for determination of gene expression by RNA-Seq. As expected, IU pulses of PGF 2α decreased (P < 0.01) P4 luteal volume. However, there were no differences in circulating P4 or luteal volume between saline, PGE 1, and PGE 1 + PGF 2α, indicating inhibition of PGF 2α -induced luteolysis by IU pulses of PGE 1. After third pulse of PGF 2α, luteal expression of 955 genes were altered (false discovery rate [FDR] < 0.01), representing both typical and novel luteolytic transcriptomic changes. Surprisingly, after third pulse of PGE 1 or PGE 1 + PGF 2α, there were no significant changes in luteal gene expression (FDR > 0.10) compared to saline cows. Increased circulating concentrations of the metabolite of PGF 2α (PGFM; after PGF 2α and PGE 1 + PGF 2α) and the metabolite PGE (PGEM; after PGE 1 and PGE 1 + PGF 2α) demonstrated that PGF 2α and PGE 1 are entering bloodstream after IU infusions. Thus, IU pulses of PGF 2α and PGE 1 allow determination of changes in luteal gene expression that could be relevant to understanding luteolysis and pregnancy. Unexpectedly, by third pulse of PGE 1, there is complete blockade of either PGF 2α transport to the CL or PGF 2α action by PGE 1 resulting in complete inhibition of transcriptomic changes following IU PGF 2α pulses. © The Author(s) 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. For permissions, please e-mail:.
Farberov, S. ; Basavaraja, R. ; Meidan, R. Thrombospondin-1 at the crossroads of corpus luteum fate decisions. Reproduction 2018.Abstract
The multimodular matricellular protein thrombospondin-1 (THBS1) was among the first identified endogenous antiangiogenic molecules. Recent studies have shown THBS1-mediated suppression of angiogenesis and other critical activities for corpus luteum (CL) regression. THBS1 is specifically induced by prostaglandin F2alpha in mature CL undergoing regression, whereas luteinizing signals such as luteinizing hormone and insulin reduced its expression. THBS1 interacts both synergistically and antagonistically with other essential luteal factors, such as fibroblast growth factor 2, transforming growth factor beta1, and serpin family E member 1, to promote vascular instability, apoptosis, and matrix remodeling during luteal regression. Expression of THBS1 is also downregulated by pregnancy recognition signals to maintain the CL during early pregnancy. This dynamic pattern of luteal expression, the extensive interactivity with other luteal factors, and strong antiangiogenic and proapoptotic activities indicate that THBS1 is a major determinant of CL fate.
Shrestha, K. ; Meidan, R. The cAMP-EPAC Pathway Mediates PGE2-Induced FGF2 in Bovine Granulosa Cells. Endocrinology 2018, 159, 3482 - 3491. Publisher's VersionAbstract
During the periovulatory period, the profile of fibroblast growth factor 2 (FGF2) coincides with elevated prostaglandin E2 (PGE2) levels. We investigated whether PGE2 can directly stimulate FGF2 production in bovine granulosa cells and, if so, which prostaglandin E2 receptor (PTGER) type and signaling cascades are involved. PGE2 temporally stimulated FGF2. Accordingly, endoperoxide-synthase2–silenced cells, exhibiting low endogenous PGE2 levels, had reduced FGF2. Furthermore, elevation of viable granulosa cell numbers by PGE2 was abolished with FGF2 receptor 1 inhibitor, suggesting that FGF2 mediates this action of PGE2. Epiregulin (EREG), a known PGE2-inducible gene, was studied alongside FGF2. PTGER2 agonist elevated cAMP as well as FGF2 and EREG levels. However, a marked difference between cAMP-induced downstream signaling was observed for FGF2 and EREG. Whereas FGF2 upregulated by PGE2, PTGER2 agonist, or forskolin was unaffected by the protein kinase A (PKA) inhibitor H89, EREG was significantly inhibited. FGF2 was dose-dependently stimulated by the exchange protein directly activated by cAMP (EPAC) activator; a similar induction was observed for EREG. However, forskolin-stimulated FGF2, but not EREG, was inhibited in EPAC1-silenced cells. These findings ascribe a novel autocrine role for PGE2, namely, elevating FGF2 production in granulosa cells. This study also reveals that cAMP-activated EPAC1, rather than PKA, mediates the effect of PGE2/PTGER2 on the expression of FGF2. Stimulation of EREG by PGE2 is also mediated by PTGER2 but, in contrast to FGF2, EREG was found to be PKA sensitive. PGE2-stimulated FGF2 can act to maintain granulosa cell survival; it can also act on ovarian endothelial cells to promote angiogenesis.
Kfir, S. ; Basavaraja, R. ; Wigoda, N. ; Ben-Dor, S. ; Orr, I. ; Meidan, R. Genomic profiling of bovine corpus luteum maturation. PLOS ONE 2018, 13, e0194456 -. Publisher's VersionAbstract
To unveil novel global changes associated with corpus luteum (CL) maturation, we analyzed transcriptome data for the bovine CL on days 4 and 11, representing the developing vs. mature gland. Our analyses revealed 681 differentially expressed genes (363 and 318 on day 4 and 11, respectively), with ≥2 fold change and FDR of <5%. Different gene ontology (GO) categories were represented prominently in transcriptome data at these stages (e.g. days 4: cell cycle, chromosome, DNA metabolic process and replication and on day 11: immune response; lipid metabolic process and complement activation). Based on bioinformatic analyses, select genes expression in day 4 and 11 CL was validated with quantitative real-time PCR. Cell specific expression was also determined in enriched luteal endothelial and steroidogenic cells. Genes related to the angiogenic process such as NOS3, which maintains dilated vessels and MMP9, matrix degrading enzyme, were higher on day 4. Importantly, our data suggests day 11 CL acquire mechanisms to prevent blood vessel sprouting and promote their maturation by expressing NOTCH4 and JAG1, greatly enriched in luteal endothelial cells. Another endothelial specific gene, CD300LG, was identified here in the CL for the first time. CD300LG is an adhesion molecule enabling lymphocyte migration, its higher levels at mid cycle are expected to support the transmigration of immune cells into the CL at this stage. Together with steroidogenic genes, most of the genes regulating de-novo cholesterol biosynthetic pathway (e.g HMGCS, HMGCR) and cholesterol uptake from plasma (LDLR, APOD and APOE) were upregulated in the mature CL. These findings provide new insight of the processes involved in CL maturation including blood vessel growth and stabilization, leucocyte transmigration as well as progesterone synthesis as the CL matures.
Shrestha, K. ; Onasanya, A. E. ; Eisenberg, I. ; Wigoda, N. ; Yagel, S. ; Yalu, R. ; Meidan, R. ; Imbar, T. miR-210 and GPD1L regulate EDN2 in primary and immortalized human granulosa-lutein cells. Reproduction 2018, 155, 197-205.Abstract
Endothelin-2 (EDN2), expressed at a narrow window during the periovulatory period, critically affects ovulation and corpus luteum (CL) formation. LH (acting mainly via cAMP) and hypoxia are implicated in CL formation; therefore, we aimed to elucidate how these signals regulate using human primary (hGLCs) and immortalized (SVOG) granulosa-lutein cells. The hypoxiamiR, microRNA-210 (miR-210) was identified as a new essential player in expression. Hypoxia (either mimetic compound-CoCl, or low O) elevated hypoxia-inducible factor 1A (HIF1A), miR-210 and Hypoxia-induced miR-210 was suppressed in HIF1A-silenced SVOG cells, suggesting that miR-210 is HIF1A dependent. Elevated miR-210 levels in hypoxia or by miR-210 overexpression, increased Conversely, miR-210 inhibition reduced levels, even in the presence of CoCl, indicating the importance of miR-210 in the hypoxic induction of A molecule that destabilizes HIF1A protein, glycerol-3-phosphate dehydrogenase 1-like gene-, was established as a miR-210 target in both cell types. It was decreased by miR-210-mimic and was increased by miR-inhibitor. Furthermore, reducing by endogenously elevated miR-210 (in hypoxia), miR-210-mimic or by siRNA resulted in elevated HIF1A protein and levels, implying a vital role for in the hypoxic induction of Under normoxic conditions, forskolin (adenylyl cyclase activator) triggered changes typical of hypoxia. It elevated , and miR-210 while inhibiting Furthermore, HIF1A silencing greatly reduced forskolin's ability to elevate and miR-210. This study highlights the novel regulatory roles of miR-210 and its gene target, GPD1L, in hypoxia and cAMP-induced by human granulosa-lutein cells.
Basavaraja, R. ; Przygrodzka, E. ; Pawlinski, B. ; Gajewski, Z. ; Kaczmarek, M. M. ; Meidan, R. Interferon-tau promotes luteal endothelial cell survival and inhibits specific luteolytic genes in bovine corpus luteum. Reproduction 2017, 154. Publisher's Version
Farberov, S. ; Meidan, R. Fibroblast growth factor-2 and transforming growth factor-beta1 oppositely regulate miR-221 that targets thrombospondin-1 in bovine luteal endothelial cells. Biology of Reproduction 2017, 98, 366 - 375. Publisher's VersionAbstract
Thrombospondin-1 (THBS1) affects corpus luteum (CL) regression. Highly induced during luteolysis, it acts as a natural anti-angiogenic, proapoptotic compound. THBS1 expression is regulated in bovine luteal endothelial cells (LECs) by fibroblast growth factor-2 (FGF2) and transforming growth factor-beta1 (TGFB1) acting in an opposite manner. Here we sought to identify specific microRNAs (miRNAs) targeting THBS1 and investigate their possible involvement in FGF2 and TGFB1-mediated THBS1 expression. Several miRNAs predicted to target THBS1 mRNA (miR-1, miR-18a, miR-144, miR-194, and miR-221) were experimentally tested. Of these, miR-221 was shown to efficiently target THBS1 expression and function in LECs. We found that this miRNA is highly expressed in luteal cells and in mid-cycle CL. Consistent with the inhibition of THBS1 function, miR-221 also reduced Serpin Family E Member 1 [SERPINE1] in LECs and promoted angiogenic characteristics of LECs. Plasminogen activator inhibitor-1 (PAI-1), the gene product of SERPINE1, inhibited cell adhesion, suggesting that PAI-1, like THBS1, has anti-angiogenic properties. Importantly, FGF2, which negatively regulates THBS1, elevates miR-221. Conversely, TGFB1 that stimulates THBS1, significantly reduces miR-221. Furthermore, FGF2 enhances the suppression of THBS1 caused by miR-221 mimic, and prevents the increase in THBS1 induced by miR-221 inhibitor. In contrast, TGFB1 reverses the inhibitory effect of miR-221 mimic on THBS1, and enhances the upregulation of THBS1 induced by miR-221 inhibitor. These data support the contention that FGF2 and TGFB1 modulate THBS1 via miR-221. These in vitro data propose that dynamic regulation of miR-221 throughout the cycle, affecting THBS1 and SERPINE1, can modulate vascular function in the CL.
Ochoa, J. C. ; Peñagaricano, F. ; Baez, G. M. ; Melo, L. F. ; Motta, J. C. L. ; Garcia-Guerra, A. ; Meidan, R. ; Pinheiro Ferreira, J. C. ; Sartori, R. ; Wiltbank, M. C. Mechanisms for rescue of corpus luteum during pregnancy: gene expression in bovine corpus luteum following intrauterine pulses of prostaglandins E1 and F2α. Biology of Reproduction 2017, 98, 465 - 479. Publisher's VersionAbstract
In ruminants, uterine pulses of prostaglandin (PG) F2α characterize luteolysis, while increased PGE2/PGE1 distinguish early pregnancy. This study evaluated intrauterine (IU) infusions of PGF2α and PGE1 pulses on corpus luteum (CL) function and gene expression. Cows on day 10 of estrous cycle received 4 IU infusions (every 6 h; n = 5/treatment) of saline, PGE1 (2 mg PGE1), PGF2α (0.25 mg PGF2α), or PGE1 + PGF2α. A luteal biopsy was collected at 30 min after third infusion for determination of gene expression by RNA-Seq. As expected, IU pulses of PGF2α decreased (P < 0.01) P4 luteal volume. However, there were no differences in circulating P4 or luteal volume between saline, PGE1, and PGE1 + PGF2α, indicating inhibition of PGF2α-induced luteolysis by IU pulses of PGE1. After third pulse of PGF2α, luteal expression of 955 genes were altered (false discovery rate [FDR] < 0.01), representing both typical and novel luteolytic transcriptomic changes. Surprisingly, after third pulse of PGE1 or PGE1 + PGF2α, there were no significant changes in luteal gene expression (FDR > 0.10) compared to saline cows. Increased circulating concentrations of the metabolite of PGF2α (PGFM; after PGF2α and PGE1 + PGF2α) and the metabolite PGE (PGEM; after PGE1 and PGE1 + PGF2α) demonstrated that PGF2α and PGE1 are entering bloodstream after IU infusions. Thus, IU pulses of PGF2α and PGE1 allow determination of changes in luteal gene expression that could be relevant to understanding luteolysis and pregnancy. Unexpectedly, by third pulse of PGE1, there is complete blockade of either PGF2α transport to the CL or PGF2α action by PGE1 resulting in complete inhibition of transcriptomic changes following IU PGF2α pulses.
Meidan, R. ; Girsh, E. ; Mamluk, R. ; Levy, N. ; Farberov, S. Luteolysis in Ruminants: Past Concepts, New Insights, and Persisting Challenges. In The Life Cycle of the Corpus Luteum; Meidan, R., Ed. The Life Cycle of the Corpus Luteum; Springer International Publishing: Cham, 2017; pp. 159–182. Publisher's VersionAbstract
It is well established that in ruminants, and in other species with estrous cycles, luteal regression is stimulated by the episodic release of prostaglandin F2$\alpha$ (PGF2$\alpha$) from the uterus, which reaches the corpus luteum (CL) through a countercurrent system between the uterine vein and the ovarian artery. Because of their luteolytic properties, PGF2$\alpha$ and its analogues are routinely administered to induce CL regression and synchronization of estrus, and as such, it is the basis of protocols for synchronizing ovulation. Luteal regression is defined as the loss of steroidogenic function (functional luteolysis) and the subsequent involution of the CL (structural luteolysis). During luteolysis, the CL undergoes dramatic changes in its steroidogenic capacity, vascularization, immune cell activation, ECM composition, and cell viability. Functional genomics and many other studies during the past 20 years elucidated the mechanism underlying PGF2$\alpha$ actions, substantially revising old concepts. PGF2$\alpha$ acts directly on luteal steroidogenic and endothelial cells, which express PGF2$\alpha$ receptors (PTGFR), or indirectly on immune cells lacking PTGFR, which can be activated by other cells within the CL. Accumulating evidence now indicates that the diverse processes initiated by uterine or exogenous PGF2$\alpha$, ranging from reduction of steroid production to apoptotic cell death, are mediated by locally produced factors. Data summarized here show that PGF2$\alpha$ stimulates luteal steroidogenic and endothelial cells to produce factors such as endothelin-1, angiopoietins, nitric oxide, fibroblast growth factor 2, thrombospondins, transforming growth factor-B1, and plasminogen activator inhibitor-B1, which act sequentially to inhibit progesterone production, angiogenic support, cell survival, and ECM remodeling to accomplish CL regression.
Meidan, R. ; Girsh, E. ; Mamluk, R. ; Levy, N. ; Farberov, S. Luteolysis in ruminants: Past concepts, new insights, and persisting challenges. In The Life Cycle of the Corpus Luteum; The Life Cycle of the Corpus Luteum; 2016; pp. 159 - 182. Publisher's Version
Meidan, R. The life cycle of the corpus luteum; The Life Cycle of the Corpus Luteum; 2016; pp. 1 - 283. Publisher's Version
Farberov, S. ; Meidan, R. Thrombospondin-1 Affects Bovine Luteal Function via Transforming Growth Factor-Beta1-Dependent and Independent Actions. Biology of Reproduction 2016, 94. Publisher's VersionAbstract
Thrombospondin-1 (THBS1) and transforming growth factor-beta1 (TGFB1) are specifically up-regulated by prostaglandin F2alpha in mature corpus luteum (CL). This study examined the relationship between the expression of THBS1 and TGFB1 and the underlying mechanisms of their actions in luteal endothelial cells (ECs). TGFB1 stimulated SMAD2 phosphorylation and SERPINE1 levels in dose- and time-dependent manners in luteal EC. THBS1 also elevated SERPINE1; this effect was abolished by TGFB1 receptor-1 kinase inhibitor (SB431542). The findings here further imply that THBS1 activates TGFB1 in luteal ECs: THBS1 increased the effects of latent TGFB1 on phosphorylated SMAD (phospho-SMAD) 2 and SERPINE1. THBS1 silencing significantly decreased SERPINE1 and levels of phospho-SMAD2. Lastly, THBS1 actions on SERPINE1 were inhibited by LSKL peptide (TGFB1 activation inhibitor); LSKL also counteracted latent TGFB1-induced phospho-SMAD2. We found that TGFB1 up-regulated its own mRNA levels and those of THBS1. Both compounds generated apoptosis, but THBS1 was significantly more effective (2.5-fold). Notably, this effect of THBS1 was not mediated by TGFB1. THBS1 and TGFB1 also differed in their activation of p38 mitogen-activated protein kinase. Whereas TGFB1 rapidly induced phospho-p38, THBS1 had a delayed effect. Inhibition of p38 pathway by SB203580 did not modulate TGFB1 effect on cell viability, but it amplified THBS1 actions. THBS1-stimulated caspase-3 activation coincided with p38 phosphorylation, suggesting that caspase-induced DNA damage initiated p38 phosphorylation. The in vitro data suggest that a feed-forward loop exists between THBS1, TGFB1, and SERPINE1. Indeed all these three genes were similarly induced in the regressing CL. Their gene products can promote vascular instability, apoptosis, and matrix remodeling during luteolysis.
ePSproc
ePSproc Readme
Extended installation notes
ePSproc base and multijob class intro
Class demos:
ePSdata interface demo
ePSproc wavefunction plotting tests & demo
ePSproc wavefunction plotting tests & demo: CH3I with animation
Function demos:
ePSproc demo
ePSproc \(\beta_{L,M}\) calculations demo
ePSproc X-section demo
Matrix element LM plotting routines demo
Data stuctures - basic overview and demo
Matlab:
ePSproc Matlab demo
ePolyScat basics tutorial
Theoretical background
Numerical background
ePolyScat basic example calculation: N2 \(3\sigma_g^{-1}\) photoionization
System & job definition
Init job & run calculations
Running ePolyScat
ePolyScat: preparing an input file
Symmetry selection rules in photoionization
Worked example: N2 \(3\sigma_g^{-1}\)
Example: NO2 with autogeneration
Multi-energy calculations
ePolyScat: advanced usage
ePolyScat advanced usage tutorial
Geometric methods summary
Advanced/Special Topics:
Degenerate states tutorial and demo
Test Notebooks:
ePSproc function defn tests
Frame definitions & rotations tests
Low-level function tests & benchmarks
ePSproc - basic plotting development, XC version
hvPlotters function tests
Method Development:
Method development for geometric functions
Method development for geometric functions pt 2: \(\beta\) parameters with geometric functions.
Method development for geometric functions pt 3: \(\beta\) aligned-frame (AF) parameters with geometric functions.
ePSproc LF/AF function verification & tests
cclib + chemlab for orbital functions
Density Matrices notes + demo (ePSproc + PEMtk dev.)
Density Matrices
Function ref:
epsproc package
ePolyScat basics tutorial¶
Paul Hockett
Disclaimer: I am an enthusiastic ePolyScat user for photoionization calculations, but not an expert on the code. Nonetheless, this tutorial aims to go over some of the key features/uses of ePS for such problems - as far as my own usage goes - and provide an introduction and resource to new users.
Overview & resources¶
ePolyScat (ePS) is an open-source tool for numerical computation of electron-molecule scattering & photoionization by Lucchese & coworkers. For more details:
The ePolyScat website and manual. Note that the manual uses a frames-based layout; use the menu on the page to navigate to sub-sections (direct links break the menu).
Calculation of low-energy elastic cross sections for electron-CF4 scattering, F. A. Gianturco, R. R. Lucchese, and N. Sanna, J. Chem. Phys. 100, 6464 (1994), http://dx.doi.org/10.1063/1.467237
Cross section and asymmetry parameter calculation for sulfur 1s photoionization of SF6, A. P. P. Natalense and R. R. Lucchese, J. Chem. Phys. 111, 5344 (1999), http://dx.doi.org/10.1063/1.479794
Applications of the Schwinger variational principle to electron-molecule collisions and molecular photoionization. Lucchese, R. R., Takatsuka, K., & McKoy, V. (1986). Physics Reports, 131(3), 147–221. https://doi.org/10.1016/0370-1573(86)90147-X (comprehensive discussion of the theory and methods underlying the code).
ePSproc is an open-source tool for post-processing & visualisation of ePS results, aimed primarily at photoionization studies.
Ongoing documentation is on Read the Docs.
Source code is available on Github.
For more background, see the software metapaper for the original release of ePSproc (Aug. 2016): ePSproc: Post-processing suite for ePolyScat electron-molecule scattering calculations, on Authorea or arXiv 1611.04043.
ePSdata is an open-data/open-science collection of ePS + ePSproc results.
ePSdata collects ePS datasets, post-processed via ePSproc (Python) in Jupyter notebooks, for a full open-data/open-science transparent pipeline.
ePSdata is currently (Jan 2020) collecting existing calculations from 2010 - 2019, from the femtolabs at NRC, with one notebook per ePS job.
In future, ePSdata pages will be automatically generated from ePS jobs (via the ePSman toolset, currently in development), for immediate dissemination to the research community.
Source notebooks are available on the Github project pages, and notebooks + datasets via Zenodo repositories (one per dataset). Each notebook + dataset is given a Zenodo DOI for full traceability, and notebooks are versioned on Github.
Note: ePSdata may also be linked or mirrored on the existing ePolyScat Collected Results OSF project, but will effectively supercede those pages.
All results are released under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license, and are part of our ongoing Open Science initiative.
Workflow¶
The general workflow for photoionization calculations plus post-processing is shown below. This pipeline involves a range of code suites, as shown in the main workflow; some additional details are also illustrated. This tutorial will only discuss ePS.
Theoretical background¶
For general scattering theory notes, try these textbooks:
Quantum Mechanics Volume I. Messiah, A. (1970). North-Holland Publishing Company.
Introduction to the Quantum Theory of Scattering. Rodberg, L. S., & Thaler, R. M. (1967). Academic Press.
For notes focussed on photoionization problems, try:
Photoelectron angular distributions seminar series
Brief notes on scattering theory for photoionization
Quantum Metrology with Photoelectrons, Volume 1 Foundations. Hockett, P. (2018). IOP Publishing. https://doi.org/10.1088/978-1-6817-4684-5
For ePS, the details can be found in:
The Method section from the latter paper is reproduced here for reference:
Both the initial neutral molecule electronic wave function and the final ionized molecule electronic wave function are represented by single determinants constructed using the Hartree–Fock orbitals of the initial neutral state. The final \(N\)–electron state (continuum photoelectron + molecular ion) can then be variationally obtained from:
\begin{equation} \langle\delta\Psi_{\mathbf{k}}|H-E|\Psi_{\mathbf{k}}\rangle=0, \end{equation}
where \(\delta\Psi_{\mathbf{k}}\) represent variations in the final state due to variations on the continuum wave function \(\psi_{\mathbf{k}}(\mathbf{r})\), and \(\mathbf{k}\) is the momentum of the ejected electron. The final photoionization problem is then reduced to solving the problem of an electron under the action of the potential of the ion. Thus, we do not consider many-electron effects. The following scattering equation can be obtained from Eq. (1) (Ref. 19) (in atomic units),
\begin{equation} \left[-\frac{1}{2}\nabla^{2}+V(\mathbf{r})-\frac{k^{2}}{2}\right]\psi_{\mathbf{k}}^{(\pm)}(\mathbf{r})=0, \end{equation}
where \(V(\mathbf{r})\) is the static exchange potential given by:
\begin{equation} V(\mathbf{r})=-\sum_{\gamma=1}^{M}Z_{\gamma}|\mathbf{r}-\mathbf{R}_{\gamma}|^{-1}+\sum_{i=1}^{n_{occ}}(2\hat{J}_{i}-\hat{K}_{i})+\hat{J}_{\perp5}+\hat{K}_{\perp5} \end{equation}
for \(M\) nuclei of charge \(Z_{\gamma}\) located at \(\mathbf{R}\gamma\) , where \(n_{occ}\) is the number of doubly occupied orbitals. \(\hat{J}_{i}\) is the Coulomb operator,
\begin{equation} \hat{J}_{i}(\mathbf{r}_{1})=\int\frac{\phi_{i}^{*}(\mathbf{r}_{2})\phi_{i}(\mathbf{r}_{2})}{r_{12}}d^{3}r_{2}, \end{equation}
and \(\hat{K}_{i}\) is the nonlocal exchange potential operator
\begin{equation} (\hat{K}_{i}\psi)(\mathbf{r}_{1})=\phi_{i}(\mathbf{r}_{2})\int\frac{\phi_{i}^{*}(\mathbf{r}_{2})\psi(\mathbf{r}_{2})}{r_{12}}d^{3}r_{2} \end{equation}
Correlation and polarization effects can be included in the calculations through the addition of a local, energy independent model correlation polarization potential, such as described in Ref. 10 […]
To solve the scattering problem, we use the single center expansion (SCE) method, where all three dimensional functions are expanded on a set of angular functions \(X_{lh}^{p\mu}(\theta,\phi)\), which are symmetry adapted, according to the irreducible representations of the molecular point group. An arbitrary three dimensional function \(F^{p\mu}(r,\theta,\phi)\) is then expanded as
\begin{equation} F^{p\mu}(r,\theta,\phi)=\sum_{lh}r^{-1}f_{lh}^{p\mu}(r)X_{lh}^{p\mu}(\theta,\phi), \end{equation}
\begin{equation} X_{lh}^{p\mu}(\theta,\phi)=\sum_{m}b_{lhm}^{p\mu}Y_{lm}(\theta,\phi), \end{equation}
and \(p\) is one of the irreducible representations of the molecular point group, \(m\) is a component of the representation \(p\), and \(h\) indexes all possible \(X_{lh}^{p\mu}\) belonging to the same irreducible representation (\(p\mu\)) with the same value of \(l\). The radial functions \(f_{lh}^{p\mu}(r)\) are represented on a numerical grid. When solving the scattering equations we enforce orthogonality between the continuum solutions and the occupied orbitals.\(^{19}\)
The matrix elements of the dipole operator are
\begin{equation} I_{\mathbf{k},\hat{n}}^{L}=(k)^{1/2}\langle\Psi_{i}|\mathbf{r}.\hat{n}|\Psi_{f,\mathbf{k}}^{(-)}\rangle \end{equation}
for the dipole length form, and
\begin{equation} I_{\mathbf{k},\hat{n}}^{V}=\frac{(k)^{1/2}}{E}\langle\Psi_{i}|\nabla.\hat{n}|\Psi_{f,\mathbf{k}}^{(-)}\rangle \end{equation}
for the dipole velocity form, where \(|\Psi_{i}\rangle\) is the initial bound state, \(|\Psi_{f,\mathbf{k}}^{(-)}\rangle\) is the final continuum state, \(E\) is the photon energy, \(\mathbf{k}\) is the momentum of the photoelectron, and \(\hat{n}\) is the direction of polarization of the light, which is assumed to be linearly polarized.
The matrix elements \(I_{\mathbf{k},\hat{n}}^{(L,V)}\) of Eqs. (8) and (9) can be expanded in terms of the \(X_{lh}^{p\mu}\) functions of Eq. (7) as\(^{14}\)
\begin{equation} I_{\mathbf{k},\hat{n}}^{(L,V)}=\left[\frac{4\pi}{3}\right]^{1/2}\sum_{p\mu lhv}I_{lhv}^{p\mu(L,V)}X_{lh}^{p\mu}(\hat{k})X_{1v}^{p_{v}\mu_{v}}(\hat{n}). \end{equation}
[Note here the final term gives polarization (dipole) terms, with \(l=1\), \(h=v\), corresponding to a photon with one unit of angular momentum and projections \(v=-1,0,1\), correlated with irreducible representations \(p_{v}\mu_{v}\).]
The differential cross section is given by
\begin{equation} \frac{d\sigma^{L,V}}{d\Omega_{\mathbf{k}}}=\frac{\sigma^{L,V}}{4\pi}[1+\beta_{\mathbf{k}}^{L,V}P_{2}(\cos\theta)], \end{equation}
where the asymmetry parameter can be written as\(^{14}\)
\begin{eqnarray} \beta_{\mathbf{k}}^{L,V} & = & \frac{3}{5}\frac{1}{\sum_{p\mu lhv}|I_{\mathbf{k},\hat{n}}^{(L,V)}|^{2}}\sum_{\stackrel{p\mu lhvmm_{v}}{p'\mu'l'h'v'm'm'_{v}}}(-1)^{m'-m_{v}}I_{\mathbf{k},\hat{n}}^{(L,V)}\nonumber \\ & \times & \left(I_{\mathbf{k},\hat{n}}^{(L,V)}\right)^{*}b_{lhm}^{p\mu}b_{l'h'm'}^{p'\mu'*}b_{1vm_{v}}^{p_{v}\mu_{v}}b_{1v'm'_{v}}^{p'_{v}\mu'_{v}*}\nonumber \\ & \times & [(2l+1)(2l'+1)]^{1/2}(1100|20)(l'l00|20)\nonumber \\ & \times & (11-m'_{v}m_{v}|2M')(l'l-m'm|2-M'), \end{eqnarray}
and the \((l'lm'm|L'M')\) are the usual Clebsch–Gordan coefficients. The total cross section is
\begin{equation} \sigma^{L,V}=\frac{4\pi^{2}}{3c}E\sum_{p\mu lhv}|I_{\mathbf{k},\hat{n}}^{(L,V)}|^{2}, \end{equation}
where c is the speed of light.
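The single center expansion of Eqs. (6) and (7) above is easy to illustrate numerically. The short Python sketch below builds a symmetry-adapted harmonic as a linear combination of spherical harmonics and projects a toy angular function onto it by quadrature. This is illustrative only (not ePS or ePSproc code): the coefficients \(b_{lhm}^{p\mu}\) and the test function are placeholders, whereas in a real calculation the coefficients come from ePS itself (the GetBlms step) and the expansion is carried out on a radial grid as well.

import numpy as np
from scipy.special import sph_harm

def X_lh(b_lhm, theta, phi):
    # Symmetry-adapted harmonic: sum_m b_lhm[(l, m)] * Y_lm.
    # Note scipy's argument order is sph_harm(m, l, azimuth, polar).
    return sum(c * sph_harm(m, l, phi, theta) for (l, m), c in b_lhm.items())

# Placeholder coefficient set: a single Y_10 term (sigma-type symmetry).
b_example = {(1, 0): 1.0}

# Simple angular grid (theta = polar angle, phi = azimuth) and crude quadrature weights.
n_theta, n_phi = 64, 128
theta = np.linspace(0.0, np.pi, n_theta)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(TH) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)

F = np.cos(TH)                      # toy angular function to expand
X = X_lh(b_example, TH, PH)

f_lh = np.sum(np.conj(X) * F * w)   # projection coefficient <X_lh|F>, radial part suppressed
print("f_lh =", f_lh)

The point here is just the structure of Eq. (6): any angular function is reduced to a set of expansion coefficients over the symmetry-adapted basis.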
Numerical background¶
Numerically, ePS proceeds approximately as in the flow-chart above:
Molecular structure is parsed from the input quantum chemistry results;
Various functions process this input, plus additional user commands, to set up the problem (e.g. ionizing orbital selection, symmetry-adapted single point expansion, and subsequent determination of the scattering potential \(V(\mathbf{r})\));
The scattering solution is determined variationally (i.e. solving \(\langle\delta\Psi_{\mathbf{k}}|H-E|\Psi_{\mathbf{k}}\rangle=0\)) - this is essentially the Schwinger variational method, see ePS references at head of section for further details;
Various outputs are written to file - some examples are discussed below.
It's the variational solution for the continuum wavefunction that (usually) takes most of the computational effort, and is the core of an ePS run. This procedure can take anywhere from seconds to minutes to hours for a single point (== single input geometry, energy and symmetry), depending on the problem at hand, and the hardware.
In terms of the code, ePS is written in fortran 90, with MPI for parallelism, and LAPACK and BLAS libraries for numerical routines - see the intro section of the manual for more details.
ePolyScat basic example calculation: N2 \(3\sigma_g^{-1}\) photoionization¶
ePS uses a similar input format to standard quantum chemistry codes, with a series of data records and commands set by the user for a specific job.
As a basic intro, here is test12 from the ePS sample jobs:
# input file for test12
# N2 molden SCF, (3-sigma-g)^-1 photoionization
LMax 22 # maximum l to be used for wave functions
EMax 50.0 # EMax, maximum asymptotic energy in eV
FegeEng 13.0 # Energy correction (in eV) used in the fege potential
ScatEng 10.0 # list of scattering energies
InitSym 'SG' # Initial state symmetry
InitSpinDeg 1 # Initial state spin degeneracy
OrbOccInit 2 2 2 2 2 4 # Orbital occupation of initial state
OrbOcc 2 2 2 2 1 4 # occupation of the orbital groups of target
SpinDeg 1 # Spin degeneracy of the total scattering state (=1 singlet)
TargSym 'SG' # Symmetry of the target state
TargSpinDeg 2 # Target spin degeneracy
IPot 15.581 # ionization potentail
Convert '$pe/tests/test12.molden' 'molden'
GetBlms
ExpOrb
ScatSym 'SU' # Scattering symmetry of total final state
ScatContSym 'SU' # Scattering symmetry of continuum electron
FileName 'MatrixElements' 'test12SU.idy' 'REWIND'
GenFormPhIon
DipoleOp
GetPot
GetCro
ScatSym 'PU' # Scattering symmetry of total final state
ScatContSym 'PU' # Scattering symmetry of continuum electron
FileName 'MatrixElements' 'test12PU.idy' 'REWIND'
GetCro 'test12PU.idy' 'test12SU.idy'
Let's break down the various segments of this input file…
System & job definition¶
Basic file header info, comments with #.
Define some numerical values for the calculation - the values here are defined in the Data Records section of the manual, and may have defaults if not set.
Define symmetries and orbital occupations. Note that the orbitals are grouped by degeneracy in ePS, so the numbering here may be different from that in the raw computational chemistry file output (which is typically not grouped).
Note that the symmetries set here correspond to specific orbital and continuum sets, so there may be multiple symmetries for a given problem - more on this later.
Init job & run calculations¶
Define electronic structure (in this case, from a Molden file), and run a single-point expansion in symmetry adapted harmonics (see Commands manual pages for more details).
Convert '$pe/tests/test12.molden' 'molden' # Read electronic structure file
The meat of ePS is running scattering computations, and calculating associated matrix elements/parameters/observables. In this case, there are two (continuum) symmetries, (SU, PU). In each case, the data records are set, and a sequence of commands run the computations to numerically determine the continuum (scattering) wavefunction (again, more details can be found via the Commands manual pages, comments added below are the basic command descriptions).
FileName 'MatrixElements' 'test12SU.idy' 'REWIND' # Set file for matrix elements
GenFormPhIon # Generate potential formulas for photoioniztion.
DipoleOp # Compute the dipole operator onto an orbital.
GetPot # Calculate electron density, static potential, and V(CP) potential.
PhIon # Calculate photoionization dipole matrix elements.
GetCro # Compute photoionization cross sections from output of scatstab or from a file of dynamical coefficients.
# Repeat for 2nd symmetry
# Run a final GetCro command including both symmetries, set by specifying the matrix element files to use.
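Since the per-symmetry blocks are highly repetitive, they are a natural target for templating. The snippet below is a purely illustrative sketch (a hypothetical helper, not part of ePS or ePSproc, and not the ePSman tooling mentioned later) that stamps out one block per continuum symmetry following the command sequence above. Note that for test12 the total and continuum scattering symmetries happen to coincide, which will not be true in general, and the exact commands repeated per block may differ for other jobs.

block_template = """ScatSym '{sym}'       # Scattering symmetry of total final state
ScatContSym '{sym}'   # Scattering symmetry of continuum electron
FileName 'MatrixElements' '{job}{sym}.idy' 'REWIND'
GenFormPhIon
DipoleOp
GetPot
PhIon
GetCro
"""

def symmetry_blocks(job, symmetries):
    # One command block per continuum symmetry.
    return "\n".join(block_template.format(job=job, sym=s) for s in symmetries)

print(symmetry_blocks("test12", ["SU", "PU"]))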
Running ePolyScat¶
Assuming you have ePS compiled/installed, then running the above file is a simple case of passing the file to the ePolyScat executable, e.g. /opt/ePolyScat.E3/bin/ePolyScat inputFile.
For a general code overview see the manual, contact R. R. Lucchese for source code.
Performance will depend on the machine, but for a decent multi-core workstation expect minutes to hours per energy point depending on the size and symmetry of the problem (again, see the test/example jobs for some typical timings.)
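For batch work it can be convenient to launch runs programmatically. A bare-bones Python sketch is below; the executable path is just the example location quoted above (adjust for your install), the input and output file names are placeholders, and no ePS command-line options beyond passing the input file are assumed.

import subprocess
from pathlib import Path

eps_bin = Path("/opt/ePolyScat.E3/bin/ePolyScat")   # assumed install location, as above
input_file = Path("test12.inp")                     # placeholder input file name
output_file = input_file.with_suffix(".out")

with open(output_file, "w") as out:
    subprocess.run([str(eps_bin), str(input_file)],
                   stdout=out, stderr=subprocess.STDOUT, check=True)
print(f"ePS run finished, output written to {output_file}")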
Results¶
Again following quantum chemistry norms, the main output file is an ASCII file with various sections, separated by keyword headers, corresponding to different steps in the computations. Additional files - e.g. the matrix elements listed above - may also be of use, although they are typically not user-readable/interpretable, but rather provided for internal use (e.g. running further calculations on matrix elements, used with utility programmes, etc.).
The full test12 output file can be found in the manual. We'll just look at the CrossSection segments, correlated with the GetCro commands in the input, which give the calculated \(\sigma^{L,V}\) and \(\beta_{\mathbf{k}}^{L,V}\) values (as per definitions above).
Here's the first example, for SU symmetry:
CrossSection - compute photoionization cross section
Ionization potential (IPot) = 15.5810 eV
Label -
Cross section by partial wave F
Cross Sections for
Sigma LENGTH at all energies
25.5810 0.57407014E+01
Sigma MIXED at all energies
25.5810 5.3223
Sigma VELOCITY at all energies
25.5810 4.9344
Beta LENGTH at all energies
25.5810 0.4743
Beta MIXED at all energies
25.5810 0.4754
Beta VELOCITY at all energies
25.5810 0.4764
COMPOSITE CROSS SECTIONS AT ALL ENERGIES
Energy SIGMA LEN SIGMA MIX SIGMA VEL BETA LEN BETA MIX BETA VEL
EPhi 25.5810 5.7407 5.3223 4.9344 0.4743 0.4754 0.4764
Time Now = 19.9228 Delta time = 0.0067 End CrossSection
This provides a listing of the photoionization cross-section(s) \(\sigma\) (in mega barns, 1 Mb = \(10^{-18} cm^2\)), and associated anisotropy parameter(s) \(\beta\) (dimensionless) - in this case, just for a single energy point. Note that there are multiple values, these correspond to calculations in \(L\) or \(V\) gauge. The values are also tabulated at the end of the output, along with a note on the computational time.
Note that these values correspond to calculations for an isotropic ensemble (i.e. what would usually be measured in the lab frame for a 1-photon ionization from a gas sample, as given by the \(\beta_{\mathbf{k}}^{L,V}\) values defined earlier). For these normalised values, the differential cross section (photoelectron flux vs. angle) \(\frac{d\sigma^{L,V}}{d\Omega_{\mathbf{k}}}\) defined earlier can be written in a slightly simplified form, \(I(\theta) = \sigma(1 + \beta \cos^2(\theta))\) (note this is, by definition, cylindrically symmetric).
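As a quick worked example (plain Python, not ePSproc's own readers, which handle this properly), the snippet below pulls \(\sigma\) and \(\beta\) from the composite line quoted above, evaluates the corresponding lab-frame distribution using Eq. (11), and reports the length/velocity agreement as a crude numerical-consistency metric.

import numpy as np
from scipy.special import eval_legendre

# Composite line from the test12 output above: E, sigma (L, M, V), beta (L, M, V).
line = "EPhi 25.5810 5.7407 5.3223 4.9344 0.4743 0.4754 0.4764"
e_ph, sig_l, sig_m, sig_v, beta_l, beta_m, beta_v = (float(x) for x in line.split()[1:])

# Lab-frame angular distribution, length gauge: (sigma/4pi) * (1 + beta * P2(cos theta)).
theta = np.linspace(0.0, np.pi, 181)
I_theta = (sig_l / (4.0 * np.pi)) * (1.0 + beta_l * eval_legendre(2, np.cos(theta)))

# Relative length/velocity difference as a rough numerical sanity check.
gauge_diff = abs(sig_l - sig_v) / (0.5 * (sig_l + sig_v))
print(f"E = {e_ph} eV: sigma_L = {sig_l} Mb, beta_L = {beta_l}, L-V diff = {gauge_diff:.1%}")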
Here are the values corresponding to PU symmetry:
… and the final set of values for both continua:
In this case, the SU continuum has a larger XS (~5.7 Mb) and a larger \(\beta\) value, hence will have a more anisotropic angular scattering distribution. The values for the different gauges are similar, indicating that these results should be (numerically) accurate - generally speaking, the differences in results between different gauges can be regarded as indicative of numerical accuracy and stability. (See, for example *Atoms and Molecules in Intense Laser Fields: Gauge Invariance of Theory and Models*, A. D. Bandrauk, F. Fillion-Gourdeau and E. Lorin, arXiv 1302.2932 (best ref…?).)
ePolyScat: preparing an input file¶
In most cases, this is relatively simple, assuming that there is a suitable test example to use as a template. A few caveats…
There are quite a few computational parameters which can, and should, be played with. (I have not done enough of this, to be honest.)
There are lots of commands which are not really covered in the test examples, so exploring the manual is worthwhile.
One aspect which is non-trivial is the assignment of correct symmetries. To the best of my knowledge (which doesn't go very far here), there is no easy way to auto-generate these (although ePS will tell you if you got things wrong…), so maybe this can be considered as the necessary barrier to entry…! Some examples are given below.
(Some autogeneration for input files & general job management is currently in development.)
Symmetry selection rules in photoionization¶
Essentially, the relevant direct products must contain the totally symmetric representation of the point group to constitute an allowed combination:
\begin{eqnarray} \Gamma_{\mathrm{ion}}\otimes\Gamma_{\mathrm{electron}}\otimes\Gamma_{\mathrm{dipole}}\otimes\Gamma_{\mathrm{neutral}} & \supseteq & \mathrm{A_{1}}\\ \end{eqnarray}
Where \(\Gamma\) is the character of the quantity of interest. For the neutral and ion this will be the direct product of unpaired electrons - hence, for a closed-shell system, this is usually totally symmetric for the neutral, and identical to the character of the ionizing orbital for the ion. The dipole character corresponds to the \((x,y,z)\) operators in the point group, and the electron (continuum) character is what one needs to work out.
For more discussion, see, for example, Signorell, R., & Merkt, F. (1997). General symmetry selection rules for the photoionization of polyatomic molecules. Molecular Physics, 92(5), 793–804. DOI: 10.1080/002689797169745
Worked example: N2 \(3\sigma_g^{-1}\)¶
This corresponds to the test case above. We'll need the character table for \(D_{\infty h}\) (see also the Oxford materials page from Atkins, Child & Philips, which includes the direct product tables (PDF version)). Then just plug in and work through…
\begin{eqnarray} \Gamma_{\mathrm{ion}}\otimes\Gamma_{\mathrm{electron}}\otimes\Gamma_{\mathrm{dipole}}\otimes\Gamma_{\mathrm{neutral}} & \supseteq & \Sigma_{g}^{+}\\ \Sigma_{g}^{+}\otimes\Gamma_{\mathrm{electron}}\otimes\begin{array}{c} \Pi_{u}(x,y)\\ \Sigma_{u}^{+}(z) \end{array}\otimes\Sigma_{g}^{+} & \supseteq & \Sigma_{g}^{+}\\ \Sigma_{g}^{+}\otimes\Gamma_{\mathrm{electron}}\otimes\begin{array}{c} \Pi_{u}(x,y)\\ \Sigma_{u}^{+}(z) \end{array} & \supseteq & \Sigma_{g}^{+}\\ \Gamma_{\mathrm{electron}}\otimes\begin{array}{c} \Pi_{u}(x,y)\\ \Sigma_{u}^{+}(z) \end{array} & \supseteq & \Sigma_{g}^{+} \end{eqnarray}
Hence:
\begin{equation} \Gamma_{\mathrm{electron}}=\begin{array}{c} \Pi_{u}(x,y)\\ \Sigma_{u}^{+}(z) \end{array} \end{equation}
Finally we also need to specify the total scattering symmetry \(\Gamma_{\mathrm{scat}}=\Gamma_{\mathrm{ion}}\otimes\Gamma_{\mathrm{electron}}\):
\begin{equation} \Gamma_{\mathrm{scat}}=\Sigma_{g}^{+}\otimes\begin{array}{c} \Pi_{u}(x,y)\\ \Sigma_{u}^{+}(z) \end{array}=\begin{array}{c} \Pi_{u}(x,y)\\ \Sigma_{u}^{+}(z) \end{array} \end{equation}
… which is identical to \(\Gamma_{\mathrm{electron}}\) in this simple case.
These symmetries correspond to the SU and PU cases set in the ePS input earlier. (A full list of ePS supported symmetries is given in the manual.) It is worth noting that the continua correspond to the polarisation of the electric field in the molecular frame, hence which (Cartesian) dipole component is selected.
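The bookkeeping in this worked example is easy to script. Below is a minimal sketch (an assumed helper, not part of ePS) using the ePS-style labels SG, SU, PG, PU for \(\Sigma_{g}^{+}\), \(\Sigma_{u}^{+}\), \(\Pi_{g}\), \(\Pi_{u}\), with only the handful of \(D_{\infty h}\) direct products needed here tabulated. Since both the neutral and ion characters are \(\Sigma_{g}^{+}\) in this case, the selection rule reduces to requiring \(\Sigma_{g}^{+}\) in \(\Gamma_{\mathrm{electron}}\otimes\Gamma_{\mathrm{dipole}}\).

def direct_product(a, b):
    # Restricted D_inf_h direct-product table; returns the set of irreps in a x b.
    # SG (totally symmetric) acts as the identity.
    if a == "SG":
        return {b}
    if b == "SG":
        return {a}
    table = {
        frozenset(["SU"]): {"SG"},                  # SU x SU
        frozenset(["PU"]): {"SG", "SG-", "DG"},     # PU x PU
        frozenset(["PG"]): {"SG", "SG-", "DG"},     # PG x PG
        frozenset(["SU", "PU"]): {"PG"},
        frozenset(["SU", "PG"]): {"PU"},
        frozenset(["PG", "PU"]): {"SU", "SU-", "DU"},
    }
    return table[frozenset([a, b])]

# N2 3sigma_g^-1: neutral and ion are both SG, so the product condition reduces
# to requiring SG in (continuum x dipole component).
dipole_components = {"SU": "(z)", "PU": "(x, y)"}
candidate_continua = ["SG", "SU", "PG", "PU"]

for cont in candidate_continua:
    for dip, axes in dipole_components.items():
        if "SG" in direct_product(cont, dip):
            print(f"Allowed continuum: {cont} via dipole component {dip} {axes}")

Running this prints the SU (z) and PU (x, y) continua found above; a full treatment would of course need the complete character and direct product tables for the point group at hand.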
Example: NO2 with autogeneration¶
(To follow - part of epsman development…!)
Multi-energy calculations¶
The most common case (for photoionization problems) is likely to be setting multiple energy points for a calculation. An example is shown in test04 (full output), which basically uses the ScatEng record:
ScatEng 0.5 10.0 15.0 # list of scattering energies
In this case, later use of GetCro will output properties at the energies set here.
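For dense energy grids it is easier to generate the record than to type it; a trivial (purely illustrative) snippet:

import numpy as np

energies = np.arange(1.0, 50.1, 5.0)   # a coarse 5 eV grid of scattering energies (eV)
print("ScatEng " + " ".join(f"{e:.1f}" for e in energies))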
ePolyScat: advanced usage¶
See pt. 2 notebook!
Working directly with matrix elements.
Post-processing with ePSproc.
Median sales price of new houses
Data for the median sales price (MSP) of new houses was released this past week on FRED, and the data is showing a distinct correlated negative deviation which is generally evidence that a non-equilibrium shock is underway in the dynamic information equilibrium model (DIEM).
I added a counterfactual shock (in gray). This early on, there is a tendency for the parameter fit to underestimate the size of the shock (for an explicit example, see this version for the unemployment rate in the Great Recession). The model overall shows the housing bubble alongside the two shocks (one negative and one positive) to the level paralleling the ones seen in the Case Shiller index and housing starts.
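For readers who want to see the shape of this kind of fit, here is a minimal sketch of a log-linear dynamic equilibrium plus a single logistic shock, fit to synthetic data with scipy. This is only my generic reading of the DIEM's functional form as described on this blog; the parameters, data, and function names are illustrative and it is not the author's actual code.

import numpy as np
from scipy.optimize import curve_fit

def diem(t, a, b, c, t0, w):
    # log(X): linear dynamic equilibrium plus one logistic shock
    return a * t + b + c / (1.0 + np.exp(-(t - t0) / w))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
y = diem(t, 0.03, 12.0, -0.15, 7.0, 0.5) + 0.01 * rng.standard_normal(t.size)

p0 = [0.03, 12.0, -0.1, 6.5, 0.5]       # rough initial guess
popt, pcov = curve_fit(diem, t, y, p0=p0)
print("fitted (a, b, c, t0, w):", np.round(popt, 3))

Truncating the series before the shock has run its course (as in the chart above) will typically bias the fitted amplitude c low, which is the underestimate mentioned above.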
This seems like a good time to look at the interest rate model and the yield curve / interest rate spreads. First, the interest rate model is doing extraordinarily well for having started in 2015:
I show the Blue Chip Economic Indicators forecast from 2015 as well as a recent forecast from the Wall Street Journal (click to embiggen):
And here's the median (~ principal component) interest rate spread we've been tracking for the past year (almost exactly — June 25, 2018):
If -28 bp was the lowest point (at the beginning of June), it's higher than previous three lowest points (-40 to -70 bp). Also, if it is in fact the lowest point, the previous three cycles achieved their lowest points between 1 and 5 quarters before the NBER recession onset.
PCE inflation
The DIEM for PCE inflation continues to perform fairly well ... though it's not the most interesting model in the current regime (the lowflation period has ended).
Here's the same chart with other forecasts on it:
The new gray dot with a black outline shows the estimated annual PCE inflation for 2019 assuming the previous data is a good sample (this is not the best assumption, but it gives an idea where inflation might end up given what we know today). The purple dots with the error bars are Fed projections, and the other purple dotted line is the forecast from Jan Hatzius of Goldman Sachs.
Mostly just to troll the DSGE haters, here's the FRB NY DSGE model forecast compared to the latest data — it's doing great!
But then the DIEM is right on as well with smaller error bands ...
A Workers' History of the United States 1948-2020
Available now! Click here!
After seven years of economic research and developing forecasting models that have outperformed the experts, author, blogger, and physicist Dr. Jason Smith offers his controversial insights about the major driving factors behind the economy derived from the data and it's not economics — it's social changes. These social changes are behind the questions of who gets to work, how those workers organize, and how workers identify politically — and it is through labor markets that these social changes manifest in economic effects. What would otherwise be a disjoint and nonsensical postwar economic history of the United States is made into a cohesive workers' history driven by women entering the workforce and the backlash to the Civil Rights movement — plainly: sexism and racism. This new understanding of historical economic data offers lessons for understanding the political economy of today and insights for policies that might actually work.
Dr. Smith is a physicist who began with quarks and nuclei before moving into research and development in signal processing and machine learning in the aerospace industry. During a government fellowship from 2011 to 2012 — and in the aftermath of the global financial crisis — he learned about the potential use of prediction markets in the intelligence community and began to assess their validity using information theoretic approaches. From this spark, Dr. Smith developed the more general information equilibrium approach to economics which has shown to have broader applications to neuroscience and online search trends. He wrote A Random Physicist Takes on Economics in 2017 documenting this intellectual journey and the change in perspective towards economic theory and macroeconomics that comes with this framework. This change in perspective to economic theory came with new interpretations of economic data over time that finally came together in this book.
The book I've been working on for the past year and a half — A Workers' History of the United States 1948-2020 — is now available on Amazon as a Kindle e-book or a paperback. Get your copy today! Head over to the book website for an open thread for your first impressions and comments. And pick up a copy of A Random Physicist Takes on Economics if you haven't already ...
Update 7am PDT 24 June 2019
The paperback edition still says "publishing" on KDP, but it should be ready in the next 24-48 hours. However, I did manage to catch what is probably a fleeting moment where the book is #1 in Macroeconomics:
Update 2pm PDT 24 June 2019
Paperback is live!
Posted by Jason Smith at 5:00 AM
Sometimes I feel like I don't see the data
Sometimes I feel like my only friend.
I've seen links to this nymag article floating around the interwebs that purports to examine labor market data for evidence that the Fed rate hike of 2015 was some sort of ominous thing:
But refrain they did not.
Instead, the Federal Reserve began raising interest rates in 2015 ...
Scott Lemieux (a poli sci lecturer at the local university) puts it this way:
But the 2015 Fed Rate hike was based on false premises and had disastrous consequences, not only because of the direct infliction of unnecessary misery on many Americans, but because it may well have been responsible for both President Trump and the Republican takeover of the Senate, with a large amount of resultant damage that will be difficult or impossible to reverse.
Are we looking at the same data? Literally nothing happened in major labor market measures in December of 2015 (here: prime age labor force participation, JOLTS hires, unemployment rate, wage growth from ATL Fed):
There were literally no consequences from the Fed rate hike in terms of labor markets. All of these time series continued along their merry log-linear equilibrium paths. It didn't even end the 2014 mini-boom (possibly triggered by Obamacare going into effect) which was already ending.
But it's a good opportunity to plug my book which says that the Fed is largely irrelevant (although it can make a recession worse). The current political situation is about changing alliances and identity politics amid the backdrop of institutions that under-weight urban voters.
Update + 30 minutes
Before someone mentions something about the way the BLS and CPS count unemployment, let me add that nothing happened in long term unemployment either:
The mini-boom was already fading. Long term unemployment has changed, but the change (like the changes in many measures) came in the 90s.
Resolving the Cambridge capital controversy with logic
So I wrote a somewhat tongue-in-cheek blog post a few years ago titled "Resolving the Cambridge capital controversy with abstract algebra" [RCCC I] that called the Cambridge Capital Controversy [CCC] for Cambridge, UK in terms of the original debate they were having — summarized by Joan Robinson's claim that you can't really add apples and oranges (or in this case printing presses and drill presses) to form a sensible definition of capital. I used a bit of group theory and the information equilibrium framework to show that you can't simply add up factors of production. I mentioned at the bottom of that post that there are really easy ways around it — including a partition function approach in my paper — but Cambridge, MA (Solow and Samuelson) never made those arguments.
On the Cambridge, MA side no one seemed to care because the theory seemed to "work" (debatable). A few years passed and eventually Samuelson conceded Robinson and Sraffa were in fact right about their re-switching arguments. A short summary is available in an NBER paper from Baqaee and Farhi, but what interested me about that paper was that the particular way they illustrated it made it clear to me that the partition function approach also gets around the re-switching arguments. So I wrote that up in a blog post with another snarky title "Resolving the Cambridge capital controversy with MaxEnt" [RCCC II] (a partition function is a maximum entropy distribution or MaxEnt).
This of course opened a can of worms on Twitter when I tweeted out the link to my post. The first volley was several people saying Cobb-Douglas functions were just a consequence of accounting identities or that they fit any data — a lot of which was based on papers by Anwar Shaikh (in particular the "humbug" production function). I added an update to my post saying these arguments were disingenuous — and in my view academic fraud because they rely on a visual misrepresentation of data as well as an elision of the direction of mathematical implication. Solow pointed out the former in his 1974 response to Shaikh's "humbug" paper (as well as the fact that Shaikh's data shows labor output is independent of capital which would render the entire discussion moot if true), but Shaikh has continued to misrepresent "humbug" until at least 2017 in an INET interview on YouTube.
The funny thing is that I never really cared about the CCC — my interest on this blog is research into economic theory based on information theory. RCCC I and RCCC II were both primarily about how you would go about addressing the underlying questions in the information equilibrium framework. However, the subsequent volleys have brought up even more illogical or plainly false arguments against aggregate production functions that seem to have sprouted in the Post-Keynesian walled garden. I believe it's because "mainstream" academic econ has long since abandoned arguing about it, and like my neglected back yard a large number of weeds have grown up. This post is going to do a bit of weeding.
Constant factor shares!
Several comments brought up that Cobb-Douglas production functions can fit any data assuming (empirically observed) constant factor shares. However, this is just a claim that the gradient
\nabla = \left( \frac{\partial}{\partial \log L} , \frac{\partial}{\partial \log K} \right)
is constant, which a fortiori implies a Cobb-Douglas production function
\log Y = a \log L + b \log K + c
A backtrack is that it's only constant factor shares in the neighborhood of observed values, but that just means Cobb-Douglas functions are a local approximation (i.e. the tangent plane in log-linear space) to the observed region. Either way, saying "with constant factor shares, Cobb Douglas can fit any data" is saying vacuously "data that fits a Cobb-Douglas function can be fit with a Cobb-Douglas function". Leontief production functions also have constant factor shares locally, but in fact have two tangent planes, which just retreats to the local description (data that is locally Cobb-Douglas can be fit with a local Cobb-Douglas function).
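To see how little content the fit itself has, here is a tiny numerical illustration (synthetic data only, nothing estimated from real national accounts): generate data with roughly constant factor shares and a log-linear regression recovers a Cobb-Douglas surface by construction.

import numpy as np

rng = np.random.default_rng(1)
n = 200
logL = rng.normal(5.0, 0.1, n)
logK = rng.normal(6.0, 0.1, n)
logY = 0.7 * logL + 0.3 * logK + 1.0 + 0.01 * rng.standard_normal(n)

# Least-squares fit of log Y = a log L + b log K + c (the local tangent plane in log space).
A = np.column_stack([logL, logK, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, logY, rcond=None)
print("estimated (a, b, c):", np.round(coef, 3))   # recovers roughly (0.7, 0.3, 1.0)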
Aggregate production functions don't exist!
The denial that the functions even exist is by far the most interesting argument, but it's still not logically sound. At least it's not disingenuous — it could just use a bit of interdisciplinary insight. Jo Michell linked me to a paper by Jonathan Temple with the nonthreatening title "Aggregate production functions and growth economics" (although the filename is "Aggreg Prod Functions Dont Exist.Temple.pdf" and the first line of the abstract is "Rigorous approaches to aggregation indicate that aggregate production functions do not exist except in unlikely special cases.")
However, not too far in (Section 2, second paragraph) it makes a logical error of extrapolating from $N = 2$ to $N \gg 1$:
It is easy to show that if the two sectors each have Cobb-Douglas production technologies, and if the exponents on inputs differ across sectors, there cannot be a Cobb-Douglas aggregate production function.
It's explained how the argument proceeds in a footnote:
The way to see this is to write down the aggregate labour share as a weighted average of labour shares in the two sectors. If the structure of output changes, the weights and the aggregate labour share will also change, and hence there cannot be an aggregate Cobb-Douglas production function (which would imply a constant labour share at the aggregate level).
This is true for $N = 2$, because the change of one "labor share state" (specified by $\alpha_{i}$ for an individual sector $y_{i} \sim k^{\alpha_{i}}$) implies an overall change in the ensemble average labor share state $\langle \alpha \rangle$. However, this is a bit like saying if you have a two-atom ideal gas, the kinetic energy of one of the atoms can change and so the average kinetic energy of the two-atom gas doesn't exist, therefore (rigorously!) there is no such thing as temperature (i.e. a well defined kinetic energy $\sim k T$) for an ideal gas in general with more than two atoms ($N \gg 1$) except in unlikely special cases.
I was quite surprised that econ has disproved the existence of thermodynamics!
Joking aside, if you have more than two sectors, it is possible you could have an empirically stable distribution over labor share states $\alpha_{i}$ and a partition function (details of the approach appear in my paper):
$$Z(\kappa) = \sum_{i} e^{- \kappa \alpha_{i}}$$
take $\kappa \equiv \log (1+ (k-k_{0})/k_{0})$ which means
$$\langle y \rangle \sim k^{\langle \alpha \rangle}$$
where the ensemble average is
$$\langle X \rangle \equiv \frac{1}{Z} \sum_{i} \hat{X} e^{- \kappa \alpha_{i}}$$
There are likely more ways than this partition function approach based on information equilibrium to get around the $N = 2$ case, but we only need to construct one example to disprove nonexistence. Basically this means that unless the output structure of a single firm affects the whole economy, it is entirely possible that the output structure of an ensemble of firms could have a stable distribution of labor share states. You cannot logically rule it out.
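For readers who want to see the bookkeeping, here is a short sketch of the partition function calculation as I read it (the full treatment is in the paper linked above); the distribution of the $\alpha_{i}$ states and the value of $k_{0}$ below are illustrative assumptions, not estimates.

import numpy as np

rng = np.random.default_rng(1)
alpha = rng.beta(2.0, 3.0, size=1000)        # illustrative labor-share "states" for many sectors

def ensemble_average_alpha(k, k0=1.0):
    kappa = np.log(1.0 + (k - k0) / k0)      # kappa = log(1 + (k - k0)/k0) as above
    weights = np.exp(-kappa * alpha)
    Z = weights.sum()                        # partition function Z(kappa)
    return (alpha * weights).sum() / Z       # ensemble average <alpha> at this value of k

for k in [1.0, 2.0, 5.0, 10.0]:
    print(k, ensemble_average_alpha(k))      # <alpha> changes slowly with k, and <y> ~ k**<alpha>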
What's interesting to me is that in a whole host of situations, the distributions of these economic states appear to be stable (and in some cases in an unfortunate pun, stable distributions). For some specific examples, we can look at profit rate states and stock growth rate states.
Now you might not believe these empirical results. Regardless, the logical argument is not valid unless your model of the economy is unrealistically simplistic (like modeling a gas with a single atom — not too unlike the unrealistic representative agent picture). There is of course the possibility that empirically this doesn't work (much like it doesn't work for a whole host of non-equilibrium thermodynamics processes). But Jonathan Temple's paper is a bunch of wordy prose with the odd equation — it does not address the empirical question. In fact, Temple re-iterates one of the defenses of the aggregate production function approaches that has vexed these theoretical attempts to knock them down (section 4, first paragraph):
One of the traditional defenses of aggregate production functions is a pragmatic one: they may not exist, but empirically they 'seem to work'.
They of course would seem to work if economies are made up of more than two firms (or sectors) and have relatively stable distributions of labor share states.
To put it yet another way, Temple's argument relies on a host of unrealistic assumptions about an economy — that we know the distribution isn't stable, and that there are only a few sectors, and that the output structure of these few firms changes regularly enough to require a new estimate of the exponent $\alpha$ but not regularly enough that the changes create a temporal distribution of states.
Fisher! Aggregate production functions are highly constrained!
There are a lot of references that trace all the way back to Fisher (1969) "The existence of aggregate production functions" and several people who mentioned Fisher or work derived from his papers. The paper is itself a survey of restrictions believed to constrain aggregate production functions, but it seems to have been written from the perspective that an economy is a highly mathematical construct that can either be described by $C^{2}$ functions or not at all. In a later section (Sec. 6) talking about whether maybe aggregate production functions can be good approximations, Fisher says:
approximations could only result if [the approximation] ... exhibited very large rates of change ... In less technical language, the derivatives would have to wiggle violently up and down all the time.
Heaven forbid were that the case!
He cites in a footnote the rather ridiculous example of $\lambda \sin (x/\lambda)$ (locally $C^{2}$!) — I get the feeling he was completely unaware of stochastic calculus or quantum mechanics and therefore could not imagine a smooth macroeconomy made up of noisy components, only a few pathological examples from his real analysis course in college. Again, a nice case for some interdisciplinary exchange! I wrote a post some years ago about the $C^{2}$ view economists seem to take versus a far more realistic noisy approach in the context of the Ramsey-Cass-Koopmans model. In any case, why exactly should we expect firm level production functions to be $C^{2}$ functions that add to a $C^{2}$ function?
One of the constraints Fisher notes is that individual firm production functions (for the $i^{th}$ firm) must take a specific additive form:
$$f_{i}(K_{i}, L_{i}) = \phi_{i}(K_{i}) + \psi_{i}(L_{i})$$
This is probably true if you think of an economy as one large $C^{2}$ function that has to factor (mathematically, like, say, a polynomial) into individual firms. But like Temple's argument, it denies the possibility that there can be stable distributions of states $(\alpha_{i}, \beta_{i})$ for individual firm production functions (that even might change over time!) such that
$$Y_{i} = f_{i}(K_{i}, L_{i}) = K_{i}^{\alpha_{i}}L_{i}^{\beta_{i}}$$
while, at the aggregate level,
$$\langle Y \rangle \sim K^{\langle \alpha \rangle} L^{\langle \beta \rangle}$$
The left/first picture is a bunch of random production functions with beta distributed exponents. The right/second picture is an average of 10 of them. In the limit of an infinite number of firms, constant returns to scale hold (i.e. $\langle \alpha \rangle + \langle \beta \rangle \simeq 0.35 + 0.65 = 1$) at the macro level — however individual firms aren't required to have constant returns to scale (many don't in this example). In fact, none of the individual firms have to have any of the properties of the aggregate production function. (You don't really have to impose that constraint at either scale — and in fact, the whole Solow model works much better empirically in terms of nominal quantities and without constant returns to scale.) Since these are simple functions, they don't have that many properties, but we can include things like constant factor shares or constant returns to scale.
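A rough sketch of that exercise (my own toy version, with assumed Beta parameters chosen so that $\langle \alpha \rangle \simeq 0.35$ and $\langle \beta \rangle \simeq 0.65$): each firm gets its own exponents, no firm is forced to have constant returns to scale, and the aggregate behaves approximately like a Cobb-Douglas function with the ensemble average exponents.

import numpy as np

rng = np.random.default_rng(2)
n_firms = 10_000
alpha = rng.beta(3.5, 6.5, size=n_firms)       # mean ~ 0.35
beta = rng.beta(6.5, 3.5, size=n_firms)        # mean ~ 0.65
print(alpha.mean() + beta.mean())              # ~ 1: constant returns emerge only on average

K, L = 3.0, 2.0
Y_firms = K ** alpha * L ** beta               # individual firms: no constant returns required
print(Y_firms.mean())                          # average output across firms
print(K ** alpha.mean() * L ** beta.mean())    # Cobb-Douglas with <alpha>, <beta>: agrees to a few percent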
The information-theoretic partition function approach actually has a remarkable self-similarity between macro (i.e. aggregate level) and micro (i.e. individual or individual firm level) — this self-similarity is behind the reason why Cobb-Douglas or diagrammatic ("crossing curve") models at the macro scale aren't obviously implausible.
Both the arguments of Temple and Fisher seem to rest on strong assumptions about economies constructed from clean, noiseless, abstract functions — and either a paucity or surfeit of imagination (I'm not sure). It's a kind of love-hate relationship with neoclassical economics — working within its confines to try to show that it's flawed. A lot of these results are cases of what I personally would call mathiness. I'm sure Paul Romer might think they're fine, but to me they sound like an all-too-earnest undergraduate math major fresh out of real analysis trying to tell us what's what. Sure, man, individual firms' production functions are continuous and differentiable additive functions. So what exactly have you been smoking?
These constraints on production functions from Fisher and Temple actually remind me a lot of Steve Keen's definition of an equilibrium that isn't attainable — it's mathematically forbidden! It's probably not a good definition of equilibrium if you can't even come up with a theoretical case that satisfies it. Fisher and Temple can't really come up with a theoretical production function that meets all their constraints besides the trivial "all firms are the same" function. It's funny that Fisher actually touches on that in one of his footnotes (#31):
Honesty requires me to state that I have no clear idea what technical differences actually look like. Capital augmentation seems unduly restrictive, however. If it held, all firms would produce the same market basket of outputs and hire the same relative collection of labors.
But the bottom line is that these claims to have exhausted all possibilities are just not true! I get the feeling that people have already made up their minds which side of the CCC they stand on, and it doesn't take much to confirm their biases so they don't ask questions after e.g. Temple's two sector economy. That settles it then! Well, no ... as there might be more than two sectors. Maybe even three!
Resolving the Cambridge capital controversy with MaxEnt
I came across this 2018 NBER working paper from Baqaee and Farhi again today (on Twitter) after seeing it around the time it came out. The abstract spells it out:
Aggregate production functions are reduced-form relationships that emerge endogenously from input-output interactions between heterogeneous producers and factors in general equilibrium. We provide a general methodology for analyzing such aggregate production functions by deriving their first- and second-order properties. Our aggregation formulas provide non-parametric characterizations of the macro elasticities of substitution between factors and of the macro bias of technical change in terms of micro sufficient statistics. They allow us to generalize existing aggregation theorems and to derive new ones. We relate our results to the famous Cambridge-Cambridge controversy.
One thing that they do in their paper is reference Samuelson's (version of Robinson's and Sraffa's) re-switching arguments. I'll quote liberally from the paper (this is actually the introduction and Section 5) because it sets up the problem we're going to look at:
Eventually, the English Cambridge prevailed against the American Cambridge, decisively showing that aggregate production functions with an aggregate capital stock do not always exist. They did this through a series of ingenious, though perhaps exotic looking, "re-switching" examples. These examples demonstrated that at the macro level, "fundamental laws" such as diminishing returns may not hold for the aggregate capital stock, even if, at the micro level, there are diminishing returns for every capital good. This means that a neoclassical aggregate production function could not be used to study the distribution of income in such economies.
... In his famous "Summing Up" QJE paper (Samuelson, 1966), Samuelson, speaking for the Cambridge US camp, finally conceded to the Cambridge UK camp and admitted that indeed, capital could not be aggregated. He produced an example of an economy with "re-switching": an economy where, as the interest rate decreases, the economy switches from one technique to the other and then back to the original technique. This results in a non-monotonic relationship between the capital-labor ratio as a function of the rate of interest r.
... [In] the post-Keynesian reswitching example in Samuelson (1966). ... [o]utput is used for consumption, labor can be used to produce output using two different production functions (called "techniques"). ... the economy features reswitching: as the interest rate is increased, it switches from the second to the first technique and then switches back to the second technique.
I wrote a blog post four years ago titled "Resolving the Cambridge capital controversy with abstract algebra" which was in part tongue-in-cheek, but also showed how Cambridge, UK (Robinson and Sraffa) had the more reasonable argument. With Samuelson's surrender summarized above, it's sort of a closed case. I'd like to re-open it, and show how the resolution in my blog post renders the post-Keynesian re-switching arguments as describing pathological cases unlikely to be realized in a real system — therefore calling the argument in favor of the existence of aggregate production functions, and of Solow and Samuelson.
To some extent, this whole controversy is due to economists seeing economics as a logical discipline — more akin to mathematics — instead of an empirical one — more akin to the natural sciences. The pathological case of re-switching does in fact invalidate a general rigorous mathematical proof of the existence of aggregate production functions in all cases. But it is just that — a pathological case. It's the kind of situation where you instead have to show some sort of empirical evidence that it exists before taking the impasse it presents to mathematical existence seriously.
If you follow through the NBER paper, they show a basic example of re-switching from Samuelson's 1966 paper. As the interest rate increases, one of the "techniques" becomes optimal over the other and we get a shift in capital to output and capital to labor:
Effectively, this is a shift in $\alpha$ in a production function
$$Y \sim K^{\alpha} L^{1-\alpha}$$
or more simply in terms of the neoclassical model in per-labor terms ($x \equiv X/L$)
$$y \sim k^{\alpha}$$
That is to say in one case we have $y \sim k^{\alpha_{1}}$ and $y \sim k^{\alpha_{2}}$ in the other. As the authors of the paper put it:
The question we now ask is whether we could represent the disaggregated post-Keynesian example as a version of the simple neoclassical model with an aggregate capital stock given by the sum of the values of the heterogeneous capital stocks in the disaggregated post-Keynesian example. The non-monotonicity of the capital-labor and capital-output ratios as a function of the interest rate shows that this is not possible. The simple neoclassical model could match the investment share, the capital share, the value of capital, and the value of the capital-output and capital-labor ratios of the original steady state of the disaggregated model, but not across steady states associated with different values of the interest rate. In other words, aggregation via financial valuation fails.
But we must stress that this is essentially one (i.e. representative) firm with this structure, and that across a real economy, individual firms would have multiple "techniques" that change in a myriad ways — and there would be many firms.
The ensemble approach to information equilibrium (where we have a large number of production functions $y_{i} \sim k^{\alpha_{i}}$) recovers the traditional aggregate production function (see my paper here), but with ensemble average variables (angle brackets) evaluated with the partition function $Z(\kappa)$ given above (see the paper for the details). This formulation does not depend on any given firm staying in a particular "production state" $\alpha_{i}$; a firm is free to change from any one state to another in a different time period or at a different interest rate. The key point is that we do not know which set of $\alpha_{i}$ states describes every firm at every interest rate. With constant returns to scale, we are restricted to $\alpha$ states between zero and one, but we have no other knowledge available without a detailed examination of every firm in the economy. We'd be left with a uniform distribution over [0, 1] if that were all we had, but we could (in principle) average the $\alpha$'s we observe and constrain our distribution so that $\langle \alpha \rangle$ is some (unknown) real value in [0, 1]. That defines a beta distribution:
Getting back to the Samuelson example, I've reproduced the capital to labor ratio:
Of course, our model has no compunctions against drawing a new $\alpha$ from a beta distribution for any value of the interest rate ...
That's a lot of re-switching. If we have a large number of firms, we'll have a large number of re-switching (micro) production functions — Samuelson's post-Keynesian example is but one of many paths:
The ensemble average (over that beta-distribution above) produces the bolder blue line:
This returns a function with respect to the interest rate that approximates a constant $\alpha$ as a function of the interest rate — and which only gets better as more firms are added and more re-switching is allowed:
This represents an emergent aggregate production function smooth in the interest rate where each individual production function is non-monotonic. The aggregate production function of the Solow model is in fact well-defined and does not suffer from the issues of re-switching unless the draw from the distribution is pathological — for example, all firms being the same (or, equivalently, a representative firm assumption).
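Here is a compressed version of that simulation (a toy of my own, not Samuelson's two-technique example): each firm redraws its exponent $\alpha$ from a beta distribution at every interest rate, so any single path jumps around, while the ensemble average capital-labor ratio is a smooth, monotone function of the interest rate. The form $k(r) = (r/\alpha)^{1/(\alpha - 1)}$ is just the textbook marginal-product condition for $y = k^{\alpha}$, and all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(3)
rates = np.linspace(0.02, 0.10, 50)                  # grid of interest rates
n_firms = 500
alpha = np.clip(rng.beta(2.0, 2.0, size=(n_firms, rates.size)), 0.2, 0.8)  # clipped for numerical sanity

# y = k**alpha and r = alpha * k**(alpha - 1)  =>  k(r) = (r / alpha)**(1 / (alpha - 1))
k_paths = (rates / alpha) ** (1.0 / (alpha - 1.0))
print(np.log(k_paths[0])[:6])             # one firm: non-monotonic, jumps as alpha is redrawn ("re-switching")
print(np.log(k_paths.mean(axis=0))[:6])   # ensemble average: smooth and monotone in the interest rate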
This puts the onus on the Cambridge, UK side to show that empirically such cases exist and are common enough to survive aggregation. However, if we do not know about the production structure of a sizable fraction of firms with respect to a broad swath of interest rates, we must plead ignorance and go with maximum entropy. As the complexity of an economy increases, we become less and less likely to see a scenario that cannot be aggregated.
Again, I mentioned this back four years ago in my blog post. The ensemble approach offers a simple workaround to the inability to simply add apples and oranges (or more accurately printing presses and drill presses). However, the re-switching example is a good one to show how a real economy — with heterogeneous firms and heterogeneous techniques — can aggregate into a sensible macroeconomic production function.
Update 18 June 2019
I am well aware of the Cobb-Douglas derangement syndrome associated with the Cambridge capital controversy that exists on Econ Twitter and the econoblogosphere (which is in part why I put that gif with the muppet in front of a conflagration on the tweets about this blog post ... three times). People — in particular post-Keynesian acolytes — hate Cobb-Douglas production functions. One of the weirder strains of thought out there is that a Cobb-Douglas function can fit any data arbitrarily well. This is plainly false, as
$$a \log X + b \log Y + c$$
is but a small subset of all possible functions $f(X, Y)$. Basically, this strain of thought is equivalent to saying a line $y = m x + b$ can fit any data.
A subset of this mindset appears to be a case of a logical error based on accounting identities. There have been a couple papers out there (not linking) that suggest that Cobb-Douglas functions are just accounting identities. The source of this might be that you can approximate any accounting identity by a Cobb Douglas form. If we define $X \equiv \delta X + X_{0}$, then
$$X_{0} \left( \log (\delta X + X_{0}) + 1\right) + Y_{0} \left( \log (\delta Y + Y_{0}) + 1\right) + C$$
is approximately equal to $X + Y$ for $\delta X / X_{0} \ll 1$ and $\delta Y / Y_{0} \ll 1$ if
$$C \equiv - X_{0} \log X_{0}- Y_{0} \log Y_{0}$$
That is to say you can locally approximate an accounting identity by taking into account that log linear is approximately linear for small deviations.
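A quick numerical check of that local approximation (the numbers below are arbitrary):

import numpy as np

X0, Y0 = 100.0, 200.0
C = -X0 * np.log(X0) - Y0 * np.log(Y0)

def cobb_douglas_form(dX, dY):
    return X0 * (np.log(dX + X0) + 1) + Y0 * (np.log(dY + Y0) + 1) + C

for dX, dY in [(0.0, 0.0), (1.0, 2.0), (5.0, -3.0)]:
    exact = (X0 + dX) + (Y0 + dY)
    print(exact, cobb_douglas_form(dX, dY))  # agree to first order while dX/X0 and dY/Y0 are small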
It appears that some people have taken this $p \rightarrow q$ to mean $q \rightarrow p$ — that any Cobb Douglas form $f(X, Y)$ can be represented as an accounting identity $X+Y$. That is false in general. Only the form above under the conditions above can do so, so if you have a different Cobb Douglas function it cannot be so transformed.
Another version of this thinking (from Anwar Shaikh) was brought up on Twitter. Shaikh has a well-known paper where he created the "Humbug" production function. I've reproduced it here:
I was originally going to write about something else here, but in working through the paper and reproducing the result for the production function ...
... I found out this paper is a fraud. Because of the way the values were chosen, the resulting production function has no dependence on the variation in $q$ aside from an overall scale factor. Here's what happens if you set $q$ to be a constant (0.8) — first "HUMBUG" turns into a line:
And the resulting production function? It lies almost exactly on top of the original:
It's not too hard to pick a set of $q$ and $k$ data that gives a production function that looks nothing like a Cobb-Douglas function by just adding some noise:
The reason can be seen in the table and relies mostly on Shaikh's choice of the variance in the $k$ values (click to enlarge):
But also, if we just plot the $k$-values and the $q$-values versus time, we have log-linear functions:
Is it any surprise that a Cobb-Douglas production function fits this data? Sure, it seems weird if we look at the "HUMBUG" parametric graph of $q$ versus $k$, but $k(t)$ and $q(t)$ are lines. The production function is smooth because the variance in $A(t)$ depends almost entirely on the variance in $q(t)$ so that taking $q(t)/A(t)$ leaves approximately a constant. The bit of variation left is the integrated $\dot{k}/k$, which is derived from a log-linear function — so it's going to have a great log-linear fit. It's log-linear!
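Here is a hedged sketch of that point with made-up series (these are not Shaikh's numbers): if output per worker and capital per worker are both (noisy) log-linear in time, then the standard growth-accounting residual $A(t)$ soaks up the output variation, and what is left, $\log(q/A)$, is essentially the integrated $\dot{k}/k$, so the final "Cobb-Douglas" regression on $\log k$ fits almost perfectly regardless of what shape the parametric $q$-versus-$k$ plot traces out.

import numpy as np

rng = np.random.default_rng(4)
T = 20
t = np.arange(T)
log_q = 0.5 + 0.002 * t + rng.normal(0, 0.01, T)   # output per worker: essentially flat
log_k = 1.0 + 0.03 * t + rng.normal(0, 0.005, T)   # capital per worker: log-linear in time
w = 0.7 + rng.normal(0, 0.02, T)                   # wage share fluctuating around a constant

# growth-accounting residual: dlogA = dlogq - (1 - w) dlogk, integrated over time
dlogA = np.diff(log_q) - (1 - w[1:]) * np.diff(log_k)
log_A = np.concatenate([[0.0], np.cumsum(dlogA)])

# the "Cobb-Douglas" step: regress log(q/A) on log k
y = log_q - log_A
X = np.column_stack([log_k, np.ones(T)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r_squared = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(coef[0], r_squared)  # slope near 1 - mean(w) ~ 0.3 and R^2 near 1: it's log-linear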
Basically, Shaikh mis-represented the "HUMBUG" data as having a lot of variation — obviously nonsense by inspection, right?! But it's really just two lines with a bit of noise.
Update + 2 hours
I was unable to see the article earlier, but apparently this is exactly what Solow said. Solow was actually much nicer (click to enlarge):
Solow:
The cute HUMBUG numerical example tends to bowl you over at first, but when you think about it for a minute it turns out to be quite straightforward in terms of what I have just said. The made-up data tell a story, clearer in the table than in the diagram. Output per worker is essentially constant in time. There are some fluctuations but they are relatively small, with a coefficient of variation about 1/7. The fact that the fluctuations are made to spell HUMBUG is either distraction or humbug. The series for capital per worker is essentially a linear function of time. The wage share has small fluctuations which appear not to be related to capital per worker. If you ask any systematic method or educated mind to interpret those data using a production function and the marginal productivity relations, the answer will be that they are exactly what would be produced by technical regress with a production function that must be very close to Cobb-Douglas.
Emphasis in the original. That's exactly what the graph above (and reproduced below) shows. Shaikh not only does not address this comment in his follow up — he quotes only the last sentence of this paragraph and then doubles down on eliding the HUMBUG data as representative of "any data":
Yet confronted with the humbug data, Solow says: "If you ask any systematic method or any educated mind to interpret those data using a production function and the marginal productivity relations, the answer will be that they are exactly what would be produced by technical regress with a production function that must be very close to Cobb-Douglas" (Solow, 1957 [sic], p. 121). What kind of "systematic method" or "educated mind" is it that can interpret almost any data, even the humbug data, as arising from a neoclassical production function?
This is further evidence that Shaikh is not practicing academic integrity. Even after Solow points out that "Output per worker is essentially constant in time ... The series for capital per worker is essentially a linear function of time", Shaikh continues to suggest that "even the humbug data" is somehow representative of the universe of "any data" when it is in fact a line.
The fact that Shaikh chose to graph "HUMBUG" rather than this time series is obfuscation and in my view academic fraud. As of 2017, he continues to misrepresent this paper in an Institute for New Economic Thinking (INET) video on YouTube saying "... this is essentially an accounting identity and I illustrated that putting the word humbug and putting points on the word humbug and showing that I could fit a perfect Cobb-Douglas production function to that ..."
I did want to add a bit about how the claims about the relationship between Cobb-Douglas production functions and accounting identities elide the direction of implication. Cobb-Douglas implies an accounting identity holds, but the logical content of the accounting identity on its own is pretty much vacuous without something like Cobb-Douglas. In his 2005 paper, Shaikh elides the point (and also re-asserts his disingenuous claim about the humbug production function above).
Is Washington State heading into recession?
Back in 2017, I looked at Washington State's unemployment rate to see if Seattle's minimum wage laws had impacted employment. However, I hadn't checked back in with that forecast since then, and when I did, there appeared to be a spike in unemployment. Is a recession brewing?
Click to enlarge. It doesn't quite reach the threshold for a recession: we have $\delta \log u \sim 0.16$, not yet 0.17 — which in any case is the threshold for a national recession based on national data, not state data, which fluctuates more. However, other data suggest there might be economic weakness in the West census region, so this seems like a good time series to monitor. (As a side note, Seattle's housing prices seem to have hit a hiccup.)
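For the curious, a minimal sketch of a threshold check of this kind (my reading of the procedure; the actual detection algorithm is described in the linked posts). It assumes the deviation is measured as observed $\log u$ minus a constant-decline dynamic-equilibrium path, and all the numbers are made up.

import numpy as np

months = np.arange(48)
log_u_path = np.log(5.0) - 0.008 * months                    # assumed equilibrium decline of u
shock = np.concatenate([np.zeros(40), 0.02 * np.arange(8)])  # a late uptick in the observed data
log_u_obs = log_u_path + shock

deviation = log_u_obs - log_u_path
THRESHOLD = 0.17                                             # national-recession threshold quoted above
print(round(deviation.max(), 2), deviation.max() > THRESHOLD)  # 0.14 here: elevated, but no flag yet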
I ran a counterfactual recession shock and came up with something pretty uncertain:
Here's a zoom in to more recent data:
Of course, this is how the model looks early on in the process (see here for a version of this at the national level going through the Great Recession).
I'll continue to follow this — state unemployment rates for May come out in a little over a week on June 21st.
The latest data brings our error bands on the counterfactual down a bit, but the mean path is largely in the same place:
CPI and DIEM inflation forecasts
Apropos of getting into an argument about the quantity theory of money on Twitter, the new CPI data came out today — and it continues to be consistent with the Dynamic Information Equilibrium Model (DIEM — paper, presentation) forecast from 2017. Here's year-over-year inflation:
The red line is the forecast with 90% error bands (I'll get to the dashed curve in a second). The black line is the post-forecast data. The horizontal gray line is the "dynamic equilibrium", i.e. the equilibrium inflation rate of about 2.5% (this is CPI all items), and the vertical one is the center of the "lowflation" shock associated with the fall in the labor force after the Great Recession. Shocks to CPI follow these demographic shifts by about 3.5 years.
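As a reference point, here is a toy version of the dynamic equilibrium functional form as I understand it from the paper and presentation linked above: log CPI is a constant-growth trend plus a logistic "shock" step, and year-over-year inflation dips during the shock and then relaxes back to the equilibrium rate. The parameter values are illustrative, not the fitted ones.

import numpy as np

t = np.arange(0, 240) / 12.0                   # time in years (monthly samples)
equilibrium = 0.025                            # ~2.5% equilibrium CPI inflation, as quoted above
shock_size, t0, width = -0.04, 8.0, 1.5        # an illustrative "lowflation" step (assumed values)

log_cpi = equilibrium * t + shock_size / (1.0 + np.exp(-(t - t0) / width))
yoy_inflation = log_cpi[12:] - log_cpi[:-12]   # year-over-year log change ~ inflation rate
print(yoy_inflation[:3], yoy_inflation[-3:])   # ~0.025 away from the shock, with a dip near t0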
Back to that dashed curve — the original model forecast estimated the lowflation shock while it was still ongoing, which ends up being a little off. I re-estimated the parameters a year later and as you can see the result is well within the error bands. The place where it makes more of a difference visually (it's still numerically small) is in the CPI level:
Without the revision, the level data would be biased a bit high (i.e. the integrated size of the shock was over-estimated). But again, it's all within the error bands. For reference, here's a look at what it would have looked like to estimate a bigger shock in real time — unemployment during the Great Recession.
PS/Update +10 minutes: Here's the log-derivative CPI inflation (continuously compounded annual rate of change):
JOLTS!
It's JOLTS day once again (data reported today is for April 2019), and, well, still pretty much status quo (which is how you should view almost any report, really). As always, click to enlarge ...
I've speculated that these time series are leading indicators (even though the data is delayed by over a month, JOLTS hires still leads unemployment by about 5 months on average). There's basically zero sign of any deviation in hires — which, according to this model, means we should continue to see the unemployment rate fall through September of 2019 (5 months from April 2019). As in the last several reports, we continue to see a flattening in quits, total separations, and job openings (vacancies).
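A sketch of how one might check that kind of lead with synthetic data (the ~5 month figure itself comes from the model described in the linked posts, not from this toy): scan lags and pick the one where hires correlate most strongly, with the expected sign, against later unemployment.

import numpy as np

rng = np.random.default_rng(5)
n, true_lead = 120, 5
driver = rng.normal(0, 1, n + true_lead)                # common synthetic driver series
hires = driver[true_lead:] + rng.normal(0, 0.3, n)      # hires move first
unemployment = -driver[:n] + rng.normal(0, 0.3, n)      # unemployment responds later, inverted

def corr_at_lag(lag):
    return np.corrcoef(hires[:n - lag], unemployment[lag:])[0, 1]

best_lag = min(range(0, 13), key=corr_at_lag)           # most negative correlation wins
print(best_lag, round(corr_at_lag(best_lag), 2))        # best lag comes out at ~5 "months"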
Employment Situation Day (and other updates)
Today, we got another data point that lines up with the dynamic information equilibrium model (DIEM) forecast from 2017 (as well as labor force participation):
As a side note, I applied the recession detection algorithm to Australian data after a tweet from Cameron Murray:
The employment situation data flows into GDPnow calculations, and I looked at the performance of those forecasts compared to a constant 2.4% RGDP growth:
It turns out there is some information in the later forecasts from GDPnow, with the final update (a day before the BEA releases its initial estimate) having about half the error spread (in terms of standard deviations). However, incorporating the 2014 mini-boom and the TCJA effect brings the constant model much closer to the GDPnow performance (from 300 down to 200 bp at 95% CL versus 150 bp at 95% CL — with no effect of eliminating the mini-boom or TCJA bumps).
Application of a long short-term memory neural network: a burgeoning method of deep learning in forecasting HIV incidence in Guangxi, China
G. Wang, W. Wei, J. Jiang, C. Ning, H. Chen, J. Huang, B. Liang, N. Zang, Y. Liao, R. Chen, J. Lai, O. Zhou, J. Han, H. Liang, L. Ye
Guangxi, a province in southwestern China, has the second highest reported number of HIV/AIDS cases in China. This study aimed to develop an accurate and effective model to describe the tendency of HIV and to predict its incidence in Guangxi. HIV incidence data of Guangxi from 2005 to 2016 were obtained from the database of the Chinese Center for Disease Control and Prevention. Long short-term memory (LSTM) neural network models, autoregressive integrated moving average (ARIMA) models, generalised regression neural network (GRNN) models and exponential smoothing (ES) were used to fit the incidence data. Data from 2015 and 2016 were used to validate the most suitable models. The model performances were evaluated by evaluating metrics, including mean square error (MSE), root mean square error, mean absolute error and mean absolute percentage error. The LSTM model had the lowest MSE when the N value (time step) was 12. The most appropriate ARIMA models for incidence in 2015 and 2016 were ARIMA (1, 1, 2) (0, 1, 2)12 and ARIMA (2, 1, 0) (1, 1, 2)12, respectively. The accuracy of GRNN and ES models in forecasting HIV incidence in Guangxi was relatively poor. Four performance metrics of the LSTM model were all lower than the ARIMA, GRNN and ES models. The LSTM model was more effective than other time-series models and is important for the monitoring and control of local HIV epidemics.
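(Not from the paper itself, but for readers unfamiliar with the evaluation metrics named above, here is a small sketch of how MSE, RMSE, MAE and MAPE are computed for a forecast against held-out values; the numbers are made up.)

import numpy as np

actual = np.array([12.1, 11.8, 12.5, 13.0, 12.7])     # made-up held-out incidence values
forecast = np.array([11.9, 12.0, 12.2, 13.3, 12.5])   # made-up model output

err = forecast - actual
mse = np.mean(err ** 2)
rmse = np.sqrt(mse)
mae = np.mean(np.abs(err))
mape = np.mean(np.abs(err / actual)) * 100.0
print(mse, rmse, mae, mape)  # lower values on the held-out years favor a model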
Verification of AIS Data by using Video Images taken by a UAV
Fan Zhou, Shengda Pan, Jingjing Jiang
Journal: The Journal of Navigation , First View
Effective technical methods for verifying the authenticity and accuracy of Automatic Identification System (AIS) data, which are important for safe navigation and traffic regulation, are still lacking. In this study, we propose a new method to verify AIS data by using video images taken by an Unmanned Aerial Vehicle (UAV). An improved ViBe algorithm is used to extract the ship target image from the video images and the ship's spatial position is calculated using a monocular target-positioning algorithm. The positioning results are compared with the position, speed and course data of the same ship in AIS, and the authenticity and accuracy of the AIS data are verified. The results of the experiment conducted in the inland waterways of Huangpu River in Shanghai, China, show that AIS signals can be automatically checked and verified by a UAV in real time and can thus improve the supervision efficiency of maritime departments.
Turbulent Rayleigh–Bénard convection in an annular cell
Xu Zhu, Lin-Feng Jiang, Quan Zhou, Chao Sun
Journal: Journal of Fluid Mechanics / Volume 869 / 25 June 2019
Published online by Cambridge University Press: 29 April 2019, R5
Print publication: 25 June 2019
We report an experimental study of turbulent Rayleigh–Bénard (RB) convection in an annular cell of water (Prandtl number $Pr=4.3$) with a radius ratio $\eta\simeq 0.5$. Global quantities, such as the Nusselt number $Nu$ and the Reynolds number $Re$, and local temperatures were measured over the Rayleigh range $4.2\times 10^{9}\leqslant Ra\leqslant 4.5\times 10^{10}$. It is found that the scaling behaviours of $Nu(Ra)$, $Re(Ra)$ and the temperature fluctuations remain the same as those in the traditional cylindrical cells; both the global and local properties of turbulent RB convection are insensitive to the change of cell geometry. A visualization study, as well as local temperature measurements, shows that in spite of the lack of the cylindrical core, there also exists a large-scale circulation (LSC) in the annular system: thermal plumes organize themselves with the ascending hot plumes on one side and the descending cold plumes on the opposite side. Near the upper and lower plates, the mean flow moves along the two circular branches. Our results further reveal that the dynamics of the LSC in this annular geometry is different from that in the traditional cylindrical cell, i.e. the orientation of the LSC oscillates in a narrow azimuthal angle range, and no cessations, reversals or net rotation were detected.
Effect of oxidation on thermal fatigue behavior of cast tungsten carbide particle/steel substrate surface composite
Quan Shan, Zaifeng Zhou, Zulai Li, Yehua Jiang, Fan Gao, Lei Zhang
Journal: Journal of Materials Research / Volume 34 / Issue 10 / 28 May 2019
Published online by Cambridge University Press: 11 April 2019, pp. 1754-1761
Cast tungsten carbide is widely used to reinforce iron or steel substrate surface composites to meet the demands of harsh wear environments due to its extremely high hardness and excellent wettability with molten steel. Cast tungsten carbide particle/steel matrix surface composites have demonstrated great potential development in applications under the abrasive working condition. The thermal shock test was used to investigate the fatigue behavior of the composites fabricated by vacuum evaporative pattern casting technique at different temperatures. At elevated temperatures, the fatigue behavior of the composites was influenced by the oxidation of tungsten carbide, producing WO3. Thermodynamic calculations showed that the W2C in the tungsten carbide particle was oxidized at an initial temperature of approximately 570 °C. The relationship between oxidation and thermal fatigue crack growth was investigated, and the results suggested that oxidation would become more significant with increasing thermal shock temperature. These findings provide a valuable guide for understanding and designing particle/steel substrate surface composites.
Stock Return Asymmetry: Beyond Skewness
Lei Jiang, Ke Wu, Guofu Zhou, Yifeng Zhu
Journal: Journal of Financial and Quantitative Analysis /
Rheological behavior of semisolid hypereutectic Al–Si alloys
Qiuping Wang, Lu Li, Rongfeng Zhou, Yongkun Li, Fan Xiao, Yehua Jiang
Journal: Journal of Materials Research / Volume 34 / Issue 12 / 28 June 2019
Refinement and homogenization of primary Si particles in hypereutectic Al–Si alloys is an effective route to enhance the tensile strength and wear resistance and satisfy the industrial requirements for a wide range of applications. Herein, two kinds of semisolid hypereutectic Al–Si alloys are synthesized by using a rotating-rod-induced nucleation technology. The influence of different cooling conditions and shear rates on the apparent viscosity of molten melt of slurry are examined by self-made high-precision and high-temperature apparent viscosity test equipment. The correlation between the shear rate and the uniformity of hard phases has been investigated from the obtained results, fitting curves, and optical microscope. With the increase in the shear rate, the particles tend to become rounder and the apparent viscosity becomes lower. The enhanced shape factor resulted in more rounded grains, which further reduced the apparent viscosity. During the same cooling time, the higher cooling rate resulted in higher solid fraction, generating higher apparent viscosity. The present study provides unique insight into the filling behavior of semisolid hypereutectic Al–Si alloys and serves as a baseline for future work.
Impact of maternal HIV infection on pregnancy outcomes in southwestern China – a hospital registry based study
M. Yang, Y. Wang, Y. Chen, Y. Zhou, Q. Jiang
Published online by Cambridge University Press: 08 March 2019, e124
Globally, human immune deficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) continues to be a major public health issue. With improved survival, the number of people living with HIV/AIDS is increasing, with over 2 million among pregnant women. Investigating adverse pregnant outcomes of HIV-infected population and associated factors are of great importance to maternal and infant health. A cross-sectional data collected from hospital delivery records of 4397 mother–infant pairs in southwestern China were analysed. Adverse pregnant outcomes (including low birthweight/preterm delivery/low Apgar score) and maternal HIV status and other characteristics were measured. Two hundred thirteen (4.9%) mothers were HIV positive; maternal HIV infection, rural residence and pregnancy history were associated with all three indicators of adverse pregnancy outcomes. This research suggested that maternal population have high prevalence in HIV infection in this region. HIV-infected women had higher risks of experiencing adverse pregnancy outcomes. Rural residence predisposes adverse pregnancy outcomes. Findings of this study suggest social and medical support for maternal-infant care needed in this region, selectively towards rural areas and HIV-positive mothers.
Late onset of the Holocene rainfall maximum in northeastern China inferred from a pollen record from the sediments of Tianchi Crater Lake
Xiaoyan Liu, Tao Zhan, Xinying Zhou, Haibin Wu, Qin Li, Chao Zhao, Yansong Qiao, Shiwei Jiang, Luyao Tu, Yongfa Ma, Jun Zhang, Xia Jiang, Benjun Lou, Xiaolin Zhang, Xin Zhou
Journal: Quaternary Research / Volume 92 / Issue 1 / July 2019
The timing of the Holocene summer monsoon maximum (HSMM) in northeastern China has been much debated and more quantitative precipitation records are needed to resolve the issue. In the present study, Holocene precipitation and temperature changes were quantitatively reconstructed from a pollen record from the sediments of Tianchi Crater Lake in northeastern China using a plant functional type-modern analogue technique (PFT-MAT). The reconstructed precipitation record indicates a gradual increase during the early to mid-Holocene and a HSMM at ~5500–3100 cal yr BP, while the temperature record exhibits a divergent pattern with a marked rise in the early Holocene and a decline thereafter. The trend of reconstructed precipitation is consistent with that from other pollen records in northeastern China, confirming the relatively late occurrence of the HSMM in the region. However, differences in the onset of the HSMM within northeastern China are also evident. No single factor appears to be responsible for the late occurrence of the HSMM in northeastern China, pointing to a potentially complex forcing mechanism of regional rainfall in the East Asian monsoon region. We suggest that further studies are needed to understand the spatiotemporal pattern of the HSMM in the region.
Microstructural evolution and wear performance of the high-entropy FeMnCoCr alloy/TiC/CaF2 self-lubricating composite coatings on copper prepared by laser cladding for continuous casting mold
Jun Jiang, Ruidi Li, Tiechui Yuan, Pengda Niu, Chao Chen, Kechao Zhou
The FeMnCoCr high-entropy alloy/TiC/CaF2 self-lubricating coatings were successfully prepared on a Cu–Zr–Cr alloy for continuous casting mold by laser cladding for wear-resistance. The intriguing finding was that the laser-cladded FeMnCoCr is mainly composed of face-centered cubic and hexagonal close-packed solid solution phases. During the cladding process, the FeMnCoCr/TiC or the FeMnCoCr/TiC/CaF2 mixed sufficiently with Cu matrix, while FeMnCoCr exhibited a spherical shape owing to being insoluble in Cu. The average hardness of the FeMnCoCr/TiC/CaF2 self-lubricating high-entropy alloy (HEA) coatings was twice that of the pure FeMnCoCr HEA coating. By addition of TiC, the friction coefficient and wear rate were decreased from 0.35 and 3.68 × 10−15 mm3/m to 0.27 and 3.06 × 10−15 mm3/m, respectively. When CaF2 was added, the friction coefficients and wear rate were decreased to 0.16 and 2.16 × 10−15 mm3/m, respectively, which was 54% lower than the pure FeMnCoCr HEA coating. The main wear mechanism of the FeMnCoCr coating is abrasive wear while that of the FeMnCoCr/TiC coating is abrasive and adhesion wear. But adhesion wear is dominant for the FeMnCoCr/TiC/CaF2 coating.
Functional analysis of the dairy cow mammary transcriptome between early lactation and mid-dry period
Ye Lin, He Lv, Minghui Jiang, Jinyu Zhou, Shuyuan Song, Xiaoming Hou
Journal: Journal of Dairy Research / Volume 86 / Issue 1 / February 2019
Published online by Cambridge University Press: 07 February 2019, pp. 63-67
Print publication: February 2019
In this research communication we used digital gene expression (DGE) analysis to identify differences in gene expression in the mammary glands of dairy cows between early lactation and the mid-dry period. A total of 741 genes were identified as being differentially expressed by DGE analysis. Compared with their expression in dry cows, 214 genes were up-regulated and 527 genes were down-regulated in lactating cow mammary glands. Gene Ontology analysis showed that lactation was supported by increased gene expression related to metabolic processes and nutrient transport and was associated with decreased gene expression related to cell proliferation. Pathway mapping using the Kyoto Encyclopedia of Genes and Genomes showed that 579 differentially expressed genes had pathway annotations related to 204 pathways. Metabolic pathway-related genes were the most significantly enriched. Genes and pathways identified by the present study provide insights into molecular events that occur in the mammary gland between early lactation and mid-dry period, which can be used to facilitate further investigation of the mechanisms underlying lactation and mammary tissue remodeling in dairy cows.
CHARACTERIZATIONS OF BMO AND LIPSCHITZ SPACES IN TERMS OF $A_{P,Q}$ WEIGHTS AND THEIR APPLICATIONS
MSC 2010: Special classes of linear operators
MSC 2010: Harmonic analysis in several variables
DINGHUAI WANG, JIANG ZHOU, ZHIDONG TENG
Journal: Journal of the Australian Mathematical Society , First View
Published online by Cambridge University Press: 30 January 2019, pp. 1-11
Let $0<\alpha<n$, $1\leq p<q<\infty$ with $1/p-1/q=\alpha/n$, $\omega\in A_{p,q}$, $\nu\in A_{\infty}$ and let $f$ be a locally integrable function. In this paper, it is proved that $f$ is in the bounded mean oscillation space $\mathit{BMO}$ if and only if
$$\sup_{B}\frac{|B|^{\alpha/n}}{\omega^{p}(B)^{1/p}}\bigg(\int_{B}|f(x)-f_{\nu,B}|^{q}\,\omega(x)^{q}\,dx\bigg)^{1/q}<\infty,$$
where $\omega^{p}(B)=\int_{B}\omega(x)^{p}\,dx$ and $f_{\nu,B}=(1/\nu(B))\int_{B}f(y)\nu(y)\,dy$. We also show that $f$ belongs to the Lipschitz space $Lip_{\alpha}$ if and only if
$$\sup_{B}\frac{1}{\omega^{p}(B)^{1/p}}\bigg(\int_{B}|f(x)-f_{\nu,B}|^{q}\,\omega(x)^{q}\,dx\bigg)^{1/q}<\infty.$$
As applications, we characterize these spaces by the boundedness of commutators of some operators on weighted Lebesgue spaces.
Commuting and Semi-commuting Monomial-type Toeplitz Operators on Some Weakly Pseudoconvex Domains
Cao Jiang, Xing-Tang Dong, Ze-Hua Zhou
Journal: Canadian Mathematical Bulletin / Volume 62 / Issue 2 / June 2019
In this paper, we completely characterize the finite rank commutator and semi-commutator of two monomial-type Toeplitz operators on the Bergman space of certain weakly pseudoconvex domains. Somewhat surprisingly, there are not only plenty of commuting monomial-type Toeplitz operators but also non-trivial semi-commuting monomial-type Toeplitz operators. Our results are new even for the unit ball.
Novel guidance model and its application for optimal re-entry guidance
C.W. Jiang, G.F. Zhou, B. Yang, C.S. Gao, W.X. Jing
Journal: The Aeronautical Journal / Volume 122 / Issue 1257 / November 2018
Aiming at three-dimensional (3D) terminal guidance problem, a novel guidance model is established in this paper, in which line-of-sight (LOS) range is treated as an independent variable, describing the relative motion between the vehicle and the target. The guidance model includes two differential equations that describe LOS's pitch and yaw motions in which the pitch motion is separately decoupled. This model avoids the inaccuracy of simplified two-dimensional (2D) guidance model and the complexity of 3D coupled guidance model, which not only maintains the accuracy but also simplifies the guidance law design. The application of this guidance model is studied for optimal re-entry guidance law with impact angle constraint, which is presented in the form of normal overload. Compared with optimal guidance laws based on traditional guidance model, the proposed one based on novel guidance model is implemented with the LOS range instead of time-to-go, which avoids the problem of the time-to-go estimation of traditional optimal guidance laws. Finally, the correctness and validity of the guidance model and guidance law are verified by numerical simulation. The guidance model and guidance law proposed in this paper provide a new way for the design of terminal guidance.
Prevalence of Posttraumatic Stress Disorder (PTSD) and Its Correlates Among Junior High School Students at 53 Months After Experiencing an Earthquake
Qiaolan Liu, Min Jiang, Yang Yang, Huan Zhou, Yanyang Zhou, Min Yang, Huanyu Xu, Yuanyi Ji
Journal: Disaster Medicine and Public Health Preparedness , First View
Published online by Cambridge University Press: 12 September 2018, pp. 1-6
To identify the prevalence of posttraumatic stress disorder (PTSD) and its determinants among adolescents more than 4 years after the 2008 Wenchuan earthquake.
Adolescents (1,125 total) from 2 junior high schools in areas affected by the catastrophic earthquake were followed up for 3 years. The self-rating PTSD scale based on the Manual of Mental Disorders, 4th Edition (DSM-IV) and the Chinese Classification and Diagnostic Criteria of Mental Disorders, 2nd Edition, Revised (CCMD-2-R) was collected at 53 months, and determinant data were collected repeatedly. Logistic regression was used for a determinants analysis.
The prevalence of overall PTSD was 23.4% among the sample. The risk factors for PTSD were older age (OR=1.52, 95% CI: 1.20~1.92), and death or injury of a family member in the earthquake (OR=1.61, 95% CI: 1.09~2.37). Adolescents who had moderate-to-severe common mental health problems were more likely to have PTSD symptoms, with ORs from 3.98 to 17.67 (All P<0.05). Self-esteem remained a protective factor for PTSD regardless of age, whereas positive coping was a protective factor for PTSD when adolescents were older.
PTSD symptoms among adolescent survivors of a catastrophic earthquake seemed to persist over time. Long-term interventions are needed to alleviate PTSD symptoms among adolescent survivors. (Disaster Med Public Health Preparedness. 2018;page 1 of 5)
Laboratory study of astrophysical collisionless shock at SG-II laser facility
HPL Laboratory Astrophysics
Dawei Yuan, Huigang Wei, Guiyun Liang, Feilu Wang, Yutong Li, Zhe Zhang, Baojun Zhu, Jiarui Zhao, Weiman Jiang, Bo Han, Xiaoxia Yuan, Jiayong Zhong, Xiaohui Yuan, Changbo Fu, Xiaopeng Zhang, Chen Wang, Guo Jia, Jun Xiong, Zhiheng Fang, Shaoen Jiang, Kai Du, Yongkun Ding, Neng Hua, Zhanfeng Qiao, Shenlei Zhou, Baoqiang Zhu, Jianqiang Zhu, Gang Zhao, Jie Zhang
Published online by Cambridge University Press: 04 September 2018, e45
Astrophysical collisionless shocks are amazing phenomena in space and astrophysical plasmas, where supersonic flows generate electromagnetic fields through instabilities and particles can be accelerated to high energy cosmic rays. Until now, understanding these micro-processes is still a challenge despite rich astrophysical observation data have been obtained. Laboratory astrophysics, a new route to study the astrophysics, allows us to investigate them at similar extreme physical conditions in laboratory. Here we will review the recent progress of the collisionless shock experiments performed at SG-II laser facility in China. The evolution of the electrostatic shocks and Weibel-type/filamentation instabilities are observed. Inspired by the configurations of the counter-streaming plasma flows, we also carry out a novel plasma collider to generate energetic neutrons relevant to the astrophysical nuclear reactions.
Chitooligosaccharides enhance cold tolerance by repairing photodamaged PS II in rice
Jiachun Zhou, Qiao Chen, Yang Zhang, Liqiang Fan, Zhen Qin, Qiming Chen, Yongjun Qiu, Lihua Jiang, Liming Zhao
Journal: The Journal of Agricultural Science / Volume 156 / Issue 7 / September 2018
Chitooligosaccharides (COS) are multi-functional foods and nutrients and environmentally friendly biological abiotic-resistance inducing agents for plants. In the current study, the effects and possible mechanisms of COS on improving the cold resistance of rice (II YOU 1259) seedlings were investigated. Compared with the control, a COS pre-soaking treatment enhanced photosynthesis, reduced oxidation damage and led to accumulation of more osmotic regulation substances under chilling treatment. In addition, a novel Deg/HtrA family serine endopeptidase (DegQ) gene, related to COS enhanced rice cold resistance, was identified. Quantitative real-time polymerase chain reaction (qRT-PCR) analysis revealed that transcription of DegQ and psbA (D1 protein encoding gene) were up-regulated in a time-dependent manner by COS treatment under cold stress. With increasing expression of the D1 protein, chlorophyll b content was enhanced correspondingly. The current results suggest that COS could enhance cold stress tolerance of rice by repairing the photodamaged photosystem II, altering osmotic regulation and reducing oxidation damage.
Association of dietary sodium:potassium ratio with the metabolic syndrome in Chinese adults
Xiaocheng Li, Baofu Guo, Di Jin, Yanli Wang, Yun Jiang, Baichun Zhu, Yang Chen, Liankai Ma, Han Zhou, Guoxiang Xie
Journal: British Journal of Nutrition / Volume 120 / Issue 6 / 28 September 2018
Several epidemiological studies have investigated that Na or K intakes might be associated with the metabolic syndrome (MetS). However, little evidence has evaluated the association between Na:K ratio and the MetS. In this study, we assessed the association between the dietary Na:K ratio and the MetS. The cross-sectional study was conducted among adults aged 18 years and older in Nanjing, using a multi-stage random sampling method, which resulted in a sample size of 1993 participants. Dietary Na and K intakes were assessed by 3 consecutive days of dietary recollection combined with condiments weighing method. Health-related data were obtained by standardised questionnaires, as well as physical examinations and laboratory assessments. The prevalence rate of the MetS was 36·5 % (728/1993). After adjusting for various lifestyle and dietary factors of the MetS, participants in the highest quartile of dietary Na:K ratio were at a higher risk of developing MetS (OR=1·602; 95 % CI 1·090, 2·353) compared with those in the lowest quartile. Each 1-sd increase in dietary Na:K ratio was associated with a higher risk of prevalent MetS (OR=1·166; 95 % CI: 1·018, 1·336). Among the components of the MetS, dietary Na:K ratio was positively associated with high blood pressure (quartile 3 v. quartile 1: OR=1·656; 95 % CI 1·228, 2·256) and hypertriacylglycerolaemia (quartile 4 v. quartile1: OR=1·305; 95 % CI 1·029, 1·655) in multivariate analysis. These results revealed that higher dietary Na:K ratio significantly increased the risk of the MetS in Chinese adults. Further studies are needed to verify this association.
Asymmetry in Stock Comovements: An Entropy Approach
Lei Jiang, Ke Wu, Guofu Zhou
Journal: Journal of Financial and Quantitative Analysis / Volume 53 / Issue 4 / August 2018
Published online by Cambridge University Press: 06 August 2018, pp. 1479-1507
We provide an entropy approach for measuring the asymmetric comovement between the return on a single asset and the market return. This approach yields a model-free test for stock return asymmetry, generalizing the correlation-based test proposed by Hong, Tu, and Zhou (2007). Based on this test, we find that asymmetry is much more pervasive than previously thought. Moreover, our approach also provides an entropy-based measure of downside asymmetric comovement. In the cross section of stock returns, we find an asymmetry premium: Higher downside asymmetric comovement with the market indicates higher expected returns.
Numerical modeling of the thermally induced core laser leakage in high power co-pumped ytterbium doped fiber amplifier
Fibres for High Power Lasers
Lingchao Kong, Jinyong Leng, Pu Zhou, Zongfu Jiang
Published online by Cambridge University Press: 24 May 2018, e25
We propose a novel model to explain the physical process of the thermally induced core laser leakage (TICLL) effect in a high power co-pumped ytterbium doped fiber (YDF) amplifier. This model considers the thermally induced mode bending loss decrease and the thermally induced mode instability (TMI) in the coiled YDF, and is further used to reproduce the TICLL effect in the high power co-pumped step-index $20/400$ fiber amplifier. Besides, the TICLL effect in the co-pumping scheme and counter-pumping scheme is compared. The result proves that the TICLL effect is caused by the combined effect of the thermally induced mode bending loss decrease and the TMI, and could be mitigated by adopting the counter-pumping scheme. To our best knowledge, this is the first theoretical explanation of the TICLL effect in high power fiber amplifier.
A New Cycle Slip Detection and Repair Method for Single-Frequency GNSS Data
Qusen Chen, Hua Chen, Weiping Jiang, Xiaohui Zhou, Peng Yuan
Journal: The Journal of Navigation / Volume 71 / Issue 6 / November 2018
Cycle slip detection for single frequency Global Navigation Satellite System (GNSS) data is currently mainly based on measurement modelling or prediction, which cannot be effectively performed for kinematic applications and it is difficult to detect or repair small cycle slips such as half-cycle slips. In this paper, a new method that is based on the total differential of ambiguity and Least-Squares Adjustment (LSA) for cycle slip detection and repair is introduced and validated. This method utilises only carrier-phase observations to build an ambiguity function. LSA is then conducted for detecting and repairing cycle slips, where the coordinate and cycle slips are obtained successively. The performance of this method is assessed through processing short and long baselines in static and kinematic modes and the impact of linearization and atmospheric errors are analysed at the same time under a controlled variable method. The results indicate this method is very effective and reliable in detecting and repairing multiple cycle slips, especially small cycle slips. | CommonCrawl |
Seminar series in Probability and Combinatorics
You are welcome to participate in our seminars, where our own and invited researchers talk about their research. The seminars usually take place on Thursdays, every week or every two weeks, at 10:15-11:15.
Contact: Tiffany Lo and Paul Thévenin
2 February, 10:15 –11:15
PC Seminar with Júlia Komjáthy, TU Delft
Welcome to this seminar held by Júlia Komjáthy, TU Delft. Title and abstract T.B.A.
Previous seminars in Probability and Combinatorics
Counting combinatorial 3-spheres using Shannon entropy
Date: 1 December, 10:15–11:15
Lecturer: Joel Danielsson, Lund University
Abstract: How many combinatorial d-spheres are there with m facets? That is, how many simplicial complexes with m maximal faces are there whose geometric realizations are homeomorphic to the unit sphere in Euclidean (d+1)-space?
While this has been solved for d=1 (cycle graphs) and for d=2 (triangulations of the 2-sphere), it is still an open problem for d≥3. We prove an upper bound on the number of 3-spheres, by estimating the entropy of a sphere picked uniformly at random. For this we use a corollary of Shannon's noiseless encoding theorem from a recent paper by Palmer & Pálvölgyi.
Sparse Random Graphs with Many Triangles
Date: 17 November, 10:15
Lecturer: Suman Chakraborty, Uppsala University
Abstract: It is well known that sparse random graphs (where the number of edges is of the order of the number of vertices) contain only a small number of triangles. On the other hand, many networks observed in real life are sparse but contain a large number of triangles. Motivated by this discrepancy we ask the following two questions: How (un)likely is it that a sparse random graph contains a large number of triangles? What does the graph look like when it contains a large number of triangles? We also ask a related question: What is the probability that in a sparse random graph a large number of vertices are part of some triangle? We discuss these questions, as well as some applications to exponential random graph models.
Joint work with Remco van der Hofstad and Frank den Hollander.
Uncovering a graph
Lecturer: Svante Janson
Abstract: Let G be a graph, deterministic or random, and uncover its vertices one by one, in uniformly random order. This yields a growing sequence of (random) induced subgraphs of G, and we study the evolution of this sequence. More precisely, we study only the evolution of the number of edges in these subgraphs.
This question (among others) was studied by Hackl, Panholzer and Wagner (AofA 2022) for the case when $G$ is a uniformly random labelled tree. They showed that the stochastic process given by the number of visible edges, after suitable rescaling, converges to a continuous Gaussian process, which resembles a Brownian bridge but with a somewhat different distribution. (The proof uses an exact formula for a multivariate generating function.) Our main result is that this result extends to a wide class of deterministic and random trees and graphs.
The problem can be seen as dual to the random graph process introduced by Erdős and Rényi, where the edges of a complete graph are uncovered in random order. Our proof uses an adaptation of a method introduced for that problem (Janson 1990, 1994).
Stein's method for exponential random graph models and assessing goodness of fit
Date: 3 November, 10:15
Lecturer: Gesine Reinert, Oxford University
Abstract: Exponential random graph models are a key tool in network analysis but, due to an intractable normalising constant, are difficult to manipulate. In this talk we shall use Stein's method to approximate these models by Bernoulli random graphs in "high temperature" regimes.
For assessing the goodness of fit of a model, often independent replicas are assumed. When the data are given in the form of a network, usually there is only one network available. If the data are hypothesised to come from an exponential random graph model, the likelihood cannot be calculated explicitly. Using a Stein operator for these models we introduce a kernelized goodness of fit test and illustrate its performance.
Finally, we extend the ideas of this goodness of fit test to provide an approximate goodness of fit test for potentially black-box graph generators.
This talk is based on joint work with Nathan Ross and with Wenkai Xu.
Friend of a friend models of network growth
Date: 26 October
Lecturer: Watson Levens, University of Dar es Salaam and Uppsala University
Abstract: A model for a friend of a friend network growth is based on the idea that individuals joining the social network choose one individual at random and link to her with probability p, then they choose a friend of that person and link with probability q. The model is more general and conceptually simple, yet it produces power-law degree distributions, small world clustering and super-hub networks with non-stationary degree distributions.
I will discuss a general framework for analysing a friend of a friend models of network growth and look at some special cases which produce scale-free and super-hubs networks. I will then discuss the general results of the models and show examples of misleading claims about how some cases of models similar to the friend of a friend models can be used as a form of local mechanism for motivating preferential attachment.
Finally, I will mention some results about the early evolution of 2018/2019 Tanzania Twitter and compare it with the 2012 Twitter networks of USA, Brazil and Japan.
The talk is based on some recent studies by Watson Levens, David J. T. Sumpter and Alex Szorkovszky.
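For readers who want to play with the growth rule described above, here is a minimal simulation sketch of one concrete variant (the seed edge, the uniform choice of the target node, and the parameter names are assumptions of mine, not necessarily the exact setup of the talk):

```python
import random

def friend_of_a_friend_graph(n, p, q, rng=random):
    """Grow a graph by one concrete friend-of-a-friend rule (illustrative only).

    Start from a single edge {0, 1}. Each new node v picks a uniformly random
    existing node u and links to it with probability p; it then picks a uniformly
    random neighbour w of u (other than itself) and links to w with probability q.
    Returns an adjacency list (dict of sets).
    """
    adj = {0: {1}, 1: {0}}
    for v in range(2, n):
        adj[v] = set()
        u = rng.randrange(v)                      # uniformly chosen existing node
        if rng.random() < p:
            adj[v].add(u)
            adj[u].add(v)
        neighbours = sorted(adj[u] - {v})         # friends of the chosen node
        if neighbours and rng.random() < q:
            w = rng.choice(neighbours)
            adj[v].add(w)
            adj[w].add(v)
    return adj

# Example: tabulate the degree distribution of one realisation.
from collections import Counter
g = friend_of_a_friend_graph(10_000, p=0.5, q=0.9)
print(Counter(len(nbrs) for nbrs in g.values()).most_common(5))
```

The degree counts produced at the end can be used to explore how the distribution changes with p and q.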
Is it easier to count communities than to find them?
Date: 20 October, 10:15
Lecturer: Fiona Skerman, Uppsala University
We study the planted-dense-subgraph model where an Erdős–Rényi base graph is augmented by adding one or more `communities' - subsets of vertices with a higher-than-average connection probability between them. The detection problem is to distinguish between the vanilla Erdős–Rényi model and that with one or many planted communities and the recovery problem is to approximately recover the position of the community structure within the graph. A detection-recovery gap is known to occur, for some parameter combinations (sizes of structure, edge probabilities), we have fast algorithms to perform detection but recovery is not possible. We investigate something in-between: we want to infer some property of the community structure without needing to recover it. We say counting is the problem of distinguishing a single community from many. Our result is to show counting is no easier than detection.
The combinatorics at play: let a=(a_H)_H and b=(b_H)_H be two sequences, indexed by graphs. We define a graph sequence r by setting the empty graph to have r-value 1, and via the following recursion: r_G = a_G - \sum_H b_H r_{G \setminus H}, where the sum is over all non-empty subgraphs H of G. Central to our proof is to show that the sequence r inherits properties of the sequences a and b. Loosely, in our context the sequence a (resp. b) encodes information of the probability space with many communities (resp. one community), and whether one can distinguish these two probability spaces is characterised by the value of the sum of squares of the r-values. Joint work with Cindy Rush, Alex Wein and Dana Yang.
Fragmentation of trees and drifted excursions
Lecturer: Paul Thévenin, Uppsala University
Abstract: The fragmentation of a tree is a process which consists in cutting the tree at random points, thus splitting it into smaller connected components as time passes. In the case of the so-called Brownian tree, it turns out that the sizes of these subtrees, known as the Aldous-Pitman fragmentation process, have the same distribution as the lengths of the excursions over its current infimum of a linearly drifted Brownian excursion, as proved by Bertoin. We provide a natural coupling between these two objects. To this end, we make use of the so-called cut-tree of the Brownian tree, which can be seen as the genealogical tree of the fragmentation process. Joint work with Igor Kortchemski.
Extremal trees with fixed degree sequence
Lecturer: Eric Andriantiana (Rhodes University)
Date: 6 October, 10:15
Abstract: Joint work with Valisoa Razanajatovo Misanantenaina and Stephan G. Wagner. The so-called greedy tree G(D) and alternating greedy tree M(D) are known to be extremal graphs among elements of the set T_D of trees with degree sequence D, with respect to various graph invariants. This talk will discuss a generalized proof that covers a larger family of invariants for which G(D) or M(D) is an extremal graph in T_D. The result implies known results on the Wiener index, the number of subtrees, the number of independent subsets, the Hosoya index, the terminal Wiener index, and the energy of graphs. Furthermore, new results on the number of rooted spanning forests, the incidence energy and the solvability of a graph also follow. By comparing greedy trees, and alternating greedy trees, with different degree sequences, the results in T_D are extended to the set of trees whose degree sequences are majorized by D.
Some topics on random graphs
Lecturer: Tiffany Lo (Uppsala University)
Abstract: We consider the preferential attachment (PA) tree with additive fitness and the duplication divergence (DD) random graph. In particular, we discuss the construction of the local weak limit of the PA tree, and study the expected degree distribution of the DD graph using a certain type of birth-catastrophe process. The work on the DD random graph is joint with A.D. Barbour.
Details: Benny Avelin, Uppsala University - Seminarierum of Ångström, Hus 6, Rum 64119, 10:15 - 11:15 Title: Universal approximation and regularity of periodic neural networks
Abstract: In this talk, I will focus on the approximation of functions of Bounded Variation (BV) using periodic neural networks. I will present a calculus of variations approach to the regularity and localization of the approximation problem.
Details: Gabriel Lipnik, Graz University of Technology - Online, 10:15 - 11:15 Title: Fragmentation Process derived from $\alpha$-stable Galton-Watson trees
Abstract: Many well-known combinatorial sequences satisfy some sort of recurrence relations. In this talk, we discuss a special class of such sequences, so-called q-recursive sequences. For an integer q at least 2, a q-recursive sequence is defined by recurrence relations on subsequences of indices modulo some fixed power of q. Precise asymptotic results for these sequences are obtained via a detour to q-regular sequences in the sense of Allouche and Shallit. It turns out that many combinatorial sequences are in fact q-recursive. We conclude the talk by studying some specific q-recursive sequences in detail. This is joint work with Clemens Heuberger and Daniel Krenn.
Recorded: https://media.medfarm.uu.se/play/kanal/920/video/14791
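As a concrete illustration of the definition (a standard example of my own choosing, not one from the talk): the binary sum-of-digits sequence is 2-recursive, since it satisfies recurrences on the even- and odd-indexed subsequences.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    """Binary sum-of-digits: a classic 2-recursive sequence.

    It is defined by recurrences on the subsequences of even and odd indices:
    s(2m) = s(m) and s(2m + 1) = s(m) + 1, with s(0) = 0.
    """
    if n == 0:
        return 0
    if n % 2 == 0:
        return s(n // 2)
    return s(n // 2) + 1

print([s(n) for n in range(16)])
# [0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4]
```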
Details: Colin Desmarais, Uppsala University - Seminarierum of Ångström, Hus 6, Rum 64119, 10:15 - 11:15 Title: Broadcasting induced colourings of preferential attachment trees
Abstract: A random recursive tree is a rooted tree constructed by successively choosing a vertex uniformly at random and adding a child to the selected vertex. A random preferential attachment tree is grown in a similar manner, but the vertex selection is made proportional to a linear function of the number of children of a vertex. Preferential attachment trees are the tree version of the Barabasi-Albert preferential attachment model. We consider a red-blue colouring of the vertices of preferential attachment trees, which we call a broadcasting induced colouring: the root is either red or blue with equal probability, while for a fixed value p between 0 and 1, every other vertex is assigned the same colour as its parent with probability p and the other colour otherwise. In this talk I will discuss properties of preferential attachment trees with broadcasting induced colourings, including limit laws for the number of vertices, clusters (maximal monochromatic subtrees) and leaves of each colour. The main focus of the talk will be on the size of the root cluster, that is, the maximal monochromatic subtree containing the root. Joint work with Cecilia Holmgren and Stephan Wagner.
Details: Open problem session - Seminarierum of Ångström, Hus 6, Rum 64119, 10:15 - 11:15
Details: Annika Heckel, University of Munich - Seminarierum of Ångström, Hus 6, Rum 64119, 10:15 - 11:15 Title: How does the chromatic number of a random graph vary?
Abstract: How much does the chromatic number of the random graph G(n, 1/2) vary? A classic result of Shamir and Spencer shows that it is contained in some sequence of intervals of length about n^(1/2). Until recently, however, no non-trivial lower bounds on the fluctuations of the chromatic number of a random graph were known, even though the question was raised by Bollobás many years ago. I will talk about the main ideas needed to prove that, at least for infinitely many n, the chromatic number of G(n, 1/2) is not concentrated on fewer than n^(1/2-o(1)) consecutive values. I will also discuss the Zigzag Conjecture, made recently by Bollobás, Heckel, Morris, Panagiotou, Riordan and Smith: this proposes that the correct concentration interval length 'zigzags' between n^(1/4+o(1)) and n^(1/2+o(1)), depending on n. Joint work with Oliver Riordan.
Details: Gabriel Berzunza-Ojeda, University of Liverpool - Online, 10:15 - 11:15 Title: Fragmentation Process derived from $\alpha$-stable Galton-Watson trees
Abstract: Aldous, Evans and Pitman (1998) studied the behavior of the fragmentation process derived from deleting the edges of a uniform random tree on n labelled vertices. In particular, they showed that, after proper rescaling, the above fragmentation process converges as n -> \infty to the fragmentation process of the Brownian CRT obtained by cutting-down the Brownian CRT along its skeleton in a Poisson manner. In this talk, we will discuss the fragmentation process obtained by deleting randomly chosen edges from a critical Galton-Watson tree t_n conditioned on having n vertices, whose offspring distribution belongs to the domain of attraction of a stable law of index \alpha in (1,2]. The main result establishes that, after rescaling, the fragmentation process of t_n converges, as n -> \infty, to the fragmentation process obtained by cutting-down proportional to the length on the skeleton of an \alpha-stable Lévy tree. We will also explain how one can construct the latter by considering the partitions of the unit interval induced by the normalized \alpha-stable Lévy excursion with a deterministic drift. In particular, the above extends the result of Bertoin (2000) on the fragmentation process of the Brownian CRT. The approach uses the Prim's algorithm (or Prim-Jarník algorithm) to define a consistent exploration process that encodes the fragmentation process of t_n. We will discuss the key ideas of the proof. Joint work with Cecilia Holmgren (Uppsala University)
Details: Baptiste Louf & Paul Thévenin, Uppsala University - Seminarierum of Ångström, Hus 6, Rum 64119, 10:15 - 11:15 Title: Asymptotic behaviour of a factorization of fixed genus of the n-cycle
Abstract: A factorization of the n-cycle is a way of writing the cycle (1, 2, ..., n) as a product of transpositions. It is well known that the minimal number of transpositions in a factorization of the n-cycle is n-1. More generally, a factorization as a product of n-1+2g transpositions is called a factorization of genus g. We will present a bijection between the factorizations of the n-cycle and a set of graphs with n vertices, as well as an algorithm inspired by this bijection that samples an asymptotically uniform factorization of fixed genus. We will also show how this algorithm allows us to describe the scaling limit of a uniform factorization of given genus. Joint work with Valentin Féray.
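A quick sanity check of the minimality statement above, using one explicit factorization into n-1 transpositions (an illustrative script of my own; the bijection and sampling algorithm from the talk are not reproduced here):

```python
def transposition(a, b):
    """The transposition swapping a and b, as a dict (fixed points omitted)."""
    return {a: b, b: a}

def compose(perms):
    """Compose permutations left to right: perms[0] is applied first."""
    def image(x):
        for p in perms:
            x = p.get(x, x)
        return x
    support = set().union(*perms)
    return {x: image(x) for x in support}

n = 6
factorization = [transposition(1, k) for k in range(2, n + 1)]   # n - 1 transpositions
product = compose(factorization)
assert product == {k: k % n + 1 for k in range(1, n + 1)}        # the n-cycle (1 2 ... n)
print(dict(sorted(product.items())))                             # {1: 2, 2: 3, ..., 6: 1}
```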
Details: Cyril Marzouk, Ecole Polytechnique - Online, 10:15 - 11:15 Title: On the geometry of biconditioned random trees
Abstract: We consider simply generated random plane trees with n vertices and k_n leaves, sampled from a sequence of weights. Motivated by questions on random planar maps, we will focus on the asymptotic behaviour of the largest degree. Precisely, we will give conditions on both the number of leaves and the weight sequence that ensure the convergence in distribution of the associated Łukasiewicz path (or depth-first walk) to the Brownian excursion. This should also provide a first step towards the convergence of their height or contour function. The proof scheme is to reduce step by step to simpler and simpler objects, and we will discuss excursion and bridge paths, non-decreasing paths conditioned by their tip, and finally estimates of local limit theorem type which may be of independent interest.
Details: Benjamin Hackl, Uppsala University - Ångström 4006, 10:15 - 11:15
Title: Hands-On Workshop: Mathematical Animations with Manim
Abstract: Manim [1] is an open source Python framework for visualizing mathematical concepts and ideas in animated videos. Originally, the library was created by Grant "3Blue1Brown" Sanderson, whose Manim-produced YouTube videos [2] get millions of clicks and are a driving force contributing to the popularization of Mathematics. To battle the usual shortcomings of large one-person software projects (unstable interface, little to no documentation), a small community has formed that is actively maintaining, cleaning and continuously improving Manim [3]. We will explore the framework's basic functionalities by creating a series of short (but cool!) animations, and learn about further references. The talk / workshop will be interactive, you are encouraged to bring your own device and work along; all you need is a working internet connection.
[1] https://www.manim.community [2] https://www.youtube.com/3blue1brown [3] https://github.com/ManimCommunity/manim
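For a taste of what the workshop builds up from, here is a minimal scene in the style of the Manim Community documentation (assuming a recent release of the `manim` package is installed):

```python
# A minimal Manim Community scene: a square is drawn and morphed into a circle.
from manim import Scene, Square, Circle, Create, Transform

class SquareToCircle(Scene):
    def construct(self):
        square = Square()
        circle = Circle()
        self.play(Create(square))             # animate drawing the square
        self.play(Transform(square, circle))  # morph the square into the circle
        self.wait()
```

It can be rendered from a terminal with, for example, `manim -pql scene.py SquareToCircle` for a low-quality preview.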
Details: Svante Janson, Uppsala University - Online, 10:15 - 11:15 Title: Asymptotic normality for m-dependent and constrained U-statistics, with applications to pattern matching in random strings and permutations
Abstract: We study U-statistics based on a stationary sequence of m-dependent variables, and constrained U-statistics with some restrictions on the gaps between indices. A law of large numbers and a central limit theorem are extended from the standard case to this setting. The results are motivated by applications to pattern matching in random strings and permutations. We obtain both new results and new proofs of old results.
Details: Michael Missethan, Graz University of Technology - Online, 10:15 - 11:15
Title: Random planar graphs
Abstract: In this talk we consider random planar graphs with a fixed number of vertices and edges. We will discuss recent results on longest and shortest cycles, the two-point concentration of the maximum degree, and the Benjamini-Schramm local limit in such a random planar graph, and will compare them to related classical results in the Erdős-Rényi random graph. This talk is based on joint work with Mihyun Kang.
Details: Benjamin Hackl, Uppsala Universitet - Online, 10:15 - 11:15
Title: Combinatorial aspects of flip-sorting and pop-stacked permutations
Abstract: Flip-sorting describes the process of sorting a permutation by iteratively reversing ("flipping") all of its maximal descending consecutive subsequences ("falls"). The combinatorial structure induced by this procedure can be illustrated by the associated sorting tree: a graph whose vertices are permutations, and where an edge between two permutations models that one results from the other after flipping its falls.
In this talk, we will first make sure that flip-sorting is actually a sorting procedure (and hence the sorting tree is actually a tree rooted at the identity permutation), and then explore the surprisingly rich structure of the tree in order to identify which permutations are very close to or very far away from the root.
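A small sketch of the flip operation as I read the definition above (reverse every maximal descending run in one pass, then iterate until sorted); the sorting-tree structure itself is not reproduced here:

```python
def flip_falls(perm):
    """Reverse every maximal descending consecutive run ("fall") of perm once."""
    out, i, n = [], 0, len(perm)
    while i < n:
        j = i
        while j + 1 < n and perm[j] > perm[j + 1]:
            j += 1
        out.extend(reversed(perm[i:j + 1]))   # reverse the fall perm[i..j]
        i = j + 1
    return out

def flip_sort(perm):
    """Iterate the flip operation until the permutation is sorted; return the path."""
    path = [list(perm)]
    while path[-1] != sorted(perm):
        path.append(flip_falls(path[-1]))
    return path

print(flip_sort([3, 1, 2]))   # [[3, 1, 2], [1, 3, 2], [1, 2, 3]]
```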
Details: Guillem Perarnau, Universitat Politècnica de Catalunya - Online, 10:15 - 11:15
Title: Rankings in directed random graphs
Abstract: In this talk we will consider the extremal values of the stationary distribution of the sparse directed configuration model. Under the assumption of linear $(2+\eta)$-moments on the in-degrees and of bounded out-degrees, we obtain tight comparisons between the maximum value of the stationary distribution and the maximum in-degree. Under the further assumption that the order statistics of the in-degrees have power-law behavior, we show that the extremal values of the stationary distribution also have power-law behavior with the same index. Moreover, these results extend to the PageRank scores of the model, thus confirming a version of the so-called power-law hypothesis. Joint work with Xing Shi Cai, Pietro Caputo and Matteo Quattropani.
Details: Sarai Hernandez-Torres, Technion - Online, 10:15 - 11:15
Title: Chase-escape with death
Abstract: Chase-escape is a competitive growth process in which red particles spread to adjacent uncolored sites while blue particles overtake adjacent red particles. We can think of this model as rabbits escaping from wolves pursuing them on an infinite graph. There are two phases for chase-escape. If the rabbits spread fast enough, then both species coexist at all times. Otherwise, the wolves eat all the rabbits in a finite time, and we have extinction. This talk presents a variation of chase-escape where each rabbit has a random lifespan, after which it dies. This process is chase-escape with death, and we will study it on d-ary trees. Chase-escape with death exhibits a new phase where death benefits the survival of the rabbit population. We will understand the phase transitions of this process through a connection between probability and analytic combinatorics. This talk is joint work with Erin Beckman, Keisha Cook, Nicole Eikmeier, and Matthew Junge.
Details: Noela Müller, Ludwig Maximilians Universität München - Online, 10:15 - 11:15
Title: Belief Propagation on the random k-SAT model
Abstract: Corroborating a prediction from statistical physics, we prove that the Belief Propagation message passing algorithm approximates the partition function of the random k-SAT model well for all clause/variable densities and all inverse temperatures for which a modest absence of long-range correlations condition is satisfied. This condition is known as "replica symmetry" in physics language. From this result we deduce that a replica symmetry breaking phase transition occurs in the random k-SAT model at low temperature for clause/variable densities below but close to the satisfiability threshold.
This is joint work with Amin Coja-Oghlan and Jean Bernoulli Ravelomanana.
Details: Thomas Budzinski, ENS Lyon - Online, 10:15 - 11:15
Title: Universality for random maps with unconstrained genus
Abstract: We consider the random map model obtained by starting from a finite set of polygons and gluing their sides two by two uniformly at random. Several properties of this model (central limit theorem for the genus, asymptotic Poisson--Dirichlet distribution for vertex degrees) were proved by Gamburd and Chmutov--Pittel using techniques from algebraic combinatorics. I will describe new, probabilistic proofs of these results which can be used to obtain more precise information about the model. In particular, our results support the following conjecture: asymptotically, the law of the graph associated to the map should only depend on the total number of edges, and not on how they are distributed between the faces.
Based on joint work with Nicolas Curien and Bram Petri.
Details: Bram Petri, Sorbonne Université - Online, 10:15 - 11:15
Title: Random 3-manifolds with boundary
Abstract: When one glues a finite number of tetrahedra together along their faces at random, the probability that the resulting complex is a manifold tends to zero as the number of tetrahedra grows. However, the only points that pose problems are the vertices of this complex. So, if we truncate the tetrahedra at their vertices, we obtain a random manifold with boundary. I will speak about joint work with Jean Raimbault on the geometry and topology of such a random manifold. I will not assume any familiarity with 3-dimensional geometry and topology.
Details: Igor Kortchemski, École polytechnique - Online, 10:15 - 11:15
Title: Cauchy-Bienaymé-Galton-Watson
Abstract: We will be interested in the structure of large random Bienaymé-Galton-Watson trees, with critical offspring distribution belonging to the domain of attraction of a Cauchy law. We will identify a so-called condensation phenomenon, where a single vertex with macroscopic degree emerges. This is a joint work with Loïc Richier.
Details: Svante Janson, Uppsala University - Online, 10:15 - 11:15
Title: The sum of powers of subtree sizes of conditioned Galton-Watson trees
Abstract: Let $\alpha$ be a fixed number. For any tree $T$, define $$F(T) := \sum |T_v|^\alpha,$$ summing over all fringe trees of $T$. Such sums have been studied by several authors, for several models of random trees. Today I let $T$ be a conditioned Galton-Watson tree, where the critical offspring distribution has finite variance. For real $\alpha$, there are three different phases: $\alpha$ in $(-\infty,0)$, $(0,1/2)$, and $(1/2,\infty)$. We consider also complex $\alpha$, which is useful since it enables us to use properties of analytic functions in some proofs; moreover, it yields new results and problems. We use several methods, including Aldous's convergence to Brownian excursion to obtain convergence in distribution, and singularity analysis of generating functions to obtain moment asymptotics. (Joint work with Jim Fill.)
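The functional in this abstract is easy to evaluate on a concrete finite tree; a small sketch (the tree encoding and the example are my own):

```python
def fringe_subtree_sizes(children, root=0):
    """|T_v| for every vertex v, where T_v is the fringe subtree rooted at v.

    `children[v]` lists the children of v; plain recursion is fine for the small
    illustrative trees used here.
    """
    sizes = {}
    def visit(v):
        sizes[v] = 1 + sum(visit(c) for c in children[v])
        return sizes[v]
    visit(root)
    return sizes

def additive_functional(children, alpha, root=0):
    """F(T) = sum over vertices v of |T_v|**alpha, as in the abstract."""
    return sum(s ** alpha for s in fringe_subtree_sizes(children, root).values())

# A path 0 - 1 - 2 rooted at 0 has subtree sizes 3, 2, 1, so F(T) = 9 + 4 + 1 for alpha = 2.
print(additive_functional({0: [1], 1: [2], 2: []}, alpha=2))   # 14
```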
Details: Cecilia Holmgren, Uppsala University - Online, 10:15 - 11:15
Title: Split trees -- A unifying model for many important random trees of logarithmic height
Abstract: Split trees were introduced by Devroye (1998) as a novel approach for unifying many important random trees of logarithmic height. They are interesting not least because of their usefulness as models of sorting algorithms in computer science; for instance the well-known Quicksort algorithm (introduced by Hoare [1960]) can be depicted as a binary search tree (which is one example of a split tree). In 2012, I introduced renewal theory as a novel approach for studying split trees*. This approach has proved to be highly useful for investigating such trees and has allowed us to show several general results valid for all split trees. In my presentation, I will introduce split trees and illustrate some of our results for this large class of random trees, e.g. on the size, total path length, number of cuttings and number of inversions as well as on the size of the giant component after bond percolation.
* Holmgren C., Novel characteristic of split trees by use of renewal theory. Electronic Journal of Probability 2012.
Details: Stephan Wagner, Uppsala University - Online, 10:15 - 11:15
Title: The mean subtree order of trees
Abstract: By a subtree of a tree T, we mean any nonempty induced subgraph that is connected and thus again a tree. The study of the average number of vertices in a subtree, which is called the mean subtree order, goes back to Jamison's work in the 1980s. His first paper on the topic concludes with six open problems. The first of these was resolved in 2010, and over the past decade, further progress was made so that only one of them remains open today. In my talk, I will mainly focus on recent joint work with Stijn Cambie and Hua Wang on the elusive remaining conjecture, which states that for every number of vertices, the tree with greatest mean subtree order must be a caterpillar.
Presentation of the master thesis of Bernat Sopena Gilboy
Details: Uppsala University - Online, 10:15 - 11:15
Abstract: The counting problem (i.e., given a combinatorial class, how many objects of size n are there?) is introduced. In the first part, an overview of combinatorial classes, generating functions, the symbolic method, and results on coefficient asymptotics is presented. Examples are given, and a general counting problem with a functional equation of the type y = F(x,y) is solved to provide context for the methods. The rest of the talk is spent on solving the counting problem for simple labelled planar graphs (Giménez & Noy, 2009). To this end we review results obtained on 3-connected planar maps (Mullin & Schellenberg, 1968) and labelled 2-connected planar graphs (Walsh 1982; Bender, Gao & Wormald 2002).
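To make the functional-equation setup concrete, here is a coefficient-extraction sketch for one hypothetical instance, F(x, y) = 1 + x·y² (plane binary trees, whose coefficients are the Catalan numbers); the thesis itself treats the much harder planar-graph equations:

```python
import sympy as sp

x = sp.symbols('x')
N = 8                       # number of coefficients to extract
y = sp.Integer(1)           # iterate y <- F(x, y) = 1 + x*y**2, truncating at degree N
for _ in range(N):
    y = sp.expand(1 + x * y**2)
    y = sum(y.coeff(x, k) * x**k for k in range(N))    # keep terms of degree < N

print([y.coeff(x, k) for k in range(N)])
# Catalan numbers: [1, 1, 2, 5, 14, 42, 132, 429]
```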
Details: Eleanor Archer, Tel-Aviv University - Online, 10:15 - 11:15
Title: Random walks on decorated Galton-Watson trees
Abstract: Random trees are the building blocks for a range of probabilistic structures, including percolation clusters on the lattice and a range of statistical physics models on random planar maps. In this talk we consider a random walk on a critical "decorated" Galton-Watson tree, by which we mean that we first sample a critical Galton-Watson tree T, replace each vertex of degree n with an independent copy of a graph G(n), and then glue these inserted graphs along the tree structure of T. We will determine the random walk exponents for this decorated tree model in terms of the exponents for the underlying tree and the exponents for the inserted graphs. We will see that the model undergoes several phase transitions depending on how these exponents balance out.
Details: Benedikt Stufler, Vienna University of Technology - Online, 10:15 - 11:15
Title: Cutvertices in random planar maps
Abstract: We study the number of cutvertices in a random planar map as the number of edges tends to infinity. Interestingly, the combinatorics behind this seemingly simple problem are quite involved. This is joint work with Marc Noy and Michael Drmota.
Details: Vasiliki Velona, Universitat Pompeu Fabra and Universitat Politècnica de Catalunya - Online, 10:15 - 11:15
Title: Broadcasting on random recursive trees
Consider a random recursive tree, whose root vertex has a random bit value assigned. Every other vertex has the same bit value as its parent with probability 1 − q and the opposite value with probability q, where q is in [0, 1]. The broadcasting problem consists in estimating the value of the root bit upon observing the unlabeled tree, together with the bit value associated with every vertex. In a more difficult version of the problem, the unlabeled tree is observed but only the bit values of the leaves are observed. When the underlying tree is a uniform random recursive tree, in both variants of the problem it is possible to characterize the values of q for which the optimal reconstruction method has a probability of error bounded away from 1/2. Moreover, we find that the probability of error is bounded by a constant times q. Two simple reconstruction rules that are considered are the simple majority vote and the bit value of the centroid vertex of the tree. Most results are extended to linear preferential attachment trees as well. The results to be presented in this talk are joint work with Louigi Addario-Berry, Luc Devroye, Gábor Lugosi.
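A Monte Carlo sketch of the easier variant described above, using the simple majority vote over all observed bits (the parameter names and experiment sizes are my own choices, not those of the paper):

```python
import random

def broadcast_on_rrt(n, q, rng=random):
    """Uniform random recursive tree on vertices 0..n-1 with a broadcast bit.

    Vertex 0 (the root) gets a fair random bit; each vertex i >= 1 attaches to a
    uniformly chosen earlier vertex and copies its parent's bit, flipped with
    probability q.
    """
    parent = [None] * n
    bits = [rng.randint(0, 1)] + [0] * (n - 1)
    for i in range(1, n):
        parent[i] = rng.randrange(i)
        bits[i] = bits[parent[i]] ^ int(rng.random() < q)
    return parent, bits

def majority_estimate(bits, rng=random):
    """Guess the root bit by simple majority vote (fair coin on ties)."""
    ones = sum(bits)
    if 2 * ones == len(bits):
        return rng.randint(0, 1)
    return int(2 * ones > len(bits))

# Empirical error probability of the majority rule for one choice of (n, q).
trials, n, q = 2000, 1000, 0.1
errors = 0
for _ in range(trials):
    _, bits = broadcast_on_rrt(n, q)
    errors += majority_estimate(bits) != bits[0]
print(errors / trials)
```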
Title: On general subtrees of a conditioned Galton-Watson tree
Abstract: We show that the number of copies of a given rooted tree in a conditioned Galton-Watson tree satisfies a law of large numbers under a minimal moment condition on the offspring distribution. Based on arXiv:2011.04224.
Details: Fiona Skerman, Uppsala University - Online, 10:15 - 11:15
Title: Edge-sampling and modularity
Abstract: We analyse when community structure of an underlying graph can be determined from an observed subset of the graph. In a natural model, where we suppose that edges in an underlying graph G appear independently with some probability in our observed graph G', we describe how high a sampling probability we need to infer the modularity of the underlying graph. Modularity is a function on graphs which is used in algorithms for community detection. For a given graph G, each partition of the vertices has a modularity score, with higher values indicating that the partition better captures community structure in G. The (max) modularity q*(G) of the graph G is defined to be the maximum over all vertex partitions of the modularity score, and satisfies 0 ≤ q*(G) ≤ 1. In the seminar I will spend time on intuition for the behaviour of modularity, how it can be approximated, links to other graph parameters, and present some conjectures and open problems. Joint work with Colin McDiarmid.
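For intuition about the modularity score itself, it can be evaluated for any fixed partition with networkx; a minimal sketch using standard library functions (not the estimation procedure from the talk):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                 # a small benchmark graph
parts = greedy_modularity_communities(G)   # a heuristic partition (not the true optimum)
print(modularity(G, parts))                # modularity score of that partition
```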
Details: Xing Shi Cai, Uppsala University - Online, 17:15 - 18:15
Title: Minimum stationary values of sparse random directed graphs
Abstract: We consider the stationary distribution of the simple random walk on the directed configuration model with bounded degrees. Provided that the minimum out-degree is at least 2, with high probability (whp) there is a unique stationary distribution. We show that the minimum positive stationary value is whp n^{-(1+C+o(1))} for some constant C ≥ 0 determined by the degree distribution. In particular, C is the competing combination of two factors: (1) the contribution of atypically "thin" in-neighbourhoods, controlled by subcritical branching processes; and (2) the contribution of atypically "light" trajectories, controlled by large deviation rate functions. Additionally, our proof implies that whp the hitting and the cover time are both n^{1+C+o(1)}. Our results complement those of Caputo and Quattropani, who showed that if the minimum in-degree is at least 2, stationary values have logarithmic fluctuations around n^{-1}.
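As background for the quantities in this abstract, the stationary distribution of the simple random walk on a small directed graph can be approximated numerically by power iteration; a sketch on a toy graph of my own choosing (the directed configuration model itself is not sampled here):

```python
import numpy as np

def stationary_distribution(out_neighbours, iters=5000):
    """Power iteration for the stationary law of the simple random walk.

    `out_neighbours[v]` lists the out-neighbours of v; every vertex is assumed to
    have out-degree >= 1, and the walk is assumed irreducible and aperiodic so the
    iteration converges to the unique stationary distribution.
    """
    n = len(out_neighbours)
    P = np.zeros((n, n))
    for v, nbrs in enumerate(out_neighbours):
        P[v, nbrs] = 1.0 / len(nbrs)       # uniform step over out-neighbours
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()

# Toy example: a directed 3-cycle with one extra edge 2 -> 1.
print(stationary_distribution([[1], [2], [0, 1]]))   # approximately [0.2, 0.4, 0.4]
```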
Details: Sergey Dovgal, University of Bordeaux - Online, 17:15 - 18:15
Title: The birth of the strong components
Abstract: It is known that random directed graphs D(n,p) undergo a phase transition around the point p = 1/n. Moreover, the width n^{-4/3} of the transition window has been known since the works of Luczak and Seierstad. In particular, they have established that as n → ∞ when p = (1 + μn^{-1/3})/n, the asymptotic probability that the strongly connected components of a random directed graph are only cycles and single vertices decreases from 1 to 0 as μ goes from −∞ to ∞. By using techniques from analytic combinatorics, we establish the exact limiting value of this probability as a function of μ and provide more statistical insights into the structure of a random digraph around, below and above its transition point. We obtain the limiting probability that a random digraph is acyclic and the probability that it has one strongly connected complex component with a given difference between the number of edges and vertices (called excess). Our result can be extended to the case of several complex components with given excesses as well in the whole range of sparse digraphs. Our study is based on a general symbolic method which can deal with a great variety of possible digraph families, and a version of the saddle-point method which can be systematically applied to the complex contour integrals appearing from the symbolic method. While the technically easiest model is the model of random multidigraphs, in which multiple edges are allowed, and where edge multiplicities are sampled independently according to a Poisson distribution with a fixed parameter p, we also show how to systematically approach the family of simple digraphs, where multiple edges are forbidden, and where 2-cycles are either allowed or not. Our theoretical predictions are supported by numerical simulations when the number of vertices is finite, and we provide tables of numerical values for the integrals of Airy functions that appear in this study. Joint work with Élie de Panafieu, Dimbinaina Ralaivaosaona, Vonjy Rasendrahasina, and Stephan Wagner.
Details: Clément Requilé, Uppsala University, Ångström 4006, 17:15 - 18:15
Abstract: A graph is labelled when its vertex set is {1,...,n}, and planar when it admits an embedding on the sphere. A random (labelled) planar graph is a graph chosen uniformly among all planar graphs on n vertices. We would like to study its properties as n goes to infinity. However, the planarity constraint makes it difficult to mimic the classical methods used in the study of the Erdős-Rényi random graph. An alternative is to rely on asymptotic enumeration via generating functions and analytic combinatorics. The starting point is a decomposition of graphs according to their connected components developed by Tutte in the 60's to study planar maps (fixed embeddings of planar graphs), and which can be extended to encode parameters of interest. In this talk I will present several results about some families of planar graphs, in particular in the cubic (3-regular), 4-regular, and bipartite cases. We will discuss the behaviour of various parameters in the random setting and explain how some of them can be encoded via the Ising and Potts models. If time permits, I will also try to highlight some limitations of this method and where can a more probabilistic viewpoint hopefully help.
Details: Paul Thévenin, Uppsala University, Ångström 4006, 17:15 - 18:15
Title: Random trees, laminations and factorizations
Abstract: A minimal factorization of the n-cycle is a way of seeing the cycle (1 2 3 ... n) (sending 1 to 2, 2 to 3, ..., n to 1) as a product of (n-1) transpositions. By coding each of these transpositions by a chord in the unit disk, one sees a uniform minimal factorization as a random process of laminations, which are sets of noncrossing chords in the disk. In this talk, I will discuss the convergence of this process, and highlight various connections between this model, a family of random trees and fragmentation processes. If time allows, I will also present some possible generalizations of these results to other models of factorizations.
Details: Baptiste Louf, Uppsala University, Ångström 4006, 17:15 - 18:15
Title: The geometry of high genus maps
Abstract: A map is the result of polygons glued together to form a (compact, connected, oriented) surface. Alternatively, one can think of it as a graph embedded in a surface. Just like graphs, maps are a good model of discrete geometry, and it can be interesting to study their properties, especially when considering random maps whose size goes to infinity. In this talk I will present some results about high genus maps. The genus of a map is the number of handles of the surface it lives on (for instance, a sphere has genus 0 and a torus has genus 1), and high genus maps are defined as (sequences of) uniform random maps whose size and genus go to infinity at the same time. There won't be any proof or other technical details, but I will present a bunch of open problems and conjectures.
The talks from 2020-04-02 to 2020-05-28 were cancelled due to the Covid-19 pandemic.
Details: Alexander Watson, University College London, Ångström 64119, 10:15 - 11:15 am
Details: Benjamin Dadoun, University of Bath, Ångström 64119, 10:15 - 11:15 am
Details: Gerardo Barrera Vargas, University of Helsinki, Ångström 64119, 10:15 - 11:15 am
Details: Quan Shi, Universität Mannheim, Ångström 64119, 10:15 - 11:15 am
Details: Robin Stephenson, University of Sheffield, Ångström 64119, 10:15 - 11:15 am
Details: Igor Pak, University of California, UCLA, Ångström 64119, 10:15 - 11:15 am
Title: Sampling Contingency Tables
Abstract: Contingency tables are integer matrices with fixed row and column sums. Sampling them efficiently is a notoriously challenging problem in both theory and practice, of great interest in both theoretical and real-world statistics. Roughly speaking, random sampling of contingency tables allows one to measure the empirical correlation between discrete random variables, always a good thing to have.
I will first give a brief overview of the existing approaches (Fisher-Yates sampling, sequential sampling, the Diaconis-Gangolli MCMC algorithm and the algebraic statistic tools). I will then describe a new MCMC sampling algorithm based on combinatorial and group theoretic ideas. Many examples will follow which will illustrate the surprising power of our algorithm both in two and higher dimensions. If time permits, I will mention the theory behind our work and some potential generalizations we are thinking about.
Joint work with Sam Dittmer.
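For reference, the Diaconis-Gangolli chain mentioned in the overview perturbs a random 2×2 submatrix while preserving the margins; a minimal sketch of that classical move (the new algorithm from the talk is not reproduced here):

```python
import random
import numpy as np

def diaconis_gangolli_step(T, rng=random):
    """One move of the classic +/-1 swap chain on contingency tables.

    Pick two distinct rows and two distinct columns and add +1/-1 in a 2x2
    checkerboard pattern; this preserves all row and column sums.  The move is
    rejected if it would create a negative entry.
    """
    r1, r2 = rng.sample(range(T.shape[0]), 2)
    c1, c2 = rng.sample(range(T.shape[1]), 2)
    sign = rng.choice((1, -1))
    delta = np.zeros_like(T)
    delta[r1, c1] = delta[r2, c2] = sign
    delta[r1, c2] = delta[r2, c1] = -sign
    return T + delta if (T + delta).min() >= 0 else T

T = np.array([[2, 1], [0, 3]])
for _ in range(1000):
    T = diaconis_gangolli_step(T)
print(T, T.sum(axis=1), T.sum(axis=0))   # row sums [3, 3] and column sums [2, 4] are preserved
```

Every accepted move leaves the row and column sums invariant, which is exactly the defining constraint of a contingency table.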
Details: Stephan Wagner, Uppsala University, Ångström 64119, 10:15 - 11:15 am
Title: On the Collection of Fringe Subtrees in Random Binary Trees
Abstract: A fringe subtree of a rooted tree is a subtree consisting of one of the nodes and all its descendants. In this talk, we are specifically interested in the number of non-isomorphic trees that appear in the collection of all fringe subtrees of a binary tree. This number is analysed under two different random models: uniformly random binary trees and random binary search trees.
Details: Svante Janson, Uppsala University, Ångström 64119, 10:15 - 11:15 am
Title: Central limit theorems for additive functionals and fringe trees in tries
Abstract: We prove central limit theorems for additive functionals of tries, under suitable conditions. Several methods are used and combined; these include:
Poissonization (introducing more independence);
approximation with a sum of independent terms (coming from disjoint subtrees);
dePoissonization using a conditional limit theorem;
moment asymptotics by renewal theory.
As examples, we consider some properties of fringe trees.
Details: Ilse Fischer, Universität Wien, Ångström 64119, 10:15 - 11:15 am
Title: Bijective proofs of skew Schur polynomial factorizations
Abstract: Schur polynomials and their generalizations appear in various different contexts. They are the irreducible characters of polynomial representations of the general linear group and an important basis of the space of symmetric functions. They are accessible from a combinatorial point of view as they are multivariate generating functions of semistandard tableaux associated with a fixed integer partition. Recently, Ayyer and Behrend discovered, for a wide class of partitions, factorizations of Schur polynomials with an even number of variables where half of the variables are the reciprocals of the others into symplectic and/or orthogonal group characters, thereby generalizing results of Ciucu and Krattenthaler for rectangular shapes. We present bijective proofs of such identities. Our proofs involve also what we call a "randomized" bijection. No prior knowledge on group characters and Schur polynomials is necessary. Joint work with Arvind Ayyer.
Details: Tony Johansson, Stocholms Universitet, Ångström 64119, 10:15 - 11:15 am
Title: Finding Hamilton cycles in fixed-degree random graphs
Abstract: The fixed degree sequence random graph is obtained by fixing a sequence of n integers, then drawing a graph on n vertices uniformly at random from the set of graphs with the prescribed sequence as its degree sequence. We consider the problem of finding a Hamilton cycle in this graph.
If all degrees are equal to d we obtain the random regular graph, known to be Hamiltonian with high probability when d is at least 3. Otherwise not much is known; what if half the degrees are 3 and half are 4? Half 3 and half 1000? It is easy to come up with degree sequences which do not give Hamilton cycles, and we want to be able to determine which ones do and which ones don't.
I don't fully solve the problem, but I derive a graph condition which is essentially necessary and sufficient for Hamilton cycles in this class of random graphs. It remains open to determine for which degree sequences this condition holds.
Details: Pasha Tkachov, Gran Sasso Science Institute, Ångström 64119, 10:15 - 11:15 am
Title: On stability of traveling wave solutions for integro-differential equations related to branching Markov processes
Abstract: The aim of the talk is to present results on stability of traveling waves for integro-differential equations connected with branching Markov processes. In other words, the limiting law of the left-most particle of a time-continuous branching Markov process with a Lévy non-branching part is shown. In particular, Bramson's correction is obtained. The key idea is to approximate the branching Markov process by a branching random walk and apply the result of Aïdékon on the limiting law of the latter one.
Details: Laura Eslava, National Autonomous University of Mexico, Ångström 64119, 10:15 - 11:15 am
Title: Branching processes with merges and locality of hypercube's critical percolation
Abstract: We define a branching process to understand the locality of the critical percolation in the hypercube; that is, whether the local structure of the hypercube has an effect on the critical percolation as a function of the dimension of the hypercube. The branching process mimics the local behavior of an exploration of a percolated hypercube; it is defined recursively as follows. Start with a single individual in generation 0. In the first stage, each individual has independent Poisson offspring with mean (1+p)(1-q)^k, where k depends on the ancestry of the individual; in the merger stage, each pair of cousins merges with probability q.
There is a threshold function q_c=q_c(p) for extinction of the branching process. When p is sufficiently small, the first order terms of q_c coincide with those of the critical percolation for the hypercube, suggesting that percolation in the hypercube is dictated by its local structure. This is work in progress with Sarah Penington and Fiona Skerman.
Pharm Exam 4
CG482Plus
A patient is prescribed insulin glargine [Lantus]. Which statement should the nurse include in the discharge instructions?
A. The insulin will have a cloudy appearance in the vial.
B. The insulin should be administered once daily at bedtime.
C. The patient should mix Lantus with the intermediate-acting insulin.
D. The patient will have less risk of hypoglycemic reactions with this insulin.
Glargine insulin is indicated for once-daily subcutaneous administration to treat adults and children with type 1 diabetes and adults with type 2 diabetes. According to the package labeling, the once-daily injection should be given at bedtime. Glargine insulin should not be given more than once a day, although some patients require this bedtime dosing to achieve a full 24 hours of basal coverage.
2. A patient is prescribed NPH insulin. Which statement should the nurse include in the discharge instructions?
B. The onset of action is rapid.
C. The patient should not mix Lantus with short-acting insulin.
D. The patient will have no risk of allergic reactions with this insulin.
NPH insulins are supplied as cloudy suspensions. The onset of action of NPH insulin is delayed, and the duration of action is extended. NPH insulin is the only one suitable for mixing with short-acting insulins. Allergic reactions are possible with NPH insulins.
A patient is prescribed metformin. Which statement about metformin does the nurse identify as true?
A. Metformin increases absorption of vitamin B12.
B. Metformin can delay the development of type 2 diabetes in high-risk individuals.
C. Metformin causes patients to gain weight.
D. Metformin use predisposes patients to alkalosis.
Metformin can indeed delay the development of type 2 diabetes in high-risk individuals. Metformin decreases absorption of vitamin B12 and folic acid and thereby can cause deficiencies of both. Metformin is considered a "weight-neutral" antidiabetic drug, in contrast with several other antidiabetic drugs that tend to increase weight ("weight-positive"). Metformin and other biguanides inhibit mitochondrial oxidation of lactic acid and can thereby cause lactic acidosis, not alkalosis.
The nurse instructs a patient about taking levothyroxine [Synthroid]. Which statement by the patient indicates the teaching has been effective?
A. "To prevent an upset stomach, I will take the drug with food."
B. "If I have chest pain or insomnia, I should call my doctor."
C. "This medication can be taken with an antacid."
D. "The drug should be taken before I go to bed at night."
Levothyroxine overdose may produce the following symptoms: tachycardia, angina, tremor, nervousness, insomnia, hyperthermia, heat intolerance, and sweating; the patient should contact the prescriber if these symptoms are noted. Levothyroxine should be taken in the morning on an empty stomach 30 minutes before a meal. Levothyroxine should not be taken with antacids, which reduce the absorption of levothyroxine.
5. A patient with hyperthyroidism is taking propylthiouracil (PTU). It is most important for the nurse to assess the patient for which adverse effects?
A. Gingival hyperplasia and dysphagia
B. Dyspnea and a dry cough
C. Blurred vision and nystagmus
D. Fever and sore throat
Fever and sore throat are signs of infection and are concerning for agranulocytosis, a serious condition characterized by a dramatic reduction in circulating granulocytes, a type of WBC needed to fight infection.
A patient takes levothyroxine [Synthroid] 0.75 mcg every day. It is most appropriate for the nurse to monitor which laboratory test to determine whether a dose adjustment is needed?
A. Thyrotropin-releasing hormone (TRH)
B. Thyroid-stimulating hormone (TSH)
C. Serum free T4 test
D. Serum iodine level
Serum thyroid-stimulating hormone (TSH) is the preferred laboratory test for monitoring replacement therapy in patients with hypothyroidism.
The nurse teaches a group of postmenopausal women about hormone therapy (HT). Which information should the nurse include in the teaching plan?
A. The most frequent adverse effect of HT is headache.
B. HT increases the risk of stroke and venous thromboembolism.
C. Blood levels of estrogen are more consistent with oral HT.
D. HT may cause a harmless yellow discoloration of the skin.
In postmenopausal women, estrogen, used alone or combined with a progestin, increases the risk of venous thromboembolism (VTE) and stroke. Nausea is the most frequent undesired response to the estrogens. Compared with oral formulations, the transdermal formulations cause fewer fluctuations of estrogen in the blood. Estrogens have been associated with jaundice (yellow discoloration of the skin), which may develop in women with preexisting liver dysfunction.
8. A patient is taking estrogen daily. Which instruction by the nurse should be included to reduce the risk of a cardiovascular event, such as stroke or myocardial infarction?
A. Reduce aerobic activities.
B. Increase dietary intake of trans fat.
C. Stop smoking.
D. Take the medication with food.
C. Stop smoking
To reduce cardiovascular risk, patients should avoid smoking, perform regular exercise, and reduce their intake of saturated fats. Although taking estrogen with meals decreases nausea, this intervention does not reduce the cardiovascular risk.
The nurse identifies which female patient has the least risk for developing complications when hormone therapy is prescribed?
A. A 45-year-old patient who takes estrogen after a hysterectomy
B. A 55-year-old patient who takes estrogen combined with progestin
C. A 58-year-old patient with osteopenia who takes hormone therapy
D. A 60-year-old patient with a family history of breast cancer
For women younger than 60 years who have undergone hysterectomy, hormone therapy may be safer than for any other group; women who no longer have a uterus are treated with estrogen alone, which is somewhat safer than estrogen combined with a progestin. The risks of estrogen therapy are lower for younger women than for older women. Specifically, compared with older women, younger women have a lower risk of estrogen-induced breast cancer.
Which patient would be at greatest risk of developing a venous thromboembolism (VTE) if a combination oral contraceptive were prescribed?
A. A 25-year-old patient who drinks 3 to 4 alcoholic drinks a day
B. A 45-year-old patient who has a family history of stroke
C. A 22-year-old patient who smokes 2 packs of cigarettes a day
D. A 29-year-old patient who has used birth control pills for 9 years
Major factors that increase the risk of thromboembolism for women who take combination oral contraceptives are heavy smoking, a history of thromboembolism, and thrombophilias (genetic disorders that predispose to thrombosis). Additional risk factors include diabetes, hypertension, cerebral vascular disease, coronary artery disease, and surgery in which immobilization increases the risk of postoperative thrombosis
A patient contacts a clinic nurse to determine the proper action after she forgot to take her oral contraceptive [Ortho Tri-Cyclen] for the past 2 days during the first week of a 28-day regimen. Which response by the nurse is most appropriate?
A. "Take the omitted two doses together with the next dose."
B. "Take two doses per day on the following 2 days."
C. "Stop taking the oral contraceptive until menstruation occurs."
D. "Take a dose now and continue with the scheduled doses."
Ortho Tri-Cyclen is a combination of estrogen and progestin (on a 28-day-cycle). If 1 or more pills are missed in the first week, the patient should be advised to take 1 pill as soon as possible and then continue with the pack; the patient should also be instructed to use an additional form of contraception for 7 days.
The nurse teaches a patient about a Progesterone only pill, Camila. Which statement by the patient requires an intervention by the nurse?
A. "I might have irregular bleeding while taking this pill."
B. "These pills do not usually cause blood clots."
C. "I should take this pill at the same time every day."
D. "This pill works primarily by preventing ovulation."
Camila is a progestin-only contraceptive; contraceptive effects of Camila result largely from altering cervical glands to produce a thick, sticky mucus that acts as a barrier to penetration by sperm. Progestins also modify the endometrium, making it less favorable for nidation. Compared with combination oral contraceptives, Camila is a weak inhibitor of ovulation; therefore, this mechanism contributes little to their effects. Camila does not contain estrogen and will not cause thromboembolic disorders. Camila is more likely to cause irregular bleeding than combination oral contraceptives. Camila is taken every day and should be taken at the same time each day.
The nurse instructs a patient in the use of combination oral contraceptives for birth control. The nurse determines that teaching is successful if the patient makes which statement?
A. "I'll avoid herbal products such as St. John's wort."
B. "Birth control pills don't have serious side effects."
C. "I can continue taking birth control before elective surgeries."
D. "I should take the pill with food to prevent an upset stomach."
Products that induce hepatic cytochrome P450 (for example, St. John's wort) can accelerate oral contraceptive (OC) metabolism and can thereby reduce OC effects. Combination OCs have several adverse effects (for example, thrombotic disorders, hypertension, abnormal uterine bleeding, glucose intolerance, stroke, hyperkalemia). Women who are scheduled for elective surgeries that will result in immobilization (and increased risk of thrombosis) should stop OCs before surgery. OCs do not cause gastrointestinal upset. OCs should be taken at the same time every day.
A nurse is providing medication teaching to a 12-year-old male patient with hypogonadism. Which statement, made by the patient, indicates an understanding of the prescribed medication, testosterone enanthate?
A. "I will grow significantly taller while taking this medication."
B. "Sexual changes in my body will occur within 4 to 6 months."
C. "I will come to the clinic every 2 weeks for shots of testosterone."
D. "If the medication causes stomach upset, I can take it with food."
Testosterone enanthate is administered intramuscularly (IM) as a long-acting testosterone and is administered every 2 to 4 weeks for 3 to 4 years. Height may be stunted because of accelerated bone maturation. Sexual changes will develop slowly over a period of years.
A male patient is prescribed a topical testosterone gel [AndroGel]. It is most appropriate for the nurse to teach the patient to do what?
A. Apply the gel to the genital area every morning.
B. Leave the patch in contact with the skin for 24 hours.
C. Avoid showering or swimming after gel application.
D. Keep the treated area covered with clothing.
AndroGel should be applied once daily to the shoulders, upper arms, or abdomen, but not the genitalia. AndroGel is not a patch, but rather a gel that is rubbed into the skin. Showering or swimming is allowed 5 to 6 hours after application. The treated area should be covered with clothing.
A patient has been prescribed sildenafil [Viagra] for erectile dysfunction. Which instruction should the nurse include in the teaching plan?
A. Take the medication on an empty stomach.
B. Drink plenty of fluids to prevent priapism.
C. Avoid taking nitroglycerin with this drug.
D. Constipation is a common adverse effect
Taking nitrates with sildenafil may result in severe hypotension. Sildenafil can be taken with or without food. Patients who experience priapism (an erection lasting longer than 4 hours) should contact their health care provider immediately. Constipation is not a common adverse effect of sildenafil.
A patient is taking finasteride [Proscar] for benign prostatic hyperplasia (BPH). The nurse should explain that this medication has what effect?
A. Decreases the size of the prostate gland.
B. Relaxes smooth muscle of the prostate gland.
C. Reduces the risk of prostate cancer.
D. Improves sexual performance during intercourse.
Finasteride [Proscar] promotes the regression of prostate epithelial tissue and decreases the size of the mechanical obstruction.
A patient who takes over-the-counter diphenhydramine [Benadryl] for seasonal allergy symptoms complains of drowsiness. What should the nurse do?
A. Instruct the patient to drink caffeinated beverages.
B. Recommend taking the medication with meals.
C. Ask the patient's healthcare provider to prescribe hydroxyzine [Vistaril].
D. Tell the patient to take cetirizine [Zyrtec] instead of diphenhydramine.
Second-generation antihistamines, such as cetirizine, cross the blood-brain barrier poorly and hence produce much less sedation than first-generation antihistamines.
Which statement regarding antihistamine administration to older adults does the nurse identify as true?
A. Antihistamines cause CNS excitation in older adults.
B. Larger doses of antihistamines are needed for older adults.
C. Antihistamines can be used to reduce intraocular pressure.
D. Older men with benign prostatic hypertrophy can experience worse symptoms when taking antihistamines.
When used in older adults, antihistamines can cause sedation; smaller doses should be used initially and titrated up if needed. Also, these medications can worsen glaucoma or benign prostatic hyperplasia.
A patient with asthma is prescribed albuterol [Proventil], 2 puffs every 4 hours as needed. The nurse should teach the patient to do what?
A. Rinse the mouth after taking the prescribed dose.
B. Take an extra dose if breathing is compromised.
C. Wait 1 minute between puffs from the inhaler.
D. Take adequate amounts of calcium and vitamin D.
Patients should be taught to wait at least 1 minute between puffs. Extra doses should not be taken unless prescribed by the health care provider. Glucocorticoid inhalation requires oral rinses to prevent the development of dysphonia and oropharyngeal candidiasis. Patients should take adequate amounts of calcium and vitamin D with glucocorticoid therapy.
Which information should the nurse include when teaching a patient about inhaled glucocorticoids?
A. Inhaled glucocorticoids have many significant adverse effects.
B. The principal side effects of inhaled glucocorticoids include hypertension and weight gain.
C. Use of a spacer can minimize side effects.
D. Patients should rinse the mouth and gargle before administering inhaled glucocorticoids.
Inhaled glucocorticoids are generally very safe. Their principal side effects are oropharyngeal candidiasis and dysphonia, which can be minimized by using a spacer device during administration and by rinsing the mouth and gargling after use.
Which of the following is NOT a serious adverse effect of long-term oral glucocorticoid therapy?
A. Adrenal suppression
B. Osteoporosis
C. Hypoglycemia
D. Peptic ulcer disease
Serious adverse effects include adrenal suppression, osteoporosis, hyperglycemia, peptic ulcer disease, and growth suppression.
A patient asks what medication would be most effective in the treatment of seasonal hay fever. The nurse will teach the patient about the use of which drug?
A. Azelastine [Astelin]
B. Chlorpheniramine [Chlor-Trimeton]
C. Fluticasone [Flonase]
D. Pseudoephedrine [Sudafed]
C. Fluticasone [Flonase]
Glucocorticoids (fluticasone [Flonase]) are the most effective agents used to treat allergic rhinitis.
A patient is prescribed codeine as an antitussive. Which symptom will the nurse observe for as an adverse effect of this medication?
A. Respiratory depression
B. Increased heart rate
C. Productive cough
D. Restlessness
Codeine is an opioid that can suppress respiration.
The physician now orders Michael an inhaled glucocorticoid with the inhaled long acting beta2 agonist. Which statement by Michael indicates understanding of his medication regimen?
a. "The glucocorticoid is used as prophylaxis to prevent exacerbations."
b. "The beta2-adrenergic agonist suppresses the synthesis of inflammatory mediators."
c. "I should use the glucocorticoid as needed when symptoms flare."
d. "I will need to use the beta2-adrenergic agonist drug daily."
The nurse auscultates Michael's lungs and hears bilateral wheezes. His O2 saturation is 90% and his respirations are 30. "I just can't catch my breath," Michael says.
The nurse will prepare to administer:
A.SABA
B.LABA
C.Glucocorticoid
D.Antihistamine
The physician orders Michael a metered-dose inhaler of albuterol, two puffs daily as needed for asthma. It is important for the nurse to teach Michael that:
a. He should wait 1 minute between puffs.
b. He should activate the device and then inhale.
c. He should store the MDI in the refrigerator between doses.
d. He should inhale suddenly to receive the maximum dose.
Michael successfully takes the albuterol inhaler and notices a hand tremor that goes away. The nurse educates Michael that this is an expected side effect. What else should the nurse monitor for following albuterol administration? (Select all that apply.)
A.Heart rate
B.Respiratory rate
C.Capillary Refill
D.Oxygen saturation
The physician also orders Michael a long acting beta 2 agonist medication. What will the nurse tell Michael?
a. LABAs reduce the risk of asthma-related deaths.
b. LABAs should be combined with an inhaled glucocorticoid.
c. LABAs can be used on an as-needed basis to treat symptoms.
d. LABAs are safer than short-acting beta2 agonists.
Michael starts taking a glucocorticoid medication with a metered-dose inhaler (MDI). The nurse should give him which instruction about correct use of the inhaler?
a. "After you inhale the medication once, repeat until you obtain symptomatic relief."
b. "Wait no longer than 30 seconds after the first puff before taking the second one."
c. "Breathe in through the nose and hold for 2 seconds just before activating the inhaler."
d. "Use a spacer with the inhaler and rinse your mouth after each dose administration."
After two weeks of using his inhaled glucocorticoid Michael calls his PCP's office and reports hoarseness. What will the nurse do?
a. Tell the patient to discontinue use of the glucocorticoid.
b. Request an order for an antifungal medication.
c. Suggest that the patient be tested for a bronchial infection.
d. Ask whether the patient is rinsing the mouth after each dose.
'Serial' and Changes of Heart
During this final week of the podcast Serial, Sarah Koenig explains (unsurprisingly) that throughout her journalistic investigation of the murder of Hae Min Lee she has changed her opinion many times about Adnan's guilt or innocence:
Several times, I have landed on a decision, I've made up my mind and stayed there, with relief and then inevitably, I learn something I didn't know before and I'm up ended. Sometimes the reversal takes a few weeks, sometimes it happens within hours. And what's been astonishing to me is how the back and forth hasn't let up, after all of this time. Even into this very week and I kid you not, into this very day that I'm writing this.
Given the transparent method by which Koenig has shared large chunks as well as scraps of information pertaining to the case week by week, we, the listeners, have similarly been able to shift our opinions/beliefs/doubts about Adnan's guilt as time has passed. Unlike in the case of a conventional television crime drama, there is no formulaic ending – no revealing of a killer who had been hiding in plain sight during the entire 40-something minutes of predictively-paced intrigue. Uncertainty – not Adnan, Hae, or Jay – is the key player whose perpetual presence defines our experience with Serial. And given the overarching, dominant role that uncertainty has played in the "Serial phenomenon," I wondered, after finishing the final episode, how opinions had been shifting over the course of the podcast… Was this uncertainty – the uncertainty that I had heard in coworkers' debates, read about in think pieces, and fought to accept in the cluttered, MailChimp-ad-filled corners of my mind – evident in the numbers somewhere?
The podcast is weekly, meaning there is time between each episode's release to ponder, debate, maybe even cast a vote on your opinions…? In fact, yes, there is aggregate-level data with respect to public opinion on Adnan's guilt (what is the percentage of people that think Adnan is guilty? innocent? what percentage is undecided?) thanks to the dedicated Serial coverage by /r/serialpodcast (note for the less media savvy, more mentally healthy among us: /r/serialpodcast is a subreddit, a page on Reddit, dedicated to discussion of the podcast). After the release of episode 6, users on the sub started creating weekly polls in order to keep track of listeners' wavering opinions. Every Thursday, starting with October 30th (the date of release for episode 6), a poll was accessible on Reddit. People would vote on the poll until the next week when the poll would close just before the next episode became available. The poll opening and closing times ensured that no information from later episodes $ e \in \{X+z | z \in \mathbb{Z}^+\}$ would influence listeners' opinions for a given poll meant to illustrate opinions in the aftermath of episode $X$. Thus, percentages from the polls accurately reflect where listeners stand after a given episode's recent reveals!
Going a step further, one could argue that since the voter base for these polls (/r/serialpodcast subscribers) are loyal repeat voters, the percentages associated with each subsequent episode less the corresponding percentages from the previous episode illustrate the differential effect of that very episode. This seems like a logical conclusion since listeners are adjusting their evolving opinions based on new information in the most recent podcast. Therefore, by looking into the changes in aggregate opinion between the airing of episodes 7-12 (we don't know how episode 6 changed the public's opinion since we don't have data before that episode's release), we can see the effect that episodes 7-12 had on the crowd's collective opinion.
Less talk, more graphs
In order to visualize the impact of this range of episodes, I graphed public opinion on Adnan's guilt following the release of episodes 6-12:
This graph depicts public opinion on Adnan's guilt (in terms of percentages who believe he is guilty, innocent, and the percentage of those who are undecided) over the course of the release of Serial episodes 6-12.
There are many interesting things to note about this progression. First off, the percentage of individuals who believe Adnan is innocent ends on a high note after the finale, What We Know, of the podcast – 54% of voters believe Adnan is innocent. This percentage is exactly three times (!) that following the release of episode 8, The Deal With Jay. Furthermore, it is clear that after Episode 9, To Be Suspected, there are no more aggressive changes to public opinion. Instead, all three stances seem to move steadily – very steadily when compared to the changes brought about in consequence to the release of episodes 7-9.
Turning to episodes 7-9, are the movements in opinion due to said episodes logical given the episodes' substance? I believe so. Listeners are also potentially mimicking, without realizing it, Koenig's own state of mind in the episodes. Episode 7, The Opposite of the Prosecution, causes a dip in the guilty percentage (of 10 percentage points) and a jump in the not guilty percentage (of 19 percentage points) – a consequence that is predictable just given the name of the episode. However, episode 8 undoes all the hard work episode 7 did for Adnan's case in the eyes of the public. The not guilty percentage drops down to post episode 6 levels at 18%, while the guilty percentage is above post episode 6 levels at 42%. The largest of all the weekly changes of heart comes with the release of episode 9, To Be Suspected, which highlights Adnan's calm demeanor during his time in prison. The guilty camp goes from containing 42% of the voters to just 17% while the not guilty camp goes from 18% to 44%. In the graph, this change almost creates a perfectly symmetrical "X" with the guilty and not guilty lines. It is also in this episode that Adnan makes an emotional appeal to Koenig saying that his parents would be happier if they thought he deserved to be in prison – therefore saying that if he were lying about his innocence he would be bringing pain to his parents – something that Adnan, the same person with the funny anecdote about T-mobile customer service behind bars, wouldn't do.
Another interesting element to note in this analysis is that, over our available time period, the percentage of people who vote as undecided has declined or remained the same every week. This potentially illustrates that despite the fact that more and more scraps, facts, and individuals were added into the mix throughout the progression of the podcast, the aggregate group of voters did feel more certain in their convictions – to the point of no longer checking the "undecided" option. However, this result could also be an artifact of something I felt myself when answering the poll near the end of the podcast – I wanted to vote one way or the other because I felt increasingly useless to the polling exercise by voting undecided repeatedly. Perhaps with the end of the podcast nearing, individuals wanted to be able to make a decision and stick to it, regardless of the constant insecurity in their beliefs.
After looking into how each episode affected aggregate opinions, I wondered if this could differ between the subgroups that reddit users included in their polls – specifically, those with legal training and those without legal training.
This graph depicts the opinion of those with legal training on Adnan's guilt (in terms of percentages who believe he is guilty, innocent, and the percentage of those who are undecided) over the course of the release of Serial episodes 6-12.
This graph depicts the opinion of those without legal training on Adnan's guilt (in terms of percentages who believe he is guilty, innocent, and the percentage of those who are undecided) over the course of the release of Serial episodes 6-12.
It is immediately evident that the percentages in these two graphs are very similar once episode 8 has aired. However, there is a large and obvious difference in how the two groups respond to episode 7, The Opposite of the Prosecution. For those without legal training, it bumped up the numbers for not guilty by 21 percentage points and pushed down the numbers for guilty by 13 percentage points…but, for those with legal training, it bumped up the numbers for not guilty by just 5 percentage points and even pushed up the numbers for guilty by 6 percentage points.
An easier way to visualize and understand the differences between the two divergent responses to episode 7 is by ignoring the undecided percentages in order to create an "innocence index" of sorts. This quasi-index is equal to the percentage of voters who vote that Adnan is not guilty minus the percentage of voters who vote that Adnan is guilty. The index doesn't have any meaning other than the differential between perceived innocence and perceived guilt according to the crowd of voters.
This graph depicts the constructed innocence indices between those with and without legal training over the course of the release of Serial episodes 6-12.
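To make the construction concrete, here is a minimal Python sketch of the index calculation (the original analysis was done in R, and the function name is mine); the two rows are the overall poll percentages quoted earlier for the weeks after episodes 8 and 9.

```python
def innocence_index(not_guilty_pct, guilty_pct):
    """Innocence index: percentage voting not guilty minus percentage voting guilty.
    Undecided voters drop out of the calculation by construction."""
    return not_guilty_pct - guilty_pct

# Overall poll percentages quoted earlier in the post: (not guilty, guilty)
polls = {"after episode 8": (18, 42), "after episode 9": (44, 17)}

for week, (not_guilty, guilty) in polls.items():
    print(week, innocence_index(not_guilty, guilty))
# after episode 8 -> -24, after episode 9 -> 27: a 51-point swing in a single week
```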
Since we don't have information from before the release of episode 6, we can't speak to the differential nature of episode 6 (it could have been that the two groups were divergent in a similar way before that episode and, therefore, episode 6 had little or no effect on aggregate opinion), however, in the case of episodes 7-12, it is very clear that the paths of the two subgroups are extremely similar except for in the aftermath of episode 7. For those with legal training, the innocence index doesn't move substantially, it actually goes down one point, meanwhile the index increases by 34 points for those without legal training.
Perhaps this drastically different response is because of the fact that episodes 6 and 7 deal with the case against and the case for the innocence of Adnan. It could be that those with legal training are aware of the potential brutal nature of the case that could be made against Adnan as well as the potentially very favorable nature of the direct opposite approach. Perhaps these individuals are not surprised by the way episode 7 threw off much of the doubt cast on him by episode 6 because they are familiar with the legal process, and understand how a single case can be framed in extremely different ways. Meanwhile, the opinions of those of us more in the dark when it comes to the dynamics of a prosecution/defense were more malleable. Regardless of the exact reasons for this divergence, the difference in the two groups' innocence indices following episode 7 is immediately striking.
I have doubted myself with respect to my thoughts on Adnan's case over the past many weeks. I've oscillated up and down with the severity of the rises and falls in the included figures. In brief, it is incredible to see that the week of Serial you just consumed can so profoundly alter the core of your beliefs about the case.
You don't need Sarah Koenig to serenade you during the finale with tales of the tenuous nature of truth in order to have the point driven home that we are often unsure, uncertain, unclear about our convictions… Just look at the pretty pictures.
Or just listen to that girl pronounce Mail Chimp. Is it Kimp or Chimp? We may never know.
Update: New Visuals [March 2015]
My original approach in visualizing this data used line charts, which I think are often the best option for depicting time-series data (due to their simplicity and corresponding comprehensibility). However, using line charts in this context generates lines out of what are truly discrete points–in other words, the plot assumes a linear trend in opinion changes between episodes, which does not reflect the true nature of the data. Because of this conceptual shortcoming that accompanies line charts, I decided to try out another form of visualization that could more accurately represent the discrete nature of the data points. Thanks to a great FlowingData post, I realized an interesting way to do this would be to use stacked bar charts since all the percentages for each opinion of guilty, not guilty, or undecided add up to 100%. (Originally I was attracted to the stacked area chart because it seems sexier–or, as sexy as a chart can be–however, this method also fails to accurately depict the discrete nature of the data points! So, stacked bar chart it is.) Here is the result (made with the ggplot2 package in R)1:
These charts depict public opinion on Adnan's guilt (in terms of percentages who believe he is guilty, not guilty, and the percentage of those who are undecided) over the course of the release of Serial episodes 6-12.
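For anyone who wants to reproduce this kind of chart without R and ggplot2, the sketch below shows a rough equivalent in Python with pandas and matplotlib; the percentages are placeholder values, not the actual poll results, which are linked below.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder percentages only; the real numbers come from the weekly
# /r/serialpodcast polls linked at the end of the post.
opinion = pd.DataFrame(
    {
        "Guilty": [33, 23, 42, 17, 16, 15, 14],
        "Not guilty": [20, 39, 18, 44, 47, 50, 54],
        "Undecided": [47, 38, 40, 39, 37, 35, 32],
    },
    index=[f"Episode {n}" for n in range(6, 13)],
)

# Each row sums to 100, so stacking the bars shows the full split per episode
ax = opinion.plot(kind="bar", stacked=True, figsize=(8, 5))
ax.set_ylabel("Percentage of poll respondents")
ax.set_title("Opinion on Adnan's guilt after each episode (placeholder data)")
plt.tight_layout()
plt.show()
```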
Here are the poll percentage sources in case anyone is curious: Percentages for episodes 6-8, Percentages for episodes 9-11, and Percentages for episode 12 – I collected the data for episode 12 at 6:30pm EST Thursday 12/18/14. I looked at the updated information at 12:50am EST 12/19/14 and, of course, more people had voted, but the percentages for guilty/innocent/undecided were the same. So, I use these numbers without fear of dramatic change in the next few days.
All data and scripts used for this project are available in my "Serial" Github repo.
Percentages are rounded to the nearest percentage point. Therefore, some combinations might add up to 101 or 99 instead of 100 due to rounding in each category.
Apothem of a Pentagon – Formulas and Examples
The apothem of a pentagon is the perpendicular distance from the center of the pentagon to the center of one of its sides. The apothem can also be considered as the radius of the incircle of a polygon. The apothem is used mainly to calculate the area of a regular polygon.
Here, we will learn how to calculate the apothem of a pentagon. In addition, we will use the apothem formula to solve some examples.
Learning about the apothem of a pentagon with examples.
To find the formula for the apothem, we can use the diagram of a pentagon:
Here, we divide the pentagon into five congruent triangles and use one of the triangles to find the apothem. We can see that the apothem is the height of one of the triangles and divides one of the sides into two equal parts.
We can use trigonometry to find the length of the apothem. We start by finding the angle at the center of the pentagon that is adjacent to the apothem. We have five triangles, and by dividing each triangle in two, we get 10 small right triangles.
Also, we know that a complete circle has 360°, so dividing 360° among the 10 triangles gives 36°. This angle at the center of the pentagon always measures 36°.
We use the tangent to calculate the height of the triangle. In a right triangle, the tangent of an angle is equal to the length of the opposite side divided by the length of the adjacent side.
The side opposite the 36° angle is the base of the triangle (half the length of one side of the pentagon). The side adjacent to the 36° angle is the height of the triangle. Therefore, we have:
$latex \tan(36°)= \frac{ \text{opposite}}{ \text{adjacent}}$
$latex \tan(36°)= \frac{ \frac{s}{2}}{a}$
$latex \tan(36°)= \frac{s}{2a}$
$latex a= \frac{s}{2\tan(36°)}$
where a represents the length of the apothem and s represents the length of one of the sides of the pentagon.
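As a quick numerical check of the formula, here is a small Python sketch (the function names are mine); it reproduces the answers of Examples 1 and 4 below.

```python
import math

def pentagon_apothem(side_length):
    """Apothem of a regular pentagon: a = s / (2 * tan(36°))."""
    return side_length / (2 * math.tan(math.radians(36)))

def pentagon_side(apothem):
    """Side length of a regular pentagon: s = 2 * a * tan(36°)."""
    return 2 * apothem * math.tan(math.radians(36))

print(round(pentagon_apothem(4), 2))  # 2.75, matching Example 1 below
print(round(pentagon_side(7.6), 2))   # 11.04, matching Example 4 below
```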
The formula for the apothem of a pentagon is used to solve the following examples. Each example has its respective solution, but it is recommended that you try to solve the problems yourself before looking at the answer.
What is the length of the apothem of a pentagon that has sides of length 4 m?
We use the apothem formula with $latex s=4$. Therefore, we have:
$latex a= \frac{s}{2 \tan(36°)}$
$latex a= \frac{4}{2 \tan(36°)}$
$latex a= \frac{4}{1.453}$
$latex a=2.75$
The length of the apothem is 2.75 m.
The length of the sides of a pentagon is 5 m. What is the length of the apothem?
We have that the sides have a length of $latex s = 5$. Therefore, using this value in the formula, we have:
$latex a= \frac{s}{2 \tan(36°)}$
$latex a= \frac{5}{2 \tan(36°)}$
$latex a= \frac{5}{1.453}$
$latex a=3.44$
The length of the apothem is 3.44 m.
A pentagon has sides that are 10 m long. What is the length of its apothem?
Here, the length of the pentagon's sides is $latex s=10$. Therefore, substituting this value in the formula, we have:
$latex a= \frac{10}{2 \tan(36°)}$
$latex a= \frac{10}{1.453}$
$latex a=6.88$
The length of the apothem is 6.88 m.
A pentagon has an apothem of length 7.6 m. What is the length of its sides?
In this case, we start with the length of the apothem and we want to know the length of the sides of the pentagon. Therefore, we use the formula with $latex a=7.6$ and solve for s:
$latex 7.6= \frac{s}{2 \tan(36°)}$
$latex 7.6= \frac{s}{1.453}$
$latex s= (7.6)(1.453)$
$latex s=11.04$
The length of the sides is 11.04 m.
What is the length of the sides of a pentagon that has an apothem with a length of 6 m?
Again, we start with the length of the apothem and we are going to find the length of the sides of the pentagon. Therefore, we use the value $latex a=6$ in the formula and solve for s:
$latex 6= \frac{s}{2 \tan(36°)}$
$latex 6= \frac{s}{1.453}$
$latex s= (6)(1.453)$
$latex s=8.72$
The length of the sides is 8.72 m.
Put into practice the use of the pentagons apothem formula to solve the following problems. If you need help, you can look at the solved examples above.
What is the apothem of a pentagon with sides of length 8m?
$latex a=5.51m$
A pentagon has sides that are 13m long. What is the length of the apothem?
$latex a=8.95m$
What is the length of the sides of the pentagon that has an apothem of 8m?
$latex s=11.62m$
A pentagon has an apothem of 12m. What is the length of the sides?
$latex s=17.44m$
The impact of potential Brexit scenarios on German car exports to the UK: an application of the gravity model
Jacqueline Karlsson, Helena Melin & Kevin Cullinane (ORCID: orcid.org/0000-0002-4031-3453)
Journal of Shipping and Trade, volume 3, Article number: 12 (2018)
The objective is to forecast the impact of potential Brexit scenarios on the export volume of passenger cars from Germany to the UK. Based on Germany's total export volume of passenger cars, a double-logarithmic gravity model is specified and estimated using Ordinary Least Squares (OLS) regression. The final estimated model has strong explanatory power, with all variables significant at the 5% level. This is used for forecasting future export volumes under different Brexit scenarios. Diagnostic tests suggest that the model is robust and efficient. All tested Brexit scenarios are found to negatively impact passenger cars export volumes from Germany to the UK. The level of tariffs is found to have the most significant effect, but lower GDP due to Brexit is forecast to offset the benefits of trading with lower tariffs. The most pessimistic scenario for 2030 forecasts is a reduction of 15.4% compared to the 'no Brexit' base-case scenario.
The UK held a referendum on continued membership of the EU on June 23rd 2016. The 'Leave Campaign' won a surprising victory, meaning that what is commonly referred to as Brexit (Hunt and Wheeler 2017) emerged as an imminent reality. Subsequently, the UK Government formally invoked Article 50 of the Lisbon Treaty of the EU on March 30th 2017, thus starting a two-year process of leaving the EU (Castle 2017).
Despite the fact that Brexit is a relatively new phenomenon, the literature already contains works on the overall Brexit and referendum (Butler et al. 2016; Glencross 2015; Menon and Salter 2016a; Hobolt 2016; Vasilopoulou 2016), the reasons behind Brexit (Menon and Salter 2016b; Thielemann and Schade 2016), the referendum outcome (Goodwin and Heath 2016), the negotiations or legal implications following it (Jensen and Snaith 2016; Lazowski 2016; Kroll and Leuffen 2016; Gordon 2016; Chalmers 2016), the future challenges for the EU (Biscop 2016; Simón 2015) and, even, estimates of the financial implications (Boulanger and Philippidis 2015).
Authoritative sources suggest that there are three long-term scenarios that could potentially emerge as outcomes from the UK's Brexit negotiations with the EU (HM Treasury 2016; PwC 2016; European Union Committee 2016):
UK becomes a member of the EEA; with EU non-member states treated as members of the Single Market as though they were part of the EU (European Union Committee 2016). As such, this would mean that the free movement of goods, capital, services and people would continue and would be legally enforced by designated institutions under the ultimate jurisdiction of the EU Court of Justice. The UK, however, would not be a part of the customs union. This would enable the UK to independently and separately sign FTAs with trading partners other than the EU (Emerson 2016). A non-member country must, however, pay into the EU budget. Despite the fact that the UK would not be allowed to take part in future decision making processes within the EU (Emerson 2016), members of the EEA are required to contribute funds to decrease social and economic disparities; a form of grant to poorer EU members based on the contributor's economic situation (HM Treasury 2016). To put this financial obligation into context, in 2011, the UK's net contribution to the EU amounted to GBP 128 per capita while, as a member of the EEA, that of Norway amounted to GBP 108 per capita (House of Commons 2013). Traditionally, this type of agreement has suited smaller countries such as Norway, Iceland and Liechtenstein (OECD 2016).
UK negotiates a bilateral trade agreement with the EU; This could reduce most tariff and non-tariff barriers on goods traded, but agreements that yield the greatest access to the Single Market usually come with the greatest obligations, particularly with respect to the EU's four freedoms that are deemed indispensable (HM Treasury 2016; Economist 2016).
UK trades with the EU under WTO terms; This is the most likely scenario if no other agreement is reached between the parties (Economist 2017), particularly since the British Government has already committed to not accepting any deal that is not in the UK's best interests (Parker and Barker 2017). The WTO standards are based on the concept of the Most Favoured Nation (hereinafter MFN), whereby all countries have to be treated equally and countries cannot discriminate between trading partners. Hence, if one country would like to change the tariff for one of its trading partners, it has to change it for all other trading partners as well (World Trade Organization 2017a). The main advantage of this option is that it would free the UK from all obligations associated with access to the Single Market (HM Treasury 2016). However, the tariffs on some goods could be high.
All three of these scenarios would result in different tariffs on goods and services and all are predicted to have a significant impact on the UK's GDP (OECD 2016). Through the fundamental changes in the nature of its trading relationships with EU partners that any of these three scenarios will bring about, the GDPs of the UK's current trading partners within the EU are also potentially under threat. In relation to this potential, some attempts have been made to quantify the potential impact of Brexit within particular sectors or industries. Examples include the marine environment (Boyes and Elliott 2016), the agriculture or food sector (Swinbank 2016; Grant 2016; Matthews 2016) and the pharmaceutical industry (Song 2016; Baker et al. 2016). Similarly, the focus of the work presented herein is the automotive sector and, more specifically, the fundamental objective is to assess the potential impact of Brexit on the volume of passenger cars exported from Germany to the U.K.
Germany is one of the UK's most important trade partners, with its main export to the UK being passenger cars. The German automotive industry is the biggest car industry in the EU, producing 34.9% of the total number of cars produced in the EU (OICA 2017). Approximately 2% of Germany's total population works in direct automotive manufacturing, compared to the EU average of 1% and the equivalent value of 0.5% in the UK (ACEA 2016). The figure for Germany equates to about 500,000 permanent employees (VDA 2017).
In 2015, 77% of all passenger cars manufactured in Germany were sold abroad (VDA 2017), representing a total export volume of 7.8 million passenger cars. Out of this total, 1.4 million passenger cars were exported to the UK (United Nations 2017a). This means that 39% of the total number of units imported into the UK were of German origin (United Nations 2017a). The four most important German car manufacturers in terms of volume are Volkswagen, BMW Group, Mercedes-Benz and Audi, with BMW and Volkswagen having the largest market shares (Statista 2017), with the most popular models being the Volkswagen Golf, Volkswagen Polo, Audi A3 and Mini. In 2016, the combined sales of these models were 223,038 units, representing 32.3% of the sales of the top ten most popular models sold in the UK (SMMT 2017).
As the result of global trends in production and consumption, the automotive industry has come to be characterised by a globalised supply base, where there has been an increased amount of outsourcing to suppliers. Another trend has been to adopt Just-in-time concepts (Thomas and Oliver 1991). Both these phenomena have combined to leave automotive manufacturers more and more dependent on their suppliers. Indeed, several companies have gone so far as to pursue even more interactive relationships with their suppliers, with collaboration in product development, supplier development, information sharing and more (McIvor et al. 1998). As a consequence, automotive supply chains are both highly interconnected and international and consist of many suppliers. This makes the industry particularly vulnerable to the imposition of tariffs and, within the context of the EU, highly reliant on the Single Market (Campbell 2016). If tariffs were to be applied within this context, the additional time required for customs checks would be significant and the increase in cost substantial (Campbell 2016; Monaghan 2016). In addition, as O'Grady (2016) suggests, the imposition of tariffs in the automotive industry would be administratively difficult.
The sheer volume of German passenger car exports to the UK and the complexity of the sector's supply chain network, as well as the significance of the trade for both the German and UK economies, more than justifies a focus on the sector when considering the three scenarios likely to emerge from Brexit negotiations. This work applies the gravity model as a reduced form of general equilibrium model of international trade in final goods. The estimated version of the model provides the foundation for a quantitative forecasting model that will facilitate achieving the objective of forecasting the impact of the three likely Brexit scenarios on Germany's passenger car exports to the UK.
The remainder of the work is structured as follows. The chosen methodology is justified and described in the following section. Details of the analysis which leads to an estimated version of the model are provided in section "Model estimation". This includes the systematic elimination of variables and the application of diagnostic tests. Section "Results" outlines the results achieved from applying the forecasting model. Finally, in section "Conclusions", conclusions are drawn and suggestions made for future research.
The gravity model
The gravity model is commonly applied in economics and has been deemed to be a successful tool for estimating international trade (Anderson 1979), a general framework to examine trade patterns (Eichengreen and Irwin 1995) and one of the most "empirically successful" trade analytical tools in economics (Anderson and van Wincoop 2003, p.170). The theoretical foundation of the model has been established through the work of several scholars, such as Linnemann (1966); Bergstrand (1985); Evenett and Keller (2002) and Anderson and van Wincoop (2003).
The gravity model estimates bilateral trade flows where trade is positively related to the level of GDP of the trading partners and negatively related to the distance between them. In the model, bilateral trade flows are based on the mutual gravitational force between the nations, with the gravity variable GDP reflecting mass. In addition to the conventional standard version of the model, several modifications can be made and dummy variables added (Chi and Kilduff 2010).
The gravity model has been widely used to estimate product and factor movements within the context of bilateral trade flows across international borders (Anderson 1979; Bergstrand 1985; McCallum 1995; Baier and Bergstrand 2001; Hummels 2001; Feenstra 2002; Anderson and van Wincoop 2003; Anderson and van Wincoop 2004; Anderson 2011) and trade agreements (McCallum 1995; Lavergne 2004; Rose 2004; Carrere 2006; Baier and Bergstrand 2007; Caporale et al. 2009; Cipollina and Salvatici 2010; Kepaptsoglou et al. 2010). Nobel laureate, Jan Tinbergen, was the first to apply the gravity model to the effect of Free Trade Agreements (FTAs) on bilateral trade flows, by including them in the model as a dummy variable (Tinbergen 1962). Since then, the gravity model has become the foundation for estimating the effects of FTAs and customs unions on bilateral trade flows (Bayoumi and Eichengreen 1995), particularly in relation to bilateral trade flows between fellow members of the EU (Balassa 1967; Aitken 1973; Abrams 1980; Brada and Mendez 1985; Frankel et al. 1995).
There is minimal agreement as to which variables should be included in the gravity equation, and which ones that should be omitted (Yamarik and Ghosh 2005). Anderson and van Wincoop (2003) point out that bias can appear in both the estimation and the analysis through the omission of the wrong variables. However, trade data appears to perform empirically well in the gravity model (Feenstra 2002) and, as a result, the gravity model has gained in popularity in the empirical trade literature (Yamarik and Ghosh 2005).
The gravity equation is derived as a reduced form from a general equilibrium model of international trade in final goods. According to Chi and Kilduff (2010) the original gravity model in international trade is defined as:
$$ {T}_{ij}=A\times \left(\frac{Y_i\times {Y}_j}{D_{ij}}\right) $$
…where the variables are defined as follows:
Tij trade flow from country i to country j;
Yi GDP of country i;
Yj GDP of country j;
Dij physical distance between country i and country j and;
A is a constant.
Nevertheless, according to Bergstrand (1985), the gravity model in international trade commonly takes the form:
$$ {T}_{ij}={\beta}_0{\left({Y}_i\right)}^{\beta_1}{\left({Y}_j\right)}^{\beta_2}{\left({D}_{ij}\right)}^{\beta_3}{\left({A}_{ij}\right)}^{\beta_4}{\mu}_{ij} $$
…where the parameters to be estimated are denoted by β and the variables are defined as follows:
Dij physical distance between country i and country j;
Aij other factor(s) either aiding or resisting trade between country i and country j and;
μij a logarithmic-normally distributed error term with E(ln μij) = 0.
The gravity equation is normally specified in a double-logarithmic form and estimated using Ordinary Least Squares (OLS) regression analysis (Eichengreen and Irwin 1995), although there are some exceptions to this general practice. Variations which have been applied to resolve a number of different issues include the use of non-linear OLS (Anderson and van Wincoop 2003), maximum likelihood estimation (Baier and Bergstrand 2007), a tobit model form (Chen 2004; Martin and Pham 2015), poisson pseudo maximum-likelihood estimation (Santos Silva and Tenreyro 2006) and a semi-logarithmic form (Eichengreen and Irwin 1995).
The sample used to estimate the model consisted of all countries to which Germany exported more than 1000 passenger cars in the designated year and for which data were available. Country i in the model denotes Germany and country j the import country. The total sample consists of more than 80 observations per year, representing approximately 98% of the total quantity of passenger cars exported by Germany over the 4-year period 2012 to 2015 inclusive.
In specifying the sample, the work was delimited by focussing solely on the export of complete cars. Thus, interactions between countries or industries which take place either before or after a complete car is exported are not accounted for. This means that the following are not addressed in the sample specification, data collection, model estimation or forecasts: the movement of components; whether Brexit scenarios bring about a change in the export quantities of passenger cars from Germany to other countries or; from where the UK would import cars in the future in the case that export quantities from Germany are predicted to decline. Model forecasts assume ceteris paribus applies to external factors. Thus, for example, they do not take into account the expected growth in demand for electric cars which, inevitably, will disrupt the current market structure. Finally, the work does not distinguish between new and used passenger cars.
Selection of variables and data collection
The dependent variable in the model is the volume of passenger cars exported from Germany (country i) to a range of importing nations (country j). Data on the export and import quantities of passenger cars from country i to country j were collected from the Comtrade database (United Nations 2017a). The collection of the required trade data was undertaken on the basis of the following approach:
Data were extracted using the 4th version of the Harmonized Commodity Description and Coding System (hereinafter HS) which is an international nomenclature. The six-digit system consists of goods classified at different levels of specificity.
Some countries do not report data at lower commodity code levels (United Nations Statistics 2017a). Hence, this analysis uses the highest commodity code level for which quantity is reported. Thus, the commodity code "HS 8703 Passenger Cars" is used to collect data on imported and exported quantities of passenger cars.
Although the Comtrade database provides information on quantity, weight and value of trade, this analysis utilises quantities so that issues such as valuation and currency conversion are avoided.
In line with the advice of United Nations Statistics (2017a), the total quantities were based on the consolidated amount for all countries and not what the database refers to as "world" totals.
An average was taken for those situations where there were differences between reported export and reported import quantities (United Nations Statistics 2017b).
Where relevant, export quantities include re-exports.
The core of the gravity model is based on GDP and distance, but a variety of variables were considered for initial inclusion (Yamarik and Ghosh 2005). Selecting the appropriate variables for inclusion is important since including irrelevant variables can lower the precision of the model, while omitting variables that are important could introduce bias into the model estimates (Greene 2003).
The independent variables included within the initial specification of the gravity model to be tested are as follows: the GDP of countries i and j; the GDP per capita of countries i and j; the population of countries i and j; the geographical distance between the trade partners; the quality of logistics in country j and; the import tariff on passenger cars moving from country i to country j. In addition to these, the gravity model is initially specified to include a number of dummy variables controlling for: membership of the EEA; if country j has direct access to the sea; country adjacency and; if countries i and j share a common language. The choice of these variables was made by reviewing work by, for example, Aitken (1973), Rose (2004) and Chi and Kilduff (2010), who have all performed similar studies.
GDP and population
GDP is included in the model on the basis that the GDP of an exporting nation measures its productive capacity (Aitken 1973: Abrams 1980), while the GDP of an importing nation provides a measure of absorptive capacity or potential market size (Tinbergen 1962). Together with population, the value of GDP will impact the demand for imports (Aitken 1973; Abrams 1980). In terms of the exporting nation, the potential for economies of scale suggests that the larger the population, the more efficient is market production (Aitken 1973).
GDP per capita for countries i and j are also included in the model because, as established by Linder (1961), countries that have similar demand structures trade more with each other than dissimilar countries and that greater inequality has a negative effect on trade. Bergstrand (1990) argues that this relationship is present in both the supply structure, based on the Heckscher-Ohlin theorem, as well as in the demand structure, such as in the work by Linder (1961).
Data on GDP, population and GDP per capita for all countries were collected from the World Bank (2017a). GDP data referred to the GDP at purchaser's prices in USD; population data was the total population based on mid-year figures for all residents, regardless of legal status or citizenship; and GDP per capita is the ratio of the former over the latter (World Bank 2017a). In utilising this source, it should be recognised that the World Bank relies on international and regional sources such as the United Nations (2017b), Eurostat by the European Commission (2017) and Prism (2017). The World Bank also uses national statistics gathered from census reports and other national sources, which means that it is reliant on those individual countries to provide updated statistics (see World Bank (2017a) for more details). Countries which did not report their national statistics were excluded from the sample.
Distance, Total logistics cost and the quality of logistics
Geographical distance has long been treated as a proxy for transportation cost (for example, see Linnemann 1966). Disdier and Head (2008) found that bilateral trade is almost directly inversely proportionate to physical distance, with an average increase of distance by 10% reducing the trade between the parties by approximately 9%. Chi and Kilduff (2010) suggest that this is because transportation costs and convenience favour closer relationships and sourcing. Due to the advancement of logistics-related technology, distance as a proxy for transportation costs has been questioned and total logistics costs argued as being a more appropriate input variable. Disdier and Head (2008) have shown, however, that the effect of geographical distance has not declined in more recent years, indicating that technological change has not led to a reduction in the impact of distance.
A distance variable is thus included within the model as one proxy for total logistics cost, with distance measured either from the capital city of country i to the capital city of country j, as suggested by Yamarik and Ghosh (2005) or as the "great circle distance" from the location where the largest port is situated in country i to the location of the largest port of country j, in line with Smarzynska (2001). The choice between these two measures is made on a country-by-country basis where countries north of Turkey or located within Europe were assumed to transport cars by land and the others by sea. If country j lacked a port and was assumed to transport cars by sea, the distance was measured from the capital city of country j to the closest port, and from that port to the largest port of country i. Road transport distances were obtained from Google Maps (2017) and sea transport distances from Marinetraffic (2017).
In order to test other potential influences on total logistics cost, the model initially included a proxy for infrastructure, namely the total span of the motorway network, in line with Bougheas et al. (1999). However, due to the characteristics of the international car trade (i.e. it is mostly moved as seaborne freight in car carriers), the model was later modified to instead include a dummy variable for country j's direct access to the sea. Google Maps (2017) provided the source for data on whether country j had direct access to the sea and for countries that share a border with country i.
The overall quality of a nation's logistics system is sourced from the World Bank (2017b), where the Logistics Performance Index (LPI) is derived from a survey where respondents rate countries based on several logistics performance criteria: "the efficiency of customs and border clearance"; "the quality of trade and transport infrastructure"; "the ease of arranging competitively priced shipments"; "the competence and quality of logistics services"; "the ability to track and trace consignments" and; "the frequency of which shipments reach consignees within scheduled or expected delivery times" (World Bank 2014, pp.51–52). The index is only made available every second year. Hence, the index for 2012 was applied to the models for 2012 and 2013 and the index for 2014 was applied in 2014 and 2015. The input variable was based on the country with the highest index value being the benchmark and determined as follows for the importing nation, country j:
$$ {LOGIS}_j=\left(\ \frac{x_j}{x_i}\ \right)\times 100 $$
LOGISj represents the overall quality of logistics performance of country j in year t; xj is the observed quality of logistics in country j in year t and; xi is the observed quality of logistics in the country with the highest LPI value in year t.
Tariffs
All countries profit from fewer barriers to trade (Eaton and Kortum 2002) and reductions of tariffs have been argued to explain about 26% of the growth of trade in OECD countries between the late 1950s and the late 1980s (Baier and Bergstrand 2001). Therefore, a variable reflecting the tariff rate was included in the model. For the purpose of collecting the data, the MFN tariff rates for 'HS 8703 Passenger Cars' were sourced from the World Trade Organization (2017b). The rates were presented as applied MFN tariff rates in weighted averages based on the sub-categories of 'HS 12 8703 Passenger Cars'. The data were compared to all of the EU's PTAs and, if there was a deviation, the bound rate in the PTA was applied. In cases where HS 8703 was not specifically referred to in a PTA, the applied MFN tariff rate presented by the World Trade Organization (2017b) was utilised.
In addition, the most recent updated tariff rates were assumed to be valid in the years following. Thus, if country j reported a tariff rate x for HS 8703 in year t, then this rate was applied in years t + 1 and t − 1 in cases where there was no other tariff rate present. If there was a change of tariff rate x to tariff rate z in year t + 1, the tariff rate z was applied in t + 1 and all years following it. If the tariff rate x was introduced and came into effect in year t − 1, but tariff rate y was applied in all years before year t − 1, then the tariff rate x applies in year t − 1 and all years following it. A value of 1 was added to all tariff rates so that logarithms could be applied.
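A rough sketch of how this carry-forward rule might be implemented in Python with pandas is shown below; the country names, rates and one-column-per-year layout are invented for illustration and are not the actual data structure used in the study.

```python
import numpy as np
import pandas as pd

# Invented reported MFN tariff rates on HS 8703 (NaN = no rate reported that year)
reported = pd.DataFrame(
    {2012: [10.0, np.nan], 2013: [np.nan, 5.0], 2014: [np.nan, np.nan], 2015: [12.0, 0.0]},
    index=["Country A", "Country B"],
)

# Carry the most recently reported rate forward into later years, and let the
# earliest reported rate also cover preceding sample years that have no other rate
applied = reported.ffill(axis=1).bfill(axis=1)

# A value of 1 is added so that logarithms can be taken even when the tariff is zero
log_tariff = np.log(applied + 1)
print(applied)
print(log_tariff)
```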
Language commonality and country adjacency
Language commonality was included to show whether countries i and j shared a language or cultural similarity (Frankel et al. 1995) since this makes trade easier (Bougheas et al. 1999). When two countries share a language, it increases trade "substantially" (Havrylyshyn and Pritchett 1991, p.6). In addition to the language commonality variable, the model also included a border effect dummy variable. Aitken (1973, p.882) argues that neighbouring countries can be expected to trade more with each other due to "similarity of tastes and an awareness of common interests". The data on language commonality was based on CIA (2017).
EEA membership
Most economists argue that international trade should be free (Rose 2004). However, the regional integration provided by the EU has the "potential to harm participants through trade diversion or nonparticipants nearby through worsened terms of trade" (Eaton and Kortum 2002, p.1743). A dummy variable is included, therefore, for membership of the European Community. Baier and Bergstrand (2001) explain that it might seem unnecessary to include dummy variables to reflect a preferential trade agreement (hereinafter PTA), but the PTA itself might lead to greater trade beyond the effect of no tariff barriers. Input data on membership of the EEA was collected from the European Union (2017) and included all member countries of the EU or EFTA.
In summary, the model is fully specified as follows:
$$ {\displaystyle \begin{array}{c}\ln \left({EX}_{ij}\right)=\alpha +{\beta}_1\ln \left({GDP}_i\right)+{\beta}_2\ln \left({GDP}_j\right)+{\beta}_3\ln \left({D}_{ij}\right)+{\beta}_4\ln \left({POP}_i\right)\\ {}+{\beta}_5\ln \left({POP}_j\right)+{\beta}_6\ln \left({GDPCAP}_i\right)+{\beta}_7\ln \left({GDPCAP}_j\right)\\ {}+{\beta}_8\ln \left({TARIFF}_{ij}\right)+{\beta}_9\ln \left({LOGIS}_j\right)+{\beta}_{10}{CA}_{ij}\\ {}+{\beta}_{11}{LC}_{ij}+{\beta}_{12}{EEA}_j+{\beta}_{13}{SEA}_j+{e}_{ij}\end{array}} $$
..where the parameters to be estimated are denoted by β and the variables defined as follows:
EXij export of passenger cars from country i to country j, in units;
GDPi GDP of country i, in current USD;
GDPj GDP of country j, in current USD;
Dij physical distance between the trade centre in country i and country j, in kilometres;
POPi total population of country i;
POPj total population of country j;
GDPCAPi GDP per capita of country i, in current USD;
GDPCAPj GDP per capita of country j, in current USD;
TARIFFij tariff rate that country j imposes on passenger cars from country i;
LOGISj quality of logistics of country j;
CAij country adjacency, a dummy variable with a value of 1 if country j shares a common border with country i, 0 otherwise;
LCij common language, a dummy variable with a value of 1 if country j shares an official language with country i, 0 otherwise;
EEAj European Community, a dummy variable with a value of 1 if country j is a member of the European Community, 0 otherwise;
SEAj direct access to the sea, a dummy variable with a value of 1 if country j has direct access to the sea, 0 otherwise; and.
eij the error term.
This model is estimated using OLS regression analysis. A higher GDP and population in country j were expected to lead to greater demand and, consequently. to higher passenger car exports from country i to country j. A higher GDP and population in country i were expected to increase the production capacity. Similarly, a higher quality of logistics in the importing country can also be expected to be positively related to trade volumes. With respect to the dummy variables, the adjacency of trading nations, a common language, membership of the EEA and direct access to the sea would all also be expected to facilitate trade. Hence, the variables GDPj; POPj; GDPCAPj; LOGISj; CAij; LCij; EEAj and SEAj were all expected to be positively correlated to export quantities. On the other hand, increasing physical distance between trade partners, as well as higher tariffs, were both expected to have a depressing effect on trade quantities. Hence, the variables Dij and Tariffij were expected to be negatively correlated to trade volumes.
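To make the estimation step concrete, the following Python sketch shows how a double-logarithmic gravity equation of this general form can be estimated by OLS with statsmodels; the data are synthetic stand-ins generated from assumed coefficients, not the Comtrade, World Bank, WTO and LPI inputs described above, and the column names are mine.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 80  # roughly the number of importing countries per year in the sample

# Synthetic stand-ins for the explanatory variables used in the specification
data = pd.DataFrame({
    "gdp_j": rng.uniform(1e10, 3e12, n),         # GDP of country j, current USD
    "distance_km": rng.uniform(300, 19000, n),   # distance between trade centres
    "tariff": rng.choice([0.0, 5.0, 10.0], n),   # tariff rate applied to HS 8703
    "logis_j": rng.uniform(40, 100, n),          # LPI of j scaled against the best performer
})

# Synthetic export volumes generated from an assumed gravity relationship plus noise
data["exports"] = np.exp(
    1.0
    + 0.8 * np.log(data["gdp_j"])
    - 0.4 * np.log(data["distance_km"])
    - 0.2 * np.log(data["tariff"] + 1)
    + 1.5 * np.log(data["logis_j"])
    + rng.normal(0, 0.5, n)
)

# Double-logarithmic specification estimated by OLS
X = sm.add_constant(pd.DataFrame({
    "ln_gdp_j": np.log(data["gdp_j"]),
    "ln_distance": np.log(data["distance_km"]),
    "ln_tariff_plus1": np.log(data["tariff"] + 1),
    "ln_logis_j": np.log(data["logis_j"]),
}))
y = np.log(data["exports"])
print(sm.OLS(y, X).fit().summary())
```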
Systematic elimination of variables
An econometric problem can occur when the dependent variable is a component of one of the regressors or, more generally, when the regressors in the model are correlated with the disturbance term (McCallum 1995). The dependent variable, i.e. the export volume of passenger cars from country i to country j, is a component of the GDP of country i and, consequently, the latter variable was removed. Similarly, GDP per capita and population for country i were also removed, since they were also strongly correlated with GDP. The results of the regressions with the remaining variables are displayed in Table 1. In line with the approach presented by Yamarik and Ghosh (2005), variables were added or removed from the regression equation on a one-by-one basis to determine how they might affect the final regression equation.
Table 1 Systematic elimination of variables
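One way to mimic this one-by-one elimination, continuing from the hypothetical data frame in the previous sketch, is to refit the equation with each candidate regressor removed in turn and compare the adjusted R² and the full-model p-values; the candidate list below is purely illustrative.

```python
# Sketch: drop candidate regressors one at a time and compare adjusted R-squared.
import statsmodels.formula.api as smf

candidate_terms = ["ln_GDP_j", "ln_POP_j", "ln_GDPCAP_j", "ln_DIST",
                   "ln_TARIFF", "ln_LOGIS_j", "CA", "LC", "EEA", "SEA"]

def fit(terms, data):
    return smf.ols("ln_EX ~ " + " + ".join(terms), data=data).fit()

full = fit(candidate_terms, df)
print("full model adj. R2:", round(full.rsquared_adj, 3))

for term in candidate_terms:
    reduced = fit([t for t in candidate_terms if t != term], df)
    print(f"without {term}: adj. R2 = {reduced.rsquared_adj:.3f}, "
          f"p-value in full model = {full.pvalues[term]:.3f}")
```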
As shown in Table 1, the dummy variables representing direct access to the sea and EEA membership were excluded from the model, due to their low impact on the overall explanatory power of the regression; almost all countries in the dataset had access to the sea. Moreover, the dummy variable EEA was also removed, since it was strongly correlated with distance and tariffs, suggesting that the trade-creating benefits of EEA membership are captured by other regressors. The two remaining dummy variables - common language and country adjacency - were also excluded, since they were statistically insignificant. Lastly, the population of country j was removed due to its low overall impact on the model.
After conducting the systematic elimination, the remaining independent variables were: GDP of country j; geographical distance between country i and country j; import tariffs; and the quality of logistics of country j. A value of 1 was added to the variable TARIFFij so that logarithms could be applied. The variable LOGISj was based on the country with the highest index value as defined in Eq. (4).
The final gravity model, as shown at step (6) in Table 1, is defined as:
$$ {\displaystyle \begin{array}{c}\ln \left({EX}_{ij}\right)=\alpha +0.760\ln \left({GDP}_j\right)-0.369\ln \left({D}_{ij}\right)\\ {}-3.958\ln \left({TARIFF}_{ij}\right)+1.988\ln \left({LOGIS}_j\right)+{e}_{ij}\end{array}} $$
Or, more specifically:
$$ {\displaystyle \begin{array}{c}\ln \left({EX}_{ij}\right)=\alpha +0.760\ln \left({GDP}_j\right)-0.369\ln \left({D}_{ij}\right)\\ {}-3.958\ln \left({TARIFF}_{ij}+1\right)+1.988\ln \left(\left(\frac{x_j}{x_i}\right)\times 100\right)+{e}_{ij}\end{array}} $$
Dij Physical distance between the trade centres in country i and country j, in kilometres;
TARIFFij Tariff rate that country j imposes on passenger cars from country i; and
LOGISj Quality of logistics of country j.
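Because the specification is log-log, the estimated coefficients can be read directly as elasticities. The sketch below evaluates the deterministic part of Eq. (6) for illustrative placeholder inputs (the intercept α and the index values are not taken from the paper) and confirms, for example, that a 10% higher GDP in the importing country raises predicted exports by roughly 1.10^0.760 - 1, or about 7.5%, all else equal.

```python
# Evaluating the deterministic part of Eq. (6); alpha and the inputs are placeholders.
import numpy as np

def ln_exports(gdp_j, dist_km, tariff, lpi_j, lpi_max, alpha=0.0):
    logis = (lpi_j / lpi_max) * 100.0        # LOGIS_j normalised to the best performer
    return (alpha
            + 0.760 * np.log(gdp_j)
            - 0.369 * np.log(dist_km)
            - 3.958 * np.log(tariff + 1)     # +1 so that a zero tariff can be logged
            + 1.988 * np.log(logis))

# Elasticity reading of the GDP coefficient: +10% GDP_j => about +7.5% exports.
base = ln_exports(2.9e12, 930, 0.0, 4.07, 4.23)
high = ln_exports(2.9e12 * 1.10, 930, 0.0, 4.07, 4.23)
print("{:+.1%}".format(np.exp(high - base) - 1))  # equals 1.10**0.760 - 1
```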
A confidence interval of 95% was used to test the model. All variables were statistically significant, contributed to the overall explanatory power of the model, and produced results that were consistent over several years. Hence, the variables were considered robust and efficient in measuring the impact of Brexit on German car exports to the U.K.
The final estimated gravity model, as arrived at through systematic elimination and expressed in Eq. (6), was further analysed with respect to the assumptions which underpin the OLS regression technique utilised (Montgomery et al. 2012).
The relationship between the dependent variable and the regressors was tested using scatter diagrams and a correlation matrix (Wegman 1990). All covariates in the final equation were found to have an approximately linear relationship with the dependent variable, although a few outliers were present. Table 2 presents the Pearson's correlation for 2015 for all variables included in the final analysis.
Table 2 Pearson's correlation
The presence of multi-collinearity was tested for using the Variance Inflation Factor (hereinafter VIF), as suggested by O'Brien (2007) and Montgomery et al. (2012). The VIF is defined as:
$$ {VIF}_k=\frac{1}{\left(1-{R}_k^2\ \right)} $$
…where VIFk is the Variance Inflation Factor for the estimated coefficient k and R²k is the coefficient of multiple determination obtained when the k-th regressor is regressed on all of the other regressors. According to O'Brien (2007), the estimated value of a coefficient is seriously affected by multi-collinearity if the value of the VIF is greater than 10. As shown in Table 3, the calculated VIFs for all coefficients, for all years tested, are lower than 3. This provides quite strong evidence that multi-collinearity is not present within the dataset.
Table 3 Variance inflation factors
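For reference, the VIF defined above can be computed directly from the design matrix, for example with statsmodels; the snippet below assumes the hypothetical data frame and column names used in the earlier sketches and is illustrative only.

```python
# Sketch: VIF for each regressor retained in the final model (hypothetical columns).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df[["ln_GDP_j", "ln_DIST", "ln_TARIFF", "ln_LOGIS_j"]])

vifs = pd.Series(
    [variance_inflation_factor(X.values, k) for k in range(X.shape[1])],
    index=X.columns,
)
print(vifs.drop("const"))  # values above 10 would signal serious multi-collinearity
```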
A Q-Q plot (Liang et al. 2004) suggests that the normality assumption is complied with. Figure 1 presents, however, a scatterplot of the squared residuals relative to the unstandardized predicted values, which suggests that the assumption of homoscedasticity is violated. Both the Breusch and Pagan (1979) and the White (1980) tests yielded probability values of less than 0.05, indicating heteroscedasticity in the sample. In line with the approach recommended by Huber (1967) and White (1980), heteroscedasticity-robust standard errors were used to deal with this.
Scatterplot test for homoscedasticity
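Both tests, and the robust-error correction, are available in statsmodels. The sketch below applies them to a fitted OLS results object (here the hypothetical `model` from the earlier sketch) and is intended only to show the mechanics, not to reproduce the paper's test statistics.

```python
# Sketch: heteroscedasticity tests and robust (Huber-White) standard errors.
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

resid, exog = model.resid, model.model.exog
print("Breusch-Pagan p-value:", het_breuschpagan(resid, exog)[1])
print("White test p-value:   ", het_white(resid, exog)[1])

# If either p-value falls below 0.05, re-estimate the covariance matrix with
# heteroscedasticity-robust standard errors; the coefficients themselves are unchanged.
robust = model.get_robustcov_results(cov_type="HC1")
print(robust.summary())
```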
The final diagnostic tests involve the calculation of Cook's distance, as the basis for identifying outliers and analysing their leverage (Cook 1977; Montgomery et al. 2012). The calculations reveal that there are outliers present and that some of these observations have leverage. To determine the extent to which the leveraged observations impact parameter estimates and whether the sample is robust, a set of diagnostic tests suggested by Rose (2004) is implemented. While the final dataset includes approximately 98% of the total quantity of cars exported from Germany for all 4 years measured, this suggested approach involves estimating a model for a sub-sample of the full dataset such that: sample (1) excludes 3σ outliers; sample (2) excludes 2σ outliers; sample (3) includes only those exports to countries reported as having a high income by the World Bank (2017d); sample (4) includes only those exports to countries reported as having upper middle income by the World Bank (2017e); and sample (5) includes only those exports to countries reported as having lower middle income by the World Bank (2017f).
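Cook's distance and leverage are available from statsmodels' influence diagnostics. The sketch below flags potentially influential observations using the common 4/n rule of thumb, which is only one possible cut-off and not necessarily the criterion applied by the authors.

```python
# Sketch: Cook's distance and leverage for the fitted gravity model.
influence = model.get_influence()
cooks_d = influence.cooks_distance[0]   # first element holds the distances
leverage = influence.hat_matrix_diag

n = len(cooks_d)
flagged = [i for i, d in enumerate(cooks_d) if d > 4.0 / n]
print(f"{len(flagged)} potentially influential observations out of {n}")
print("maximum leverage:", leverage.max())
```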
As might be expected, the results from these diagnostic tests indicate that excluding outliers reduces the variability of the dataset and increases the explanatory power of the estimated regression model. However, excluding outliers does not impact the coefficients to a very large extent. Splitting the sample by income level increases the explanatory power of the regression analysis in the case of rich countries (denoted sample (3) in Table 4) and influences some coefficients. Most notably, there is an increase in the impact of GDP and a reduction in the impact of tariffs.
Table 4 Diagnostic tests based on sub-samples
In summary, these diagnostic tests do not yield any categorical evidence of a lack of robustness in the parameter estimates derived from a regression analysis using the full dataset. In any case, since outliers contain important evidence of irregular activities which the data describes, many would argue that they should remain in the dataset analysed (Aggarwal and Yu 2001). Thus, the parameter estimates derived from a regression analysis of the original full dataset remain as the basis for the forecasting model utilised in the ensuing section.
The forecasting model
As commonly applied in the forecasting of demand or production (Head and Mayer 2014), a double-logarithmic technique was utilised to convert the final gravity model to constant elasticity in order to facilitate econometric analysis. Applying a confidence interval of 95%, the results from the final gravity model are statistically significant and explain between 85.5% and 88.5% of the variability of the export quantities in each year of the sample period. The sample dataset consists of all countries to which Germany (country i) exported more than 1000 units in the designated year and for which data were available. Hence, this sample size represents approximately 98% of the total number of Germany's exported passenger cars in each year. Table 5 presents the consolidated results from the gravity model by year.
Table 5 Results from the gravity model
The estimated coefficients for the 2015 gravity model were utilised in the forecasting model. Based on the gravity equation defined in Eq. (6), both short-term and long-term forecasts were derived (for 2020 and 2030 respectively) to quantify the effect of Brexit on German car exports to the U.K.
Forecast input values for GDP are based on a report by the Organisation for Economic Co-operation and Development which presented a GDP forecast for a base-case 'No Change' (No Brexit) scenario and then forecasts of how GDP would change with Brexit under different possible future scenarios, expressed as 'optimistic', 'central' and 'pessimistic' (OECD 2016). Forecast input values for the tariff rate were assumed to be: the current 'Most Favoured Nation (MFN)' rate in the absence of a Preferential Trade Agreement (PTA) for the 'pessimistic' scenario; the most popular tariff rate in the dataset that was greater than zero for the 'central' scenario; and a tariff rate of zero for the 'optimistic' scenario.
In summary, the specific scenarios tested for 2020 are specified as follows:
No change: A GDP reduction of 0.00% and a tariff rate of 0.00% are assumed. This scenario reflects a development whereby the U.K. did not exit the EU. It could also reflect the successful negotiation of a transitional agreement. OECD (2016) suggests that either of these two outcomes will serve to strengthen external perceptions of the EU and provide a stimulus to trade and foreign direct investment in every member state. In fact, the assumptions for GDP growth and tariff rates under this scenario could actually be viewed as rather conservative if the UK's decision to remain aligned to the EU were to facilitate further free trade and investment agreements with non-EU nations.
Central scenario: A GDP reduction of 3.30% and a tariff rate of 5.00% are assumed. The tariff rate is meant to reflect a semi-beneficial scenario whereby, for example, the U.K. successfully negotiated a bilateral agreement with the EU. The logic underpinning these forecast values revolves around the greater economic uncertainty that the U.K. would face after leaving the EU, especially during the early years (OECD 2016). Investor and consumer confidence is therefore likely to fall, and spending decisions are likely to be deferred. The potential also exists for a flight of capital out of the country. While the latter means a reduced availability of capital, the former implies the presence of a risk premium which will raise the cost of capital. In addition, since it will take time to develop new trade agreements, the OECD (2016) suggests that the UK will initially have to trade under WTO rules with both EU and third-party nations. Planned changes to immigration policies within the UK and the deterrent effect of a stuttering economy will also lead to reduced GDP, exacerbated by a depreciation in the value of sterling.
For the forecast year of 2030, two additional scenarios are also tested, specified as follows:
Pessimistic scenario: A GDP reduction of 7.70% and a tariff rate of 9.70% are assumed. The tariff rate is based on the current MFN rate and the scenario could reflect trade under WTO terms. OECD (2016) forecasts that longer-term structural changes to the U.K. economy would result in lower business investment than would otherwise occur and a continuous decline in capital stock. A predicted reduction in innovation, the skills base and managerial quality also contributes to this longer-term pessimistic scenario, all of which undermines future potential returns on what is expected to become a declining asset base in the U.K.
Optimistic scenario: A GDP reduction of 2.72% and a tariff rate of 0.00% are assumed. This reflects a scenario in which the U.K. either remains within the Single Market or where some of the losses outlined in the pessimistic scenario are offset by greater deregulation of the U.K. labour and product markets, as well as by fiscal savings from stopping the net transfer of funds to the EU. Since UK labour and product markets are already highly deregulated and the OECD (2016) predicts UK fiscal savings of only 0.3–0.4% of GDP, the potential for this offset value is rather minimal.
For each of the forecast scenarios (f), the GDP of country j (the UK), was adjusted in the following manner:
$$ {GDP}_f={s}_t\times \left(1-{x}_f\right) $$
where the variables were defined as follows:
GDPf forecast GDP for country j under scenario f;
st forecast GDP for country j in the absence of Brexit; and
xf forecast percentage GDP change for country j under scenario f.
In the forecasting model, country i denotes Germany and country j denotes the UK. Moreover, the distance in kilometres between London and Berlin is used; the value of 1 is added to the tariff rate; and the quality of logistics in 2016 is applied to all scenarios, regardless of the forecast year and scenario severity.
Based on Eq. (6) and adjusted to take into account Eq. (8), the forecasting model is defined as:
$$ {\displaystyle \begin{array}{c}\ln \left({EX}_{ij}\right)=\alpha +0.760\ln \left({s}_t\times \left(1-{x}_f\right)\right)-0.369\ln \left({D}_{ij}\right)\\ {}-3.958\ln \left({TARIFF}_{ij}+1\right)+1.988\ln \left(\left(\frac{x_j}{x_i}\right)\times 100\right)+{e}_{ij}\end{array}} $$
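The sketch below implements Eq. (9) as a function. The coding of the tariff input (assumed here to be a decimal fraction) and the placeholder values for distance and the logistics indices are assumptions, so its output is illustrative and is not intended to reproduce the figures in Table 6. The example isolates the GDP channel: a 3.3% GDP reduction alone scales predicted exports by (1 - 0.033)^0.760, i.e. roughly -2.5%.

```python
# Sketch of the scenario forecast in Eq. (9); inputs are illustrative placeholders.
import numpy as np

def ln_exports_forecast(s_t, x_f, dist_km, tariff, lpi_j, lpi_i, alpha=0.0):
    """Log of forecast exports under scenario f, following Eq. (9)."""
    return (alpha
            + 0.760 * np.log(s_t * (1 - x_f))   # GDP of country j adjusted as in Eq. (8)
            - 0.369 * np.log(dist_km)
            - 3.958 * np.log(tariff + 1)        # tariff assumed to be a decimal fraction
            + 1.988 * np.log((lpi_j / lpi_i) * 100))

# Isolating the GDP channel: a 3.3% GDP cut with everything else held fixed.
base = ln_exports_forecast(2.9e12, 0.000, 930, 0.0, 4.07, 4.23)
cut  = ln_exports_forecast(2.9e12, 0.033, 930, 0.0, 4.07, 4.23)
print("GDP channel only: {:+.1%}".format(np.exp(cut - base) - 1))  # about -2.5%
```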
Model outputs
The results from the forecasts show that if there were to be no Brexit or, alternatively, if a transition deal without changes to the current agreement could be negotiated, Germany is predicted to export 1.3 million passenger cars to the UK in 2020. Nevertheless, if Brexit does occur and a 5% tariff is applied, this quantity would decrease by 7.73%, representing lost sales volume of approximately 102,000 passenger cars.
The forecast for 2030 was based on three different scenarios. The pessimistic scenario projected that Germany would export 15.39% fewer cars to the UK than if no Brexit occurred; the optimistic scenario would lead to 0.92% fewer exported cars than in the absence of Brexit; and the central scenario would lead to a reduction of 9.20% in passenger car exports from Germany to the UK.
Hence, all Brexit scenarios would lead to lower export quantities of passenger cars from Germany to the UK in terms of number of units, compared to a situation where the UK would have stayed in the EU. Table 6 presents a consolidated view of the forecast outcomes.
Table 6 Results from the forecasting model
In summary, based on a model which includes GDP, distance, tariffs and the quality of logistics – capturing demand factors and logistics-related costs – all Brexit scenarios are estimated to reduce German export quantities compared to a situation where Brexit did not occur.
The trading relationship between Germany and the UK will inevitably change when the UK leaves the EU. The size of the effects will depend, to a large extent, on the terms under which the two countries will trade in the future. The future financial and trade-related uncertainties relating to Brexit will depend, to a large extent, on whether the UK retains access to the Single Market. Assuming that the UK does lose access to the Single Market, the OECD (2016) estimates that the UK's exports will drop by 8% due to this loss of preferential treatment, not just with the EU but also with other trade partners. The OECD also asserts that supply chains in both the UK and the EU, which have developed over a long time, would disentangle and production costs could increase for both parties (OECD 2016). Similarly, HM Treasury (2016) has calculated that the UK's total trade quantities would decrease by between 17 and 24% as the result of not having access to the Single Market. However, there have been arguments raised against these relatively pessimistic predictions. For instance, in full expectation of potential supply chain disruption, many organizations and political institutions are developing plans (in some cases jointly) to avoid or surmount whatever difficulties and problems may arise with existing supply chains (Manners-Bell 2017). Given the role played by the 'Quality of Logistics' variable in the final model, this should have a significant impact on model outputs. Similarly, many observers are pointing to the potential benefits to UK competitiveness and GDP that comes from the seemingly inevitable weaker currency that will result from leaving the EU (Dhingra et al. 2016) and this too should also be reflected in the findings derived from the model.
Open Europe (2015) argues that the best case scenario would be for the UK to develop an FTA with the EU. They estimate that the UK GDP would be 1.6% higher in 2030 under such a scenario compared to if the UK stayed in the EU (Open Europe 2015). Gros (2016) also argues that the negative effect of Brexit on the GDP of the UK would be long-term and that this would lead to a weaker currency which could have a positive impact on export competitiveness, as well as mitigate the financial impact of leaving the Single Market. More pessimistically, HM Treasury (2016) suggests that trade would be lower in many product sectors if the UK were to trade under an FTA with the EU. This is due to an estimated negative impact on production, brought about by an assumed decrease in foreign direct investment into the UK. Open Europe (2015) has developed a 'worst case' scenario where the UK fails to develop a trade deal and loses access to the Single Market. Under such a scenario, they have estimated that the UK GDP would be 2.2% lower in 2030 compared to if it had stayed in the EU (Open Europe 2015).
The outcome of the analysis herein is that under all likely Brexit scenarios that have been identified in an extensive review of the literature, the export quantities of German passenger cars to the UK are estimated to decrease. Analysing the estimated parameters associated with the key variables in the model reveals the extent of the expected decrease. The estimated short-term impact suggests that Germany could expect to export 7.73% fewer cars in 2020 compared to a situation of no Brexit. The long-term impact under a pessimistic scenario involves applying the MFN tariff rate and utilising a forecast large reduction in the GDP of the UK. This yields a predicted decrease of 15.39% in passenger car exports from Germany to the UK in 2030, compared to a scenario of no Brexit. Under a central scenario, which involves applying a 5.00% tariff and only a moderate forecast reduction in the GDP of the UK, export quantities are forecast to decrease by 9.20%. If the UK were to trade with the EU without tariffs and, in consequence, with only a relatively small reduction in GDP, the export of German passenger cars to the UK is estimated to decrease by 0.92%. Of course, this finding raises the question of how German car exporters are going to offset this loss of export value by expanding export volumes in other markets. Similarly, it is interesting to determine what knock-on effects the findings imply for the demand for cars in the UK, particularly with respect to the identification of substitute sources to satisfy this demand. Both issues constitute suitable topics for future research.
The analysis conducted herein finds that the effect of tariffs is substantial in determining model forecasts. This might suggest that representatives of the automotive industry should be lobbying politicians to develop an agreement between the UK and the EU where passenger cars would face low tariffs. Nonetheless, even under a no-tariff scenario, German exports of passenger cars to the UK are still forecast to decrease. This is because the expected negative impact on the GDP of the UK will effectively offset the benefits of trading without tariffs. Thus, the forecast input values for the GDP of the UK under each of the tested Brexit scenarios are critical to the forecast outcomes produced by the model. The GDP forecast values utilised in this analysis have been sourced from the OECD (2016) and predict reductions in GDP under all Brexit scenarios compared to the situation of no Brexit. Clearly, such forecasts are subject to error, even to the extent that future GDP values may actually prove to be positive compared to the no-Brexit base-case. This points to the fact that it is not the absolute values of the forecasts derived under each Brexit scenario that are critical, but rather the relative outcomes achieved under each scenario, since it is this which should motivate the German government and automotive industry to seek lower tariffs under Brexit in support of its exports to the UK. At the same time, the differential impact of the different scenarios should also inform the actions and decisions of supporting industries such as supply chain planners and the logistics sector, particularly in seeking to avoid any potential disruption to existing supply chains.
Clearly, the analysis reported herein can be applied to other industrial sectors and to other bilateral trades between the UK and other EU member states. There is also great scope for the disaggregation of the results achieved within this work, to analyse the impact under each of the tested Brexit scenarios on the individual segments or even brands which comprise German car exports to the UK. In other words, such an analysis would take into account the different price elasticities which exist within different passenger car segments or across the four major German brands in the UK market: Volkswagen, BMW, Audi and Mercedes-Benz.
In 2015, the most important countries of origin for the UK's total imports (in terms of value of all products) were Germany (15.0%), China (10.0%), the U.S (9.2%), the Netherlands (7.5%) and France (6.1%) (United Nations 2017a).
In 2015, Germany's biggest export partners for passenger cars were the UK (17.4%), the U.S (15.9%), China (7.8%), France (6.5%) and Italy (5.5%) (United Nations 2017a).
In 2015, Germany was the largest source of passenger car imports into the UK in volume terms (39.0%), followed by Spain (12.3%). Germany was also the largest source of UK passenger car imports in terms of value (48.1%), followed by Belgium (13.3%), Spain (10.2%), France (4.7%) and Japan (4.1%) (United Nations 2017a).
Audi is a subsidiary of Volkswagen.
A full list of importing nations (country j) which were included in, and excluded from, the sample dataset and the quantities represented of the total export quantity is available from the corresponding author.
See United Nations Statistics (2010, 2013) for detailed descriptions of the data sources, methodology and limitations of the Comtrade database.
GDP at purchaser's prices is the "sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. It is calculated without making deductions for depreciation of fabricated assets or for depletion and degradation of natural resources. Data are in current U.S. dollars. Dollar figures for GDP are converted from domestic currencies using single year official exchange rates. For a few countries where the official exchange rate does not reflect the rate effectively applied to actual foreign exchange transactions, an alternative conversion factor is used" (World Bank 2017c).
Where sea transport is assumed and distance is measured on a port-to-port basis, there is an implicit assumption that in estimating a model based on the global trade in cars, road distances are relatively trivial compared to distances at sea and that sea distances are, therefore, quite highly correlated with total transport distance where shipping is employed. While this supports the likely unbiasedness of the coefficient which is estimated for this variable, it does not negate using a more appropriate input value for distance when applying the model to an individual bilateral trade. Thus, in the later application of this estimated gravity model to the specific trade in cars from Germany to the UK, the distance between capital cities is utilized as a more realistic input value for distance, even though shipping is the primary mode of carriage.
Bayerische Motoren Werke
EEA:
European Economic Area
EFTA:
European Free Trade Association
FTA:
Free Trade Agreement
GBP:
British pound sterling
GDP:
Gross Domestic Product
HM:
Her Majesty's
HS:
Harmonized Commodity Description and Coding System
LPI:
Logistics Performance Index
LSCI:
Liner Shipping Connectivity Index
MFN:
Most Favoured Nation
OECD:
Organisation for Economic Co-operation and Development
OLS:
Ordinary Least Squares
PTA:
Preferential Trade Agreement
PwC:
PricewaterhouseCoopers
SMMT:
Society of Motor Manufacturers and Traders
VDA:
Verband der Automobilindustrie
VIF:
Variance Inflation Factor
WTO:
World Trade Organization
Abrams R (1980) International trade flows under flexible exchange rates. Federal Reserve Bank of Kansas City. Econ Rev 65(3):3–10
ACEA (2016) The Automobile Industry Pocket Guide 2016–2017. In: European Automobile Manufacturers Association http://www.acea.be/uploads/publications/ACEA_Pocket_Guide_2016_2017.pdf Accessed 1 Mar 2017
Aggarwal C, Yu P (2001) Outlier detection for high dimensional data. ACM SIGMOD Rec 30(2):37–46
Aitken N (1973) The effect of the EEC and EFTA on European trade: a temporal cross-section analysis. Am Econ Rev 63(5):881–892
Anderson J (1979) A Theoretical Foundation for the gravity equation. Am Econ Rev 69(1):106–116
Anderson J (2011) The gravity model. Ann Rev Econ 3(1):133–160
Anderson J, van Wincoop E (2003) Gravity with gravitas: a solution to the border puzzle. Am Econ Rev 93(1):170–192
Anderson J, van Wincoop E (2004) Trade Costs. J Econ Lit 42(3):691–751
Baier S, Bergstrand J (2001) The growth of world trade: tariffs, transport costs, and income similarity. J Int Econ 53(1):1–27
Baier S, Bergstrand J (2007) Do free trade agreements actually increase Members' International trade? J Int Econ 71(1):72–95
Baker A, Ali R, Thrasher A (2016) Impact of BREXIT on UK gene and Cell therapy: the need for continued Pan-European collaboration. Hum Gene Ther 27(9):653–655
Balassa B (1967) Trade creation and trade diversion in the European common market. Econ J 77(305):1
Bayoumi T, Eichengreen B (1995) Is regionalism simply a diversion? Evidence from the evolution of the EC and EFTA. IMF Working Pap 95(109):1
Bergstrand J (1985) The gravity equation in international trade: some microeconomic foundations and empirical evidence. Rev Econ Stat 67(3):474–481
Bergstrand J (1990) The Heckscher-Ohlin-Samuelson model, the Linder hypothesis and the determinants of bilateral intra-industry trade. Econ J 100(403):1216–1229
Biscop S (2016) All or nothing? The EU global strategy and defence policy after the Brexit. Contemp Sec Pol 37(3):431–445
Bougheas S, Demetriades P, Morgenroth E (1999) Infrastructure, transport costs and trade. J Int Econ 47(1):169–189
Boulanger P, Philippidis G (2015) The end of a romance? A note on the quantitative impacts of a 'Brexit' from the EU. J Agric Econ 66(3):832–842
Boyes S, Elliott M (2016) Brexit: the marine governance horrendogram just got more horrendous! Mar Pollut Bull 111(1–2):41–44
Brada J, Mendez J (1985) Economic integration among developed, developing and centrally planned economies: a comparative analysis. Rev Econ Stat 67(4):549
Breusch T, Pagan A (1979) A simple test for heteroscedasticity and random coefficient variation. Econometrica 47(5):1287–1294
Butler G, Jensen M, Snaith H (2016) 'Slow change may pull us apart': debating a British exit from the European Union. J Eur Publ Pol 23(9):1278–1284
Campbell P (2016) UK car industry fears effects of Brexit tariffs on supply chain. In: The Financial Times https://www.ft.com/content/c397f174-9205-11e6-a72e-b428cb934b78 Accessed 8 Feb 2017
Caporale GM, Rault C, Sova R, Sova A (2009) On the bilateral trade effects of free trade agreements between the EU-15 and the CEEC-4 countries. Rev World Econ 145(2):189–206
Carrere C (2006) Revisiting the effects of regional trade agreements on trade flows with proper specification of the gravity model. Eur Econ Rev 50(2):223–247
Castle P (2017) U.K. Initiates 'Brexit' and Wades Into a Thorny Thicket. In: The New York Times https://www.nytimes.com/2017/03/29/world/europe/brexit-uk-eu-article-50.html Accessed 8 Feb 2017
Chalmers D (2016) Alternatives to EU membership and the rational imagination. Polit Q 87(2):269–279
Chen N (2004) Intra-national versus international trade in the European Union: why do national borders matter? J Int Econ 63(1):93–118
Chi T, Kilduff P (2010) An empirical investigation of the determinants and shifting patterns of US apparel imports using a gravity model framework. J Fashion Mark Manag Int J 14(3):501–520
CIA (2017) Library. In: CIA https://www.cia.gov/library/publications/the-world-factbook/fields/2098.html Accessed 24 Feb 2017
Cipollina M, Salvatici L (2010) Reciprocal trade agreements in gravity models: a meta-analysis. Rev Int Econ 18(1):63–80
Cook R (1977) Detection of influential observation in linear regression. Technometrics 19(1):15–18
Dhingra S, Ottaviano G, Sampson T, Van Reenen J (2016) The impact of Brexit on foreign investment in the UK. Centre for Economic Performance, London School of Economics, BREXIT, p 24 http://www.kenwitsconsultancy.co.uk/wp-content/uploads/2016/09/BREXIT-2016-Policy-Analysis-from-the-Centre-for-Economic-Performance.pdf#page=40, (Viewed 05/04/18)
Disdier A, Head K (2008) The puzzling persistence of the distance effect on bilateral trade. Rev Econ Stat 90(1):37–48
Eaton J, Kortum S (2002) Technology, geography, and trade. Econometrica 70(5):1741–1779
Economist (2016) Economic integration and the "four freedoms". In: De Economist http://www.economist.com/news/finance-and-economics/21711327-why-free-movement-labour-essential-europes-economic-project-economic Accessed 24 April 2017
Economist (2017) Why the "WTO option" for Brexit will prove tricky. In: De Economist http://www.economist.com/blogs/economist-explains/2017/01/economist-explains-4 Accessed 10 Feb 2017
Eichengreen B, Irwin D (1995) Trade blocs, currency blocs and the reorientation of world trade in the 1930s. J Int Econ 38(1–2):1–24
Emerson M (2016) Which model for Brexit? In: CEPS Special Report https://www.ceps.eu/system/files/SR147%20ME%20Which%20model%20for%20Brexit.pdf Accessed 6 Feb 2017
European Commission (2017) Database - Eurostat. In: European Commission http://ec.europa.eu/eurostat/data/database Accessed 17 April 2017
European Union. (2017). About the EU European Union. Available at: https://europa.eu/european-union/about-eu/countries_en Accessed 25 April 2017
European Union Committee (2016) Brexit: the options for trade. In: European Union Committee https://www.publications.parliament.uk/pa/ld201617/ldselect/ldeucom/72/72.pdf Accessed 10 Feb 2017
Evenett S, Keller W (2002) On theories explaining the success of the gravity equation. J Polit Econ 110(2):281–316
Feenstra R (2002) Border effects and the gravity equation: consistent methods for estimation. Scott J Pol Econ 49(5):491–506
Frankel J, Stein E, Wei S (1995) Trading blocs and the Americas: the natural, the unnatural, and the super-natural. J Dev Econ 47(1):61–95
Glencross A (2015) Why a British referendum on EU membership will not solve the Europe question. Int Aff 91(2):303–317
Goodwin M, Heath O (2016) The 2016 referendum, Brexit and the left behind: an aggregate-level analysis of the result. Polit Q 87(3):323–332
Google Maps (2017) Google Maps. In: Google Maps https://www.google.se/maps Accessed 17 April 2017
Gordon M (2016) Brexit: a challenge for the UK constitution, of the UK constitution? Eur Const Law Rev 12(03):409–444
Grant W (2016) The challenges facing UK farmers from Brexit. EuroChoices 15(2):11–16
Greene W (2003) Econometric Analysis, 5th edn. Prentice Hall, Upper Saddle River
Gros D (2016) The Not-So-High Costs of Brexit. In: Project Syndicate https://www.project-syndicate.org/commentary/overblown-costs-of-brexit-by-daniel-gros-2016-09 Accessed 17 Feb 2017
Havrylyshyn O, Pritchett L (1991) European trade patterns after the transition. In: World Bank Working Paper WPS. World Bank, Washington, DC, p 748
Head K, Mayer T (2014) Gravity equations: workhorse, toolkit, and cookbook. In: Gopinath G, Helpman E, Rogoff K (eds) Handbook of International Economics, vol 4. Elsevier, Amsterdam, pp 131–195
HM Treasury (2016) HM Treasury analysis: the long-term economic impact of EU membership and the alternatives. In: HM Treasury https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/517415/treasury_analysis_economic_impact_of_eu_membership_web.pdf Accessed 10 Feb 2017
Hobolt S (2016) The Brexit vote: a divided nation, a divided continent. J Eur Publ Pol 23(9):1259–1277
House of Commons (2013) Leaving the EU. Res Pap 13/42. In: UK Parliament http://researchbriefings.parliament.uk/ResearchBriefing/Summary/RP13-42 Accessed 10 Feb 2017
Huber PJ (1967) The behavior of maximum likelihood estimates under nonstandard conditions. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol 1. Berkeley, University of California Press, pp 221–233
Hummels D (2001) Toward a Geography of Trade Costs. In: SSRN Electronic Journal, GTAP Working Papers, Paper, p 17
Hunt A, Wheeler B (2017) Brexit: All you need to know about the UK leaving the EU. In: BBC News http://www.bbc.com/news/uk-politics-32810887 Accessed 6 Feb 2017
Jensen M, Snaith H (2016) When politics prevails: the political economy of a Brexit. J Eur Public Pol 23(9):1302–1310
Kepaptsoglou K, Karlaftis M, Tsamboulas D (2010) The gravity model specification for modeling international trade flows and free trade agreement effects: a 10-year review of empirical studies. Open Econ J 3(1):1–13
Kroll D, Leuffen D (2016) Ties that bind, can also strangle: the Brexit threat and the hardships of reforming the EU. J Eur Public Pol 23(9):1311–1320
Lavergne M (2004) The long and short of the Canada-US free trade agreement. Am Econ Rev 94(4):870–895
Lazowski A (2016) Unilateral withdrawal from the EU: realistic scenario or a folly? J Eur Public Pol 23(9):1294–1301
Liang J, Pan W, Yang Z (2004) Characterization-based Q–Q plots for testing multinormality. Stat Prob Lett 70(3):183–190
Linder S (1961) An essay on trade and transformation, 1st edn. John Wiley, New York
Linnemann H (1966) An econometric study of international trade flows, 1st edn. North Holland Publishing Company
Manners-Bell J (2017) Supply chain risk management: understanding emerging threats to global supply chains. Kogan Page Publishers
Marinetraffic (2017) MarineTraffic Voyage planner - Distance calculator - Route finder. In: Marinetraffic https://www.marinetraffic.com/en/voyage-planner Accessed 11 April 2017
Martin W, Pham C (2015) Estimating the gravity model when zero trade flows are frequent and economically determined. In: World Bank Group http://documents.worldbank.org/curated/en/695631467998785933/pdf/WPS7308.pdf Accessed 25 April 2017
Matthews A (2016) The potential implications of a Brexit for future EU Agri-food policies. EuroChoices 15(2):17–23
McCallum J (1995) National Borders Matter: Canada-U.S. Regional Trade Patterns. Am Econ Rev 85(3):615–623
McIvor RT, Humphreys PK, McAleer WE (1998) European car makers and their suppliers: changes at the interface. Eur Bus Rev 98(2):87–99
Menon A, Salter J (2016a) Brexit: initial reflections. Int Aff 92(6):1297–1318
Menon A, Salter J (2016b) Britain's influence in the EU. Natl Inst Econ Rev 236(1):7–13
Monaghan A (2016) UK car industry risks 'death by a thousand cuts' after Brexit vote. In: The Guardian https://www.theguardian.com/business/2016/nov/03/uk-car-industry-risks-death-by-thousand-cuts-brexit-vote Accessed 7 Mar 2017
Montgomery D, Peck E, Vining G (2012) Introduction to linear regression analysis, 5th edn. John Wiley and Sons, Inc, Hoboken
O'Brien R (2007) A caution regarding rules of thumb for variance inflation factors. Qual Quant 41(5):673–690
OECD (2016) The economic consequences of Brexit: a taxing decision. In: OECD http://www.oecd-ilibrary.org/docserver/download/5jm0lsvdkf6k-en.pdf?expires=1486712893&id=id&accname=guest&checksum=E10E1D42F00E6D90145FBB1379868519 Accessed 10 Feb 2017
O'Grady S (2016) Brexit latest: Tariffs on UK car exports to Europe would be 'disastrous' for jobs says Jaguar Land Rover boss. In: The Independent http://www.independent.co.uk/news/business/news/brexit-latest-tariffs-on-uk-car-exports-to-europe-would-be-disastrous-for-jobs-says-jaguar-land-a7334991.html Accessed 17 Feb 2017
OICA. (2017). 2016 Production Statistics. [dataset] OICA. Available at: http://www.oica.net/category/production-statistics/ Accessed 17 April 2017
Open Europe (2015). What if...? The consequences, challenges and opportunities facing Britain outside the EU. Open Europe. Available at: http://openeurope.org.uk/intelligence/britain-and-the-eu/what-if-there-were-a-brexit/ Accessed 10 Feb 2017
Parker G, Barker A (2017) Theresa may warns UK will walk away from 'bad deal'. In: The Financial Times https://www.ft.com/content/c3741ca2-dcc6-11e6-86ac-f253db7791c6 Accessed 10 Feb 2017
Prism (2017) Regional Data and Tools. In: Prism http://sdd.spc.int/en/stats-by-topic/population-statistics Accessed 6 April 2017
PwC (2016) Leaving the EU: Implications for the UK economy. In: PwC http://www.pwc.co.uk/economic-services/assets/leaving-the-eu-implications-for-the-uk-economy.pdf Accessed 7 March 2017
Rose A (2004) Do we really know that the WTO increases trade? Am Econ Rev 94(1):98–114
Santos Silva J, Tenreyro S (2006) The log of gravity. Rev Econ Stat 88(4):641–658
Simón L (2015) Britain, the European Union and the future of Europe: a geostrategic perspective. RUSI Journal 160(5):16–23
Smarzynska B (2001) Does relative location matter for bilateral trade flows? An extension of the gravity model. J Econ Integr 16(3):379–398
SMMT (2017) SMMT Motor Industry Facts 2016. In: The Society of Motor Manufacturers and Traders (SMMT) https://www.smmt.co.uk/wp-content/uploads/sites/2/SMMT-Motor-Industry-Facts-2016_v2-1.pdf Accessed 1 Mar 2017
Song C (2016) Understanding the aftermath of Brexit: implications for the pharmaceutical industry. Pharm Med 30(5):253–256
Statista (2017) Vehicles & Road Traffic. [online] Statista. https://www.statista.com/markets/419/topic/487/vehicles-road-traffic/ Accessed 20 Oct 2018
Swinbank A (2016) Brexit or Bremain? Future options for UK agricultural policy and the CAP. EuroChoices 15(2):5–10
Thielemann E, Schade D (2016) Buying into myths: free movement of people and immigration. Polit Q 87(2):139–147
Thomas R, Oliver N (1991) Components supplier patterns in the UK motor industry. Omega 19(6):609–616
Tinbergen J (1962) Shaping the world economy, 1st edn. Twentieth Century Fund, New York
United Nations (2017a) UN Comtrade Database. In: United Nations http://comtrade.un.org/data/ Accessed 22 Feb 2017
United Nations (2017b) Department of Economic and Social Affairs. In: United Nations https://esa.un.org/unpd/wpp/Download/Standard/Population/ Accessed 6 April 2017
United Nations Statistics (2010) International Merchandise Trade Statistics. In: United Nations Statistics https://unstats.un.org/unsd/trade/eg-imts/IMTS%202010%20(English).pdf Accessed 25 April 2017
United Nations Statistics (2013) International Merchandise Trade Statistics: Compilers Manual, Revision 1. In: United Nations Statistics https://unstats.un.org/unsd/trade/publications/seriesf_87Rev1_e_cover.pdf Accessed 25 April 2017
United Nations Statistics (2017a) Subcategories do not add up to higher level codes. In: United Nations Statistics https://unstats.un.org/unsd/tradekb/Knowledgebase/50094/Subcategories-do-not-add-up-to-higher-level-codes Accessed 25 April 2017
United Nations Statistics (2017b) Bilateral asymmetries. In: United Nations Statistics https://unstats.un.org/unsd/tradekb/Knowledgebase/50657/Bilateral-asymmetries Accessed 25 April 2017
Vasilopoulou S (2016) UK Euroscepticism and the Brexit referendum. Polit Q 87(2):219–227
VDA (2017) Annu Rep 2016. In: Association of the German Automotive Industry https://www.vda.de/en/services/Publications/annual-report-2016.html Accessed 1 Mar 2017
Wegman E (1990) Hyperdimensional data analysis using parallel coordinates. J Am Stat Assoc 85(411):664–675
White H (1980) A Heteroskedasticity-consistent covariance matrix estimator and a direct test for Heteroskedasticity. Econometrica 48(4):817
World Bank (2014) LPI Methodology. Trade logistics in the global economy. In: World Bank https://wb-lpi-media.s3.amazonaws.com/LPI%20Methodology.pdf Accessed 17 April 2017
World Bank (2017a) World Development Indicators. In: World Bank http://databank.worldbank.org/data/reports.aspx?Code=NY.GDP.MKTP.KD.ZG&id=1ff4a498&report_name=Popular-Indicators&populartype=series&ispopular=y Accessed 12 April 2017
World Bank. (2017b). Logistics Performance Index. [dataset] World Bank. Available at: https://lpi.worldbank.org/ Accessed 17 April 2017
World Bank (2017c) GDP (current US$). In: World Bank http://data.worldbank.org/indicator/NY.GDP.MKTP.CD Accessed 12 April 2017
World Bank (2017d) High income. In: World Bank http://data.worldbank.org/income-level/high-income Accessed 12 April 2017
World Bank (2017e) Upper middle income. In: World Bank http://data.worldbank.org/income-level/upper-middle-income Accessed 12 April 2017
World Bank (2017f) Lower middle income. In: World Bank http://data.worldbank.org/income-level/lower-middle-income Accessed 12 April 2017
World Trade Organization (2017a) Principles of the trading system. In: World Trade Organization https://www.wto.org/english/thewto_e/whatis_e/tif_e/fact2_e.htm Accessed 23 Feb 2017
World Trade Organization (2017b) Tariff download facility. In: World Trade Organization http://tariffdata.wto.org/ Accessed 6 April 2017
Yamarik S, Ghosh S (2005) A sensitivity analysis of the gravity model. Int Trade J 19(1):83–126
The authors would like to express their gratitude to three anonymous referees who provided extremely helpful comments on an earlier draft of this paper. Also, thanks are due to Prof. Mike Lai for expediting the publication process in his role as Editor-in-Chief of the journal.
The research reported within the paper was not supported by any form of external funding.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
School of Business, Economics and Law, University of Gothenburg, PO BOX 610, SE-405 30, Gothenburg, Sweden
Jacqueline Karlsson, Helena Melin & Kevin Cullinane
Each of the co-authors contributed to all aspects of the paper. All authors read and approved the final manuscript.
Correspondence to Kevin Cullinane.
Karlsson, J., Melin, H. & Cullinane, K. The impact of potential Brexit scenarios on German car exports to the UK: an application of the gravity model. J. shipp. trd. 3, 12 (2018) doi:10.1186/s41072-018-0038-x
Gravity model | CommonCrawl |
August 2016, 36(8): 4287-4347. doi: 10.3934/dcds.2016.36.4287
On the condition number of the critically-scaled Laguerre Unitary Ensemble
Percy A. Deift 1, , Thomas Trogdon 1, and Govind Menon 2,
Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, United States, United States
Division of Applied Mathematics, Brown University, 182 George St, Providence, RI 02912, United States
Received July 2015 Revised November 2015 Published March 2016
We consider the Laguerre Unitary Ensemble (aka, Wishart Ensemble) of sample covariance matrices $A = XX^*$, where $X$ is an $N \times n$ matrix with iid standard complex normal entries. Under the scaling $n = N + \lfloor \sqrt{ 4 c N} \rfloor$, $c > 0$ and $N \rightarrow \infty$, we show that the rescaled fluctuations of the smallest eigenvalue, largest eigenvalue and condition number of the matrices $A$ are all given by the Tracy--Widom distribution ($\beta = 2$). This scaling is motivated by the study of the solution of the equation $Ax=b$ using the conjugate gradient algorithm, in the case that $A$ and $b$ are random: For such a scaling the fluctuations of the halting time for the algorithm are empirically seen to be universal.
Keywords: Laguerre polynomials, Wishart ensemble, conjugate gradient algorithm, Laguerre Unitary Ensemble, Riemann–Hilbert problems.
Mathematics Subject Classification: Primary: 60B20, 65C50; Secondary: 35Q1.
Citation: Percy A. Deift, Thomas Trogdon, Govind Menon. On the condition number of the critically-scaled Laguerre Unitary Ensemble. Discrete & Continuous Dynamical Systems - A, 2016, 36 (8) : 4287-4347. doi: 10.3934/dcds.2016.36.4287
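As a rough numerical companion to the scaling described in the abstract, the sketch below draws complex Wishart (LUE) matrices with n = N + ⌊√(4cN)⌋ and records their condition numbers. This is only a Monte Carlo sanity check written in Python; it has no bearing on the Riemann–Hilbert analysis carried out in the paper, and the sample sizes are arbitrary.

```python
# Monte Carlo sketch: condition numbers of LUE matrices at the critical scaling.
import numpy as np

rng = np.random.default_rng(0)

def lue_condition_number(N, c=1.0):
    n = N + int(np.floor(np.sqrt(4 * c * N)))
    # X has iid standard complex normal entries (real and imaginary parts N(0, 1/2)).
    X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    A = X @ X.conj().T
    eigs = np.linalg.eigvalsh(A)                 # eigenvalues in ascending order
    return eigs[-1] / eigs[0]

samples = [lue_condition_number(200) for _ in range(200)]
print("median condition number:", np.median(samples))
```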
CEO Values, Firm Long-Term Orientation, and Firm Innovation: Evidence from Chinese Manufacturing Firms
Wei Zheng, Rui Shen, Weiguo Zhong, Jiangyong Lu
Journal: Management and Organization Review, First View
Innovation contributes to a firm's long-term competitive advantages but also involves significant risk and uncertainty. As agency theory predicts, CEOs are self-interested and risk-averse, and thus are reluctant to engage in innovation investments. However, the extent to which CEOs are self-interested and the mechanisms through which self-interested CEOs affect firm innovation have not been empirically tested. To fill this gap, we propose that CEOs possess a mix of both self-preserving and other-regarding motives, and build a mediation model in which CEO values affect firm innovation via firms' long-term orientation. Based on a three-phase (from 2014 to 2016) survey of 436 Chinese manufacturing firms, we find that CEOs with high self-regarding values reduce innovation efforts and performance by damaging a firm's long-term orientation. Moreover, CEO tenure, CEO duality, and environmental uncertainty weaken the relationship between CEO values and firm innovation via long-term orientation. Our study enriches the innovation literature by extending the basic assumptions of agency theory and by providing empirical evidence to determine whether and how self-regarded CEOs affect firm innovation.
Epidemiology, risk factors and outcomes of Candida albicans vs. non-albicans candidaemia in adult patients in Northeast China
Wei Zhang, Xingpeng Song, Hao Wu, Rui Zheng
This study aimed to evaluate the clinical characteristics, risk factors and outcomes of adult patients with candidaemia caused by C. albicans vs. non-albicans Candida spp. (NAC). All adult hospitalised cases of candidaemia (2012–2017) at a tertiary hospital in Shenyang were included in the retrospective study, and a total of 180 episodes were analysed. C. parapsilosis was the most frequently isolated species (38.3%), followed by C. albicans (35.6%), C. glabrata (13.9%), C. tropicalis (10%) and others (2.2%). As initial antifungal therapy, 75.0%, 3.9%, 5.6% and 2.2% of patients received fluconazole, caspofungin, micafungin and voriconazole, respectively. Multivariate analyses revealed that total parenteral nutrition was associated with an increased risk of NAC bloodstream infections (BSI) (OR 2.535, 95% CI (1.066–6.026)) vs. C. albicans BSI. Additionally, the presence of a urinary catheter was associated with an increased risk of C. albicans BSI (OR 2.295 (1.129–4.666)) vs. NAC BSI. Moreover, ICU stay (OR 4.013 (1.476–10.906)), renal failure (OR 3.24 (1.084–9.683)), thrombocytopaenia (OR 7.171 (2.152–23.892)) and C. albicans (OR 3.629 (1.352–9.743)) were independent risk factors for candidaemia-related 30-day mortality, while recent cancer surgery was associated with reduced mortality risk (OR 26.479 (2.550–274.918)). All these factors may provide useful information to select initial empirical antifungal agents.
Evaluation of Signal-in-Space Continuity and Availability for BeiDou Satellite Considering Failures
Lihong Fan, Rui Tu, Zengji Zheng, Rui Zhang, Xiaochun Lu, Jinhai Liu, Xiaodong Huang, Ju Hong
Journal: The Journal of Navigation , First View
Signal-in-space (SIS) continuity and availability are important indicators of performance assessment for Global Navigation Satellite Systems (GNSSs). The BeiDou Navigation Satellite System (BDS) Open Service Performance Standard (BDS-OS-PS-1.0) has been released, and the corresponding public performance indicators have been provided, but the actual SIS performance is uncertain to users. SIS continuity and availability are primarily related to unscheduled outages (failures). Therefore, based on the existing failure classification system and actual operation modes, four types of failure modes are first analysed: long-term failure related to satellite service period, maintenance failure related to satellite manoeuvring, short-term failure associated with random repairable anomalies and equivalent failure corresponding to a combination of the above three types of failures. Second, based on the failure classification and selected precise and broadcast ephemerides from 2015–2016, the Mean Time Between Failure (MTBF) and Mean Time To Repair (MTTR) of each failure type are obtained using appropriate detection methods. Finally, using a corresponding assessment model, the SIS continuity and availability of BeiDou are calculated for individual and equivalent failure cases, and these are compared with the provided index in the BDS Open Service Performance Standard.
Identical Location STEM analysis on La1−xSrxCoO3 Oxygen-Evolution Catalysts
Xue Rui, Dongyoung Chung, Pietro Papa Lopes, Hong Zheng, John Mitchell, Nenad M. Markovic, Robert Klie
Long-term effects of biochar on rice production and stabilisation of cadmium and arsenic levels in contaminated paddy soils
Peng CHEN, Hong-Yan WANG, Rui-Lun ZHENG, Bo ZHANG, Guo-Xin SUN
Journal: Earth and Environmental Science Transactions of The Royal Society of Edinburgh / Volume 109 / Issue 3-4 / June 2019
Heavy metal contamination in the paddy soils of China is a serious concern because of its health risk through transfer in food chains. A field experiment was conducted in 2014–2015 to investigate the long-term effects of different biochar amendments on cadmium (Cd) and arsenic (As) immobilisation in a contaminated paddy field in southern China. Two types of biochar, a rice-straw-derived biochar (RB) and a coconut-by-product-derived biochar (CB), were amended separately to determine their impacts on rice yield and their efficacy in reducing Cd and As in rice. The two-year field experiment showed that biochar amendments significantly improved the rice yields and that CB is superior to RB, especially in the first growth season. Using a large amount of biochar amendment (22.5tha–1) significantly increased soil pH and total organic carbon, and concomitantly decreased the Cd content in rice grains over the four growth seasons, regardless of biochar type and application rate. Arsenic levels in rice were similar to the control, and results from this study suggest that there was a sustainable effect of biochar on Cd sequestration in soil and reduction of Cd accumulation in rice for at least two years. Biochar amendment in soil could be considered as a sustainable, reliable and cost-effective option to remediate heavy metal contamination in paddy fields for long periods.
The prevalence and clinical characteristics of tick-borne diseases at One Sentinel Hospital in Northeastern China
Hong-Bo Liu, Ran Wei, Xue-Bing Ni, Yuan-Chun Zheng, Qiu-Bo Huo, Bao-Gui Jiang, Lan Ma, Rui-Ruo Jiang, Jin Lv, Yun-Xi Liu, Fang Yang, Yun-Huan Zhang, Jia-Fu Jiang, Na Jia, Wu-Chun Cao
Journal: Parasitology / Volume 146 / Issue 2 / February 2019
Northeastern China is a region of high tick abundance, multiple tick-borne pathogens and likely human infections. The spectrum of diseases caused by tick-borne pathogens has not been objectively evaluated in this region for clinical management and for comparison with other regions globally where tick-transmitted diseases are common. Based on clinical symptoms, PCR, indirect immunofluorescent assay and (or) blood smear, we identified and described tick-borne diseases from patients with recent tick bite seen at Mudanjiang Forestry Central Hospital. From May 2010 to September 2011, 42% (75/180) of patients were diagnosed with a specific tick-borne disease, including Lyme borreliosis, tick-borne encephalitis, human granulocytic anaplasmosis, human babesiosis and spotted fever group rickettsiosis. When we compared clinical and laboratory features to identify factors that might discriminate tick-transmitted infections from those lacking that evidence, we revealed that erythema migrans and neurological manifestations were statistically significantly differently presented between those with and without documented aetiologies (P < 0.001, P = 0.003). Twelve patients (6.7%, 12/180) were co-infected with two tick-borne pathogens. We demonstrated the poor ability of clinicians to identify the specific tick-borne disease. In addition, it is necessary to develop specific laboratory assays for optimal diagnosis of tick-borne diseases.
MRI and Tractography in Hypertrophic Olivary Degeneration
Qilun Lai, Chaobo Zheng, Rui Zhang, Xiaoli Liu, Yaguo Li, Qi Xu
Journal: Canadian Journal of Neurological Sciences / Volume 45 / Issue 3 / May 2018
Xiaobo Chen, Ian D. Sharp, Rui Cao, Yao Zheng, Chun Zhao, Artur Braun
Journal: Journal of Materials Research / Volume 33 / Issue 5 / 14 March 2018
Print publication: 14 March 2018
The Internal Buckling Behavior Induced by Growth Self-restriction in Vertical Multi-walled Carbon Nanotube Arrays
Quan Zhang, Guo-an Cheng, Rui-ting Zheng
Journal: MRS Advances / Volume 3 / Issue 45-46 / 2018
Published online by Cambridge University Press: 07 May 2018, pp. 2815-2823
Internal buckling is a common phenomenon in as-grown carbon nanotube arrays. It causes the measured physical properties of carbon nanotube arrays to fall below theoretical predictions. In this work, we analyzed the formation and evolution mechanism of internal buckling based on a quasi-static compression model, which differs from the collective effect of the van der Waals interactions. The self-restriction effect and the differing growth rates of carbon nanotubes support the plausibility of the quasi-static compression model for explaining the morphology evolution of vertical carbon nanotube arrays, especially the coexistence of quasi-straight and bent carbon nanotubes in the array. We generalized the Euler beam to a wave-like beam and explained the mechanism of high-mode buckling combined with the van der Waals interaction. The calculated relationship between compressive stress and strain is consistent with the collective-buckling stage observed in the quasi-static compression test of carbon nanotube arrays. The preparation of well-organized carbon nanotube arrays provided strong experimental evidence for the self-restriction effect.
High-current field emission from "flower-like" few-layer graphene grown on tip of nichrome (8020) wire
Xiao-lu Yan, Bao-shun Wang, Rui-ting Zheng, Xiao-ling Wu, Guo-an Cheng
Journal: MRS Advances / Volume 3 / Issue 4 / 2018
We report a novel tip-type field emission (FE) emitter fabricated by synthesizing few-layer graphene (FLG) flakes on the tip of a nichrome (8020) wire (ϕ ≈ 80 μm) by microwave plasma enhanced chemical vapor deposition (PECVD). The resultant random arrays of free-standing FLG flakes are aligned vertically to the substrate surface at high density and stacked against each other to form several larger "flower-like" agglomerates with spherical shapes. The FE performance of the tip-type FLG flake emitter shows a low threshold field of 0.55 V/μm, a large field enhancement factor of 9455 ± 46, a large field emission current density of 22.18 A/cm2 at 2.70 V/μm, and excellent field emission stability at high emission current densities (6.93 A/cm2). It can be used in a variety of applications, including cathode-ray tube monitors, X-ray sources, electron microscopes, and other vacuum electronic devices.
Structure and Bonding Properties of a 20-Gold-Atom Nanocluster Studied by Theoretical X-ray Absorption Spectroscopy
Rui Yang, Daniel M. Chevrier, Peng Zhang
Published online by Cambridge University Press: 25 May 2015, pp. 33-39
Gold nanoclusters with precisely controlled atomic composition have emerged as promising materials for applications in nanotechnology because of their unique optical, electronic and catalytic properties. The recent discovery of a 20-gold-atom nanocluster protected by 16 organothiolate molecules, Au20(SR)16, is the smallest member in a surprising series of small gold−thiolate nanoclusters with a face-centered cubic (FCC) ordered core structures. A fundamental challenge facing gold nanocluster research is being able to understand the composition-dependent properties from a site-specific perspective in order to confidently establish structure-property relationships. A step in this direction is to examine the influence of various structural features (core geometry and thiolate-gold bonding motifs) on the bonding properties of gold-thiolate nanoclusters. In this work, ab initio simulations were conducted to systematically study the local structure and electronic properties of Au20(SR)16 from each unique Au and S atomic site using Au L3-edge extended X-ray absorption fine structure (EXAFS), projected density of states (l-DOS) and S K-edge X-ray absorption near edge structure (XANES) spectra. Two larger FCC-like gold-thiolate nanoclusters (Au28(SR)20 and Au36(SR)24) were used for a comparative study with Au20(SR)16, providing further predictions about the cluster size effect on the bonding properties of gold-thiolate nanoclusters with FCC-like core structures. Through this comparison, the smaller core size of Au20(SR)16 produces an EXAFS scattering signature that is non-FCC-like but shows very similar electronic properties with a larger FCC-like gold-thiolate nanocluster.
Neuroimaging in a memory assessment service: a completed audit cycle
Tarun Kuruvilla, Rui Zheng, Ben Soden, Sarah Greef, Iain Lyburn
Journal: The Psychiatric Bulletin / Volume 38 / Issue 1 / February 2014
Aims and method
A clinical audit was used to compare neuroimaging practice in a memory assessment service prior to and 6 months after implementation of guidance, developed from national and European guidelines and adapted to local resource availability, with multislice computed tomography (CT) as first-line structural imaging procedure.
Referrals to the service nearly doubled from the initial audit to the re-audit. Patients having at least one neuroimaging procedure increased from 68 to 76%. Patients with no reason documented for not having imaging significantly reduced from 50% to less than 1%. Despite the larger number of referrals, the mean waiting times for the scans only increased from 22 to 30 days. Variations in practice between the sectors reduced.
Clinical implications
Disseminating evidence-based guidelines adapted to local resource availability appears to have standardised neuroimaging practice in a memory assessment service. Further research into the clinical and cost benefits of the increased scanning is planned.
Heat transport properties of plates with smooth and rough surfaces in turbulent thermal convection
Ping Wei, Tak-Shing Chan, Rui Ni, Xiao-Zheng Zhao, Ke-Qing Xia
Journal: Journal of Fluid Mechanics / Volume 740 / 10 February 2014
We present an experimental study of turbulent thermal convection with smooth and rough surface plates in various combinations. A total of five cells were used in the experiments. Both the global $\mathit{Nu}$ and the $\mathit{Nu}$ for each plate (or the associated boundary layer) are measured. The results reveal that the smooth plates are insensitive to the surface (rough or smooth) and boundary conditions (i.e. nominally constant temperature or constant flux) of the other plate of the same cell. The heat transport properties of the rough plates, on the other hand, depend not only on the nature of the plate at the opposite side of the cell, but also on the boundary condition of that plate. It thus appears that, at the present level of experimental resolution, the smooth plate can influence the rough plate, but cannot be influenced by either the rough or the smooth plates. It is further found that the scaling of $\mathit{Nu}$ with $\mathit{Ra}$ for all of the smooth plates is consistent with the classical $1/ 3$ exponent. But the scaling exponent for the global $\mathit{Nu}$ for the cell with both plates being smooth is definitely less than $1/ 3$ (this result itself is consistent with all previous studies at comparable parameter range). The discrepancy between the $\mathit{Nu}$ behaviour at the whole-cell and individual-plate levels is not understood and deserves further investigation.
Deep China: The Moral Life of the Person By Arthur Kleinman, Yunxiang Yan, Jing Jun, Sing Lee, Everett Zhang, Pan Tianshu, et al. University of California Press. 2011. £18.95 (pb). 289 pp. ISBN: 9780520269453
Rui Zheng
Journal: The British Journal of Psychiatry / Volume 201 / Issue 4 / October 2012
100 Cases in Psychiatry. By Barry Wright, Subodh Dave & Nisha Dogra. Hodder Arnold. 2010. £20.99 (pb). 278pp. ISBN: 9780340986011
Journal: The British Journal of Psychiatry / Volume 199 / Issue 3 / September 2011
Cryopreservation of boar semen using straws
Gao Jun-Feng, Zheng Xiao-Feng, Ge Li-Jun, Lu Qing, Rui Rong
Journal: Chinese Journal of Agricultural Biotechnology / Volume 5 / Issue 3 / December 2008
An optimal protocol for cryopreservation of boar semen was established. First, the boar semen was pre-diluted with ZORLESCO (ZO) solution and pre-equilibrated at room temperature for 1 h. After adding extender I, spermatozoa were equilibrated at 5°C for 1.5 h; then an equal volume of extender II was added and the spermatozoa equilibrated for 2 h. The resulting spermatozoa were loaded into 0.25 ml straws, equilibrated for 10 min at 3 cm above the surface of liquid nitrogen (LN), then promptly submerged into LN. When thawing, straws were incubated in a water bath at 37°C for 30 s. This procedure yielded the highest post-thaw motility of 0.58±0.03 and plasma integrity of 63.2±1.2%, together with a normal acrosome in 51.4±2.6% of spermatozoa. Abnormal spermatozoa after freezing represented only 14.0±3.0%.
Plasma enhanced chemical vapor deposition of silicon nitride films from a metal-organic precursor
David M. Hoffman, Sri Prakash Rangarajan, Satish D. Athavale, Shashank C. Deshmukh, Demetre J. Economou, Jia-Rui Liu, Zongshuang Zheng, Wei-Kan Chu
Journal: Journal of Materials Research / Volume 9 / Issue 12 / December 1994
Published online by Cambridge University Press: 03 March 2011, pp. 3019-3021
Silicon nitride films are grown by plasma enhanced chemical vapor deposition from tetrakis(dimethylamido)silicon, Si(NMe2)4, and ammonia precursors at substrate temperatures of 200-400 °C. Backscattering spectrometry shows that the films are close to stoichiometric. Depth profiling by Auger electron spectroscopy shows uniform composition and no oxygen or carbon contamination in the bulk. The films are featureless by scanning electron microscopy under 100,000X magnification.
Low Temperature Atmospheric Pressure Chemical Vapor Deposition of Group 14 Oxide Films
David M. Hoffman, Lauren M. Atagi, Wei-Kan Chu, Jia-Rui Liu, Zongshuang Zheng, Rodrigo R. Rubiano, Robert W Springer, David C. Smith
Depositions of high quality SiO2 and SnO2 films from the reaction of homoleptic amido precursors M(NMe2)4 (M = Si, Sn) and oxygen were carried out in an atmospheric pressure chemical vapor deposition reactor. The films were deposited on silicon, glass and quartz substrates at temperatures of 250 to 450 °C. The silicon dioxide films are stoichiometric (O/Si = 2.0) with less than 0.2 atom % C and 0.3 atom % N and have hydrogen contents of 9 ± 5 atom %. They are deposited with growth rates from 380 to 900 Å/min. The refractive indexes of the SiO2 films are 1.46, and infrared spectra show a possible Si-OH peak at 950 cm−1. X-ray diffraction studies reveal that the SiO2 film deposited at 350°C is amorphous. The tin oxide films are stoichiometric (O/Sn = 2.0) and contain less than 0.8 atom % carbon, and 0.3 atom % N. No hydrogen was detected by elastic recoil spectroscopy. The band gap for the SnO2 films, as estimated from transmission spectra, is 3.9 eV. The resistivities of the tin oxide films are in the range 10−2 to 10−3 Ω cm and do not vary significantly with deposition temperature. The tin oxide film deposited at 350°C is crystalline cassiterite with some (101) orientation.
Plasma Enhanced Metal-Organic Chemical Vapor Deposition of Germanium Nitride Thin Films
David M. Hoffman, Sri Prakash Rangarajan, Satish D. Athavale, Demetre J. Economou, Jia-Rui Liu, Zongshuang Zheng, Wei-Kan Chu
Published online by Cambridge University Press: 15 February 2011, 3
Amorphous germanium nitride thin films are prepared by plasma enhanced chemical vapor deposition from tetrakis(dimethylamido)germanium, Ge(NMe2)4, and an ammonia plasma at substrate temperatures as low as 190°C with growth rates >250 Å/min. N/Ge ratios in the films are 1.3 and the hydrogen contents are 13 atom %. The hydrogen is present primarily as N-H. The refractive indexes are close to the bulk value of 2.1, and the band gap, estimated from transmission spectra, is 4.8 eV. | CommonCrawl |
Soil-plant co-stimulation during forest vegetation restoration in a subtropical area of southern China
Chan Chen 1, Xi Fang 1,2, Wenhua Xiang 1,2, Pifeng Lei 1,2, Shuai Ouyang 1,2 & Yakov Kuzyakov 1,3,4
Soil and vegetation have a direct impact on the process and direction of plant community succession, and determine the structure, function, and productivity of ecosystems. However, little is known about the synergistic influence of soil physicochemical properties and vegetation features on vegetation restoration. The aim of this study was to investigate the co-evolution of soil physicochemical properties and vegetation features in the process of vegetation restoration, and to distinguish the primary and secondary relationships between soil and vegetation in their collaborative effects on promoting vegetation restoration in a subtropical area of China.
Soil samples were collected to 40 cm in four distinct plant communities along a restoration gradient from herb (4–5 years), to shrub (11–12 years), to Pinus massoniana coniferous and broadleaved mixed forest (45–46 years), and to evergreen broadleaved forest (old growth forest). Measurements were taken of the soil physicochemical properties and Shannon–Wiener index (SD), diameter at breast height (DBH), height (H), and biomass. Principal component analysis, linear function analysis, and variation partitioning analysis were then performed to prioritize the relative importance of the leading factors affecting vegetation restoration.
Soil physicochemical properties and vegetation features showed a significant trend of improvement across the vegetation restoration gradient, reflected mainly in the high response rates of soil organic carbon (SOC) (140.76%), total nitrogen (TN) (222.48%), total phosphorus (TP) (59.54%), alkaline hydrolysis nitrogen (AN) (544.65%), available phosphorus (AP) (53.28%), species diversity (86.3%), biomass (2906.52%), DBH (128.11%), and H (596.97%). The soil properties (pH, SOC, TN, AN, and TP) and vegetation features (biomass, DBH, and H) had a clear co-evolutionary relationship over the course of restoration. The synergistic interaction between soil properties and vegetation features had the greatest effect on biomass (55.55%–72.37%), and the soil properties contributed secondarily (3.30%–31.44%). The main impact factors of biomass varied with the restoration periods.
In the process of vegetation restoration, soil and vegetation promoted each other. Vegetation restoration was the cumulative result of changes in soil fertility and vegetation features.
Forest vegetation restoration has become a priority study area in efforts to solve global environmental problems, as highlighted by the Bonn Challenge, a global effort to restore 150 million hectares of degraded land and deforested forests by 2020 (Crouzeilles et al. 2016). Establishing the mechanisms of plant communities in the process of recovery has concentrated mainly on species composition, and their quantitative characteristics and spatial distribution. While these factors are relatively clear (Xiang et al. 2013; Chen et al. 2019), there is still a lack of in-depth research on the feedback relationships between plant and soil, and the succession processes and regulation mechanisms of plant communities (Hu et al. 2017; Wang et al. 2018a). The feedback relationship between vegetation and soil has a great impact on the plant community, soil nutrient cycling, and soil and water conservation during vegetation restoration (Demenois et al. 2018). Insights into vegetation–soil feedback relationships are instrumental in predicting future scenarios under varying environmental conditions (van der Putten et al. 2013), as well as in designing measures for vegetation restoration at different succession stages (Huang et al. 2018).
The interactive effects of soil and vegetation suggest that both are always co-evolving and developing, which are recognized as an important mechanism for forest succession and development (van der Putten et al. 2013). The association between soil and aboveground vegetation may shift over the course of restoration (Huang et al. 2015). In the early stage of vegetation restoration, soil resources are the main limiting factors (van Der Maarel and Franklin 2013). Research has shown that the enrichment, spatial distribution, and redistribution of soil nutrients significantly affect the growth, reproduction, distribution, succession, and net primary productivity of plants (Alday et al. 2012). In particular, soil nutrients and water are the key factors in regulating vegetation development, as confirmed by the results of some fertilization experiments (Chang and Turner 2019) and different forest succession series (Huang et al. 2017). In turn, vegetation development can drive changes in the development and maintenance of soil (Huang et al. 2018). Especially in the late stage of vegetation restoration, the accumulation of plant biomass leads to an increase in the return of soil organic carbon (SOC) and nutrients (Gu et al. 2019). Furthermore, soil nutrient storage reflects the balance of the main ecological processes, including nutrients stored in aboveground biomass, nutrients decomposed and returned to soil, and nutrient leaching, these mixed results may cause the complexity of the interaction between soil and vegetation (Huang et al. 2018). Therefore, knowledge of how soil, vegetation and their interaction act on vegetation restoration is of particular importance for predicting future ecological restoration and development.
Subtropical forest covers an extensive area and supports a high level of biodiversity and a global carbon store, particularly in China which has 71% of the current total forest area in the subtropics according to the MODIS landcover layer for 2012, with abundant rainfall and abundant forest resources (Corlett and Hughes 2015). However, long-term severe human disturbance has a serious effect on subtropical forest ecosystems, with complex topography and climate change resulting in fewer climax forests and a decrease in the functioning of an ecological security barrier (Huang et al. 2018). The Chinese government initiated a series of state-funded forestry ecological projects, including programs to protect natural forests, the Grain to Green program, and the construction of shelterbelts in the middle and upper reaches of the Yangtze River. Consequently, forest vegetation has been rapidly restored, forming a series of secondary vegetation communities at different stages of restoration in this area (Ouyang et al. 2016). During vegetation restoration, aboveground vegetation and soil physicochemical properties gradually change (Zhang et al. 2019). Changes in plant development and soil variables during vegetation restoration have been demonstrated in several studies (Ayma-Romay and Bown 2019; Wang et al. 2018a; Zhang et al. 2019), but the restorative effect of soil or vegetation has rarely been explored, and there is little information on how soil physicochemical properties and vegetation act together to affect vegetation restoration (Chang and Turner 2019). To our knowledge, no studies have addressed the question of the relative importance of the effects of soil, vegetation and their synergism on promoting vegetation restoration. It has therefore become a burning issue to elucidate the coordinated control effect of vegetation restoration, soil, and water on vegetation ecology and restoration ecology (Chang and Turner 2019).
In this study, we followed the succession process of subtropical forest communities, and selected four distinct restoration periods (i.e. 4–5, 10–12, 45–46 years and old growth forest), which represent the four main stages of vegetation restoration in the subtropics of China. We selected permanent plots and determined soil physicochemical properties and vegetation features; i.e. species diversity, biomass, height (H), and diameter at breast height (DBH). Our objective was to investigate how soil physicochemical properties and vegetation features change and how soil and vegetation stimulate vegetation restoration individually and collectively. We formulated two hypotheses: (1) that vegetation restoration would have an obvious positive effect on soil physicochemical properties and vegetation features; and (2) that soil properties and vegetation features would collectively promote vegetation restoration, especially would have a significant impact on biomass. In addition, the main impact factors of biomass would be different in different restoration periods.
Study site
As shown in Fig. 1, the study site was located in Changsha County (28°23′–28°24′ N, 113°17′–113°27′ E), in the middle of Hunan Province, China. The topography features a typical low hilly landscape, at an altitude of 55–260 m above sea level with an average slope of 18°–25°. The climate is a mid-subtropical humid monsoon climate dominated by the southeast monsoon, with an annual average precipitation of 1416.4 mm (primarily between April and August) and an annual mean air temperature of 17.3 °C. Minimum and maximum air temperatures are 10.3 °C in January and 39.8 °C in July and August, respectively. The soils are mainly composed of red earth, which developed from slate and shale and are categorized as Alliti–Udic Ferrosols in the Chinese Soil Taxonomy, corresponding to Acrisol in the World Reference Base for Soil Resources (IUSS Working Group WRB 2006). Evergreen broadleaved forests are the climax and primary vegetation, but have been disturbed to varying degrees by human activities such as firewood collection. Natural forest protection programs in the past two decades have resulted in a variety of vegetation communities at different restoration stages in this area.
Location and plot distribution of the study area
Vegetation sampling
In October 2015, four adjacent vegetation communities with basically similar environmental conditions (site, slope, soil and climate), as shown in Table 1, were selected to represent a vegetation restoration gradient (using the method of space-for-time substitution). These communities were:
4–5 yrs. restoration period. Controlled burns and site preparation were carried out in native evergreen broadleaved forest in the winter of 1965. A Pinus massoniana plantation was established in 1966 without any fertilization during this operation and then clear-cut in 1990. The woodlands were repeatedly cut until 2012. Since that time the vegetation has naturally recovered. The community is dominated by well-grown herbs, presently accompanied by some young shrubs, and belongs to the early stage of restoration according to the succession process of subtropical evergreen broadleaved forest (Xiang et al. 2016).
10–12 yrs. restoration period. Native evergreen broadleaved forest underwent a prescribed burn in 1965 and was deforested to establish a Cunninghamia lanceolata plantation in 1966. This C. lanceolata plantation was clear-cut in 1989. The woodlands were logged every 3 to 5 years until 2004. The vegetation has naturally recovered to form a shrub community with well-grown shrubs and belongs to the mid-restoration stage according to the succession process of subtropical evergreen broadleaved forest (Xiang et al. 2016). However, the shrub community has no obvious tree layer, and herbaceous plants are relatively infrequent.
45–46 yrs. restoration period. This period represents the secondary stage of mid-restoration. Native evergreen broadleaved forest was deforested in the early 1970s, and then naturally recovered to coniferous and broadleaved mixed forest. The communities are now about 45–50 years old, and have abundant seedlings and saplings, with larger plant density. However, the proportion of large diameter individuals is relatively low.
Old growth forest (representing the late stage of restoration). Native evergreen broadleaved forest has been well protected against human disturbances. According to a survey of local residents, this forest is more than 90 years old.
Table 1 Stand characteristics of the four forest types
In October 2015, we randomly established 4 fixed sample plots for long-term observation in each restoration period (Fig. 1). In the 4–5 and 10–12 years restoration periods, the plots were set at 20 m × 20 m. In the 45–46 years restoration period and old growth forest, the plots were established at 30 m × 30 m. The 4 fixed plots in each restoration period were located on different mountains wherever possible, and the distance between any two plots was more than 1000 m. To investigate the floristic components and tree spatial patterns of the forests, each plot (20 m × 20 m) in the 4–5 and 10–12 years restoration periods was subdivided into four subplots (10 m × 10 m), and each plot (30 m × 30 m) in the 45–46 years restoration period and old growth forest was subdivided into nine subplots (10 m × 10 m).
Species diversity measurement
Species identities were recorded, and measurements were taken of total H, height to the lowest live branch, crown width, and DBH for all individuals with DBH > 1 cm in each plot. The data were used to calculate vegetation structural parameters of the different restoration periods; i.e. density of main tree species, average DBH, and average H. The Shannon–Wiener index (SD) was used to quantify the diversity of woody plant species in each plant community with the equation below (Madonsela et al. 2018).
$$ \mathrm{SD}=-{\sum}_{i=1}^n{P}_i\ln {P}_i $$
In Eq. 1, n represents the total number of species in the community, and P_i represents the relative frequency of species i in the community. Table 1 summarizes the characteristics and site factors of each community.
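As a minimal illustration of Eq. 1 in R (the language used for the analyses described below), the index can be computed from a vector of species abundance counts; the counts used here are invented for the example and are not data from the study plots.

# Hypothetical abundance counts of the woody species recorded in one plot
counts <- c(sp1 = 34, sp2 = 12, sp3 = 7, sp4 = 2)

shannon_wiener <- function(counts) {
  p <- counts / sum(counts)   # relative frequency P_i of each species
  -sum(p * log(p))            # Eq. 1: SD = -sum(P_i * ln(P_i))
}

shannon_wiener(counts)        # approximately 1.01 for the counts above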
Biomass measurement
Based on community surveys, biomass was measured by the harvest method and calculated by establishing relative growth equations for the organic biomass of the main tree species. For the 4–5 years restoration period, we collected all vegetation (shrubs, vines, herbs) in 2 m × 2 m quadrats located on the plot peripheries and then sorted the plants into components according to the following criteria: shrubs were composed of fruit, leaf, branch, stem, and root; vines were composed of fruit, leaf, stem, and root; and herbs were composed of aboveground and underground parts. A 1 m × 1 m quadrat was set up at the center of each 10 m × 10 m subplot to determine litter biomass. All litter was collected from the ground in these quadrats and transported to the laboratory. The collected samples were weighed fresh and then oven-dried at 85 °C to a constant weight to measure their dry mass for estimating biomass per plot area.
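The final scaling step from quadrat dry mass to area-based biomass is simple arithmetic; the sketch below uses invented masses for one 2 m × 2 m harvest quadrat.

# Hypothetical oven-dry masses (kg) from one 2 m x 2 m harvest quadrat
dry_mass_kg  <- c(shrub = 1.8, vine = 0.4, herb = 0.9, litter = 1.1)
quadrat_m2   <- 2 * 2
biomass_t_ha <- sum(dry_mass_kg) / quadrat_m2 * 10000 / 1000  # kg m-2 -> t ha-1
biomass_t_ha                                                  # 10.5 t ha-1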
For the 10–12 years restoration period, according to the average DBH and average H of the shrub (> 1.5 m), and with the aim of ensuring that at least 3 average sample trees per dominant tree species were collected, 3 sample trees were selected and collected for each dominant tree in each plot periphery to determine fresh weight. Shrub samples were composed of fruit, leaf, branch, stem, and root. After oven-drying at 85 °C to a constant weight, we determined moisture content and calculated each biomass component of each tree species, establishing their relative growth equations to calculate biomass per shrub plant (Table 2). The biomass determination of shrubs (below 1.5 m), vines, herbaceous layers, and litter layer used the same method as the 4–5 years restoration period. Finally, estimated biomass per plot area was based on data from community surveys.
Table 2 Relative growth equations of different biomass components of the main tree species
For the 45–46 years restoration period, 3 sample trees were selected for each dominant tree in each plot periphery according to average DBH and average H, with the same aim as that for the 10–12 years restoration period. Stratified samples (1.3 m, 3.6 m) were collected for the aboveground part and complete samples were excavated for the underground part (within 1.5 m of the tree stump) to measure fresh weight. Tree samples were composed of leaf, branch, stem, and root, in which root included fine root (< 0.2 cm), rootlet (0.2–0.5 cm), thick root (0.5–2.0 cm), large root (> 2.0 cm), and root apex. After determining fresh weight, samples were oven-dried at 85 °C to a constant weight to calculate moisture content. We then estimated each biomass component of each tree species, established their relative growth equations and then calculated the biomass per tree plant (Table 2). The same methods as above were used to determine the biomass of shrubs, vines, herbaceous layers, and litter layer. Estimated biomass per plot area was based on data from community surveys. For the old growth forest, the relative growth equations for the main tree species in the tree layer were established using a similar method as the 45–46 years restoration period. However, because of the ban on logging in the old growth forest, the general growth equations of Cyclobalanopsis, deciduous broadleaf, evergreen broadleaf, and C. lanceolata, which were established by Ouyang et al. (2016) and Liu et al. (2010), were also used to estimate the biomass in the tree layer (Table 2).
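The relative growth (allometric) equations themselves are given in Table 2, which is not reproduced here; the sketch below therefore only assumes a generic power-law form W = a·DBH^b fitted on the log–log scale, with invented destructive-sampling data, to show how per-tree biomass can be predicted from the inventoried DBH values.

# Hypothetical destructive-sampling data for one dominant tree species:
# DBH (cm) and oven-dry stem biomass (kg) of the harvested sample trees
dbh  <- c(6.2, 9.5, 12.8, 16.4, 21.0, 25.3)
stem <- c(4.1, 12.6, 28.9, 55.2, 108.4, 176.0)

# Fit the assumed power-law allometry W = a * DBH^b on the log-log scale
fit <- lm(log(stem) ~ log(dbh))
a   <- exp(coef(fit)[1])
b   <- coef(fit)[2]

# Predict and sum stem biomass (kg) for the trees inventoried in one plot
plot_dbh <- c(5.4, 8.1, 11.7, 14.9)   # DBH values from the community survey
sum(a * plot_dbh^b)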
Soil sampling and analysis
Each permanent plot was divided into 3 equal grids of cells along the diagonal for soil sampling. In each cell, soil profile characteristics were surveyed in 2015 to illustrate the consistency and comparability of soil background in different vegetation restoration periods, as shown in Table 3. Soil samples were taken by using cylindrical cores with a volume of 200 cm3 collected at depths of 0–10, 10–20, 20–30, and 30–40 cm in December 2015, and in April, June, and October 2016.
Table 3 Structural characteristics of soil profile
Soil samples from three cells at the same depth within a plot were mixed into a composite sample. Plant roots, debris, and gravels were cleared. Soil samples were air-dried and sieved through a 2-mm mesh for soil pH, available phosphorus (AP), and available potassium (AK); through a 1-mm mesh for soil alkaline hydrolysis nitrogen (AN); and through a 0.25-mm mesh for soil SOC, TN, total phosphorus (TP), total potassium (TK), total calcium (Ca), and total magnesium (Mg) determinations. The following properties were determined in the soil samples:
(1) Bulk density (BD) was calculated using weights of the dried soil sample from the known cylindrical core volume. (2) pH value was analyzed in a soil-to-water (deionized) ratio of 1:2.5 using a pH meter (FE20, Mettler Toledo, Switzerland). (3) SOC content was determined by the K2Cr2O7–H2SO4 oxidation method. (4) TN content was determined using a semi-micro Kjeldahl method (Bremner 1996). (5) TP, TK, Ca, and Mg were extracted via aqua regia and 1:1 HCl. After extraction, TP was determined by spectrophotometry and TK, Ca, and Mg by atomic emission spectrometry with inductively coupled plasma (ICP–OES) using a Perkin Elmer Optima 7300DV optical emission spectrometer (Nicia et al. 2018). (6) For AN and AK, we used the alkaline diffusion method and the ammonium acetate extraction flame spectrophotometer method (ISSCAS 1978). (7) For AP, we used the Olsen method (Olsen et al. 1983).
For data processing, we used the Microsoft Excel package (Office 2010). All statistical analyses were conducted using the R statistical software package (R Development Core Team 2016). To reflect the annual average of each soil property, the arithmetic mean of the four seasonal samples from the same soil layer of each plot was calculated. In addition, to account for the large differences between soil layers for each variable, a weighted average across the four soil layers was computed. The parameter content of a soil layer as a percentage of the sum over the four soil layers (f_i) was calculated using Eq. 2, and the weighted average (X_0) was calculated using Eq. 3.
$$ {f}_i\left(\%\right)=\frac{L_i}{\sum_{i=1}^n{L}_i}\times 100 $$
$$ {X}_0={\sum}_{i=1}^n\left({X}_i\times {f}_i\right) $$
In Eqs. 2 and 3, n represents the number of soil layers, and L_i and X_i both denote the parameter content of soil layer i.
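A small sketch of Eqs. 2 and 3 in R, using invented SOC contents for the four depths of one plot; the weights f_i are kept as proportions (rather than percentages) so that they sum to 1.

# Hypothetical SOC contents (g kg-1) for the four sampled depths of one plot
layer_values <- c(`0-10 cm` = 21.4, `10-20 cm` = 14.2, `20-30 cm` = 9.8, `30-40 cm` = 7.1)

f  <- layer_values / sum(layer_values)  # Eq. 2, expressed as a proportion
x0 <- sum(layer_values * f)             # Eq. 3: weighted average X_0
x0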
The response rate was used to determine the effects of restoration periods on soil properties and vegetation features, calculated by Eq. 4.
$$ \mathrm{Response}\ \mathrm{rate}\left(\%\right)=\frac{X_2-{X}_1}{X_1}\times 100 $$
In Eq. 4, X_1 represents the value of a soil property or vegetation feature in the 4–5, 10–12, or 45–46 years restoration period, and X_2 represents the corresponding value in the old growth forest. In this study, only X_1 from the 4–5 years restoration period was used, so the response rate reflects the extent of variation over the whole course of vegetation restoration. A positive value indicates an increase, a negative value indicates a decrease, and greater absolute values indicate greater change. Figure 2 was drawn with the geom_histogram function of the ggplot2 package in R. Before drawing, the values were normalized to a proportion of the maximum value (= 1) and by min-max normalization to keep a common scale ranging from 0 to 1 (Jain et al. 2005). The min-max normalization was calculated by Eq. 5.
Changes in soil physical and chemical properties and vegetation features per vegetation restoration period. Soil properties (weighted mean, n = 4): bulk density (BD), pH value (pH), organic carbon (SOC), total N (TN), total P (TP), total K (TK), total Ca (Ca), total Mg (Mg), alkaline hydrolysis N (AN), available P (AP), and available K (AK). Vegetation features (mean, n = 4): species diversity, biomass, diameter at breast height (DBH) and height (H). The values were normalized to the proportion of maximum value (= 1). Values in brackets are response rates from 4–5 years to old growth forest (%)
$$ x^{\prime }=\frac{\left(x-{x}_{\mathrm{Min}}\right)}{x_{\mathrm{Max}}-{x}_{\mathrm{Min}}} $$
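Both Eq. 4 and Eq. 5 are one-line transformations; the sketch below applies them in R to invented SOC values purely for illustration.

# Response rate (Eq. 4): change from an earlier period (x1) to the old growth forest (x2)
response_rate <- function(x1, x2) (x2 - x1) / x1 * 100
response_rate(x1 = 12.5, x2 = 30.1)     # hypothetical SOC contents -> 140.8 %

# Min-max normalization (Eq. 5), used to put variables on a common 0-1 scale
minmax <- function(x) (x - min(x)) / (max(x) - min(x))
minmax(c(12.5, 18.0, 24.6, 30.1))       # returns 0.00, 0.31, 0.69, 1.00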
Principal component analysis (PCA) was used to determine the main factors in soil properties and vegetation features influencing vegetation restoration, and the correlations between soil properties and vegetation features. The PCA was implemented using the prcomp function and plotted with the ggplot2 package in R. The selection criteria for principal components were a cumulative contribution rate over 85% and eigenvalues greater than 1. Indicators whose absolute loading values were greater than 0.7 were selected as the dominant factors for vegetation restoration (Armstrong 1967). The cosine values of the angles between variables indicate relationship strength; angles from 0° to 90° indicate positive correlations between variables, and angles from 90° to 180° indicate negative correlations.
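A runnable sketch of this PCA workflow is given below; the 16-row data frame is simulated only so that the code is self-contained, whereas the real analysis used the measured soil properties and vegetation features of the 16 plots.

set.seed(1)
# Stand-in for the 16-plot data set (4 plots x 4 restoration periods)
soil_veg <- data.frame(SOC = rnorm(16, 20, 6),  TN  = rnorm(16, 1.5, 0.4),
                       AN  = rnorm(16, 120, 40), pH  = rnorm(16, 4.7, 0.15),
                       biomass = rnorm(16, 150, 80), DBH = rnorm(16, 10, 4))

pca  <- prcomp(soil_veg, center = TRUE, scale. = TRUE)
summary(pca)                                  # cumulative contribution rate per component
eig  <- pca$sdev^2                            # eigenvalues
keep <- which(eig > 1)                        # retain components with eigenvalue > 1
round(pca$rotation[, keep, drop = FALSE], 2)  # |loading| > 0.7 marks a dominant factor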
Based on the results of the PCA, we used linear function analysis to further examine the significant correlations between soil properties and vegetation features. Before fitting the linear function, the data were normalized by min-max normalization (Eq. 5) to unify dimensions. It was assumed that the relation between soil properties and vegetation features can be expressed by Eq. 6, where k represents the slope and b represents a constant.
$$ y= kx+b $$
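Fitting Eq. 6 to min-max normalized variables amounts to a simple call to lm; the paired values below are invented, whereas the actual fits shown in Fig. 3 used the normalized plot-level data (n = 16).

# Hypothetical normalized (Eq. 5) values for one soil-vegetation pair
soc_norm     <- c(0.00, 0.08, 0.15, 0.22, 0.35, 0.48, 0.61, 0.74, 0.88, 1.00)
biomass_norm <- c(0.02, 0.05, 0.12, 0.20, 0.31, 0.45, 0.58, 0.70, 0.90, 0.97)

fit <- lm(biomass_norm ~ soc_norm)      # Eq. 6: y = kx + b
summary(fit)$coefficients               # slope k, intercept b, and their p-values
summary(fit)$r.squared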
Figure 3 was produced with the lm and plot functions in R. Variation partitioning analysis (VPA) was performed with the varpart function of the vegan package to quantify the relative contributions of soil factors, vegetation factors, and their joint action to changes in biomass. Before the VPA, suitably independent variables with a variance inflation factor (VIF) < 3 were selected using the car package (Yang et al. 2017), and factor analysis (FA) with the psych package was then used to reduce the soil factors and the vegetation factors to one common factor each. Figure 4 was drawn with the geom_bar function of the ggplot2 package in R. Following the results of the PCA and linear function analysis, we conducted a stepwise regression analysis (SRA) to screen out variables that cause multicollinearity and to determine the leading impact factors of biomass in each restoration period. The SRA was performed with the step function in R.
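The sketch below strings these steps together (VIF screening, factor reduction, variation partitioning, and stepwise regression). The data are simulated so the code runs on its own; the variable names, the group membership, and the use of three vegetation features are illustrative assumptions, not the exact variable set of the study.

library(vegan)    # varpart()
library(car)      # vif()
library(psych)    # fa()

set.seed(2)
n <- 16                                       # 4 plots x 4 restoration periods
soil <- data.frame(SOC = rnorm(n, 20, 6), TN = rnorm(n, 1.5, 0.4),
                   AP  = rnorm(n, 6, 2),  pH = rnorm(n, 4.7, 0.15))
veg  <- data.frame(DBH = rnorm(n, 10, 4), H = rnorm(n, 8, 3), SD = rnorm(n, 1.5, 0.5))
biomass <- 5 * soil$SOC + 12 * veg$DBH + rnorm(n, 0, 10)

# 1. Screen predictors for collinearity: keep variables with VIF < 3
vif(lm(biomass ~ ., data = cbind(soil, veg)))

# 2. Reduce each group to one common factor (factor scores)
soil_f <- fa(soil, nfactors = 1)$scores
veg_f  <- fa(veg,  nfactors = 1)$scores

# 3. Partition the biomass variation between the two groups
vp <- varpart(biomass, soil_f, veg_f)
plot(vp)

# 4. Stepwise regression to pick the leading predictors of biomass
step(lm(biomass ~ ., data = cbind(soil, veg)), trace = FALSE)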
Relationships between soil properties and vegetation features (with the fitted lines, n = 16). Vegetation features include biomass (a); diameter at breast height (b); height (c); and Shannon–Wiener Index (d). Significant correlations between soil properties and vegetation features are indicated with asterisks (*: p < 0.05, **: p < 0.01, ***: p < 0.001). All variables were normalized by min-max normalization to keep a common scale ranging from 0 to 1
Variation partition analysis of the effects of soil properties and vegetation features on biomass. The numbers in each bar indicate proportions of variation of the biomass explained by soil properties (sky blue) and vegetation features (pink) individually and collectively (light orange) or not explained by either factor (white)
Changes in vegetation features and soil physicochemical properties during vegetation restoration
Vegetation features and soil physicochemical properties varied in the regularity of change according to the different restoration periods (Fig. 2). Vegetation features (species diversity, biomass, DBH, and H) increased remarkably with vegetation restoration, and the response rates increased by 86.36%, 2906.52%, 128.11%, and 596.97% respectively. Specifically, the highest values of species diversity and biomass were observed in the old growth forest, and the highest values of DBH and H were observed in the 45–46 years restoration period. The change trends of biomass, DBH, and H were basically the same (Fig. 2a). The maximum values of soil BD, pH value, Mg content, and AK content occurred in the 10–12 years restoration period. BD, pH, and AK content showed a decreasing trend whereas Mg showed an increasing trend with vegetation restoration (Fig. 2b). The response rates of BD, pH, and AK were negative but changed slightly. However, the contents of SOC, TN, TP, TK, Ca, AN, and AP increased with vegetation restoration, and their maximum values were recorded in the old growth forest except for TK (Fig. 2c and d). The response rates of SOC, TN, TP, TK, Ca, AN, and AP ranged from 10.63% to 544.35%, with AN having the highest response rate of 544.65%, followed by TN (222.48%) and SOC (140.76%).
Factors of soil properties and vegetation features influencing vegetation restoration and their relationships
The results of PCA showed that soil properties and vegetation features explained 81.54% of the variations (PC1 = 48.70%; PC2 = 20.21%; PC3 = 12.63%), revealing three main correlated variable groups of vegetation restoration (Fig. 5). There was a strong positive correlation between PC1 and SOC, TN, AN, AP, biomass, DBH, and H, and a negative correlation between PC1 and soil pH. As shown in Fig. 5, the successful discrimination of the 45–46 years restoration periods and old growth forest from other periods were strongly influenced by PC1. In the selection criteria, PC2 was correlated positively with Mg, and PC3 with TK. Figure 5 also shows that the successful discrimination of the 10–12 years restoration period from the 4–5 years restoration period was highly influenced by PC2 and PC3. Therefore, the key factors influencing vegetation restoration can be summarized as soil water and fertilizer conservation capacity (pH), organic matter (SOC), macro nutrients (TN, TK), medium nutrients (Mg), available nutrients (AN, AP), and the plant community growth situation (biomass, DBH, H).
Variables ordination diagram of PCA for the first three principal component axes (n = 16). E indicates eigen values; percentages in brackets indicate contribution rate. The cumulative contribution rate of 3 principal components was over 80% with eigenvalues greater than 1. Absolute value of a loading matrix greater than 0.7 indicates that variable has a significant contribution to a principal component. The distance of arrows from the center indicates the strength of the contributing variable to principal component. The cosine values of the angles between variables indicate relationship strength; angles ranging from 0° to 90° indicate variables have positive correlations, and 90° to 180° indicate negative correlations
The results of PCA also showed that biomass, DBH, and H had significant correlations with each other, while species diversity was weakly correlated with them. Biomass, DBH, and H had similar relationships with soil factors (Fig. 5). Specifically, the order of high positive correlations with soil factors was SOC > TN > AN > TP > AP, whereas a high negative correlation was with soil pH. The order of factors with high positive correlations with species diversity was Ca > AP > AN > TN (Fig. 5).
As shown in Fig. 3, the results of the linear function analysis revealed that as SOC, TN, AN, and AP increased, biomass, DBH, and H significantly increased (p < 0.05). However, biomass decreased markedly with increasing pH (p < 0.001). With increasing Ca and AP, species diversity showed a significant increasing trend (p < 0.01).
Effects of soil properties and vegetation features (DBH and H) on biomass variation
The VPA results showed that the combination of soil properties and vegetation features explained 90.51% of the variation in biomass in the whole restoration process, and explained 83.44%, 99.99%, 99.99%, and 98.15% of the variation in the 4–5, 10–12, and 45–46 years restoration periods and the old growth forest, respectively (Fig. 4). In all periods, the interaction of soil properties and vegetation features explained the largest share of the variation in biomass, ranging from 55.55% to 72.32%. The soil properties alone explained 3.30%–31.44% of the variation and the vegetation features alone explained 5.09%–24.32%; the individual explanation of the soil properties was higher than that of the vegetation features except in the 45–46 years restoration period.
The results of the SRA (Table 4) indicated that the factors influencing biomass in the whole restoration process included DBH and SOC. The fitted equation was: y_biomass = 7151.27 x_DBH + 7595.62 x_SOC (R² = 0.914, p = 0.000). However, different factors influenced biomass in each restoration period. In the 4–5 years restoration period, SOC was the only dominant factor, and the fitted equation was: y_biomass = − 966.94 x_SOC (R² = 0.903, p = 0.050). H, pH, and AP were the main influential factors in the 10–12 years restoration period. The fitted equation was: y_biomass = 15,620.74 x_H − 1.00 x_pH − 3484.06 x_AP (R² = 0.990, p = 0.000). H and pH were the main factors in the 45–46 years restoration period (y_biomass = − 10,432.46 x_H + 14,071.07 x_pH; R² = 0.990, p = 0.000). In the old growth forest, SOC, TN and AP were the impacting factors (y_biomass = 45,060.13 x_SOC + 18,771.33 x_TN + 26,287.80 x_AP; R² = 0.990, p = 0.000). In all periods, AN was not screened into the regression equation.
Table 4 Stepwise regression of corresponding factors for biomass (n = 16)
Soil physicochemical properties during vegetation restoration
Our results showed that soil BD decreased, and the contents of SOC, TN, TP, TK, Ca, AN, and AP increased with vegetation restoration (Fig. 2), indicating that soil physicochemical properties improved significantly. These results are partially consistent with our hypothesis and with the results of Zhang et al. (2019).
The rapid recovery of SOC at our study site has been proven to be affected by plant biomass and soil nutrients (Gu et al. 2019). The response rate of SOC (140.76%) in this research was higher than the results under semi-arid conditions (71%) recorded by Boix-Fayos et al. (2009), which may be due to the more humid conditions in subtropical regions. Consistent with the rapid accumulation of SOC, the rates of change in TN and AN were greater than the SOC change. This result differs from the results of studies in the same subtropical area of southwest China (Xu et al. 2018), which may be due to differences in the degree of degradation and type of vegetation system. Additionally, soil N is also input from other N sources, such as atmospheric N deposition, and symbiotic N fixation by legumes (Alday et al. 2012). This explains why the recovery rates of TN and AN were greater than SOC. Our results for the increase in TP and AP contents are consistent with the results of Zhang et al. (2019), who proposed that soil TP and AP contents gradually increase with the composition of tree species, annual litter yield, and SOC content along with the development of a forest's second succession. This is also supported by the significant positive correlation of soil TP and AP contents with the species diversity, biomass, and contents of SOC, TN and AN observed in this study (Fig. 5), which suggests that the accumulation of SOC improves soil nutrients during vegetation restoration (Zhang et al. 2019).
The variation ranges of BD and pH in the subtropical regions of China are 0.97–1.47 g·cm− 3 and 4.5–6.0, respectively (Hunan Provincial Department of Agriculture 1989). Our results were in the variation ranges. All the pH samples in our study indicate that soil pH (4.54–4.96) was lower than the results of Takoutsing et al. (2016), being formed by a moderate ferrallitic effect under high temperature and high humidity conditions in subtropical regions (Li et al. 2012b). Meanwhile, decreasing soil BD and pH has also been attributed to the accumulation of organic matter, which is conducive to the formation of soil aggregates and the improvement of soil microbial activity (Bienes et al. 2016). This in turn releases a large number of small molecular organic acids during the decomposition of organic matter (van Breemen et al. 1984), resulting in a decline in soil BD and pH values. The SOC in our study increased and showed negative relationships with BD and pH during vegetation restoration (Fig. 5). In addition, this study shows that biomass stimulated the decrease in soil pH during vegetation restoration (Fig. 3). The accumulation of biomass led to increased biomass in the roots, almost certainly reflecting the development of the vegetation community from annual plant species to perennial plants, which is more conducive to the release and accumulation of the various acid exudates (Pang et al. 2018). Although BD and pH showed a general declining trend, their values reached a peak in the 10–12 years restoration period (Fig. 2). These results may be caused by a combination of factors (i.e. soil texture, vegetation types and soil acid–base equilibrium). Firstly, as herbs developed into shrubs in our study site, the erosion effect of rainwater on soil silt and clay particles resulted in a high proportion of sand particles in the 10–12 years restoration period, reflecting the transformation of soil texture to sandy soil with high BD (Wang et al. 2018b). Secondly, changes in vegetation types could be a major driver behind the difference in cations absorption of the vegetation and consequent variation in the proportions of soil cations (Gu et al. 2019). From soil acid–base equilibrium, the increase in cations (especially Mg and AK) suggests that the soil H+ was replaced by increased alkaline ions (Berthrong et al. 2009). Due to the similar soil parent materials at different restoration stages, soil Ca, Mg, and K contents, which are all derived from parent rock materials, change little in response rates during vegetation restoration (Takoutsing et al. 2016).
Vegetation development during restoration periods
In our study, species diversity increased with an 86.36% recovery rate as restoration progressed, and these results are consistent with the results of Wang et al. (2018a). The amount of biomass increased significantly with the greatest recovery rate (2906.52%) over the old growth forest, followed by H (596%) and DBH (128%) in the 45–46 years period. These results are partially consistent with our hypothesis and are very similar to the results of Hu et al. (2017).
Improvements to the soil environment can provide community habitat quality which then promotes the enrichment of community diversity (Huang et al. 2015). Ca content had a significant positive effect on species diversity (Fig. 3), reflected in the following mechanisms. Firstly, Ca2+ has the function of maintaining the homeostasis of intracellular ions, especially in acidic soil where higher Ca2+ content can counterbalance the toxicity of aluminum ions for plants, further improving plant resistance to adversity and being conducive to the improvement of community diversity (Roem et al. 2002). Secondly, the increase in soil Ca content alongside vegetation restoration can be instrumental in the coexistence of species with different Ca requirements and the settlement of calcium-loving species (Hooper 1998). Additionally, soil P determines the species composition of a vegetation community (Huang et al. 2015); thus, soil AP content was considered as another major factor determining species diversity increase (Fig. 3).
In our study, biomass, DBH, and H had basically the same changing trend, and were all significantly affected by SOC, TN, AN, and AP contents (Figs. 4 and 5). This is consistent with Brandies et al. (2009), who demonstrated that there are significant positive growth rates and similar effect factors between biomass, DBH, and H in a general case. Data analysis of our study site confirmed that the percentages of individual trees with DBH greater than 8 cm and H greater than 5 m were larger in the 45–46 years restoration period (54% and 77% respectively) than in the old growth forest (41% and 63% respectively) (Chen et al. 2019). The greatest values of DBH and H in the 45–46 years restoration period may be because Pinus massoniana, as the dominant species, is a fast-growing heliophilous plant that gets more light by increasing vertical growth (H) (Cheng et al. 2011).
Soil SOC, TN, AN, and AP contents were leading factors in stimulating the increase in biomass, DBH, and H (Fig. 3). As the environmental basis for vegetation survival, improving the soil provides a better habitat and essential nutrients for vegetation growth (Huang et al. 2018), ultimately promoting the positive succession of vegetation (Liang et al. 2010). The accumulation of SOC affects biomass, DBH, and H mainly by decomposing and releasing large amounts of nutrients to meet plant growth needs, and by improving soil texture and promoting microbial activity which provide a better growing environment for vegetation (Alday et al. 2012). Moreover, the increase in N content promotes growth of the leaf area and improves plant photosynthesis, providing sufficient energy for the growth of individual plants. P is the nutrient that most limits productivity and species richness (Huang et al. 2015), and also controls leaf litter decomposition (Zeng et al. 2016). In addition, soil P changes the structure of the root system, promotes the formation and growth of fine roots, lateral roots and secretions of root exudates, and thereby stimulates plants to make more efficient use of soil nutrients (Li et al. 2017).
Key factors affecting vegetation restoration
Soil factors (pH, SOC, TN, TK, Mg, AN, and AP) and vegetation features (biomass, DBH, and H) were the main factors influencing vegetation restoration at our study site. This is consistent with the finding that the recovery of degraded ecosystems not only relies on soil rehabilitation, but also on the reconstruction, productivity, and function of vegetation (Liang et al. 2010; Peng et al. 2012).
The influential soil properties and vegetation features can be divided into three distinct groups. The first group comprises soil pH, SOC, TN, AN, AP, biomass, DBH, and H across the vegetation restoration periods. The roles of soil pH, SOC, TN, AN, and AP have been analyzed above. Specifically, soil resources are the main limiting factor in the early period of vegetation restoration. In the later period, however, changes in community characteristics lead to light conditions becoming a limiting factor (van Der Maarel and Franklin 2013). With the accumulation of biomass, a complex community structure reduces understory light transmittance, controlling the understory vegetation, including the growth and mortality of tree seedlings and saplings (Montgomery and Chazdon 2001). Therefore, shade-tolerant species become successively established, increasing understory vegetation richness. Heliophilous species, on the other hand, respond to strong interspecific competition by increasing H and diameter to gain more light. At increasingly larger H and DBH, light transmittance could further influence a species' light-capturing ability and distribution (Cheng et al. 2011). Because light conditions limit vegetation growth and performance in the late restoration period, increases in biomass, DBH, and H are the key growth factors determining restoration success.
The second group of soil variables includes Mg. The increase in Mg during the restoration periods was accompanied by a series of improvements in the plants' physiological processes, such as photosynthetic efficiency, carbohydrate metabolism, and synergistic absorption with P (Unger 2010). The third group indicates that vegetation restoration development was also shaped by TK. Besides N and P, K is a limiting nutrient with a significant influence on vegetation growth and development (Pang et al. 2018), mainly through its impact on plant photosynthesis and respiration by regulating stomatal opening (Unger 2010), even though here the effect of TK on vegetation development was not significant.
Previous studies have suggested that species diversity is the dominant vegetation factor for vegetation restoration at a large scale (Crouzeilles et al. 2016), because higher species richness can enhance ecosystem stability and increase nutrient use efficiency (Hu et al. 2017). However, species diversity was not found to be an influential factor for vegetation restoration in our study. The difference could be due to the non-significance of the relationships between species diversity and the main soil physicochemical properties or biomass, indicating that species diversity had no significant effect on the recovery of soil fertility and plant communities at our study site. In addition, species diversity showed a decreasing trend in the 45–46 years restoration period (Fig. 2), in which the dominant species shifted from simple shrubs and herbs to pioneer species such as Pinus massoniana. In fact, needles of some Pinus species have been reported as a hindering factor that influences the regeneration of native plants and increases in species diversity (Navarro-Cano et al. 2010). It is therefore plausible that species diversity has no significant effect on vegetation restoration in a specific study area, but further research is needed.
Soil and vegetation factors affecting biomass
The variation of biomass was one of the important indexes reflecting vegetation restoration (Mansourian et al. 2005). Therefore, the relative importance of soil properties and vegetation features in driving biomass development can reflect the degree of their individual and joint influence on vegetation restoration.
Our study revealed that the change in biomass was strongly influenced by the interaction of soil properties and vegetation features, which explained 55.55%–72.32% of the biomass variation (Fig. 4). This dominant contribution of the joint influence to biomass may be explained by the close interaction between vegetation and soil (Liang et al. 2010). As discussed above, there was a clear co-evolutionary relationship between soil factors (pH, SOC, TN, AN, and TP) and vegetation features (DBH and H) across the restoration periods. This result suggests that the variations in key soil factors (pH, SOC, TN, AN, and TP) were likely to promote plant growth and the restoration of vegetation structure (Alday et al. 2012). In turn, vegetation features (DBH and H) could influence improvements in the soil environment (Fig. 3). These results also offer further evidence for the hypothesis that plant and soil mechanisms promote vegetation restoration synergistically.
This study also found that soil properties explained 3.30%–31.44% of the variation in biomass, which was generally higher than the variation explained by vegetation features (5.09%–24.32%). This result provides evidence that soil properties were more important than vegetation features in driving the changes observed in biomass in the study region, most likely because the advantageous hydrothermal conditions in the subtropical region accelerate material circulation and promote the enrichment of soil organic matter (Corlett and Hughes 2015), thus providing a fertile environment for plant growth. The regulating mechanism of soil properties on biomass development has been discussed above. With vegetation restoration, the increase in plant species intensified the competition of aboveground parts for light resources and of underground roots for soil resources (Cheng et al. 2011; Li et al. 2017), which further induced variations in the individual growth and morphological structure of trees (DBH and H). As DBH and H increased, more fine material and litter could be intercepted and accumulated by plants, further enhancing the accumulation of biomass (Li et al. 2017).
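The individual and joint contributions described above can be illustrated with a simple R-squared based partition of two predictor groups. The sketch below is a minimal Python example on synthetic data with hypothetical variable names; it is not the analysis pipeline actually used in this study.

import numpy as np

def r_squared(X, y):
    # R^2 of an ordinary least-squares fit of y on X (with intercept)
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 120
soil = rng.normal(size=(n, 3))                             # e.g. SOC, TN, AP (hypothetical)
veg = 0.6 * soil[:, :2] + 0.4 * rng.normal(size=(n, 2))    # e.g. DBH, H, correlated with soil
biomass = soil @ np.array([1.0, 0.5, 0.3]) + veg @ np.array([0.8, 0.6]) + rng.normal(scale=0.5, size=n)

r2_soil = r_squared(soil, biomass)
r2_veg = r_squared(veg, biomass)
r2_full = r_squared(np.column_stack([soil, veg]), biomass)

pure_soil = r2_full - r2_veg        # variation explained by soil properties alone
pure_veg = r2_full - r2_soil        # variation explained by vegetation features alone
joint = r2_soil + r2_veg - r2_full  # shared (joint) contribution of the two groups

print(f"soil only: {pure_soil:.2%}, vegetation only: {pure_veg:.2%}, joint: {joint:.2%}")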
The biomass development at our study site was influenced by different soil and vegetation factors in different restoration periods. In the early restoration period (4–5 years), SOC was the major influential factor (Table 4). A likely explanation is that SOC is the main source of most nutrients, and that the accumulation of SOC promotes improvements in other soil factors, such as TN, AN, and AP, which have a notable effect on vegetation growth and development (Alday et al. 2012). In the 4–5 years restoration period, SOC content was low (Fig. 2), which is not conducive to the improvement of soil structure or the accumulation of nutrients (Bienes et al. 2016). Therefore, the low SOC not only limits the growth of plant roots, but also intensifies the contradiction between the demand of plant growth for water and nutrients and the supply of soil water and nutrients, resulting in hindrances to plant growth.
H, pH, and AP were the main factors driving biomass development in the 10–12 years restoration period. This could be attributed to the competition of shrubs for light, which drives increases in H as an adaptation to interspecific competition (Cheng et al. 2011). Additionally, the accumulation of biomass impels plants to require more N- and P-rich substances (such as enzymes, transport proteins, and amino acids) to participate in metabolic activities (Qin et al. 2016). Therefore, shrubs need to absorb more N and P for growth than do herbs. In particular, P is an important limiting factor in the red soil areas of southern China (Gao et al. 2014). However, in the 10–12 years restoration period, the increase in pH affected the availability of P (Duan et al. 2008), suggesting that the role of AP may have been to intensify the inequity of competition among plants, rather than to promote the accumulation of biomass.
Biomass in the 45–46 years restoration period was conditioned by the synergistic effect of H and pH. The significant effects of H and pH may be caused by a combination of two factors. Firstly, the dominant tree species (Pinus massoniana) of the 45–46 years restoration period obtains more light by increasing H and canopy density (Cheng et al. 2011), resulting in a lower density of woody plants (Table 1); thus, H had a negative effect on biomass. Secondly, low soil pH is beneficial for improving soil permeability, aggregation and porosity (reflected in BD), and the accumulation of soil nutrients (such as SOC, N and P) (Ramírez et al. 2015), and enhances the availability of P, K, Ca, and Mg (Duan et al. 2008). Meanwhile, soil pH decreased with vegetation restoration, and bioaccumulation and material circulation increased under the advantageous hydrothermal conditions (Corlett and Hughes 2015), which were beneficial to the increase in soil nutrient content, thus stimulating the increase in biomass.
In the old growth forest (sub-climax community), the structure of the plant community has reached a stable state (Peng et al. 2012), which means that the development of vegetation features (DBH and H) has entered a slow growth stage and has less of an impact on biomass. Instead, as a nutrient bank and soil health indicator (Bienes et al. 2016), SOC continues to influence biomass growth. In addition, evergreen trees with a long leaf life need to accumulate more organic substances (such as lignin) to construct defensive structures, and require higher N and P content to maintain normal growth and metabolism (Zeng et al. 2016). Therefore, the supply capacity of soil N and P largely determines the effectiveness of vegetation restoration (Li et al. 2012a).
The present work has shown that vegetation restoration can significantly improve soil texture and fertility (especially N, P, and SOC) and vegetation features (species diversity, biomass, DBH, and H). The study showed a clear coupling relationship between some soil factors (pH, SOC, TN, AN, and TP) and vegetation development and structural components (biomass, DBH, and H). At the same time, soil properties and vegetation features had a strongly cooperative influence on the variation of biomass, which suggests that the successful restoration of a degraded forest was driven mainly by their synergistic effect. The individual effect of soil factors on biomass development was greater than that of vegetation factors. Notably, the controlling factors of biomass differed among the restoration periods.
The datasets generated and/or analyzed during the current study are not publicly available because they form part of the author's graduation thesis, but are available from the corresponding author on reasonable request.
AK: Available potassium
AN: Alkaline hydrolysis nitrogen
AP: Available phosphorus
BD: Bulk density
Ca: Total calcium
Mg: Total magnesium
MODIS: Moderate Resolution Imaging Spectroradiometer
PCA: Principal component analysis
R2: Adjusted coefficient of determination
SOC: Soil organic carbon
SRA: Stepwise regression analysis
TK: Total potassium
TN: Total nitrogen
TP: Total phosphorus
Alday JG, Marrs RH, Martínez-Ruiz C (2012) Soil and vegetation development during early succession on restored coal wastes: a six-year permanent plot study. Plant Soil 353(1–2):305–320
Armstrong JS (1967) Derivation of theory by means of factor analysis or tom swift and his electric factor analysis machine. Am Stat 21(5):17–21 http://repository.upenn.edu/marketing_papers/13. Accessed 15 Dec 2019
Ayma-Romay AI, Bown HE (2019) Biomass and dominance of conservative species drive above-ground biomass productivity in a mediterranean-type forest of Chile. For Ecosyst 6:47. https://doi.org/10.1186/s40663-019-0205-z
Berthrong ST, Jobbágy EG, Jackson RB (2009) A global meta-analysis of soil exchangeable cations, pH, carbon, and nitrogen with afforestation. Ecol Appl 19(8):2228–2241
Bienes R, Marques MJ, Sastre B, García-Díaz A, Ruiz-Colmenero M (2016) Eleven years after shrub revegetation in semiarid eroded soils. Influence in soil properties. Geoderma 273:106–114
Boix-Fayos C, de Vente J, Albaladejo J, Martínez-Mena M (2009) Soil carbon erosion and stock as affected by land use changes at the catchment scale in Mediterranean ecosystems. Agric Ecosyst Environ 133(1–2):75–85
Brandies T, Randolph KD, Strub M (2009) Modeling Caribbean tree stem diameters from tree height and crown width measurements. Math Comput For Nat-Res Sci 1(2):78–85
Bremner JM (1996) Nitrogen-total. In: Sparks DL (ed) Methods of soil analysis. Part 3: chemical methods, SSSA book series 5. Soil Science Society of America, Madison, pp 1085–1121
Chang CC, Turner BL (2019) Ecological succession in a changing world. J Ecol 107:503–509
Chen JL, Fang X, Gu X, Li LD, Liu ZD, Wang LF, Zhang SJ (2019) Composition, structure, and floristic characteristics of two forest communities in the Central-Subtropical China. Scientia Silvae Sinicae 55(2):159–172 (in Chinese with English abstract)
Cheng XP, Kiyoshi U, Tsuyoshi H, Shao PY (2011) Height growth, diameter-height relationships and branching architecture of Pinus massoniana and Cunninghamia lanceolata in early regeneration stages in Anhui Province, eastern China: effects of light intensity and regeneration mode. For Stud China 13(1):1–12
Corlett RT, Hughes AC (2015) Subtropical forests. In: Peh KSH, Corlett RT, Bergeron Y (eds) The Routledge handbook of forest ecology. Routledge, Oxford, pp 46–55
Crouzeilles R, Curran M, Ferreira MS, Lindenmayer DB, Grelle CEV, Benayas JMR (2016) A global meta-analysis on the ecological drivers of forest restoration success. Nat Commun 7:11666
Demenois J, Rey F, Ibanez T, Stokes A, Carriconde F (2018) Linkages between root traits, soil fungi and aggregate stability in tropical plant communities along a successional vegetation gradient. Plant Soil 424(1–2):319–334
Duan WJ, Ren H, Fu SL, Guo QF, Wang J (2008) Pathways and determinants of early spontaneous vegetation succession in degraded lowland of South China. J Integr Plant Biol 50(2):147–156
Gao Y, He NP, Yu GR, Chen WL, Wang QF (2014) Long-term effects of different land use types on C, N, and P stoichiometry and storage in subtropical ecosystems: a case study in China. Ecol Eng 67:171–181
Gu X, Fang X, Xiang WH, Zeng YL, Zhang SJ, Lei PF, Peng CH, Kuzyakov Y (2019) Vegetation restoration stimulates soil carbon sequestration and stabilization in a subtropical area of southern China. Catena 181:104098. https://doi.org/10.1016/j.catena.2019.104098
Hooper DU (1998) The role of complementarity and competition in ecosystem responses to variation in plant diversity. Ecology 79(2):704–719
Hu F, Du H, Zeng FP, Peng WX, Song TQ (2017) Plant community characteristics and their relationships with soil properties in a karst region of Southwest China. Contemp Probl Ecol 10(6):707–716
Huang FF, Zhang WQ, Gan XH, Huang YH, Guo YD, Wen XY (2018) Changes in vegetation and soil properties during recovery of a subtropical forest in South China. J Mt Sci 15(1):46–58
Huang YT, Ai XR, Yao L, Zang RG, Ding Y, Huang JH, Feng G, Liu JC (2015) Changes in the diversity of evergreen and deciduous species during natural recovery following clear-cutting in a subtropical evergreen-deciduous broadleaved mixed forest of Central China. Trop Conserv Sci 8(4):1033–1052
Huang ZY, Chen J, Ai XY, Li RR, Ai YW, Li W (2017) The texture, structure and nutrient availability of artificial soil on cut slopes restored with OSSS – influence of restoration time. J Environ Manag 200:502–510
Hunan Provincial Department of Agriculture (1989) Hunan soil. Agriculture Press, Beijing (in Chinese with English abstract)
Institute of Soil Science, Chinese Academy of Sciences (1978) The analysis of soil physical-chemical properties. Shanghai scientific and Technical Publishers, Shanghai (in Chinese with English abstract)
IUSS Working Group WRB (2006) World reference base for soil resources 2006: a framework for international classification, correlation and communication. http://www.ige.unicamp.br/pedologia/wsrr103e.pdf: 2006. Accessed 15 Dec 2019
Jain A, Nandakumar K, Ross A (2005) Score normalization in multimodal biometric systems. Pattern Recogn 38(12):2270–2285
Li DJ, Niu SL, Luo YQ (2012a) Global patterns of the dynamics of soil carbon and nitrogen stocks following afforestation: a meta-analysis. New Phytol 195(1):172–181
Li JY, Xu RK, Zhang H (2012b) Iron oxides serve as natural anti-acidification agents in highly weathered soils. J Soils Sediments 12(6):876–887
Li QX, Jia ZQ, Liu T, Feng LL, He LXZ (2017) Effects of different plantation types on soil properties after vegetation restoration in an alpine sandy land on the Tibetan plateau, China. J Arid Land 9(2):200–209
Liang J, Wang XA, Yu ZD, Dong ZM, Wang JC (2010) Effects of vegetation succession on soil fertility within farming-plantation ecotone in Ziwuling mountains of the loess plateau in China. Agr Sci China 9(10):1481–1491
Liu WW, Xiang WH, Tian DL, Yan WD (2010) General allometric equations for estimating Cunninghamia lanceolata tree biomass on large scale in southern China. J Cent South Univ Forest T 30(4):7–14 (in Chinese with English abstract)
Madonsela S, Cho MA, Ramoelo A, Mutanga O, Naidoo L (2018) Estimating tree species diversity in the savannah using NDVI and woody canopy cover. Int J Appl Earth Obs 66:106–115
Mansourian S, Vallauri D, Dudley N (2005) Forest restoration in landscapes: beyond planting trees. Springer Science & Business Media, New York
Montgomery RA, Chazdon RL (2001) Forest structure, canopy architecture, and light transmittance in tropical wet forests. Ecology 82(10):2707–2718
Navarro-Cano JA, Barberá GG, Castillo VM (2010) Pine litter from afforestations hinder the establishment of endemic plants in semiarid scrubby habitats of Natura 2000 network. Restor Ecol 18(2):165–169
Nicia P, Bejger R, Zadrożny P, Sterzyńska M (2018) The impact of restoration processes on the selected soil properties and organic matter transformation of mountain fens under Caltho-Alnetum community in the Babiogórski National Park in outer Flysch Carpathians, Poland. J Soils Sediments 18(8):2770–2776
Olsen SR, Watanabe FS, Bowman RA (1983) Evaluation of fertilizer phosphate residues by plant uptake and extractable phosphorus. Soil Sci Soc Am J 47(5):952–958
Ouyang S, Xiang WH, Wang XP, Zeng YL, Lei PF, Deng XW, Peng CH (2016) Significant effects of biodiversity on forest biomass during the succession of subtropical forest in South China. Forest Ecol Manag 372:291–302
Pang DB, Cao JH, Dan XQ, Guan YH, Peng XW, Cui M, Wu XQ, Zhou JX (2018) Recovery approach affects soil quality in fragile karst ecosystems of Southwest China: implications for vegetation restoration. Ecol Eng 123:151–160
Peng WX, Song TQ, Zeng FP, Wang KL, Du H, Lu SY (2012) Relationships between woody plants and environmental factors in karst mixed evergreen-deciduous broadleaf forest, Southwest China. J Food Agric Environ 10:890–896
Qin J, Xi WM, Rahmlow A, Kong HY, Zhang Z, Shangguan ZP (2016) Effects of forest plantation types on leaf traits of Ulmus pumila and Robinia pseudoacacia on the loess plateau, China. Ecol Eng 97:416–425
R Development Core Team (2016) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.R-project.org/. Accessed 15 Dec 2019
Ramírez JF, Fernandez Y, González PJ, Salazar X, Iglesias JM, Olivera Y (2015) Influence of fertilization on the physical and chemical properties of a soil dedicated to the production of Megathyrsus maximus seed. Pastos y Forrajes 38(4):479–486
Roem WJ, Klees H, Berendse F (2002) Effects of nutrient addition and acidification on plant species diversity and seed germination in heathland. J Appl Ecol 39(6):937–948
Takoutsing B, Weber JC, Tchoundjeu Z, Shepherd K (2016) Soil chemical properties dynamics as affected by land use change in the humid forest zone of Cameroon. Agrofor Syst 90(6):1089–1102
Unger MA (2010) Relationships between soil chemical properties and forest structure, productivity and floristic diversity along an altitudinal transect of moist tropical forest in Amazonia, Ecuador. Georg-August-Universität Göttingen, Dissertation
van Breemen N, Driscoll CT, Mulder J (1984) Acidic deposition and internal proton sources in acidification of soils and waters. Nature 307:599–604
van Der Maarel E, Franklin J (2013) Vegetation ecology, 2nd edn. Wiley-Blackwell, Oxford
van der Putten WH, Bardgett RD, Bever JD, Bezemer TM, Casper BB, Fukami T, Kardol P, Klironomos JN, Kulmatiski A, Schweitzer JA, Suding KN, Van de Voorde TFJ, Wardle DA (2013) Plant-soil feedbacks: the past, the present and future challenges. J Ecol 101(2):265–276
Wang D, Zhang B, Zhu LL, Yang YS, Li MM (2018a) Soil and vegetation development along a 10-year restoration chronosequence in tailing dams in the Xiaoqinling gold region of Central China. Catena 167:250–256
Wang N, Zhu XY, Fang X, Gu X, Chen JL (2018b) The variation of soil organic carbon and soil particle-sizes in different degraded forests in the subtropical region. J Soil Water Conserv 32(3):218–225 (in Chinese with English abstract)
Xiang WH, Liu SH, Lei XD, Frank SC, Tian DL, Wang GJ, Deng XW (2013) Secondary forest floristic composition, structure, and spatial pattern in subtropical China. J For Res 18(1):111–120
Xiang WH, Zhou J, Ouyang S, Zhang SL, Lei PF, Li JX, Deng XW, Fang X, Forrester DI (2016) Species-specific and general allometric equations for estimating tree biomass components of subtropical forests in southern China. Eur J Forest Res 135(5):963–979
Xu CH, Xiang WH, Gou MM, Chen L, Lei PF, Fang X, Deng XW, Ouyang S (2018) Effects of forest restoration on soil carbon, nitrogen, phosphorus, and their stoichiometry in Hunan, southern China. Sustainability 10(6):1874
Yang T, Adams JM, Shi Y, He JS, Jing X, Chen LT, Tedersoo L, Chu HY (2017) Soil fungal diversity in natural grasslands of the Tibetan plateau: associations with plant diversity and productivity. New Phytol 215(2):756–765
Zeng QC, Li X, Dong YH, An SS, Darboux F (2016) Soil and plant components ecological stoichiometry in four steppe communities in the loess plateau of China. Catena 147:481–488
Zhang YH, Xu XL, Li ZW, Liu MX, Xu CH, Zhang RF, Luo W (2019) Effects of vegetation restoration on soil quality in degraded karst landscapes of Southwest China. Sci Total Environ 650:2657–2665
We thank the administrative staff at the Dashanchong Forest Farm, Changsha County, Hunan Province, for their support.
This work was supported by the National Forestry Public Welfare Industry Research Project (grant no. 201504411) and the National Natural Science Foundation of China (grant nos. 31570447 and 31300524).
Faculty of Life Science and Technology, Central South University of Forestry and Technology, Changsha, 410004, China
Chan Chen, Xi Fang, Wenhua Xiang, Pifeng Lei, Shuai Ouyang & Yakov Kuzyakov
Huitong National Field Station for Scientific Observation and Research of Chinese Fir Plantation Ecosystem in Hunan Province, Huitong, 438107, China
Xi Fang, Wenhua Xiang, Pifeng Lei & Shuai Ouyang
Department of Soil Science of Temperate Ecosystems, Georg-August University of Gottingen, 37077, Göttingen, Germany
Yakov Kuzyakov
Department of Agricultural Soil Science, Georg-August University of Gottingen, 37077, Göttingen, Germany
Chan Chen
Xi Fang
Wenhua Xiang
Pifeng Lei
Shuai Ouyang
XF and CC designed the idea and study, and coordinated the manuscript preparation. CC, XF, WX, PL, SO, and YK processed the data and analyzed the results. CC, XF, WX, and YK contributed to the manuscript writing and editing. All authors read and approved the final manuscript.
Correspondence to Xi Fang.
Chen, C., Fang, X., Xiang, W. et al. Soil-plant co-stimulation during forest vegetation restoration in a subtropical area of southern China. For. Ecosyst. 7, 32 (2020). https://doi.org/10.1186/s40663-020-00242-3
Accepted: 17 April 2020
Vegetation restoration
Soil physicochemical properties
Vegetation features
How to find the solutions to $a+b+c+d=60$?
How can I find all the solutions to:
$$a+b+c+d=60\quad (0\leq a,b,c,d\leq 30,\; a,b,c,d\in\mathbb N)$$
I've tried to use Solve[], but it says that there are more variables than equations. I want to know if Mathematica has a built-in way to do it. I know how to find the number of solutions (which is a basic result in combinatorics), and I also know that I could hack together something messy to find them.
equation-solving
Billy Rubina
The variables are supposed to have integer values? – J. M. will be back soon♦ Jun 13 '15 at 17:51
@Guesswhoitis. No, non-negative integers. – Billy Rubina Jun 13 '15 at 18:01
FrobeniusSolve is useful for these kinds of equations. Your constraints may be implemented by using Pick as follows.
Block[{s = FrobeniusSolve[{1, 1, 1, 1}, 60]},
Pick[s, UnitStep[30 - s[[All, 1]], 30 - s[[All, 2]],
30 - s[[All, 3]], 30 - s[[All, 4]]], 1]
]
KennyColnago
OP's equation has infinitely many solutions, unless one enforces the constraint you are implying in this answer. – J. M. will be back soon♦ Jun 13 '15 at 18:00
@Guesswhoitis. Yes. I made a crappy question (forgot the constraint) but somehow, he read my mind and guessed correctly what I was doing. – Billy Rubina Jun 13 '15 at 18:02
For "all" solutions use Reduce. Assuming that the intended domain is Integers,
Reduce[{a + b + c + d == 60, a <= 30, b <= 30, c <= 30, d <= 30}, {a, b, c,
d}, Integers]
(a | b | c | d) [Element] Integers && -30 <= a <= 30 && ((b == -a && c == 30 && d == 30) || (-a < b <= 30 && 30 - a - b <= c <= 30 && d == 60 - a - b - c))
For nonnegative integers,
Reduce[{a + b + c + d == 60, 0 <= a <= 30, 0 <= b <= 30, 0 <= c <= 30,
0 <= d <= 30}, {a, b, c, d}, Integers]
(a | b | c | d) [Element] Integers && ((a == 0 && ((b == 0 && c == 30 && d == 30) || (1 <= b <= 29 && 30 - b <= c <= 30 && d == 60 - b - c) || (b == 30 && 0 <= c <= 30 && d == 30 - c))) || (1 <= a <= 29 && ((0 <= b < 30 - a && 30 - a - b <= c <= 30 && d == 60 - a - b - c) || (b == 30 - a && 0 <= c <= 30 && d == 30 - c) || (30 - a < b <= 30 && 0 <= c <= 60 - a - b && d == 60 - a - b - c))) || (a == 30 && ((b == 0 && 0 <= c <= 30 && d == 30 - c) || (1 <= b <= 29 && 0 <= c <= 30 - b && d == 30 - b - c) || (b == 30 && c == 0 && d == 0))))
For specific examples, use FindInstance
Manipulate[
FindInstance[{a + b + c + d == 60, 0 <= a <= 30, 0 <= b <= 30, 0 <= c <= 30,
0 <= d <= 30}, {a, b, c, d}, Integers, n],
{{n, 10, "Instances"}, 1, 100, 1, Appearance -> "Labeled"}]
Bob Hanlon
This returns some of them:
cs = PadRight[Select[IntegerPartitions[60, 4], And @@ Thread[# <= 30] &], {Automatic, 4}]
This returns all of them:
Flatten[Permutations /@ cs, 1]
Powers of $2$ starting with $123$...Does a pattern exist?
I'm currently working on Project Euler problem #686 "Powers of Two". The first power of $2$ which starts with $123$... is $2^{90}$. I noticed that the next powers of $2$ that start with $123$... seem to follow a pattern. The exponent is always increased by either $196$, $289$ or $485$ (which is $196 + 289$). But I'm not able to figure out what the pattern actually is. Any hint is highly welcome.
project-euler
edited Apr 8 '21 at 9:12
Andreas
If a power of 2 starts with 123, then it must be between $1.23\times 10^n$ and $1.24\times 10^n$ for some $n$.
So you want $k$ and $n$ for which $$1.23\times 10^n\leq 2^k < 1.24\times 10^n$$.
This is easier to deal with if we take logs (base 10). Then you want
$$\log(1.23) + n\leq k\log(2) < \log(1.24)+n$$
That is, the fractional part $\{ k\log(2)\}$ satisfies $$\log(1.23)\leq \{ k\log(2)\} < \log(1.24),$$
that is, writing these as decimals,
$$0.0899051143939792\leq \{ 0.3010299956639811\, k\} < 0.09342168516223505.$$
Note that $0.0899051143939792$ and $0.09342168516223505$ don't differ by much. If you've found $k_1$ and $k_2$ that satisfy the above inequalities, then $0.3010299956639811 (k_1-k_2)$ is pretty close to an integer.
We could find example differences $\Delta k$ by looking for rational approximations of $\log(2)$, which we can do using the continued fraction expansion of $\log(2)$:
$$\log(2)=\frac{1}{3+\frac{1}{3+\frac1{9+...}}}$$
The numbers in that expresison are $3,3,9,2,2,4,6,2,1,...$ with no discernible pattern. If we cut the fraction off at various places we get "best rational approximations" of $\log(2)$. The first few are:
$$\log(2)\approx\frac{1}{3}$$ $$\log(2)\approx\frac{1}{3+\frac{1}{3}}=\frac{3}{10}$$ $$\log(2)\approx\frac{1}{3+\frac{1}{3+\frac1{9}}}=\frac{28}{93}$$ $$\log(2)\approx\frac{59}{196}$$
The errors in these are, respectively, $0.0323033...$, $0.001029996...$, $0.000045273...$ and $0.0000095875...$.
If you have a fraction $\frac{p}{q}$ which is within $\epsilon$ of $\log(2)$, then $q$ will sometimes work as $\Delta k$, because it means that multiplying $2^k$ by $2^q$ will change $\{ k\log(2)\}$ by $q\epsilon$. If $2^k$ starts with 123, then as long as $q\epsilon$ is less than $\log(1.24) - \log(1.23)$, we have a chance that $2^{k+q}$ also will.
$\log(1.24)-\log(1.23)=0.0035167...$. Multiplying the errors above by the denominators, we get
$\frac13$ obviously won't work: $0.0323033\times3=0.0969...$ which is way bigger than $0.0035167$
$\frac3{10}$ won't work: $0.001029996\times10=0.0102999..$, which is also bigger than $0.0035167$
$\frac{28}{93}$ won't work, but only just: $0.000045273\times93=0.0042104...$. You probably found some "near misses" 93 apart.
$\frac{59}{196}$ works! $0.0000095875\times 196 = 0.0018791$, which is less than $0.0035167$. So it's possible to have $2^k$ and $2^{k+196}$ both starting with 123 - but not all of $2^k$, $2^{k+196}$ and $2^{k+2\times196}$, since $2\times 0.0018791=0.003758$, which is (just) too big.
The next continued fractions for $\log(2)$ are $\frac{146}{485}$ (recognise anything there?), $\frac{643}{2136}$ and $\frac{4004}{13301}$.
You'll notice this method missed your 289. That's because the continued fraction method gives the best rational approximations relative to the size of the denominator. It's true that $\log(2)\approx\frac{87}{289}$ (a semiconvergent: $87=28+59$ and $289=93+196$), but judged by denominator times error, which is the quantity that matters here, it's not as good as $\frac{59}{196}$: $289\times0.0000080666\approx0.00233$, compared with $196\times0.0000095875\approx0.00188$.
In general, you're looking for numbers $q$ for which $q\log(2)$ is within $0.0035167$ of an integer. Finding those will be easier than finding powers of 2, perhaps :)
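If you want to hunt for such $q$ numerically, here is a minimal Python sketch of nothing more than that criterion: keep $q$ whenever $q\log_{10}2$ lands within the tolerance of an integer.

import math

log2 = math.log10(2)
tol = math.log10(1.24) - math.log10(1.23)     # about 0.0035167

def dist_to_int(x):
    # distance from x to the nearest integer
    f = x % 1.0
    return min(f, 1.0 - f)

candidates = [q for q in range(1, 3000) if dist_to_int(q * log2) < tol]
print(candidates[:8])   # begins 196, 289, 485, ...; no smaller q qualifies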
The property of 196 and 485 that makes them give rise to patterns in the powers of 2 that start with 123 is just "$q\times\log(2)$ is nearly an integer". That's got nothing much to do with the specific prefix you chose. If you look for powers of 2 starting with, say, 234, you'll probably see some of the exact same numbers popping up - but not 196, alas, since $\log(2.35/2.34)=0.00185...$ is a tighter requirement, and $196\times0.0000095875=0.00187$ is now too big. 485 will still work, for any starting three digits (though only just for 999), as will 2136, 13301, etc.
answered Apr 8 '21 at 9:25
Michael Hartley
see if this helps: \begin{align*} 2^k&= 123\dots\\ 2^k&= 1.23\dots\times 10^{p}\\ \log_{10}[2^k] &= \log_{10}[1.23\dots\times 10^{p}]\\ k\cdot \log_{10}[2] &= \log_{10}[1.23\dots]+p\\ \end{align*} where $p\in\mathbb{Z}_+$
Now the algorithm is straightforward. Iterate over $k$, multiplying by $\log_{10}(2) = 0.30103$, until you reach a $k$ for which $k\log_{10}(2)$ has fractional part between $\log_{10}(1.23)=0.08990$ and $\log_{10}(1.24)=0.09342$. The reason for the pattern you observe is that the corresponding multiples of $\log_{10}(2)$ are close to integer values.
For example: $196 \times \log_{10}(2) = 59.0019$, $289 \times \log_{10}(2) = 86.9977$ and $485 \times \log_{10}(2) = 145.9995$.
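A minimal Python check of this scan (double-precision $\log_{10}2$ is comfortably accurate for exponents in this range):

import math

lo, hi = math.log10(1.23), math.log10(1.24)
log2 = math.log10(2)

hits = [k for k in range(1, 20000) if lo <= (k * log2) % 1.0 < hi]
gaps = [b - a for a, b in zip(hits, hits[1:])]

print(hits[:5])            # [90, 379, 575, 864, 1060]
print(sorted(set(gaps)))   # [196, 289, 485], exactly the gaps observed in the question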
Rahul Madhavan
/[escript]/trunk/doc/user/levelset.tex
Diff of /trunk/doc/user/levelset.tex
revision 1811 by ksteube, Thu Sep 25 23:11:13 2008 UTC
revision 1973 by lgraham, Thu Nov 6 02:31:37 2008 UTC
15 \section{Rayleigh-Taylor Instability} \section{Rayleigh-Taylor Instability}
16 \label{LEVELSET CHAP} \label{LEVELSET CHAP}
18 In this chapter we will implement the Level Set Method in Escript for tracking the interface between two fluids for Computational Fluid Dynamics (CFD). In this chapter we will implement the Level Set Method in Escript for tracking the interface between two fluids for Computational Fluid Dynamics (CFD). The method is tested with a Rayleigh-Taylor Instability problem, which is an instability of the interface between two fluids with differing densities. \\
19 Normally in Earth science problems two or more fluids in a system with different properties are of interest. For example, lava dome growth in volcanology, with the contrast of the two mediums as being lava and air. The interface between the two mediums is often referred to as a free surface (free boundary value problem); the problem arises due to the large differences in densities between the lava and air, with their ratio being around 2000, and so the interface between the two fluids move with respect to each other. Normally in Earth science problems two or more fluids in a system with different properties are of interest. For example, lava dome growth in volcanology, with the contrast of the two mediums as being lava and air. The interface between the two mediums is often referred to as a free surface (free boundary value problem); the problem arises due to the large differences in densities between the lava and air, with their ratio being around 2000, and so the interface between the two fluids move with respect to each other.
20 %and so the lava with the much higher density is able to move independently with respect to the air, and the interface between the two fluids is not constrained. %and so the lava with the much higher density is able to move independently with respect to the air, and the interface between the two fluids is not constrained.
21 There are a number of numerical techniques to define and track the free surfaces. One of these methods, which is conceptually the simplest, is to construct a Lagrangian grid which moves with the fluid, and so it tracks the free surface. The limitation of this method is that it cannot track surfaces that break apart or intersect. Another limitation is that the elements in the grid can become severely distorted, resulting in numerical instability. The Arbitrary Lagrangian-Eulerian (ALE) method for CFD in moving domains is used to overcome this problem by remeshing, but there is an overhead in computational time, and it results in a loss of accuracy due to the process of mapping the state variables every remesh by interpolation. There are a number of numerical techniques to define and track the free surfaces. One of these methods, which is conceptually the simplest, is to construct a Lagrangian grid which moves with the fluid, and so it tracks the free surface. The limitation of this method is that it cannot track surfaces that break apart or intersect. Another limitation is that the elements in the grid can become severely distorted, resulting in numerical instability. The Arbitrary Lagrangian-Eulerian (ALE) method for CFD in moving domains is used to overcome this problem by remeshing, but there is an overhead in computational time, and it results in a loss of accuracy due to the process of mapping the state variables every remesh by interpolation.
23 There is a technique to overcome these limitations called the Level Set Method, for tracking interfaces between two fluids. The advantages of the method is that CFD can be performed on a fixed Cartesian mesh, and therefore problems with remeshing can be avoided. The field equations for calculating variables such as velocity and pressure are solved on the the same mesh. The Level Set Method is based upon the implicit representation of the interface by a continuous function. The function takes the form as a signed distance function, $\phi(x)$, of the interface in a Eulerian coordinate system. For example, the zero isocontour of the unit circle $\phi(x)=x^2 + y^2 -1$ is the set of all points where $\phi(x)=0$. There is a technique to overcome these limitations called the Level Set Method, for tracking interfaces between two fluids. The advantages of the method is that CFD can be performed on a fixed Cartesian mesh, and therefore problems with remeshing can be avoided. The field equations for calculating variables such as velocity and pressure are solved on the the same mesh. The Level Set Method is based upon the implicit representation of the interface by a continuous function. The function takes the form as a signed distance function, $\phi(x)$, of the interface in a Eulerian coordinate system. For example, the zero isocontour of the unit circle $\phi(x)=x^2 + y^2 -1$ is the set of all points where $\phi(x)=0$. Refer to Figure \ref{UNITCIRCLE}.
24 % %
25 \begin{figure} \begin{figure}
26 \center \center
27 \scalebox{0.5}{\includegraphics{figures/unitcircle.eps}} \scalebox{0.7}{\includegraphics{figures/unitcircle.eps}}
28 \caption{Implicit representation of the curve $x^2 + y^2 = 1$.} \caption{Implicit representation of the curve $x^2 + y^2 = 1$.}
29 \label{UNITCIRCLE} \label{UNITCIRCLE}
30 \end{figure} \end{figure}
32 The implicit representation can be used to define the interior and exterior of a fluid region. Since the isocontour at $\phi(x)=0$ has been defined as the interface, a point in the domain can be determined if its inside or outside of the interface, by looking at the local sign of $\phi(x)$. A point is inside the interface when $\phi(x)<0$, and outside the interface when $\phi(x)>0$. Parameters values such as density and viscosity can then be defined for two different mediums, depending on which side of the interface they are located. The displacement of the interface at the zero isocontour of $\phi(x)$ is calculated each time step by using the velocity field. This is achieved my solving the advection equation: The implicit representation can be used to define the interior and exterior of a fluid region. Since the isocontour at $\phi(x)=0$ has been defined as the interface, a point in the domain can be determined if its inside or outside of the interface, by looking at the local sign of $\phi(x)$. For example, a point is inside the interface when $\phi(x)<0$, and outside the interface when $\phi(x)>0$. Parameters values such as density and viscosity can then be defined for two different mediums, depending on which side of the interface they are located.
35 \subsection{Calculation of the Displacement of the Interface}
37 The displacement of the interface at the zero isocontour of $\phi(x)$ is calculated each time-step by using the velocity field. This is achieved my solving the advection equation:
39 \begin{equation} \begin{equation}
40 \frac{\partial \phi}{\partial t} + \vec{v} \cdot \nabla \phi = 0, \frac{\partial \phi}{\partial t} + \vec{v} \cdot \nabla \phi = 0,
41 \label{ADVECTION} \label{ADVECTION}
42 \end{equation} \end{equation}
44 where $\vec{v}$ is the velocity field. The advection equation is solved using a mid-point method, which is a two step procedure: where $\vec{v}$ is the velocity field. The advection equation is solved using a mid-point method, which is a two step procedure:
46 Firstly, $\phi^{1/2}$ is calculated solving: Firstly, $\phi^{1/2}$ is calculated solving:
49 \frac{\phi^{1/2} - \phi^{-}}{dt/2} + \vec{v} \cdot \nabla \phi^{-} = 0. \frac{\phi^{1/2} - \phi^{-}}{dt/2} + \vec{v} \cdot \nabla \phi^{-} = 0.
50 \label{MIDPOINT FIST} \label{MIDPOINT FIST}
53 Secondly, using $\phi^{1/2}$, $\phi^{+}$ is calculated solving: Secondly, using $\phi^{1/2}$, $\phi^{+}$ is calculated solving:
56 \frac{\phi^{+} - \phi^{-}}{dt} + \vec{v} \cdot \nabla \phi^{1/2} = 0. \frac{\phi^{+} - \phi^{-}}{dt} + \vec{v} \cdot \nabla \phi^{1/2} = 0.
57 \label{MIDPOINT SECOND} \label{MIDPOINT SECOND}
60 For more details on the mid-point procedure see reference \cite{BOURGOUIN2006}. In certain situations the mid-point procedure has been shown to produce artifacts in the numerical solutions. A more robust procedure is to use the Taylor-Galerkin scheme with the presence of diffusion, which gives more stable solutions. The expression is derived by either inserting Equation (\ref{MIDPOINT FIST}) into Equation (\ref{MIDPOINT SECOND}), or by expanding $\phi$ into a Taylor series: For more details on the mid-point procedure see reference \cite{BOURGOUIN2006}. In certain situations the mid-point procedure has been shown to produce artifacts in the numerical solutions. A more robust procedure is to use the Taylor-Galerkin scheme with the presence of diffusion, which gives more stable solutions. The expression is derived by either inserting Equation (\ref{MIDPOINT FIST}) into Equation (\ref{MIDPOINT SECOND}), or by expanding $\phi$ into a Taylor series:
63 \phi^{+} \simeq \phi^{-} + dt\frac{\partial \phi^{-}}{\partial t} + \frac{dt^2}{2}\frac{\partial^{2}\phi^{-}}{\partial t^{2}}, \phi^{+} \simeq \phi^{-} + dt\frac{\partial \phi^{-}}{\partial t} + \frac{dt^2}{2}\frac{\partial^{2}\phi^{-}}{\partial t^{2}},
64 \label{TAYLOR EXPANSION} \label{TAYLOR EXPANSION}
67 by inserting by inserting
70 \frac{\partial \phi^{-}}{\partial t} = - \vec{v} \cdot \nabla \phi^{-}, \frac{\partial \phi^{-}}{\partial t} = - \vec{v} \cdot \nabla \phi^{-},
71 \label{INSERT ADVECTION} \label{INSERT ADVECTION}
74 and and
77 \frac{\partial^{2} \phi^{-}}{\partial t^{2}} = \frac{\partial}{\partial t}(-\vec{v} \cdot \nabla \phi^{-}) = \vec{v}\cdot \nabla (\vec{v}\cdot \nabla \phi^{-}), \frac{\partial^{2} \phi^{-}}{\partial t^{2}} = \frac{\partial}{\partial t}(-\vec{v} \cdot \nabla \phi^{-}) = \vec{v}\cdot \nabla (\vec{v}\cdot \nabla \phi^{-}),
78 \label{SECOND ORDER} \label{SECOND ORDER}
81 into Equation \ref{TAYLOR EXPANSION} into Equation (\ref{TAYLOR EXPANSION})
84 \phi^{+} = \phi^{-} - dt\vec{v}\cdot \nabla \phi^{-} + \frac{dt^2}{2}\vec{v}\cdot \nabla (\vec{v}\cdot \nabla \phi^{-}). \phi^{+} = \phi^{-} - dt\vec{v}\cdot \nabla \phi^{-} + \frac{dt^2}{2}\vec{v}\cdot \nabla (\vec{v}\cdot \nabla \phi^{-}).
85 \label{TAYLOR GALERKIN} \label{TAYLOR GALERKIN}
The fluid dynamics is governed by the Stokes equations. In geophysical problems the velocity of fluids are low; that is, the inertial forces are small compared with the viscous forces, therefore the inertial terms in the Navier-Stokes equations can be ignored. For a body force $f$ the governing equations are given by:
89 \subsection{Governing Equations for Fluid Flow}
91 The fluid dynamics is governed by the Stokes equations. In geophysical problems the velocity of fluids are low; that is, the inertial forces are small compared with the viscous forces, therefore the inertial terms in the Navier-Stokes equations can be ignored. For a body force $f$ the governing equations are given by:
94 \nabla \cdot (\eta(\nabla \vec{v} + \nabla^{T} \vec{v})) - \nabla p = -f, \nabla \cdot (\eta(\nabla \vec{v} + \nabla^{T} \vec{v})) - \nabla p = -f,
95 \label{GENERAL NAVIER STOKES} \label{GENERAL NAVIER STOKES}
98 with the incompressibility condition with the incompressibility condition
100 \begin{equation} \begin{equation}
101 \nabla \cdot \vec{v} = 0. \nabla \cdot \vec{v} = 0.
102 \label{INCOMPRESSIBILITY} \label{INCOMPRESSIBILITY}
103 \end{equation} \end{equation}
105 where $p$, $\eta$ and $f$ are the pressure, viscosity and body forces, respectively. where $p$, $\eta$ and $f$ are the pressure, viscosity and body forces, respectively.
106 Alternatively, the Stokes equations can be represented in Einstein summation tensor notation (compact notation): Alternatively, the Stokes equations can be represented in Einstein summation tensor notation (compact notation):
109 -(\eta(v\hackscore{i,j} + v\hackscore{j,i})),\hackscore{j} - p,\hackscore{i} = f\hackscore{i}, -(\eta(v\hackscore{i,j} + v\hackscore{j,i})),\hackscore{j} - p,\hackscore{i} = f\hackscore{i},
110 \label{GENERAL NAVIER STOKES COM} \label{GENERAL NAVIER STOKES COM}
113 with the incompressibility condition with the incompressibility condition
116 -v\hackscore{i,i} = 0. -v\hackscore{i,i} = 0.
117 \label{INCOMPRESSIBILITY COM} \label{INCOMPRESSIBILITY COM}
120 The subscript $,i$ denotes the derivative of the function with respect to $x\hackscore{i}$. A linear relationship between the deviatoric stress $\sigma^{'}\hackscore{ij}$ and the stretching $D\hackscore{ij} = \frac{1}{2}(v\hackscore{i,j} + v\hackscore{j,i})$ is defined as \cite{GROSS2006}: The subscript comma $i$ denotes the derivative of the function with respect to $x\hackscore{i}$. A linear relationship between the deviatoric stress $\sigma^{'}\hackscore{ij}$ and the stretching $D\hackscore{ij} = \frac{1}{2}(v\hackscore{i,j} + v\hackscore{j,i})$ is defined as \cite{GROSS2006}:
123 \sigma^{'}\hackscore{ij} = 2\eta D^{'}\hackscore{ij}, \sigma^{'}\hackscore{ij} = 2\eta D^{'}\hackscore{ij},
124 \label{STRESS} \label{STRESS}
127 where the deviatoric stretching $D^{'}\hackscore{ij}$ is defined as where the deviatoric stretching $D^{'}\hackscore{ij}$ is defined as
130 D^{'}\hackscore{ij} = D^{'}\hackscore{ij} - \frac{1}{3}D\hackscore{kk}\delta\hackscore{ij}. D^{'}\hackscore{ij} = D^{'}\hackscore{ij} - \frac{1}{3}D\hackscore{kk}\delta\hackscore{ij}.
131 \label{DEVIATORIC STRETCHING} \label{DEVIATORIC STRETCHING}
134 The $\delta\hackscore{ij}$ is the Kronecker $\delta$-symbol, which is a matrix with ones for its diagonal entries ($i = j$) and zeros for the remaining entries ($i \neq j$). The body force $f$ in Equation (\ref{GENERAL NAVIER STOKES COM}) is the gravity acting in the $x\hackscore{3}$ direction and is given as $f = -g \rho \delta\hackscore{i3}$. where $\delta\hackscore{ij}$ is the Kronecker $\delta$-symbol, which is a matrix with ones for its diagonal entries ($i = j$) and zeros for the remaining entries ($i \neq j$). The body force $f$ in Equation (\ref{GENERAL NAVIER STOKES COM}) is the gravity acting in the $x\hackscore{3}$ direction and is given as $f = -g \rho \delta\hackscore{i3}$.
135 The Stokes equations is a saddle point problem, and can be solved using a Uzawa scheme. A class called StokesProblemCartesian in Escript can be used to solve for velocity and pressure. The Stokes equations is a saddle point problem, and can be solved using a Uzawa scheme. A class called StokesProblemCartesian in Escript can be used to solve for velocity and pressure.
136 In order to keep numerical stability, the time step size needs to be below a certain value, known as the Courant number. The Courant number is defined as: In order to keep numerical stability, the time-step size needs to be below a certain value, known as the Courant number. The Courant number is defined as:
139 C = \frac{v \delta t}{h}. C = \frac{v \delta t}{h}.
140 \label{COURANT} \label{COURANT}
143 where $\delta t$, $v$, and $h$ are the time-step, velocity, and the width of an element in the mesh, respectively. The velocity $v$ may be chosen as the maximum velocity in the domain. In this problem the Courant number is taken to be 0.4 \cite{BOURGOUIN2006}.
where $\delta t$, $v$, and $h$ are the time step, velocity, and the width of an element in the mesh, respectively. The velocity $v$ may be chosen as the maximum velocity in the domain. In this problem the Courant number is taken to be 0.4 \cite{BOURGOUIN2006}.
146 As the computation of the distance function progresses, it becomes distorted, and so it needs to be updated in order to stay regular. This process is known as the reinitialization procedure. The aim is to iteratively find a solution to the reinitialization equation: \subsection{Reinitialization of Interface}
148 As the computation of the distance function progresses, it becomes distorted, and so it needs to be updated in order to stay regular. This process is known as the reinitialization procedure. The aim is to iteratively find a solution to the reinitialization equation:
151 \frac{\partial \psi}{\partial \tau} + sign(\psi)(1 - \nabla \psi) = 0. \frac{\partial \psi}{\partial \tau} + sign(\psi)(1 - \nabla \psi) = 0.
152 \label{REINITIALISATION} \label{REINITIALISATION}
155 where $\tau$ is artificial time. This equation is solved to meet the definition of the level set function, $\lvert \nabla \psi \rvert = 1$; the normalization condition. However, it has been shown that in using this reinitialization procedure it is prone to mass loss and inconsistent positioning of the interface \cite{SUCKALE2008}. where $\tau$ is artificial time. This equation is solved to meet the definition of the level set function, $\lvert \nabla \psi \rvert = 1$; the normalization condition. However, it has been shown that in using this reinitialization procedure it is prone to mass loss and inconsistent positioning of the interface \cite{SUCKALE2008}.
The Rayleigh-Taylor instability problem is used as a benchmark to validate CFD implementations \cite{VANKEKEN1997}. Figure \ref{RT2DSETUP} shows the setup of the problem. A rectangular domain with two different fluids is considered, with the greater density fluid on the top and the lighter density fluid on the bottom. The viscosities of the two fluids are equal (isoviscos). An initial perturbation is given to the interface of $\phi=0.02cos(\frac{\pi x}{\lambda}) + 0.2$. The aspect ratio, $\lambda = L/H = 0.9142$, is chosen such that it gives the greatest disturbance of the fluids.
158 \subsection{Benchmark Problem}
160 The Rayleigh-Taylor instability problem is used as a benchmark to validate CFD implementations \cite{VANKEKEN1997}. Figure \ref{RT2DSETUP} shows the setup of the problem. A rectangular domain with two different fluids is considered, with the greater density fluid on the top and the lighter density fluid on the bottom. The viscosities of the two fluids are equal (isoviscos). An initial perturbation is given to the interface of $\phi=0.02cos(\frac{\pi x}{\lambda}) + 0.2$. The aspect ratio, $\lambda = L/H = 0.9142$, is chosen such that it gives the greatest disturbance of the fluids.
162 \begin{figure} \begin{figure}
163 \center \center
164 \scalebox{0.7}{\includegraphics{figures/RT2Dsetup.eps}} \scalebox{0.7}{\includegraphics{figures/RT2Dsetup.eps}}
Removed from v.1811
Added in v.1973
Association between medical resources and the proportion of oldest-old in the Chinese population
Chao Tan1,
Cai-Zhi Tang1,
Xing-Shu Chen1 &
Yong-Jun Luo1
The potential association between medical resources and the proportion of oldest-old (90 years of age and above) in the Chinese population was examined; we found that a higher proportion of oldest-old was associated with a higher number of beds in hospitals and health centers.
Life expectancy is influenced by many factors, including social and economic development levels, environmental factors, lifestyle choices and genetics [1]. Past studies on longevity mostly focused on regional differences [2], and the influence of genes and the natural environment. Some of these studies did not consider the intrinsic interactions among the factors that could influence longevity. Therefore, specific aims of the current study include: 1) to analyze the spatial characteristics of the long-lived population (referred to as oldest-old) in China; 2) to estimate the distribution of the factors that influence longevity; and 3) to systematically and quantitatively analyze the influence of different factors on longevity and identify the key factors determining the distribution of the long-lived population.
Data acquisition and preprocessing
Oldest-old population, hygiene and the economy
The oldest-old population, hygiene and economic data in the 31 provinces of China (except for Hong Kong, Macao and Taiwan) were downloaded from the National Bureau of Statistics [3]. Rural areas were defined as villages and towns, urban areas as cities, and the oldest-old population as those aged 90 years and above; these definitions and data were derived from the 6th National Population Census of 2010.
Gross domestic product (GDP) data were obtained from the National Bureau of Statistics for 2011. These variables were standardized as follows: the proportion of the oldest-old per 100,000, GDP per person, and the number of beds in hospitals and health centers per 1000 persons. For multivariate regression, actual values of the variables, rather than the standardized values of the variables, were used since longevity is affected by the total GDP, the number of beds and Air pollution index (API).
Air quality data were acquired from the China Environmental Protection Network [4]. Data from 86 cities in 2010 were available. API is a dimensionless index based on PM10, SO2 and NO2 to describe air quality and short-term trends, and is divided into 5 levels (Additional file 1: Tab.S1). Annual API level was calculated based on daily reports and interpolated using ArcGIS 10.2 (ESRI, Redlands, CA, USA).
Data and statistical analysis
Spatial interpolation
Since API was available for only 86 cities in China, air quality data were interpolated using inverse distance weighting (IDW) in ArcGIS 10.2.
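As a rough illustration of the interpolation step (a minimal sketch only, not the ArcGIS workflow actually used; the coordinates, values and power parameter below are hypothetical), inverse distance weighting estimates a value at an unsampled location as a distance-weighted average of the observed city values:

import numpy as np

def idw(xy_obs, values, xy_target, power=2.0):
    # inverse distance weighted estimate at one target point from observed stations
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):                  # target coincides with a monitoring city
        return float(values[d == 0][0])
    w = 1.0 / d ** power                # closer stations receive larger weights
    return float(np.sum(w * values) / np.sum(w))

# hypothetical example: three monitored cities and one unsampled location
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
api = np.array([60.0, 80.0, 100.0])
print(idw(xy, api, np.array([0.4, 0.4])))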
The correlation between two variables was analyzed with the following Pearson equation (Formula 1) in SPSS 19 (Statistical Product and Service Solutions, IBM, Armonk, NY, USA):
$$ {r}_{xy}=\frac{\sum_{i=1}^n\left({x}_i-\overline{x}\right)\left({y}_i-\overline{y}\right)}{\sqrt{\sum_{i=1}^n{\left({x}_i-\overline{x}\right)}^2{\sum}_{i=1}^n{\left({y}_i-\overline{y}\right)}^2}} $$
where \( x_i \) and \( y_i \) represent the variables, \( \overline{x} \) and \( \overline{y} \) represent the averages of \( x_i \) and \( y_i \), and \( r_{xy} \) is the correlation coefficient.
In addition to analysis using a zero-order model (not considering the potential impact of covariates), data were also analyzed using a second-order model (controlling for the potential impact of two covariates).
Multivariate linear regression analysis was conducted to examine the association between the proportion of oldest-old and these factors. The criteria for entering independent variables into the equation were: Enter, Criteria = PIN (0.05) and POUT (0.1).
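The zero-order correlation, the second-order partial correlation (controlling two covariates by residualizing both variables on them) and the multivariate regression can all be reproduced outside SPSS. The following minimal Python sketch uses hypothetical data for the 31 provinces and is illustrative only, not the authors' SPSS procedure.

import numpy as np

def pearson(x, y):
    # zero-order Pearson correlation coefficient (Formula 1)
    x, y = x - x.mean(), y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

def residualize(y, Z):
    # residuals of y after an OLS regression on the covariates Z (with intercept)
    Z1 = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    return y - Z1 @ beta

rng = np.random.default_rng(1)
n = 31                                   # one observation per province (hypothetical values)
gdp = rng.normal(size=n)
beds = 0.8 * gdp + 0.3 * rng.normal(size=n)
api = rng.normal(size=n)
oldest_old = 0.5 * beds + 0.2 * gdp - 0.1 * api + 0.3 * rng.normal(size=n)

print("zero-order r =", round(pearson(oldest_old, beds), 3))

# second-order partial correlation between oldest-old and beds, controlling GDP and API
Z = np.column_stack([gdp, api])
print("partial r =", round(pearson(residualize(oldest_old, Z), residualize(beds, Z)), 3))

# multivariate linear regression: intercept, GDP, beds, API
X = np.column_stack([np.ones(n), gdp, beds, api])
coef, *_ = np.linalg.lstsq(X, oldest_old, rcond=None)
print("regression coefficients:", np.round(coef, 3))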
Spatial characteristics
The proportion of oldest-old was higher in the eastern and central regions of China (Additional file 1: Fig. S1). Rural areas had a higher proportion of oldest-old than in towns and cities in 28 out of the 31 provinces (Additional file 1: Fig. S2). The proportion of oldest-old residing in rural areas and cities varied considerably (12.16–85.70%, 4.30–81.52% respectively), while the proportion in towns was 4.70–24.73%.
Factors associated with the proportion of oldest-old
In general, GDP per capita was higher in the eastern regions than in the western regions. Shanghai has the highest GDP per capita (74,572.54 yuan) (Additional file 1: Fig. S3). Guizhou has the lowest GDP per capita (13,243.72 yuan).
The number of beds in hospitals and health centers
The number of beds in hospitals and health centers per 1000 persons was 2.33–6.80 (Additional file 1: Fig. S4). The number of beds in hospitals and health centers per 1000 persons in rural areas was 1.85–4.28. The number of beds in hospitals and health centers per 1000 persons in cities was higher than in rural areas, and varied considerably (3.00–10.89).
In general, annual API was lower in the southern regions than in the northern regions (Additional file 1: Fig. S5). The lowest was in Hainan. The highest API was in Gansu.
Relationship between the proportion of oldest-old and influencing factors
The proportion of the oldest-old correlated positively with GDP (r = 0.876, P < 0.001, Additional file 1: Tab. S2), and the number of beds in hospitals and health centers (r = 0.905, P < 0.001). There was a trend for negative correlation between the proportion of the oldest-old and API, but statistical analysis failed to validate the finding (r = − 0.125, P = 0.502).
Due to the interaction between GDP and the number of beds in hospitals and health centers, we controlled for the impact of covariates using second-order partial correlation analysis. The partial correlation coefficient between the proportion of oldest-old and the number of beds in hospitals and health centers was 0.633 (P < 0.001, Additional file 1: Tab.S3). The partial correlation coefficient between the proportion of oldest-old and the API was − 0.446 (P = 0.015).
The multivariate regression yielded the following equation: the proportion of oldest-old = 1.206 × GDP + 0.416× the number of beds in hospitals and health centers - 1161.246 × API + 67,387.873 (F = 60,882, P < 0.001, Additional file 1: Tab. S4). There was a statistically significant association between the proportion of oldest-old with the number of beds in hospitals and health centers (P < 0.001), API (P = 0.015), but not with GDP (P = 0.119, Additional file 1: Tab. S5).
In our analysis, the proportion of oldest-old correlated positively with the number of beds in hospitals and health centers, which in turn was correlated with GDP per capita. A 1% increase in income has been reported to be associated with a 0.01% decrease in mortality rate and a ~0.02% increase in average life expectancy [5].
We failed to show a correlation between the proportion of oldest-old and API using a zero-order model. However, when using a second-order model to control for GDP and the number of beds in hospitals and health centers, we observed a negative correlation, suggesting a complex interaction among these factors. However, there is little evidence for an association between air quality and acute deaths [6].
The current study has several limitations. First, air quality was reflected only by API (which considers PM10 only) and not by PM2.5, due to data unavailability. More importantly, perhaps, separate API data for urban and rural areas were not available.
The proportion of oldest-old in the population is higher in the eastern and central parts than in the western part of China. In 28 of the 31 provinces, the proportion of oldest-old is higher in rural areas than in urban areas. Medical resources, as reflected by the number of beds in hospitals and health centers, are the most important factor that could increase longevity.
The dataset used and analyzed during the current study is available from the corresponding author upon reasonable request.
API: Air pollution index
IDW: Inverse distance weight
Zhai DH. A research on regional longevity phenomenon, China's regional standards and its evaluation index system. Popul Econ. 2012;4:71–7.
Sarkodie SA, Strezov V, Jiang Y, Evans T. Proximate determinants of particulate matter (PM2.5) emission, mortality and life expectancy in Europe, Central Asia, Australia, Canada and the US. Sci Total Environ. 2019;683:489–97.
Yu GQ, Zhai WW, Wei Y, Zhang ZY, Qin J. Spatio-temporal analysis of centenarians in longevity region in southwestern China [article in Chinese]. South China J Prev Med. 2018;44(2):116–21.
The National Bureau of Statistics. http://www.stats.gov.cn/tjsj/. Accessed on 20 Sep 2018.
The China Environmental Protection Network. http://datacenter.mep.gov.cn/websjzx/queryIndex.vm. Accessed on 18 Sep 2018.
Young SS, Smith RL, Lopiano KK. Air quality and acute deaths in California, 2000-2012. Reg Toxicol Pharmacol. 2017;88:173–84. https://doi.org/10.1016/j.yrtph.2017.06.003.
Data were downloaded from a variety of sources that include the National Bureau of Statistics. The authors also thank Yue Xiao for collecting data.
This work was supported by the National Natural Science Foundation of China (41877518), the Key Special Program of Logistic Scientific Research of PLA (BLJ18J005), and the Key Support Objects of Excellent Talent Pool of Military Medical University.
Department of Military Medical Geography, Army Medical Service Training Base, Army Medical University, Chongqing, 400038, China
Chao Tan, Cai-Zhi Tang, Xing-Shu Chen & Yong-Jun Luo
Chao Tan
Cai-Zhi Tang
Xing-Shu Chen
Yong-Jun Luo
CT collected/processed the data, and drafted the manuscript. YJL, XSC and CZT reviewed the results and provided critical input for data interpretation/presentation. All authors had read and approved the final manuscript.
Correspondence to Yong-Jun Luo.
Tan, C., Tang, CZ., Chen, XS. et al. Association between medical resources and the proportion of oldest-old in the Chinese population. Military Med Res 8, 14 (2021). https://doi.org/10.1186/s40779-021-00307-6
Medical resource
Oldest-old
May 2012, 11(3): 1051-1062. doi: 10.3934/cpaa.2012.11.1051
A faithful symbolic extension
Jacek Serafin 1,
Institute of Mathematics and Computer Science, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland
Received November 2010 Revised February 2011 Published December 2011
We construct a symbolic extension of an aperiodic zero-dimensional topological system in such a way that the bonding map is one-to-one on the set of invariant measures.
Keywords: Topological dynamical system, symbolic extension, entropy.
Mathematics Subject Classification: Primary: 37B10, 37B4.
Citation: Jacek Serafin. A faithful symbolic extension. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1051-1062. doi: 10.3934/cpaa.2012.11.1051
Jacek Serafin
27 Field Energy and Field Momentum
27–1 Local conservation
It is clear that the energy of matter is not conserved. When an object radiates light it loses energy. However, the energy lost is possibly describable in some other form, say in the light. Therefore the theory of the conservation of energy is incomplete without a consideration of the energy which is associated with the light or, in general, with the electromagnetic field. We take up now the law of conservation of energy and, also, of momentum for the fields. Certainly, we cannot treat one without the other, because in the relativity theory they are different aspects of the same four-vector.
Very early in Volume I, we discussed the conservation of energy; we said then merely that the total energy in the world is constant. Now we want to extend the idea of the energy conservation law in an important way—in a way that says something in detail about how energy is conserved. The new law will say that if energy goes away from a region, it is because it flows away through the boundaries of that region. It is a somewhat stronger law than the conservation of energy without such a restriction.
To see what the statement means, let's look at how the law of the conservation of charge works. We described the conservation of charge by saying that there is a current density $\FLPj$ and a charge density $\rho$, and that when the charge decreases at some place there must be a flow of charge away from that place. We call that the conservation of charge. The mathematical form of the conservation law is \begin{equation} \label{Eq:II:27:1} \FLPdiv{\FLPj}=-\ddp{\rho}{t}. \end{equation} This law has the consequence that the total charge in the world is always constant—there is never any net gain or loss of charge. However, the total charge in the world could be constant in another way. Suppose that there is some charge $Q_1$ near some point $(1)$ while there is no charge near some point $(2)$ some distance away (Fig. 27–1). Now suppose that, as time goes on, the charge $Q_1$ were to gradually fade away and that simultaneously with the decrease of $Q_1$ some charge $Q_2$ would appear near point $(2)$, and in such a way that at every instant the sum of $Q_1$ and $Q_2$ was a constant. In other words, at any intermediate state the amount of charge lost by $Q_1$ would be added to $Q_2$. Then the total amount of charge in the world would be conserved. That's a "world-wide" conservation, but not what we will call a "local" conservation, because in order for the charge to get from $(1)$ to $(2)$, it didn't have to appear anywhere in the space between point $(1)$ and point $(2)$. Locally, the charge was just "lost."
Fig. 27–1. Two ways to conserve charge: (a) $Q_1+Q_2$ is constant; (b) $dQ_1/dt=$ $-\int\Figj\cdot\Fign\,da=$ $-dQ_2/dt$.
There is a difficulty with such a "world-wide" conservation law in the theory of relativity. The concept of "simultaneous moments" at distant points is one which is not equivalent in different systems. Two events that are simultaneous in one system are not simultaneous for another system moving past. For "world-wide" conservation of the kind described, it is necessary that the charge lost from $Q_1$ should appear simultaneously in $Q_2$. Otherwise there would be some moments when the charge was not conserved. There seems to be no way to make the law of charge conservation relativistically invariant without making it a "local" conservation law. As a matter of fact, the requirement of the Lorentz relativistic invariance seems to restrict the possible laws of nature in surprising ways. In modern quantum field theory, for example, people have often wanted to alter the theory by allowing what we call a "nonlocal" interaction—where something here has a direct effect on something there—but we get in trouble with the relativity principle.
"Local" conservation involves another idea. It says that a charge can get from one place to another only if there is something happening in the space between. To describe the law we need not only the density of charge, $\rho$, but also another kind of quantity, namely $\FLPj$, a vector giving the rate of flow of charge across a surface. Then the flow is related to the rate of change of the density by Eq. (27.1). This is the more extreme kind of a conservation law. It says that charge is conserved in a special way—conserved "locally."
It turns out that energy conservation is also a local process. There is not only an energy density in a given region of space but also a vector to represent the rate of flow of the energy through a surface. For example, when a light source radiates, we can find the light energy moving out from the source. If we imagine some mathematical surface surrounding the light source, the energy lost from inside the surface is equal to the energy that flows out through the surface.
27–2 Energy conservation and electromagnetism
We want now to write quantitatively the conservation of energy for electromagnetism. To do that, we have to describe how much energy there is in any volume element of space, and also the rate of energy flow. Suppose we think first only of the electromagnetic field energy. We will let $u$ represent the energy density in the field (that is, the amount of energy per unit volume in space) and let the vector $\FLPS$ represent the energy flux of the field (that is, the flow of energy per unit time across a unit area perpendicular to the flow). Then, in perfect analogy with the conservation of charge, Eq. (27.1), we can write the "local" law of energy conservation in the field as \begin{equation} \label{Eq:II:27:2} \ddp{u}{t}=-\FLPdiv{\FLPS}. \end{equation}
Of course, this law is not true in general; it is not true that the field energy is conserved. Suppose you are in a dark room and then turn on the light switch. All of a sudden the room is full of light, so there is energy in the field, although there wasn't any energy there before. Equation (27.2) is not the complete conservation law, because the field energy alone is not conserved, only the total energy in the world—there is also the energy of matter. The field energy will change if there is some work being done by matter on the field or by the field on matter.
However, if there is matter inside the volume of interest, we know how much energy it has: Each particle has the energy $m_0c^2/\sqrt{1-v^2/c^2}$. The total energy of the matter is just the sum of all the particle energies, and the flow of this energy through a surface is just the sum of the energy carried by each particle that crosses the surface. We want now to talk only about the energy of the electromagnetic field. So we must write an equation which says that the total field energy in a given volume decreases either because field energy flows out of the volume or because the field loses energy to matter (or gains energy, which is just a negative loss). The field energy inside a volume $V$ is \begin{equation*} \int_Vu\,dV, \end{equation*} and its rate of decrease is minus the time derivative of this integral. The flow of field energy out of the volume $V$ is the integral of the normal component of $\FLPS$ over the surface $\Sigma$ that encloses $V$, \begin{equation} \int_\Sigma\FLPS\cdot\FLPn\,da.\notag \end{equation} So \begin{equation} \label{Eq:II:27:3} -\ddt{}{t}\int_Vu\,dV=\int_\Sigma\FLPS\cdot\FLPn\,da+ (\text{work done on matter inside $V$}). \end{equation}
We have seen before that the field does work on each unit volume of matter at the rate $\FLPE\cdot\FLPj$. [The force on a particle is $\FLPF=q(\FLPE+\FLPv\times\FLPB)$, and the rate of doing work is $\FLPF\cdot\FLPv=q\FLPE\cdot\FLPv$. If there are $N$ particles per unit volume, the rate of doing work per unit volume is $Nq\FLPE\cdot\FLPv$, but $Nq\FLPv=\FLPj$.] So the quantity $\FLPE\cdot\FLPj$ must be equal to the loss of energy per unit time and per unit volume by the field. Equation (27.3) then becomes \begin{equation} \label{Eq:II:27:4} -\ddt{}{t}\int_Vu\,dV=\int_\Sigma\FLPS\cdot\FLPn\,da+ \int_V\FLPE\cdot\FLPj\,dV. \end{equation}
This is our conservation law for energy in the field. We can convert it into a differential equation like Eq. (27.2) if we can change the second term to a volume integral. That is easy to do with Gauss' theorem. The surface integral of the normal component of $\FLPS$ is the integral of its divergence over the volume inside. So Eq. (27.3) is equivalent to \begin{equation*} -\int_V\ddp{u}{t}\,dV=\int_V\FLPdiv{\FLPS}\,dV+ \int_V\FLPE\cdot\FLPj\,dV, \end{equation*} where we have put the time derivative of the first term inside the integral. Since this equation is true for any volume, we can take away the integrals and we have the energy equation for the electromagnetic fields: \begin{equation} \label{Eq:II:27:5} -\ddp{u}{t}=\FLPdiv{\FLPS}+\FLPE\cdot\FLPj. \end{equation}
Now this equation doesn't do us a bit of good unless we know what $u$ and $\FLPS$ are. Perhaps we should just tell you what they are in terms of $\FLPE$ and $\FLPB$, because all we really want is the result. However, we would rather show you the kind of argument that was used by Poynting in 1884 to obtain formulas for $\FLPS$ and $u$, so you can see where they come from. (You won't, however, need to learn this derivation for our later work.)
27–3 Energy density and energy flow in the electromagnetic field
The idea is to suppose that there is a field energy density $u$ and a flux $\FLPS$ that depend only upon the fields $\FLPE$ and $\FLPB$. (For example, we know that in electrostatics, at least, the energy density can be written $\tfrac{1}{2}\epsO\FLPE\cdot\FLPE$.) Of course, the $u$ and $\FLPS$ might depend on the potentials or something else, but let's see what we can work out. We can try to rewrite the quantity $\FLPE\cdot\FLPj$ in such a way that it becomes the sum of two terms: one that is the time derivative of one quantity and another that is the divergence of a second quantity. The first quantity would then be $u$ and the second would be $\FLPS$ (with suitable signs). Both quantities must be written in terms of the fields only; that is, we want to write our equality as \begin{equation} \label{Eq:II:27:6} \FLPE\cdot\FLPj=-\ddp{u}{t}-\FLPdiv{\FLPS}. \end{equation}
The left-hand side must first be expressed in terms of the fields only. How can we do that? By using Maxwell's equations, of course. From Maxwell's equation for the curl of $\FLPB$, \begin{equation*} \FLPj=\epsO c^2\FLPcurl{\FLPB}-\epsO\,\ddp{\FLPE}{t}. \end{equation*} Substituting this in (27.6) we will have only $\FLPE$'s and $\FLPB$'s: \begin{equation} \label{Eq:II:27:7} \FLPE\cdot\FLPj=\epsO c^2\FLPE\cdot(\FLPcurl{\FLPB})- \epsO\FLPE\cdot\ddp{\FLPE}{t}. \end{equation} We are already partly finished. The last term is a time derivative—it is$(\ddpl{}{t})(\tfrac{1}{2}\epsO\FLPE\cdot\FLPE)$. So $\tfrac{1}{2}\epsO\FLPE\cdot\FLPE$ is at least one part of $u$. It's the same thing we found in electrostatics. Now, all we have to do is to make the other term into the divergence of something.
Notice that the first term on the right-hand side of (27.7) is the same as \begin{equation} \label{Eq:II:27:8} (\FLPcurl{\FLPB})\cdot\FLPE. \end{equation} And, as you know from vector algebra, $(\FLPa\times\FLPb)\cdot\FLPc$ is the same as $\FLPa\cdot(\FLPb\times\FLPc)$; so our term is also the same as \begin{equation} \label{Eq:II:27:9} \FLPdiv{(\FLPB\times\FLPE)} \end{equation} and we have the divergence of "something," just as we wanted. Only that's wrong! We warned you before that $\FLPnabla$ is "like" a vector, but not "exactly" the same. The reason it is not is because there is an additional convention from calculus: when a derivative operator is in front of a product, it works on everything to the right. In Eq. (27.7), the $\FLPnabla$ operates only on $\FLPB$, not on $\FLPE$. But in the form (27.9), the normal convention would say that $\FLPnabla$ operates on both $\FLPB$ and $\FLPE$. So it's not the same thing. In fact, if we work out the components of $\FLPdiv{(\FLPB\times\FLPE)}$ we can see that it is equal to $\FLPE\cdot(\FLPcurl{\FLPB})$ plus some other terms. It's like what happens when we take a derivative of a product in algebra. For instance, \begin{equation*} \ddt{}{x}(fg)=\ddt{f}{x}\,g+f\,\ddt{g}{x}. \end{equation*}
Rather than working out all the components of $\FLPdiv{(\FLPB\times\FLPE)}$, we would like to show you a trick that is very useful for this kind of problem. It is a trick that allows you to use all the rules of vector algebra on expressions with the $\FLPnabla$ operator, without getting into trouble. The trick is to throw out—for a while at least—the rule of the calculus notation about what the derivative operator works on. You see, ordinarily, the order of terms is used for two separate purposes. One is for calculus: $f(d/dx)g$ is not the same as $g(d/dx)f$; and the other is for vectors: $\FLPa\times\FLPb$ is different from $\FLPb\times\FLPa$. We can, if we want, choose to abandon momentarily the calculus rule. Instead of saying that a derivative operates on everything to the right, we make a new rule that doesn't depend on the order in which terms are written down. Then we can juggle terms around without worrying.
Here is our new convention: we show, by a subscript, what a differential operator works on; the order has no meaning. Suppose we let the operator $D$ stand for $\ddpl{}{x}$. Then $D_f$ means that only the derivative of the variable quantity $f$ is taken. Then \begin{equation*} D_ff=\ddp{f}{x}. \end{equation*} But if we have $D_ffg$, it means \begin{equation*} D_ffg=\biggl(\ddp{f}{x}\biggr)g. \end{equation*} But notice now that according to our new rule, $fD_fg$ means the same thing. We can write the same thing any which way: \begin{equation*} D_ffg=gD_ff=fD_fg=fgD_f. \end{equation*} You see, the $D_f$ can even come after everything. (It's surprising that such a handy notation is never taught in books on mathematics or physics.)
You may wonder: What if I want to write the derivative of $fg$? I want the derivative of both terms. That's easy, you just say so; you write $D_f(fg)+D_g(fg)$. That is just $g(\ddpl{f}{x})+f(\ddpl{g}{x})$, which is what you mean in the old notation by $\ddpl{(fg)}{x}$.
You will see that it is now going to be very easy to work out a new expression for $\FLPdiv{(\FLPB\times\FLPE)}$. We start by changing to the new notation; we write \begin{equation} \label{Eq:II:27:10} \FLPdiv{(\FLPB\times\FLPE)}=\FLPnabla_B\cdot(\FLPB\times\FLPE)+ \FLPnabla_E\cdot(\FLPB\times\FLPE). \end{equation} The moment we do that we don't have to keep the order straight any more. We always know that $\FLPnabla_E$ operates on $\FLPE$ only, and $\FLPnabla_B$ operates on $\FLPB$ only. In these circumstances, we can use $\FLPnabla$ as though it were an ordinary vector. (Of course, when we are finished, we will want to return to the "standard" notation that everybody usually uses.) So now we can do the various things like interchanging dots and crosses and making other kinds of rearrangements of the terms. For instance, the middle term of Eq. (27.10) can be rewritten as $\FLPE\cdot\FLPnabla_B\times\FLPB$. (You remember that $\FLPa\cdot\FLPb\times\FLPc=\FLPb\cdot\FLPc\times\FLPa$.) And the last term is the same as $\FLPB\cdot\FLPE\times\FLPnabla_E$. It looks freakish, but it is all right. Now if we try to go back to the ordinary convention, we have to arrange that the $\FLPnabla$ operates only on its "own" variable. The first one is already that way, so we can just leave off the subscript. The second one needs some rearranging to put the $\FLPnabla$ in front of the $\FLPE$, which we can do by reversing the cross product and changing sign: \begin{equation*} \FLPB\cdot(\FLPE\times\FLPnabla_E)= -\FLPB\cdot(\FLPnabla_E\times\FLPE). \end{equation*} Now it is in a conventional order, so we can return to the usual notation. Equation (27.10) is equivalent to \begin{equation} \label{Eq:II:27:11} \FLPdiv{(\FLPB\times\FLPE)}= \FLPE\cdot(\FLPcurl{\FLPB})-\FLPB\cdot(\FLPcurl{\FLPE}). \end{equation} (A quicker way would have been to use components in this special case, but it was worth taking the time to show you the mathematical trick. You probably won't see it anywhere else, and it is very good for unlocking vector algebra from the rules about the order of terms with derivatives.)
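The identity (27.11) can also be checked symbolically rather than by components; the following sketch, with generic placeholder functions for the field components, uses Python and SymPy and is only an independent verification, not part of the derivation.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
E = sp.Matrix([sp.Function(f'E{i}')(x, y, z) for i in range(3)])  # generic E field
B = sp.Matrix([sp.Function(f'B{i}')(x, y, z) for i in range(3)])  # generic B field

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# div(B x E) = E . curl(B) - B . curl(E)
identity = div(B.cross(E)) - (E.dot(curl(B)) - B.dot(curl(E)))
print(sp.simplify(identity))   # prints 0
```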
We now return to our energy conservation discussion and use our new result, Eq. (27.11), to transform the $\FLPcurl{\FLPB}$ term of Eq. (27.7). That energy equation becomes \begin{equation} \label{Eq:II:27:12} \FLPE\cdot\FLPj=\epsO c^2\FLPdiv{(\FLPB\times\FLPE)}+\epsO c^2 \FLPB\cdot(\FLPcurl{\FLPE})-\ddp{}{t}(\tfrac{1}{2}\epsO \FLPE\cdot\FLPE). \end{equation} Now you see, we're almost finished. We have one term which is a nice derivative with respect to $t$ to use for $u$ and another that is a beautiful divergence to represent $\FLPS$. Unfortunately, there is the center term left over, which is neither a divergence nor a derivative with respect to $t$. So we almost made it, but not quite. After some thought, we look back at the differential equations of Maxwell and discover that $\FLPcurl{\FLPE}$ is, fortunately, equal to $-\ddpl{\FLPB}{t}$, which means that we can turn the extra term into something that is a pure time derivative: \begin{equation*} \FLPB\cdot(\FLPcurl{\FLPE})=\FLPB\cdot\biggl( -\ddp{\FLPB}{t}\biggr)=-\ddp{}{t}\biggl( \frac{\FLPB\cdot\FLPB}{2}\biggr). \end{equation*} Now we have exactly what we want. Our energy equation reads \begin{equation} \label{Eq:II:27:13} \FLPE\cdot\FLPj=\FLPdiv{(\epsO c^2\FLPB\times\FLPE)}- \ddp{}{t}\biggl(\frac{\epsO c^2}{2}\,\FLPB\cdot\FLPB+ \frac{\epsO}{2}\,\FLPE\cdot\FLPE\biggr), \end{equation} which is exactly like Eq. (27.6), if we make the definitions \begin{equation} \label{Eq:II:27:14} u=\frac{\epsO}{2}\,\FLPE\cdot\FLPE+ \frac{\epsO c^2}{2}\,\FLPB\cdot\FLPB \end{equation} and \begin{equation} \label{Eq:II:27:15} \FLPS=\epsO c^2\FLPE\times\FLPB. \end{equation} (Reversing the cross product makes the signs come out right.)
Our program was successful. We have an expression for the energy density that is the sum of an "electric" energy density and a "magnetic" energy density, whose forms are just like the ones we found in statics when we worked out the energy in terms of the fields. Also, we have found a formula for the energy flow vector of the electromagnetic field. This new vector, $\FLPS=\epsO c^2\FLPE\times\FLPB$, is called "Poynting's vector," after its discoverer. It tells us the rate at which the field energy moves around in space. The energy which flows through a small area $da$ per second is $\FLPS\cdot\FLPn\,da$, where $\FLPn$ is the unit vector perpendicular to $da$. (Now that we have our formulas for $u$ and $\FLPS$, you can forget the derivations if you want.)
27–4 The ambiguity of the field energy
Before we take up some applications of the Poynting formulas [Eqs. (27.14) and (27.15)], we would like to say that we have not really "proved" them. All we did was to find a possible "$u$" and a possible "$\FLPS$." How do we know that by juggling the terms around some more we couldn't find another formula for "$u$" and another formula for "$\FLPS$"? The new $\FLPS$ and the new $u$ would be different, but they would still satisfy Eq. (27.6). It's possible. It can be done, but the forms that have been found always involve various derivatives of the field (and always with second-order terms like a second derivative or the square of a first derivative). There are, in fact, an infinite number of different possibilities for $u$ and $\FLPS$, and so far no one has thought of an experimental way to tell which one is right! People have guessed that the simplest one is probably the correct one, but we must say that we do not know for certain what is the actual location in space of the electromagnetic field energy. So we too will take the easy way out and say that the field energy is given by Eq. (27.14). Then the flow vector $\FLPS$ must be given by Eq. (27.15).
It is interesting that there seems to be no unique way to resolve the indefiniteness in the location of the field energy. It is sometimes claimed that this problem can be resolved by using the theory of gravitation in the following argument. In the theory of gravity, all energy is the source of gravitational attraction. Therefore the energy density of electricity must be located properly if we are to know in which direction the gravity force acts. As yet, however, no one has done such a delicate experiment that the precise location of the gravitational influence of electromagnetic fields could be determined. That electromagnetic fields alone can be the source of gravitational force is an idea it is hard to do without. It has, in fact, been observed that light is deflected as it passes near the sun—we could say that the sun pulls the light down toward it. Do you not want to allow that the light pulls equally on the sun? Anyway, everyone always accepts the simple expressions we have found for the location of electromagnetic energy and its flow. And although sometimes the results obtained from using them seem strange, nobody has ever found anything wrong with them—that is, no disagreement with experiment. So we will follow the rest of the world—besides, we believe that it is probably perfectly right.
We should make one further remark about the energy formula. In the first place, the energy per unit volume in the field is very simple: It is the electrostatic energy plus the magnetic energy, if we write the electrostatic energy in terms of $E^2$ and the magnetic energy as $B^2$. We found two such expressions as possible expressions for the energy when we were doing static problems. We also found a number of other formulas for the energy in the electrostatic field, such as $\rho\phi$, which is equal to the integral of $\FLPE\cdot\FLPE$ in the electrostatic case. However, in an electrodynamic field the equality failed, and there was no obvious choice as to which was the right one. Now we know which is the right one. Similarly, we have found the formula for the magnetic energy that is correct in general. The right formula for the energy density of dynamic fields is Eq. (27.14).
27–5 Examples of energy flow
Fig. 27–2. The vectors $\FigE$, $\FigB$, and $\FigS$ for a light wave.
Our formula for the energy flow vector $\FLPS$ is something quite new. We want now to see how it works in some special cases and also to see whether it checks out with anything that we knew before. The first example we will take is light. In a light wave we have an $\FLPE$ vector and a $\FLPB$ vector at right angles to each other and to the direction of the wave propagation. (See Fig. 27–2.) In an electromagnetic wave, the magnitude of $\FLPB$ is equal to $1/c$ times the magnitude of $\FLPE$, and since they are at right angles, \begin{equation*} \abs{\FLPE\times\FLPB}=\frac{E^2}{c}. \end{equation*} Therefore, for light, the flow of energy per unit area per second is \begin{equation} \label{Eq:II:27:16} S=\epsO cE^2. \end{equation} For a light wave in which $E=E_0\cos\omega(t-x/c)$, the average rate of energy flow per unit area, $\av{S}$—which is called the "intensity" of the light—is the mean value of the square of the electric field times $\epsO c$: \begin{equation} \label{Eq:II:27:17} \text{Intensity} = \av{S} = \epsO c\av{E^2}. \end{equation}
Believe it or not, we have already derived this result in Section 31–5 of Vol. I, when we were studying light. We can believe that it is right because it also checks against something else. When we have a light beam, there is an energy density in space given by Eq. (27.14). Using $cB=E$ for a light wave, we get that \begin{equation*} u=\frac{\epsO}{2}\,E^2+\frac{\epsO c^2}{2}\biggl( \frac{E^2}{c^2}\biggr)=\epsO E^2. \end{equation*} But $\FLPE$ varies in space, so the average energy density is \begin{equation} \label{Eq:II:27:18} \av{u} = \epsO\av{E^2}. \end{equation} Now the wave travels at the speed $c$, so we should think that the energy that goes through a square meter in a second is $c$ times the amount of energy in one cubic meter. So we would say that \begin{equation*} \av{S} = \epsO c\av{E^2}. \end{equation*} And it's right; it is the same as Eq. (27.17).
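A quick numerical check of Eqs. (27.16)–(27.18) is easy to run as well; in the sketch below the wave amplitude and frequency are arbitrary assumed values, and the averages are taken over ten full periods.

```python
import numpy as np

eps0, c = 8.854e-12, 2.998e8
E0, omega = 100.0, 2 * np.pi * 5.0e14            # assumed amplitude (V/m) and optical frequency
t = np.linspace(0.0, 20 * np.pi / omega, 200001)  # ten full periods

E = E0 * np.cos(omega * t)
B = E / c                                         # in a light wave |B| = |E|/c
S = eps0 * c**2 * E * B                           # magnitude of eps0 c^2 E x B
u = 0.5 * eps0 * E**2 + 0.5 * eps0 * c**2 * B**2

print(np.mean(S) / (eps0 * c * np.mean(E**2)))    # ~1: Eq. (27.17)
print(np.mean(S) / (c * np.mean(u)))              # ~1: energy density times c
```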
Fig. 27–3. Near a charging capacitor, the Poynting vector $\FigS$ points inward toward the axis.
Now we take another example. Here is a rather curious one. We look at the energy flow in a capacitor that we are charging slowly. (We don't want frequencies so high that the capacitor is beginning to look like a resonant cavity, but we don't want dc either.) Suppose we use a circular parallel plate capacitor of our usual kind, as shown in Fig. 27–3. There is a nearly uniform electric field inside which is changing with time. At any instant the total electromagnetic energy inside is $u$ times the volume. If the plates have a radius $a$ and a separation $h$, the total energy between the plates is \begin{equation} \label{Eq:II:27:19} U=\biggl(\frac{\epsO}{2}\,E^2\biggr)(\pi a^2h). \end{equation} This energy changes when $E$ changes. When the capacitor is being charged, the volume between the plates is receiving energy at the rate \begin{equation} \label{Eq:II:27:20} \ddt{U}{t}=\epsO\pi a^2hE\dot{E}. \end{equation} So there must be a flow of energy into that volume from somewhere. Of course you know that it must come in on the charging wires—not at all! It can't enter the space between the plates from that direction, because $\FLPE$ is perpendicular to the plates; $\FLPE\times\FLPB$ must be parallel to the plates.
You remember, of course, that there is a magnetic field that circles around the axis when the capacitor is charging. We discussed that in Chapter 23. Using the last of Maxwell's equations, we found that the magnetic field at the edge of the capacitor is given by \begin{equation*} 2\pi ac^2B=\dot{E}\cdot\pi a^2, \end{equation*} or \begin{equation*} B=\frac{a}{2c^2}\,\dot{E}. \end{equation*} Its direction is shown in Fig. 27–3. So there is an energy flow proportional to $\FLPE\times\FLPB$ that comes in all around the edges, as shown in the figure. The energy isn't actually coming down the wires, but from the space surrounding the capacitor.
Let's check whether or not the total amount of flow through the whole surface between the edges of the plates checks with the rate of change of the energy inside—it had better; we went through all that work proving Eq. (27.15) to make sure, but let's see. The area of the surface is $2\pi ah$, and $\FLPS=\epsO c^2\FLPE\times\FLPB$ is in magnitude \begin{equation*} \epsO c^2E\biggl(\frac{a}{2c^2}\,\dot{E}\biggr), \end{equation*} so the total flux of energy is \begin{equation*} \pi a^2h\epsO E\dot{E}. \end{equation*} It does check with Eq. (27.20). But it tells us a peculiar thing: that when we are charging a capacitor, the energy is not coming down the wires; it is coming in through the edges of the gap. That's what this theory says!
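The same bookkeeping can be done symbolically; this short sketch (an added check, with the field between the plates left as an arbitrary function of time) confirms that the Poynting flux through the rim reproduces Eq. (27.20).

```python
import sympy as sp

eps0, a, h, c, t = sp.symbols('epsilon_0 a h c t', positive=True)
E = sp.Function('E')(t)                                   # uniform field between the plates

U = sp.Rational(1, 2) * eps0 * E**2 * sp.pi * a**2 * h    # Eq. (27.19)
B_edge = a / (2 * c**2) * sp.diff(E, t)                   # field at the edge of the plates
S_edge = eps0 * c**2 * E * B_edge                         # inward Poynting magnitude at the rim
influx = S_edge * 2 * sp.pi * a * h                       # flux through the rim area 2*pi*a*h

print(sp.simplify(sp.diff(U, t) - influx))                # prints 0
```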
How can that be? That's not an easy question, but here is one way of thinking about it. Suppose that we had some charges above and below the capacitor and far away. When the charges are far away, there is a weak but enormously spread-out field that surrounds the capacitor. (See Fig. 27–4.) Then, as the charges come together, the field gets stronger nearer to the capacitor. So the field energy which is way out moves toward the capacitor and eventually ends up between the plates.
Fig. 27–4. The fields outside a capacitor when it is being charged by bringing two charges from a large distance.
As another example, we ask what happens in a piece of resistance wire when it is carrying a current. Since the wire has resistance, there is an electric field along it, driving the current. Because there is a potential drop along the wire, there is also an electric field just outside the wire, parallel to the surface. (See Fig. 27–5.) There is, in addition, a magnetic field which goes around the wire because of the current. The $\FLPE$ and $\FLPB$ are at right angles; therefore there is a Poynting vector directed radially inward, as shown in the figure. There is a flow of energy into the wire all around. It is, of course, equal to the energy being lost in the wire in the form of heat. So our "crazy" theory says that the electrons are getting their energy to generate heat because of the energy flowing into the wire from the field outside. Intuition would seem to tell us that the electrons get their energy from being pushed along the wire, so the energy should be flowing down (or up) along the wire. But the theory says that the electrons are really being pushed by an electric field, which has come from some charges very far away, and that the electrons get their energy for generating heat from these fields. The energy somehow flows from the distant charges into a wide area of space and then inward to the wire.
Fig. 27–5. The Poynting vector $\FigS$ near a wire carrying a current.
Finally, in order to really convince you that this theory is obviously nuts, we will take one more example—an example in which an electric charge and a magnet are at rest near each other—both sitting quite still. Suppose we take the example of a point charge sitting near the center of a bar magnet, as shown in Fig. 27–6. Everything is at rest, so the energy is not changing with time. Also, $\FLPE$ and $\FLPB$ are quite static. But the Poynting vector says that there is a flow of energy, because there is an $\FLPE\times\FLPB$ that is not zero. If you look at the energy flow, you find that it just circulates around and around. There isn't any change in the energy anywhere—everything which flows into one volume flows out again. It is like incompressible water flowing around. So there is a circulation of energy in this so-called static condition. How absurd it gets!
Fig. 27–6. A charge and a magnet produce a Poynting vector that circulates in closed loops.
Perhaps it isn't so terribly puzzling, though, when you remember that what we called a "static" magnet is really a circulating permanent current. In a permanent magnet the electrons are spinning permanently inside. So maybe a circulation of the energy outside isn't so queer after all.
You no doubt begin to get the impression that the Poynting theory at least partially violates your intuition as to where energy is located in an electromagnetic field. You might believe that you must revamp all your intuitions, and, therefore have a lot of things to study here. But it seems really not necessary. You don't need to feel that you will be in great trouble if you forget once in a while that the energy in a wire is flowing into the wire from the outside, rather than along the wire. It seems to be only rarely of value, when using the idea of energy conservation, to notice in detail what path the energy is taking. The circulation of energy around a magnet and a charge seems, in most circumstances, to be quite unimportant. It is not a vital detail, but it is clear that our ordinary intuitions are quite wrong.
27–6 Field momentum
Next we would like to talk about the momentum in the electromagnetic field. Just as the field has energy, it will have a certain momentum per unit volume. Let us call that momentum density $\FLPg$. Of course, momentum has various possible directions, so that $\FLPg$ must be a vector. Let's talk about one component at a time; first, we take the $x$-component. Since each component of momentum is conserved we should be able to write down a law that looks something like this: \begin{equation*} -\ddp{}{t} \begin{pmatrix} \text{momentum}\\ \text{of matter} \end{pmatrix} _x\!\!=\ddp{g_x}{t}+ \begin{pmatrix} \text{momentum}\\ \text{outflow} \end{pmatrix} _x. \end{equation*} The left side is easy. The rate-of-change of the momentum of matter is just the force on it. For a particle, it is $\FLPF=q(\FLPE+\FLPv\times\FLPB)$; for a distribution of charges, the force per unit volume is $(\rho\FLPE+\FLPj\times\FLPB)$. The "momentum outflow" term, however, is strange. It cannot be the divergence of a vector because it is not a scalar; it is, rather, an $x$-component of some vector. Anyway, it should probably look something like \begin{equation*} \ddp{a}{x}+\ddp{b}{y}+\ddp{c}{z}, \end{equation*} because the $x$-momentum could be flowing in any one of the three directions. In any case, whatever $a$, $b$, and $c$ are, the combination is supposed to equal the outflow of the $x$-momentum.
Now the game would be to write $\rho\FLPE+\FLPj\times\FLPB$ in terms only of $\FLPE$ and $\FLPB$—eliminating $\rho$ and $\FLPj$ by using Maxwell's equations—and then to juggle terms and make substitutions to get it into a form that looks like \begin{equation*} \ddp{g_x}{t}+\ddp{a}{x}+\ddp{b}{y}+\ddp{c}{z}. \end{equation*} Then, by identifying terms, we would have expressions for $g_x$, $a$, $b$, and $c$. It's a lot of work, and we are not going to do it. Instead, we are only going to find an expression for $\FLPg$, the momentum density—and by a different route.
There is an important theorem in mechanics which is this: whenever there is a flow of energy in any circumstance at all (field energy or any other kind of energy), the energy flowing through a unit area per unit time, when multiplied by $1/c^2$, is equal to the momentum per unit volume in the space. In the special case of electrodynamics, this theorem gives the result that $\FLPg$ is $1/c^2$ times the Poynting vector: \begin{equation} \label{Eq:II:27:21} \FLPg=\frac{1}{c^2}\,\FLPS. \end{equation} So the Poynting vector gives not only energy flow but, if you divide by $c^2$, also the momentum density. The same result would come out of the other analysis we suggested, but it is more interesting to notice this more general result. We will now give a number of interesting examples and arguments to convince you that the general theorem is true.
First example: Suppose that we have a lot of particles in a box—let's say $N$ per cubic meter—and that they are moving along with some velocity $\FLPv$. Now let's consider an imaginary plane surface perpendicular to $\FLPv$. The energy flow through a unit area of this surface per second is equal to $Nv$, the number which flow through the surface per second, times the energy carried by each one. The energy in each particle is $m_0c^2/\sqrt{1-v^2/c^2}$. So the energy flow per second is \begin{equation*} Nv\,\frac{m_0c^2}{\sqrt{1-v^2/c^2}}. \end{equation*} But the momentum of each particle is $m_0v/\sqrt{1-v^2/c^2}$, so the density of momentum is \begin{equation*} N\,\frac{m_0v}{\sqrt{1-v^2/c^2}}, \end{equation*} which is just $1/c^2$ times the energy flow—as the theorem says. So the theorem is true for a bunch of particles.
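A small numerical illustration of this particle example, with an assumed rest mass, speed, and number density, makes the factor of $1/c^2$ explicit:

```python
import numpy as np

c = 2.998e8
m0, v, N = 9.11e-31, 0.6 * c, 1.0e20          # assumed rest mass, speed, number density
gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)

energy_flow = N * v * m0 * c**2 * gamma       # energy per unit area per second
g = N * m0 * v * gamma                        # momentum per unit volume

print(energy_flow / c**2, g)                  # the two numbers coincide
```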
It is also true for light. When we studied light in Volume I, we saw that when the energy is absorbed from a light beam, a certain amount of momentum is delivered to the absorber. We have, in fact, shown in Chapter 34 of Vol. I that the momentum is $1/c$ times the energy absorbed [Eq. (34.24) of Vol. I]. If we let $\FLPU$ be the energy arriving at a unit area per second, then the momentum arriving at a unit area per second is $\FLPU/c$. But the momentum is travelling at the speed $c$, so its density in front of the absorber must be $\FLPU/c^2$. So again the theorem is right.
Fig. 27–7. The energy $U$ in motion at the speed $c$ carries the momentum $U/c$.
Finally we will give an argument due to Einstein which demonstrates the same thing once more. Suppose that we have a railroad car on wheels (assumed frictionless) with a certain big mass $M$. At one end there is a device which will shoot out some particles or light (or anything, it doesn't make any difference what it is), which are then stopped at the opposite end of the car. There was some energy originally at one end—say the energy $U$ indicated in Fig. 27–7(a)—and then later it is at the opposite end, as shown in Fig. 27–7(c). The energy $U$ has been displaced the distance $L$, the length of the car. Now the energy $U$ has the mass $U/c^2$, so if the car stayed still, the center of gravity of the car would be moved. Einstein didn't like the idea that the center of gravity of an object could be moved by fooling around only on the inside, so he assumed that it is impossible to move the center of gravity by doing anything inside. But if that is the case, when we moved the energy $U$ from one end to the other, the whole car must have recoiled some distance $x$, as shown in part (c) of the figure. You can see, in fact, that the total mass of the car, times $x$, must equal the mass of the energy moved, $U/c^2$ times $L$ (assuming that $U/c^2$ is much less than $M$): \begin{equation} \label{Eq:II:27:22} Mx=\frac{U}{c^2}\,L. \end{equation}
Let's now look at the special case of the energy being carried by a light flash. (The argument would work as well for particles, but we will follow Einstein, who was interested in the problem of light.) What causes the car to be moved? Einstein argued as follows: When the light is emitted there must be a recoil, some unknown recoil with momentum $p$. It is this recoil which makes the car roll backward. The recoil velocity $v$ of the car will be this momentum divided by the mass of the car: \begin{equation*} v=\frac{p}{M}. \end{equation*} The car moves with this velocity until the light energy $U$ gets to the opposite end. Then, when it hits, it gives back its momentum and stops the car. If $x$ is small, then the time the car moves is nearly equal to $L/c$; so we have that \begin{equation*} x=vt=v\,\frac{L}{c}=\frac{p}{M}\,\frac{L}{c}. \end{equation*} Putting this $x$ in Eq. (27.22), we get that \begin{equation*} p=\frac{U}{c}. \end{equation*} Again we have the relation of energy and momentum for light, from which the argument above shows the momentum density is \begin{equation} \label{Eq:II:27:23} \FLPg=\frac{\FLPU}{c^2}. \end{equation}
You may well wonder: What is so important about the center-of-gravity theorem? Maybe it is wrong. Perhaps, but then we would also lose the conservation of angular momentum. Suppose that our boxcar is moving along a track at some speed $v$ and that we shoot some light energy from the top to the bottom of the car—say, from $A$ to $B$ in Fig. 27–8. Now we look at the angular momentum of the system about the point $P$. Before the energy $U$ leaves $A$, it has the mass $m=U/c^2$ and the velocity $v$, so it has the angular momentum $mvr_A$. When it arrives at $B$, it has the same mass and, if the linear momentum of the whole boxcar is not to change, it must still have the velocity $v$. Its angular momentum about $P$ is then $mvr_B$. The angular momentum will be changed unless the right recoil momentum was given to the car when the light was emitted—that is, unless the light carries the momentum $U/c$. It turns out that the angular momentum conservation and the theorem of center-of-gravity are closely related in the relativity theory. So the conservation of angular momentum would also be destroyed if our theorem were not true. At any rate, it does turn out to be a true general law, and in the case of electrodynamics we can use it to get the momentum in the field.
Fig. 27–8. The energy $U$ must carry the momentum $U/c$ if the angular momentum about $P$ is to be conserved.
We will mention two further examples of momentum in the electromagnetic field. We pointed out in Section 26–2 the failure of the law of action and reaction when two charged particles were moving on orthogonal trajectories. The forces on the two particles don't balance out, so the action and reaction are not equal; therefore the net momentum of the matter must be changing. It is not conserved. But the momentum in the field is also changing in such a situation. If you work out the amount of momentum given by the Poynting vector, it is not constant. However, the change of the particle momenta is just made up by the field momentum, so the total momentum of particles plus field is conserved.
Finally, another example is the situation with the magnet and the charge, shown in Fig. 27–6. We were unhappy to find that energy was flowing around in circles, but now, since we know that energy flow and momentum are proportional, we know also that there is momentum circulating in the space. But a circulating momentum means that there is angular momentum. So there is angular momentum in the field. Do you remember the paradox we described in Section 17–4 about a solenoid and some charges mounted on a disc? It seemed that when the current turned off, the whole disc should start to turn. The puzzle was: Where did the angular momentum come from? The answer is that if you have a magnetic field and some charges, there will be some angular momentum in the field. It must have been put there when the field was built up. When the field is turned off, the angular momentum is given back. So the disc in the paradox would start rotating. This mystic circulating flow of energy, which at first seemed so ridiculous, is absolutely necessary. There is really a momentum flow. It is needed to maintain the conservation of angular momentum in the whole world.
Measure of Lee-Yang zeros
Consider a statistical mechanical system (say the 1D Ising model) on a finite lattice of size $N$, and call the corresponding partition function (as a function of, say, real temperature and real magnetic field) $Z^{(N)}(t, h)$, where $t$ is the temperature and $h$ the magnetic field.
The partition function $Z$ is analytic (in finite volume $N$) and doesn't admit any zeros. However, as soon as one passes to complex field $h$ (or temperature, but let's consider complex field here), $Z$ admits zeros on the unit circle $S^1$ in $\mathbb{C}$.
Call the set of zeros $\mathcal{Z}_N$, where $N$ emphasizes finite lattice of size $N$. It is in general a nontrivial problem to decide whether the sequence of sets $\{\mathcal{Z}_N\}_{N\in\mathbb{N}}$ accumulates on some set in $S^1$, and if it does, to describe the topology of this limit set, which we'll call $\mathcal{Z}$.
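For concreteness, the finite-volume sets $\mathcal{Z}_N$ can be computed by brute force for small chains; the sketch below is only an illustration added here, it assumes a homogeneous ferromagnetic nearest-neighbor coupling (the exactly solvable case mentioned in the edit below), and it recovers the circle theorem numerically.

```python
import itertools
import numpy as np

def lee_yang_zeros(N, J=1.0, beta=1.0):
    """Zeros, in the fugacity z = exp(-2*beta*h), of the partition function of a
    periodic 1D Ising chain with N spins (brute force, so keep N small)."""
    coeffs = np.zeros(N + 1)                          # coefficient of z**n_down
    for spins in itertools.product((1, -1), repeat=N):
        n_down = spins.count(-1)
        interaction = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        coeffs[n_down] += np.exp(beta * J * interaction)
    return np.roots(coeffs[::-1])                     # np.roots wants highest degree first

zeros = lee_yang_zeros(N=10)
print(np.abs(zeros))   # all equal to 1 up to rounding: the zeros lie on the unit circle
```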
Now, suppose that for a given system we proved that there does indeed exist a nonempty set $\mathcal{Z}$ such that $\mathcal{Z}_N\rightarrow\mathcal{Z}$ as $N\rightarrow\infty$ (in some sense - say in Hausdorff metric).
Is there a natural measure $\mu$ defined on $\mathcal{Z}$ that has physical meaning? If so, what sort of properties of this measure are physically relevant (say, relating to phase transitions)?
In my mind this is quite a natural question, because it translates into "Is there a natural way to measure the set where the system develops critical behavior?"
For example, one candidate would be the Hausdorff dimension. But I am interested more in something that would measure the density of zeros in a natural way (such as, for example, the density of states measure for quantum Hamiltonians).
EDIT: I know, of course, that the 1D Ising model is exactly solvable when interaction strength and magnetic field are constant. Here I implicitly assume that interaction (nearest neighbor, to keep it simple) and/or the magnetic field depend on lattice sites.
statistical-mechanics
asked Mar 1, 2012 in Theoretical Physics by WNY (45 points) [ no revision ]
I think the original paper by Yang and Lee explains this. I believe they actually show (or just assume) that in the thermodynamic limit there is a well-behaved density of zeros, and then use this density to calculate things like critical exponents.
commented Mar 2, 2012 by genneth (565 points) [ no revision ]
I of course suspected that it should be (naturally) zero distribution. In fact, I think the distribution of zeros measure in the thermodynamic limit is just weak limit of distribution of zeros measures on finite lattices (as the lattice size grows to infinity). Thank you for the reference!
commented Mar 2, 2012 by WNY (45 points) [ no revision ]
@genneth: you're right. Note, however, that, as far as I know, one can say essentially nothing about the limiting density (in dimensions 2 and higher, at low temperature). It is, e.g., not known whether the distribution of zeros in the thermodynamic limit allows analytic continuation from {Re(h)>0} to {Re(h)<0} (it is only known that such an analytic continuation cannot be done through h=0, since there is an essential singularity there).
commented Mar 2, 2012 by Yvan Velenik (1,110 points) [ no revision ]
@Yvan: Also, in dimension one, as far as I know, depending on choice of interaction, distribution of zeros may also present a nontrivial problem, or am I wrong? In other words, I think even in dimension one, results and techniques are model dependent. Is this true?
@WNY: I guess that it depends what information you want to extract (obviously, if the coupling constants are uniformly bounded and decay fast enough with the distance between the corresponding spins, then there won't be a phase transition in the model). Now, independently of what you want to extract, I'd guess that the determination of the asymptotic locations of zeros is certainly difficult (and usually impossible), in general, when the interaction is not finite-range and periodic...
@YvanVelenik: I think I agree with what you wrote, but just to make sure: one can certainly be sure of the analyticity on the way to the thermodynamic limit --- and always the limit should be the last step, as is the whole point of the Y/L papers.
commented Mar 12, 2012 by genneth (565 points)
@WNY: approaching thermodynamics via the zeros seems very elegant, and I think people stochastically and independently re-discover this fact, work on it, and find that it's essentially intractable. Certainly, I know of no influential (physics) literature which builds in this direction --- despite having friends who work exactly on this. I don't know if that meta-observation is worth something ;-)
January 2019, 18(1): 323-340. doi: 10.3934/cpaa.2019017
On the positive semigroups generated by Fleming-Viot type differential operators
Francesco Altomare 1,, , Mirella Cappelletti Montano 1, and Vita Leonessa 2,
Dipartimento di Matematica, Università degli Studi di Bari Aldo Moro, Campus Universitario, Via Edoardo Orabona n. 4, 70125 Bari, Italy
Dipartimento di Matematica, Informatica ed Economia, Università degli Studi della Basilicata, Campus di Macchia Romana, Viale Dell' Ateneo Lucano n. 10, 85100 Potenza, Italy
Received January 2018 Revised April 2018 Published August 2018
Fund Project: Work partially supported by the Italian INDAM-GNAMPA.
In this paper we study a class of degenerate second-order elliptic differential operators, often referred to as Fleming-Viot type operators, in the framework of function spaces defined on the $d$-dimensional hypercube $Q_d$ of $\mathbf{R}^d$, $d \geq 1$.
By making mainly use of techniques arising from approximation theory, we show that their closures generate positive semigroups both in the space of all continuous functions and in weighted $L^{p}$-spaces.
In addition, we show that the semigroups are approximated by iterates of certain polynomial type positive linear operators, which we introduce and study in this paper and which generalize the Bernstein-Durrmeyer operators with Jacobi weights on $[0, 1]$.
As a consequence, after determining the unique invariant measure for the approximating operators and for the semigroups, we establish some of their regularity properties along with their asymptotic behaviours.
Keywords: Degenerate second-order elliptic differential operator, Fleming-Viot type differential operator, positive semigroup, approximation of semigroups, Bernstein-Durrmeyer operators with Jacobi weights.
Mathematics Subject Classification: Primary: 47D06, 47D07; Secondary: 41A36, 35K65.
Citation: Francesco Altomare, Mirella Cappelletti Montano, Vita Leonessa. On the positive semigroups generated by Fleming-Viot type differential operators. Communications on Pure & Applied Analysis, 2019, 18 (1) : 323-340. doi: 10.3934/cpaa.2019017
Purshottam Narain Agrawal, Şule Yüksel Güngör, Abhishek Kumar. Better degree of approximation by modified Bernstein-Durrmeyer type operators. Mathematical Foundations of Computing, 2021 doi: 10.3934/mfc.2021024
José F. Cariñena, Irina Gheorghiu, Eduardo Martínez. Jacobi fields for second-order differential equations on Lie algebroids. Conference Publications, 2015, 2015 (special) : 213-222. doi: 10.3934/proc.2015.0213
András Bátkai, Istvan Z. Kiss, Eszter Sikolya, Péter L. Simon. Differential equation approximations of stochastic network processes: An operator semigroup approach. Networks & Heterogeneous Media, 2012, 7 (1) : 43-58. doi: 10.3934/nhm.2012.7.43
Bernd Kawohl, Vasilii Kurta. A Liouville comparison principle for solutions of singular quasilinear elliptic second-order partial differential inequalities. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1747-1762. doi: 10.3934/cpaa.2011.10.1747
Kyeong-Hun Kim, Kijung Lee. A weighted $L_p$-theory for second-order parabolic and elliptic partial differential systems on a half space. Communications on Pure & Applied Analysis, 2016, 15 (3) : 761-794. doi: 10.3934/cpaa.2016.15.761
Nguyen Thi Hoai. Asymptotic approximation to a solution of a singularly perturbed linear-quadratic optimal control problem with second-order linear ordinary differential equation of state variable. Numerical Algebra, Control & Optimization, 2021, 11 (4) : 495-512. doi: 10.3934/naco.2020040
W. Sarlet, G. E. Prince, M. Crampin. Generalized submersiveness of second-order ordinary differential equations. Journal of Geometric Mechanics, 2009, 1 (2) : 209-221. doi: 10.3934/jgm.2009.1.209
Jaume Llibre, Amar Makhlouf. Periodic solutions of some classes of continuous second-order differential equations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 477-482. doi: 10.3934/dcdsb.2017022
Xuan Wu, Huafeng Xiao. Periodic solutions for a class of second-order differential delay equations. Communications on Pure & Applied Analysis, 2021, 20 (12) : 4253-4269. doi: 10.3934/cpaa.2021159
Daniel Grieser. A natural differential operator on conic spaces. Conference Publications, 2011, 2011 (Special) : 568-577. doi: 10.3934/proc.2011.2011.568
Abdelkader Boucherif. Positive Solutions of second order differential equations with integral boundary conditions. Conference Publications, 2007, 2007 (Special) : 155-159. doi: 10.3934/proc.2007.2007.155
Lijun Yi, Zhongqing Wang. Legendre spectral collocation method for second-order nonlinear ordinary/partial differential equations. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 299-322. doi: 10.3934/dcdsb.2014.19.299
Osama Moaaz, Omar Bazighifan. Oscillation criteria for second-order quasi-linear neutral functional differential equation. Discrete & Continuous Dynamical Systems - S, 2020, 13 (9) : 2465-2473. doi: 10.3934/dcdss.2020136
Maria Do Rosario Grossinho, Rogério Martins. Subharmonic oscillations for some second-order differential equations without Landesman-Lazer conditions. Conference Publications, 2001, 2001 (Special) : 174-181. doi: 10.3934/proc.2001.2001.174
Qiong Meng, X. H. Tang. Multiple solutions of second-order ordinary differential equation via Morse theory. Communications on Pure & Applied Analysis, 2012, 11 (3) : 945-958. doi: 10.3934/cpaa.2012.11.945
Willy Sarlet, Tom Mestdag. Compatibility aspects of the method of phase synchronization for decoupling linear second-order differential equations. Journal of Geometric Mechanics, 2021 doi: 10.3934/jgm.2021019
Doria Affane, Mustapha Fateh Yarou. Well-posed control problems related to second-order differential inclusions. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021042
Zoltan Satmari. Iterative Bernstein splines technique applied to fractional order differential equations. Mathematical Foundations of Computing, 2021 doi: 10.3934/mfc.2021039
Shouchuan Hu, Nikolaos S. Papageorgiou. Nonlinear Neumann equations driven by a nonhomogeneous differential operator. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1055-1078. doi: 10.3934/cpaa.2011.10.1055
Angelo Favini, Yakov Yakubov. Regular boundary value problems for ordinary differential-operator equations of higher order in UMD Banach spaces. Discrete & Continuous Dynamical Systems - S, 2011, 4 (3) : 595-614. doi: 10.3934/dcdss.2011.4.595
welcome reception (including food and beverages)
opening - Roderich Moessner, director of the mpipks & scientific coordinators
Spin Liquids (chair: Sebastian Eggert)
09:00 - 09:20 Maria Hermanns (Universität zu Köln)
Kitaev spin liquids
The Kitaev honeycomb model is arguably one of the most influential examples of a topologically ordered phase of matter. At the heart of this model is the Kitaev interaction, which is of Ising type, but where the exchange easy-axis depends on the bond direction. It is one of the few highly frustrated spin models that is exactly solvable and, thus, has shaped our understanding of quantum spin liquid phases in general. In recent years, it has been shown that Kitaev interactions occur naturally in certain transition metal compounds, which are commonly referred to as Kitaev materials. An important hallmark of these systems is the surprising variety of quantum spin liquids that can be realized, in particular in three spatial dimensions. The richness of the theoretical model combined with recent breakthroughs in material synthesis has led to a rapidly growing field which is marked by the strong interplay of theory and experiment. In this talk, we'll give an overview of the physics of three-dimensional Kitaev spin liquids, identify hallmarks of topological order and discuss how these may be relevant for the materials.
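As a pointer to what "exactly solvable" means in practice, here is a minimal sketch of the zero-flux-sector solution in one common convention, where the matter Majorana fermions disperse as $\varepsilon(\mathbf{k})=2\,|J_x e^{i\mathbf{k}\cdot\mathbf{n}_1}+J_y e^{i\mathbf{k}\cdot\mathbf{n}_2}+J_z|$ with $\mathbf{n}_{1,2}$ the triangular-lattice primitive vectors; the couplings and grid below are illustrative only, and the sketch merely distinguishes the gapless B phase from the gapped A phases:

```python
import numpy as np

def kitaev_gap(Jx, Jy, Jz, nk=120):
    """Minimum of the matter-Majorana dispersion eps(k) = 2*|f(k)| of the Kitaev
    honeycomb model in the zero-flux sector, with
    f(k) = Jx*exp(i k.n1) + Jy*exp(i k.n2) + Jz   (one common convention),
    n1, n2 being the primitive vectors of the underlying triangular lattice."""
    n1 = np.array([0.5, np.sqrt(3) / 2])
    n2 = np.array([-0.5, np.sqrt(3) / 2])
    A = np.array([n1, n2])
    B = 2 * np.pi * np.linalg.inv(A).T            # reciprocal vectors, B[i] . A[j] = 2 pi delta_ij
    gap = np.inf
    for u in np.linspace(0, 1, nk, endpoint=False):
        for v in np.linspace(0, 1, nk, endpoint=False):
            k = u * B[0] + v * B[1]
            f = Jx * np.exp(1j * (k @ n1)) + Jy * np.exp(1j * (k @ n2)) + Jz
            gap = min(gap, 2 * abs(f))
    return gap

print(kitaev_gap(1.0, 1.0, 1.0))   # isotropic point: gapless B phase (0 up to grid resolution)
print(kitaev_gap(1.0, 1.0, 3.0))   # |Jz| > |Jx| + |Jy|: gapped A phase (here 2.0)
```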
09:20 - 09:40 Bella Lake (Helmholtz Zentrum Berlin für Materialien und Energie GmbH)
Physical realization of a new quantum spin liquid based on a novel frustration mechanism
Unlike conventional magnets where the magnetic moments are partially or completely static in the ground state, in a quantum spin liquid they remain in collective motion down to the lowest temperatures. The importance of this state is that it is coherent and highly entangled without breaking local symmetries. Such behavior is usually sought in simple lattices where antiferromagnetic interactions and/or anisotropies favoring specific alignments of the magnetic moments are frustrated by lattice geometries incompatible with such order. Despite an extensive search among such compounds, experimental realizations remain very few. Here we investigate the new spin-1/2 magnet, Ca10Cr7O28, which has a novel unexplored lattice with several isotropic interactions consisting of strong ferromagnetic and weaker antiferromagnetic couplings. Despite its unconventional structure and Hamiltonian, we show experimentally that it displays all the features expected of a quantum spin liquid. Bulk properties measurements, neutron scattering and muon spin relaxation reveal coherent spin dynamics in the ground state, the complete absence of static magnetism and diffuse spinon excitations. Pseudo-Fermion renormalization group calculations verify that the Hamiltonian of Ca10Cr7O28 supports a dynamical ground state which furthermore is robust to significant variations of the exchange constants.
09:40 - 10:00 Johannes Reuther (Freie Universität Berlin)
Functional renormalization-group perspective on the spin-liquid candidate \(Ca_{10}Cr_{7}O_{28}\)
We theoretically investigate the magnetic properties of the bilayer-kagome spin-liquid candidate $Ca_{10}Cr_7O_{28}$. Taking the experimentally observed exchange couplings as an input, we apply the pseudo-fermion functional renormalization group (PFFRG) method to calculate the spin-structure factor and magnetic susceptibility. In agreement with experiments, the temperature dependence of the susceptibility indicates a non-magnetic ground state. We further find qualitative agreement between the calculated and measured spin-structure factors, particularly, our simulations reproduce the characteristic ring-like correlation profile in momentum space which can be interpreted as a signature of molten 120 degree Neel order. By tuning the model parameters away from those realized in $Ca_{10}Cr_7O_{28}$ we show that the spin-liquid phase is of remarkable stability that is rooted in a hierarchy of different frustration and fluctuation effects.
10:00 - 10:20 Alexander Tsirlin (Universität Augsburg)
Spin liquids in triangular antiferromagnets
Heavy transition metals ($5d$) and rare-earth atoms ($4f$) entail strong spin-orbit coupling that renders magnetic interactions anisotropic. This creates new mechanisms of magnetic frustration, which can give rise to quantum spin-liquid behavior at low temperatures. In this contribution, I will present recent experimental results on two quantum spin-liquid candidates, Ba$_3$InIr$_2$O$_9$ and YbMgGaO$_4$. YbMgGaO$_4$ entails Yb$^{3+}$ spins arranged on a triangular lattice. From low-temperature specific-heat measurements the absence of a spin gap is inferred, and the $T^{2/3}$ power-law behavior resembles the U(1) quantum spin liquid. $\mu$SR confirms the absence of magnetic ordering or spin freezing down to at least 50 mK. However, inelastic neutron scattering reveals large broadening of crystal-field excitations and other signatures of structural disorder that we ascribe to the random distribution of Mg and Ga in the crystal structure. Ba$_3$InIr$_2$O$_9$ is a mixed-valence compound, where local moments are on Ir-Ir dimers. The interactions between these dimers form triangular and honeycomb geometries simultaneously. Using neutron scattering, NMR, and $\mu$SR we demonstrate that this material is free from structural disorder and shows persistent spin dynamics down to at least 20 mK, with the magnetic susceptibility reaching a constant value below 1 K. The spin-lattice relaxation rate follows power-law behavior suggesting the absence of a spin gap.
Methods 1 (chair: Maria Hermanns)
11:00 - 11:20 Matthias Vojta (Technische Universität Dresden)
Heisenberg-Kitaev physics in magnetic fields
The Heisenberg-Kitaev model and its variants have attracted enormous attention recently, as they are believed to describe the physics proximate to the Kitaev spin liquid as realized in honeycomb-lattice materials such as Na$_2$IrO$_3$ or $\alpha$-RuCl$_3$. We have investigated a variety of these models in applied magnetic fields using semiclassical techniques. The results reveal surprisingly rich phase diagrams, with non-trivial intermediate phases including vortex crystals and other multi-Q states. We discuss possible origins of large magnetic anisotropies as observed experimentally, and we highlight different mechanisms to stabilize zigzag magnetic order and their distinct field response.
11:20 - 11:40 Stefan Wessel (RWTH Aachen)
Quantum Monte Carlo of frustrated spin systems in the spin dimer basis
Quantum Monte Carlo simulations of frustrated quantum spin systems are usually plagued by a severe sign problem. However, due to its dependence on the underlying computational basis, it can be feasible to reduce or even avoid the sign problem by an appropriate local basis choice, which reduces the entanglement between the computational unit cells. Here, we consider in particular the possibility of simulating dimerized quantum spin systems within a spin-dimer basis world-line formulation. We discuss this simulation scheme and present results for thermodynamic properties of the frustrated spin-half Heisenberg ladder as well as for different frustrated two-dimensional quantum spin systems.
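As a small worked example of the basis choice in question (standard algebra for a single rung with coupling $J_\perp$, not specific to the talk), the rung term is diagonal in the total-spin (dimer) basis, since $\mathbf{S}_1\cdot\mathbf{S}_2=\tfrac12\big[(\mathbf{S}_1+\mathbf{S}_2)^2-\mathbf{S}_1^2-\mathbf{S}_2^2\big]$ gives
\[ J_\perp\,\mathbf{S}_1\cdot\mathbf{S}_2\,|s\rangle=-\tfrac34\,J_\perp\,|s\rangle, \qquad J_\perp\,\mathbf{S}_1\cdot\mathbf{S}_2\,|t_m\rangle=+\tfrac14\,J_\perp\,|t_m\rangle \quad (m=0,\pm1), \]
with the singlet $|s\rangle=(|\!\uparrow\downarrow\rangle-|\!\downarrow\uparrow\rangle)/\sqrt2$. In a world-line expansion built on these rung multiplets, frustrated inter-dimer couplings enter only through their matrix elements between singlet and triplet states, which is how a suitable local basis can reduce (though not always remove) the sign problem.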
11:40 - 12:00 Lode Pollet (Ludwig-Maximilians-Universität München)
Momentum cluster approach to bosonic Harper-Hofstadter models
Since the discovery of the quantum Hall effect, the lattice geometry's influence on charged particles in magnetic fields has been the subject of extensive research. Prototypical models such as the non-interacting Harper-Hofstadter model exhibit fractionalization of the Bloch bands with non-trivial topology, manifesting in quantum Hall phases. Ultracold atomic gases with artificial magnetic fields enabled the experimental study of the non-interacting model, while the effect of strong interactions on the band properties remains an open problem. We study the strongly-interacting bosonic Harper-Hofstadter-Mott model using a reciprocal cluster mean-field (RCMF) method, finding several stable and metastable gapped phases with possibly (intrinsic) topological order competing with superfluid phases. We discuss the consequences for experimental realizations and outline strategies to improve the accuracy of the method.
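For reference, a minimal sketch of the non-interacting starting point (the interacting RCMF treatment of the talk is beyond this): at rational flux $p/q$ per plaquette and in the Landau gauge, the Harper-Hofstadter Hamiltonian reduces to a $q\times q$ magnetic Bloch matrix, and its eigenvalues over the magnetic Brillouin zone give the $q$ fractionalized subbands. Gauge, hopping amplitude and $k$-grid are illustrative assumptions:

```python
import numpy as np

def hofstadter_bands(p, q, t=1.0, nk=40):
    """Eigenvalues of the Harper-Hofstadter model on the square lattice with flux
    p/q per plaquette (Landau gauge), sampled on a k-grid of the magnetic
    Brillouin zone. Returns an array of shape (nk*nk, q): q magnetic subbands."""
    alpha = p / q
    evs = []
    for kx in np.linspace(0, 2 * np.pi / q, nk, endpoint=False):
        for ky in np.linspace(0, 2 * np.pi, nk, endpoint=False):
            H = np.zeros((q, q), dtype=complex)
            for m in range(q):
                H[m, m] = -2 * t * np.cos(ky + 2 * np.pi * alpha * m)
                H[m, (m + 1) % q] += -t * np.exp(1j * kx)
                H[(m + 1) % q, m] += -t * np.exp(-1j * kx)
            evs.append(np.linalg.eigvalsh(H))
    return np.array(evs)

bands = hofstadter_bands(1, 3)                    # flux 1/3: three separated magnetic subbands
for b in range(3):
    print("band", b, ":", bands[:, b].min(), "to", bands[:, b].max())
```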
12:00 - 12:20 Stefan Kehrein (Georg-August-Universität Göttingen)
Flow equation holography
AdS/CFT correspondence and the holographic principle have inspired a new way of looking at interacting quantum field theories in both equilibrium and non-equilibrium. A major challenge for a condensed matter theorist is that they only apply to a very special class of supersymmetric conformal gauge theories in a large-N limit. In this talk I will present a new approach that captures an important aspect of AdS/CFT correspondence for eigenstates of generic many-body Hamiltonians, namely the holographic formulation of entanglement entropies (Ryu-Takayanagi conjecture). I will illustrate this with some explicit calculations for fermionic systems in one and two dimensions. Reference: S. Kehrein, preprint arXiv:1703.03925.
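The quantity being computed can be made concrete with the standard correlation-matrix recipe for free fermions (textbook material, not the flow-equation construction of the talk): for a Gaussian state, the entanglement entropy of a subregion follows from the eigenvalues of the restricted correlation matrix $C_{ij}=\langle c^\dagger_i c_j\rangle$. The sketch below does this for the ground state of a one-dimensional tight-binding chain, where the familiar logarithmic growth appears; chain length and subsystem sizes are arbitrary:

```python
import numpy as np

def entanglement_entropy(L, ell):
    """Von Neumann entanglement entropy of the first `ell` sites of the ground
    state of an open free-fermion (tight-binding) chain of L sites at half
    filling, computed from the restricted correlation matrix C_ij = <c_i^dag c_j>."""
    H = -(np.eye(L, k=1) + np.eye(L, k=-1))       # single-particle hopping Hamiltonian
    eps, U = np.linalg.eigh(H)
    occ = U[:, : L // 2]                          # occupy the L/2 lowest single-particle modes
    C = occ @ occ.T                               # correlation matrix (U is real here)
    lam = np.linalg.eigvalsh(C[:ell, :ell])
    lam = lam[(lam > 1e-12) & (lam < 1 - 1e-12)]
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))

for ell in (4, 8, 16, 32):
    print(ell, entanglement_entropy(200, ell))    # grows ~ (1/6) ln(ell) for a boundary interval (c = 1)
```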
Topology & Correlations 1 (chair: Matthias Vojta)
14:00 - 14:20 Ronny Thomale (Universität Würzburg)
High temperature quantum spin Hall effect
The field of the quantum Hall effect witnessed a huge stimulus when graphene turned out to be a suitable host material to observe the integer effect at room temperature and the fractional effect at temperatures that were orders of magnitude higher than accomplished previously. This step is still to come in the field of topological insulators, in particular regarding the quantum spin Hall effect. I will report on a new theoretical scheme to create a systematic framework for the high-temperature quantum spin Hall effect. We illustrate the idea with the example of a joint experimental/theoretical analysis of Bi/SiC (0001), where by STM we identify the edge state density of states and by ARPES we confirm a bulk gap of 650 meV, nearly two orders of magnitude higher than that of HgTe.
14:20 - 14:40 Jan Carl Budich (University of Gothenburg)
From strongly correlated topological insulators to first-order topological quantum phase transitions
We discuss the influence of electronic correlations on the bulk properties of topological band insulators. In particular, we show how a Hund coupling favoring high spin configurations can provide a generic mechanism for inducing topological insulator phases in multi-band Hubbard models. Interestingly, strong correlations can have qualitative effects on the transition from a trivial band insulator to a topological insulator, ultimately leading to the occurrence of first-order topological quantum phase transitions.
14:40 - 15:00 Jens Eisert (Freie Universität Berlin)
Fermionic topological quantum states as tensor networks
Tensor network states, and in particular projected entangled pair states, play an important role in the description of strongly correlated quantum lattice systems. They do not only serve as variational states in numerical simulation methods, but also provide a framework for classifying phases of quantum matter and capture notions of topological order in a stringent and rigorous language. The rapid development in this field for spin models and bosonic systems has not yet been mirrored by an analogous development for fermionic models. In this work, we introduce a framework of tensor networks having a fermionic component capable of capturing notions of topological order. At the heart of the formalism are axioms of fermionic matrix product operator injectivity, stable under concatenation. Building upon that, we formulate a Grassmann number tensor network ansatz for the ground state of fermionic twisted quantum double models. A specific focus is put on the paradigmatic example of the fermionic toric code. This work shows that the program of describing topologically ordered systems using tensor networks carries over to fermionic models.
15:00 - 15:20 Titus Neupert (Universität Zürich)
One-dimensional hinge modes of three-dimensional topological insulators
I will discuss two instances of one-dimensional conducting edge channels that can appear on the boundary of three-dimensional topological crystalline insulators, one supported by an experimental observation and the second one being a theoretical prediction. For the first part, I will discuss channels that appear at step edges on the surface of (Pb,Sn)Se. These conducting channels can be understood as arising from a Berry curvature mismatch between Dirac surface states on either side of the step edge. Experimentally, they have been found to be remarkably robust against defects, magnetic fields and elevated temperature. Second, I will introduce the concept of higher-order three-dimensional topological insulators, which have gapped surfaces, but support topologically protected gapless states on their one-dimensional physical edges.
Cold Gases & Driven Systems (chair: Dirk Manske)
16:20 - 16:40 Roderich Moessner (Max-Planck-Institut für Physik komplexer Systeme)
Spatio-temporal 'time-crystalline' order in Floquet many-body systems
16:40 - 17:00 Matteo Rizzi (Johannes-Gutenberg-Universität Mainz)
Exploring interacting topological insulators with ultracold atoms: the synthetic Creutz-Hubbard model
Understanding the robustness of topological phases of matter in the presence of strong interactions, and synthesising novel strongly-correlated topological materials, lie among the most important and difficult challenges of modern theoretical and experimental physics. In this work, we present a complete theoretical analysis of the synthetic Creutz-Hubbard ladder, which is a paradigmatic model that provides a neat playground to address these challenges. We pay special attention to the competition of correlated topological phases and orbital quantum magnetism in the regime of strong interactions. These results are furthermore confirmed and extended by extensive numerical simulations. Moreover, we propose how to experimentally realize this model in a synthetic ladder, made of two internal states of ultracold fermionic atoms in a one-dimensional optical lattice. Our work paves the way towards quantum simulators of interacting topological insulators with cold atoms. Reference: J. Jünemann, A. Piga, S.-J. Ran, M. Lewenstein, M. Rizzi, and A. Bermudez, arXiv:1612.02996.
17:00 - 17:20 Michael Knap (Technische Universität München)
Scrambling and thermalization in a diffusive quantum many-body system
Out-of-time ordered (OTO) correlation functions describe scrambling of information in correlated quantum matter. They are of particular interest in incoherent quantum systems lacking well defined quasi-particles. Thus far, it is largely elusive how OTO correlators spread in incoherent systems with diffusive transport governed by a few globally conserved quantities. Here, we study the dynamical response of such a system using high-performance matrix-product-operator techniques. Specifically, we consider the non-integrable, one-dimensional Bose-Hubbard model in the incoherent high-temperature regime. Our system exhibits diffusive dynamics in time-ordered correlators of globally conserved quantities, whereas OTO correlators display a ballistic, light-cone spreading of quantum information. The slowest process in the global thermalization of the system is thus diffusive, yet information spreading is not inhibited by such slow dynamics. We furthermore develop an experimentally feasible protocol to overcome some challenges faced by existing proposals and to probe time-ordered and OTO correlation functions. Our study opens new avenues for both the theoretical and experimental exploration of thermalization and information scrambling dynamics.
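A minimal sketch of the diagnostic itself, with a small non-integrable spin chain standing in for the Bose-Hubbard chain of the talk and with arbitrary couplings: the infinite-temperature OTO commutator $C(t)=\mathrm{Tr}\big([W(t),V]^\dagger[W(t),V]\big)/2^N$ is evaluated by brute-force exact diagonalization; it vanishes at $t=0$ for spatially separated $W$ and $V$ and grows once the operator front reaches the site of $V$:

```python
import numpy as np
from scipy.linalg import expm

def site_operator(op, N, site):
    """Embed a single-site 2x2 operator `op` at `site` of an N-site spin-1/2 chain."""
    out = np.eye(1, dtype=complex)
    for j in range(N):
        out = np.kron(out, op if j == site else np.eye(2, dtype=complex))
    return out

def otoc(t, N=8, r=4, J=1.0, hx=0.9, hz=0.5):
    """Infinite-temperature OTOC C(t) = Tr([W(t),V]^dag [W(t),V]) / 2^N with
    W = sigma^z_0, V = sigma^z_r, in a non-integrable tilted-field Ising chain
    H = -J sum_i sz_i sz_{i+1} - hx sum_i sx_i - hz sum_i sz_i."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    Z = [site_operator(sz, N, i) for i in range(N)]
    X = [site_operator(sx, N, i) for i in range(N)]
    H = sum(-J * Z[i] @ Z[i + 1] for i in range(N - 1))
    H = H + sum(-hx * X[i] - hz * Z[i] for i in range(N))
    U = expm(-1j * H * t)
    Wt = U.conj().T @ Z[0] @ U                    # Heisenberg-picture W(t)
    comm = Wt @ Z[r] - Z[r] @ Wt
    return float(np.real(np.trace(comm.conj().T @ comm)) / 2 ** N)

for t in (0.0, 1.0, 2.0, 3.0):
    print(t, otoc(t))                             # zero at t = 0, grows once the front reaches site r
```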
17:20 - 17:40 Walter Hofstetter (Johann Wolfgang Goethe-Universität Frankfurt)
Interacting and driven topological states of ultracold lattice gases
The last years have witnessed dramatic progress in experimental control and theoretical modeling of quantum simulations based on ultracold atoms. Major recent developments include synthetic gauge fields for neutral atoms, induced by time-periodic driving of the system, which allow the simulation of topologically nontrivial phases of matter with strong interactions. I will discuss two examples: We present a systematic study of spectral functions of a time-periodically driven Falicov-Kimball Hamiltonian. In the high-frequency limit, this system can be effectively described as a Harper-Hofstadter-Falicov-Kimball model. Using real-space Floquet dynamical mean-field theory (DMFT), we take into account interaction effects and contributions from higher Floquet bands in a non-perturbative way. Our calculations show a high degree of similarity between the interacting driven system and its effective static counterpart with respect to spectral properties. We also demonstrate the possibility of using real-space Floquet DMFT to study edge states on a cylinder geometry. We furthermore consider a spinful and time-reversal invariant version of the Hofstadter-Harper problem, which has been realized in ultracold atoms, with an additional staggered potential and spin-orbit coupling. Without interactions, the system exhibits various phases such as topological and normal insulator, metal and semi-metal phases with two or even more Dirac cones. Using real-space dynamical mean-field theory (DMFT), we investigate the stability of the Quantum Spin Hall state in the presence of strong interactions. To test the bulk-boundary correspondence between edge mode parity and bulk Chern index of the interacting system, we calculate an effective topological Hamiltonian based on the local self-energy of DMFT.
17:40 - 18:00 Christoph Karrasch (Freie Universität Berlin)
Transport in quasiperiodic interacting systems: from superdiffusion to subdiffusion
Using a combination of numerically exact and renormalization-group techniques we study the nonequilibrium transport of electrons in a one-dimensional interacting system subject to a quasiperiodic potential. For this purpose we calculate the growth of the mean-square displacement as well as the melting of domain walls. While the system is nonintegrable for all studied parameters, there is no finite region of parameters for which we observe diffusive transport. In particular, our model shows a rich dynamical behavior crossing over from superdiffusion to subdiffusion. We discuss the implications of our results for the general problem of many-body localization, with a particular emphasis on the rare-region Griffiths picture of subdiffusion.
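For orientation, a sketch of the non-interacting limit of such a quasiperiodic chain, the Aubry-Andre model, where the growth of the mean-square displacement is obtained by exact diagonalization; the interacting superdiffusive/subdiffusive behavior discussed above is beyond this single-particle sketch, and all parameters are illustrative:

```python
import numpy as np

def msd_growth(lam, L=600, t_hop=1.0, times=(0.0, 15.0, 30.0, 60.0)):
    """Mean-square displacement of a particle started on the central site of a
    non-interacting Aubry-Andre chain,
    H = -t_hop * (nearest-neighbour hopping) + lam * cos(2*pi*beta*n) on site n,
    which is extended for lam < 2*t_hop and localized for lam > 2*t_hop."""
    beta = (np.sqrt(5) - 1) / 2                   # inverse golden ratio
    n = np.arange(L)
    H = -t_hop * (np.eye(L, k=1) + np.eye(L, k=-1)) + np.diag(lam * np.cos(2 * np.pi * beta * n))
    eps, U = np.linalg.eigh(H)
    psi0 = np.zeros(L)
    psi0[L // 2] = 1.0
    c = U.T @ psi0                                # expansion coefficients in eigenstates (U real)
    for t in times:
        psi_t = U @ (np.exp(-1j * eps * t) * c)
        x2 = np.sum(np.abs(psi_t) ** 2 * (n - L // 2) ** 2)
        print(f"lam={lam:3.1f}  t={t:5.1f}  <x^2>={x2:10.2f}")

msd_growth(1.0)   # extended phase: ballistic growth, <x^2> ~ t^2
msd_growth(3.0)   # localized phase: <x^2> saturates quickly
```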
19:30 - 20:30 Bruce Normand (Paul Scherrer Institut)
K2 without \(O_2\)
Hubbard Systems (chair: Johannes Knolle)
09:00 - 09:20 Michael Lang (Johann Wolfgang Goethe-Universität Frankfurt)
Breakdown of Hooke's law of elasticity at the Mott critical endpoint in an organic conductor
The Mott transition is a key phenomenon in strongly correlated electron systems. Despite its relevance for a wide range of materials, fundamental aspects of this transition are still unresolved. Of particular interest is the role of the lattice degrees of freedom in the Mott transition. In this talk, we will present results of the thermal expansion coefficient as a function of Helium-gas pressure around the Mott transition of the organic charge-transfer salt $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Cl. The salient result of our study is the observation of a strong, highly non-linear lattice response upon approaching the second-order critical endpoint of the transition. This apparent breakdown of Hooke's law of elasticity reflects an intimate, non-perturbative coupling of the critical electronic degrees of freedom to the crystal lattice. Our results are fully consistent with mean-field criticality, predicted theoretically for electrons coupled to a compressible lattice with finite shear modulus [2]. [1] E. Gati et al., Science Advances 2: e1601646 (2016) [2] M. Zacharias et al., Phys. Rev. Lett. 109, 176401 (2012) Work has been done in collaboration with E. Gati, M. Garst, R.S. Manna, U. Tutsch, B. Wolf, L. Bartosch, T. Sasaki, H. Schubert, and J.A. Schlueter
09:20 - 09:40 Walter Metzner (Max-Planck-Institut für Festkörperforschung)
Competition between magnetism and superconductivity in the Hubbard model and in the cuprates
We analyze the competition of magnetism and superconductivity in the two-dimensional Hubbard model with a moderate interaction strength, including the possibility of incommensurate spiral magnetic order. Using an unbiased renormalization group approach, we compute magnetic and superconducting order parameters in the ground state. In addition to previously established regions of Neel order coexisting with d-wave superconductivity, the calculations reveal further coexistence regions where superconductivity is accompanied by incommensurate magnetic order [1]. We show that a Fermi surface reconstruction due to spiral antiferromagnetic order may explain the rapid change in the Hall number in a strong magnetic field as recently observed near optimal doping in cuprate superconductors. The single-particle spectral function in the spiral state exhibits hole pockets which look like Fermi arcs due to a strong momentum dependence of the spectral weight [2]. [1] H. Yamase, A. Eberlein, and W. Metzner, Phys. Rev. Lett. 116, 096402 (2016). [2] A. Eberlein, W. Metzner, S. Sachdev, and H. Yamase, Phys. Rev. Lett. 117, 187001 (2016).
09:40 - 10:00 Reinhard Noack (Philipps-Universität Marburg)
Application of the hybrid-space DMRG to multichain and two-dimensional Hubbard models
I describe the hybrid momentum-real-space DMRG method and discuss applications to multichain and two-dimensional Hubbard models at and away from half-filling and with and without frustration.
10:00 - 10:20 Philippe Corboz (University of Amsterdam)
Stripe order in the 2D Hubbard model
The discovery of high-temperature superconductivity in the cuprates has stimulated intense study of the Hubbard and t-J models on a square lattice. However, the accurate simulation of these models is one of the major challenges in computational physics. In this talk I report on recent progress in simulating the Hubbard model at a particularly challenging point in the phase diagram, $U/t=8$, and doping $\delta=1/8$, at which an extremely close competition between a uniform d-wave superconducting state and different types of stripe states is found. Here I mostly focus on results obtained with infinite projected-entangled pair states (iPEPS) - a variational tensor network approach where the accuracy can be systematically controlled by the so-called bond-dimension D. Systematic extrapolations to the exact, infinite D limit show that the fully-filled stripe ordered state is the lowest energy state. Consistent results are obtained with density matrix embedding theory, the density matrix renormalization group, and constrained-path auxiliary field quantum Monte Carlo, demonstrating the power of current state-of-the-art numerical methods to solve challenging open problems.
Methods 2 (chair: Walter Metzner)
10:40 - 11:00 Karsten Held (Technische Universität Wien)
Non-local correlations beyond DMFT: from quantum criticality to metal insulator transitions
Dynamical mean field theory (DMFT) has been a big step forward for our understanding of electronic correlations. A major part of the electronic correlations, the local ones, are included. On the other hand, DMFT neglects non-local correlations that are at the origin of many physical phenomena such as (quantum) criticality, high-T superconductivity, weak localization and other vertex corrections to transport in nanoscopic systems. To address these topics the scientific frontier moved to cluster and diagrammatic extensions of DMFT such as the dynamical vertex approximation [1,2]. I will present an introduction to the diagrammatic extensions of DMFT and discuss selected applications: the calculation of quantum critical exponents rendering Kohn lines in the bandstructure [3], the fate of the Mott-Hubbard metal-insulator transition in two dimensions [4], and the application to realistic materials calculations [5]. [1] A. Toschi, A. A. Katanin, and K. Held, Phys. Rev. B 75, 045118 (2007). [2] G. Rohringer et al., in preparation. [3] T. Schäfer et al., arXiv:1605.06355. [4] T. Schäfer et al., Phys. Rev. B 91, 125109 (2015). [5] A. Galler et al., Phys. Rev. B 95, 115107 (2016).
11:00 - 11:20 Enrico Arrigoni (Technische Universität Graz)
Correlated impurities and interfaces out of equilibrium: auxiliary master equation approach
I will present a numerical scheme to address correlated quantum impurities out of equilibrium down to the Kondo scale. This Auxiliary Master Equation Approach [1,2,3] is also suited to deal with transport across correlated interfaces within nonequilibrium Dynamical Mean Field Theory [5]. The method consists in mapping the original impurity problem onto an auxiliary open quantum system including bath orbitals as well as a coupling to a Markovian environment. The mapping becomes exponentially exact upon increasing the number of bath orbitals. The intervening auxiliary orbitals allow for a treatment of non-Markovian dynamics at the impurity. The time dependence of the auxiliary system is controlled by a Lindblad master equation whose parameters are determined via an optimization scheme. Green's functions are evaluated via (non-hermitian) Lanczos exact diagonalisation [2] or by matrix-product states (MPS) [3]. In particular, the MPS implementation produces highly accurate spectral functions for the Anderson impurity model in the Kondo regime. The approach can be also used in equilibrium, where we obtain a remarkably close agreement to numerical renormalization group. I will present results for electric and thermal transport across an Anderson impurity [2,3,4] and across correlated interfaces [5]. An implementation within Floquet theory for periodic driving [6] will be discussed as well. [1] E. Arrigoni et al., Phys. Rev. Lett. 110, 086403 (2013) [2] A. Dorda et al., Phys. Rev. B 89 165105 (2014) [3] A. Dorda et al., Phys. Rev. B 92, 125145 (2015) [4] A. Dorda et al., Phys. Rev. B 94, 245125 (2016) [5] I. Titvinidze et al., Phys. Rev. B 92, 245125 (2015) [6] M. Sorantin et al., in preparation.
11:20 - 11:40 Hans Gerd Evertz (Technische Universität Graz)
Multiorbital real time impurity solver with matrix product states
A multi-orbital impurity solver for dynamical mean field theory (DMFT) is presented, which employs a tensor network similar to matrix product states. The solver works directly on the real-time / real-frequency axis and yields high spectral resolution at all frequencies. We use a large number (O(100)) of bath sites and therefore achieve an accurate representation of the bath. The solver can treat full rotationally invariant interactions with reasonable numerical effort. We first show the efficiency and accuracy of the method by a benchmark for the three-orbital testbed material SrVO3, where we observe multiplet structures in the high-energy spectrum which are almost impossible to resolve by other multi-orbital methods. Finally we will present new results for five-orbital models.
11:40 - 12:00 Rainer Härtle (Georg-August-Universität Göttingen)
Transport through strongly correlated materials: a hierarchical quantum master equation approach
We investigate the transport properties of strongly correlated materials in the framework of nonequilibrium dynamical mean field theory, where we use the hierarchical quantum master equation approach to solve the respective impurity problem [1,2]. The approach employs a hybridization expansion which can be converged if the temperature of the environment is not too low [3]. It is time-local and can, therefore, be used to study the long-lived dynamics inherent to this problem, including the nonequilibrium steady states. Different truncation levels allow for a systematic analysis of the relevant physical processes. Our results elucidate, in particular, the role of inelastic processes for the transport properties of materials that can be described by the Hubbard model. [1] Jin et al., JCP 128, 234703 (2008); [2] Härtle et al., PRB 88, 235426 (2013); [3] Härtle et al., PRB 92, 085430 (2015).
12:00 - 12:20 Simone Montangero (Universität Ulm)
Recent advancements in tensor network methods
We review some recent advancements in tensor network algorithms and their application to the study of correlated matter. We present novel approaches to study abelian and non-abelian lattice gauge theories, open many-body quantum systems and systems with long-range interactions or periodic boundary conditions. These novel approaches allowed us to obtain results on a variety of phenomena hardly accessible before, such as the Kibble-Zurek mechanism in Wigner crystals, the out-of-equilibrium dynamics of the Schwinger model and the phase diagram of the disordered Bose-Hubbard model.
Excitations (chair: Roser Valenti)
14:00 - 14:20 Christian Jooß (Georg-August-Universität Göttingen)
Tuning of hot polaron states with a nanosecond lifetime in a manganite perovskite via correlations
Understanding and controlling the relaxation process of optically excited charge carriers in solids with strong correlations is of great interest in the quest for new strategies to exploit solar energy. Usually, optically excited electrons in a solid thermalize rapidly on a femtosecond to picosecond timescale due to interactions with other electrons and phonons. New mechanisms to slow down thermalization would thus be of great significance for efficient light energy conversion, e.g. in photovoltaic devices. Ultrafast optical pump probe experiments in the manganite Pr$_{0.65}$Ca$_{0.35}$MnO$_3$, a photovoltaic, thermoelectric and electro-catalytic material with strong polaronic correlations, reveal an ultra-slow recombination dynamics on a nanosecond-time scale [1]. The nature of long living excitations is further elucidated by photovoltaic measurements, showing the presence of photo-diffusion of excited electron-hole polaron pairs [1,2]. Theoretical considerations suggest that the excited charge carriers are trapped in a hot polaron state. The dependence of the lifetime on charge order implies that strong correlation between the excited polaron and the octahedral dynamics of its environment appears to be substantial for stabilizing the hot polaron [3]. Furthermore, modification of the interfacial atomic and electronic structure of the manganite-titanite junctions gives insights into the processes underlying the transfer of a small polaron between materials with different correlations [4]. [1] Evolution of hot polaron states with a nanosecond lifetime in a manganite, D. Raiser, S. Mildner, B. Ifland, M. Sotoudeh, P. Blöchl, S. Techert, C. Jooss, Advanced Energy Materials, 2017, 1602174, DOI: 10.1002/aenm.201602174. [2] Polaron absorption for photovoltaic energy conversion in a manganite-titanate pn-heterojunction, G. Saucke, J. Norpoth, D. Su, Y. Zhu and Ch. Jooss, Phys. Rev. B 85, 165315 (2012). [3] Contribution of Jahn-Teller and charge transfer excitations to the photovoltaic effect of manganite/titanite heterojunctions, B. Ifland, J. Hoffmann, B. Kressdorf, V. Roddatis, M. Seibt and Ch. Jooss, New Journal of Physics, accepted for publication. [4] Current–voltage characteristics of manganite–titanite perovskite junctions, B. Ifland, P. Peretzki, B. Kressdorf, Ph. Saring, A. Kelling, M. Seibt and Ch. Jooss, Beilstein Journal of Nanotechnology, 2015, 6, 1467–1484
14:20 - 14:40 Achim Rosch (Universität zu Köln)
Pumping correlated materials with approximate conservation laws
Weak perturbations can drive an interacting many-particle system far from its initial equilibrium state if one is able to pump into degrees of freedom approximately protected by conservation laws. This concept has for example been used to realize Bose-Einstein condensates of photons, magnons, and excitons. Integrable quantum systems like the one-dimensional Heisenberg model are characterized by an infinite set of conservation laws. Here we develop a theory of weakly driven integrable systems and show that pumping can induce huge spin or heat currents even in the presence of integrability-breaking perturbations, since it activates local and quasi-local approximate conserved quantities. The resulting steady state is approximately, though efficiently, described by a generalized Gibbs ensemble, which depends sensitively on the structure but not on the overall amplitude of perturbations or on the initial state. We suggest realizing novel heat or spin pumps using spin-chain materials driven by THz radiation.
14:40 - 15:00 Frank Pollmann (Technische Universität München)
Dynamical signatures of quantum spin liquids
Condensed matter is found in a variety of phases, the vast majority of which are characterized in terms of symmetry breaking. However, the last few decades have yielded a plethora of theoretically proposed quantum phases of matter which fall outside this paradigm. Recent focus lies on the search for concrete realizations of quantum spin liquids. These are notoriously difficult to identify experimentally because of the lack of local order parameters. In my talk, I will discuss universal properties found in dynamical response functions that are useful to characterize these exotic states of matter. First, we show that the anyonic statistics of fractionalized excitations display characteristic signatures in threshold spectroscopic measurements. The low energy onset of associated correlation functions near the threshold show universal behavior depending on the statistics of the anyons. This explains some recent theoretical results in spin systems and also provides a route towards detecting statistics in experiments such as neutron scattering and tunneling spectroscopy [1]. Second, we introduce a matrix-product state based method to efficiently obtain dynamical response functions for two-dimensional microscopic Hamiltonians, which we apply to different phases of the Kitaev-Heisenberg model. We find significant broad high energy features beyond spin-wave theory even in the ordered phases proximate to spin liquids. This includes the phase with zig-zag order of the type observed in α-RuCl3, where we find high energy features like those seen in inelastic neutron scattering experiments. [1] S. C. Morampudi, A. M. Turner, F. Pollmann, and F. Wilczek, arXiv:1608.05700. [2] M. Gohlke, R. Verresen, R. Moessner, and and F. Pollmann, arXiv:1701.04678.
15:00 - 15:20 Michael Bonitz (Christian-Albrechts-Universität zu Kiel)
Dynamics of highly excited strongly correlated fermions - a nonequilibrium Green functions approach
Strongly correlated quantum particles with half-integer spin are of growing interest in many fields, including condensed matter, dense plasmas and ultracold atoms. From a theory point of view these systems are very challenging. Also, ab initio quantum simulations such as DMRG are essentially limited to the 1D case. Here, I will present an example of recent breakthroughs we could achieve using a Nonequilibrium Green functions approach [1] that has recently allowed us to simulate the nonequilibrium transport in 2D and 3D fully including strong correlation effects. We achieve, for the first time, excellent agreement with ultracold atom experiments [2]. Our results are close to DMRG results where they are available but, in many cases, allow for much longer simulations [3]. I will close by discussing prospects of using NEGF for transport and optics of solids as well as solids in contact with a plasma [4]. [1] Michael Bonitz, "Quantum Kinetic Theory", 2nd edition, Springer 2016. [2] Niclas Schlünzen, Sebastian Hermanns, Michael Bonitz, and Claudio Verdozzi, Phys. Rev. B 93, 035107 (2016). [3] Niclas Schlünzen, Jan-Philip Joost, Fabian Heidrich-Meisner, and Michael Bonitz, Phys. Rev. B (2017) [4] Karsten Balzer, Niclas Schlünzen, Jan-Philip Joost, and Michael Bonitz, Phys. Rev. B 94, 245118(2016).
15:20 - 15:40 Martin Eckstein (Friedrich-Alexander-Universität Erlangen-Nürnberg)
Manipulating correlated electron systems with short electric field transients
Femtosecond laser technology has opened the possibility to probe and control the dynamics of complex condensed matter phases on microscopic timescales. In this talk, I will focus on various proposals to manipulate complex states with spin and orbital order, using the electric field of the laser. This can be done both in the transient regime, where one can use short field transients to switch between different polarizations of a composite orbital order, and for periodic driving, where the laser-driven system effectively evolves with a Floquet-Hamiltonian with light-induced spin and orbital exchange interactions.
Impurity & Kondo Systems (chair: Markus Garst)
16:20 - 16:40 Johannes Knolle (University of Cambridge)
Disorder-free localization: absence of ergodicity without quenched disorder
The venerable phenomenon of Anderson localization and the much more recent many-body localization both depend crucially on the presence of disorder. The latter enters either in the form of quenched disorder in the parameters of the Hamiltonian, or through a special choice of a disordered initial state. Here, we present localization arising in a very simple, completely translationally invariant quantum model, with only local interactions between spins and fermions. By identifying an extensive set of conserved quantities, we show that the system generates purely dynamically its own disorder, which gives rise to localization of fermionic degrees of freedom. Our work gives an answer to the decades-old question of whether quenched disorder is a necessary condition for localization. It also offers new insights into the physics of many-body localization, lattice gauge theories, and quantum disentangled liquids.
16:40 - 17:00 Theo Costi (Forschungszentrum Jülich)
Time evolution of the Kondo resonance in response to a quench
We investigate the time evolution of the Kondo resonance in response to a quench by applying the time-dependent numerical renormalization group (TDNRG) approach to the Anderson impurity model in the strong correlation limit [1]. For this purpose, we derive within TDNRG a numerically tractable expression for the retarded two-time nonequilibrium Green function $G(t+t',t)$, and its associated time-dependent spectral function, $A(\omega,t)$, for times $t$ both before and after the quench. Quenches from both mixed valence and Kondo correlated initial states to Kondo correlated final states are considered. For both cases, we find that the Kondo resonance in the zero temperature spectral function only fully develops at very long times $t\gtrsim 1/T_{\rm K}$, where $T_{\rm K}$ is the Kondo temperature of the final state. In contrast, the final state satellite peaks develop on a fast time scale $1/\Gamma$ during the time interval $-1/\Gamma \lesssim t \lesssim +1/\Gamma$, where $\Gamma$ is the hybridization strength. Initial and final state spectral functions are recovered in the limits $t\rightarrow -\infty$ and $t\rightarrow +\infty$, respectively. Our formulation of two-time nonequilibrium Green functions within TDNRG provides a first step towards using this method as an impurity solver within nonequilibrium dynamical mean field theory. Finally, we show how to improve the calculation of spectral functions in the long time limit within a new multiple quench TDNRG approach with potential application to precise steady state transport calculations for quantum dots [2]. [1] H. T. M. Nghiem and T. A. Costi, arXiv:1701.07558 [2]F. B. Anders, Phys. Rev. Lett. {\bf 101}, 66804 (2008)
17:00 - 17:20 Johann Kroha (Universität Bonn)
Kondo breakdown in Kondo lattices: RKKY coupling and non-equilibrium THz spectroscopy
The fate of the fermionic quasiparticles near a quantum phase transition (QPT) in certain heavy-fermion compounds where the Hertz-Moriya-Millis scenario does not apply has been subject of intense debate for many years. It is generally believed that this Kondo destruction is driven by the critical fluctuations near the QPT. Here we show that the heavy-fermion quasiparticles can be destroyed by the RKKY interaction even without critical fluctuations [1]. This is due to a hitherto unrecognized interference of Kondo screening and the RKKY interaction beyond the Doniach scenario: In a Kondo lattice, the spin exchange coupling between a local spin and the conduction electrons acquires nonlocal contributions due to conduction electron scattering from surrounding local spins and subsequent RKKY interaction. We develop a novel type of renormalization group theory for this RKKY-modified Kondo vertex [1]. The Kondo temperature, $T_K(y)$, is suppressed in a universal way, controlled by the dimensionless RKKY coupling parameter y. Complete spin screening ceases to exist beyond a critical RKKY strength $y_c$ even in the absence of magnetic ordering. At this breakdown point, $T_K(y)$ remains nonzero and is not defined for larger RKKY couplings, $y>y_c$. These results agree quantitatively with STM spectroscopy experiments on continuously tunable two-impurity Kondo systems [2] and on two-site Kondo systems on a metallic surface [3]. We discuss in detail most recent time-resolved THz reflectometry experiments on the heavy-fermion compound $CeCu_{6-x}Au_x$ at the quantum critical concentration x=0.1 [4]. In these experiments, the spectral weight as well as the energy scale $T_{K}^*$ for the formation of the heavy-fermion quasiparticles can be extracted from the intensity and the delay time, respectively, of a Kondo-induced, THz reflex. THz artifacts, e.g., reflexes from the optical components, are carefully excluded. Both experiment and theory support a quantum critical scenario in $CeCu_{6-x}Au_x$ where the heavy-fermion quasiparticles disintegrate near the QPT in that their spectral weight collapses, but their resonance width (the lattice Kondo temperature $T_K^*$) remains finite. This is consistent with $\omega/T$ scaling as an indicator for critical Kondo destruction. [1] A. Nejati, K. Ballmann, and J. Kroha, Phys. Rev. Lett. 118, 117204 (2017). [2] J. Bork, Y.-H. Zhang, L. Diekhöhner L. Borda, P. Simon, J. Kroha, P. Wahl, and K. Kern, Nature Physics 7, 901 (2011). [3] N. Neel, R. Berndt, J. Kröger, T. O. Wehling, A. I. Lichtenstein, and M. I. Katsnelson, Phys. Rev. Lett. 107, 106804 (2011): A. Nejati and J. Kroha, arXiv:1612.06620 (2016). [4] Ch. Wetli, J. Kroha, K. Kliemt, C. Krellner, O. Stockert, H. von Löhneysen, and M. Fiebig, arXiv:1703.04443.
17:20 - 17:40 Frithjof Anders (Technische Universität Dortmund)
Non-equilibrium dynamics in a two-impurity model close to quantum phase transition
We show that the two-impurity Anderson model exhibits an additional quantum critical point at infinitely many specific distances between both impurities for an inversion-symmetric 1D dispersion. Unlike the quantum critical point previously established by Jones and Varma, it is robust against particle-hole or parity symmetry breaking. The quantum critical point separates a spin doublet from a spin singlet ground state and is, therefore, protected by symmetry. A finite single-particle tunneling t or an applied uniform gate voltage will drive the system across the quantum critical point. The distinctive magnetic properties of the different phases cause a jump in the spectral functions at low temperature which might be useful for future spintronics devices. A local parity conservation will prevent the spin-spin correlation function from decaying to its equilibrium value after spin manipulations.
poster session (focus on odd poster numbers)
Spin Ice & Magnetic Systems (chair: Stefan Wessel)
09:00 - 09:20 Santiago Grigera (CONICET and UNLP)
Is the ground-state of spin-ice like ice?
This talk will address the question of the ground state of real spin-ice systems. Spin ices are magnetic systems, Dy$_2$Ti$_2$O$_7$ to name one example, characterised by large magnetic moments sitting on a pyrochlore lattice. At first sight, the interaction between these moments leads to magnetic frustration and the expectation of an extensively degenerate ice-like ground state. Theoretical work and recent experimental evidence put this naive expectation into question. In this talk I will discuss experimental work and numerical simulations on two different spin-ice systems, Dy$_2$Ti$_2$O$_7$ and Ho$_2$Ti$_2$O$_7$, that aim at elucidating the true ground state of spin-ice materials.
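For orientation, the counting behind the phrase "extensively degenerate ice-like ground state" is Pauling's classic estimate (textbook material, not a result of the talk): with $N$ spins, hence $N/2$ tetrahedra, and 6 of the 16 states of each tetrahedron obeying the two-in/two-out ice rule,
\[ W \approx 2^{N}\Big(\tfrac{6}{16}\Big)^{N/2} \quad\Longrightarrow\quad \frac{S_{\rm Pauling}}{N}=k_B\ln\!\big[2\,(3/8)^{1/2}\big]=\tfrac{k_B}{2}\ln\tfrac32\approx 0.203\,k_B , \]
i.e. roughly $1.68\ \mathrm{J\,mol^{-1}\,K^{-1}}$ per mole of spins. Whether the low-temperature entropy of real spin-ice materials actually settles at this value, or slowly relaxes below it, is the question at stake here.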
09:20 - 09:40 Patrik Henelius (KTH Stockholm)
Refrustration and competing orders in the prototypical \(Dy_2Ti_2O_7\) spin ice material
Spin ices, frustrated magnetic materials analogous to common water ice, have emerged over the past fifteen years as exemplars of high frustration in three dimensions. By analyzing existing experimental data and carefully remeasuring the thermodynamic quantity $\chi T$ we find that in this material, small effective spin-spin exchange interactions compete with the magnetostatic dipolar interaction responsible for the main spin ice phenomenology. This causes an unexpected 'refrustration' of the long-range order that would be expected from the incompletely self-screened dipolar interaction and positions the material at the boundary between two competing classical long-range ordered ground states, very close to where theory suggests a continuous transition described by a noncompact CP$^1$ theory. Furthermore, experimentally we find an interaction-induced peak in $\chi T$, which constitutes a magnetic analogue of the Joule temperature in classical gases.
09:40 - 10:00 Karlo Penc (Wigner Research Centre for Physics)
Direct observation of spin-quadrupolar excitations in \(Sr_2CoGe_2O_7\) by high field ESR
Exotic spin-multipolar ordering in large spin transition metal insulators has so far eluded unambiguous experimental observation. A less studied, but perhaps more feasible fingerprint of multipole character emerges in the excitation spectrum in the form of quadrupolar transitions. Such multipolar excitations are desirable as they can be manipulated with the use of light or electric field and can be captured by means of conventional experimental techniques. Here we study single crystals of multiferroic Sr2CoGe2O7, and show that due to its nearly isotropic nature a purely quadrupolar bimagnon mode appears in the electron spin resonance (ESR) spectrum. This non-magnetic spin-excitation couples to the electric field of the light and becomes observable for a specific experimental configuration, in full agreement with a theoretical analysis of the selection rules.
10:00 - 10:20 Roser Valenti (Johann Wolfgang Goethe-Universität Frankfurt)
Breakdown of magnons in a strongly spin-orbital coupled magnet
The recent discovery that strongly spin-orbit coupled magnets such as alpha-RuCl3 may display a broad excitation continuum inconsistent with conventional magnons has led to the proposal of a possible realization of a Kitaev spin liquid with a coherent continuum of Majorana excitations. In this talk we discuss the underlying interactions in alpha-RuCl3 and present a more general scenario to describe the observed continuum that is consistent with ab initio calculations and available experimental observations.
Nonequilibrium 1 (chair: Achim Rosch)
10:40 - 11:00 Michael Potthoff (Universität Hamburg)
Electron correlations, quantum nutation and geometrical torque in spin dynamics
We study the real-time dynamics of a single spin in an external magnetic field, coupled antiferromagnetically to an extended system of conduction electrons, initiated by a sudden switch of the field direction. The problem is treated numerically by means of tight-binding spin dynamics (TB-SD), treating the spin as a classical observable, and by the time-dependent density-matrix renormalization group (t-DMRG). Effects of conduction-electron correlations, of quantum spin fluctuations, and of energy dissipation are shown to result in various anomalous effects, such as ill-defined Gilbert damping, incomplete spin relaxation due to time-scale separation, quantum nutation, and precessional motion with an enhanced effective Larmor frequency due to a geometrical torque.
11:00 - 11:20 Sebastian Diehl (Universität zu Köln)
Fate of the Kosterlitz-Thouless transition in driven open quantum systems
Recent developments in diverse areas - ranging from cold atomic gases to light-driven semiconductors to microcavity arrays - bring into focus systems located at the interface of quantum optics, many-body physics and statistical mechanics. They share in common that coherent and driven-dissipative quantum dynamics occur on an equal footing, creating genuine non-equilibrium scenarios without immediate counterpart in equilibrium condensed matter physics. We study such systems in two dimensions on the basis of a duality transformation mapping the problem onto a non-linear noisy electrodynamics, where the charges represent vortices. In the absence of vortices, the problem is equivalent to the Kardar-Parisi-Zhang equation. We show that the paradigmatic quasi-long-range order of equilibrium systems at low temperature must be absent asymptotically. More precisely, the non-equilibrium drive generates two independent scales, leading to distinct but subalgebraic behavior of the asymptotic correlation functions. Although the usual Kosterlitz-Thouless phase transition does thus not exist out of equilibrium, a new phase transition into a vortex turbulent state is found, which occurs as a function of increasing non-equilibrium strength.
11:20 - 11:40 Robin Steinigeweg (Universität Osnabrück)
Typical and untypical states for non-equilibrium quantum dynamics
The real-time broadening of density profiles starting from non-equilibrium states is at the center of transport in condensed-matter systems and dynamics in ultracold atomic gases. Initial profiles close to equilibrium are expected to evolve according to linear response, e.g., as given by the current correlator evaluated exactly at equilibrium. Significantly off equilibrium, linear response is expected to break down and even a description in terms of canonical ensembles is questionable. We unveil that single pure states with density profiles of maximum local amplitude yield a broadening in perfect agreement with linear response, if the structure of these states involves randomness in terms of decoherent off-diagonal density-matrix elements. While these states allow for spin diffusion in the XXZ spin-1/2 chain at large exchange anisotropies, coherences yield entirely different behavior [1]. In contrast, charge diffusion in the strongly interacting Hubbard chain turns out to be stable against varying such details of the initial conditions [2]. [1] R. Steinigeweg, F. Jin, D. Schmidtke, H. De Raedt, K. Michielsen, J. Gemmer, Phys. Rev. B 95, 035155 (2017). [2] R. Steinigeweg, F. Jin, H. De Raedt, K. Michielsen, J. Gemmer, arXiv:1702.00421 (2017).
11:40 - 12:00 Janine Splettstoesser (Chalmers University of Technology)
Fundamental restrictions on relaxation of charge, spin and energy in interacting quantum dots: a new duality from parity-superselection
12:00 - 12:20 Dirk Manske (Max-Planck-Institut für Festkörperforschung)
Higgs spectroscopy of superconductors in non-equilibrium
Time-resolved pump-probe experiments have recently attracted great interest, since they allow the detection of hidden states and provide new information on the underlying dynamics in solids in real time. With the observation of a Higgs mode in superconductors it is now possible to investigate the superconducting order parameter directly. Recently, we have established a theory for superconductors in non-equilibrium, for example in a pump-probe experiment. Using the Density-Matrix-Theory (DMT) we have developed an approach to calculate the response of conventional and unconventional superconductors in a time-resolved experiment. The DMT method is not restricted to small timescales; in particular it provides a microscopic description of the quench, and also allows the incorporation of phonons. Furthermore, we employ DMT for time-resolved Raman scattering experiments and make predictions for 2-band superconductors. Very recently, we have focused on the theory of order parameter amplitude ('Higgs') oscillations, which are the realization of the Higgs mode in superconductors [1]. New predictions are made for the Leggett mode in 2-band superconductors. Finally, we address the question of induced superconductivity in non-equilibrium. Many of our predictions have recently been confirmed experimentally. References: [1] H. Krull et al., Nat. Comm. 7, 11921 (2016).
Transport (chair: Jean-Sebastien Caux)
14:00 - 14:20 Wolfram Brenig (Technische Universität Braunschweig)
Thermal transport in Kitaev-Heisenberg spin systems
We present results for the dynamical thermal conductivity of the Kitaev-Heisenberg model on ladders and the Kitaev model on honeycomb lattices. In the pure Kitaev limit, and in contrast to other integrable spin systems, the ladder represents a perfect heat insulator. This is a fingerprint of fractionalization into mobile Majorana matter and a static Z2 gauge field. We find a full suppression of the Drude weight and a pseudogap in the conductivity. With Heisenberg exchange, we find a crossover from a heat insulator to conductor, due to recombination of fractionalized spins into triplons. For the honeycomb lattice, we show that very similar behavior occurs in 2D. Our results rest on several approaches comprising a mean-field theory, complete summation over all gauge sectors, exact diagonalization, and quantum typicality calculations.
14:20 - 14:40 Tilman Enss (Ruprecht-Karls-Universität Heidelberg)
Quantum-limited spin transport in two-dimensional Fermi gases
We measure and compute the transport properties of two-dimensional ultracold Fermi gases during transverse demagnetization in a magnetic field gradient. Using a phase-coherent spin-echo sequence, we are able to distinguish bare spin diffusion from the Leggett-Rice effect, in which demagnetization is slowed by the precession of spin current around the local magnetization. When the two-dimensional scattering length is tuned to be comparable to the inverse Fermi wave vector, we find that the bare transverse spin diffusivity reaches a minimum of order $\hbar/m$. The rate of demagnetization is also reflected in the growth rate of the s-wave contact, which quantifies how scale invariance is broken by near-resonant interactions. Our observations support the conjecture that in systems with strong scattering, the local relaxation rate is bounded from above by $kT/\hbar$, in analogy with what is found in materials with $T$-linear resistivity. [1] C. Luciuk, S. Smale, F. Boettcher, H. Sharum, B.A. Olsen, S. Trotzky, T. Enss, and J. H. Thywissen, Phys. Rev. Lett. (2017), arXiv:1612.00815.
14:40 - 15:00 Fabian Heidrich-Meisner (Ludwig-Maximilians-Universität München)
Nonequilibrium and transport dynamics in the 1D Fermi-Hubbard model
15:00 - 15:20 Volker Meden (RWTH Aachen)
Exponential and power-law renormalization in phonon-assisted tunneling
We investigate the spinless Anderson-Holstein model routinely employed to describe the basic physics of phonon-assisted tunneling in molecular devices. Our focus is on small to intermediate electron-phonon coupling; we complement a recent strong-coupling study [Phys. Rev. B 87, 075319 (2013)]. The entire crossover from the antiadiabatic regime to the adiabatic one is considered. Our analysis using the essentially analytical functional renormalization group approach, backed up by numerical renormalization group calculations, goes beyond lowest-order perturbation theory in the electron-phonon coupling. In particular, we provide an analytic expression for the effective tunneling coupling at particle-hole symmetry valid for all ratios of the bare tunnel coupling and the phonon frequency. It contains the exponential polaronic as well as the power-law renormalization; the latter can be traced back to x-ray-edge-like physics. In the antiadiabatic and the adiabatic limit it agrees with the known expressions obtained by mapping to an effective interacting resonant level model and by lowest-order perturbation theory, respectively. In addition, we discuss spectral and linear transport properties of the model.
group photo (to be published on the workshop web page)
Methods 3 & Exotic Phases (chair: Bella Lake)
16:20 - 16:40 Corinna Kollath (Universität Bonn)
Evolution of correlations and order coupled to an environment
The properties of collective phases occurring in strongly correlated materials are characterized by their correlations and the occurring orders. The controlled switching of these properties requires an understanding of the evolution of the correlations and the order after an excitation of the system. This excitation can be the application of an electric field or the switching of other system parameters. A lot of work has been done on the propagation of correlations in isolated systems. Here we will focus on the evolution in systems under the influence of an environment, such as the phonon coupling in materials or the light field in cold atomic samples. The states considered are topologically non-trivial states or pairing states.
16:40 - 17:00 Örs Legeza (Wigner Research Centre for Physics)
Tensor product methods and entanglement optimization for models with long range interactions
Tensor network states and specifically matrix-product states have proven to be a powerful tool for simulating ground states of strongly correlated spin and fermionic models. In this contribution, we focus on tensor network states techniques that can be used for the treatment of high-dimensional optimization tasks in strongly correlated quantum many-body systems with long range interactions. We will present our recent developments on fermionic orbital optimization and tree-tensor network states and discuss properties of various strongly correlated systems in light of the entanglement which gives insight into the fundamental nature of the correlations in their ground states. Examples will be shown for extended periodic systems, transition metal complexes and graphene nanoribbons.
17:00 - 17:20 Roman Orus (Johannes-Gutenberg-Universität Mainz)
Tensor networks for 2d quantum lattice systems: topological spin liquids, steady states, and corner spectra
In this talk I will briefly present three recent advances in tensor network states and techniques for 2d quantum lattice systems. Namely, (i) the classification of SU(2) spin liquids with PEPS and their characterization via the entanglement spectrum in cylinders, (ii) a new algorithm for computing steady states of 2d dissipative systems, and (iii) the holographic encoding of universal properties in the spectra of corner transfer matrices and corner tensors in numerical simulations with infinite PEPS.
17:20 - 17:40 Jesko Sirker (University of Manitoba)
Many-body localization in chains and ladders
I will give an overview of our recent work on many-body localization in infinite chains with binary disorder. I will discuss the phase diagram as well as possible realizations of such systems using cold atomic gases. In a second part, I will discuss Anderson and many-body localization in ladder-like systems.
poster session (focus on even poster numbers)
Exotic Phases 1 (chair: Tilman Enss)
09:00 - 09:20 Fakher Assaad (Julius-Maximilians-Universität Würzburg)
Dirac fermions with competing mass terms: non-Landau transition with emergent symmetry
There has recently been a flurry of negative-sign-free fermionic models that exhibit exotic phases and phase transitions. After reviewing these advances, I will place the emphasis on a model of Dirac fermions in 2+1 dimensions with dynamically generated, anti-commuting SO(3) N\'eel and Z2 Kekul\'e mass terms. The phase diagram is obtained from finite-size scaling and includes a direct and continuous transition between N\'eel and Kekul\'e phases. The fermions remain gapped across the transition, and our data support an emergent SO(4) symmetry unifying the two order parameters. This is only one of many negative-sign-free models with dynamically generated competing mass terms that allow one to investigate phase transitions beyond the Ginzburg-Landau-Wilson paradigm.
09:20 - 09:40 Markus Garst (Technische Universität Dresden)
Elastic phase transitions in Mott insulators and close to nematic quantum critical points
In the presence of an elastic coupling, the strong fluctuations of critical modes close to a second-order phase transition might induce an elastic phase transition of the crystal lattice. In this case, the long-range shear forces of the crystal fundamentally influence the critical properties [1]. One can distinguish isostructural transitions, where the crystal symmetry remains unchanged, and symmetry-breaking elastic phase transitions. The former is relevant for the universality of the Mott endpoint at finite temperature [2], which is corroborated by recent expansivity experiments on an organic conductor [3]. We discuss symmetry-breaking elastic quantum phase transitions in general, for which the critical thermodynamics of phonons violates Debye's law [4]. Finally, we argue that elasticity is relevant for nematic quantum criticality [5], which is discussed to be relevant to the cuprates, ruthenates and pnictides. [1] M. Zacharias, A. Rosch, and M. Garst, Critical elasticity at zero and finite temperature, Eur. Phys. J. Special Topics 224, 1021 (2015) [2] M. Zacharias, L. Bartosch, and M. Garst, Mott metal-insulator transition on compressible lattices, Phys. Rev. Lett. 109, 176401 (2012) [3] E. Gati, M. Garst, R. S. Manna, U. Tutsch, B. Wolf, L. Bartosch, H. Schubert, T. Sasaki, J. A. Schlueter, and M. Lang, Breakdown of Hooke's law of elasticity at the Mott critical endpoint in an organic conductor, Science Advances 2, e1601646 (2016). [4] M. Zacharias, I. Paul, and M. Garst, Quantum critical elasticity, Phys. Rev. Lett. 115, 025703 (2015) [5] I. Paul and M. Garst, Lattice effects on nematic quantum criticality in metals, arXiv:1610.06168
09:40 - 10:00 Martin Hohenadler (Julius-Maximilians-Universität Würzburg)
Charge-density-wave transitions in Luttinger and Fermi liquids coupled to quantum phonons
We present quantum Monte Carlo results for the charge-density-wave transition of Luttinger and Fermi liquids coupled to quantum phonons. The power of a new directed-loop algorithm for retarded interactions is demonstrated for the one-dimensional Holstein model. In two dimensions, the critical temperature and Ising universality as well as the nature of the disordered phase are investigated using the continuous-time interaction expansion method.
10:00 - 10:20 Florian Gebhard (Philipps-Universität Marburg)
Optical phonons for Peierls chains with long-range Coulomb interactions
We consider a chain of atoms that are bound together by a harmonic force. Spin-1/2 electrons that move between neighboring chain sites (Hückel model) induce a lattice dimerization at half band filling (Peierls effect). We supplement the Hückel model with a local Hubbard interaction and a long-range Ohno potential, and calculate the average bond-length, dimerization, and optical phonon frequencies for finite straight and zig-zag chains using the density-matrix renormalization group (DMRG) method. We check our numerical approach against analytic results for the Hückel model. The Hubbard interaction mildly affects the average bond length but substantially enhances the dimerization and increases the optical phonon frequencies whereas, for moderate Coulomb parameters, the long-range Ohno interaction plays no role.
Nonequilibrium 2 (chair: Jeroen van den Brink)
10:40 - 11:00 Jean-Sebastien Caux (University of Amsterdam)
Dynamics of probed, pulsed, quenched and driven integrable systems
Recent years have witnessed rapid progress in the use of integrability in characterizing the out-of-equilibrium dynamics of low-dimensional systems such as interacting atomic gases and quantum spin chains. This talk will provide an introduction to these developments.
11:00 - 11:20 Markus Heyl (Max-Planck-Institut für Physik komplexer Systeme)
Dynamical quantum phase transitions
Dynamical quantum phase transitions (DQPTs) have emerged as a nonequilibrium analogue to conventional phase transitions with physical quantities becoming nonanalytic at critical times. I will summarize the recent developments including the first experimental observations of DQPTs in systems of ultracold gases in optical lattices as well as trapped ions. While the formal analogies of DQPTs to equilibrium phase transitions are straightforward, a major challenge is to connect to fundamental concepts such as scaling and universality. In this talk I will show that for DQPTs in Ising models exact renormalization group transformations in complex parameter space can be formulated. As a result of this construction, the DQPTs turn out to be critical points associated with unstable fixed points of equilibrium Ising models implying scaling and universality in this far-from equilibrium regime.
11:20 - 11:40 David Luitz (Technische Universität München)
Information propagation in chaotic quantum systems
I will discuss the transport of information in generic one-dimensional quantum systems using various measures. As a local probe, one can use out-of-time order correlation functions, which quantify the growth of the commutator of local operators in time. They exhibit a light-cone structure in chaotic (weakly disordered or Floquet) systems, which becomes a logarithmic light cone in strongly disordered many-body localized (MBL) systems and a power law light-cone at intermediate disorder, where particles are presumably transported subdiffusively. Both cases illustrate a slow, subballistic information propagation, which is reflected in a sublinear power-law growth of the entanglement entropy after a quench from a product state in a subdiffusive system and a logarithmic growth in the MBL case. The propagation of information is a generic property of the time evolution operator and therefore also visible in the operator space entanglement entropy of the evolution operator.
11:40 - 12:00 Dirk Schuricht (Utrecht University)
Time evolution during and after finite-time quantum quenches in one-dimensional systems
We study the time evolution in the Tomonaga--Luttinger model (TLM) and the transverse-field Ising chain (TFIC) subject to quantum quenches of finite duration, i.e., a continuous change in the interaction strength or transverse magnetic field, respectively. We analyse several observables including two-point correlation functions, which show a characteristic bending and delay of the light cone due to the finite quench duration. For example, we extract the universal behaviour of the Green functions in the TLM and provide analytic, non-perturbative results for the delay. As another example, for quenches between the phases of the TFIC we show that the Loschmidt echo exhibits characteristic non-analyticities due to a dynamical phase transition, which show clear signatures of the finite quench duration.
12:00 - 12:20 Eric Jeckelmann (Leibniz Universität Hannover)
Two matrix-product-state methods for quantum transport in correlated electron-phonon systems
We present two matrix-product-state methods for investigating quantum transport in one-dimensional correlated electron-phonon systems. The time-evolving block decimation method with local basis optimization (TEBD-LBO) [1] allows us to simulate the nonequilibrium dynamics of many-particle systems with strongly fluctuating bosonic degrees of freedom. Thus we can study nonlinear transport properties [2] and dissipation effects [1,3]. We also show that the dynamical DMRG combined with linear response theory [4] enables the calculation of the linear conductance in Luttinger liquids using a proper scaling of system length, zero-frequency current-current correlation functions, and broadening. This approach works for lattice models of Luttinger liquids including impurities, phonons or contacts to metallic leads. [1] C. Brockt, F. Dorfner, L. Vidmar, F. Heidrich-Meisner, and E. Jeckelmann, Phys. Rev. B 92, 241106(R) (2015). [2] M. Einhellinger, A. Cojuhovschi, and E. Jeckelmann, Phys. Rev. B 85, 235141 (2012). [3] C. Brockt and E. Jeckelmann, Phys. Rev. B 95, 064309 (2017). [4] D. Bohr, P. Schmitteckert, and P. Wölfle, Europhys. Lett. 73, 246 (2006).
Magnetic Systems (chair: Frank Pollmann)
14:00 - 14:20 Giniyat Khaliullin (Max-Planck-Institut für Festkörperforschung)
Soft spins and Higgs mode in ruthenates
I will discuss a special class of Mott insulators, where spin-orbit coupling dictates a nonmagnetic $J=0$ ground state, and the magnetic response is given by gapped singlet-triplet excitations. Exchange interactions as well as crystalline electric fields may close the spin gap, resulting in a Bose condensation of spin-orbit excitons. In addition to usual magnons, a Higgs amplitude mode, most prominent near quantum critical point, is expected. Upon electron doping, ferromagnetic correlations and triplet superconductivity may emerge. These predictions [1,2] will be discussed in the context of recent neutron and Raman light scattering experiments [3,4] in ruthenium oxides. [1] G. Khaliullin, Phys.Rev.Lett. {\bf 111}, 197201 (2013). [2] J. Chaloupka and G. Khaliullin, Phys.Rev.Lett. {\bf 116}, 017203 (2016). [3] A. Jain {\it et al.}, Nature Phys. 2017 (in press). [4] M. Souliou {\it et al.}, unpublished.
14:20 - 14:40 George Jackeli (Universität Stuttgart & Max-Planck-Institut für Festkörperforschung)
Spin-orbital frustration in Mott insulators
In Mott insulators, unquenched orbital degrees of freedom often frustrate the magnetic interactions and lead to a plethora of interesting phases with unusual spin patterns or non-magnetic states without long-range order. I will review from this perspective the theoretical concepts and experimental data on the late transition metal compounds, mostly focusing on iridates. In the second part, I will present our recent theoretical study of interplay of spin and orbital degrees in double-perovskite compounds with spin one-half ions occupying the frustrated fcc sublattice, such as molybdenum and osmium oxides. I will argue that this interplay might lead to a rich variety of the phases that include non-collinear ordered patterns with or without net moment, and, most remarkably, non-magnetic disordered spin-orbit dimer state.
14:40 - 15:00 Maria Daghofer (Universität Stuttgart)
Phases and excitations arising in spin-orbit coupled \(d^4\) and \(d^1\) systems
We study strongly-correlated $t_{2g}$ systems with strong spin-orbit coupling. The case of one hole, i.e. five electrons, per site has attracted considerable attention in the context of iridates: the hole is thought to be well modelled by a spin-like $j=1/2$ degree of freedom, and the versatile material class has been proposed to host topologically nontrivial states as well as an analogue to cuprate superconductors. We will here, however, concentrate on two related but different scenarios. The first is the case of one \emph{electron}, which has, in addition to the $j=1/2$ pseudospin, also an orbital degree of freedom. We will discuss (combined) spin and orbital excitation spectra expected for such cases. The second scenario to be discussed is given by two holes, i.e., a $d^4$ configuration. While the single-site ground state is here a (boring) singlet, triplet excitations do not have very high energies, so that magnetic couplings can lead to exciton condensation and magnetic order. We will present a honeycomb lattice with a Kitaev-Heisenberg-like bond geometry. The difference to the well-studied $d^4$ case is that the spin length can here fluctuate, and we will discuss the impact of this difference as well as the magnetic excitations expected for the various phases.
15:00 - 15:20 Kai Phillip Schmidt (Friedrich-Alexander-Universität Erlangen-Nürnberg)
Mutually attracting spin waves in the square-lattice quantum antiferromagnet
The Heisenberg model for S=1/2 describes the interacting spins of electrons localized on lattice sites due to strong repulsion. It is the simplest strong-coupling model in condensed matter physics, with wide-spread applications. Its relevance has been boosted further by the discovery of cuprate high-temperature superconductors. In leading order, their undoped parent compounds realize the Heisenberg model on square lattices. Much is known about the model, but mostly at small wave vectors, i.e., for long-range processes, where the physics is governed by spin waves (magnons), the Goldstone bosons of the long-range ordered antiferromagnetic phase. Much less, however, is known for short-range processes, i.e., at large wave vectors. Yet these processes are decisive for understanding high-temperature superconductivity. Recent reports suggest that one has to resort to qualitatively different fractional excitations, spinons [1]. By contrast, we present a comprehensive picture in terms of dressed magnons with strong mutual attraction on short length scales [2]. The resulting spectral signatures agree strikingly with experimental data [3]. [1] Dalla Piazza et al., Nature Physics 11, 62 (2015). [2] M. Powalski, G.S. Uhrig, and K.P. Schmidt, PRL 115, 207202 (2015). [3] M. Powalski, K.P. Schmidt, and G.S. Uhrig, arXiv:1701.04730 (2017).
15:20 - 15:40 Jeroen van den Brink (IFW Dresden)
Electronic correlations and magnetism in iridates
Correlation effects, in particular intra- and interorbital electron-electron interactions, are very substantial in 3d transition-metal compounds such as the copper oxides, but relativistic spin-orbit coupling (SOC) is weak. In 5d transition-metal compounds such as iridates, the interesting situation arises that the SOC and Coulomb interactions meet on the same energy scale. The electronic structure of iridates thus depends on a strong competition between the electronic hopping amplitudes, local energy-level splittings, electron-electron interaction strengths, and the SOC of the Ir 5d electrons. The interplay of these ingredients offers the potential to stabilize relatively well-understood states such as a 2D Heisenberg-like antiferromagnet in Sr$_2$IrO$_4$, but in principle also far more exotic ones, such as a topological Kitaev quantum spin liquid, in (hyper)honeycomb iridates [1-3]. I will discuss the microscopic electronic structures of these iridates, their proximity to idealized Heisenberg and Kitaev models, and our contributions to establishing the physical factors that appear to have preempted the realization of quantum spin liquid phases so far. [1] Jackeli & Khaliullin, PRL 102, 017205 (2009) [2] Nussinov & Van den Brink, RMP 87, 1 (2015), arXiv:1303.5922 [3] Gretarsson et al., PRL 110, 076402 (2013), Lupascu et al., PRL 112, 147201 (2014), Katukuri et al., NJP 16, 013056 (2014), Katukuri et al., PRX 4, 021051 (2014), Kim et al., Nature Comm. 5, 4453 (2014), Bogdanov et al., Nature Comm. 6, 7306 (2015), Nishimoto et al., Nature Comm. 7, 10273 (2016), Plotnikova et al., PRL 116, 106401 (2016)
Exotic Phases 2 (chair: Maria Daghofer)
16:20 - 16:40 Frederic Mila (Ecole Polytechnique Federal de Lausanne)
Majorana edge states and level crossings in chains of Co adatoms
Motivated by recent STM experiments on chains of Co adatoms deposited on Cu2N that have revealed a series of ground state level crossings as a function of magnetic field [1], and by their possible connection to Majorana edge states [2], I will discuss the origin of level crossings in the transverse field Ising model perturbed by an additional spin-spin interaction parallel or perpendicular to the magnetic field. I will show that, in a chain of N spins, an additional spin-spin interaction induces N level crossings in the ground state however small it might be, provided it has the same sign as the main interaction [3]. The proof relies on a mapping on Kitaev's 1D model of a p-wave superconductor [4], for which it can be shown that similar level crossings occur provided the pairing amplitude is smaller than the hopping integral due to the oscillating character of the Majorana edge states [3-6]. [1] R. Toskovic, R. van den Berg, A. Spinelli, I. S. Eliens, B. van den Toorn, B. Bryant, J.-S. Caux, and A. F. Otte, Nat. Phys. 12, 656 (2016). [2] F. Mila, Nat. Phys. 12, 633 (2016). [3] G. Vionnet, B. Kumar, F. Mila, arXiv:1701.08057. [4] A. Y. Kitaev. Phys.-Usp. 44 131 (2001). [5] H.-C. Kao, Phys. Rev. B 90, 245435 (2014). [6] Suraj S. Hegde and Smitha Vishveshwara, Phys. Rev. B 94, 115166 (2016).
16:40 - 17:00 Andreas Klümper (Bergische Universität Wuppertal)
Thermodynamics, contact and density profiles of multi-component fermionic and bosonic gases
We address the problem of computing the thermodynamic properties of the repulsive one-dimensional two-component Fermi gas and the two-component Bose gas with contact interaction. We derive an exact system of only two non-linear integral equations for the thermodynamics of the homogeneous model. This system allows for an easy and extremely accurate calculation of thermodynamic properties circumventing the difficulties associated with the truncation of the thermodynamic Bethe ansatz system of equations. We present extensive results for the densities, polarization, magnetic susceptibility, specific heat, interaction energy, Tan contact and local correlation function of opposite spins. Our results show that at low and intermediate temperatures the experimentally accessible contact is a non-monotonic function of the coupling strength. As a function of the temperature the contact presents a pronounced local minimum in the Tonks-Girardeau regime which signals an abrupt change of the momentum distribution in a small interval of temperature. The density profiles of the system in the presence of a harmonic trapping potential are computed using the exact solution of the homogeneous model coupled with the local density approximation. At finite temperature the density profile presents a double shell structure only when the polarization in the center of the trap is above a critical value.
17:00 - 17:20 Andreas Weichselbaum (Ludwig-Maximilians-Universität München)
DMRG simulations of SU(N) Heisenberg models keeping millions of states
Andreas Weichselbaum ^1, Sylvain Capponi ^2, Andreas Läuchli ^3, Alexei Tsvelik ^4, and Philippe Lecheminant ^5 ^1 Ludwig Maximilians University, Munich, Germany ^2 CNRS Toulouse, Université Paul Sabatier, France ^3 University of Innsbruck, Austria ^4 Brookhaven National Laboratory, Upton, NY, USA ^5 Université de Cergy Pontoise, France The density matrix renormalization group (DMRG) is applied to $SU(N)$ symmetric Heisenberg chains and ladders while fully exploiting the underlying $SU(N)$ symmetry. Since these models can be motivated from symmetric
17:20 - 17:40 Marcus Kollar (Universität Augsburg)
From Luttinger liquids to Luttinger droplets
The exactly solvable Tomonaga-Luttinger model describes two flavors of interacting electrons with linear dispersion in one dimension, but some of its properties are characteristic for a wider class of one-dimensional systems according to the Luttinger liquid paradigm [1]. The exact solution for linear dispersion is based on bosonization, which represents fermionic particle-hole excitations in terms of canonical bosons and maps the Tomonaga-Luttinger Hamiltonian onto a free bosonic theory. We use the framework of constructive finite-size bosonization [2] to derive explicit bosonic representations of general bilinear fermion operators including arbitrary dispersion terms. As an application, Luttinger `droplets' with position-dependent parameters are investigated. [1] F. D. M. Haldane, J. Phys. C: Solid State Phys. 14, 2585 (1981). [2] J. von Delft and H. Schoeller, Ann. Phys. 7, 225 (1998).
conference dinner at the Grandcafe & Restaurant Central
Spin Systems (chair: Tobias Meng)
09:00 - 09:20 Andreas Honecker (Université de Cergy-Pontoise)
Finite-temperature dynamics and thermal intraband magnon scattering in the antiferromagnetic spin-one chain
The antiferromagnetic spin-one chain is arguably one of the most fundamental quantum many-body systems. We perform a comparative study of its dynamical spin structure factor at finite temperatures, using exact diagonalization, quantum Monte Carlo simulations, and in particular a recently developed finite-temperature matrix-product-state method working in frequency space. Firstly, open chain ends yield localized edge states whose low-frequency spectral signatures persist at finite temperatures. Moreover, we observe the thermal activation of a distinct low-energy continuum contribution to the spin spectral function, with an enhanced spectral weight at low momenta and at its upper threshold. This emerging thermal spectral feature of the antiferromagnetic spin-one chain is argued to result from intraband magnon scattering due to the thermal population of the single-magnon branch, which features a large bandwidth-to-gap ratio in the present system. [1] J. Becker, T. Köhler, A.C. Tiegel, S.R. Manmana, S. Wessel, A. Honecker, Phys. Rev. B 96, 060403(R) (2017)
09:20 - 09:40 Holger Frahm (Leibniz Universität Hannover)
Emergence of non-compact degrees of freedom in the continuum limit of quantum spin chains
In recent years a growing number of vertex models or (super-)spin chains arising in the description of disorder problems or intersecting loops have been found to possess a continuous spectrum of critical exponents. In the lattice model this is accompanied by strong finite-size corrections in the spectrum of low-energy excitations. For the corresponding effective field theory this implies the presence of primary fields with weights taking continuous values within some interval. We address the question of how to characterize such a spectrum and how to identify the corresponding conformal field theory in these systems.
09:40 - 10:00 Frank Göhmann (Bergische Universität Wuppertal)
Anisotropic magnetic interactions and spin dynamics in the spin chain compound \(Cu(py)_2Br_2\): an experimental and theoretical study
In recent years we have combined van Vleck's `method of moments' with methods for the exact calculation of short-range temperature-dependent correlation functions in order to explain the response of Heisenberg-type spin chains to ESR experiments. In this talk I review our results and some of the general problems connected with the description of microwave absorption by exchange-interacting spin systems. Our insight is then used to interpret recent experimental data on the quasi-1d compound Cu(py)$_2$Br$_2$.
10:00 - 10:20 Jürgen Schnack (Universität Bielefeld)
Extreme magnetocaloric properties of \(Gd_{10}Fe_{10}\) sawtooth spin rings: a case study using the finite-temperature Lanczos method
Unusual magnetocaloric properties are a hallmark of magnetic spin systems with competing interactions. In this presentation I am going to introduce molecular magnetic systems with strong frustration. I will demonstrate links to extended quantum magnets. The magnetocaloric properties can be evaluated by means of the finite-temperature Lanczos method which produces quasi-exact results. The method will be explained shortly and then applied to several investigated systems.
Exotic Phases 3 (chair: Frederic Mila)
10:40 - 11:00 Tobias Meng (Technische Universität Dresden)
A simple way to fractionalisation: detecting charge e/2 quasiparticles in quantum wires with dynamical response functions
In this talk, I will discuss a particularly simple way to fractionalization, namely the emergence of quasiparticles of charge e/2 in spin-orbit coupled quantum wires with strong interactions. This physics may already have an experimental realization. It also has promising applications: coupling a wire hosting quasiparticles of charge e/2 to a standard superconductor leads to an 8$\pi$-Josephson effect, and to topological zero energy states bound to interfaces.
11:00 - 11:20 Johannes Richter (Otto-von-Guericke Universität Magdeburg)
Emergence of magnetic order in kagome antiferromagnets: the role of next-nearest-neighbor bonds
The existence of a non-magnetic spin-liquid ground state of the spin-1/2 kagome Heisenberg antiferromagnet (KHAF) is well established. Meanwhile, also for the spin-1 KHAF evidence for the absence of magnetic long-range order (LRO) was found. On the other hand, recently it has been reported that magnetic LRO can be established by (i) increasing the spin quantum number to s>1 [1,2], (ii) including an easy-plane anisotropy for s=1 [3,4], as well as (iii) an interlayer coupling in a layered kagome system [5]. Another route to magnetic LRO is given by including further-neighbor couplings [6,7]. We briefly review the results of [1-7] and present new results for the ground-state phase diagram of the J1-J2 KHAF (where J1 is the nearest-neighbor coupling and J2 is the 2nd-neighbor coupling) for spin quantum numbers s=1/2 and s=1, using the coupled-cluster method (CCM) in high orders of approximation and Lanczos exact diagonalization of finite lattices of up to N=36 sites. Starting from the pure KHAF (i.e., at J2=0) we find that a finite strength of the 2nd-neighbor coupling J2 is necessary to drive the system from the spin-liquid state to a ground state with magnetic LRO, either of $q=0$ symmetry (for antiferromagnetic J2) or of $\sqrt{3} \times\sqrt{3}$ symmetry (for ferromagnetic J2), where the CCM allows one to determine the critical values of J2. We also study thermodynamic properties by using the high-temperature expansion (HTE) up to 13th order [8]. The tendency to establish ground-state LRO of $q=0$ or $\sqrt{3} \times \sqrt{3}$ symmetry can already be seen in the static spin structure factor S(q) at temperatures T of the order of J1. Increasing the strength of J2, maxima in S(q) arise at those magnetic wave vectors Q that correspond to the $q=0$ or $\sqrt{3} \times \sqrt{3}$ symmetry. By using Pade approximants of the HTE series we can compute the uniform susceptibility $\chi(T)$ and the magnetic specific heat $c(T)$ down to temperatures of T$\sim$0.6J1, where the 2nd-neighbor coupling J2 already has a noticeable effect on the temperature profile of $\chi(T)$ and $c(T)$. By using a scheme interpolating between the low-energy and high-energy degrees of freedom [9] we also calculate the specific heat below T=0.6J1. While for the pure KHAF an extraordinarily large value of $c(T)$ at low temperatures is present, the 2nd-neighbor coupling J2 may lead to a more conventional temperature profile of $c(T)$. [1] O. Goetze, D.J.J. Farnell, R.F. Bishop, P.H.Y. Li, and J. Richter, Phys. Rev. B 84, 224428 (2011). [2] J. Oitmaa and R. R. P. Singh, Phys. Rev. B 93, 014424 (2016). [3] A. L. Chernyshev and M. E. Zhitomirsky, Phys. Rev. Lett. 113, 237202 (2014). [4] O. Goetze and J. Richter, Phys. Rev. B 91, 104402 (2015). [5] O. Goetze and J. Richter, Europhys. Lett. (EPL) 114, 67004 (2016). [6] Y. Iqbal, D. Poilblanc, and F. Becca, Phys. Rev. B 91, 020402 (2015). [7] H.J. Liao, Z.Y. Xie, J. Chen, Z.Y. Liu, H.D. Xie, R.Z. Huang, B. Normand, T. Xiang, arXiv:1610.04727. [8] A. Lohmann, H.-J. Schmidt, and J. Richter, Phys. Rev. B 89, 014415 (2014). [9] H.J. Schmidt, A. Hauser, A. Lohmann, and J. Richter, arXiv:1702.00487, Phys. Rev. E accepted.
11:20 - 11:40 Reinhold Egger (Universität Düsseldorf)
Topological Kondo effects
In this talk I discuss the topological Kondo effect realized in Majorana box devices. With normal leads, it represents a stable isotropic multi-channel Kondo fixed point that can be probed by linear conductance measurements. With superconducting leads, we predict a $6\pi$ periodicity of the Josephson current, corresponding to fractionalized charge transfer.
11:40 - 12:00 Peter Kopietz (Johann Wolfgang Goethe-Universität Frankfurt)
Critical pairing fluctuations in the normal state of a superconductor: pseudogap and logarithmic correction to the quasi-particle damping
We study the effect of classical critical fluctuations of the superconducting order parameter on the electronic properties in the normal state of a clean superconductor in three dimensions. Using a functional renormalization group approach to take the non-Gaussian nature of critical fluctuations into account, we show that in the vicinity of the critical temperature $T_c$ the electronic density of states exhibits a fluctuation-induced pseudogap. Moreover, in the BCS regime where the inverse coherence length is much smaller than the Fermi wavevector, classical critical order parameter fluctuations give rise to a non-analytic contribution to the quasi-particle damping of order $ ( T_c^3 / E_F^2 ) \ln ( 80 / Gi )$, where $E_F$ is the Fermi energy and the Ginzburg-Levanyuk number $Gi$ is a dimensionless measure for the width of the critical region. In the BCS regime where $Gi$ is typically in the range between $10^{-14}$ and $10^{-12}$ there is a sizable range of temperatures above $T_c$ where the quasiparticle damping due to critical superconducting fluctuations is larger than the usual $T^2$-quasi-particle damping in three-dimensional Fermi-liquids. On the other hand, in the strong coupling regime where $Gi$ is of order unity we find that the quasiparticle damping due to critical pairing fluctuations is proportional to the temperature. We also use functional renormalization group methods to derive and classify various types of induced interaction processes due to particle-hole fluctuations in Fermi systems close to the superconducting instability.
12:00 - 12:20 Ilya Eremin (Ruhr-Universität Bochum)
Cooper-pairing with small Fermi energies in multiband superconductors: BCS-BEC crossover and time-reversal symmetry broken state
In my talk I will consider the interplay between superconductivity and the formation of bound pairs of fermions in multi-band 2D fermionic systems (BCS-BEC crossover). In two spatial dimensions a bound state develops already at weak coupling, so the BCS-BEC crossover can be analyzed in a regime where calculations are fully under control. We found that the behavior of the compensated metal with one electron and one hole band is different in several aspects from that in the one-band model. There is again a crossover from BCS-like behavior at $E_F \gg E_0$ ($E_0$ being the bound-state energy in vacuum) to BEC-like behavior at $E_F \ll E_0$ with $T_{ins} > T_c$. However, in distinction to the one-band case, the actual $T_c$, below which long-range superconducting order develops, remains finite and of order $T_{ins}$ even when $E_F = 0$ on both bands. The reason for a finite $T_c$ is that the filled hole band acts as a reservoir of fermions. The pairing reconstructs the fermionic dispersion and transfers some spectral weight into the newly created hole band below the original electron band and the electron band above the original hole band. A finite density of fermions in these two bands gives rise to a finite $T_c$ even when the bare Fermi level is exactly at the bottom of the electron band and at the top of the hole band. I also analyze the formation of the s+is state in a four-band model across the Lifshitz transition, including BCS-BEC crossover effects on the shallow bands. Similar to the BCS case, we find that with hole doping the phase difference between the superconducting order parameters of the hole bands changes from 0 to π through an intermediate s + is state, breaking time-reversal symmetry (TRS).
What is the purpose of an activation function in neural networks?
It is said that activation functions in neural networks help introduce non-linearity.
What does non-linearity mean in this context?
How does the introduction of this non-linearity help?
Are there any other purposes of activation functions?
neural-networks
deep-learning
activation-functions
nbro
Mohsin
Almost all of the functionality provided by the non-linear activation functions has been covered in other answers. Let me sum them up:
First, what does non-linearity mean? It means something (a function in this case) which is not linear with respect to a given variable/variables, i.e. $f(c_1 x_1 + c_2 x_2 + \dots + c_n x_n + b) \neq c_1 f(x_1) + c_2 f(x_2) + \dots + c_n f(x_n) + f(b)$. NOTE: There is some ambiguity about how one might define linearity. In polynomial equations we define linearity in a somewhat different way than for vectors or for systems which take an input $x$ and give an output $f(x)$. See the second answer.
What does non-linearity mean in this context? It means that the neural network can successfully approximate functions (up to a certain error $e$ decided by the user) which do not follow linearity, or it can successfully predict the class of a function that is divided by a decision boundary which is not linear.
Why does it help? I hardly think you can find any physical-world phenomenon which follows linearity straightforwardly, so you need a non-linear function that can approximate the non-linear phenomenon. Also, a good intuition is that any decision boundary or function is a linear combination of polynomial combinations of the input features, and hence ultimately non-linear.
Purposes of activation function? In addition to introducing non-linearity, every activation function has its own features.
Sigmoid $\frac{1}{1 + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$
This is one of the most common activation functions and is monotonically increasing everywhere. It is generally used at the final output node as it squashes values between 0 and 1 (if the output is required to be 0 or 1). Thus a value above 0.5 is considered 1 while one below 0.5 is considered 0, although a different threshold (not 0.5) may be set. Its main advantage is that its derivative is easy to compute and reuses the already-calculated output, and supposedly horseshoe crab neurons have this kind of activation function in their neurons.
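To make the point about the cheap derivative concrete, here is a minimal NumPy sketch (the function names are my own, not from any particular library): once the sigmoid output has been computed, its gradient is obtained from that same value without re-evaluating any exponential.

```python
import numpy as np

def sigmoid(z):
    # 1 / (1 + e^{-z}), applied elementwise
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(s):
    # derivative expressed through the already-computed output s = sigmoid(z):
    # d(sigmoid)/dz = s * (1 - s)
    return s * (1.0 - s)

z = np.linspace(-6, 6, 7)
s = sigmoid(z)
print(s)                # squashed into (0, 1)
print(sigmoid_grad(s))  # largest near z = 0, tiny for large |z|
```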
Tanh $\frac{e^{(w_1 x_1 + \dots + w_n x_n + b)} - e^{-(w_1 x_1 + \dots + w_n x_n + b)}}{e^{(w_1 x_1 + \dots + w_n x_n + b)} + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$
This has an advantage over the sigmoid activation function as it tends to centre the output around 0, which has the effect of better learning in the subsequent layers (it acts as a feature normaliser). A nice explanation here. Negative and positive output values may be considered as 0 and 1 respectively. Used mostly in RNNs.
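A quick way to see the zero-centring claim is to push the same zero-mean pre-activations through both functions and compare the output means; this is only an illustrative sketch of my own, not tied to any framework.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, size=10_000)   # zero-mean pre-activations

sigmoid_out = 1.0 / (1.0 + np.exp(-z))
tanh_out = np.tanh(z)

print(round(sigmoid_out.mean(), 3))  # roughly 0.5: always positive, so the next layer sees shifted inputs
print(round(tanh_out.mean(), 3))     # roughly 0.0: centred output, acting like a feature normaliser
```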
Re-Lu activation function - This is another very common, simple non-linear activation function (linear in the positive range and in the negative range, exclusive of each other) that has the advantage of removing the problem of the vanishing gradient faced by the above two, i.e. the gradient tends to 0 as x tends to +infinity or -infinity. Here is an answer about Re-Lu's approximation power in spite of its apparent linearity. ReLus have the disadvantage of dead neurons, which results in larger NNs.
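The gradient comparison below is a small sketch (values chosen by me) of why ReLU side-steps the vanishing gradient for positive pre-activations but can produce "dead" units for negative ones:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    # 1 for z > 0, 0 for z <= 0; a unit stuck in the negative region stops learning (a "dead" neuron)
    return (z > 0).astype(float)

def sigmoid_grad_from_z(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

z = np.array([-10.0, -1.0, 0.5, 10.0])
print(relu(z))                 # [ 0.   0.   0.5 10. ]
print(relu_grad(z))            # [0. 0. 1. 1.]  -- no shrinking for large positive z
print(sigmoid_grad_from_z(z))  # tiny at |z| = 10: the vanishing-gradient regime
```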
Also, you can design your own activation function depending on your specialized problem. You may have a quadratic activation function which will approximate quadratic functions much better. But then, you have to design a cost function that is somewhat convex in nature, so that you can optimise it using first-order differentials and the NN actually converges to a decent result. This is the main reason why standard activation functions are used. But I believe that with proper mathematical tools, there is a huge potential for new and eccentric activation functions.
For example, say you are trying to approximate a single-variable quadratic function, say $a\,x^2 + c$. This will be best approximated by a quadratic activation $w_1 x^2 + b$, where $w_1$ and $b$ are the trainable parameters. But designing a loss function that follows the conventional first-order derivative method (gradient descent) can be quite tough for non-monotonically increasing functions.
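As a toy illustration of this paragraph, the sketch below fits the hypothetical quadratic "activation" $w_1 x^2 + b$ to samples of $a x^2 + c$ with plain gradient descent on a squared loss; the parameter names, learning rate and iteration count are arbitrary choices of mine. Because this particular loss is convex in $(w_1, b)$, the first-order updates converge without trouble.

```python
import numpy as np

rng = np.random.default_rng(1)
a, c = 3.0, -2.0                      # target function a*x^2 + c
x = rng.uniform(-1.0, 1.0, size=200)
y = a * x**2 + c

w1, b = 0.0, 0.0                      # trainable parameters of the quadratic "activation" w1*x^2 + b
lr = 0.1
for _ in range(2000):
    err = (w1 * x**2 + b) - y
    # gradients of the mean squared error with respect to w1 and b
    w1 -= lr * 2.0 * np.mean(err * x**2)
    b  -= lr * 2.0 * np.mean(err)

print(round(w1, 3), round(b, 3))      # approaches (3.0, -2.0)
```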
For Mathematicians: In the sigmoid activation function $\frac{1}{1 + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$, write $y = e^{-(w_1 x_1 + \dots + w_n x_n + b)}$. Whenever the pre-activation $w_1 x_1 + \dots + w_n x_n + b$ is positive, we have $y < 1$, and the geometric series gives $\text{sigmoid} = \frac{1}{1+y} = 1 - y + y^2 - y^3 + \dots$. Thus we get all the powers of $y$, and each power of $y$ can be thought of as a multiplication of several decaying exponentials based on a feature $x$; for example, $y^2 = e^{-2(w_1 x_1)} \cdot e^{-2(w_2 x_2)} \cdot e^{-2(w_3 x_3)} \cdots e^{-2(b)}$. Thus each feature has a say in the scaling of the graph of $y^2$.
Another way of thinking would be to expand the exponentials according to Taylor Series: $$e^{x}=1+\frac{x}{1 !}+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\cdots$$
So we get a very complex combination, with all the possible polynomial combinations of the input variables present. I believe that if a neural network is structured correctly, the NN can fine-tune these polynomial combinations by just modifying the connection weights, selecting the polynomial terms which are most useful, and rejecting terms by subtracting the outputs of two properly weighted nodes.
The $\tanh$ activation can work in the same way, since its output satisfies $|\tanh| < 1$. I am not sure how Re-Lus work in this picture, but due to their rigid structure and the problem of dead neurons we require larger networks with Re-Lus for a good approximation.
But for a formal mathematical proof, one has to look at the Universal Approximation Theorem.
A visual proof that neural nets can compute any function
The Universal Approximation Theorem For Neural Networks- An Elegant Proof
For non-mathematicians some better insights visit these links:
Activation Functions by Andrew Ng - for a more formal and scientific answer
How does neural network classifier classify from just drawing a decision plane?
Differentiable activation function
A visual proof that neural nets can compute any function
Faizy
$\begingroup$ I would argue that ReLU is actually more common in NNs today than sigmoid :) $\endgroup$
– Andreas Storvik Strauman
Apr 8, 2018 at 21:28
$\begingroup$ @AndreasStorvikStrauman and you are quite correct...But sigmoid has a child called softmax :) $\endgroup$
Sep 1, 2018 at 11:31
$\begingroup$ How do you come to the conclusion that $e ^ {-(w1*x1...wn*xn + b)}$ is always $<1$? In general it shouldn't. $\endgroup$
– naive
$\begingroup$ @naive yeah you are correct... can't figure out why I wrote such a thing. I'll correct it when I get time, thanks for the heads up. $\endgroup$
If you only had linear layers in a neural network, all the layers would essentially collapse to one linear layer, and, therefore, a "deep" neural network architecture effectively wouldn't be deep anymore but just a linear classifier.
$$y = f(W_1 W_2 W_3x) = f(Wx)$$
where $W$ corresponds to the matrix that represents the network weights and biases for one layer, and $f()$ to the activation function.
Now, with the introduction of a non-linear activation unit after every linear transformation, this won't happen anymore.
$$y = f_1( W_1 f_2( W_2f_3( W_3x)))$$
Each layer can now build on the results of the preceding non-linear layer, which essentially leads to a complex non-linear function that is able to approximate essentially every function of interest given the right weighting and enough depth/width.
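A minimal numerical check of the collapse argument, with shapes and values picked arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 16))
W3 = rng.normal(size=(16, 3))
x = rng.normal(size=3)

# three "layers" without activations...
deep = W1 @ (W2 @ (W3 @ x))
# ...are exactly one linear layer with the pre-multiplied matrix W = W1 W2 W3
shallow = (W1 @ W2 @ W3) @ x
print(np.allclose(deep, shallow))  # True

# with a nonlinearity in between, the collapse no longer happens
relu = lambda v: np.maximum(0.0, v)
deep_nonlinear = W1 @ relu(W2 @ relu(W3 @ x))
print(np.allclose(deep_nonlinear, shallow))  # False (in general)
```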
Marcel_marcel1991
$\begingroup$ It should be noted that although a composition of multiple linear operators (on a Euclidean space) can always be collapsed to a single matrix $W$, this doesn't mean keeping separate matrices $W_1, W_2...$ never makes sense. In particular, if $W_2$ maps from a high-dimensional space to a low-dimensional one and $W_1$ back to the high-dimensional one, then $W_1(W_2\:x)$ is cheaper to compute than $W(x)$. So, "else it would be equivalent to a single layer" is not really an argument for why nonlinearities between the layers are needed. What's actually needed is the nonlinearity. $\endgroup$
Let's first talk about linearity. Linearity means the map (a function), $f: V \rightarrow W$, used is a linear map, that is, it satisfies the following two conditions
$f(x + y) = f(x) + f(y), \; x, y \in V$
$f(c x) = cf(x), \; c \in \mathbb{R}$
You should be familiar with this definition if you have studied linear algebra in the past.
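As a tiny sanity check of the two conditions (with toy numbers of my own choosing), a pure matrix map passes both, while adding a bias term already breaks the second one:

```python
import numpy as np

W = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, -1.0])
x, y, c = np.array([1.0, 0.5]), np.array([-2.0, 3.0]), 2.5

f = lambda v: W @ v       # linear map
g = lambda v: W @ v + b   # affine map (not linear in the strict sense)

print(np.allclose(f(x + y), f(x) + f(y)), np.allclose(f(c * x), c * f(x)))  # True True
print(np.allclose(g(c * x), c * g(x)))                                      # False: the bias spoils homogeneity
```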
However, it's more important to think of linearity in terms of linear separability of data, which means the data can be separated into different classes by drawing a line (or hyperplane, if more than two dimensions), which represents a linear decision boundary, through the data. If we cannot do that, then the data is not linearly separable. Oftentimes, data from a more complex (and thus more relevant) problem setting is not linearly separable, so it is in our interest to model such data.
To model nonlinear decision boundaries of data, we can utilize a neural network that introduces non-linearity. Neural networks classify data that is not linearly separable by transforming data using some nonlinear function (or our activation function), so the resulting transformed points become linearly separable.
Different activation functions are used for different problem setting contexts. You can read more about that in the book Deep Learning (Adaptive Computation and Machine Learning series).
For an example of non linearly separable data, see the XOR data set.
Can you draw a single line to separate the two classes?
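No, and a short sketch (hand-written weights and a feature map of my own choosing) makes the point of the preceding paragraphs explicit: the four XOR points cannot be split by one straight line, but after a simple non-linear transformation, here just appending the product feature $x_1 x_2$, a single hyperplane classifies them perfectly.

```python
import numpy as np

# the XOR data set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# non-linear feature map: append the product x1*x2
phi = np.column_stack([X, X[:, 0] * X[:, 1]])

# in the transformed space, the plane x1 + x2 - 2*x1*x2 = 0.5 separates the classes
w, b = np.array([1.0, 1.0, -2.0]), -0.5
print((phi @ w + b > 0).astype(int))  # [0 1 1 0], matching y exactly
```

In a neural network this non-linear feature map is not hand-crafted as above but learned, which is exactly the role the activation functions play.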
answered Mar 3, 2018 at 0:18
sma
$\begingroup$ So, without activation functions, the outputs of NNs would always be linear, since the output from the previous layer is multiplied with weights and added to a bias at each layer. So, in order for a NN to learn or approximate complex functions, different activation functions are used depending on the purpose. The purpose of an activation function is to introduce the non-linearity which those multiplications and additions alone cannot. Is my intuition correct? $\endgroup$
– Naveen Reddy Marthala
$\begingroup$ Yup that is correct - different activation functions may work better depending on the problem context. $\endgroup$
– sma
Consider a very simple neural network, with just 2 layers, where the first has 2 neurons and the last 1 neuron, and the input size is 2. The inputs are $x_1$ and $x_2$.
The weights of the first layer are $w_{11}, w_{12}, w_{21}$ and $w_{22}$. We do not have activations, so the outputs of the neurons in the first layer are
\begin{align} o_1 = w_{11}x_1 + w_{12}x_2 \\ o_2 = w_{21}x_1 + w_{22}x_2 \end{align}
Let's calculate the output of the last layer with weights $z_1$ and $z_2$
$$out = z_1o_1 + z_2o_2$$
Just substitute $o_1$ and $o_2$ and you will get:
$$out = z_1(w_{11}x_1 + w_{12}x_2) + z_2(w_{21}x_1 + w_{22}x_2)$$
$$out = (z_1w_{11} + z_2 w_{21})x_1 + (z_2w_{22} + z_1w_{12})x_2$$
And look at this! If we create NN just with one layer with weights $z_1w_{11} + z_2 w_{21}$ and $z_2w_{22} + z_1w_{12}$ it will be equivalent to our 2 layers NN.
The conclusion: without nonlinearity, the computational power of a multilayer NN is equal to 1-layer NN.
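As a quick numerical check of the algebra above (a sketch assuming NumPy; the weight values are random and purely illustrative):

```python
# Two stacked linear layers compute exactly the same map as one collapsed layer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))    # first layer weights: rows (w11, w12) and (w21, w22)
z = rng.normal(size=(1, 2))    # second layer weights: (z1, z2)
x = rng.normal(size=(2, 1))    # input column vector (x1, x2)

two_layers = z @ (W @ x)       # out = z1*o1 + z2*o2
one_layer  = (z @ W) @ x       # the collapsed single-layer weights are z·W
print(np.allclose(two_layers, one_layer))  # True
```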
Also, you can think of the sigmoid function as a differentiable IF statement that outputs a probability. Adding new layers can then create new, more complex combinations of IF statements. For example, the first layer combines features and gives the probabilities that there are eyes, a tail, and ears in the picture; the second combines new, more complex features from the previous layer and gives the probability that there is a cat.
For more information: Hacker's guide to Neural Networks.
First Degree Linear Polynomials
Non-linearity is not a precise mathematical term. Those who use it are contrasting with a first degree polynomial relationship between input and output, the kind of relationship that would be graphed as a straight line, a flat plane, or a higher-dimensional surface with no curvature.
To model relations more complex than $y = a_1x_1 + a_2x_2 + \dots + b$, we need more than the first two terms (the constant and linear terms) of a Taylor series approximation.
Tune-able Functions with Non-zero Curvature
Artificial networks such as the multi-layer perceptron and its variants are matrices of functions with non-zero curvature that, when taken collectively as a circuit, can be tuned with attenuation grids to approximate more complex functions of non-zero curvature. These more complex functions generally have multiple inputs (independent variables).
The attenuation grids are simply matrix-vector products, the matrix being the parameters that are tuned to create a circuit that approximates the more complex curved, multivariate function with simpler curved functions.
Oriented with the multi-dimensional signal entering at the left and the result appearing on the right (left-to-right causality), as in the electrical engineering convention, the vertical columns are called layers of activations, mostly for historical reasons. They are actually arrays of simple curved functions. The most commonly used activations today are these.
Leaky ReLU
Threshold (binary step)
The identity function is sometimes used to pass through signals untouched for various structural convenience reasons.
These are less used but were in vogue at one point or another. They are still used but have lost popularity because they place additional overhead on back propagation computations and tend to lose in contests for speed and accuracy.
The more complex of these can be parametrized and all of them can be perturbed with pseudo-random noise to improve reliability.
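For reference, the activations mentioned here (plus the ubiquitous ReLU, sigmoid, and tanh) can be written out explicitly. This is a sketch assuming NumPy; the leak slope value is an illustrative default, not a prescribed one.

```python
# Common activation functions written out explicitly (illustrative).
import numpy as np

relu       = lambda x: np.maximum(0.0, x)
leaky_relu = lambda x, a=0.01: np.where(x > 0, x, a * x)  # a = leak slope (illustrative)
step       = lambda x: (x > 0).astype(float)              # threshold / binary step
sigmoid    = lambda x: 1.0 / (1.0 + np.exp(-x))
tanh       = np.tanh
identity   = lambda x: x                                   # pass-through

x = np.linspace(-2.0, 2.0, 5)
for name, f in [("relu", relu), ("leaky_relu", leaky_relu), ("step", step),
                ("sigmoid", sigmoid), ("tanh", tanh), ("identity", identity)]:
    print(f"{name:>10}: {np.round(f(x), 3)}")
```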
Why Bother With All of That?
Artificial networks are not necessary for tuning well-developed classes of relationships between input and desired output; such cases are easily optimized using long-established techniques. For instance:
Higher degree polynomials — Often directly solvable using techniques derived directly from linear algebra
Periodic functions — Can be treated with Fourier methods
Curve fitting — converges well using the Levenberg–Marquardt algorithm, a damped least-squares approach
For these, approaches developed long before the advent of artificial networks can often arrive at an optimal solution with less computational overhead and more precision and reliability.
Where artificial networks excel is in the acquisition of functions about which the practitioner is largely ignorant or the tuning of the parameters of known functions for which specific convergence methods have not yet been devised.
Multi-layer perceptrons (ANNs) tune their parameters (the attenuation matrices) during training. Tuning is directed by gradient descent or one of its variants to produce a digital approximation of an analog circuit that models the unknown functions. Gradient descent is driven by some criterion toward which circuit behavior is pushed by comparing the outputs with that criterion; a minimal training-loop sketch is given after the list below. The criterion can be any of these.
Matching labels (the desired output values corresponding to the training example inputs)
The need to pass information through narrow signal paths and reconstruct from that limited information
Another criterion inherent in the network
Another criterion arising from a signal source outside the network
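As promised above, here is a minimal sketch of that tuning process, assuming NumPy; the target function, layer sizes, learning rate, and iteration count are all invented for illustration and are not taken from this answer.

```python
# Minimal gradient-descent tuning of a one-hidden-layer tanh network toward the
# "matching labels" criterion (mean squared error). Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(3.0 * X[:, 0]) * X[:, 1]          # an "unknown" target function

W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # non-linear hidden layer
    pred = (h @ W2 + b2)[:, 0]               # linear output layer
    err = pred - y
    loss = np.mean(err ** 2)
    if step == 0:
        print("initial MSE:", round(loss, 4))
    # Backpropagation: gradients of the criterion with respect to the weights.
    g_pred = (2.0 / len(y)) * err[:, None]
    gW2 = h.T @ g_pred;                 gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ g_h;                    gb1 = g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:  ", round(loss, 4))        # should be well below the initial value
```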
In summary, activation functions provide the building blocks that are used repeatedly across the two dimensions of the network structure and that, combined with attenuation matrices to vary the weighting of signals from layer to layer, allow the network to approximate arbitrarily complex functions.
Deeper Network Excitement
The post-millennial excitement about deeper networks arose because the patterns in two distinct classes of complex inputs have been successfully identified and put into use within larger business, consumer, and scientific markets.
Heterogeneous and semantically complex structures
Media files and streams (images, video, audio)
Douglas Daseeco
Why is there a sigmoid function in the hidden layer of a neural network?
Why do activation functions need to be differentiable in the context of neural networks?
Why do we prefer ReLU over linear activation functions?
How exactly can ReLUs approximate non-linear and curved functions?
What role the activation function plays in the forward pass and how it is different from backpropagation
Is ReLU a non-linear activation function?
Why is non-linearity desirable in a neural network?
About the choice of the activation functions in the Multilayer Perceptron, and on what does this depends?
What does "linear unit" mean in the names of activation functions?
What is meant by "well-behaved gradient" in this context?
Why identity function is generally treated as an activation function?
What is meant by "lateral connection" in the context of neural networks?
Why are SVMs / Softmax classifiers considered linear while neural networks are non-linear?
Boundless Statistics
Sample Surveys
The Literary Digest Poll
Incorrect polling techniques used during the 1936 presidential election led to the demise of the popular magazine, The Literary Digest.
Critique the problems with the techniques used by the Literary Digest Poll
As it had done in 1920, 1924, 1928 and 1932, The Literary Digest conducted a straw poll regarding the likely outcome of the 1936 presidential election. Before 1936, it had always correctly predicted the winner. It predicted Landon would beat Roosevelt.
In November, Landon carried only Vermont and Maine; President F. D. Roosevelt carried the 46 other states. Landon's electoral vote total of eight is a tie for the record low for a major-party nominee since the American political paradigm of the Democratic and Republican parties began in the 1850s.
The polling techniques used were to blame, even though the Digest polled 10 million people and got responses from 2.4 million. It polled mostly its own readers, who had more money than the typical American during the Great Depression, and higher-income people were more likely to vote Republican.
Subsequent statistical analysis and studies have shown it is not necessary to poll ten million people when conducting a scientific survey. A much lower number, such as 1,500 persons, is adequate in most cases so long as they are appropriately chosen.
This debacle led to a considerable refinement of public opinion polling techniques and later came to be regarded as ushering in the era of modern scientific public opinion research.
bellwether: anything that indicates future trends
straw poll: a survey of opinion which is unofficial, casual, or ad hoc
The Literary Digest
The Literary Digest was an influential general interest weekly magazine published by Funk & Wagnalls. Founded by Isaac Kaufmann Funk in 1890, it eventually merged with two similar weekly magazines, Public Opinion and Current Opinion.
The Literary Digest: Cover of the February 19, 1921 edition of The Literary Digest.
Beginning with early issues, the emphasis of The Literary Digest was on opinion articles and an analysis of news events. Established as a weekly news magazine, it offered condensations of articles from American, Canadian, and European publications. Type-only covers gave way to illustrated covers during the early 1900s. After Isaac Funk's death in 1912, Robert Joseph Cuddihy became the editor. In the 1920s, the covers carried full-color reproductions of famous paintings. By 1927, The Literary Digest climbed to a circulation of over one million. Covers of the final issues displayed various photographic and photo-montage techniques. In 1938, it merged with the Review of Reviews, only to fail soon after. Its subscriber list was bought by Time.
Presidential Poll
The Literary Digest is best-remembered today for the circumstances surrounding its demise. As it had done in 1920, 1924, 1928 and 1932, it conducted a straw poll regarding the likely outcome of the 1936 presidential election. Before 1936, it had always correctly predicted the winner.
The 1936 poll showed that the Republican candidate, Governor Alfred Landon of Kansas, was likely to be the overwhelming winner. This seemed possible to some, as the Republicans had fared well in Maine, where the congressional and gubernatorial elections were then held in September, as opposed to the rest of the nation, where these elections were held in November along with the presidential election, as they are today. This outcome seemed especially likely in light of the conventional wisdom, "As Maine goes, so goes the nation," a saying coined because Maine was regarded as a "bellwether" state which usually supported the winning candidate's party.
In November, Landon carried only Vermont and Maine; President Franklin Delano Roosevelt carried the 46 other states. Landon's electoral vote total of eight is a tie for the record low for a major-party nominee since the American political paradigm of the Democratic and Republican parties began in the 1850s. The Democrats joked, "As goes Maine, so goes Vermont," and the magazine was completely discredited because of the poll, folding soon thereafter.
1936 Presidential Election: This map shows the results of the 1936 presidential election. Red denotes states won by Landon/Knox, blue denotes those won by Roosevelt/Garner. Numbers indicate the number of electoral votes allotted to each state.
In retrospect, the polling techniques employed by the magazine were to blame. Although it had polled ten million individuals (of whom about 2.4 million responded, an astronomical total for any opinion poll), it had surveyed firstly its own readers, a group with disposable incomes well above the national average of the time, shown in part by their ability still to afford a magazine subscription during the depths of the Great Depression, and then two other readily available lists: that of registered automobile owners and that of telephone users. While such lists might come close to providing a statistically accurate cross-section of Americans today, this assumption was manifestly incorrect in the 1930s. Both groups had incomes well above the national average of the day, which resulted in lists of voters far more likely to support Republicans than a truly typical voter of the time. In addition, although 2.4 million responses is an astronomical number, it is only 24% of those surveyed, and the low response rate to the poll is probably a factor in the debacle. It is erroneous to assume that the responders and the non-responders had the same views and merely to extrapolate the former on to the latter. Further, as subsequent statistical analysis and study have shown, it is not necessary to poll ten million people when conducting a scientific survey. A much lower number, such as 1,500 persons, is adequate in most cases so long as they are appropriately chosen.
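The mechanism can be illustrated with a toy simulation. All of the numbers below (income shares, voting probabilities, over-representation factor) are invented for illustration and are not estimates of the 1936 electorate; the point is only that a huge sample from a skewed frame stays biased, while a small random sample does not.

```python
# Toy simulation: a huge sample from a skewed frame is still biased, while a
# small simple random sample is not. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000                                    # electorate size (illustrative)
high_income = rng.random(N) < 0.30               # 30% higher-income voters
p_republican = np.where(high_income, 0.65, 0.40) # vote probabilities (illustrative)
votes_rep = rng.random(N) < p_republican
print("true Republican share:     ", round(votes_rep.mean(), 3))

# "Digest-style" frame: higher-income voters are 5x as likely to be polled/respond.
weights = np.where(high_income, 5.0, 1.0)
digest = rng.choice(N, size=200_000, p=weights / weights.sum())
print("huge but biased poll:      ", round(votes_rep[digest].mean(), 3))

# A small simple random sample of 1,500 voters.
srs = rng.choice(N, size=1_500, replace=False)
print("small random sample (1500):", round(votes_rep[srs].mean(), 3))
```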
George Gallup's American Institute of Public Opinion achieved national recognition by correctly predicting the result of the 1936 election and by also correctly predicting the quite different results of the Literary Digest poll to within about 1%, using a smaller sample size of 50,000. This debacle led to a considerable refinement of public opinion polling techniques and later came to be regarded as ushering in the era of modern scientific public opinion research.
The Year the Polls Elected Dewey
In the 1948 presidential election, the use of quota sampling led the polls to inaccurately predict that Dewey would defeat Truman.
Criticize the polling methods used in 1948 that incorrectly predicted that Dewey would win the presidency
Many polls, including Gallup, Roper, and Crossley, wrongfully predicted the outcome of the election due to their use of quota sampling.
Quota sampling is when each interviewer polls a certain number of people in various categories that are representative of the whole population, such as age, race, sex, and income.
One major problem with quota sampling includes the possibility of missing an important representative category that is key to how people vote. Another is the human element involved.
Truman, as it turned out, won the electoral vote by a 303-189 majority over Dewey, although a swing of just a few thousand votes in Ohio, Illinois, and California would have produced a Dewey victory.
One of the most famous blunders came when the Chicago Tribune wrongfully printed the inaccurate headline, "Dewey Defeats Truman" on November 3, 1948, the day after Truman defeated Dewey.
quadrennial: happening every four years
margin of error: An expression of the lack of precision in the results obtained from a sample.
quota sampling: a sampling method that chooses a representative cross-section of the population by taking into consideration each important characteristic of the population proportionally, such as income, sex, race, age, etc.
The United States presidential election of 1948 was the 41st quadrennial presidential election, held on Tuesday, November 2, 1948. Incumbent President Harry S. Truman, the Democratic nominee, successfully ran for election against Thomas E. Dewey, the Republican nominee.
This election is considered to be the greatest election upset in American history. Virtually every prediction (with or without public opinion polls) indicated that Truman would be defeated by Dewey. Both parties had severe ideological splits, with the far left and far right of the Democratic Party running third-party campaigns. Truman's surprise victory was the fifth consecutive presidential win for the Democratic Party, a record never surpassed since contests against the Republican Party began in the 1850s. Truman's feisty campaign style energized his base of traditional Democrats, most of the white South, Catholic and Jewish voters, and—in a surprise—Midwestern farmers. Thus, Truman's election confirmed the Democratic Party's status as the nation's majority party, a status it would retain until the conservative realignment in 1968.
Incorrect Polls
As the campaign drew to a close, the polls showed Truman was gaining. Though Truman lost all nine of the Gallup Poll's post-convention surveys, Dewey's Gallup lead dropped from 17 points in late September to 9% in mid-October to just 5 points by the end of the month, just above the poll's margin of error. Although Truman was gaining momentum, most political analysts were reluctant to break with the conventional wisdom and say that a Truman victory was a serious possibility. The Roper Poll had suspended its presidential polling at the end of September, barring "some development of outstanding importance," which, in their subsequent view, never occurred. Dewey was not unaware of his slippage, but he had been convinced by his advisers and family not to counterattack the Truman campaign.
Let's take a closer look at the polls. The Gallup, Roper, and Crossley polls all predicted a Dewey win, while the actual results turned out quite differently, as shown in the following table. How did this happen?
1948 Election: The table shows the results of three polls against the actual results in the 1948 presidential election. Notice that Dewey was ahead in all three polls, but ended up losing the election.
The Crossley, Gallup, and Roper organizations all used quota sampling. Each interviewer was assigned a specified number of subjects to interview. Moreover, the interviewer was required to interview specified numbers of subjects in various categories, based on residential area, sex, age, race, economic status, and other variables. The intent of quota sampling is to ensure that the sample represents the population in all essential respects.
This seems like a good method on the surface, but where does one stop? What if a significant criterion was left out–something that deeply affected the way in which people vote? This would cause significant error in the results of the poll. In addition, quota sampling involves a human element. Pollsters, in reality, were left to poll whomever they chose. Research shows that the polls tended to overestimate the Republican vote. In earlier years, the margin of victory was large enough that most polls still accurately predicted the winner, but in 1948, their luck ran out. Quota sampling had to go.
Mistake in the Newspapers
One of the most famous blunders came when the Chicago Tribune wrongfully printed the inaccurate headline, "Dewey Defeats Truman" on November 3, 1948, the day after incumbent United States President Harry S. Truman beat Republican challenger and Governor of New York Thomas E. Dewey.
The paper's erroneous headline became notorious after a jubilant Truman was photographed holding a copy of the paper during a stop at St. Louis Union Station while returning by train from his home in Independence, Missouri to Washington, D.C.
Dewey Defeats Truman: President Truman holds up the newspaper that wrongfully reported his defeat.
Using Chance in Survey Work
When conducting a survey, a sample can be chosen by chance or by more methodical methods.
Distinguish between probability samples and non-probability samples for surveys
In probability sampling, every unit in the population has a nonzero chance of being selected for the sample, and this probability can be accurately determined.
Probability sampling includes simple random sampling, systematic sampling, stratified sampling, and cluster sampling. These various ways of probability sampling have two things in common: every element has a known nonzero probability of being sampled, and random selection is involved at some point.
Non-probability sampling is any sampling method wherein some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection can't be accurately determined.
purposive sampling: occurs when the researchers choose the sample based on who they think would be appropriate for the study; used primarily when there is a limited number of people that have expertise in the area being researched
nonresponse: the absence of a response
In order to conduct a survey, a sample from the population must be chosen. This sample can be chosen using chance, or it can be chosen more systematically.
Probability Sampling for Surveys
In probability sampling, every unit in the population has a nonzero chance of being selected for the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.
Let's say we want to estimate the total income of adults living in a given street by using a survey with questions. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income. People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (The person who is selected from that household can be loosely viewed as also representing the person who isn't selected. )
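A short sketch of the estimator just described follows. The household compositions and incomes are invented; only the weighting rule, one over the selection probability, comes from the text.

```python
# Weight each selected adult's income by 1 / (probability of selection),
# i.e. by the number of adults in that household. Data are invented.
import random

random.seed(0)

households = [
    [30_000],                  # one adult: selected with probability 1
    [40_000, 60_000],          # two adults: each selected with probability 1/2
    [20_000, 25_000, 55_000],  # three adults: each selected with probability 1/3
]

estimate = 0.0
for incomes in households:
    selected = random.choice(incomes)        # interview one randomly chosen adult
    prob = 1.0 / len(incomes)                # that adult's selection probability
    estimate += selected / prob              # count the income 1/prob times

print("estimated street total:", estimate)
print("true street total:     ", sum(sum(h) for h in households))
# Over many repetitions the estimate averages out to the true total (it is unbiased).
```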
Income in the United States: Graph of United States income distribution from 1947 through 2007 inclusive, normalized to 2007 dollars. The data is from the US Census, which is a survey over the entire population, not just a sample.
In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
Probability sampling includes: Simple Random Sampling, Systematic Sampling, Stratified Sampling, Probability Proportional to Size Sampling, and Cluster or Multistage Sampling. These various ways of probability sampling have two things in common: every element has a known nonzero probability of being sampled, and random selection is involved at some point.
Non-Probability Sampling for Surveys
Non-probability sampling is any sampling method wherein some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection can't be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, non-probability sampling does not allow the estimation of sampling errors. These conditions give rise to exclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
Let's say we visit every household in a given street and interview the first person to answer the door. In any household with more than one occupant, this is a non-probability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
Non-probability sampling methods include accidental sampling, quota sampling, and purposive sampling. In addition, nonresponse effects may turn any probability design into a non-probability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.
How Well Do Probability Methods Work?
Even when using probability sampling methods, bias can still occur.
Analyze the problems associated with probability sampling
Undercoverage occurs when some groups in the population are left out of the process of choosing the sample.
Nonresponse occurs when an individual chosen for the sample can't be contacted or does not cooperate.
Response bias occurs when a respondent lies about his or her true beliefs.
The wording of questions–especially if they are leading questions– can affect the outcome of a survey.
The larger the sample size, the more accurate the survey.
undercoverage: Occurs when a survey fails to reach a certain portion of the population.
response bias: Occurs when the answers given by respondents do not reflect their true beliefs.
Probability vs. Non-probability Sampling
In earlier sections, we discussed how samples can be chosen. Failure to use probability sampling may result in bias or systematic errors in the way the sample represents the population. This is especially true of voluntary response samples, in which respondents decide for themselves whether to take part, and convenience samples, in which the individuals easiest to reach are chosen.
However, even probability sampling methods that use chance to select a sample are prone to some problems. Recall some of the methods used in probability sampling: simple random samples, stratified samples, cluster samples, and systematic samples. In these methods, each member of the population has a chance of being chosen for the sample, and that chance is a known probability.
Problems With Probability Sampling
Random sampling eliminates some of the bias that presents itself in sampling, but when a sample is chosen by human beings, there are always going to be some unavoidable problems. When a sample is chosen, we first need an accurate and complete list of the population. This type of list is often not available, causing most samples to suffer from undercoverage. For example, if we chose a sample from a list of households, we will miss those who are homeless, in prison, or living in a college dorm. In another example, a telephone survey calling landline phones will potentially miss those who are unlisted, those who only use a cell phone, and those who do not have a phone at all. Both of these examples will cause a biased sample in which poor people, whose opinions may very well differ from those of the rest of the population, are underrepresented.
Another source of bias is nonresponse, which occurs when a selected individual cannot be contacted or refuses to participate in the survey. Many people do not pick up the phone when they do not know the person who is calling. Nonresponse is often higher in urban areas, so most researchers conducting surveys will substitute other people in the same area to avoid favoring rural areas. However, if the people eventually contacted differ from those who are rarely at home or refuse to answer questions for one reason or another, some bias will still be present.
Ringing Phone: This image shows a ringing phone that is not being answered.
A third example of bias is called response bias. Respondents may not answer questions truthfully, especially if the survey asks about illegal or unpopular behavior. The race and sex of the interviewer may influence people to respond in a way that is more extreme than their true beliefs. Careful training of pollsters can greatly reduce response bias.
Finally, another source of bias can come in the wording of questions. Confusing or leading questions can strongly influence the way a respondent answers questions.
When reading the results of a survey, it is important to know the exact questions asked, the rate of nonresponse, and the method of survey before you trust a poll. In addition, remember that a larger sample size will provide more accurate results.
The Gallup Poll
The Gallup Poll is a public opinion poll that conducts surveys in 140 countries around the world.
Examine the pros and cons of the way in which the Gallup Poll is conducted
The Gallup Poll measures and tracks the public's attitudes concerning virtually every political, social, and economic issues of the day in 140 countries around the world.
The Gallup Polls have been traditionally known for their accuracy in predicting presidential elections in the United States from 1936 to 2008. They were only incorrect in 1948 and 1976.
Today, Gallup samples people using both landline telephones and cell phones. It has drawn criticism for not adapting quickly enough to a society in which more and more people use only cell phones and have no landline.
public opinion polls: surveys designed to represent the beliefs of a population by conducting a series of questions and then extrapolating generalities in ratio or within confidence intervals
Objective: not influenced by the emotions or prejudices
Overview of the Gallup Organization
Gallup, Inc. is a research-based performance-management consulting company. Originally founded by George Gallup in 1935, the company became famous for its public opinion polls, which were conducted in the United States and other countries. Today, Gallup has more than 40 offices in 27 countries. The world headquarters are located in Washington, D.C., while the operational headquarters are in Omaha, Nebraska. Its current Chairman and CEO is Jim Clifton.
The Gallup Organization: The Gallup, Inc. world headquarters in Washington, D.C. The National Portrait Gallery can be seen in the reflection.
History of Gallup
George Gallup founded the American Institute of Public Opinion, the precursor to the Gallup Organization, in Princeton, New Jersey in 1935. He wished to objectively determine the opinions held by the people. To ensure his independence and objectivity, Dr. Gallup resolved that he would undertake no polling that was paid for or sponsored in any way by special interest groups such as the Republican and Democratic parties, a commitment that Gallup upholds to this day.
In 1936, Gallup successfully predicted that Franklin Roosevelt would defeat Alfred Landon for the U.S. presidency; this event quickly popularized the company. In 1938, Dr. Gallup and Gallup Vice President David Ogilvy began conducting market research for advertising companies and the film industry. In 1958, the modern Gallup Organization was formed when George Gallup grouped all of his polling operations into one organization. Since then, Gallup has seen huge expansion into several other areas.
The Gallup Poll is the division of Gallup that regularly conducts public opinion polls in more than 140 countries around the world. Gallup Polls are often referenced in the mass media as a reliable and objective audience measurement of public opinion. Gallup Poll results, analyses, and videos are published daily on Gallup.com in the form of data-driven news. The poll loses about $10 million a year but gives the company the visibility of a very well-known brand.
Historically, the Gallup Poll has measured and tracked the public's attitudes concerning virtually every political, social, and economic issue of the day, including highly sensitive and controversial subjects. In 2005, Gallup began its World Poll, which continually surveys citizens in more than 140 countries, representing 95% of the world's adult population. General and regional-specific questions, developed in collaboration with the world's leading behavioral economists, are organized into powerful indexes and topic areas that correlate with real-world outcomes.
Reception of the Poll
The Gallup Polls have been recognized in the past for their accuracy in predicting the outcome of United States presidential elections, though they have come under criticism more recently. From 1936 to 2008, Gallup correctly predicted the winner of each election–with the notable exceptions of the 1948 Thomas Dewey-Harry S. Truman election, when nearly all pollsters predicted a Dewey victory, and the 1976 election, when they inaccurately projected a slim victory by Gerald Ford over Jimmy Carter. For the 2008 U.S. presidential election, Gallup correctly predicted the winner, but was rated 17th out of 23 polling organizations in terms of the precision of its pre-election polls relative to the final results. In 2012, Gallup's final election survey had Mitt Romney 49% and Barack Obama 48%, compared to the election results showing Obama with 51.1% to Romney's 47.2%. Poll analyst Nate Silver found that Gallup's results were the least accurate of the 23 major polling firms Silver analyzed, having the highest incorrect average of being 7.2 points away from the final result. Frank Newport, the Editor-in-Chief of Gallup, responded to the criticism by stating that Gallup simply makes an estimate of the national popular vote rather than predicting the winner, and that their final poll was within the statistical margin of error.
In addition to the poor results of the poll in 2012, many people are criticizing Gallup for its sampling techniques. Gallup conducts 1,000 interviews per day, 350 days out of the year, among both landline and cell phones across the U.S., for its health and well-being survey. Though Gallup surveys both landline and cell phones, it conducts only 150 cell phone interviews out of 1,000, making up 15%. The population of the U.S. that relies only on cell phones (owning no landline connections) makes up more than double that number, at 34%. This has been a major recent criticism of the reliability of Gallup polling, compared to other polls, in its failure to compensate accurately for the quick rise of "cell phone only" Americans.
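The cell-phone criticism is, in effect, a weighting problem. Below is a minimal sketch of post-stratification using the shares quoted above (15% of interviews versus 34% of the population); the within-group response rates are invented purely to show the mechanics.

```python
# Post-stratification sketch: re-weight sample groups to their population shares.
# Group shares are taken from the text above; the response rates are invented.
sample = {                      # group: (share of interviews, estimate within group)
    "cell_only": (0.15, 0.55),
    "landline":  (0.85, 0.45),
}
population_share = {"cell_only": 0.34, "landline": 0.66}

unweighted = sum(share * rate for share, rate in sample.values())
weighted = sum(population_share[g] * rate for g, (_, rate) in sample.items())
print("unweighted estimate: ", round(unweighted, 3))   # 0.465
print("re-weighted estimate:", round(weighted, 3))     # 0.484
```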
Telephone Surveys
Telephone surveys can reach a wide range of people very quickly and very inexpensively.
Identify the advantages and disadvantages of telephone surveys
About 95% of people in the United States have a telephone, so conducting a poll by calling people is a good way to reach nearly every part of the population.
Calling people by telephone is a quick process, allowing researchers to gain a lot of data in a short amount of time.
In certain polls, the interviewer or interviewee (or both) may wish to remain anonymous, which can be achieved if the poll is conducted via telephone by a third party.
Non-response bias is one of the major problems with telephone surveys as many people do not answer calls from people they do not know.
Due to certain uncontrollable factors (e.g., unlisted phone numbers, people who only use cell phones, or instances when no one is home/available to take pollster calls), undercoverage can negatively affect the outcome of telephone surveys.
non-response bias: Occurs when the sample becomes biased because some of those initially selected refuse to respond.
A telephone survey is a type of opinion poll used by researchers. As with other methods of polling, there are advantages and disadvantages to utilizing telephone surveys.
Large scale accessibility. About 95% of people in the United States have a telephone, so conducting a poll by telephone is a good way to reach most parts of the population.
Efficient data collection. Telephone interviewing is quick, allowing researchers to gather a large amount of data in a short time. Previously, pollsters had to visit each interviewee's home, which was far more time-consuming.
Inexpensive. Phone interviews are not costly (e.g., telephone researchers do not pay for travel).
Anonymity. In certain polls, the interviewer or interviewee (or both) may wish to remain anonymous, which can be achieved if the poll is conducted over the phone by a third party.
Lack of visual materials. Depending on what the researchers are asking, sometimes it may be helpful for the respondent to see a product in person, which of course, cannot be done over the phone.
Call screening. As some people do not answer calls from strangers, or may refuse to answer the poll, poll samples are not always representative samples from a population due to what is known as non-response bias. In this type of bias, the characteristics of those who agree to be interviewed may be markedly different from those who decline. That is, the actual sample is a biased version of the population the pollster wants to analyze. If those who refuse to answer, or are never reached, have the same characteristics as those who do answer, then the final results should be unbiased. However, if those who do not answer have different opinions, then the results have bias. In terms of election polls, studies suggest that bias effects are small, but each polling firm has its own techniques for adjusting weights to minimize selection bias.
Undercoverage. Undercoverage is a highly prevalent source of bias. If the pollsters only choose telephone numbers from a telephone directory, they miss those who have unlisted landlines or only have cell phones (which is becoming the norm). In addition, if the pollsters only conduct calls between 9:00 a.m. and 5:00 p.m., Monday through Friday, they are likely to miss a huge portion of the working population, whose opinions may differ greatly from those of the non-working population.
Chance Error and Bias
Chance error and bias are two different forms of error associated with sampling.
Differentiate between random, or chance, error and bias
The error that is associated with the unpredictable variation in the sample is called a random, or chance, error. It is only an "error" in the sense that it would automatically be corrected if we could survey the entire population.
Random error cannot be eliminated completely, but it can be reduced by increasing the sample size.
A sampling bias is a bias in which a sample is collected in such a way that some members of the intended population are less likely to be included than others.
There are various types of bias, including selection from a specific area, self-selection, pre-screening, and exclusion.
standard error: the standard deviation of the sampling distribution of a statistic; for the sample mean, it is estimated by the sample standard deviation divided by the square root of the sample size.
random sampling: a method of selecting a sample from a statistical population so that every subject has an equal chance of being selected
bias: (Uncountable) Inclination towards something; predisposition, partiality, prejudice, preference, predilection.
In statistics, a sampling error is the error caused by observing a sample instead of the whole population. The sampling error can be found by subtracting the value of a parameter from the value of a statistic. The variations in the possible sample values of a statistic can theoretically be expressed as sampling errors, although in practice the exact sampling error is typically unknown.
In sampling, there are two main types of error: systematic errors (or biases) and random errors (or chance errors).
Random/Chance Error
Random sampling is used to ensure that a sample is truly representative of the entire population. If we were to select a perfect sample (which does not exist), we would reach the same exact conclusions that we would have reached if we had surveyed the entire population. Of course, this is not possible, and the error that is associated with the unpredictable variation in the sample is called random, or chance, error. This is only an "error" in the sense that it would automatically be corrected if we could survey the entire population rather than just a sample taken from it. It is not a mistake made by the researcher.
Random error always exists. The size of the random error, however, can generally be controlled by taking a large enough random sample from the population. Unfortunately, the high cost of doing so can be prohibitive. If the observations are collected from a random sample, statistical theory provides probabilistic estimates of the likely size of the error for a particular statistic or estimator. These are often expressed in terms of its standard error:
[latex]\displaystyle \text{SE}_{\bar{\text{x}}} = \frac{\text{s}}{\sqrt{\text{n}}}[/latex]
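A small simulation makes the formula concrete: the spread of sample means shrinks roughly like s / sqrt(n). The synthetic "population" below and all of its numbers are invented for illustration.

```python
# The chance error of a sample mean shrinks roughly like s / sqrt(n).
# The "population" here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(3)
population = rng.exponential(scale=50_000, size=1_000_000)   # skewed incomes

for n in (100, 1_000, 10_000):
    sample_means = [rng.choice(population, size=n).mean() for _ in range(500)]
    observed = np.std(sample_means)                 # spread of the sample means
    predicted = population.std() / np.sqrt(n)       # s / sqrt(n)
    print(f"n={n:>6}: observed spread {observed:9.1f}, predicted {predicted:9.1f}")
```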
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population are less likely to be included than others. It results in a biased sample, a non-random sample of a population in which all individuals, or instances, were not equally likely to have been selected. If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling.
There are various types of sampling bias:
Selection from a specific real area. For example, a survey of high school students to measure teenage use of illegal drugs will be a biased sample because it does not include home-schooled students or dropouts.
Self-selection bias, which is possible whenever the group of people being studied has any form of control over whether to participate. Participants' decision to participate may be correlated with traits that affect the study, making the participants a non-representative sample. For example, people who have strong opinions or substantial knowledge may be more willing to spend time answering a survey than those who do not.
Pre-screening of trial participants, or advertising for volunteers within particular groups. For example, a study to "prove" that smoking does not affect fitness might recruit at the local fitness center, but advertise for smokers during the advanced aerobics class and for non-smokers during the weight loss sessions.
Exclusion bias, or exclusion of particular groups from the sample. For example, subjects may be left out if they either migrated into the study area or have moved out of the area.
The Literary Digest. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/The_Literary_Digest. License: CC BY-SA: Attribution-ShareAlike
bellwether. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/bellwether. License: CC BY-SA: Attribution-ShareAlike
straw poll. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/straw_poll. License: CC BY-SA: Attribution-ShareAlike
ElectoralCollege1936. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/File:ElectoralCollege1936.svg. License: CC BY-SA: Attribution-ShareAlike
LiteraryDigest-19210219. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/File:LiteraryDigest-19210219.jpg. License: CC BY-SA: Attribution-ShareAlike
Dewey Defeats Truman. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Dewey_Defeats_Truman. License: CC BY-SA: Attribution-ShareAlike
United States presidential election, 1948. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/United_States_presidential_election,_1948. License: CC BY-SA: Attribution-ShareAlike
The 1948 Presidential Election Polls. Provided by: The University of Alabama in Huntsville. Located at: http://www.math.uah.edu/stat/data/1948Election.html. License: CC BY: Attribution
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/quota-sampling. License: CC BY-SA: Attribution-ShareAlike
quadrennial. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/quadrennial. License: CC BY-SA: Attribution-ShareAlike
margin of error. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/margin_of_error. License: CC BY-SA: Attribution-ShareAlike
Deweytruman12. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/File:Deweytruman12.jpg. License: CC BY-SA: Attribution-ShareAlike
Sampling (statistics). Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Sampling_(statistics)%23Probability_and_nonprobability_sampling. License: CC BY-SA: Attribution-ShareAlike
nonresponse. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/nonresponse. License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/purposive-sampling. License: CC BY-SA: Attribution-ShareAlike
United States Income Distribution 1947-2007. Provided by: Wikimedia. Located at: http://commons.wikimedia.org/wiki/File:United_States_Income_Distribution_1947-2007.svg. License: CC BY-SA: Attribution-ShareAlike
Sampling (statistics). Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Sampling_(statistics). License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/undercoverage. License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/response-bias. License: CC BY-SA: Attribution-ShareAlike
Tower, Phone, Mail, Icon, Rings - Free image - 25477. Provided by: Pixabay. Located at: http://pixabay.com/en/tower-phone-mail-icon-rings-25477/. License: CC BY: Attribution
Gallup (company). Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Gallup_(company). License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/public-opinion-polls. License: CC BY-SA: Attribution-ShareAlike
Objective. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/Objective. License: CC BY-SA: Attribution-ShareAlike
Gallup Portrait. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/File:Gallup_Portrait.jpg. License: CC BY-SA: Attribution-ShareAlike
Opinion poll. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Opinion_poll. License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/non-response-bias. License: CC BY-SA: Attribution-ShareAlike
standard error. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/standard_error. License: CC BY-SA: Attribution-ShareAlike
Sampling bias. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Sampling_bias. License: CC BY-SA: Attribution-ShareAlike
Sampling error. Provided by: Wikipedia. Located at: http://en.wikipedia.org/wiki/Sampling_error. License: CC BY-SA: Attribution-ShareAlike
bias. Provided by: Wiktionary. Located at: http://en.wiktionary.org/wiki/bias. License: CC BY-SA: Attribution-ShareAlike
Boundless. Provided by: Boundless Learning. Located at: http://www.boundless.com//statistics/definition/random-sampling. License: CC BY-SA: Attribution-ShareAlike | CommonCrawl |
March 2020, 12(1): 107-140. doi: 10.3934/jgm.2020006
On the degenerate Boussinesq equations on surfaces
Siran Li 1,2, Jiahong Wu 3 and Kun Zhao 4,*
Department of Mathematics, Rice University, MS 136 P.O. Box 1892, Houston, Texas, 77251, USA
Department of Mathematics, McGill University, Burnside Hall, 805 Sherbrooke Street West, Montreal, Quebec, H3A 0B9, Canada
Department of Mathematics, Oklahoma State University, 401 Mathematical Sciences, Stillwater, Oklahoma, 74078, USA
Department of Mathematics, Tulane University, 6823 Saint Charles Avenue, New Orleans, LA 70118, USA
* Corresponding author: Kun Zhao
Received May 2019 Revised November 2019 Published January 2020
In this paper we study the non-degenerate and partially degenerate Boussinesq equations on a closed surface $ \Sigma $. When $ \Sigma $ has intrinsic curvature of finite Lipschitz norm, we prove the existence of global strong solutions to the Cauchy problem of the Boussinesq equations with full or partial dissipations. The issues of uniqueness and singular limits (vanishing viscosity/vanishing thermal diffusivity) are also addressed. In addition, we establish a breakdown criterion for the strong solutions for the case of zero viscosity and zero thermal diffusivity. These appear to be among the first results for Boussinesq systems on Riemannian manifolds.
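For orientation, the classical Boussinesq system in the flat two-dimensional setting reads as follows (on a surface $\Sigma$ the gradient, divergence and Laplacian are replaced by their covariant counterparts; the precise formulation used in the paper may differ in detail):
$$\begin{aligned} &\partial_t u + (u\cdot\nabla)u + \nabla p = \nu\,\Delta u + \theta e_2,\\ &\partial_t \theta + u\cdot\nabla\theta = \kappa\,\Delta\theta,\\ &\nabla\cdot u = 0, \end{aligned}$$
where $u$ is the velocity field, $p$ the pressure, $\theta$ the temperature, $\nu\geqslant0$ the viscosity and $\kappa\geqslant0$ the thermal diffusivity; the partially degenerate (partially dissipative) cases correspond to $\nu=0$ or $\kappa=0$.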
Keywords: Boussinesq Equations, Strong Solution, Closed Surfaces, Well-posedness, Breakdown Criteria, Vanishing Viscosity Limit, Vanishing Diffusivity Limit.
Mathematics Subject Classification: Primary: 35Q35, 58J90; Secondary: 76D03.
Citation: Siran Li, Jiahong Wu, Kun Zhao. On the degenerate Boussinesq equations on surfaces. Journal of Geometric Mechanics, 2020, 12 (1) : 107-140. doi: 10.3934/jgm.2020006
CAV 2019: Computer Aided Verification, pp. 426-444
Termination of Triangular Integer Loops is Decidable
Florian Frohn
Jürgen Giesl
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11562)
Abstract

We consider the problem whether termination of affine integer loops is decidable. Since Tiwari conjectured decidability in 2004 [15], only special cases have been solved [3, 4, 14]. We complement this work by proving decidability for the case that the update matrix is triangular.
Funded by DFG grant 389792660 as part of TRR 248 and by DFG grant GI 274/6.
1 Introduction

We consider affine integer loops of the form
$$\begin{aligned} \mathbf{while}\ \varphi \ \mathbf{do}\ \overline{x}\ \leftarrow A\, \overline{x}+\overline{a}. \end{aligned}$$
Here, \(A \in \mathbb {Z}^{d \times d}\) for some dimension \(d \ge 1\), \(\overline{x}\) is a column vector of pairwise different variables \(x_1,\ldots ,x_d\), \(\overline{a} \in \mathbb {Z}^d\), and \(\varphi \) is a conjunction of inequalities of the form \(\alpha > 0\) where \(\alpha \in \mathbb {A}\mathbbm {f}[\overline{x}]\) is an affine expression with rational coefficients1 over \(\overline{x}\) (i.e., \(\mathbb {A}\mathbbm {f}[\overline{x}] = \{\overline{c}^T\, \overline{x} + c \mid \overline{c} \in \mathbb {Q}^d, c \in \mathbb {Q}\}\)). So \(\varphi \) has the form \(B\,\overline{x} + \overline{b} > \overline{0}\) where \(\overline{0}\) is the vector containing k zeros, \(B \in \mathbb {Q}^{k \times d}\), and \(\overline{b} \in \mathbb {Q}^k\) for some \(k \in \mathbb {N}\). Definition 1 formalizes the intuitive notion of termination for such loops.
Definition 1 (Termination). Let \(f:\mathbb {Z}^d \rightarrow \mathbb {Z}^d\) with \(f(\overline{x}) = A\,\overline{x} + \overline{a}\). If
$$ \exists \overline{c} \in \mathbb {Z}^{d}.\ \forall n \in \mathbb {N}.\ \varphi [\overline{x} / f^n(\overline{c})], $$
then (1) is non-terminating and \(\overline{c}\) is a witness for non-termination. Otherwise, (1) terminates.
Here, \(f^n\) denotes the n-fold application of f, i.e., we have \(f^0(\overline{c}) = \overline{c}\) and \(f^{n+1}(\overline{c}) = f(f^n(\overline{c}))\). We call f the update of (1). Moreover, for any entity s, s[x / t] denotes the entity that results from s by replacing all occurrences of x by t. Similarly, if \(\overline{x} = \begin{bmatrix}x_1\\[-.15cm]\vdots \\x_m\end{bmatrix}\) and \(\overline{t} = \begin{bmatrix}t_1\\[-.15cm]\vdots \\t_m\end{bmatrix}\), then \(s[\overline{x} / \overline{t}]\) denotes the entity resulting from s by replacing all occurrences of \(x_i\) by \(t_i\) for each \(1 \le i \le m\).
Example 2. Consider the loop
$$\begin{aligned} \mathbf{while}\,\, {y + z > 0}\, \,\mathbf{do}\,\, \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] \leftarrow \left[ \begin{array}{c} 2\\ x + 1\\ - w - 2 \cdot y\\ x \end{array}\right] \end{aligned}$$
where the update of all variables is executed simultaneously. This program belongs to our class of affine loops, because it can be written equivalently as follows.
$$\begin{aligned} \mathbf{while}\, \,y + z > 0\, \,\mathbf{do}\,\, \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] \leftarrow \left[ \begin{array}{cccc} 0&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ -1&{}0&{}-2&{}0\\ 0&{}1&{}0&{}0 \end{array}\right] \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] + \left[ \begin{array}{c} 2\\ 1\\ 0\\ 0 \end{array}\right] \end{aligned}$$
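The semantics of Definition 1 and the loop of Example 2 can be illustrated with a small simulation. The Python sketch below iterates the update \(\overline{x} \leftarrow A\,\overline{x}+\overline{a}\) while the guard \(B\,\overline{x} + \overline{b} > \overline{0}\) holds; the plain list-based encoding of A, \(\overline{a}\), B, \(\overline{b}\) is our own choice, and such a bounded simulation can only detect termination for a concrete start value, it cannot prove non-termination.

```python
# Illustration of the loop semantics from Definition 1, applied to Example 2.
def run_affine_loop(A, a, B, b, x, max_iters=1000):
    """Iterate x <- A x + a while B x + b > 0 holds componentwise.
    Returns the number of executed iterations, or None if the guard still
    holds after max_iters iterations."""
    for it in range(max_iters):
        if not all(sum(Bi[j] * x[j] for j in range(len(x))) + bi > 0
                   for Bi, bi in zip(B, b)):
            return it
        x = [sum(A[i][j] * x[j] for j in range(len(x))) + a[i] for i in range(len(A))]
    return None

# Example 2 with variables ordered (w, x, y, z); the guard is y + z > 0.
A = [[0, 0, 0, 0], [0, 1, 0, 0], [-1, 0, -2, 0], [0, 1, 0, 0]]
a = [2, 1, 0, 0]
B = [[0, 0, 1, 1]]
b = [0]
print(run_affine_loop(A, a, B, b, [0, 0, 1, 1]))   # stops after finitely many steps
```

As shown later in Example 32, this loop in fact terminates for all integer initial values.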
While termination of affine loops is known to be decidable if the variables range over the real [15] or the rational numbers [4], the integer case is a well-known open problem [2, 3, 4, 14, 15].2 However, certain special cases have been solved: Braverman [4] showed that termination of linear loops is decidable (i.e., loops of the form (1) where \(\overline{a}\) is \(\overline{0}\) and \(\varphi \) is of the form \(B\,\overline{x} > \overline{0}\)). Bozga et al. [3] showed decidability for the case that the update matrix A in (1) has the finite monoid property, i.e., if there is an \(n > 0\) such that \(A^n\) is diagonalizable and all eigenvalues of \(A^n\) are in \(\{0,1\}.\) Ouaknine et al. [14] proved decidability for the case \(d \le 4\) and for the case that A is diagonalizable.
Ben-Amram et al. [2] showed undecidability of termination for certain extensions of affine integer loops, e.g., for loops where the body is of the form \(\mathbf {if}\ x > 0\ \mathbf {then}\ \overline{x} \leftarrow A\,\overline{x}\ \mathbf {else}\ \overline{x} \leftarrow A'\,\overline{x}\) where \(A,A' \in \mathbb {Z}^{d \times d}\) and \(x \in \overline{x}\).
In this paper, we present another substantial step towards the solution of the open problem whether termination of affine integer loops is decidable. We show that termination is decidable for triangular loops (1) where A is a triangular matrix (i.e., all entries of A below or above the main diagonal are zero). Clearly, the order of the variables is irrelevant, i.e., our results also cover the case that A can be transformed into a triangular matrix by reordering A, \(\overline{x}\), and \(\overline{a}\) accordingly.3 So essentially, triangularity means that the program variables \(x_1,\ldots ,x_d\) can be ordered such that in each loop iteration, the new value of \(x_i\) only depends on the previous values of \(x_1,\ldots ,x_{i-1},x_i\). Hence, this excludes programs with "cyclic dependencies" of variables (e.g., where the new values of x and y both depend on the old values of both x and y). While triangular loops are a very restricted subclass of general integer programs, integer programs often contain such loops. Hence, tools for termination analysis of such programs (e.g., [5, 6, 7, 8, 11, 12, 13]) could benefit from integrating our decision procedure and applying it whenever a sub-program is an affine triangular loop.
Note that triangularity and diagonalizability of matrices do not imply each other. As we consider loops with arbitrary dimension, this means that the class of loops considered in this paper is not covered by [3, 14]. Since we consider affine instead of linear loops, it is also orthogonal to [4].
To see the difference between our and previous results, note that a triangular matrix A where \(c_1,\ldots ,c_k\) are the distinct entries on the diagonal is diagonalizable iff \((A - c_1 I) \ldots (A- c_k I)\) is the zero matrix.4 Here, I is the identity matrix. So an easy example for a triangular loop where the update matrix is not diagonalizable is the following well-known program (see, e.g., [2]):
$$\begin{aligned} \mathbf{while}\,\, x > 0\,\, \mathbf{do} \,\, x \leftarrow x+y;\; y \leftarrow y-1 \end{aligned}$$
It terminates as y eventually becomes negative and then x decreases in each iteration. In matrix notation, the loop body is \(\left[ \begin{array}{c} x\\ y \end{array}\right] \leftarrow \left[ \begin{array}{cc} 1&{}1\\ 0&{}1 \end{array}\right] \left[ \begin{array}{c} x\\ y \end{array}\right] + \left[ \begin{array}{c} 0\\ -1 \end{array}\right] \), i.e., the update matrix is triangular. Thus, this program is in our class of programs where we show that termination is decidable. However, the only entry on the diagonal of the update matrix A is \(c = 1\) and \(A - I = \left[ \begin{array}{cc} 0&{}1\\ 0&{}0 \end{array}\right] \) is not the zero matrix. So A (and in fact each \(A^n\) where \(n \in \mathbb {N}\)) is not diagonalizable. Hence, extensions of this example to a dimension greater than 4 where the loop is still triangular are not covered by any of the previous results.5
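The diagonalizability criterion used here (test whether \((A - c_1 I) \ldots (A- c_k I)\) is the zero matrix for the distinct diagonal entries \(c_1,\ldots ,c_k\)) is easy to check mechanically. The following sketch only illustrates that criterion; the function name and the numpy-based encoding are ours.

```python
import numpy as np

def triangular_is_diagonalizable(A):
    """Check diagonalizability of a triangular integer matrix A by testing whether
    (A - c_1 I) ... (A - c_k I) is the zero matrix, where c_1, ..., c_k are the
    distinct diagonal entries (i.e., the distinct eigenvalues of A)."""
    A = np.array(A, dtype=np.int64)      # small integer entries assumed
    n = A.shape[0]
    prod = np.eye(n, dtype=np.int64)
    for c in sorted(set(np.diag(A))):
        prod = prod @ (A - c * np.eye(n, dtype=np.int64))
    return not prod.any()

# Update matrix of the loop "x <- x + y; y <- y - 1" above:
print(triangular_is_diagonalizable([[1, 1], [0, 1]]))   # False, since A - I is not zero
# A triangular matrix with pairwise distinct diagonal entries is always diagonalizable:
print(triangular_is_diagonalizable([[2, 5], [0, 3]]))   # True
```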
Our proof that termination is decidable for triangular loops proceeds in three steps. We first prove that termination of triangular loops is decidable iff termination of non-negative triangular loops (nnt-loops) is decidable, cf. Sect. 2. A loop is non-negative if the diagonal of A does not contain negative entries. Second, we show how to compute closed forms for nnt-loops, i.e., vectors \(\overline{q}\) of d expressions over the variables \(\overline{x}\) and n such that \(\overline{q}[n/c] = f^c(\overline{x})\) for all \(c\ge 0\), see Sect. 3. Here, triangularity of the matrix A allows us to treat the variables step by step. So for any \(1 \le i \le d\), we already know the closed forms for \(x_1,\ldots ,x_{i-1}\) when computing the closed form for \(x_i\). The idea of computing closed forms for the repeated updates of loops was inspired by our previous work on inferring lower bounds on the runtime of integer programs [10]. But in contrast to [10], here the computation of the closed form always succeeds due to the restricted shape of the programs. Finally, we explain how to decide termination of nnt-loops by reasoning about their closed forms in Sect. 4. While our technique does not yield witnesses for non-termination, we show that it yields witnesses for eventual non-termination, i.e., vectors \(\overline{c}\) such that \(f^n(\overline{c})\) witnesses non-termination for some \(n \in \mathbb {N}\). Detailed proofs for all lemmas and theorems can be found in [9].
2 From Triangular to Non-Negative Triangular Loops
To transform triangular loops into nnt-loops, we define how to chain loops. Intuitively, chaining yields a new loop where a single iteration is equivalent to two iterations of the original loop. Then we show that chaining a triangular loop always yields an nnt-loop and that chaining is equivalent w.r.t. termination.
Definition 3 (Chaining). Chaining the loop (1) yields:
$$\begin{aligned} \mathbf{while}\,\, \varphi \wedge \varphi [\overline{x} / A\,\overline{x} + \overline{a}] \,\,\mathbf{do}\,\, \overline{x} \leftarrow A^2\,\overline{x} + A\,\overline{a} + \overline{a} \end{aligned}$$
Example 4. Chaining Example 2 yields
$$\begin{aligned} \begin{array}{l} \mathbf{while}\,\, y + z> 0 \wedge - w - 2 \cdot y + x > 0 \,\,\mathbf{do}\\ \qquad \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] \leftarrow \left[ \begin{array}{cccc} 0&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ -1&{}0&{}-2&{}0\\ 0&{}1&{}0&{}0 \end{array}\right] ^2 \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] + \left[ \begin{array}{cccc} 0&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ -1&{}0&{}-2&{}0\\ 0&{}1&{}0&{}0 \end{array}\right] \left[ \begin{array}{c} 2\\ 1\\ 0\\ 0 \end{array}\right] + \left[ \begin{array}{c} 2\\ 1\\ 0\\ 0 \end{array}\right] \end{array} \end{aligned}$$
which simplifies to the following nnt-loop:
$$\begin{aligned} \mathbf{while}\,\, y + z> 0 \wedge - w - 2 \cdot y + x > 0 \,\,\mathbf{do}\,\, \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] \leftarrow \left[ \begin{array}{cccc} 0&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ 2&{}0&{}4&{}0\\ 0&{}1&{}0&{}0 \end{array}\right] \left[ \begin{array}{c} w\\ x\\ y\\ z \end{array}\right] + \left[ \begin{array}{c} 2\\ 2\\ -2\\ 1 \end{array}\right] \end{aligned}$$
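The update of the chained loop in Definition 3 is obtained purely mechanically from A and \(\overline{a}\). The sketch below recomputes \(A^2\) and \(A\,\overline{a} + \overline{a}\) for Example 2 and thereby also illustrates Lemma 5 below, since the diagonal of \(A^2\) consists of the squares of A's diagonal entries; the guard \(\varphi \wedge \varphi [\overline{x} / A\,\overline{x} + \overline{a}]\) of the chained loop is not constructed here.

```python
import numpy as np

def chain_update(A, a):
    """Update of the chained loop from Definition 3: one iteration of the chained
    loop corresponds to two iterations of x <- A x + a."""
    A = np.array(A, dtype=np.int64)
    a = np.array(a, dtype=np.int64)
    return A @ A, A @ a + a

A = [[0, 0, 0, 0], [0, 1, 0, 0], [-1, 0, -2, 0], [0, 1, 0, 0]]
a = [2, 1, 0, 0]
A2, a2 = chain_update(A, a)
print(A2)   # diagonal entries 0, 1, 4, 0 are non-negative (Lemma 5)
print(a2)   # [ 2  2 -2  1], matching the simplified nnt-loop above
```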
Lemma 5 is needed to prove that (2) is an nnt-loop if (1) is triangular.
Lemma 5
(Squares of Triangular Matrices). For every triangular matrix A, \(A^2\) is a triangular matrix whose diagonal entries are non-negative.
Corollary 6
(Chaining Loops). If (1) is triangular, then (2) is an nnt-loop.
Immediate consequence of Definition 3 and Lemma 5. \(\square \)
Lemma 7 (Equivalence of Chaining). (1) terminates \(\iff \) (2) terminates.
By Definition 1, (1) does not terminate iff
$$ \begin{array}{lll} &{}\exists \overline{c} \in \mathbb {Z}^{d}.\ \forall n \in \mathbb {N}.\ \varphi [\overline{x} / f^n(\overline{c})] &{} \\ \iff &{}\exists \overline{c} \in \mathbb {Z}^{d}.\ \forall n \in \mathbb {N}.\ \varphi [\overline{x} / f^{2 \cdot n}(\overline{c})] \wedge \varphi [\overline{x} / f^{2 \cdot n + 1}(\overline{c})]\\ \iff &{}\exists \overline{c} \in \mathbb {Z}^{d}.\ \forall n \in \mathbb {N}.\ \varphi [\overline{x} / f^{2 \cdot n}(\overline{c})] \wedge \varphi [\overline{x} / A\,f^{2 \cdot n}(\overline{c}) + \overline{a}] &{} (\text {by Definition of } f), \end{array} $$
i.e., iff (2) does not terminate as \(f^2(\overline{x}) = A^2\,\overline{x} + A\,\overline{a} + \overline{a}\) is the update of (2). \(\square \)
Theorem 8
(Reducing Termination to nnt-Loops). Termination of triangular loops is decidable iff termination of nnt-loops is decidable.
Immediate consequence of Corollary 6 and Lemma 7. \(\square \)
Thus, from now on we restrict our attention to nnt-loops.
3 Computing Closed Forms
The next step towards our decidability proof is to show that \(f^n(\overline{x})\) is equivalent to a vector of poly-exponential expressions for each nnt-loop, i.e., the closed form of each nnt-loop can be represented by such expressions. Here, equivalence means that two expressions evaluate to the same result for all variable assignments.
Poly-exponential expressions are sums of arithmetic terms where it is always clear which addend determines the asymptotic growth of the whole expression when increasing a designated variable n. This is crucial for our decidability proof in Sect. 4. Let \(\mathbb {N}_{\ge 1} = \{b \in \mathbb {N}\mid b \ge 1\}\) (and \(\mathbb {Q}_{>0}\), \(\mathbb {N}_{>1}\), etc. are defined analogously). Moreover, \(\mathbb {A}\mathbbm {f}[\overline{x}]\) is again the set of all affine expressions over \(\overline{x}\).
Definition 9 (Poly-Exponential Expressions). Let \(\mathcal {C}\) be the set of all finite conjunctions over the literals \(n = c, n \ne c\) where n is a designated variable and \(c \in \mathbb {N}\). Moreover, for each formula \(\psi \) over n, let \(\llbracket \psi \rrbracket \) denote the characteristic function of \(\psi \), i.e., \(\llbracket \psi \rrbracket [n/c] = 1\) if \(\psi [n/c]\) is valid and \(\llbracket \psi \rrbracket [n/c] = 0\), otherwise. The set of all poly-exponential expressions over \(\overline{x}\) is
$$ \textstyle \mathbb {PE}[\overline{x}] = \big \{ \sum _{j=1}^{\ell } \llbracket \psi _j\rrbracket \cdot \alpha _j \cdot n^{a_j} \cdot b_j^{n} \mid \ell , a_j, b_j \in \mathbb {N},\ \psi _j \in \mathcal {C},\ \alpha _j \in \mathbb {A}\mathbbm {f}[\overline{x}] \big \} . $$
As n ranges over \(\mathbb {N}\), we use \(\llbracket n > c\rrbracket \) as syntactic sugar for \(\llbracket n \ne 0 \wedge \ldots \wedge n \ne c\rrbracket \). So an example for a poly-exponential expression is \(\llbracket n \ne 0 \wedge n \ne 2\rrbracket \cdot (2 \cdot x + 1) \cdot n^2 \cdot 3^{n} + \llbracket n = 2\rrbracket \cdot y\).
Moreover, note that if \(\psi \) contains a positive literal (i.e., a literal of the form "\(n = c\)" for some number \(c \in \mathbb {N}\)), then \(\llbracket \psi \rrbracket \) is equivalent to either 0 or \(\llbracket n = c\rrbracket \).
The crux of the proof that poly-exponential expressions can represent closed forms is to show that certain sums over products of exponential and poly-exponential expressions can be represented by poly-exponential expressions, cf. Lemma 12. To construct these expressions, we use a variant of [1, Lemma 3.5]. As usual, \(\mathbb {Q}[\overline{x}]\) is the set of all polynomials over \(\overline{x}\) with rational coefficients.
Lemma 10
(Expressing Polynomials by Differences [1]). If \(q \in \mathbb {Q}[n]\) and \(c \in \mathbb {Q}\), then there is an \(r \in \mathbb {Q}[n]\) such that \(q = r - c \cdot r[n/n-1]\) for all \(n \in \mathbb {N}\).
So Lemma 10 expresses a polynomial q via the difference of another polynomial r at the positions n and \(n-1\), where the additional factor c can be chosen freely. The proof of Lemma 10 is by induction on the degree of q and its structure resembles the structure of the following algorithm to compute r. Using the Binomial Theorem, one can verify that \(q - s + c \cdot s[n/n-1]\) has a smaller degree than q, which is crucial for the proof of Lemma 10 and termination of Algorithm 1.
Example 11. Consider \(q = 1\) (i.e., \(c_0 = 1\)) and \(c = 4\). Then we search for an r such that \(q = r - c \cdot r[n/n-1]\), i.e., \(1 = r - 4 \cdot r[n/n-1]\). According to Algorithm 1, the solution is \(r = \frac{c_0}{1-c} = -\frac{1}{3}\).
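Algorithm 1 itself is not reproduced in this extraction. As a stand-in, the following sympy sketch computes the same polynomial r by solving a linear system for the coefficients of an ansatz of degree \(\deg (q)+1\), which always suffices for Lemma 10; the function name and this ansatz-based approach are ours, not the recursive algorithm from the paper.

```python
import sympy as sp

def poly_difference_inverse(q, c, n=sp.Symbol('n')):
    """Given q in Q[n] and c in Q, return r in Q[n] with q = r - c*r[n/n-1]
    (Lemma 10). The coefficients of an ansatz of degree deg(q)+1 are determined
    by solving the resulting linear system."""
    q = sp.Poly(q, n)
    deg = q.degree() + 1
    cs = sp.symbols(f'r0:{deg + 1}')                       # unknown coefficients
    r = sum(cs[i] * n**i for i in range(deg + 1))
    diff = sp.expand(q.as_expr() - (r - c * r.subs(n, n - 1)))
    sol = sp.solve(sp.Poly(diff, n).all_coeffs(), cs, dict=True)[0]
    r = sp.expand(r.subs(sol)).subs({ci: 0 for ci in cs})  # fix free coefficients to 0
    return r

n = sp.Symbol('n')
print(poly_difference_inverse(sp.Integer(1), 4, n))   # -1/3, as in Example 11
print(poly_difference_inverse(n, 1, n))               # n**2/2 + n/2
```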
Lemma 12 (Closure of \(\mathbb {PE}\) under Sums of Products and Exponentials). If \(m \in \mathbb {N}\) and \(p \in \mathbb {PE}[\overline{x}]\), then one can compute a \(q \in \mathbb {PE}[\overline{x}]\) which is equivalent to \(\sum _{i=1}^{n} m^{n - i} \cdot p[n/i-1]\).
Let \(p = \sum _{j=1}^{\ell } \llbracket \psi _j\rrbracket \cdot \alpha _j \cdot n^{a_j} \cdot b_j^{n}\). We have:
As \(\mathbb {PE}[\overline{x}]\) is closed under addition, it suffices to show that we can compute an equivalent poly-exponential expression for any expression of the form
$$\begin{aligned} \textstyle \sum _{i=1}^{n} m^{n - i} \cdot \left( \llbracket \psi \rrbracket \cdot \alpha \cdot n^{a} \cdot b^{n}\right) [n/i-1]. \end{aligned}$$
We first regard the case \(m=0\). Here, the expression (4) can be simplified to \(\left( \llbracket \psi \rrbracket \cdot \alpha \cdot n^{a} \cdot b^{n}\right) [n/n-1] = \llbracket \psi \rrbracket [n/n-1] \cdot \alpha \cdot (n-1)^{a} \cdot b^{n-1}\), since only the summand for \(i = n\) remains.
Clearly, there is a \(\psi ' \in \mathcal {C}\) such that \(\llbracket \psi \rrbracket [n/n-1]\) is equivalent to \(\llbracket \psi '\rrbracket \). Moreover, \(\alpha \cdot b^{n-1} = \tfrac{\alpha }{b} \cdot b^n\) where \(\tfrac{\alpha }{b} \in \mathbb {A}\mathbbm {f}[\overline{x}]\). Hence, due to the Binomial Theorem
which is a poly-exponential expression as \(\tfrac{\alpha }{b}\cdot \left( {\begin{array}{c}a\\ i\end{array}}\right) \cdot (-1)^i \in \mathbb {A}\mathbbm {f}[\overline{x}]\).
From now on, let \(m \ge 1\). If \(\psi \) contains a positive literal \(n = c\), then we get
The step marked with \((\dagger )\) holds as we have \(\llbracket n \ge i\rrbracket = 1\) for all \(i \in \{1,\ldots ,n\}\) and the step marked with \((\dagger \dagger )\) holds since \(i \ne c+1\) implies \(\llbracket \psi \rrbracket [n/i-1] = 0\). If \(\psi \) does not contain a positive literal, then let c be the maximal constant that occurs in \(\psi \) or \(-1\) if \(\psi \) is empty. We get:
Again, the step marked with \((\dagger )\) holds since we have \(\llbracket n \ge i\rrbracket = 1\) for all \(i \in \{1,\ldots ,n\}\). The last step holds as \(i \ge c+2\) implies \(\llbracket \psi \rrbracket [n/i-1] = 1\). Similar to the case where \(\psi \) contains a positive literal, we can compute a poly-exponential expression which is equivalent to the first addend. We have
which is a poly-exponential expression as \(\tfrac{1}{m^{i}}\cdot \alpha \cdot (i-1)^a \cdot b^{i-1} \in \mathbb {A}\mathbbm {f}[\overline{x}]\). For the second addend, we have:
Lemma 10 ensures \(r \in \mathbb {Q}[n]\), i.e., we have \(r = \sum _{i=0}^{d_r} m_i \cdot n^i\) for some \(d_r \in \mathbb {N}\) and \(m_i \in \mathbb {Q}\). Thus, \(r[n/c+1] \cdot \left( \frac{b}{m}\right) ^{c+1} \cdot \frac{\alpha }{b} \in \mathbb {A}\mathbbm {f}[\overline{x}]\), which implies that the corresponding addend is a poly-exponential expression. It remains to show that the addend \(\frac{\alpha }{b} \cdot r \cdot b^{n}\) is equivalent to a poly-exponential expression. As \(\frac{\alpha }{b} \cdot m_i \in \mathbb {A}\mathbbm {f}[\overline{x}]\), we have
\(\square \)
The proof of Lemma 12 gives rise to a corresponding algorithm.
Example 13. We compute an equivalent poly-exponential expression for
$$\begin{aligned} \textstyle \sum _{i=1}^{n} 4^{n-i} \cdot \left( 2 \cdot \left( \llbracket n = 0\rrbracket \cdot w + \llbracket n \ne 0\rrbracket \cdot 2\right) - 2\right) [n/i-1] \end{aligned}$$
where w is a variable. (It will later on be needed to compute a closed form for Example 4, see Example 18.) According to Algorithm 2 and (3), we get \(p_1 + p_2 + p_3\)
with \(p_1 = \sum _{i=1}^{n} 4^{n-i} \cdot \left( \llbracket n = 0\rrbracket \cdot 2 \cdot w\right) [n/i-1]\), \(p_2 = \sum _{i=1}^{n} 4^{n-i} \cdot \left( \llbracket n \ne 0\rrbracket \cdot 4\right) [n/i-1]\), and \(p_3 = \sum _{i=1}^{n} 4^{n-i} \cdot (- 2)\). We search for \(q_1, q_2, q_3 \in \mathbb {PE}[w]\) that are equivalent to \(p_1, p_2, p_3\), i.e., \(q_1 + q_2 + q_3\) is equivalent to (12). We only show how to compute \(q_2\) (and omit the computation of \(q_1\) and \(q_3\)). Analogously to (8), we get:
The next step is to rearrange the first sum as in (9). In our example, it directly simplifies to 0 and hence we obtain
Finally, by applying the steps from (10) we get:
The step marked with \((\dagger )\) holds by Lemma 10 with \(q = 1\) and \(c = 4\). Thus, we have \(r = -\tfrac{1}{3}\), cf. Example 11.
Recall that our goal is to compute closed forms for loops. As a first step, instead of the n-fold update function \(h(n,\overline{x}) = f^n(\overline{x})\) of (1) where f is the update of (1), we consider a recursive update function for a single variable \(x \in \overline{x}\):
$$ \textstyle g(0,\overline{x}) = x \quad \text {and} \quad g(n,\overline{x}) = m \cdot g(n-1, \overline{x}) + p[n/n-1] \quad \text {for all n > 0} $$
Here, \(m \in \mathbb {N}\) and \(p \in \mathbb {PE}[\overline{x}]\). Using Lemma 12, it is easy to show that g can be represented by a poly-exponential expression.
Lemma 14 (Closed Form for Single Variables). If \(x \in \overline{x}\), \(m \in \mathbb {N}\), and \(p \in \mathbb {PE}[\overline{x}]\), then one can compute a \(\,q \in \mathbb {PE}[\overline{x}]\) which satisfies
$$ \textstyle q\,[n/0] = x \quad \text {and} \quad q = (m \cdot q + p)\;[n/n-1] \quad \text {for all } n > 0. $$
It suffices to find a \(q \in \mathbb {PE}[\overline{x}]\) that satisfies
$$\begin{aligned} \textstyle q = m^n \cdot x + \sum _{i=1}^{n} m^{n-i} \cdot p[n/i-1]. \end{aligned}$$
To see why (13) is sufficient, note that (13) implies
$$ \textstyle q[n/0] \quad = \quad m^0 \cdot x + \sum \nolimits _{i=1}^{0} m^{0-i} \cdot p[n/i-1] \quad =\quad x $$
and for \(n > 0\), (13) implies
$$ \begin{array}{llll} q &{}=&{} m^{n} \cdot x + \mathop {\sum }\nolimits _{i=1}^{n} m^{n-i} \cdot p[n/i-1]\\ &{}=&{} m^{n} \cdot x + \left( \mathop {\sum }\nolimits _{i=1}^{n-1} m^{n-i} \cdot p[n/i-1]\right) + p[n/n-1]\\ &{}=&{} m \cdot \left( m^{n-1} \cdot x + \mathop {\sum }\nolimits _{i=1}^{n-1} m^{n-i-1} \cdot p[n/i-1]\right) + p[n/n-1]\\ &{}=&{} m \cdot q[n/n-1] + p[n/n-1]\\ &{}=&{} (m \cdot q + p)[n/n-1]. \end{array} $$
By Lemma 12, we can compute a \(q' \in \mathbb {PE}[\overline{x}]\) such that
$$ \textstyle m^n \cdot x + \mathop {\sum }\nolimits _{i=1}^{n} m^{n-i} \cdot p[n/i-1] \quad = \quad m^n \cdot x + q'. $$
Moreover, \(m^n \cdot x\) is clearly equivalent to a poly-exponential expression as well. So both addends are equivalent to poly-exponential expressions. \(\square \)
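Equation (13) from the proof above can also be checked numerically. The small sketch below verifies, for an arbitrarily chosen m, start value x and inhomogeneity p (all three are just stand-ins picked for illustration), that \(q(n) = m^n \cdot x + \sum _{i=1}^{n} m^{n-i} \cdot p(i-1)\) satisfies \(q(0) = x\) and \(q(n) = m \cdot q(n-1) + p(n-1)\).

```python
# Numeric sanity check of (13): q(n) = m^n * x + sum_{i=1}^n m^(n-i) * p(i-1)
# satisfies q(0) = x and q(n) = m * q(n-1) + p(n-1).

def q(n, m, x, p):
    return m**n * x + sum(m**(n - i) * p(i - 1) for i in range(1, n + 1))

m, x = 3, 7
p = lambda k: 2 * k**2 - 5 + 4**k      # some fixed "poly-exponential" inhomogeneity

assert q(0, m, x, p) == x
for n in range(1, 12):
    assert q(n, m, x, p) == m * q(n - 1, m, x, p) + p(n - 1)
print("recurrence check for (13) passed")
```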
Example 15. We show how to compute the closed forms for the variables w and x from Example 4. We first consider the assignment \(w \leftarrow 2\), i.e., we want to compute a \(q_w \in \mathbb {PE}[w,x,y,z]\) with \(q_w [n/0] = w\) and \(q_w = (m_w \cdot q_w + p_w)\,[n/n-1]\) for \(n > 0\), where \(m_w = 0\) and \(p_w = 2\). According to (13) and (14), \(q_w\) is \(\llbracket n = 0\rrbracket \cdot w + \llbracket n \ne 0\rrbracket \cdot 2\).
For the assignment \(x \leftarrow x + 2\), we search for a \(q_x\) such that \(q_x[n/0] = x\) and \(q_x = (m_x \cdot q_x + p_x)\,[n/n-1]\) for \(n > 0\), where \(m_x = 1\) and \(p_x = 2\). By (13), \(q_x\) is
$$\textstyle m_x^n \cdot x + \sum _{i=1}^{n} m_x^{n-i} \cdot p_x[n/i-1] = 1^n \cdot x + \sum _{i=1}^{n} 1^{n-i} \cdot 2 = x + 2 \cdot n. $$
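The two closed forms of Example 15 can be cross-checked by simply running the corresponding updates \(w \leftarrow 2\) and \(x \leftarrow x + 2\) of the chained loop from Example 4; the start values below are arbitrary.

```python
def closed_forms(w0, x0, n):
    """q_w and q_x from Example 15: q_w = [[n=0]]*w + [[n!=0]]*2 and q_x = x + 2*n."""
    return (w0 if n == 0 else 2, x0 + 2 * n)

w, x = 7, -3
w0, x0 = w, x
for n in range(0, 8):
    assert (w, x) == closed_forms(w0, x0, n)
    w, x = 2, x + 2          # the updates of the chained loop
print("q_w and q_x agree with the iterated updates")
```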
The restriction to triangular matrices now allows us to generalize Lemma 14 to vectors of variables. The reason is that due to triangularity, the update of each program variable \(x_i\) only depends on the previous values of \(x_1,\ldots ,x_{i}\). So when regarding \(x_i\), we can assume that we already know the closed forms for \(x_1,\ldots ,x_{i-1}\). This allows us to find closed forms for one variable after the other by applying Lemma 14 repeatedly. In other words, it allows us to find a vector \(\overline{q}\) of poly-exponential expressions that satisfies
$$ \textstyle \overline{q}\,[n/0] = \overline{x}\quad \text {and} \quad \overline{q} = A\, \overline{q}[n/n-1] + \overline{a} \quad \text {for all } n > 0. $$
To prove this claim, we show the more general Lemma 16. For all \(i_1,\ldots ,i_k \in \{1, \ldots , m\}\), we define \([z_1,\ldots ,z_m]_{i_1,\ldots ,i_k} = [z_{i_1},\ldots ,z_{i_k}]\) (and the notation \(\overline{y}_{i_1,\ldots ,i_k}\) for column vectors is defined analogously). Moreover, for a matrix A, \(A_{i}\) is A's \(i^{th}\) row and \(A_{i_1,\ldots ,i_n;j_1,\ldots ,j_k}\) is the matrix with rows \((A_{i_1})_{j_1,\ldots ,j_k}, \ldots , (A_{i_n})_{j_1,\ldots ,j_k}\). So for \(A = \begin{bmatrix} a_{1,1}&a_{1,2}&a_{1,3}\\ a_{2,1}&a_{2,2}&a_{2,3}\\ a_{3,1}&a_{3,2}&a_{3,3} \end{bmatrix}\), we have \(A_{1,2;1,3} = \begin{bmatrix} a_{1,1}&a_{1,3}\\ a_{2,1}&a_{2,3} \end{bmatrix}\).
Lemma 16 (Closed Forms for Vectors of Variables). If \(\overline{x}\) is a vector of at least \(d \ge 1\) pairwise different variables, \(A \in \mathbb {Z}^{d \times d}\) is triangular with \(A_{i;i} \ge 0\) for all \(1 \le i \le d\), and \(\overline{p} \in \mathbb {PE}[\overline{x}]^d\), then one can compute \(\overline{q} \in \mathbb {PE}[\overline{x}]^d\) such that:
$$\begin{aligned} \overline{q}\,[n/0]&= \overline{x}_{1,\ldots ,d}\quad \text {and}\end{aligned}$$
$$\begin{aligned} \overline{q}&= (A\, \overline{q} + \overline{p})\;[n/n-1] \quad \text {for all } n > 0 \end{aligned}$$
Assume that A is lower triangular (the case that A is upper triangular works analogously). We use induction on d. For any \(d \ge 1\) we have:
$$ \begin{array}{llllll} &{}\overline{q} &{}=&{} (A\, \overline{q} + \overline{p})\;[n/n-1]\\ \iff &{} \overline{q}_j &{}=&{} (A_{j} \cdot \overline{q} + \overline{p}_j)\;[n/n-1] &{} \text {for all } 1 \le j \le d\\ \iff &{} \overline{q}_j &{}=&{} (A_{j;2,\ldots ,d} \cdot \overline{q}_{2,\ldots ,d} + A_{j;1} \cdot \overline{q}_1 + \overline{p}_j)\;[n/n-1] &{} \text {for all } 1 \le j \le d\\ \iff &{} \overline{q}_1 &{}=&{} (A_{1;2,\ldots ,d} \cdot \overline{q}_{2,\ldots ,d} + A_{1;1} \cdot \overline{q}_1 + \overline{p}_1)\;[n/n-1] &{} \wedge \\ &{} \overline{q}_j &{}=&{} (A_{j;2,\ldots ,d} \cdot \overline{q}_{2,\ldots ,d} + A_{j;1} \cdot \overline{q}_1 + \overline{p}_j)\;[n/n-1] &{} \text {for all } 1< j \le d\\ \iff &{} \overline{q}_1 &{}=&{} (A_{1;1} \cdot \overline{q}_1 + \overline{p}_1)\;[n/n-1] &{} \wedge \\ &{} \overline{q}_j &{}=&{} (A_{j;2,\ldots ,d} \cdot \overline{q}_{2,\ldots ,d} + A_{j;1} \cdot \overline{q}_1 + \overline{p}_j)\;[n/n-1] &{} \text {for all } 1 < j \le d \end{array} $$
The last step holds as A is lower triangular. By Lemma 14, we can compute a \(\overline{q}_1 \in \mathbb {PE}[\overline{x}]\) that satisfies
$$ \textstyle \overline{q}_1[n/0] = \overline{x}_1 \quad \text {and} \quad \overline{q}_1 = (A_{1;1} \cdot \overline{q}_1 + \overline{p}_1)\;[n/n-1] \quad \text {for all } n > 0. $$
In the induction base (\(d = 1\)), there is no j with \(1 < j \le d\). In the induction step (\(d > 1\)), it remains to show that we can compute \(\overline{q}_{2,\ldots ,d}\) such that
$$ \textstyle \overline{q}_j[n/0] = \overline{x}_j \quad \text {and} \quad \overline{q}_j = (A_{j;2,\ldots ,d} \cdot \overline{q}_{2,\ldots ,d} + A_{j;1} \cdot \overline{q}_1 + \overline{p}_j)\;[n/n-1] $$
for all \(n > 0\) and all \(1 < j \le d\), which is equivalent to
$$\begin{aligned} \overline{q}_{2,\ldots ,d}[n/0]&= \overline{x}_{2,\ldots ,d} \quad \text {and}\\[-1.3em] \overline{q}_{2,\ldots ,d}&= (A_{2,\ldots ,d;2,\ldots ,d} \cdot \overline{q}_{2,\ldots ,d} + \begin{bmatrix}A_{2;1}\\\vdots \\A_{d;1}\end{bmatrix} \cdot \overline{q}_1 + \overline{p}_{2,\ldots ,d})\;[n/n-1] \end{aligned}$$
for all \(n>0\). As \(A_{j;1} \cdot \overline{q}_1 + \overline{p}_j \in \mathbb {PE}[\overline{x}]\) for each \(2 \le j \le d\), the claim follows from the induction hypothesis. \(\square \)
Together, Lemmas 14 and 16 and their proofs give rise to the following algorithm to compute a solution for (16) and (17). It computes a closed form \(\overline{q}_1\) for \(\overline{x}_1\) as in the proof of Lemma 14, constructs the argument \(\overline{p}\) for the recursive call based on A, \(\overline{q}_1\), and the current value of \(\overline{p}\) as in the proof of Lemma 16, and then determines the closed form for \(\overline{x}_{2, \ldots , d}\) recursively.
We can now prove the main theorem of this section.
Theorem 17
(Closed Forms for nnt-Loops). One can compute a closed form for every nnt-loop. In other words, if \(f:\mathbb {Z}^d \rightarrow \mathbb {Z}^d\) is the update function of an nnt-loop with the variables \(\overline{x}\), then one can compute a \(\overline{q} \in \mathbb {PE}[\overline{x}]^d\) such that \(\overline{q}[n/c] = f^c(\overline{x})\) for all \(c \in \mathbb {N}\).
Consider an nnt-loop of the form (1). By Lemma 16, we can compute a \(\overline{q} \subseteq \mathbb {PE}[\overline{x}]^d\) that satisfies
$$ \textstyle \overline{q}[n/0] = \overline{x} \quad \text {and} \quad \overline{q} = (A\, \overline{q} + \overline{a})\;[n/n-1] \quad \text {for all } n > 0. $$
We prove \(f^c(\overline{x}) = \overline{q}[n/c]\) by induction on \(c \in \mathbb {N}\). If \(c=0\), we get
$$ f^c(\overline{x}) = f^0(\overline{x}) = \overline{x} = \overline{q}[n/0] = \overline{q}[n/c]. $$
$$ \begin{array}{l@{}llll} \text{ If } c>0\text{, } \text{ we } \text{ get: }&{} f^c(\overline{x}) &{}=&{} A\, f^{c-1}(\overline{x}) + \overline{a} &{} \text {by definition of } f\\ &{}&{}=&{} A\, \overline{q}[n/c-1] + \overline{a} &{} \text {by the induction hypothesis}\\ &{}&{}=&{} (A\, \overline{q} + \overline{a})\;[n/c-1] &{} \text {as } \overline{a} \in \mathbb {Z}^d \text { does not contain } n\\ &{}&{}=&{} \overline{q}[n/c] &{} \end{array}$$
So invoking Algorithm 3 on \(\overline{x}, A\), and \(\overline{a}\) yields the closed form of an nnt-loop (1).
Example 18. We show how to compute the closed form for Example 4. For
$$ y \leftarrow 2 \cdot w + 4 \cdot y - 2, $$
we obtain \(q_y = q_0 + q_1 + q_2 + q_3\),
where \(q_0 = y \cdot 4^n\) and \(q_1\), \(q_2\), \(q_3\) are the poly-exponential expressions from Example 13. For \(z \leftarrow x + 1\), we get \(q_z = \llbracket n = 0\rrbracket \cdot z + \llbracket n \ne 0\rrbracket \cdot (x - 1 + 2 \cdot n)\).
So the closed form of Example 4 for the values of the variables after n iterations is the vector \((q_w, q_x, q_y, q_z)\) of the expressions computed above.
4 Deciding Non-Termination of nnt-Loops
Our proof uses the notion of eventual non-termination [4, 14]. Here, the idea is to disregard the condition of the loop during a finite prefix of the program run.
Definition 19
(Eventual Non-Termination). A vector \(\overline{c} \in \mathbb {Z}^d\) witnesses eventual non-termination of (1) if
$$ \exists n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ \varphi [\overline{x} / f^{n}(\overline{c})]. $$
If there is such a witness, then (1) is eventually non-terminating.
Clearly, (1) is non-terminating iff (1) is eventually non-terminating [14]. Now Theorem 17 gives rise to an alternative characterization of eventual non-termination in terms of the closed form \(\overline{q}\) instead of \(f^{n}(\overline{c})\).
Corollary 20
(Expressing Non-Termination with \(\mathbb {PE}\)). If \(\overline{q}\) is the closed form of (1), then \(\overline{c} \in \mathbb {Z}^d\) witnesses eventual non-termination iff
$$\begin{aligned} \exists n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ \varphi [\overline{x} / \overline{q}][\overline{x} / \overline{c}]. \end{aligned}$$
Immediate, as \(\overline{q}\) is equivalent to \(f^n(\overline{x})\). \(\square \)
So to prove that termination of nnt-loops is decidable, we will use Corollary 20 to show that the existence of a witness for eventual non-termination is decidable. To do so, we first eliminate the factors \(\llbracket \psi \rrbracket \) from the closed form \(\overline{q}\). Assume that \(\overline{q}\) has at least one factor \(\llbracket \psi \rrbracket \) where \(\psi \) is non-empty (otherwise, all factors \(\llbracket \psi \rrbracket \) are equivalent to 1) and let c be the maximal constant that occurs in such a factor. Then all addends \(\llbracket \psi \rrbracket \cdot \alpha \cdot n^{a} \cdot b^n\) where \(\psi \) contains a positive literal become 0 and all other addends become \(\alpha \cdot n^{a} \cdot b^n\) if \(n > c\). Thus, as we can assume \(n_0 > c\) in (18) without loss of generality, all factors \(\llbracket \psi \rrbracket \) can be eliminated when checking eventual non-termination.
Corollary 21 (Removing \(\llbracket \psi \rrbracket \) from \(\mathbb {PE}\)s). Let \(\overline{q}\) be the closed form of an nnt-loop (1). Let \(\overline{q}_{norm}\) result from \(\overline{q}\) by removing all addends \(\llbracket \psi \rrbracket \cdot \alpha \cdot n^{a} \cdot b^n\) where \(\psi \) contains a positive literal and by replacing all addends \(\llbracket \psi \rrbracket \cdot \alpha \cdot n^{a} \cdot b^n\) where \(\psi \) does not contain a positive literal by \(\alpha \cdot n^{a} \cdot b^n\). Then \(\overline{c} \in \mathbb {Z}^d\) is a witness for eventual non-termination iff
$$\begin{aligned} \exists n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ \varphi [\overline{x} / \overline{q}_{norm}][\overline{x} / \overline{c}]. \end{aligned}$$
By removing the factors \(\llbracket \psi \rrbracket \) from the closed form \(\overline{q}\) of an nnt-loop, we obtain normalized poly-exponential expressions.
Definition 22 (Normalized \(\mathbb {PE}\)s). We call \(p \in \mathbb {PE}[\overline{x}]\) normalized if it is in
$$ \textstyle \mathbb {NPE}[\overline{x}] = \big \{ \sum _{j=1}^{\ell } \alpha _j \cdot n^{a_j} \cdot b_j^{n} \mid \ell , a_j \in \mathbb {N},\ b_j \in \mathbb {N}_{\ge 1},\ \alpha _j \in \mathbb {A}\mathbbm {f}[\overline{x}] \big \} . $$
W.l.o.g., we always assume \((b_i,a_i) \ne (b_j,a_j)\) for all \(i,j \in \{1,\ldots ,\ell \}\) with \(i \ne j\). We define \(\mathbb {NPE}= \mathbb {NPE}[\varnothing ]\), i.e., we have \(p \in \mathbb {NPE}\) if \(\alpha _j \in \mathbb {Q}\) for all \(1 \le j \le \ell \).
Example 23. We continue Example 18. By omitting the factors \(\llbracket n = 0\rrbracket \) and \(\llbracket n \ne 0\rrbracket \), \(q_w\) becomes 2,
and \(q_x = x + 2 \cdot n, q_0 = y \cdot 4^n\), and \(q_3 = \tfrac{2}{3} - \frac{2}{3} \cdot 4^{n}\) remain unchanged. Moreover, \(q_1\) becomes \(\frac{1}{2} \cdot w \cdot 4^{n}\) and \(q_2\) becomes \(\frac{1}{3}\cdot 4^{n} - \frac{4}{3}\).
Thus, \(q_y = q_0 + q_1 + q_2 + q_3\) becomes
$$ \textstyle y \cdot 4^n + \frac{1}{2} \cdot w \cdot 4^{n} - \frac{4}{3} + \frac{1}{3}\cdot 4^{n} + \tfrac{2}{3}- \frac{2}{3} \cdot 4^{n} = 4^n \cdot \left( y - \frac{1}{3} + \frac{1}{2} \cdot w\right) - \frac{2}{3}. $$
Let \(\sigma = \left[ w/2,\, x/x+ 2 \cdot n, \, y/4^n \cdot \left( y - \frac{1}{3} + \frac{1}{2} \cdot w\right) - \frac{2}{3}, \, z/x-1 + 2 \cdot n\right] \). Then we get that Example 2 is non-terminating iff there are \(w,x,y,z \in \mathbb {Z}, n_0 \in \mathbb {N}\) such that
$$ \begin{array}{l} (y + z)\;\sigma> 0 \wedge (- w - 2 \cdot y + x)\; \sigma> 0 \qquad \qquad \qquad \,\,\, \iff \\ 4^n \cdot \left( y - \frac{1}{3} + \frac{1}{2} \cdot w\right) - \frac{2}{3} + x - 1 + 2 \cdot n> 0 \wedge \\ \qquad - 2 - 2 \cdot \left( 4^n \cdot \left( y - \frac{1}{3} + \frac{1}{2} \cdot w\right) - \frac{2}{3}\right) + x + 2 {\cdot } n> 0 \iff \\ p^{\varphi }_1> 0 \wedge p^{\varphi }_2 > 0\\ \end{array} $$
holds for all \(n > n_0\) where
$$ \begin{array}{llll} p^{\varphi }_1 &{}=&{} 4^n \cdot \left( y - \frac{1}{3} + \frac{1}{2} \cdot w\right) + 2 \cdot n + x - \frac{5}{3} &{} \text {and}\\ p^{\varphi }_2 &{}=&{} 4^n \cdot \left( \frac{2}{3} - 2 \cdot y - w\right) + 2 \cdot n + x - \frac{2}{3}. \end{array} $$
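Since the substitution \(\sigma \) encodes the normalized closed form of the chained loop from Example 4, it can be validated against a direct iteration of that loop. The sketch below does this with exact rational arithmetic for one arbitrary start value; the check starts at \(n = 1\) because for \(n = 0\) the omitted characteristic-function factors matter.

```python
from fractions import Fraction as F

# Update of the chained nnt-loop from Example 4 (variables ordered w, x, y, z).
A2 = [[0, 0, 0, 0], [0, 1, 0, 0], [2, 0, 4, 0], [0, 1, 0, 0]]
a2 = [2, 2, -2, 1]

def step(v):
    return [sum(A2[i][j] * v[j] for j in range(4)) + a2[i] for i in range(4)]

def sigma(w, x, y, z, n):
    """Normalized closed form from Example 23 (valid for n >= 1)."""
    return [2, x + 2 * n, 4**n * (F(y) - F(1, 3) + F(w, 2)) - F(2, 3), x - 1 + 2 * n]

v0 = [5, -1, 2, 7]
v = v0[:]
for n in range(1, 8):
    v = step(v)
    assert [F(c) for c in v] == [F(c) for c in sigma(*v0, n)]
print("normalized closed form matches the iterated chained loop for n >= 1")
```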
Recall that the loop condition \(\varphi \) is a conjunction of inequalities of the form \(\alpha > 0\) where \(\alpha \in \mathbb {A}\mathbbm {f}[\overline{x}]\). Thus, \(\varphi [\overline{x} / \overline{q}_{norm}]\) is a conjunction of inequalities \(p > 0\) where \(p \in \mathbb {NPE}[\overline{x}]\) and we need to decide if there is an instantiation of these inequalities that is valid "for large enough n". To do so, we order the coefficients \(\alpha _j\) of the addends \(\alpha _j \cdot n^{a_j} \cdot b_j^n\) of normalized poly-exponential expressions according to the addend's asymptotic growth when increasing n. Lemma 24 shows that \(\alpha _2 \cdot n^{a_2} \cdot b_2^n\) grows faster than \(\alpha _1 \cdot n^{a_1} \cdot b_1^n\) iff \(b_2 > b_1\) or both \(b_2 = b_1\) and \(a_2 > a_1\).
Lemma 24 (Asymptotic Growth). Let \(b_1,b_2 \in \mathbb {N}_{\ge 1}\) and \(a_1, a_2 \in \mathbb {N}\). If \((b_2, a_2) >_{lex} (b_1, a_1)\), then \(\mathcal {O}(n^{a_1} \cdot b_1^n) \subsetneq \mathcal {O}(n^{a_2} \cdot b_2^n)\). Here, \({>_{lex}}\) is the lexicographic order, i.e., \((b_2,a_2) >_{lex} (b_1,a_1)\) iff \(b_2 > b_1\) or \(b_2 = b_1 \wedge a_2 > a_1\).
By considering the cases \(b_2 > b_1\) and \(b_2 = b_1\) separately, the claim can easily be deduced from the definition of \(\mathcal {O}\). \(\square \)
Definition 25 (Ordering Coefficients). Marked coefficients are of the form \(\alpha ^{(b,a)}\) where \(\alpha \in \mathbb {A}\mathbbm {f}[\overline{x}], b \in \mathbb {N}_{\ge 1}\), and \(a \in \mathbb {N}\). We define \(\mathrm{unmark}(\alpha ^{(b,a)}) = \alpha \) and \(\alpha _2^{(b_2,a_2)} \succ \alpha _1^{(b_1,a_1)}\) if \((b_2,a_2) >_{lex} (b_1,a_1)\). Let
$$ \textstyle p = \sum _{j=1}^\ell \alpha _j \cdot n^{a_j} \cdot b_j^n \in \mathbb {NPE}[\overline{x}], $$
where \(\alpha _j \ne 0\) for all \(1 \le j \le \ell \). The marked coefficients of p are \(\mathrm{coeffs}(p) = \left\{ \alpha _j^{(b_j,a_j)} \mid 1 \le j \le \ell \right\} \) if \(\ell \ge 1\), and \(\mathrm{coeffs}(p) = \left\{ 0^{(1,0)}\right\} \) if \(p = 0\).
Example 26. In Example 23 we saw that the loop from Example 2 is non-terminating iff there are \(w,x,y,z \in \mathbb {Z}, n_0 \in \mathbb {N}\) such that \(p^{\varphi }_1> 0 \wedge p^{\varphi }_2 > 0\) for all \(n > n_0\). We get:
$$\begin{aligned} \mathrm{coeffs}\left( p^{\varphi }_1\right)&= \left\{ \left( y - \tfrac{1}{3} + \tfrac{1}{2} \cdot w\right) ^{(4,0)}, 2^{(1,1)}, \left( x-\tfrac{5}{3}\right) ^{(1, 0)}\right\} \\ \mathrm{coeffs}\left( p^{\varphi }_2\right)&= \left\{ \left( \tfrac{2}{3} - 2 \cdot y - w\right) ^{(4,0)}, 2^{(1,1)}, \left( x-\tfrac{2}{3}\right) ^{(1,0)}\right\} \end{aligned}$$
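Selecting the \(\succ \)-maximal marked coefficient is a purely lexicographic comparison of the markers (b, a). In the small sketch below, a marked coefficient \(\alpha ^{(b,a)}\) is represented as a tuple whose first component keeps \(\alpha \) symbolic as a string; this representation is ours and only serves to illustrate Definition 25 on the two sets of coefficients above.

```python
def max_marked(coeffs):
    """Return the >-maximal marked coefficient; a marked coefficient alpha^(b,a)
    is represented as (alpha, b, a) and compared lexicographically on (b, a)."""
    return max(coeffs, key=lambda t: (t[1], t[2]))

coeffs_p1 = [("y - 1/3 + w/2", 4, 0), ("2", 1, 1), ("x - 5/3", 1, 0)]
coeffs_p2 = [("2/3 - 2*y - w", 4, 0), ("2", 1, 1), ("x - 2/3", 1, 0)]
print(max_marked(coeffs_p1))   # ('y - 1/3 + w/2', 4, 0)
print(max_marked(coeffs_p2))   # ('2/3 - 2*y - w', 4, 0)
```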
Now it is easy to see that the asymptotic growth of a normalized poly-exponential expression is solely determined by its \(\succ \)-maximal addend.
Corollary 27 (Maximal Addend Determines Asymptotic Growth). Let \(p \in \mathbb {NPE}\) and let \(\max _{\succ }(\mathrm{coeffs}(p)) = c^{(b,a)}\). Then \(\mathcal {O}(p) = \mathcal {O}(c \cdot n^a \cdot b^n)\).
Clear, as \(c \cdot n^a \cdot b^n\) is the asymptotically dominating addend of p. \(\square \)
Note that Corollary 27 would be incorrect for the case \(c = 0\) if we replaced \(\mathcal {O}(p) = \mathcal {O}(c \cdot n^a \cdot b^n)\) with \(\mathcal {O}(p) = \mathcal {O}(n^a \cdot b^n)\) as \(\mathcal {O}(0) \ne \mathcal {O}(1)\). Building upon Corollary 27, we now show that, for large n, the sign of a normalized poly-exponential expression is solely determined by its \(\succ \)-maximal coefficient. Here, we define \(\mathrm{sign}(c) = -1\) if \(c \in \mathbb {Q}_{<0} \cup \{-\infty \}\), \(\mathrm{sign}(0) = 0\), and \(\mathrm{sign}(c) = 1\) if \(c \in \mathbb {Q}_{>0} \cup \{\infty \}\).
Lemma 28 (Sign of \(\mathbb {NPE}\)s). Let \(p \in \mathbb {NPE}\). Then \(\lim _{n \mapsto \infty } p \in \mathbb {Q}\) iff \(p \in \mathbb {Q}\) and otherwise, \(\lim _{n \mapsto \infty } p \in \{ \infty , -\infty \}\). Moreover, we have
$$ \textstyle \mathrm{sign}\left( \lim _{n \mapsto \infty } p\right) = \mathrm{sign}(\mathrm{unmark}(\max _{\succ }(\mathrm{coeffs}(p)))). $$
If \(p \notin \mathbb {Q}\), then the limit of each addend of p is in \(\{-\infty , \infty \}\) by definition of \(\mathbb {NPE}\). As the asymptotically dominating addend determines \(\lim _{n \mapsto \infty } p\) and \(\mathrm{unmark}(\max _{\succ }(\mathrm{coeffs}(p)))\) determines the sign of the asymptotically dominating addend, the claim follows. \(\square \)
Lemma 29 shows the connection between the limit of a normalized poly-exponential expression p and the question whether p is positive for large enough n. The latter corresponds to the existence of a witness for eventual non-termination by Corollary 21 as \(\varphi [\overline{x} / \overline{q}_{norm}]\) is a conjunction of inequalities \(p > 0\) where \(p \in \mathbb {NPE}[\overline{x}]\).
Lemma 29 (Limits and Positivity of \(\mathbb {NPE}\)s). Let \(p \in \mathbb {NPE}\). Then
$$ \textstyle \exists n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ p> 0 \iff \lim _{n \mapsto \infty } p > 0. $$
By case analysis over \(\lim _{n \mapsto \infty } p\). \(\square \)
Now we show that Corollary 21 allows us to decide eventual non-termination by examining the coefficients of normalized poly-exponential expressions. As these coefficients are in \(\mathbb {A}\mathbbm {f}[\overline{x}]\), the required reasoning is decidable.
Lemma 30 (Deciding Eventual Positiveness of \(\mathbb {NPE}\)s). Validity of
$$\begin{aligned} \begin{array}{l} \exists \overline{c} \in \mathbb {Z}^{d}, n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ \bigwedge \nolimits _{i=1}^k p_i[\overline{x}/\overline{c}] > 0 \end{array} \end{aligned}$$
where \(p_1,\ldots ,p_k \in \mathbb {NPE}[\overline{x}]\) is decidable.
For any \(p_i\) with \(1 \le i \le k\) and any \(\overline{c} \in \mathbb {Z}^{d}\), we have \(p_i[\overline{x}/\overline{c}] \in \mathbb {NPE}\). Hence:
Let \(p \in \mathbb {NPE}[\overline{x}]\) with \(\mathrm{coeffs}(p) = \left\{ \alpha _1^{(b_1,a_1)}\!,\ldots ,\alpha ^{(b_{\ell },a_{\ell })}_{\ell }\right\} \) where \(\alpha ^{(b_i,a_i)}_i \succ \alpha ^{(b_{j},a_{j})}_{j}\) for all \(1 \le i < j \le \ell \). If \(p[\overline{x}/\overline{c}] = 0\) holds, then \(\mathrm{coeffs}(p[\overline{x}/\overline{c}]) = \{ 0^{(1,0)} \}\) and thus \(\mathrm{unmark}(\max _{\succ }(\mathrm{coeffs}(p[\overline{x}/\overline{c}]))) = 0\). Otherwise, there is an \(1 \le j \le \ell \) with \(\mathrm{unmark}(\max _{\succ }(\mathrm{coeffs}(p[\overline{x}/\overline{c}]))) = \alpha _j[\overline{x}/\overline{c}] \ne 0\) and we have \(\alpha _i[\overline{x}/\overline{c}] = 0\) for all \(1 \le i \le j-1\). Hence, \(\mathrm{unmark}(\max _{\succ }(\mathrm{coeffs}(p[\overline{x}/\overline{c}]))) > 0\) holds iff \(\bigvee _{j=1}^\ell \left( \alpha _j[\overline{x}/\overline{c}] > 0 \wedge \bigwedge _{i=0}^{j-1} \alpha _i[\overline{x}/\overline{c}] = 0\right) \) holds, i.e., iff \([\overline{x}/\overline{c}]\) is a model for
$$\begin{aligned} \begin{array}{l} \mathrm{max\_coeff\_pos}(p) = \bigvee \nolimits _{j=1}^\ell \left( \alpha _j > 0 \wedge \bigwedge \nolimits _{i=0}^{j-1} \alpha _i = 0\right) . \end{array} \end{aligned}$$
Hence by the considerations above, (20) is valid iff
$$\begin{aligned} \begin{array}{l} \exists \overline{c} \in \mathbb {Z}^{d}. \; \bigwedge \nolimits _{i=1}^k \mathrm{max\_coeff\_pos}(p_i) [\overline{x}/\overline{c}] \end{array} \end{aligned}$$
is valid. By multiplying each (in-)equality in (22) with the least common multiple of all denominators, one obtains a first-order formula over the theory of linear integer arithmetic. It is well known that validity of such formulas is decidable. \(\square \)
Note that (22) is valid iff \(\bigwedge _{i=1}^k \mathrm{max\_coeff\_pos}(p_i)\) is satisfiable. So to implement our decision procedure, one can use integer programming or SMT solvers to check satisfiability of \(\bigwedge _{i=1}^k \mathrm{max\_coeff\_pos}(p_i)\). Lemma 30 allows us to prove our main theorem.
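As noted above, the final step amounts to an integer satisfiability check, so any SMT solver for linear integer arithmetic can be used. The sketch below encodes \(\mathrm{max\_coeff\_pos}\), as defined in the proof of Lemma 30, for the two normalized poly-exponential expressions of Example 26 with the Z3 Python bindings; the choice of Z3, the function names, and the scaling of the rational coefficients by positive integers (which does not change their signs) are ours.

```python
from z3 import Ints, IntVal, Or, And, Solver

w, x, y = Ints('w x y')

def max_coeff_pos(alphas):
    """Encode max_coeff_pos(p): some coefficient is positive while all >-larger
    ones are zero; alphas lists the coefficients of p ordered from the >-maximal
    to the >-minimal marked coefficient."""
    return Or([And(*([alphas[j] > 0] + [alphas[i] == 0 for i in range(j)]))
               for j in range(len(alphas))])

# Coefficients of p1 and p2 from Example 26, each scaled by a positive integer
# to clear the rational constants:
p1 = [6 * y - 2 + 3 * w, IntVal(2), 3 * x - 5]
p2 = [2 - 6 * y - 3 * w, IntVal(2), 3 * x - 2]

s = Solver()
s.add(max_coeff_pos(p1), max_coeff_pos(p2))
print(s.check())   # unsat: no witness for eventual non-termination exists
```

For this particular pair of expressions the conjunction simplifies to \(6 \cdot y - 2 + 3 \cdot w = 0\), so the unsat answer reproduces the conclusion drawn in Example 32 below that the loop of Example 2 terminates.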
Theorem 31. Termination of triangular loops is decidable.
By Theorem 8, termination of triangular loops is decidable iff termination of nnt-loops is decidable. For an nnt-loop (1) we obtain a \(\overline{q}_{norm} \in \mathbb {NPE}[\overline{x}]^{d}\) (see Theorem 17 and Corollary 21) such that (1) is non-terminating iff
$$\begin{aligned} \exists \overline{c} \in \mathbb {Z}^{d}, n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ \varphi [\overline{x} / \overline{q}_{norm}][\overline{x} / \overline{c}], \end{aligned}$$
where \(\varphi \) is a conjunction of inequalities of the form \(\alpha > 0\), \(\alpha \in \mathbb {A}\mathbbm {f}[\overline{x}]\). Hence,
$$\begin{array}{l} \varphi [\overline{x} / \overline{q}_{norm}][\overline{x} / \overline{c}] \; = \; \bigwedge _{i=1}^k p_i[\overline{x}/\overline{c}] > 0 \end{array}$$
where \(p_1,\ldots ,p_k \in \mathbb {NPE}[\overline{x}]\). Thus, by Lemma 30, validity of (20) is decidable. \(\square \)
The following algorithm (Algorithm 4) summarizes our decision procedure: chain the given triangular loop, compute the normalized closed form \(\overline{q}_{norm}\), construct the formula \(\bigwedge _{i=1}^k \mathrm{max\_coeff\_pos}(p_i)\), and check its satisfiability over the integers.
Example 32. In Example 26 we showed that Example 2 is non-terminating iff
$$ \textstyle \exists w,x,y,z \in \mathbb {Z},\ n_0 \in \mathbb {N}.\ \forall n \in \mathbb {N}_{>n_0}.\ p^{\varphi }_1> 0 \wedge p^{\varphi }_2 > 0 $$
is valid. This is the case iff \(\mathrm{max\_coeff\_pos}(p_1) \wedge \mathrm{max\_coeff\_pos}(p_2)\) is satisfiable. This formula is equivalent to \(6 \cdot y - 2 + 3 \cdot w = 0\), which does not have any integer solutions since \(6 \cdot y + 3 \cdot w\) is always divisible by 3 whereas 2 is not. Hence, the loop of Example 2 terminates.
Example 33 shows that our technique does not yield witnesses for non-termination, but it only proves the existence of a witness for eventual non-termination. While such a witness can be transformed into a witness for non-termination by applying the loop several times, it is unclear how often the loop needs to be applied.
Example 33. Consider the following non-terminating loop:
$$\begin{aligned} \mathbf{while}\,\, x > 0\,\, \mathbf{do} \,\, x \leftarrow x+y;\; y \leftarrow 1 \end{aligned}$$
The closed form of x is \(\llbracket n = 0\rrbracket \cdot x + \llbracket n \ne 0\rrbracket \cdot (x + y + n - 1)\). Replacing x with \(q_{norm}\) in \(x > 0\) yields \(x + y + n - 1 > 0\). The maximal marked coefficient of \(x + y + n - 1\) is \(1^{(1,1)}\). So by Algorithm 4, (23) does not terminate if \(\exists x,y \in \mathbb {Z}.\ 1 > 0\) is valid. While \(1 > 0\) is a tautology, (23) terminates if \(x \le 0\) or \(x \le -y\).
However, the final formula constructed by Algorithm 4 precisely describes all witnesses for eventual non-termination.
Corollary 34 (Witnessing Eventual Non-Termination). Let (1) be a triangular loop, let \(\overline{q}_{norm}\) be the normalized closed form of (2), and let
$$ \textstyle \left( \varphi \wedge \varphi [\overline{x} / A\,\overline{x} + \overline{a}]\right) [\overline{x}/\overline{q}_{norm}] = \bigwedge _{i=1}^k p_i > 0. $$
Then \(\overline{c} \in \mathbb {Z}^d\) witnesses eventual non-termination of (1) iff \([\overline{x}/\overline{c}]\) is a model for
$$ \textstyle \bigwedge _{i=1}^k \mathrm{max\_coeff\_pos}(p_i). $$
5 Conclusion

We presented a decision procedure for termination of affine integer loops with triangular update matrices. In this way, we contribute to the ongoing challenge of proving the 15-year-old conjecture by Tiwari [15] that termination of affine integer loops is decidable. After linear loops [4], loops with at most 4 variables [14], and loops with diagonalizable update matrices [3, 14], triangular loops are the fourth important special case where decidability could be proven.
The key idea of our decision procedure is to compute closed forms for the values of the program variables after a symbolic number of iterations n. While these closed forms are rather complex, it turns out that reasoning about first-order formulas over the theory of linear integer arithmetic suffices to analyze their behavior for large n. This allows us to reduce (non-)termination of triangular loops to integer programming. In future work, we plan to investigate generalizations of our approach to other classes of integer loops.
Footnotes

1. Note that multiplying with the least common multiple of all denominators yields an equivalent constraint with integer coefficients, i.e., allowing rational instead of integer coefficients does not extend the considered class of loops.
2. The proofs for real or rational numbers do not carry over to the integers since [15] uses Brouwer's Fixed Point Theorem which is not applicable if the variables range over \(\mathbb {Z}\) and [4] relies on the density of \(\mathbb {Q}\) in \(\mathbb {R}\).
3. Similarly, one could of course also use other termination-preserving pre-processings and try to transform a given program into a triangular loop.
4. The reason is that in this case, \((x - c_1) \ldots (x- c_k)\) is the minimal polynomial of A and diagonalizability is equivalent to the fact that the minimal polynomial is a product of distinct linear factors.
5. For instance, the two-variable loop above can be extended with additional variables that are left unchanged; the resulting update matrix is still triangular, but not diagonalizable.
References

1. Bagnara, R., Zaccagnini, A., Zolo, T.: The Automatic Solution of Recurrence Relations. I. Linear Recurrences of Finite Order with Constant Coefficients. Technical report, Quaderno 334, Dipartimento di Matematica, Università di Parma, Italy (2003). http://www.cs.unipr.it/Publications/
2. Ben-Amram, A.M., Genaim, S., Masud, A.N.: On the termination of integer loops. ACM Trans. Programm. Lang. Syst. 34(4), 16:1–16:24 (2012). https://doi.org/10.1145/2400676.2400679
3. Bozga, M., Iosif, R., Konecný, F.: Deciding conditional termination. Logical Methods Comput. Sci. 10(3) (2014). https://doi.org/10.2168/LMCS-10(3:8)2014
4. Braverman, M.: Termination of integer linear programs. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 372–385. Springer, Heidelberg (2006). https://doi.org/10.1007/11817963_34
5. Brockschmidt, M., Cook, B., Ishtiaq, S., Khlaaf, H., Piterman, N.: T2: temporal property verification. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 387–393. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_22
6. Chen, Y.-F., et al.: Advanced automata-based algorithms for program termination checking. In: Foster, J.S., Grossman, D. (eds.) PLDI 2018, pp. 135–150 (2018). https://doi.org/10.1145/3192366.3192405
7. Chen, H.-Y., David, C., Kroening, D., Schrammel, P., Wachter, B.: Bit-precise procedure-modular termination analysis. ACM Trans. Programm. Lang. Syst. 40(1), 1:1–1:38 (2018). https://doi.org/10.1145/3121136
8. D'Silva, V., Urban, C.: Conflict-driven conditional termination. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9207, pp. 271–286. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21668-3_16
9. Frohn, F., Giesl, J.: Termination of triangular integer loops is decidable. CoRR abs/1905.08664 (2019). https://arxiv.org/abs/1905.08664
10. Frohn, F., Naaf, M., Hensel, J., Brockschmidt, M., Giesl, J.: Lower runtime bounds for integer programs. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 550–567. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40229-1_37
11. Giesl, J., et al.: Analyzing program termination and complexity automatically with AProVE. J. Autom. Reasoning 58(1), 3–31 (2017). https://doi.org/10.1007/s10817-016-9388-y
12. Larraz, D., Oliveras, A., Rodríguez-Carbonell, E., Rubio, A.: Proving termination of imperative programs using Max-SMT. In: Jobstmann, B., Ray, S. (eds.) FMCAD 2013, pp. 218–225 (2013). https://doi.org/10.1109/FMCAD.2013.6679413
13. Le, T.C., Qin, S., Chin, W.-N.: Termination and non-termination specification inference. In: Grove, D., Blackburn, S. (eds.) PLDI 2015, pp. 489–498 (2015). https://doi.org/10.1145/2737924.2737993
14. Ouaknine, J., Pinto, J.S., Worrell, J.: On termination of integer linear loops. In: Indyk, P. (ed.) SODA 2015, pp. 957–969 (2015). https://doi.org/10.1137/1.9781611973730.65
15. Tiwari, A.: Termination of linear programs. In: Alur, R., Peled, D.A. (eds.) CAV 2004. LNCS, vol. 3114, pp. 70–82. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-27813-9_6
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
1. Max Planck Institute for Informatics, Saarbrücken, Germany
2. LuFG Informatik 2, RWTH Aachen University, Aachen, Germany
Frohn F., Giesl J. (2019) Termination of Triangular Integer Loops is Decidable. In: Dillig I., Tasiran S. (eds) Computer Aided Verification. CAV 2019. Lecture Notes in Computer Science, vol 11562. Springer, Cham
Journal of Electrical Engineering and Technology
The Korean Institute of Electrical Engineers (대한전기학회)
Journal of Electrical Engineering and Technology (JEET), the official publication of the Korean Institute of Electrical Engineers (KIEE), is published bimonthly and released its first issue in March 2006. The journal is open to submissions from scholars and experts in the wide areas of electrical engineering technologies. The scope of the journal includes all issues in the field of electrical engineering and technology, including techniques for electrical power engineering, electrical machinery and energy conversion systems, electrophysics and applications, and information and controls. Papers based on novel methodologies and implementations, and creative and innovative electrical engineering associated with these four scopes, are particularly welcome, but submissions are not restricted to the above topics. JEET publishes in conformity with publication ethics codes based on COPE (Committee on Publication Ethics: http://publicationethics.org/) and complies strictly with the general research ethics codes of the KIEE (http://www.kiee.or.kr). Reviews and tutorial articles on contemporary subjects are strongly encouraged. All papers are reviewed by at least three independent reviewers, and authors of accepted papers are required to complete a copyright form transferring all rights to the KIEE. For more detailed information about manuscript preparation, please visit the KIEE website at http://www.kiee.or.kr or contact the secretariat of JEET.
Journal homepage: http://home.jeet.or.kr/ (indexed in KSCI, KCI, SCOPUS, and SCIE)
Type-2 Fuzzy Logic Optimum PV/inverter Sizing Ratio for Grid-connected PV Systems: Application to Selected Algerian Locations
Makhloufi, S.;Abdessemed, R. 731
https://doi.org/10.5370/JEET.2011.6.6.731
Conventional methodologies (empirical, analytical, numerical, hybrid, etc.) for sizing photovoltaic (PV) systems cannot be used when the relevant meteorological data are not available. To overcome this situation, modern methods based on artificial intelligence techniques have been developed for sizing the PV systems. In the present study, the optimum PV/inverter sizing ratio for grid-connected PV systems with orientation due south and inclination angles of $45^{\circ}$ and $60^{\circ}$ in selected Algerian locations was determined in terms of total system output using type-2 fuzzy logic. Because measured data for the locations chosen were not available, a year of synthetic hourly meteorological data for each location generated by the PVSYST software was used in the simulation.
A Novel Algorithm for Fault Type Fast Diagnosis in Overhead Transmission Lines Using Hidden Markov Models
Jannati, M.;Jazebi, S.;Vahidi, B.;Hosseinian, S.H. 742
Power transmission lines are among the most important components of the electric power system. Failures in the operation of power transmission lines can result in serious power system problems. Hence, fault diagnosis (transient or permanent) in power transmission lines is very important to ensure the reliable operation of the power system. A hidden Markov model (HMM), a powerful pattern recognizer, classifies events in a probabilistic manner based on the fault signal waveform and characteristics. This paper presents an application of HMMs to classify faults in overhead power transmission lines. The algorithm uses voltage samples of one-fourth cycle from the inception of the fault. Simulations performed in the EMTPWorks and MATLAB environments validate the fast response of the classifier, which provides a fast and accurate protection scheme for power transmission lines.
Controller Optimization for Bidirectional Power Flow in Medium-Voltage DC Power Systems
Chung, Il-Yop;Liu, Wenxin;Cartes, David A.;Cho, Soo-Hwan;Kang, Hyun-Koo 750
This paper focuses on the control of bidirectional power flow in the electric shipboard power systems, especially in the Medium-Voltage Direct Current (MVDC) shipboard power system. Bidirectional power control between the main MVDC bus and the local zones can improve the energy efficiency and control flexibility of electric ship systems. However, since the MVDC system contains various nonlinear loads such as pulsed power load and radar in various subsystems, the voltage of the MVDC and the local zones varies significantly. This voltage variation affects the control performance of the bidirectional DC-DC converters as exogenous disturbances. To improve the control performance regardless of uncertainties and disturbances, this paper proposes a novel controller design method of the bidirectional DC-DC converters using $L_1$ control theory and intelligent optimization algorithm. The performance of the proposed method is verified via large-scale real-time digital simulation of a notional shipboard MVDC power system.
A Metamodeling Approach for Leader Progression Model-based Shielding Failure Rate Calculation of Transmission Lines Using Artificial Neural Networks
Tavakoli, Mohammad Reza Bank;Vahidi, Behrooz 760
The performance of transmission lines and their shielding design during a lightning phenomenon are quite essential to the maintenance of a reliable power supply to consumers. The leader progression model, as an advanced approach, has recently been developed to calculate the shielding failure rate (SFR) of transmission lines using geometrical data and the physical behavior of upward and downward lightning leaders. However, such a method is quite time consuming. In the present paper, an effective method that utilizes artificial neural networks (ANNs) to create a metamodel for calculating the SFR of a transmission line based on shielding angle and height is introduced. The results of investigations on a real case study reveal that, through proper selection of an ANN structure and good training, the ANN prediction is very close to the result of the detailed simulation, whereas the processing time is far lower than that of the detailed model.
Voltage Quality Improvement with Neural Network-Based Interline Dynamic Voltage Restorer
Aali, Seyedreza;Nazarpour, Daryoush 769
Custom power devices such as the dynamic voltage restorer (DVR) and DSTATCOM are used to improve power quality in distribution systems. These devices require real power to compensate for deep voltage sags over a sufficient time. An interline DVR (IDVR) consists of several DVRs in different feeders. In this paper, a neural network is proposed to control the IDVR to achieve optimal mitigation of voltage sags, swells, and unbalance, as well as improvement of dynamic performance. Three multilayer perceptron neural networks are used to identify and regulate the voltage dynamics on the sensitive load. A backpropagation algorithm trains this type of network. The proposed controller provides optimal mitigation of the voltage dynamics. Simulation carried out in MATLAB/Simulink demonstrates that the proposed controller has a fast response with lower total harmonic distortion.
Wide-area Frequency-based Tripped Generator Locating Method for Interconnected Power Systems
Kook, Kyung-Soo;Liu, Yilu 776
Since the Internet-based, real-time, Global Positioning System (GPS)-synchronized wide-area power system frequency monitoring network (FNET) was proposed in 2001, it has been monitoring the power system frequency in the interconnected United States power systems, and numerous interesting behaviors have been observed, including frequency excursion propagation. We address the consistency of the frequency excursion detection order of the frequency disturbance recorders in FNET in relation to the same generation trip, as well as the ability to recreate it by power system dynamic simulation. We also propose a new method, as an application of FNET measurement, to locate a tripped generator using power system dynamic simulation and wide-area frequency measurement. A simulation database of all the possible generator trips in the interconnected power systems is created using off-line power system dynamic simulation. When FNET detects a sudden drop in the monitored frequency, which is most likely due to a generation trip, the proposed algorithm locates the tripped generator by finding the case in the simulation database that best matches the measured frequency excursion in terms of the frequency drop detection order and the time of the monitoring points.
Optimal Capacitor Placement Considering Voltage-stability Margin with Hybrid Particle Swarm Optimization
Kim, Tae-Gyun;Lee, Byong-Jun;Song, Hwa-Chang 786
The present paper presents an optimal capacitor placement (OCP) algorithm for voltage-stability enhancement. The OCP problem is formulated as a mixed-integer, highly nonlinear problem. The hybrid particle swarm optimization (HPSO) algorithm is proposed to solve the OCP problem. The HPSO algorithm combines optimal power flow (OPF) using the primal-dual interior-point method (PDIPM) with ordinary PSO. It takes advantage of the global search ability of PSO and the very fast running time of the OPF algorithm with PDIPM. In addition, the OPF gives intelligence to the PSO through the information provided by the dual variables of the OPF. Numerical results illustrate that the HPSO algorithm can improve accuracy and reduce the simulation running time. Test results evaluated with the three-bus, New England 39-bus, and Korea Electric Power Corporation systems show the applicability of the proposed algorithm.
Phase Current Magnitude Variation Method to Reduce End-Effect Force of PM Linear Synchronous Motor
Kim, Min-Jae;Lim, Jae-Won;Yim, Woo-Gyong;Jung, Hyun-Kyo 793
Numerous methods are available for reducing the end-effect force of linear machines. The majority of these methods focus on redesigning the poles or slots. However, these methods require additional manufacturing cost and decrease the power density. The current paper introduces another approach to reducing the end-effect force. The new approach is a method of tuning the input phase current magnitudes individually. With the proposed method, reduction of the end-effect force can be achieved without redesigning the poles and slots or attaching auxiliary poles and slots. The proposed method is especially applicable when the target motor is very expensive or will be used for a special mission, such as hauling army vehicles equipped with three single-phase inverters. The validity of the suggested method was demonstrated by the finite element method with a three-phase permanent-magnet linear synchronous motor.
Comparison of Three Modeling Methods for Identifying Unknown Magnetization of Ferromagnetic Thin Plate
Choi, Nak-Sun;Kim, Dong-Wook;Yang, Chang-Seob;Chung, Hyun-Ju;Kim, Hong-Joon;Kim, Dong-Hun 799
This study presents three different magnetization models for identifying unknown magnetization of the ferromagnetic thin plate of a ship. First, the forward problem should be solved to accurately predict outboard magnetic fields due to the magnetization distribution estimated at a certain time. To achieve this, three different modeling methods for representing remanent magnetization (i.e., magnetic charge method, magnetic dipole array method, and magnetic moment method) were utilized. Material sensitivity formulas containing the first-order gradient information of an objective function were then adopted for an efficient search of an optimum magnetization distribution on the hull. The validity of the proposed methods was tested with a scale model ship, and field signals predicted from the three different models were thoroughly investigated with reference to the experimental data.
A Study on Swarm Robot-Based Invader-Enclosing Technique on Multiple Distributed Object Environments
Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo 806
Interest in social security has recently increased in favor of the safety of infrastructure. In addition, advances in computer vision and pattern recognition research are leading to video-based surveillance systems with improved scene analysis capabilities. However, such video surveillance systems, which are controlled by human operators, cannot actively cope with dynamic and anomalous events, such as an invader in the corporate, commercial, or public sectors. For this reason, intelligent surveillance systems are increasingly needed to provide active social security services. In this study, we propose a core technique for an intelligent surveillance system that is based on swarm robot technology. We present techniques for invader enclosing using swarm robots in a multiple distributed object environment. The proposed method is composed of three main stages: location estimation of the object, specified object tracking, and decision of the cooperative behavior of the swarm robots. Using a particle filter, the object tracking and location estimation procedures are performed, and a specified enclosing point for the swarm robots is located at the interactive positions in their coordinate system. Furthermore, the cooperative behaviors of the swarm robots are determined via the result of path navigation based on a combination of potential field and wall-following methods. The results of each stage are combined into the swarm robot-based invader-enclosing technique for multiple distributed object environments. Finally, several simulation results are provided to further discuss and verify the accuracy and effectiveness of the proposed techniques.
Design of UHF CMOS Front-ends for Near-field Communications
Hamedi-Hagh, Sotoudeh;Tabesh, Maryam;Oh, Soo-Seok;Park, Noh-Joon;Park, Dae-Hee 817
This paper introduces an efficient voltage multiplier circuit for improved voltage gain and power efficiency of radio frequency identification (RFID) tags. The multiplier is fully integratable and takes advantage of both passive and active circuits to reduce the required input power while yielding the desired DC voltage. A six-stage voltage multiplier and an ultralow power voltage regulator are designed in a 0.13 ${\mu}m$ complementary metal-oxide semiconductor process for 2.45 GHz RFID applications. The minimum required input power for a 1.2 V supply voltage in the case of a 50 ${\Omega}$ antenna is -20.45 dBm. The efficiency is 15.95% for a 1 $M{\Omega}$ load. The regulator consumes 129 nW DC power and maintains the reference voltage in a 1.1% range with $V_{dd}$ varying from 0.8 to 2 V. The power supply noise rejection of the regulator is 42 dB near a 2.45 GHz frequency and performs better than -32 dB from 100 Hz to 10 GHz frequencies.
Transmission Line Analysis of Accumulation Layer in IEGT
Moon, Jin-Woo;Chung, Sang-Koo 824
A transmission line analysis of the surface accumulation layer in the injection-enhanced gate transistor (IEGT) is presented for the first time, based on the per-unit-length resistance and conductance of the surface layer beneath the gate of the IEGT. The lateral electric field on the accumulation layer surface, as well as the electron current injected into the accumulation layer, is governed by the well-known wave equation and decreases as an exponential function of the lateral distance from the cathode. The unit-length resistance and conductance of the layer are expressed in terms of the device parameters and the applied gate voltage. Results obtained from the experiments are consistent with the numerical simulations.
Power Frequency Magnetic Field Reduction Method for Residents in the Vicinity of Overhead Transmission Lines Using Passive Loop
Lee, Byeong-Yoon;Myung, Sung-Ho;Cho, Yeun-Gyu;Lee, Dong-Il;Lim, Yun-Seog;Lee, Sang-Yun 829
A power frequency magnetic field reduction method using a passive loop is presented. This method can be used to reduce the magnetic fields generated by alternating-current overhead transmission lines within a restricted area near the lines. A reduction algorithm is described, and the related equations for magnetic field reduction are explained. The proposed method is applied to a scaled-down transmission line model. The lateral distribution of the reduction ratio between the magnetic fields before and after passive loop installation is calculated to evaluate the reduction effect. The calculated results show that, compared with other reduction methods such as active loops, increasing the transmission line height, or power transmission using underground cables, the passive loop can cost-effectively reduce the power frequency magnetic fields generated by overhead transmission lines in their vicinity.
Preparation and Characterization of Plasma Polymerized Methyl Methacrylate Thin Films as Gate Dielectric for Organic Thin Film Transistor
Ao, Wei;Lim, Jae-Sung;Shin, Paik-Kyun 836
Plasma-polymerized methyl methacrylate (ppMMA) thin films were deposited by a plasma polymerization technique with different plasma powers and subsequently thermally treated at temperatures of 60 to $150^{\circ}C$. To find a better ppMMA preparation technique for application to organic thin film transistors (OTFTs) as a dielectric layer, the chemical composition, surface morphology, and electrical properties of the ppMMA were investigated. The effect of the ppMMA thin-film preparation conditions on the resulting thin-film properties was discussed, specifically the O-H site content in the ppMMA, the dielectric constant, the leakage current density, and hysteresis.
A New CW CO2 Laser with Precise Output and Minimal Fluctuation by Adopting a High-frequency LCC Resonant Converter
Lee, Dong-Gil;Park, Seong-Wook;Yang, Yong-Su;Kim, Hee-Je;Xu, Guo-Cheng 842
The current study proposes the design of a hybrid series-parallel resonant converter (SPRC) and a three-stage Cockcroft-Walton voltage multiplier for precisely adjusting the power generated by a continuous wave (CW) $CO_2$ laser. The design of a hybrid SPRC, called LCC resonant converter, is described, and the fundamental approximation of a high-voltage and high-frequency (HVHF) transformer with a resonant tank is discussed. The results of the current study show that the voltage drop and ripple of a three-stage Cockcroft-Walton voltage multiplier depend on frequency. The power generated by a CW $CO_2$ laser can be precisely adjusted by a variable-frequency controller using a DSP (TMS320F2812) microprocessor. The proposed LCC converter could be used to obtain a maximum laser output power of 23 W. Moreover, it could precisely adjust the laser output power within 4.3 to 23 W at an operating frequency range of 187.5 to 370 kHz. The maximum efficiency of the $CO_2$ laser system is approximately 16.5%, and the minimum ripple of output voltage is about 1.62%.
Study on the Mitigation of the Resonance due to the Power-Bus Structure using Periodic Metal-Strip Loaded Sheets
Kahng, Sung-Tek;Kim, Hyeong-Seok 849
This paper investigates a method to tackle the resonance problems of the rectangular power-bus structure (PBS) using thin sheets loaded with periodic metal strips. The equivalent surface impedance of the proposed loading is calculated and incorporated into the expression for the impedance of the PBS, in order to improve the resonance behavior of the original structure. The effects of the strips and their immediate surroundings are illustrated by a number of numerical experiments. The restrictions of the technique are also addressed.
Identification of Fuzzy Inference Systems Using a Multi-objective Space Search Algorithm and Information Granulation
Huang, Wei;Oh, Sung-Kwun;Ding, Lixin;Kim, Hyun-Ki;Joo, Su-Chong 853
We propose a multi-objective space search algorithm (MSSA) and introduce the identification of fuzzy inference systems based on the MSSA and information granulation (IG). The MSSA is a multi-objective optimization algorithm whose search method is associated with the analysis of the solution space. The multi-objective mechanism of MSSA is realized using a non-dominated sorting-based multi-objective strategy. In the identification of the fuzzy inference system, the MSSA is exploited to carry out parametric optimization of the fuzzy model and to achieve its structural optimization. The granulation of information is attained using the C-Means clustering algorithm. The overall optimization of fuzzy inference systems comes in the form of two identification mechanisms: structure identification (such as the number of input variables to be used, a specific subset of input variables, the number of membership functions, and the polynomial type) and parameter identification (viz. the apexes of membership function). The structure identification is developed by the MSSA and C-Means, whereas the parameter identification is realized via the MSSA and least squares method. The evaluation of the performance of the proposed model was conducted using three representative numerical examples such as gas furnace, NOx emission process data, and Mackey-Glass time series. The proposed model was also compared with the quality of some "conventional" fuzzy models encountered in the literature.
Reducing the Search Space for Pathfinding in Navigation Meshes by Using Visibility Tests
Kim, Hyun-Gil;Yu, Kyeon-Ah;Kim, Jun-Tae 867
A navigation mesh (NavMesh) is a suitable tool for the representation of a three-dimensional game world. A NavMesh consists of convex polygons covering free space, so the path can be found reliably without detecting collision with obstacles. The main disadvantage of a NavMesh is the huge state space. When the $A^*$ algorithm is applied to polygonal meshes for detailed terrain representation, the pathfinding can be inefficient due to the many states to be searched. In this paper, we propose a method to reduce the number of states searched by using visibility tests to achieve fast searching even on a detailed terrain with a large number of polygons. Our algorithm finds the visible vertices of the obstacles from the critical states and uses the heuristic function of $A^*$, defined as the distance to the goal through such visible vertices. The results show that the number of searched states can be substantially reduced compared to the $A^*$ search with a straight-line distance heuristic.
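(Illustrative sketch only, not the authors' implementation: one way to phrase such a visibility-based heuristic in Python. The 2-D points and the visible_vertices list, assumed to be produced by a separate line-of-sight test, are placeholders introduced here for illustration.)

import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def visibility_heuristic(state, goal, visible_vertices):
    # visible_vertices: obstacle vertices visible from `state`.
    # With nothing in the way, fall back to the straight-line distance;
    # otherwise estimate the cost of detouring through a visible vertex.
    if not visible_vertices:
        return euclidean(state, goal)
    return min(euclidean(state, v) + euclidean(v, goal)
               for v in visible_vertices)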
Decoupling Controller Design for H∞ Performance Condition
Park, Tae-Dong;Choi, Goon-Ho;Cho, Yong-Seok;Park, Ki-Heon 874
The decoupling design for the one-degree-of-freedom controller system is treated within the $H_{\infty}$ framework. In the present study, we demonstrate that the $H_{\infty}$ performance problem in the decoupling design reduces to interpolation problems on scalar functions. To guarantee the properness of the decoupling controllers and the overall transfer matrix, relative degree conditions on the interpolating scalar functions are derived. To find the interpolating functions with relative degree constraints, the Nevanlinna-Pick algorithm with a starting function constraint is utilized. An illustrative example is given to provide details regarding the solution.
Adaptive Parameter Estimation Method for Wireless Localization Using RSSI Measurements
Cho, Hyun-Hun;Lee, Rak-Hee;Park, Joon-Goo 883
Location-based service (LBS) is becoming an important part of the information technology (IT) business. Localization is a core technology for LBS because LBS depends on the position of each device or user. Outdoors, GPS, which is used to determine the position of a moving user, is the dominant technology. Because satellite signals cannot reach indoors, GPS cannot be used in indoor environments. Therefore, research on indoor localization technology with the same accuracy as outdoor GPS is needed for "seamless LBS". For indoor localization, we consider the IEEE 802.11 WLAN environment. Generally, the received signal strength indicator (RSSI) is used to obtain the specific position of a user in a WLAN environment. RSSI has the characteristic of decreasing with distance. To use RSSI for indoor localization, a mathematical model of RSSI that reflects this characteristic is used. However, the RSSI of the mathematical model differs from the real RSSI, which is strongly affected by the propagation environment, and this difference causes localization error. Thus, it is necessary to set a proper RSSI model in order to obtain an accurate localization result. We propose a method in which the parameters of the propagation environment are determined using only the RSSI measurements obtained during localization.
Tag Archives: cheating
Cheating at Professional Poker
2019-10-09 Bruce Schneier
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/cheating_at_pro_1.html
Interesting story about someone who is almost certainly cheating at professional poker.
But then I start to see things that seem so obvious, but I wonder whether they aren't just paranoia after hours and hours of digging into the mystery. Like the fact that he starts wearing a hat that has a strange bulge around the brim — one that vanishes after the game when he's doing an interview in the booth. Is it a bone-conducting headset, as some online have suggested, sending him messages directly to his inner ear by vibrating on his skull? Of course it is! How could it be anything else? It's so obvious! Or the fact that he keeps his keys in the same place on the table all the time. Could they contain a secret camera that reads electronic sensors on the cards? I can't see any other possibility! It is all starting to make sense.
In the end, though, none of this additional evidence is even necessary. The gaggle of online Jim Garrisons have simply picked up more momentum than is required and they can't stop themselves. The fact is, the mystery was solved a long time ago. It's just like De Niro's Ace Rothstein says in Casino when the yokel slot attendant gets hit for three jackpots in a row and tells his boss there was no way for him to know he was being scammed. "Yes there is," Ace replies. "An infallible way. They won." According to one poster on TwoPlusTwo, in 69 sessions on Stones Live, Postle has won in 62 of them, for a profit of over $250,000 in 277 hours of play. Given that he plays such a large number of hands, and plays such an erratic and, by his own admission, high-variance style, one would expect to see more, well, variance. His results just aren't possible even for the best players in the world, which, if he isn't cheating, he definitely is among. Add to this the fact that it has been alleged that Postle doesn't play in other nonstreamed live games at Stones, or anywhere else in the Sacramento area, and hasn't been known to play in any sizable no-limit games anywhere in a long time, and that he always picks up his chips and leaves as soon as the livestream ends. I don't really need any more evidence than that. If you know poker players, you know that this is the most damning evidence against him. Poker players like to play poker. If any of the poker players I know had the win rate that Mike Postle has, you'd have to pry them up from the table with a crowbar. The guy is making nearly a thousand dollars an hour! He should be wearing adult diapers so he doesn't have to take a bathroom break and cost himself $250.
This isn't the first time someone has been accused of cheating because they are simply playing significantly better than computer simulations predict that even the best player would play.
News article. BoingBoing post
Tags: cheating, gambling
Smart Watches and Cheating on Tests
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/smart_watches_a.html
The Independent Commission on Examination Malpractice in the UK has recommended that all watches be banned from exam rooms, basically because it's becoming very difficult to tell regular watches from smart watches.
Tags: cheating, internet of things
Cheating in Bird Racing
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/cheating_in_bir.html
I've previously written about people cheating in marathon racing by driving — or otherwise getting near the end of the race by faster means than running. In China, two people were convicted of cheating in a pigeon race:
The essence of the plan involved training the pigeons to believe they had two homes. The birds had been secretly raised not just in Shanghai but also in Shangqiu.
When the race was held in the spring of last year, the Shanghai Pigeon Association took all the entrants from Shanghai to Shangqiu and released them. Most of the pigeons started flying back to Shanghai.
But the four specially raised pigeons flew instead to their second home in Shangqiu. According to the court, the two men caught the birds there and then carried them on a bullet train back to Shanghai, concealed in milk cartons. (China prohibits live animals on bullet trains.)
When the men arrived in Shanghai, they released the pigeons, which quickly fluttered to their Shanghai loft, seemingly winning the race.
Tags: cheating, china, sports
ISP Questions Impartiality of Judges in Copyright Troll Cases
2018-06-02 Andy
Post Syndicated from Andy original https://torrentfreak.com/isp-questions-impartiality-of-judges-in-copyright-troll-cases-180602/
Following in the footsteps of similar operations around the world, two years ago the copyright trolling movement landed on Swedish shores.
The pattern was a familiar one, with trolls harvesting IP addresses from BitTorrent swarms and tracing them back to Internet service providers. Then, after presenting evidence to a judge, the trolls obtained orders that compelled ISPs to hand over their customers' details. From there, the trolls demanded cash payments to make supposed lawsuits disappear.
It's a controversial business model that rarely receives outside praise. Many ISPs have tried to slow down the flood but most eventually grow tired of battling to protect their customers. The same cannot be said of Swedish ISP Bahnhof.
The ISP, which is also a strong defender of privacy, has become known for fighting back against copyright trolls. Indeed, to thwart them at the very first step, the company deletes IP address logs after just 24 hours, which prevents its customers from being targeted.
Bahnhof says that the copyright business appeared "dirty and corrupt" right from the get go, so it now operates Utpressningskollen.se, a web portal where the ISP publishes data on Swedish legal cases in which copyright owners demand customer data from ISPs through the Patent and Market Courts.
Over the past two years, Bahnhof says it has documented 76 cases of which six are still ongoing, 11 have been waived and a majority 59 have been decided in favor of mainly movie companies. Bahnhof says that when it discovered that 59 out of the 76 cases benefited one party, it felt a need to investigate.
In a detailed report compiled by Bahnhof Communicator Carolina Lindahl and sent to TF, the ISP reveals that it examined the individual decision-makers in the cases before the Courts and found five judges with "questionable impartiality."
"One of the judges, we can call them Judge 1, has closed 12 of the cases, of which two have been waived and the other 10 have benefitted the copyright owner, mostly movie companies," Lindahl notes.
"Judge 1 apparently has written several articles in the magazine NIR – Nordiskt Immateriellt Rättsskydd (Nordic Intellectual Property Protection) – which is mainly supported by Svenska Föreningen för Upphovsrätt, the Swedish Association for Copyright (SFU).
"SFU is a member-financed group centered around copyright that publishes articles, hands out scholarships, arranges symposiums, etc. On their website they have a public calendar where Judge 1 appears regularly."
Bahnhof says that the financiers of the SFU are Sveriges Television AB (Sweden's national public TV broadcaster), Filmproducenternas Rättsförening (a legally-oriented association for filmproducers), BMG Chrysalis Scandinavia (a media giant) and Fackförbundet för Film och Mediabranschen (a union for the movie and media industry).
"This means that Judge 1 is involved in a copyright association sponsored by the film and media industry, while also judging in copyright cases with the film industry as one of the parties," the ISP says.
Bahnhof's also has criticism for Judge 2, who participated as an event speaker for the Swedish Association for Copyright, and Judge 3 who has written for the SFU-supported magazine NIR. According to Lindahl, Judge 4 worked for a bureau that is partly owned by a board member of SFU, who also defended media companies in a "high-profile" Swedish piracy case.
That leaves Judge 5, who handled 10 of the copyright troll cases documented by Bahnhof, waiving one and deciding the remaining nine in favor of a movie company plaintiff.
"Judge 5 has been questioned before and even been accused of bias while judging a high-profile piracy case almost ten years ago. The accusations of bias were motivated by the judge's membership of SFU and the Swedish Association for Intellectual Property Rights (SFIR), an association with several important individuals of the Swedish copyright community as members, who all defend, represent, or sympathize with the media industry," Lindahl says.
Bahnhof hasn't named any of the judges nor has it provided additional details on the "high-profile" case. However, anyone who remembers the infamous trial of 'The Pirate Bay Four' a decade ago might recall complaints from the defense (1,2,3) that several judges involved in the case were members of pro-copyright groups.
While there were plenty of calls to consider them biased, in May 2010 the Supreme Court ruled otherwise, a fact Bahnhof recognizes.
"Judge 5 was never sentenced for bias by the court, but regardless of the court's decision this is still a judge who shares values and has personal connections with [the media industry], and as if that weren't enough, the judge has induced an additional financial aspect by participating in events paid for by said party," Lindahl writes.
"The judge has parties and interest holders in their personal network, a private engagement in the subject and a financial connection to one party – textbook characteristics of bias which would make anyone suspicious."
The decision-makers of the Patent and Market Court and their relations.
The ISP notes that all five judges have connections to the media industry in the cases they judge, which isn't a great starting point for returning "objective and impartial" results. In its summary, however, the ISP is scathing of the overall system, one in which court cases "almost looked rigged" and appear to be decided in favor of the movie company even before reaching court.
In general, however, Bahnhof says that the processes show a lack of individual attention, such as the court blindly accepting questionable IP address evidence supplied by infamous anti-piracy outfit MaverickEye.
"The court never bothers to control the media company's only evidence (lists generated by MaverickMonitor, which has proven to be an unreliable software), the court documents contain several typos of varying severity, and the same standard texts are reused in several different cases," the ISP says.
"The court documents show a lack of care and control, something that can easily be taken advantage of by individuals with shady motives. The findings and discoveries of this investigation are strengthened by the pure numbers mentioned in the beginning which clearly show how one party almost always wins.
"If this is caused by bias, cheating, partiality, bribes, political agenda, conspiracy or pure coincidence we can't say for sure, but the fact that this process has mainly generated money for the film industry, while citizens have been robbed of their personal integrity and legal certainty, indicates what forces lie behind this machinery," Bahnhof's Lindahl concludes.
Court Orders Pirate IPTV Linker to Shut Down or Face Penalties Up to €1.25m
Post Syndicated from Andy original https://torrentfreak.com/court-orders-pirate-iptv-linker-to-shut-down-or-face-penalties-up-to-e1-25m-180911/
There are few things guaranteed in life. Death, taxes, and lawsuits filed regularly by Dutch anti-piracy outfit BREIN.
One of its most recent targets was Netherlands-based company Leaper Beheer BV, which also traded under the names Flickstore, Dump Die Deal and Live TV Store. BREIN filed a complaint at the Limburg District Court in Maastricht, claiming that Leaper provides access to unlicensed live TV streams and on-demand movies.
The anti-piracy outfit claimed that around 4,000 live channels were on offer, including Fox Sports, movie channels, commercial and public channels. These could be accessed after the customer made a payment which granted access to a unique activation code which could be entered into a set-top box.
BREIN told the court that the code returned an .M3U playlist, which was effectively a hyperlink to IPTV channels and more than 1,000 movies being made available without permission from their respective copyright holders. As such, this amounted to a communication to the public in contravention of the EU Copyright Directive, BREIN argued.
In its defense, Leaper said that it effectively provided a convenient link-shortening service for content that could already be found online in other ways. The company argued that it is not a distributor of content itself and did not make available anything that wasn't already public. The company added that it was completely down to the consumer whether illegal content was viewed or not.
The key question for the Court was whether Leaper did indeed make a new "communication to the public" under the EU Copyright Directive, a standard the Court of Justice of the European Union (CJEU) says should be interpreted in a manner that provides a high level of protection for rightsholders.
The Court took a three-point approach in arriving at its decision.
Did Leaper act in a deliberate manner when providing access to copyright content, especially when its intervention provided access to consumers who would not ordinarily have access to that content?
Did Leaper communicate the works via a new method to a new audience?
Did Leaper have a profit motive when it communicated works to the public?
The Court found that Leaper did communicate works to the public and intervened "with full knowledge of the consequences of its conduct" when it gave its customers access to protected works.
"Access to [the content] in a different way would be difficult for those customers, if Leaper were not to provide its services in question," the Court's decision reads.
"Leaper reaches an indeterminate number of potential recipients who can take cognizance of the protected works and form a new audience. The purchasers who register with Leaper are to be regarded as recipients who were not taken into account by the rightful claimants when they gave permission for the original communication of their work to the public."
With that, the Court ordered Leaper to cease-and-desist facilitating access to unlicensed streams within 48 hours of the judgment, with non-compliance penalties of 5,000 euros per IPTV subscription sold, link offered, or day exceeded, up to a maximum of one million euros.
But the Court didn't stop there.
"Leaper must submit a statement audited by an accountant, supported by (clear, readable copies of) all relevant documents, within 12 days of notification of this judgment of all the relevant (contact) details of the (person or legal persons) with whom the company has had contact regarding the provision of IPTV subscriptions and/or the provision of hyperlinks to sources where films and (live) broadcasts are evidently offered without the permission of the entitled parties," the Court ruled.
Failure to comply with this aspect of the ruling will lead to more penalties of 5,000 euros per day up to a maximum of 250,000 euros. Leaper was also ordered to pay BREIN's costs of 20,700 euros.
Describing the people behind Leaper as "crooks" who previously sold media boxes with infringing addons (as previously determined to be illegal in the Filmspeler case), BREIN chief Tim Kuik says that a switch of strategy didn't help them evade the law.
"[Leaper] sold a link to consumers that gave access to unauthorized content, i.e. pay-TV channels as well as video-on-demand films and series," BREIN chief Tim Kuik informs TorrentFreak.
"They did it for profit and should have checked whether the content was authorized. They did not and in fact were aware the content was unauthorized. Which means they are clearly infringing copyright.
"This is evident from the CJEU case law in GS Media as well as Filmspeler and The Pirate Bay, aka the Dutch trilogy because the three cases came from the Netherlands, but these rulings are applicable throughout the EU.
"They just keep at it knowing they're cheating and we'll take them to the cleaners," Kuik concludes.
Tech wishes for 2018
2018-02-18 Eevee
Post Syndicated from Eevee original https://eev.ee/blog/2018/02/18/tech-wishes-for-2018/
Anonymous asks, via money:
What would you like to see happen in tech in 2018?
(answer can be technical, social, political, combination, whatever)
Less of this
I'm not really qualified to speak in depth about either of these things, but let me put my foot in my mouth anyway:
The Blockchain™
Bitcoin was a neat idea. No, really! Decentralization is cool. Overhauling our terrible financial infrastructure is cool. Hash functions are cool.
Unfortunately, it seems to have devolved into mostly a get-rich-quick scheme for nerds, and by nearly any measure it's turning into a spectacular catastrophe. Its "success" is measured in how much a bitcoin is worth in US dollars, which is pretty close to an admission from its own investors that its only value is in converting back to "real" money — all while that same "success" is making it less useful as a distinct currency.
Blah, blah, everyone already knows this.
What concerns me slightly more is the gold rush hype cycle, which is putting cryptocurrency and "blockchain" in the news and lending it all legitimacy. People have raked in millions of dollars on ICOs of novel coins I've never heard mentioned again. (Note: again, that value is measured in dollars.) Most likely, none of the investors will see any return whatsoever on that money. They can't, really, unless a coin actually takes off as a currency, and that seems at odds with speculative investing since everyone either wants to hoard or ditch their coins. When the coins have no value themselves, the money can only come from other investors, and eventually the hype winds down and you run out of other investors.
I fear this will hurt a lot of people before it's over, so I'd like for it to be over as soon as possible.
That said, the hype itself has gotten way out of hand too. First it was the obsession with "blockchain" like it's a revolutionary technology, but hey, Git is a fucking blockchain. The novel part is the way it handles distributed consensus (which in Git is basically left for you to figure out), and that's uniquely important to currency because you want to be pretty sure that money doesn't get duplicated or lost when moved around.
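For what it's worth, the "chain of hashes" part really is that simple. Here's a toy sketch (my own, and nothing like how Git or Bitcoin actually store data) of a ledger where each entry commits to the one before it:

import hashlib

def add_block(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": digest})

ledger = []
add_block(ledger, "alice pays bob 5")
add_block(ledger, "bob pays carol 2")
# Editing an earlier payload changes its hash, which no longer matches the
# "prev" recorded by every later entry. The hard part isn't this; it's
# getting everyone to agree on which chain is the real one.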
But now we have startups trying to use blockchains for website backends and file storage and who knows what else? Why? What advantage does this have? When you say "blockchain", I hear "single Git repository" — so when you say "email on the blockchain", I have an aneurysm.
Bitcoin seems to have sparked imagination in large part because it's decentralized, but I'd argue it's actually a pretty bad example of a decentralized network, since people keep forking it. The ability to fork is a feature, sure, but the trouble here is that the Bitcoin family has no notion of federation — there is one canonical Bitcoin ledger and it has no notion of communication with any other. That's what you want for currency, not necessarily other applications. (Bitcoin also incentivizes frivolous forking by giving the creator an initial pile of coins to keep and sell.)
And federation is much more interesting than decentralization! Federation gives us email and the web. Federation means I can set up my own instance with my own rules and still be able to meaningfully communicate with the rest of the network. Federation has some amount of tolerance for changes to the protocol, so such changes are more flexible and rely more heavily on consensus.
Federation is fantastic, and it feels like a massive tragedy that this rekindled interest in decentralization is mostly focused on peer-to-peer networks, which do little to address our current problems with centralized platforms.
And hey, you know what else is federated? Banks.
Again, the tech is cool and all, but the marketing hype is getting way out of hand.
Maybe what I really want from 2018 is less marketing?
For one, I've seen a huge uptick in uncritically referring to any software that creates or classifies creative work as "AI". Can we… can we not. It's not AI. Yes, yes, nerds, I don't care about the hair-splitting about the nature of intelligence — you know that when we hear "AI" we think of a human-like self-aware intelligence. But we're applying it to stuff like a weird dog generator. Or to whatever neural network a website threw into production this week.
And this is dangerously misleading — we already had massive tech companies scapegoating The Algorithm™ for the poor behavior of their software, and now we're talking about those algorithms as though they were self-aware, untouchable, untameable, unknowable entities of pure chaos whose decisions we are arbitrarily bound to. Ancient, powerful gods who exist just outside human comprehension or law.
It's weird to see this stuff appear in consumer products so quickly, too. It feels quick, anyway. The latest iPhone can unlock via facial recognition, right? I'm sure a lot of effort was put into ensuring that the same person's face would always be recognized… but how confident are we that other faces won't be recognized? I admit I don't follow all this super closely, so I may be imagining a non-problem, but I do know that humans are remarkably bad at checking for negative cases.
Hell, take the recurring problem of major platforms like Twitter and YouTube classifying anything mentioning "bisexual" as pornographic — because the word is also used as a porn genre, and someone threw a list of porn terms into a filter without thinking too hard about it. That's just a word list, a fairly simple thing that any human can review; but suddenly we're confident in opaque networks of inferred details?
I don't know. "Traditional" classification and generation are much more comforting, since they're a set of fairly abstract rules that can be examined and followed. Machine learning, as I understand it, is less about rules and much more about pattern-matching; it's built out of the fingerprints of the stuff it's trained on. Surely that's just begging for tons of edge cases. They're practically made of edge cases.
I'm reminded of a point I saw made a few days ago on Twitter, something I'd never thought about but should have. TurnItIn is a service for universities that checks whether students' papers match any others, in order to detect cheating. But this is a paid service, one that fundamentally hinges on its corpus: a large collection of existing student papers. So students pay money to attend school, where they're required to let their work be given to a third-party company, which then profits off of it? What kind of a goofy business model is this?
And my thoughts turn to machine learning, which is fundamentally different from an algorithm you can simply copy from a paper, because it's all about the training data. And to get good results, you need a lot of training data. Where is that all coming from? How many for-profit companies are setting a neural network loose on the web — on millions of people's work — and then turning around and selling the result as a product?
This is really a question of how intellectual property works in the internet era, and it continues our proud decades-long tradition of just kinda doing whatever we want without thinking about it too much. Nothing if not consistent.
More of this
A bit tougher, since computers are pretty alright now and everything continues to chug along. Maybe we should just quit while we're ahead. There's some real pie-in-the-sky stuff that would be nice, but it certainly won't happen within a year, and may never happen except in some horrific Algorithmic™ form designed by people that don't know anything about the problem space and only works 60% of the time but is treated as though it were bulletproof.
The giants are getting more giant. Maybe too giant? Granted, it could be much worse than Google and Amazon — it could be Apple!
Amazon has its own delivery service and brick-and-mortar stores now, as well as providing the plumbing for vast amounts of the web. They're not doing anything particularly outrageous, but they kind of loom.
Ad company Google just put ad blocking in its majority-share browser — albeit for the ambiguously-noble goal of only blocking obnoxious ads so that people will be less inclined to install a blanket ad blocker.
Twitter is kind of a nightmare but no one wants to leave. I keep trying to use Mastodon as well, but I always forget about it after a day, whoops.
Facebook sounds like a total nightmare but no one wants to leave that either, because normies don't use anything else, which is itself direly concerning.
IRC is rapidly bleeding mindshare to Slack and Discord, both of which are far better at the things IRC sadly never tried to do and absolutely terrible at the exact things IRC excels at.
The problem is the same as ever: there's no incentive to interoperate. There's no fundamental technical reason why Twitter and Tumblr and MySpace and Facebook can't intermingle their posts; they just don't, because why would they bother? It's extra work that makes it easier for people to not use your ecosystem.
I don't know what can be done about that, except that hope for a really big player to decide to play nice out of the kindness of their heart. The really big federated success stories — say, the web — mostly won out because they came along first. At this point, how does a federated social network take over? I don't know.
Social progress
I… don't really have a solid grasp on what's happening in tech socially at the moment. I've drifted a bit away from the industry part, which is where that all tends to come up. I have the vague sense that things are improving, but that might just be because the Rust community is the one I hear the most about, and it puts a lot of effort into being inclusive and welcoming.
So… more projects should be like Rust? Do whatever Rust is doing? And not so much what Linus is doing.
Open source funding
I haven't heard this brought up much lately, but it would still be nice to see. The Bay Area runs on open source and is raking in zillions of dollars on its back; pump some of that cash back into the ecosystem, somehow.
I've seen a couple open source projects on Patreon, which is fantastic, but feels like a very small solution given how much money is flowing through the commercial tech industry.
Nice. Fuck ads.
One might wonder where the money to host a website comes from, then? I don't know. Maybe we should loop this in with the above thing and find a more informal way to pay people for the stuff they make when we find it useful, without the financial and cognitive overhead of A Transaction or Giving Someone My Damn Credit Card Number. You know, something like Bitco— ah, fuck.
Year of the Linux Desktop
I don't know. What are we working on at the moment? Wayland? Do Wayland, I guess. Oh, and hi-DPI, which I hear sucks. And please fix my sound drivers so PulseAudio stops blaming them when it fucks up.
Random with care
Post Syndicated from Eevee original https://eev.ee/blog/2018/01/02/random-with-care/
Hi! Here are a few loose thoughts about picking random numbers.
A word about crypto
DON'T ROLL YOUR OWN CRYPTO
This is all aimed at frivolous pursuits like video games. Hell, even video games where money is at stake should be deferring to someone who knows way more than I do. Otherwise you might find out that your deck shuffles in your poker game are woefully inadequate and some smartass is cheating you out of millions. (If your random number generator has fewer than 226 bits of state, it can't even generate every possible shuffling of a deck of cards!)
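If you want to check that 226 figure yourself, it's just the base-2 logarithm of 52!, the number of possible deck orderings:

import math

print(math.log2(math.factorial(52)))  # ~225.58, so at least 226 bits of state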
Use the right distribution
Most languages have a random number primitive that spits out a number uniformly in the range [0, 1), and you can go pretty far with just that. But beware a few traps!
Random pitches
Say you want to pitch up a sound by a random amount, perhaps up to an octave. Your audio API probably has a way to do this that takes a pitch multiplier, where I say "probably" because that's how the only audio API I've used works.
Easy peasy. If 1 is unchanged and 2 is pitched up by an octave, then all you need is rand() + 1. Right?
No! Pitch is exponential — within the same octave, the "gap" between C and C♯ is about half as big as the gap between B and the following C. If you pick a pitch multiplier uniformly, you'll have a noticeable bias towards the higher pitches.
One octave corresponds to a doubling of pitch, so if you want to pick a random note, you want 2 ** rand().
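Something like this little helper does it, with random.random() standing in for rand() above (the octaves argument is just my own convenience knob):

import random

def random_pitch_multiplier(octaves=1.0):
    # Uniform in pitch, i.e. exponential in frequency:
    # 1.0 means unchanged, 2.0 means one octave up.
    return 2 ** (random.random() * octaves)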
Random directions
For two dimensions, you can just pick a random angle with rand() * TAU.
If you want a vector rather than an angle, or if you want a random direction in three dimensions, it's a little trickier. You might be tempted to just pick a random point where each component is rand() * 2 - 1 (ranging from −1 to 1), but that's not quite right. A direction is a point on the surface (or, equivalently, within the volume) of a sphere, and picking each component independently produces a point within the volume of a cube; the result will be a bias towards the corners of the cube, where there's much more extra volume beyond the sphere.
No? Well, just trust me. I don't know how to make a diagram for this.
Anyway, you could use the Pythagorean theorem a few times and make a huge mess of things, or it turns out there's a really easy way that even works for two or four or any number of dimensions. You pick each coordinate from a Gaussian (normal) distribution, then normalize the resulting vector. In other words, using Python's random module:
import math
import random

def random_direction():
    # One Gaussian sample per axis, then normalize to unit length.
    x = random.gauss(0, 1)
    y = random.gauss(0, 1)
    z = random.gauss(0, 1)
    r = math.sqrt(x*x + y*y + z*z)
    return x/r, y/r, z/r
Why does this work? I have no idea!
Note that it is possible to get zero (or close to it) for every component, in which case the result is nonsense. You can re-roll all the components if necessary; just check that the magnitude (or its square) is less than some epsilon, which is equivalent to throwing away a tiny sphere at the center and shouldn't affect the distribution.
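Here's a sketch of that re-roll, folded into the function above; the epsilon is an arbitrary small cutoff of my choosing.

import math
import random

def random_direction(epsilon=1e-6):
    # Keep rolling until the vector is comfortably away from the origin,
    # then normalize it to unit length.
    while True:
        x = random.gauss(0, 1)
        y = random.gauss(0, 1)
        z = random.gauss(0, 1)
        r = math.sqrt(x*x + y*y + z*z)
        if r > epsilon:
            return x/r, y/r, z/r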
Beware Gauss
Since I brought it up: the Gaussian distribution is a pretty nice one for choosing things in some range, where the middle is the common case and should appear more frequently.
That said, I never use it, because it has one annoying drawback: the Gaussian distribution has no minimum or maximum value, so you can't really scale it down to the range you want. In theory, you might get any value out of it, with no limit on scale.
In practice, it's astronomically rare to actually get such a value out. I did a hundred million trials just to see what would happen, and the largest value produced was 5.8.
But, still, I'd rather not knowingly put extremely rare corner cases in my code if I can at all avoid it. I could clamp the ends, but that would cause unnatural bunching at the endpoints. I could reroll if I got a value outside some desired range, but I prefer to avoid rerolling when I can, too; after all, it's still (astronomically) possible to have to reroll for an indefinite amount of time. (Okay, it's really not, since you'll eventually hit the period of your PRNG. Still, though.) I don't bend over backwards here — I did just say to reroll when picking a random direction, after all — but when there's a nicer alternative I'll gladly use it.
And lo, there is a nicer alternative! Enter the beta distribution. It always spits out a number in [0, 1], so you can easily swap it in for the standard normal function, but it takes two "shape" parameters α and β that alter its behavior fairly dramatically.
With α = β = 1, the beta distribution is uniform, i.e. no different from rand(). As α increases, the distribution skews towards the right, and as β increases, the distribution skews towards the left. If α = β, the whole thing is symmetric with a hump in the middle. The higher either one gets, the more extreme the hump (meaning that value is far more common than any other). With a little fiddling, you can get a number of interesting curves.
Screenshots don't really do it justice, so here's a little Wolfram widget that lets you play with α and β live:
Note that if α = 1, then 1 is a possible value; if β = 1, then 0 is a possible value. You probably want them both greater than 1, which clamps the endpoints to zero.
Also, it's possible to have either α or β or both be less than 1, but this creates very different behavior: the corresponding endpoints become poles.
Anyway, something like α = β = 3 is probably close enough to normal for most purposes but already clamped for you. And you could easily replicate something like, say, NetHack's incredibly bizarre rnz function.
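Conveniently, Python's standard library already exposes this as random.betavariate, so a sketch of the α = β = 3 suggestion is a one-liner; scaling it into whatever range you actually want is up to you.

import random

def humpy_roll(alpha=3, beta=3):
    # Always in [0, 1]: a symmetric hump when alpha == beta, skewed right
    # as alpha grows, skewed left as beta grows.
    return random.betavariate(alpha, beta)

# e.g. a multiplier that clusters around 1.0 but stays within ±10%
wobble = 0.9 + 0.2 * humpy_roll()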
Random frequency
Say you want some event to have an 80% chance to happen every second. You (who am I kidding, I) might be tempted to do something like this:
if random() < 0.8 * dt:
    do_thing()
In an ideal world, dt is always the same and is equal to 1 / f, where f is the framerate. Replace that 80% with a variable, say P, and every tic you have a P / f chance to do the… whatever it is.
Each second, f tics pass, so you'll make this check f times. The chance that any check succeeds is the inverse of the chance that every check fails, which is \(1 – \left(1 – \frac{P}{f}\right)^f\).
For P of 80% and a framerate of 60, that's a total probability of 55.3%. Wait, what?
Consider what happens if the framerate is 2. On the first tic, you roll 0.4 twice — but probabilities are combined by multiplying, and splitting work up by dt only works for additive quantities. You lose some accuracy along the way. If you're dealing with something that multiplies, you need an exponent somewhere.
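If you do want a frame-rate-independent "P per second", the exponent version looks something like this sketch (assuming dt is in seconds, and reusing the post's do_thing placeholder):

import random

def chance_this_tic(p_per_second, dt):
    # Probability of at least one success during a tic of length dt,
    # i.e. one minus the chance that the whole tic passes with none.
    return 1 - (1 - p_per_second) ** dt

def update(dt):
    if random.random() < chance_this_tic(0.8, dt):
        do_thing()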
But in this case, maybe you don't want that at all. Each separate roll you make might independently succeed, so it's possible (but very unlikely) that the event will happen 60 times within a single second! Or 200 times, if that's someone's framerate.
If you explicitly want something to have a chance to happen on a specific interval, you have to check on that interval. If you don't have a gizmo handy to run code on an interval, it's easy to do yourself with a time buffer:
timer += dt
# here, 1 is the "every 1 seconds"
while timer > 1:
    timer -= 1
    if random() < 0.8:
        do_thing()
Using while means rolls still happen even if you somehow skipped over an entire second.
(For the curious, and the nerds who already noticed: the expression \(1 – \left(1 – \frac{P}{f}\right)^f\) converges to a specific value! As the framerate increases, it becomes a better and better approximation for \(1 – e^{-P}\), which for the example above is 0.551. Hey, 60 fps is pretty accurate — it's just accurately representing something nowhere near what I wanted. Er, you wanted.)
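You can verify both numbers in a couple of lines, if you're so inclined:

import math

P, f = 0.8, 60
print(1 - (1 - P / f) ** f)   # ~0.5531, the per-second chance you actually get
print(1 - math.exp(-P))       # ~0.5507, the limit as the framerate grows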
Rolling your own
Of course, you can fuss with the classic [0, 1] uniform value however you want. If I want a bias towards zero, I'll often just square it, or multiply two of them together. If I want a bias towards one, I'll take a square root. If I want something like a Gaussian/normal distribution, but with clearly-defined endpoints, I might add together n rolls and divide by n. (The normal distribution is just what you get if you roll infinite dice and divide by infinity!)
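In code these are all one-liners; here's the whole grab bag as a sketch (the names are mine):

import random

def bias_toward_zero():
    return random.random() ** 2            # or random.random() * random.random()

def bias_toward_one():
    return random.random() ** 0.5

def clamped_bellish(n=3):
    # Average of n uniform rolls: always in [0, 1], and the bigger n gets,
    # the tighter the hump in the middle.
    return sum(random.random() for _ in range(n)) / n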
It'd be nice to be able to understand exactly what this will do to the distribution. Unfortunately, that requires some calculus, which this post is too small to contain, and which I didn't even know much about myself until I went down a deep rabbit hole while writing, and which in many cases is straight up impossible to express directly.
Here's the non-calculus bit. A source of randomness is often graphed as a PDF — a probability density function. You've almost certainly seen a bell curve graphed, and that's a PDF. They're pretty nice, since they do exactly what they look like: they show the relative chance that any given value will pop out. On a bog standard bell curve, there's a peak at zero, and of course zero is the most common result from a normal distribution.
(Okay, actually, since the results are continuous, it's vanishingly unlikely that you'll get exactly zero — but you're much more likely to get a value near zero than near any other number.)
For the uniform distribution, which is what a classic rand() gives you, the PDF is just a straight horizontal line — every result is equally likely.
If there were a calculus bit, it would go here! Instead, we can cheat. Sometimes. Mathematica knows how to work with probability distributions in the abstract, and there's a free web version you can use. For the example of squaring a uniform variable, try this out:
PDF[TransformedDistribution[u^2, u \[Distributed] UniformDistribution[{0, 1}]], u]
(The \[Distributed] is a funny tilde that doesn't exist in Unicode, but which Mathematica uses as a first-class operator. Also, press Shift + Enter to evaluate the line.)
This will tell you that the distribution is… \(\frac{1}{2\sqrt{u}}\). Weird! You can plot it:
Plot[%, {u, 0, 1}]
(The % refers to the result of the last thing you did, so if you want to try several of these, you can just do Plot[PDF[…], u] directly.)
The resulting graph shows that numbers around zero are, in fact, vastly — infinitely — more likely than anything else.
What about multiplying two together? I can't figure out how to get Mathematica to understand this, but a great amount of digging revealed that the answer is -ln x, and from there you can plot them both on Wolfram Alpha. They're similar, though squaring has a much better chance of giving you high numbers than multiplying two separate rolls — which makes some sense, since if either of two rolls is a low number, the product will be even lower.
What if you know the graph you want, and you want to figure out how to play with a uniform roll to get it? Good news! That's a whole thing called inverse transform sampling. All you have to do is take an integral. Good luck!
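As a tiny worked example of inverse transform sampling: suppose the PDF you want is the rising straight line \(v(x) = 2x\) on [0, 1]. Integrating gives the CDF \(x^2\); inverting that gives a square root; feeding a uniform roll through that inverse is the whole trick.

import random

def roll_rising_line():
    # Target PDF v(x) = 2x on [0, 1]:
    #   the CDF is x**2, whose inverse is sqrt(u).
    return random.random() ** 0.5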
This is all extremely ridiculous. New tactic: Just Simulate The Damn Thing. You already have the code; run it a million times, make a histogram, and tada, there's your PDF. That's one of the great things about computers! Brute-force numerical answers are easy to come by, so there's no excuse for producing something like rnz. (Though, be sure your histogram has sufficiently narrow buckets — I tried plotting one for rnz once and the weird stuff on the left side didn't show up at all!)
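For the squared-uniform example, Just Simulating The Damn Thing really is only a few lines; a rough sketch with a crude text histogram:

import random
from collections import Counter

TRIALS = 100000
BUCKETS = 20
histogram = Counter()
for _ in range(TRIALS):
    histogram[int(random.random() ** 2 * BUCKETS)] += 1

for bucket in range(BUCKETS):
    label = bucket / BUCKETS
    print(f"{label:.2f} | {'#' * (histogram[bucket] * 100 // TRIALS)}")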
By the way, I learned something from futzing with Mathematica here! Taking the square root (to bias towards 1) gives a PDF that's a straight diagonal line, nothing like the hyperbola you get from squaring (to bias towards 0). How do you get a straight line the other way? Surprise: \(1 – \sqrt{1 – u}\).
Okay, okay, here's the actual math
I don't claim to have a very firm grasp on this, but I had a hell of a time finding it written out clearly, so I might as well write it down as best I can. This was a great excuse to finally set up MathJax, too.
Say \(u(x)\) is the PDF of the original distribution and \(u\) is a representative number you plucked from that distribution. For the uniform distribution, \(u(x) = 1\). Or, more accurately,
$$u(x) = \begin{cases}
1 & \text{ if } 0 \le x \lt 1 \\
0 & \text{ otherwise }
\end{cases}$$
Remember that \(x\) here is a possible outcome you want to know about, and the PDF tells you the relative probability that a roll will be near it. This PDF spits out 1 for every \(x\), meaning every number between 0 and 1 is equally likely to appear.
We want to do something to that PDF, which creates a new distribution, whose PDF we want to know. I'll use my original example of \(f(u) = u^2\), which creates a new PDF \(v(x)\).
The trick is that we need to work in terms of the cumulative distribution function for \(u\). Where the PDF gives the relative chance that a roll will be ("near") a specific value, the CDF gives the relative chance that a roll will be less than a specific value.
The conventions for this seem to be a bit fuzzy, and nobody bothers to explain which ones they're using, which makes this all the more confusing to read about… but let's write the CDF with a capital letter, so we have \(U(x)\). In this case, \(U(x) = x\), a straight 45° line (at least between 0 and 1). With the definition I gave, this should make sense. At some arbitrary point like 0.4, the value of the PDF is 1 (0.4 is just as likely as anything else), and the value of the CDF is 0.4 (you have a 40% chance of getting a number from 0 to 0.4).
Calculus ahoy: the PDF is the derivative of the CDF, which means it measures the slope of the CDF at any point. For \(U(x) = x\), the slope is always 1, and indeed \(u(x) = 1\). See, calculus is easy.
Okay, so, now we're getting somewhere. What we want is the CDF of our new distribution, \(V(x)\). The CDF is defined as the probability that a roll \(v\) will be less than \(x\), so we can literally write:
$$V(x) = P(v \le x)$$
(This is why we have to work with CDFs, rather than PDFs — a PDF gives the chance that a roll will be "nearby," whatever that means. A CDF is much more concrete.)
What is \(v\), exactly? We defined it ourselves; it's the do something applied to a roll from the original distribution, or \(f(u)\).
$$V(x) = P\!\left(f(u) \le x\right)$$
Now the first tricky part: we have to solve that inequality for \(u\), which means applying our do something backwards to \(x\).
$$V(x) = P\!\left(u \le f^{-1}(x)\right)$$
Almost there! We now have a probability that \(u\) is less than some value, and that's the definition of a CDF!
$$V(x) = U\!\left(f^{-1}(x)\right)$$
Hooray! Now to turn these CDFs back into PDFs, all we need to do is differentiate both sides and use the chain rule. If you never took calculus, don't worry too much about what that means!
$$v(x) = u\!\left(f^{-1}(x)\right)\left|\frac{d}{dx}f^{-1}(x)\right|$$
Wait! Where did that absolute value come from? It takes care of whether \(f(x)\) increases or decreases. It's the least interesting part here by far, so, whatever.
There's one more magical part here when using the uniform distribution — \(u(\dots)\) is always equal to 1, so that entire term disappears! (Note that this only works for a uniform distribution with a width of 1; PDFs are scaled so the entire area under them sums to 1, so if you had a rand() that could spit out a number between 0 and 2, the PDF would be \(u(x) = \frac{1}{2}\).)
$$v(x) = \left|\frac{d}{dx}f^{-1}(x)\right|$$
So for the specific case of modifying the output of rand(), all we have to do is invert, then differentiate. The inverse of \(f(u) = u^2\) is \(f^{-1}(x) = \sqrt{x}\) (no need for a ± since we're only dealing with positive numbers), and differentiating that gives \(v(x) = \frac{1}{2\sqrt{x}}\). Done! This is also why square root comes out nicer; inverting it gives \(x^2\), and differentiating that gives \(2x\), a straight line.
Incidentally, that method for turning a uniform distribution into any distribution — inverse transform sampling — is pretty much the same thing in reverse: integrate, then invert. For example, when I saw that taking the square root gave \(v(x) = 2x\), I naturally wondered how to get a straight line going the other way, \(v(x) = 2 – 2x\). Integrating that gives \(2x – x^2\), and then you can use the quadratic formula (or just ask Wolfram Alpha) to solve \(2x – x^2 = u\) for \(x\) and get \(f(u) = 1 – \sqrt{1 – u}\).
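If you'd rather make a computer do the integrate-and-invert busywork, SymPy (a separate install, and not something the Mathematica examples above depend on) can check these; a sketch for the square-root case:

import sympy

x, u = sympy.symbols('x u', positive=True)

# f(u) = sqrt(u): solve sqrt(u) = x for u to get the inverse...
f_inverse = sympy.solve(sympy.Eq(sympy.sqrt(u), x), u)[0]   # x**2
# ...then differentiate the inverse to get the new PDF.
print(sympy.diff(f_inverse, x))   # 2*x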
Multiplying two rolls is a bit more complicated; you have to write out the CDF as an integral and you end up doing a double integral and wow it's a mess. The only thing I've retained is that you do a division somewhere, which then gets integrated, and that's why it ends up as \(-\ln x\).
And that's quite enough of that! (Okay but having math in my blog is pretty cool and I will definitely be doing more of this, sorry, not sorry.)
Random vs varied
Sometimes, random isn't actually what you want. We tend to use the word "random" casually to mean something more like chaotic, i.e., with no discernible pattern. But that's not really what random means — genuinely random output produces incidental patterns all the time, and humans are remarkably good at spotting them. Consider that when you roll two dice, they'll come up either the same or only one apart almost half the time. Coincidence? Well, yes.
If you ask for randomness, you're saying that any outcome — or series of outcomes — is acceptable, including five heads in a row or five tails in a row. Most of the time, that's fine. Some of the time, it's less fine, and what you really want is variety. Here are a couple examples and some fairly easy workarounds.
NPC quips
The nature of games is such that NPCs will eventually run out of things to say, at which point further conversation will give the player a short brush-off quip — a slight nod from the designer to the player that, hey, you hit the end of the script.
Some NPCs have multiple possible quips and will give one at random. The trouble with this is that it's very possible for an NPC to repeat the same quip several times in a row before abruptly switching to another one. With only a few options to choose from, getting the same option twice or thrice (especially across an entire game, which may have numerous NPCs) isn't all that unlikely. The notion of an NPC quip isn't very realistic to start with, but having someone repeat themselves and then abruptly switch to something else is especially jarring.
The easy fix is to show the quips in order! Paradoxically, this is more consistently varied than choosing at random — the original "order" is likely to be meaningless anyway, and it already has the property that the same quip can never appear twice in a row.
If you like, you can shuffle the list of quips every time you reach the end, but take care here — it's possible that the last quip in the old order will be the same as the first quip in the new order, so you may still get a repeat. (Of course, you can just check for this case and swap the first quip somewhere else if it bothers you.)
That last behavior is, in fact, the canonical way that Tetris chooses pieces — the game simply shuffles a list of all 7 pieces, gives those to you in shuffled order, then shuffles them again to make a new list once it's exhausted. There's no avoidance of duplicates, though, so you can still get two S blocks in a row, or even two S and two Z all clumped together, but no more than that. Some Tetris variants take other approaches, such as actively avoiding repeats even several pieces apart or deliberately giving you the worst piece possible.
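The shuffle-bag pattern is only a few lines; here's a sketch as a Python generator, including the optional guard against a repeat across the reshuffle boundary.

import random

def shuffle_bag(items, avoid_boundary_repeat=True):
    # Yield every item once in shuffled order, reshuffle, repeat forever.
    last = None
    while True:
        pool = list(items)
        random.shuffle(pool)
        if avoid_boundary_repeat and len(pool) > 1 and pool[0] == last:
            # The new order would start with the item we just gave out;
            # swap it somewhere else in the list.
            i = random.randrange(1, len(pool))
            pool[0], pool[i] = pool[i], pool[0]
        for item in pool:
            last = item
            yield item

quips = shuffle_bag(["Hello.", "Nice weather.", "I'm busy."])
print(next(quips), next(quips), next(quips))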
Random drops
Random drops are often implemented as a flat chance each time. Maybe enemies have a 5% chance to drop health when they die. Statistically speaking, over the long term, a player will see health drops for about 5% of enemy kills.
Over the short term, they may be desperate for health and not survive to see the long term. So you may want to put a thumb on the scale sometimes. Games in the Metroid series, for example, have a somewhat infamous bias towards whatever kind of drop they think you need — health if your health is low, missiles if your missiles are low.
I can't give you an exact approach to use, since it depends on the game and the feeling you're going for and the variables at your disposal. In extreme cases, you might want to guarantee a health drop from a tough enemy when the player is critically low on health. (Or if you're feeling particularly evil, you could go the other way and deny the player health when they most need it…)
The problem becomes a little different, and worse, when the event that triggers the drop is relatively rare. The pathological case here would be something like a raid boss in World of Warcraft, which requires hours of effort from a coordinated group of people to defeat, and which has some tiny chance of dropping a good item that will go to only one of those people. This is why I stopped playing World of Warcraft at 60.
Dialing it back a little bit gives us Enter the Gungeon, a roguelike where each room is a set of encounters and each floor only has a dozen or so rooms. Initially, you have a 1% chance of getting a reward after completing a room — but every time you complete a room and don't get a reward, the chance increases by 9%, up to a cap of 80%. Once you get a reward, the chance resets to 1%.
The natural question is: how frequently, exactly, can a player expect to get a reward? We could do math, or we could Just Simulate The Damn Thing.
import random
from collections import Counter

histogram = Counter()

TRIALS = 1000000
chance = 1
rooms_cleared = 0
rewards_found = 0
while rewards_found < TRIALS:
    rooms_cleared += 1
    if random.random() * 100 < chance:
        # Reward!  Record how long this one took, then reset.
        rewards_found += 1
        histogram[rooms_cleared] += 1
        rooms_cleared = 0
        chance = 1
    else:
        chance = min(80, chance + 9)

for gaps, count in sorted(histogram.items()):
    print(f"{gaps:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))
1 | 0.98%
2 | 9.91% #########
3 | 17.00% ################
4 | 20.23% ####################
5 | 19.21% ###################
6 | 15.05% ###############
8 | 5.07% #####
9 | 2.09% ##
10 | 0.63%
We've got kind of a hilly distribution, skewed to the left, which is up in this histogram. Most of the time, a player should see a reward every three to six rooms, which is maybe twice per floor. It's vanishingly unlikely to go through a dozen rooms without ever seeing a reward, so a player should see at least one per floor.
Of course, this simulated a single continuous playthrough; when starting the game from scratch, your chance at a reward always starts fresh at 1%, the worst it can be. If you want to know about how many rewards a player will get on the first floor, hey, Just Simulate The Damn Thing.
1 | 13.01% #############
2 | 56.28% ########################################################
3 | 27.49% ###########################
4 | 3.10% ###
Cool. Though, that's assuming exactly 12 rooms; it might be worth changing that to pick at random in a way that matches the level generator.
(Enter the Gungeon does some other things to skew probability, which is very nice in a roguelike where blind luck can make or break you. For example, if you kill a boss without having gotten a new gun anywhere else on the floor, the boss is guaranteed to drop a gun.)
Critical hits
I suppose this is the same problem as random drops, but backwards.
Say you have a battle sim where every attack has a 6% chance to land a devastating critical hit. Presumably the same rules apply to both the player and the AI opponents.
Consider, then, that the AI opponents have exactly the same 6% chance to ruin the player's day. Consider also that this gives them about a 0.4% chance (6% of 6%) to critical hit twice in a row. 0.4% doesn't sound like much, but across an entire playthrough, it's not unlikely that a player might see it happen and find it incredibly annoying.
Perhaps it would be worthwhile to explicitly forbid AI opponents from getting consecutive critical hits.
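A sketch of one way to do that, reusing the 6% figure; the Combatant class is just scaffolding for the example.

import random

CRIT_CHANCE = 0.06

class Combatant:
    def __init__(self, is_player):
        self.is_player = is_player
        self.last_attack_was_crit = False

def roll_crit(attacker):
    # Everyone gets the same 6% roll, but AI opponents are never allowed
    # two critical hits in a row.
    crit = random.random() < CRIT_CHANCE
    if crit and not attacker.is_player and attacker.last_attack_was_crit:
        crit = False
    attacker.last_attack_was_crit = crit
    return crit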
An emerging theme here has been to Just Simulate The Damn Thing. So consider Just Simulating The Damn Thing. Even a simple change to a random value can do surprising things to the resulting distribution, so unless you feel like differentiating the inverse function of your code, maybe test out any non-trivial behavior and make sure it's what you wanted. Probability is hard to reason about.
Rosie the Countdown champion
2017-12-18 Alex Bate
Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rosie-the-countdown-champion/
Beating the contestants at Countdown: is it cheating if you happen to know every word in the English dictionary?
Rosie plays Countdown
Allow your robots to join in the fun this Christmas with a round of Channel 4's Countdown. https://www.rosietheredrobot.com/2017/12/tea-minus-30.html
Rosie the Red Robot
First, a little bit of backstory. Challenged by his eldest daughter to build a robot, technology-loving Alan got to work building Rosie.
I became (unusually) determined. I wanted to show her what can be done… and the how can be learnt later. After all, there is nothing more exciting and encouraging than seeing technology come alive. Move. Groove. Quite literally.
Originally, Rosie had a Raspberry Pi 3 brain controlling ultrasonic sensors and motors via Python. From there, she has evolved into something much grander, and Alan has documented her upgrades on the Rosie the Red Robot blog. Using GPS trackers and a Raspberry Pi camera module, she became Rosie Patrol, a rolling, walking, interactive bot; then, with further upgrades, the Tea Minus 30 project came to be. Which brings us back to Countdown.
T(ea) minus 30
In case it hasn't been a big part of your life up until now, Countdown is one of the longest-running television shows in history, and occupies a special place in British culture. Contestants take turns to fill a board with nine randomly selected vowels and consonants, before battling the Countdown clock to find the longest word they can in the space of 30 seconds.
The Countdown Clock
I've had quite a few requests to show just the Countdown clock for use in school activities/own games etc., so here it is! Enjoy! It's a brand new version too, using the 2010 Office package.
There's a numbers round involving arithmetic, too – but for now, we're going to focus on letters and words, because that's where Rosie's skills shine.
Using an online resource, Alan created a dataset of the ten thousand most common English words.
Many words, listed in order of common-ness. Alan wrote a Python script to order them alphabetically and by length
Next, Alan wrote a Python script to select nine letters at random, then search the word list to find all the words that could be spelled using only these letters. He used the randint function to select letters from a pre-loaded alphabet, and introduced a requirement to include at least two vowels among the nine letters.
Words that match the available letters are displayed on the screen.
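I haven't seen Alan's actual script, but the letter-picking and word-matching steps might look something like this sketch — I'm using random.choice rather than randint for brevity, and the two-vowel rule is the one described above.

import random

VOWELS = "AEIOU"
CONSONANTS = "BCDFGHJKLMNPQRSTVWXYZ"

def pick_letters():
    # Draw nine random letters, re-drawing until at least two are vowels.
    while True:
        letters = [random.choice(VOWELS + CONSONANTS) for _ in range(9)]
        if sum(letter in VOWELS for letter in letters) >= 2:
            return letters

def spellable_words(letters, word_list):
    # A word counts if every letter it needs is still available in the rack.
    found = []
    for word in word_list:
        rack = list(letters)
        usable = True
        for ch in word.upper():
            if ch in rack:
                rack.remove(ch)
            else:
                usable = False
                break
        if usable:
            found.append(word)
    return found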
With the basic game-play working, it was time to bring the project to life. For this, Alan used Rosie's camera module, along with optical character recognition (OCR) and text-to-speech capabilities.
Alan writes, "Here's a very amateurish drawing to brainstorm our idea. Let's call it a design as it makes it sound like we know what we're doing."
Alan's script has Rosie take a photo of the TV screen during the Countdown letters round, then perform OCR using the Google Cloud Vision API to detect the nine letters contestants have to work with. Next, Rosie runs Alan's code to check the letters against the ten-thousand-word dataset, converts text to speech with Python gTTS, and finally speaks her highest-scoring word via omxplayer.
You can follow the adventures of Rosie the Red Robot on her blog, or follow her on Twitter. And if you'd like to build your own Rosie, Alan has provided code and tutorials for his projects too. Thanks, Alan!
The post Rosie the Countdown champion appeared first on Raspberry Pi.
Our brand-new Christmas resources
2017-11-29 Laura Sach
Post Syndicated from Laura Sach original https://www.raspberrypi.org/blog/christmas-resources-2017/
It's never too early for Christmas-themed resources — especially when you want to make the most of them in your school, Code Club or CoderDojo! So here's the ever-wonderful Laura Sach with an introduction of our newest festive projects.
In the immortal words of Noddy Holder: "it's Christmaaaaaaasssss!" Well, maybe it isn't quite Christmas yet, but since the shops have been playing Mariah Carey on a loop since the last pumpkin lantern hit the bargain bin, you're hopefully well prepared.
To get you in the mood with some festive fun, we've put together a selection of seasonal free resources for you. Each project has a difficulty level in line with our Digital Making Curriculum, so you can check which might suit you best. Why not try them out at your local Raspberry Jam, CoderDojo, or Code Club, at school, or even on a cold day at home with a big mug of hot chocolate?
Jazzy jumpers
Jazzy jumpers (Creator level): as a child in the eighties, you'd always get an embarrassing and probably badly sized jazzy jumper at Christmas from some distant relative. Thank goodness the trend has gone hipster and dreadful jumpers are now cool!
This resource shows you how to build a memory game in Scratch where you must remember the colour and picture of a jazzy jumper before recreating it. How many jumpers can you successfully recall in a row?
Sense HAT advent calendar
Sense HAT advent calendar (Builder level): put the lovely lights on your Sense HAT to festive use by creating an advent calendar you can open day by day. However, there's strictly no cheating with this calendar — we teach you how to use Python to detect the current date and prevent would-be premature peekers!
Press the Enter key to open today's door:
(Note: no chocolate will be dispensed from your Raspberry Pi. Sorry about that.)
Code a carol
Code a carol (Developer level): Have you ever noticed how much repetition there is in carols and other songs? This resource teaches you how to break down the Twelve days of Christmas tune into its component parts and code it up in Sonic Pi the lazy way: get the computer to do all the repetition for you!
No musical knowledge required — just follow our lead, and you'll have yourself a rocking doorbell tune in no time!
Naughty and nice
Naughty and nice (Maker level): Have you been naughty or nice? Find out by using sentiment analysis on your tweets to see what sort of things you've been talking about throughout the year. For added fun, why not use your program on the Twitter account of your sibling/spouse/arch nemesis and report their level of naughtiness to Santa with an @ mention?
raspberry_pi is 65.5 percent NICE, with an accuracy of 0.9046692607003891
It's Christmaaaaaasssss
With the festive season just around the corner, it's time to get started on your Christmas projects! Whether you're planning to run your Christmas lights via a phone app, install a home assistant inside an Elf on a Shelf, or work through our Christmas resources, we would like to see what you make. So do share your festive builds with us on social media, or by posting links in the comments.
The post Our brand-new Christmas resources appeared first on Raspberry Pi.
Fraud Detection in Pokémon Go
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/fraud_detection.html
I play Pokémon Go. (There, I've admitted it.) One of the interesting aspects of the game I've been watching is how the game's publisher, Niantic, deals with cheaters.
There are three basic types of cheating in Pokémon Go. The first is botting, where a computer plays the game instead of a person. The second is spoofing, which is faking GPS to convince the game that you're somewhere you're not. These two cheats are often used together — and you see the results in the many high-level accounts for sale on the Internet. The third type of cheating is the use of third-party apps like trackers to get extra information about the game.
None of this would matter if everyone played independently. The only reason any player cares about whether other players are cheating is that there is a group aspect of the game: gym battling. Everyone's enjoyment of that part of the game is affected by cheaters who can pretend to be where they're not, especially if they have lots of powerful Pokémon that they collected effortlessly.
Niantic has been trying to deal with this problem since the game debuted, mostly by banning accounts when it detects cheating. Its initial strategy was basic — algorithmically detecting impossibly fast travel between physical locations or super-human amounts of playing, and then banning those accounts — with limited success. The limiting factor in all of this is false positives. While Niantic wants to stop cheating, it doesn't want to block or limit any legitimate players. This makes it a very difficult problem, and contributes to the balance in the attacker/defender arms race.
Recently, Niantic implemented two new anti-cheating measures. The first is machine learning to detect cheaters. About this, we know little. The second is to limit the functionality of cheating accounts rather than ban them outright, making it harder for cheaters to know when they've been discovered.
"This is may very well be the beginning of Niantic's machine learning approach to active bot countering," user Dronpes writes on The Silph Road subreddit. "If the parameters for a shadowban are constantly adjusted server-side, as they can now easily be, then Niantic's machine learning engineers can train their detection (classification) algorithms in ever-improving, ever more aggressive ways, and botters will constantly be forced to re-evaluate what factors may be triggering the detection."
One of the expected future features in the game is trading. Creating a market for rare or powerful Pokémon would add a huge additional financial incentive to cheat. Unless Niantic can effectively prevent botting and spoofing, it's unlikely to implement that feature.
Cheating detection in virtual reality games is going to be a constant problem as these games become more popular, especially if there are ways to monetize the results of cheating. This means that cheater detection will continue to be a critical component of these games' success. Anything Niantic learns in Pokémon Go will be useful in whatever games come next.
Mystic, level 39 — if you must know.
And, yes, I know the game works by tracking your location. I'm all right with that. As I repeatedly say, Internet privacy is all about trade-offs.
Boston Red Sox Caught Using Technology to Steal Signs
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/boston_red_sox_.html
The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.
Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.
In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents' signs, including the catcher's signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.
But such information has to be rushed to the dugout on foot so it can be relayed to players on the field — a runner on second, the batter at the plate — while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.
This is ridiculous. The rules about what sorts of sign stealing are allowed and what sorts are not are arbitrary and unenforceable. My guess is that the only reason there aren't more complaints is because everyone does it.
The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.
Boston's mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.
Insider Attack on Lottery Software
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/insider_attack_.html
Eddie Tipton, a programmer for the Multi-State Lottery Association, secretly installed software that allowed him to predict jackpots.
What's surprising to me is how many lotteries don't use real random number generators. What happened to picking golf balls out of wind-blown steel cages on television?
Datamining Pokémon
Post Syndicated from Eevee original https://eev.ee/blog/2017/08/02/datamining-pokemon/
A kind anonymous patron offers this prompt, which I totally fucked up getting done in July:
Something to do with programming languages? Alternatively, interesting game mechanics!
It's been a while since I've written a thing about programming languages, eh? But I feel like I've run low on interesting things to say about them. And I just did that level design article, which already touched on some interesting game mechanics… oh dear.
Okay, how about this. It's something I've been neck-deep in for quite some time, and most of the knowledge is squirrelled away in obscure wikis and ancient forum threads: getting data out of Pokémon games. I think that preserves the spirit of your two options, since it's sort of nestled in a dark corner between how programming languages work and how game mechanics are implemented.
A few disclaimers
In the grand scheme of things, I don't know all that much about this. I know more than people who've never looked into it at all, which I suppose is most people — but there are also people who basically do this stuff full-time, and that experience is crucial since so much of this work comes down to noticing patterns. While it sure helped to have a technical background, I wouldn't have gotten anywhere at all if I weren't acquainted with a few people who actually know what they're doing. Most of what I've done is take their work and run with it.
Also, I am not a lawyer and cannot comment on any legal questions here. Is it okay to download ROMs of games you own? Is it okay to dump ROMs yourself if you have the hardware? Does this count as reverse engineering, and do the DMCA protections apply? I have no idea. But that said, it's not exactly hard to find ROM hacking communities, and there's no way Nintendo isn't aware of them (or of the fact that every single Pokémon fansite gets their info from ROMs), so I suspect Nintendo simply doesn't care unless something risks going mainstream — and thus putting a tangible dent in the market for their own franchise.
Still, I don't want to direct an angry legal laser at anyone, so I'm going to be a bit selective about what resources I link to and what I merely allude to the existence of.
This is, necessarily, a pretty technical topic. It starts out in binary data and spirals down into microscopic details that even most programmers don't need to care about. Sometimes people approach me to ask how they can help with this work, and all I can do is imagine the entire contents of this post and shrug helplessly.
Still, as usual, I'll do my best to make this accessible without also making it a 500-page introduction to all of computing. Here is some helpful background stuff that would be clumsy to cram into the rest of the post.
Computers deal in bytes. Pop culture likes to depict computers as working in binary (individual bits or "binary digits"), which is technically true down on the level of the circuitry, but virtually none of the actual logic in a computer cares about individual bits. In fact, computers can't access individual bits directly; they can only fetch bytes, then extract bits from those bytes as a separate step.
A byte is made of eight bits, which gives it 2⁸ or 256 possible values. It's helpful to see what a byte is, but writing them in decimal is a bit clumsy, and writing them in binary is impossible to read. A clever compromise is to write them in hexadecimal, base sixteen, where the digits run from 0 to 9 and then A to F. Because sixteen is 2⁴, one hex digit is exactly four binary digits, and so a byte can conveniently be written as exactly two hex digits.
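If you want to poke at this from Python, hopping between bytes, hex, and plain numbers is built in; a quick sketch:

data = b"The quick brown fox"

print(data[0])                  # 84 -- a single byte is just a number
print(hex(data[0]))             # '0x54'
print(data.hex())               # '54686520717569636b2062726f776e20666f78'
print(bytes.fromhex("54686520").decode("ascii"))   # 'The '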
(A running theme across all of this is that many of the choices are arbitrary; in different times or places, other choices may have been made. Most likely, other choices were made, and they're still in use somewhere. Virtually the only reliable constant is that any computer you will ever encounter will have bytes made out of eight bits. But even that wasn't always the case.)
The Unix program xxd will print out bytes in a somewhat readable way. Here's its output for a short chunk of English text.
00000000: 5468 6520 7175 6963 6b20 6272 6f77 6e20 The quick brown
00000010: 666f 7820 6a75 6d70 7320 6f76 6572 2074 fox jumps over t
00000020: 6865 206c 617a 7920 646f 6727 7320 6261 he lazy dog's ba
00000030: 636b 2e0a ck..
Each line shows sixteen bytes. The left column shows the position (or "offset") in the data, in hex. (It starts at zero, because programmers like to start counting at zero; it makes various things easier.) The middle column shows the bytes themselves, written as two hex digits each, with just enough space that you can tell where the boundaries between bytes are. The right column shows the bytes interpreted as ASCII text, with anything that isn't a printable character replaced by a period.
(ASCII is a character encoding, a way to represent text as bytes — which are only numbers — by listing a set of characters in some order and then assigning numbers to them. Text crops up in a lot of formats, and this makes it easy to spot at a glance. Alas, ASCII is only one of many schemes, and it only really works for English text, but it's the most common character encoding by far and has some overlap with the runners-up as well.)
Since everything is made out of bytes, there are an awful lot of schemes for how to express various kinds of information as bytes. As a result, a byte is meaningless on its own; it only has meaning when something else interprets it. It might be a plain number ranging from 0 to 255; it might be a plain number ranging from −128 to 127; it might be part of a bigger number that spans multiple bytes; it might be several small numbers crammed into one byte; it might be part of a color value; it might be a letter.
A meaningful arrangement for a whole sequence of bytes is loosely referred to as a format. If it's intended for an entire file, it's a file format. A file containing only bytes that are intended as text is called a plain text file (or format); this is in contrast to a binary file, which is basically anything else.
Some file formats are very common and well-understood, like PNG or MP3. Some are very common but were invented behind closed doors, like Photoshop's PSD, so they've had to be reverse engineered for other software to be able to read and write them. And a great many file formats are obscure and ad hoc, invented only for use by one piece of software. Programmers invent file formats all the time.
Reverse engineering a format is largely a matter of identifying common patterns and finding data that's expected to be present somewhere. Of course, in cases like Photoshop's PSD, the most productive approach is to make small changes to a file in Photoshop and then see what changed in the resulting PSD. That's not always an option — say, if you're working with a game for a handheld that won't let you easily run modified games.
Okay, hopefully that's enough of that and you can pick up the rest along the way!
Before Diamond and Pearl, all of veekun's data was just copied from other sources. Like, when I was in high school, I would spend lunch in the computer lab meticulously copy/pasting the Gold and Silver Pokédex text from another website into mine. Hey, I started the thing when I was 12.
But then… something happened. I can't remember what it was, which makes this a much less compelling story. I assume veekun got popular enough that a couple other Pokénerds found out about it and started hanging around. Then when Diamond and Pearl came out, they started digging into the games, and I thought that was super interesting, so I did it too.
This is what led veekun into being much more about ripped data, though its track record has been… bumpy.
The Nintendo DS header and filesystem
Everything in a computer is, on some level, a sequence of bytes. Game consoles and handhelds, being computers, also deal in bytes. A game cartridge is just a custom disk, and a ROM is a file containing all the bytes on that disk. (It's a specific case of a disk image, like an ISO is for CDs and DVDs. You can take a disk image of a hard drive or a floppy disk or anything else, too; they're all just bytes.)
But what are those bytes? That's the fundamental and pervasive question. In the case of a Nintendo DS cartridge, the first thing I learned was that they're arranged in a filesystem. Most disks have a filesystem — it's like a table of contents for the disk, explaining how the one single block of bytes is divided into named files.
That is fantastically useful, and I didn't even have to figure out how it works, because other people already had. Let's have a look at it, because seeing binary formats is the best way to get an idea of how they might be designed. Here's the beginning of the English version of Pokémon Diamond.
00000000: 504f 4b45 4d4f 4e20 4400 0000 4144 4145 POKEMON D...ADAE
00000010: 3031 0000 0900 0000 0000 0000 0000 0500 01..............
00000020: 0040 0000 0008 0002 0000 0002 2477 1000 .@..........$w..
00000030: 00d0 3000 0000 3802 0000 3802 1c93 0200 ..0...8...8.....
00000040: 0064 3300 7f15 0000 007a 3300 200b 0000 .d3......z3. ...
00000050: 00b8 1000 e00a 0000 0000 0000 0000 0000 ................
00000060: 5766 4100 f808 1808 0086 3300 3159 7e0d WfA.......3.1Y~.
00000070: 740a 0002 5801 3802 0000 0000 0000 0000 t...X.8.........
00000080: c05e a503 0040 0000 684b 0000 0000 0000 .^...@..hK......
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
000000b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
000000c0: 24ff ae51 699a a221 3d84 820a 84e4 09ad $..Qi..!=.......
000000d0: 1124 8b98 c081 7f21 a352 be19 9309 ce20 .$.....!.R.....
000000e0: 1046 4a4a f827 31ec 58c7 e833 82e3 cebf .FJJ.'1.X..3....
000000f0: 85f4 df94 ce4b 09c1 9456 8ac0 1372 a7fc .....K...V...r..
How do we make sense of this? Let us consult the little tool I started writing for this, porigon-z. It's abandoned and unfinished and not terribly well-written; I would just link to the documentation I consulted when writing this, but it's conspicuously 404ing now, so this'll have to do. I described the format using an old version of the Construct binary format parsing library, and it looks like this:
nds_image_struct = Struct('nds_image',
    String('title', 12),
    String('id', 4),
    ULInt16('publisher_code'),
    ULInt8('unit_code'),
    ULInt8('device_code'),
    ULInt8('card_size'),
    String('card_info', 10),
    ULInt8('flags'),
    # ... more fields follow, including the file table offsets decoded below.
A String is text of a fixed length, either truncated or padded with NULs (character zero) to fit. The clumsy ULInt16 means an Unsigned, Little-endian, 16-bit (two byte) integer.
(What does little-endian mean? I'm glad you asked! When a number spans multiple bytes, there's a choice to be made: what order do those bytes go in? The way we write numbers is big-endian, where the biggest part appears first; but most computers are little-endian, putting the smallest part first. That means a number like 0x1234 is actually stored in two bytes as 34 12.)
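Python can demonstrate the difference directly, if you like:

raw = bytes.fromhex("3412")

print(hex(int.from_bytes(raw, "little")))   # 0x1234 -- smallest byte first
print(hex(int.from_bytes(raw, "big")))      # 0x3412 -- the way we'd write it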
Alas, this is a terrible example, since most of this is goofy internal stuff we don't actually care about. The interesting bit is the "file table". A little ways down my description of the format is this block of ULInt32s, which start at position 0x40 in the file.
file_table_offset 00 64 33 00 = 0x00336400
file_table_length 7f 15 00 00 = 0x0000157f (5503)
fat_offset 00 7a 33 00 = 0x00337a00
fat_length 20 0b 00 00 = 0x00000b20 (2848)
Excellent. Now we know that if we start at 0x00336400 and read 5503 bytes, we'll have the entire filename table.
003363f0: ffff ffff ffff ffff ffff ffff ffff ffff ................
00336400: 2802 0000 5700 4500 cd02 0000 5700 00f0 (...W.E.....W...
00336410: f502 0000 5700 01f0 fd02 0000 5700 02f0 ....W.......W...
00336420: 0b03 0000 5800 01f0 3203 0000 5a00 01f0 ....X...2...Z...
00336430: 3e03 0000 5a00 05f0 7b03 0000 5d00 00f0 >...Z...{...]...
00336440: cf03 0000 6300 00f0 f403 0000 6300 08f0 ....c.......c...
00336450: 0b04 0000 6500 08f0 5804 0000 6a00 08f0 ....e...X...j...
00336610: 0f15 0000 5d01 41f0 4315 0000 6101 41f0 ....].A.C...a.A.
00336620: 6a15 0000 6301 41f0 8b61 7070 6c69 6361 j...c.A..applica
00336630: 7469 6f6e 01f0 8361 7263 07f0 8662 6174 tion...arc...bat
00336640: 746c 6508 f087 636f 6e74 6573 740d f084 tle...contest...
00336650: 6461 7461 10f0 8464 656d 6f13 f083 6477 data...demo...dw
00336660: 631d f089 6669 656c 6464 6174 611e f087 c...fielddata...
00336670: 6772 6170 6869 632c f088 6974 656d 746f graphic,..itemto
I included one previous line for context; starting right after a whole bunch of ffs or 00s is a pretty good sign, since those are likely to be junk used to fill space. So we're probably in the right place, or at least a right place. Also we're definitely in the right place since I already know porigon-z works, but, you know.
The beginning part of this is a bunch of numbers that start out relatively low and gradually get bigger. That's a pretty good indication of an offset table — a list of "where this thing starts" and "how long it is", just like the offset/length pairs that pointed us here in the first place. The only difference here is that we have a whole bunch of them. And porigon-z confirms that this is a list of:
ULInt32('offset'),
ULInt16('top_file_id'),
ULInt16('parent_directory_id'),
My code does a bit more than this, but I don't want this post to be about the intricacies of an old version of Construct. The short version is that each entry is eight bytes long and corresponds to a directory; this list actually describes the directory tree. Decoding the first few produces:
offset 00000228, top file id 0057, parent id 0045
offset 000002cd, top file id 0057, parent id f000
offset 000002f5, top file id 0057, parent id f001
offset 000002fd, top file id 0057, parent id f002
Again, we encounter some mild weirdness. The parent ids seem to count upwards, except for the first one, and where did that f come from? It turns out that for the first record only — which is the root directory and therefore has no parent — the parent id is actually the total number of records to read. So there are 0x0045 or 69 records here. As for the f, well, I have no idea! I just discard it entirely when linking directories together.
So let's fully decode entry 3 (the fourth one, since we started at zero). It has offset 0x000002fd, which is relative to where the table starts, so we need to add that to 0x00336400 to get 0x003366fd. We don't have a length, but starting from there we see:
003366f0: 0c 6362 .cb
00336700: 5f64 6174 612e 6e61 7263 000f 7769 6669 _data.narc..wifi
00336710: 5f65 6172 7468 2e6e 6172 6315 7769 6669 _earth.narc.wifi
00336720: 5f65 6172 7468 5f70 6c61 6365 2e6e 6172 _earth_place.nar
I called the structure here a filename_list_struct. Also, as I read this code, I really wish I'd made it more sensible; sorry, I guess I'll clean it up when I get around to re-ripping gen 4. The Construct code is a bit goofy, but the idea is:
Read a byte. If it's zero, stop here. Otherwise, the top bit is a flag indicating whether this entry is a directory; the rest is a length.
The next length bytes are the filename.
Iff this is a directory, the next two bytes are the directory id.
(Ah yes, bits and flags. A flag is something that can only be true or false, so it really only needs one bit to store. So programmers like to cram flags into the same byte as other stuff to save space. Computers can't examine individual bits directly, but it's easy to manipulate them from code with a little math. Of course, using 1 bit for a flag means only 7 are left for the length, so it's limited to 127 instead of 255.)
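The bit-fiddling itself is tiny; here's a sketch of decoding one name entry from a bytes object, following the three rules above (the function name is mine).

def read_name_entry(data, pos):
    # Returns (name, is_directory, next_position), or None at the end of the list.
    header = data[pos]
    if header == 0:
        return None
    is_directory = bool(header & 0x80)    # top bit: directory flag
    length = header & 0x7f                # low seven bits: name length
    name = data[pos + 1:pos + 1 + length].decode("ascii")
    pos += 1 + length
    if is_directory:
        # Two more bytes here hold the directory id (little-endian); skipped.
        pos += 2
    return name, is_directory, pos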
Let's try this. The first byte is 0c. I can tell you right away that the top bit is zero; if the top bit is one, then the first hex digit will be 8 or greater. So this is just a file, and it's 0c or 12 bytes long. The next twelve bytes are cb_data.narc, so that's the filename. Repeat from the beginning: the next byte is 00, which is zero, so we're done. This directory only contains a single file, cb_data.narc.
But wait, what is this directory? We know its id is 3; its name would appear somewhere in the filename list for its parent directory, 2, along with an extra two bytes indicating it matches to directory 3. To get the name for directory 2, we'd consult directory 1; and directory 1's parent is directory 0. Directory 0 is the root, which is just / and has no name, so at that point we're done. Of course, if we read all these filename lists in order rather than skipping straight to the third one, then we'd have already seen all these names and wouldn't have to look them up.
One final question: where's the data? All we have are filenames. It turns out the data is in a totally separate table at fat_offset — "FAT" is short for "file allocation table". That's a vastly simpler list of pairs of start offset and end offset, giving the positions of the individual files, and nothing else.
All we have to do is match up the filenames to those offset pairs. This is where the "top file id" comes in: it's the id of the first file in the directory, and the others count up from there. This directory's top file id is 0x57, so cb_data.narc has file id 0x57. (If there were a next file, it would have id 0x58, and so on.) Its data is given by the 0x57th (87th) pair of offsets.
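If you did want to do that last step by hand anyway, it's just arithmetic on the FAT; a rough sketch, where fat_offset is the value decoded from the header earlier and the file id is cb_data.narc's 0x57.

import struct

def extract_file(rom, fat_offset, file_id):
    # Each FAT entry is two little-endian uint32s: start offset and end offset.
    start, end = struct.unpack_from("<II", rom, fat_offset + file_id * 8)
    return rom[start:end]

# rom = open("pokemon-diamond.nds", "rb").read()
# cb_data = extract_file(rom, 0x00337a00, 0x57)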
Phew! We haven't even gotten anywhere yet. But this is important for figuring out where anything even is. And you don't have to do it by hand, since I wrote a program to do it. Run:
python2 -m porigonz pokemon-diamond.nds list
To get output like this:
/application/custom_ball/data
87 0x03810200 0x0381ef8c 60812 [ 295] /application/custom_ball/data/cb_data.narc
/application/wifi_earth
88 0x037b2400 0x037d7674 152180 [ 8] /application/wifi_earth/wifi_earth.narc
89 0x037d7800 0x037d84c8 3272 [ 19] /application/wifi_earth/wifi_earth_place.narc
Hey, it's our friend cb_data.narc, with its full path! On the left is its file id, 87. Next are its start and end offsets, followed by its filesize.
You may notice that before the filenames start, you'll get a list of unnamed files. These are entries in the FAT that have no corresponding filename. I learned only recently that they're code — overlays, in fact, though I don't know what that means yet.
Now we can start looking at data and figuring it out. Finally.
NARCs and basic Pokémon data
This was fantastic. All the game data, neatly arranged into files, and even named sensibly for us. A goldmine. It didn't use to be so easy, as we will see later.
Other people had already noticed the file /poketool/personal/personal.narc contains much of the base data about Pokémon. You'll notice it has a "501" in brackets next to it, indicating that it's actually a NARC file — a "Nitro archive", though I'm not sure what "Nitro" refers to. This is a generic uncompressed container that just holds some number of sub-files — in this case, 501. The subfiles can have names, but the ones in this game generally don't, so the only way to refer to them is by number.
You may also notice that evo.narc and wotbl.narc, in the same directory, are also NARCs with 501 records. It's a pretty safe bet that they all have one record per Pokémon. That's a little odd, since Pokémon Diamond only has 493 Pokémon, but we'll figure that out later.
NARC is, as far as I can tell, an invention of Nintendo. I think it's in other DS games, though I haven't investigated any others very much, so I can't say how common it is. It's a very simple format, and it uses basically the same structure as the entire DS filesystem: a list of start/end offsets and a list of filenames. It doesn't have the same directory nesting, so it's much simpler, and also the filenames are usually missing, so it's simpler still. But you don't have to care, because you can examine the contents of a file with:
python2 -m porigonz pokemon-diamond.nds cat -f hex /poketool/personal/personal.narc
This will print every record as an unbroken string of hex, one record per line. (I admit this is not the smartest format; it's hard to see where byte boundaries are. Again, hopefully I'll fix this up a bit when I rerip gen 4.) Here are the first six Pokémon records.
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
2d31312d41410c032d400001000000001f144603010741000003000020073584081e10022024669202000000
3c3e3f3c50500c032d8d0005000000001f144603010741000003000020073584081e10022024669202000000
5052535064640c032dd00006000000001f144603010741000003000030473586081e1002282466920a000000
27342b413c320a0a2d414000000000001f144603010e420000000000230651cce41e821201a4469202000000
3a403a5050410a0a2d8e4001000000001f144603010e420000000000230651cce41e821201a4469202000000
That first one is pretty conspicuous, what with its being all zeroes. It's probably a dummy entry, for whatever reason. That does make things a little simpler, though! Numbering from zero has caused some confusion in the past: Bulbasaur (National Dex number 1) would be record 0, and I've had all kinds of goofy bugs from forgetting to subtract or add 1 all over the place. With a dummy record at 0, that means Bulbasaur is 1, and everything is numbered as expected.
So, what is any of this? The heart of figuring out data formats is looking for stuff you know. That might mean looking for data you know should be there, or it might mean identifying common schemes for storing data.
A good start, then, would be to look at what I already know about Bulbasaur. Base stats are a pretty fundamental property, and Bulbasaur's are 45, 49, 49, 65, 65, and 45. In hex, that's 2d 31 31 41 41 2d. Hey, that's the beginning of the first line! It's just slightly out of order; Speed comes before the special stats.
You can also pick out some differences by comparing rows. About 60% of the way along the line, I see 03 for Bulbasaur, Ivysaur, and Venusaur, but then 00 for Charmander and Charmeleon. That's different between families, which seems like a huge hint; does that continue to hold true? (As it turns out, no! It fails for Butterfree — because it indicates the Pokémon's color, used in the Pokédex search. Most families are similar colors.)
Sometimes a byte will seem to only take one of a few small values, which usually means it's an enum (one of a list of numbered options), like the colors are. A byte that only ranges from 1 to 17 (or perhaps 0 to 16) is almost certainly type, for example, since there are 17 types.
Noticing common patterns — very tiny formats, I suppose — is also very helpful (and saves you from wild goose chases). For example, Pokémon can appear in the wild holding held items, and there are more than 256 items, so referring to an item requires two bytes. But there are only slightly more than 256 items in this game, so the second byte is always 00 or 01. If you remember that some fields must span multiple bytes, that's an incredible hint that you're looking at small 16-bit numbers; if you forget, you might think the 01 is a separate field that only stores a single flag… and drive yourself mad trying to find a pattern to it.
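This kind of guesswork is exactly what a five-line throwaway script is for: pick a byte offset, look at every value it takes across all the records, and see whether it smells like an enum or like half of a 16-bit number. A sketch, where records is a list of the raw entries from personal.narc:

import struct

def survey(records, offset):
    """Show every value a single byte position takes across all records."""
    values = sorted({rec[offset] for rec in records})
    print(f"byte {offset:#04x}: {len(values)} distinct values: {values}")

def survey16(records, offset):
    """Same, but reading two bytes as a little-endian 16-bit number."""
    values = sorted({struct.unpack_from('<H', rec, offset)[0] for rec in records})
    print(f"u16  {offset:#04x}: {len(values)} distinct values, max {max(values)}")

# A byte that only ever runs 1-17 is probably the type; a byte that's only
# ever 00 or 01 right after a busy byte is probably the high half of an item id.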
The games have a number of TMs which can teach particular moves, and each Pokémon can learn a unique set of TMs. These are stored as a longer block of bytes, where each individual bit is either 1 or 0 to indicate compatibility. Those are a bit harder to identify with certainty, since (a) the set of TMs changes in every game so you can't just check what the expected value is, and (b) bitflags can produce virtually any number with no discernible pattern.
Thankfully, there's a pretty big giveaway for TMs in particular. Here are Caterpie, Metapod, and Butterfree:
2d1e232d14140606ff350100000000007f0f4600030313000003000000000000000000000000000000000000
3214371e1919060678482000000000007f0f460003033d000003000000000000000000000000000000000000
3c2d3246505006022da000060000de007f0f460003030e000008000020463fb480be14222830560301000000
Butterfree can learn TMs. Caterpie and Metapod are almost unique in that they can't learn any. Guess where the TMs are! Even better, Caterpie is only #10, so this shows up very early on.
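Once you've spotted the block, reading it back out is simple bit-fiddling. A sketch — the offset, the number of machines, and the bit order (low bit first) are assumptions for illustration here, not verified constants:

def tm_compatibility(record, offset, count):
    """Return the (1-based) numbers of the TMs/HMs whose bits are set."""
    machines = []
    for i in range(count):
        byte = record[offset + i // 8]
        if byte & (1 << (i % 8)):    # bit i set -> compatible with machine i+1
            machines.append(i + 1)
    return machines

# Caterpie's and Metapod's blocks should come back empty; Butterfree's should not.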
And, well, that's the basic process. It's mostly about cheating, about leveraging every possible trick you can come up with to find patterns and landmarks. I even wrote a script for this (and several other files) that dumped out a huge HTML table with the names of the (known) Pokémon on the left and byte positions as columns. When I figured something out, or at least had a suspicion, I labelled the column and changed that byte to print as something more readable (e.g., printing the names of types instead of just their numbers).
Of course, if you have a flash cartridge or emulator (both of which were hard to come by at the time), you can always invoke the nuclear option: change the data and see what changes in the game.
Still, easy, right? How hard could this be.
Sprites: In which it gets hard
What we really really wanted were the sprites. This was a new generation with new Pokémon, after all, and sprites were the primary way we got to see them. Unlike nearly everything else, this hadn't already been figured out by other people by the time I showed up.
Finding them was easy enough — there's a file named /poketool/pokegra/pokegra.narc, which is conspicuously large. It's a NARC containing 2964 records. A little factoring reveals that 2964 is 494 × 6 — aha! There are 493 Pokémon, plus one dummy.
python2 -m porigonz pokemon-diamond.nds extract /poketool/pokegra/pokegra.narc
This will extract the contents of pokegra.narc to a directory called pokemon-diamond.nds:data, which I guess might be invalid on Windows or something, so use -d to give another directory name if you need to. Anyway, in there you'll find a directory called pokegra.narc, inside of which are 2964 numbered binary files.
Some brief inspection reveals that they definitely come in groups of six: the filesizes consistently repeat 6.5K, 6.5K, 6.5K, 6.5K, 72, 72. Sometimes a couple of the files are empty, but the pattern is otherwise very distinct. Four sprites per Pokémon, then?
Let's have a look at the first file! Since it's a dummy sprite, it should be blank or perhaps a question mark, right? Oh boy I'm so excited.
00000000: 5247 434e fffe 0001 3019 0000 1000 0100 RGCN....0.......
00000010: 5241 4843 2019 0000 0a00 1400 0300 0000 RAHC ...........
00000030: de54 59cf e00a 2374 927c 5db5 7476 87c1 .TY...#t.|].tv..
00000040: 06d1 2183 c890 ab40 3a06 a53c dced 8f55 ..!....@:..<...U
00000050: 2e90 e9a5 b0e1 3324 e2a2 ed42 4480 9790 ......3$...BD...
00000060: 5632 b157 989d bb3e 8af2 35e8 accd 9f92 V2.W...>..5.....
00000070: 7e57 79b8 8064 43b0 3295 7d4c 1476 a77b ~Wy..dC.2.}L.v.{
Hm. Okay, so, this is a problem. No matter what the actual contents are, this is a sprite, and virtually all Pokémon sprites have a big ol' blob of completely empty space in the upper-left corner. Every corner, in fact. Except for a handful of truly massive species, the corners should be empty. So no matter what scheme this is using or what order the pixels are in, I should be seeing a whole lot of zeroes somewhere. And I'm not.
Compression? Seems very unlikely, since every file is either 0, 72, or 6448 bytes, without exception.
Well, let's see what we've got here. RGCN and RAHC are almost certainly magic numbers, so this is one file format nested inside another. (A lot of file formats start with a short fixed string identifying them, a so-called "magic number". Every GIF starts with the text GIF89a, for example. A NARC file starts with CRAN — presumably it's "backwards" because it's being read as an actual little-endian number.) I assume the real data begins at 0x30.
Without those leading 0x30 (48) bytes, the file is 6400 bytes large, which is a mighty conspicuous square number! Pokémon sprites have always been square, so this could mean they're 80×80, one byte per pixel. (Hm, but Pokémon sprites don't need anywhere near 256 colors?)
I see a 30 in the first line, which is probably the address of the data. I also see a 10, which is probably the (16-bit?) length of that initial header, or the address of the second header. What about in the second header? Well, uh, hm. I see a lot of what seem to be small 16-bit or 32-bit numbers: 0x000a is 10, 0x0014 is 20, 0x0003 is 3; 0x0018 is 24. A quick check reveals that 0x1900 is 6400 (the size of the data), and so 0x1920 is the size of the data plus this second header.
This hasn't really told me anything I don't already know. It seems very conspicuous that there's no 0x50, which is 80, my assumed size of the sprite.
Well, hm, let's look at the second file. It's in the block for the same "Pokémon", so maybe it'll provide some clues.
Ah. No. It starts out completely identical. In fact, md5sum reveals that all four of these first sprites are identical. Might make sense for a dummy Pokémon. Does that pattern hold for the next Pokémon, which I assume is Bulbasaur? Not quite! Files 6 and 7 are identical, and 8 and 9 are identical, but they're distinct from each other.
What's the point of them then? Further inspection reveals that most Pokémon have paired sprites like this, but Pikachu does not — suggesting (correctly) that the sprites are male versus female, so Pokémon that don't have gender differences naturally have identical pairs of sprites.
Okay, then, let's look at Pikachu's first sprite, 150. The key is often in the differences, remember. If the dummy sprite is either blank or a question mark, then it should still have a lot of corner pixels in common with the relatively small Pikachu.
00000030: b6bd 6f4c 6c6e 3d16 b226 db0b 0818 c934 ..oLln=..&.....4
00000040: eeb7 876c e41f 9542 6a6d 73da 0022 a1cb ...l...Bjms.."..
00000050: 2683 9f01 5cfa ed9b 2275 0bce f8c4 79bf &...\..."u....y.
00000060: 5eff b76b d4dd 4582 da1d a346 f0e0 5170 ^..k..E....F..Qp
00000070: 960c cf0a 4caa 9d55 9247 3ba4 e855 293e ....L..U.G;..U)>
00000080: ce8a e73e c43f f575 4ad2 d346 e003 0189 ...>.?.uJ..F....
00000090: 065a ff67 3c7e 4d43 029e 6b8e d8ca d9b0 .Z.g<~MC..k.....
000000a0: 3e5a 17e6 b445 a51d ba8a 03db d08a b115 >Z...E..........
Well. Nope. How does that compare to Pikachu's second sprite, 151 — which ought to be extremely similar, seeing as the only gender difference is a notch in the tail?
00000030: 2957 ce67 e76f c494 f5fe 4adf d367 e008 )W.g.o....J..g..
00000040: 0182 0697 ff78 3c33 4dac 020b 6b8f d82f .....x<3M...k../
00000050: d989 3ef7 17d7 b45a a566 ba57 03bc d04f ..>....Z.f.W...O
00000060: b1ce 7668 2fea 2ceb fd8d 72a5 9b4d c848 ..vh/.,...r..M.H
00000070: 89b0 aeca 4712 a4c4 5582 2ad4 33a4 c0fa ....G...U.*.3...
00000080: 618f e6fd 5faf 1cc7 ada3 e2c3 cb1f b845 a..._..........E
00000090: 39cb 1ee2 7721 94d2 0552 9a54 6320 b009 9...w!...R.Tc ..
000000a0: 11c4 5657 8fc8 0cc7 5ded 5266 fb05 a826 ..VW....].Rf...&
Oh, my god. Nothing is similar at all.
Make a histogram? Every possible value appears with roughly the same frequency. Now that is interesting, and suggests some form of encryption — most likely one big "mask" xor'd with the whole sprite. But how to find the mask?
(It doesn't matter exactly what xor is, here. It only has two relevant properties. One is that it's self-reversing, making it handy for encryption like this — (data xor mask) xor mask produces the original data. The other is that anything xor'd with zero is left unchanged, so if I think the original data was zero — as it ought to be for the blank pixels in the corners of a sprite — then the encrypted data is just the mask! So I know at least the beginning parts of the mask for most sprites; I just have to figure out how to use a little bit of that to reconstitute the whole thing.)
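If you want to convince yourself of those two properties, it only takes a couple of lines:

data, mask = 0b10110010, 0b01101100
encrypted = data ^ mask
assert encrypted ^ mask == data   # xor is self-reversing
assert 0 ^ mask == mask           # xor against zero leaks the mask verbatim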
I stared at this for days. I printed out copies of sprite hex and stared at them more on breaks at work. I threw everything I knew, which admittedly wasn't a lot, at this ridiculous problem.
And slowly, some patterns started to emerge. Consider the first digit of the last column in the above hex dump: it goes e, d, d, c, c, b, b, a. In fact, if you look at the entire byte, they go e0, d8, d0, c8, etc. That's just subtracting 08 on each row.
Are there other cases like this? Kinda! In the third column, the second digit alternates between 7 and f; closer inspection reveals that byte's increasing by 18 every row. Oh, the sixth column too. Hang on — in every column, the second digit alternates between two values. That seems true for every other file we've seen so far, too.
This is extremely promising! Let's try this. Take the first two rows, which are bytes 0–15 and bytes 16–31. Subtract the first row from the second row bytewise, making a sort of "delta row". For the second Pikachu, that produces:
d82b 3830 1809 789f 58ae b82c 9828 f827
As expected, the second digit in each column is an 8. Now just start with the first row and keep adding the delta to it to produce enough rows to cover the whole file, and xor that with the file itself. Results:
00000020: 0024 0030 0056 0088 003c 0060 000b 0019 .$.0.V...<.`....
00000030: 0016 009f 0060 009a 0085 00c6 0092 0035 .....`.........5
00000040: 00b3 00ed 0081 00d4 0034 005b 00a3 005e .........4.[...^
00000050: 00a1 00aa 0033 0068 00c7 0078 0030 008e .....3.h...x.0..
00000060: 0092 0065 0084 009c 0040 00b3 0077 00fb [email protected]..
00000070: 0040 00e0 0066 002a 002d 0075 007a 003f [email protected].*.-.u.z.?
00000080: 0076 00da 00b3 0008 00bb 00e6 0097 003c .v.............<
Promising! We got a bunch of zeroes, as expected, though everything else is still fairly garbled. It might help if we, say, printed this out to a file.
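For posterity, the kinda-sorta scheme above comes out to something like this in code — a sketch of the idea, not my actual code from back then, assuming the pixel data starts at 0x30:

def delta_row_decrypt(sprite, row_len=16):
    """Guess the mask from the first two rows and extrapolate it over the whole sprite."""
    body = sprite[0x30:]                                  # skip the RGCN/RAHC headers
    row = list(body[:row_len])                            # first row of the guessed mask
    delta = [(body[row_len + i] - body[i]) % 256 for i in range(row_len)]

    out = bytearray()
    for start in range(0, len(body), row_len):
        chunk = body[start:start + row_len]
        out.extend(b ^ row[i] for i, b in enumerate(chunk))   # xor the guessed mask row in
        row = [(r + d) % 256 for r, d in zip(row, delta)]     # advance the mask by the delta
    return bytes(out)

The 512-byte-block pattern coming up in a moment amounts to running the same thing with row_len=512.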
By now it had become clear that the small files were palettes of sixteen colors stored as RGB555 — that is, each color is packed into two bytes, with five bits each for the red, green, and blue channels. Sixteen colors means two pixels can be crammed into a single byte, so the sprites are actually 160×80, not 80×80. Combining this knowledge with the above partially-decrypted output, we get:
Kinda!
Meanwhile another fansite found our code and put up a full set of these ugly-ass corrupt sprites, so that was nice.
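As an aside, the little 72-byte palette files decode to something like this. I'm assuming the last 32 bytes are the sixteen colors (the rest being headers), and that the low nibble of each sprite byte is the left pixel — double-check both before trusting this:

import struct

def read_palette(raw):
    """Convert sixteen RGB555 colors into 8-bit (r, g, b) tuples."""
    colors = []
    for (packed,) in struct.iter_unpack('<H', raw[-32:]):
        r = (packed & 0x1f) << 3            # bits 0-4
        g = ((packed >> 5) & 0x1f) << 3     # bits 5-9
        b = ((packed >> 10) & 0x1f) << 3    # bits 10-14
        colors.append((r, g, b))
    return colors

def unpack_pixels(body):
    """Split each byte into two 4-bit palette indices."""
    for byte in body:
        yield byte & 0x0f
        yield byte >> 4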
It took me a while to notice another pattern, which emerges if you break the sprite into blocks that are 512 bytes wide (rather than only 16). You get this:
2957 ce67 e76f c494 f5fe 4adf d367 e008 ...
29e2 ce3e e742 c4d3 f5d9 4a46 d30a e057 ...
296d ce15 e715 c412 f5b4 4aad d3ad e0a6 ...
29f8 ceec e7e8 c451 e561 a450 9763 e7f5 ...
This time, the byte in the first column is always identical all the way down. Well, kind of. This is encrypted data, remember, and I only know what the mask is because the beginning of the data is usually blank. The exceptions are when the mask is hitting actual colored pixels, at which point it becomes garbage.
But even better, look at the second byte in each column. Now they're all separated by a constant, all the way down! That means I can repeat the same logic as before, except with two "rows" that are 512 bytes long, and as long as the first 1024 bytes of the original data are all zeroes, I'll get a perfect sprite out!
And indeed, I did! Mostly. Legendary Pokémon and a handful of others tend to be quite large, so they didn't start with as many zeroes as I needed for this scheme to work. But it mostly worked, and that was pretty goddamn cool.
magical, a long-time co-conspirator, managed to scrounge up my final "working" code from that era (which then helped me find my own local copy of all my D/P research stuff, which I oughta put up somewhere). It's total nonsense, but it came pretty close to working.
Hm? What? You want to know the real answer? Yeah, I bet you do.
Okay, here you go. So the games have a random number generator, for… generating random numbers. This being fairly simple hardware with fairly simple (non-crypto) requirements, the random number generator is also fairly simple. It's an LCG, a linear congruential generator, which is a really bizarre name for a very simple idea:
ax + b
The generator is defined by the numbers a and b. (You have to pick them carefully, or you'll get numbers that don't look particularly random.) You pick a starting number (a seed) and call that x. When you want a random number, you compute ax + b. You then take a modulus, which really means "chop off the higher digits because you only have so much space to store the answer". That's your new x, which you'll plug in to get the next random number, and so on.
In the case of the gen 4 Pokémon games, a = 0x4e6d and b = 0x6073.
What does any of this have to do with the encryption? Well! The entire sprite is interpreted as a bunch of 16-bit integers. The last one is used as the seed and plugged into the RNG, and then it keeps spitting out a sequence of numbers. Reverse them, since you're starting at the end, and that's the mask.
The seed technically overlaps with the last four pixels, but it happens to work since no Pokémon sprites touched the bottom-right corner in Diamond and Pearl. In Platinum a couple very large sprites broke that rule, so they ended up switching this around and starting from the beginning. Same idea, though.
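As a sketch, the whole Diamond/Pearl decryption boils down to this: read the body as little-endian 16-bit words, seed from the last one, and walk backwards. (The exact moment the seed advances relative to the xor is from memory, so check it against porigon-z rather than trusting me.)

import struct

def decrypt_dp_sprite(body):
    """Undo the sprite 'encryption' using the 16-bit LCG (a = 0x4e6d, b = 0x6073)."""
    words = list(struct.unpack(f'<{len(body) // 2}H', body))
    seed = words[-1]                          # the last word doubles as the seed
    for i in reversed(range(len(words))):     # Diamond/Pearl runs from the end backwards
        words[i] ^= seed
        seed = (seed * 0x4e6d + 0x6073) & 0xffff
    return struct.pack(f'<{len(words)}H', *words)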
Of course, porigon-z knows how to handle this… though it's currently hardcoded to use the Platinum approach. Funny story: the algorithm was originally thought to go from the beginning, not the end, and it used an LCG with different constants. Turns out someone had just accidentally (?) discovered the reverse of the Pokémon LCG, which would produce exactly the same sequence, backwards. Super cool.
Why did that thing with subtracting the two rows kinda-sorta work, then? Well! It's because… when you… and… uh… wow, I still have no goddamn idea. That makes no sense at all. I'd sure love for someone to explain that to me. I'm sure I could explain it if I sat down and thought about it for a while, but I suspect it's something subtle and I'm not that interested.
I'd also like to know: why were the sprites encrypted in the first place? What possible point is there? They must have known we cracked the encryption, but then they used it again for Platinum, and Heart Gold and Soul Silver. Maybe it was only intended to be enough to delay us, during the gap between the Japanese and worldwide releases…? Hm.
Incidentally, the entire game text is also encrypted in much the same way. Without the encryption, it's just UTF-16 — a common character encoding that uses two bytes for every character. I have no idea why.
The dark days
So Nintendo DS cartridges have a little filesystem on them, making them act kinda like any other disk. Nice.
Game Boy cartridges… don't. A Game Boy cartridge is essentially just a single file, a program. You pop the cartridge in, and the Game Boy runs that program.
Where is the data, then? Baked into the program — referred to as hard-coded. Just, somewhere, mixed in alongside all the program code. There's no nice index of where it is; rather, somewhere there's some code that happens to say "look at the byte at 0x006f9d10 in this program and treat it as data".
I wasn't involved in data dumping in these days; I was copying stuff straight out of the wildly inaccurate Prima strategy guide. (Again, you know, I was 12.) It's hard to say exactly how people fished out the data, though I can take a few guesses.
A few guesses
To our advantage is the fact that Game Boy cartridges are much smaller than DS cartridges, so there's much less to sift through. Pokémon Red and Blue are on 1 MB cartridges, and even those are half empty (unused NULs); the original Japanese Red and Green barely fit into 512 KB, and Red and Blue ended up just slightly bigger.
To our disadvantage is that these are the very first games, so we don't have any pre-existing knowledge to look for. We don't know any Pokémon's base stats; we may not even know that "base stats" are a thing yet. Also, it's not immediately obvious, but the Pokémon aren't even stored in order. Oh, and Mew is completely separate; it really was a last-minute addition.
What do we know? Well, by playing the game, we can see what moves a Pokémon learns and when. There don't seem to be all that many moves, so it's reasonable to assume that a move would be represented with a single byte. Levels are capped at 100, so that's probably also a single byte. Most likely, the level-up moves are stored as either level move level move... or move level move level....
Great! All we need to do is put together a string of known moves both ways and find them.
Except, ah, hm. We don't actually know how the moves are numbered. But we still know the levels, so maybe we can get somewhere. Let's take Bulbasaur, which we know learns Leech Seed at level 7, Vine Whip at level 13, and Poison Powder at level 20. (Or, I guess, that should be LEECH SEED, VINE WHIP, and POISONPOWDER.) No matter whether the levels or moves come first, this will result in a string like:
07 ?? 0D ?? 14
So we can do my favorite thing and slap together a regex for that. (A regex is a very compact way to search text — or bytes — for a particular pattern. A lone . means any single character, so the regex below is a straightforward translation of the pattern above.)
>>> for match in re.finditer(rb'\x07.\x0d.\x14', rom):
... print(f"{match.start():08x} {match.group().hex()}")
0003b848 07490d1614
Exactly one match! Let's have a look at that position in the file.
0003b840: 7700 0000 0110 0900 0749 0d16 144d 1b4b w........I...M.K
0003b850: 224a 294f 304c 0000 0749 0d16 164d 1e4b "J)O0L...I...M.K
0003b860: 2b4a 374f 414c 0000 0730 0d23 1228 1637 +J7OAL...0.#.(.7
0003b870: 1b84 2370 2b67 3238 0000 0001 219e 0013 ..#p+g28....!...
This seems pretty promising! It looks like the same set of moves is repeated 16 bytes later, but with different (slightly higher) levels after a certain point, which matches how evolved Pokémon behave. So this looks to be at least Bulbasaur and Ivysaur, though I'm not quite sure what happened to Venusaur.
By repeating this process with some other Pokémon, we can start to fill in a mapping of moves to their ids. Eventually we'll realize that a Pokémon's starting moves don't seem to appear within this structure, and so we'll go searching for those for a Pokémon that starts with moves we know the ids for. That will lead us to the basic Pokémon data along with base stats, because starting moves happen to be stored there in these early games.
The text isn't encrypted, but also isn't ASCII, but it's possible to find it in much the same way by treating it as a cryptogram (or a substitution cipher). I assume that there's some consistent scheme, such that the letter "A" is always represented with the same byte. So I pick some text that I know has a few repeated letters, like BULBASAUR, and I recognize that it could be substituted in some way to read as 123145426. I can turn that into a regex!
>>> for match in re.finditer(rb'(.)(.)(.)\1(.)(.)\4\2(.)', rom, flags=re.DOTALL):
Unfortunately, this produces a zillion matches, most of them solid strings of NUL bytes. The problem is that nothing in the regex requires that the different groups are, well, different. You could write extra code to filter those cases out, or if you're masochistic enough, you could express it directly within the regex using (?!...) negative lookahead assertions:
>>> for match in re.finditer(rb'(.)(?!\1)(.)(?!\1)(?!\2)(.)\1(?!\1)(?!\2)(?!\3)(.)(?!\1)(?!\2)(?!\3)(?!\4)(.)\4\2(?!\1)(?!\2)(?!\3)(?!\4)(?!\5)(.)', rom, flags=re.DOTALL):
0000820c 4305ff43441b440522
0001c80e 81948b818092809491
00054a94 0a4d350a556d554d43
0007a55a 33466f33fff0ff4670
0007c20c 4305e843440444050b
0008e7b2 7fa8b37fa0a6a0a8ad
00094e75 81948b818092809491
000a0bcd a77fb3a7a4b1a47fb6
That's much more reasonable. (The set of matches, I mean, not the regex.) It wouldn't be hard to write a script with a bunch of known strings in it, generate appropriate regexes for each, eliminate inconsistent matches, and eventually generate a full alphabet. (Or you could assume that "B" immediately follows "A" and in general the letters are clustered together, which would lead you to correctly suspect that the strings at 0x0001c80e and 0x00094e75 are the ones you want.)
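Generating those regexes is mechanical enough to script, thankfully; you only have to write the lookahead salad once. A sketch:

import re

def cryptogram_regex(word):
    """Build a bytes regex matching any consistent single-byte substitution of `word`."""
    pattern = b''
    groups = {}                     # letter -> capture group number
    for letter in word:
        if letter in groups:
            pattern += b'\\%d' % groups[letter]    # repeated letter: backreference
        else:
            # a new letter must differ from every letter seen so far
            pattern += b''.join(b'(?!\\%d)' % n for n in groups.values())
            groups[letter] = len(groups) + 1
            pattern += b'(.)'
    return re.compile(pattern, flags=re.DOTALL)

# >>> [hex(m.start()) for m in cryptogram_regex('BULBASAUR').finditer(rom)]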
Even better, once you have an alphabet, you can use it to translate the entire ROM — plenty of it will be garbage, but you'll find quite a lot of blocks of human-readable text! And now you have all the names of everything and also the entire game script.
Modern day and a brief tour of gbz80 assembly
But like I said, I wasn't involved in any of that. Until recently! I've been working on an experiment for veekun where I re-dump all the games to a new YAML-based format. Long story short: the current database is a pain to work with, and some old data has been lost entirely. Also, most of the data was extracted bits at a time with short hacky scripts that we then threw away, and I'd like to have more permanent code that can dump everything at once. It'll be nice to have an audit trail, too — multiple times in the past, we've discovered that some old thing was dumped imperfectly or copied from an unreliable source.
So I started re-dumping Red and Blue, from scratch. I've made modest progress, though it's taken a backseat to Sun and Moon lately.
It's helped immensely that there's an open source disassembly of Red and Blue floating around. What on earth is a disassembly? I'm so glad you asked!
Game Boy games were written in assembly code, which is just about as simple and painful as you can get. It's human-readable, kinda, but it's built from the basic set of instructions that a particular CPU family understands. It can't directly express concepts like "if this, do that, otherwise do something else" or "repeat this code five times". Instead, it's a single long sequence of basic operations like "compare two numbers" and "jump ahead four instructions". (Very few programmers work with assembly nowadays, but for various reasons, no other programming languages would work on the Game Boy at the time.)
To give you a more concrete idea of what this is like to work with: the Game Boy's CPU doesn't have an instruction for multiplying, so you have to do it yourself by adding repeatedly. I thought that would make a good example, but it turns out that Pokémon's multiply code is sixty lines long. Division is even longer! Here's something a bit simpler, which fills a span of memory:
FillMemory::
; Fill bc bytes at hl with a.
push de
ld d, a
.loop
ld a, d
ld [hli], a
dec bc
ld a, b
or c
jr nz, .loop
pop de
ret
CPUs tend to have a small number of registers, which can hold values while the CPU works on them — even as fast as RAM is, it's considered much slower than registers. The downside is, well, you only have a few registers. The Game Boy CPU (a modified Z80) has eight registers that can each hold one byte: a through f, plus h and l. They can be used together in pairs to store 16-bit values, giving the four combinations af, bc, de, and hl.
(If you need more than 16 bits, well, that's your problem! 16 bits is the most the CPU understands; it can't even access memory addresses beyond that range, so you're limited to 64K. "But wait", you ask, "how can a Game Boy cartridge be 512K or 1M?" Very painfully, that's how.)
Now we can understand the comment in the above code. Starting at the memory address given by the 16-bit number in hl, it will copy the value in a into each byte, for a total of bc bytes. Translated into English, the above means something like this:
Save copies of d and e, so I can mess with them without losing any important data that was in them. (This code doesn't use e, but there's no push d instruction.)
Copy a, the fill value, into d.
Copy d, the fill value, into a.
Copy a, the fill value, into the memory at address hl. Then increase hl (the actual registers, not the memory) by 1.
Decrease bc, the number of bytes to fill, by 1.
Copy b, part of the number of bytes to fill, into a.
OR a with c, the other part of the number of bytes to fill, and leave the result in a. The result will be zero only if bc itself is zero, in which case the "zero" flag will be set.
If the zero flag is not set (i.e., if bc isn't zero, meaning we're not done yet), jump back to the instruction marked by .loop, which is step 3.
Restore d and e to their previous values.
Return to whatever code jumped here in the first place.
Even this relatively simple example has to resort to a weird trick — ORing b and c together — just to check if bc, a value the CPU understands, is zero or not.
CPUs don't execute assembly code directly. It has to be assembled into machine code, which is (surprise!) a sequence of bytes corresponding to CPU instructions. When the above code is compiled, it produces these bytes, which you can verify for yourself appear in Pokémon Red and Blue in exactly one place:
d5 57 7a 22 0b 78 b1 20 f9 d1 c9
I stress that this is way beyond anything virtually any programmer actually needs to know. Even the few programmers working with assembly, as far as I know, don't usually care about the actual bytes that are spat out. I've actually had trouble tracking down lists of opcodes before — almost no one is trying to read machine code. We are out in the weeds a bit here.
To finally answer your hypothetical question: disassembly is the process of converting this machine code back into assembly. Most of it can be done automatically, but it takes extra human effort to make the result sensible. Let's consult the Game Boy CPU's (rather terse) opcode reference and see if we can make sense of this, pretending we don't know what the original code was.
Find d5 in the table — it's in row Dx, column x5. That's push de. The first number in the box is 1, meaning the instruction is only one byte long, so the next instruction is the very next byte. That's 57, which is ld d, a. Keep on going. Eventually we hit 20, which is jr nz, r8 and two bytes long — the notes at the bottom explain that r8 is 8-bit signed data. That means the next byte is part of this instruction; it's f9, but it's signed, so really that's -7. We end up with:
ld (hl+), a
jr nz, $-07
This looks an awful lot like what we started with, but there are a couple notable exceptions. First, the FillMemory:: line is completely missing. That's just another kind of label, and the only way to know that the first line should be labelled at all is to find some other place in the code that tries to jump here. Given just these bytes, we can't even tell if this is a complete snippet. Once we find that out, there's still no way to recover the name FillMemory; even that is just a fan name and not the name from the original code. Someone came up with that name by reading this assembly code, understanding what it's intended to do, and giving it a name.
Second, the .loop label is missing. The jr line forgot about the label and ended up with a number, which is how many bytes to jump backwards or forwards. (You can imagine how a label is much easier to work with than a number of bytes, especially when some instructions are one byte long and some are two!) An automated disassembler would be smart enough to notice this and would put a label in the right place. A really good disassembler might even recognize that this code is a simple loop that executes some number of times, and name that label .loop; otherwise, or for more complicated kinds of jumps, it would have a meaningless name that a human would have to improve.
And there's a whole project where people have done the work of restoring names like this and splitting code up sensibly! The whole thing even assembles into a byte-for-byte identical copy of the original games. It's really quite impressive, and it's made tinkering with the original games vastly more accessible. You still have to write assembly, of course, but it's better than editing machine code. Imagine trying to add a new instruction in the middle of the loop above; you'd screw up the jr's byte count, and every single later address in the ROM.
But more relevant to this post, a disassembly makes it easy to figure out where data is, since I don't have to go hunting for it! When the code is assembled, it can generate a .sym file, which lists every single "global" label and the position it ended up in the final ROM. Many of those labels are for functions, like FillMemory is, but some of them are for blocks of data.
Snagging data
I set out to write some code to dump data from Game Boy games. Red/Green, Red/Blue, and Yellow were all fairly similar, so I wanted to use as much of the same code as possible for all of those games (and their various translations).
A very early pain point was, well, the existence of all those translations. Because there's no filesystem, the only obvious way to locate data is to either search for it (which requires knowing it ahead of time, a catch-22 for a script that's meant to extract it) or to bake in the addresses. The games contain quite a lot of data I want, and they exist in quite a few distinct versions, so that would make for a lot of addresses.
Also, with a disassembly readily available, it was now (relatively) easy for people to modify the games as they saw fit, in much the same way as it's easy to modify most aspects of modern games by changing the data files. But if I simply had a list of addresses for each known release, then my code wouldn't know what to do with modified games. It's not a huge deal — obviously I don't intend to put fan modifications into veekun — but it seemed silly to write all this extraction code and then only have it work on a small handful of specific files.
I decided to at least try to find data automatically. How can I do that, when the positions of the data only existed buried within machine code somewhere?
Obviously, I just need to find that machine code. See, that whole previous section was actually relevant!
I set out to do that. Remember the goofy regex from earlier, which searched for particular patterns of bytes? I did something like that, except with machine code. And by machine code, I mean assembly. And by assembly, I mean— okay just look at this.
ld a, [#wd11e]
dec a
ld hl, #TechnicalMachines
ld b, $0
ld c, a
add hl, bc
ld a, [hl]
ld [#wd11e], a
I wrote my own little assembler that can convert Game Boy assembly into Game Boy machine code. The difference is that when it sees something like #foo, it assumes that's a value I don't know yet and sticks in a regex capturing group instead. It's smart enough to know whether the value has to be one or two bytes, based on the instruction. It also knows that if the same placeholder appears twice, it must have the same value both times. I can also pass in a placeholder value, if I only know it at runtime.
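The guts of it are smaller than they sound: an opcode table, plus the rule that a #placeholder becomes a capture group (or a backreference, the second time it shows up). Here's a toy version with just enough of a table for a couple of the instructions above; the real thing knows far more opcodes and handles literals, register-only instructions, and so on:

import re

# A sliver of the gbz80 opcode table: template -> (opcode byte, operand size in bytes).
OPCODES = {
    'ld a, [#]': (0xfa, 2),   # ld a, [a16]
    'ld [#], a': (0xea, 2),   # ld [a16], a
    'ld hl, #':  (0x21, 2),   # ld hl, d16
}

def asm_to_regex(lines):
    """Toy assembler: turn asm lines with #placeholders into a bytes regex."""
    pattern, groups = b'', {}
    for line in lines:
        name = re.search(r'#(\w+)', line).group(1)
        opcode, size = OPCODES[re.sub(r'#\w+', '#', line)]
        pattern += re.escape(bytes([opcode]))
        if name in groups:
            pattern += b'\\%d' % groups[name]          # same placeholder, same bytes
        else:
            groups[name] = len(groups) + 1
            pattern += b'(' + b'.' * size + b')'       # unknown value: capture it
    return re.compile(pattern, flags=re.DOTALL), groups

# regex, groups = asm_to_regex(['ld a, [#wd11e]', 'ld hl, #TechnicalMachines', 'ld [#wd11e], a'])
# (The real chunk includes the instructions in between, too; whatever the
# #TechnicalMachines group captures is the little-endian address of the TM list.)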
I have half a dozen or so chunks like this. Every time I wanted to find something new, I went looking for code that referenced it and copied the smallest unique chunk I could (to reduce the risk that the code itself is different between games, or in a fan hack). I did run into a few goofy difficulties, such as code that changed completely in Yellow, but I ended up with something that seems to be pretty robust and knows as little as possible about the games.
I even auto-detect the language… by examining the name of the TOWN MAP, the first item that has a unique name in every official language.
This is probably ridiculous overkill, but it was a blast to get working. It also leaves the door open for some… shenanigans I've wanted to do for a while.
But enough about the Game Boy. Let's get back to the future.
The 3DS, and what I'm doing now
Recent games have been slightly more complicated, though the complexity is largely in someone else's court. The 3DS uses encryption — real, serious encryption, not baby stuff you can work around by comparing rows of bytes.
When X and Y came out, the encryption still hadn't been broken, so all of veekun's initial data was acquired by… making a Google Docs spreadsheet and asking for volunteers to fill it in. It wasn't great, but it was better than nothing.
This was late 2013, and I suppose it's around when veekun's gentle decline into stagnation began. When X and Y were finally ripped, I was… what was I doing? I guess I was busy at work? For whatever reason, I had barely any involvement in it. Then Omega Ruby and Alpha Sapphire came out, and now everyone was busy, and it took forever just to get stuff like movesets dumped.
Now I'm working on Sun and Moon again. It's not especially hard — much of the basic structure has been preserved since Diamond and Pearl, and a lot of the Omega Ruby and Alpha Sapphire code I wrote works exactly the same with Sun and Moon — but there are a lot of edge cases.
Some changes in X/Y and beyond
The most obvious wrinkle is that the filenames are gone. This has actually been the case since Heart Gold and Soul Silver — all the files now simply have names like /a/0/0/0 and count upwards from there. I don't know the reason for the change, but I assume the original filenames weren't intended to be left in the game in the first place. The files move around in every new pair of games, too, requiring bunches of people to go through the files by hand and note down what each of them appears to be.
Newer games use GARC instead of NARC. I don't know what the G stands for now. (Gooder? Gen 6?) It's basically the same idea, except that now a single GARC archive has two levels of nesting — that is, a GARC contains some number of sub-archives, and each of those in turn contains some number of subfiles. Usually there's only one subfile per subarchive, but I've seen zanier schemes once or twice.
Also, just in case that's not enough levels of containers for you, there are also a number of other single-level containers embedded inside GARCs. They're all very basic and nearly identical: just a list of start and end offsets.
Oh, and some of the data is compressed now. (Maybe that was the case before X/Y? I don't remember.) Compression is fun. Any given data might be compressed with one of two flavors of LZSS, and it seems completely arbitrary what's compressed and what's not. There's no indication of what's compressed or what's not, either; the only "header" that compressed data has is that the first byte is either 0x10 or 0x11, which isn't particularly helpful since plenty of valid data also begins with one of those bytes.
But there was one much bigger practical problem with X and Y, one I'd been dreading for a while. X and Y, you see, use models — which means they don't have any sprites for us to rip at all. And that kind of sucks.
The community's solution has been for a few people (who have screen-capture hardware) to take screenshots and cut them out. It works, but it's not great. The thing I've wanted for a very long time is rips of the actual models.
(Later games went back to including "sprites" that are just static screenshots of the models. Maybe out of kindness to us? Okay, yeah, doubtful. Oh, and those sprites are in the incredibly obtuse ETC1 format, which I had never heard of and needed help to identify, and which I will let you just read about yourself.)
Extracting models
The Pokémon models are an absolute massive chunk of the game data. All the data combined is 3GB; the Pokémon models are a hair under half of that, despite being compressed.
At least this makes them easy to find, since they're all packed into a single GARC file.
That file contains, I don't know, a zillion other files. And many of those files are generic containers, containing more files. And none of these files are named. Of course. It's easy enough to notice that there are nine files per Pokémon, since the sizes follow a rough pattern like the sprites did in Diamond and Pearl. (You'd think that they'd use GARC's two levels of nesting to group the per-Pokémon files together, but apparently not.)
At this point, I had zero experience with 3D — in fact, working on this was my introduction to 3D and Blender — so I didn't get very far on my own. I basically had to wait a few years for other people to figure it out, look at their source code, replicate it myself, and then figure out some of the bits they missed. The one thing I did get myself was texture animations, which are generally used to make Pokémon change expressions — last I saw, no one had gotten those ripped, but I managed it. Hooray. I'm sure someone else has done the same by now.
Anyway, I bring up models because of two very weird things that I never would've guessed in a million years.
One was the mesh data itself. A mesh is just the description of a 3D model's shape — its vertices (points), the edges between vertices, and the faces that fill in the space between the edges.
And, well, those are the three parts to a basic mesh. A few very simple model formats are even just those things written out: a list of vertices (defined by three numbers each, x y z), a list of edges (defined by the two vertices they connect), and a list of faces (defined by their edges).
It should be easy to find models by looking for long lists of triplets of numbers — vertex coordinates. Well, not quite. Pokémon models are stored as compiled shaders.
A shader is a simple kind of program that runs directly on a video card, since video cards tend to be a more appropriate place for doing a bunch of graphics-related math. On a desktop or phone or whatever, you'd usually write a shader as text, then compile it when your program/game runs. In fact, you have to do this, since the compilation is different for each kind of video card.
But Pokémon games only have to worry about one video card: the graphics chip in the 3DS. And there's absolutely no reason to waste time compiling shaders while the game is running, when they could just do it ahead of time and put the compiled shader in the game directly. (Incidentally, the Dolphin emulator recently wrote about how GameCube games do much the same thing.)
So they did that. Thankfully, the compiled shader is much simpler than machine code, and the parts I care about are just the parts that load in the mesh data — which mostly looks like opcodes for "here's some data", followed by some data. It would probably be possible to figure out without knowing anything about the particular graphics chip, but if you didn't know it was supposed to be a shader, you'd be mighty confused by all the mesh data surrounded by weird extra junk that doesn't look at all like mesh data.
The other was skeletal animation. The basic idea is that you want to make a high-resolution model move around, but it would be a huge pain in the ass to describe the movement of every single vertex. Instead, you make an invisible "skeleton" — a branching tree of bones. The bones tend to follow major parts of the body, so they do look like very simple skeletons, with spines and arms and whatnot (though of course skeletons aren't limited only to living creatures). Every vertex attaches to one or more of those bones — a rough first pass of this can be done automatically — and then by animating the much simpler skeleton, vertices will all move to match the bones they're attached to.
The skeleton itself isn't too surprising. It's a tree, whatever; we've seen one of those already, with the DS directory structure. The skeletons and models are in a neutral pose by default: T for bipeds, simply standing on all fours for quadrupeds, etc. All of this is pretty straightforward.
But then there are the animations themselves.
An animation has some number of keyframes which specify a position, rotation, and size for each bone. Animating the skeleton involves smoothly moving each bone from one keyframe's position to the next.
Position, rotation, and size each exist in three dimensions, so there are nine possible values for each keyframe. You might expect a set of nine values, then, times the number of keyframes, times the number of bones.
But no! These animations are stored the other way: each of those nine values is animated separately per bone. Also, each of those nine values can have a different number of keyframes, even for the same bone. Also, each of those nine values is optional, and if it's not animated then its keyframes are simply missing, and there's a set of bitflags indicating which values are present.
Okay, well, you might at least expect that a single value's keyframes are given by a list of numbers, right?
Not quite! Such a set of keyframes has an initial "scale" and "offset", given as single-precision floating point numbers (which are fairly accurate). Each keyframe then gives a "value" as an integer, which is actually the numerator of a fraction whose denominator is 65,535. So the actual value of each keyframe is:
offset + scale * value / 65535
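In code, reading one of those per-value tracks is something like the following. The exact layout here — a float scale, a float offset, a count, then that many 16-bit values — is simplified for illustration; the real format has more bookkeeping around it, but the arithmetic is the point:

import struct

def read_track(data, pos):
    """Decode one animated value's keyframes into plain floats."""
    scale, offset, count = struct.unpack_from('<ffH', data, pos)
    raw = struct.unpack_from(f'<{count}H', data, pos + 10)
    # each stored integer is the numerator of a fraction over 65535
    return [offset + scale * value / 65535 for value in raw]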
Maybe this is a more common scheme than I suspect. Animation does take up an awful lot of space, and this isn't an entirely unreasonable way to squish it down. The fraction thing is just incredibly goofy at first blush. I have no idea how anyone figured out what was going on there. (It's used for texture animation, too.)
Anyway, thanks mostly to other people's hard work, I managed to write a script that can dump models and then play them with a pretty decent in-browser model viewer. I never got around to finishing it, which is a shame, because it took so much effort and it's so close to being really good. (My local copy has texture animation mostly working; the online version doesn't yet.)
Hopefully I will someday, because I think this is pretty dang cool, and there's a lot of interesting stuff that could be done with it. (For example, one request: applying one Pokémon's animations to another Pokémon's model. Hm.)
The one thing that really haunts me about it is the outline effect. It's not actually the effect from the games; I had to approximate it, and there are a few obvious ways it falls flat. I would love to exactly emulate what the games do, but I just don't know what that is. But maybe… maybe there's a chance I can find the compiled shader and figure it out. Maybe. Somehow.
Some annoying edge cases
Let's finish up with some small bumps in the road that are still fresh in my mind.
TMs are still in the Pokémon data, as is compatibility with move tutors. Alas, the lists of what the TMs and tutor moves are are embedded in the code, just like in the Game Boy days. You don't really need to know the TM order, since they have a numbering exposed to the player in-game, and TM compatibility is in that same order… but move tutors have no natural ordering, so you have to either match them up by hand or somehow find the list in the binary.
I had a similar problem with incense items, which each affect the breeding outcome for a specific Pokémon. In Ruby and Sapphire, the incense effects were hardcoded. I don't mean they were a data structure baked into the code; I mean they were actually "if the baby is this species and one parent is holding this incense, do this; otherwise," etc. I spent a good few hours hunting for something similar in later games, to no avail — I'd searched for every permutation of machine code I could think of and come up with nothing. I was about to give up when someone pointed out to me that incense is now a data structure; it's just in the one format I'd forgotten to try searching for. Alas.
Moves have a bunch of metadata, like "status effect inflicted" or "min/max number of turns to last". Trouble is, I'm pretty sure that same information is duplicated within the code for each individual move — most moves get their own code, and there's no single "generic move" function. Which raises the question… what is this metadata actually used for? Is it useful to expose on veekun? Is it guaranteed to be correct? I already know that some of it is a little misleading; for example, Tri Attack is listed as inflicting some mystery one-off status effect, because the format doesn't allow expressing what it actually does (inflict one of burn, freezing, or paralysis at random).
Items have a similar problem: they get a bunch of their own data, but it's not entirely clear what most of it is used for. It's not even straightforward to identify how the game decides which items go in which pocket.
Moves also have flags, and it took some effort to figure out what each of them meant. Sun and Moon added a new flag, and I agonized over it for a while before I was fed the answer: it's an obscure detail relating to move animations. No idea how anyone figured that out.
In Omega Ruby and Alpha Sapphire, there are two lists of item names. They're exactly identical, with one exception: in Korean, the items "PP Up" and "PP Max" have their names written with English letters "PP" in one list but with Hangul in the other list. Why? No idea.
Evolution types are numbered. Method 4 is a regular level up; method 5 is a trade; method 21 is levelling up while knowing a specific move, which is given as a parameter. Cool. But there are two oddities. Karrablast and Shelmet only evolve when traded with each other, but the data doesn't indicate this in any way; they both get the same unique evolution method, but there's no parameter to indicate what they need to be traded with, as you might expect. Also, Shedinja isn't listed as an evolution at all, since it's produced as a side effect of Nincada's evolution (which is listed as a normal level-up). To my considerable chagrin, that means neither of these cases can be ripped 100% automatically.
Pokémon are listed in a different order, depending on context. Sometimes they're identified by species, e.g. Deoxys. Sometimes they're identified by form, e.g. Attack Forme Deoxys. Sometimes they're identified by species and also a separate per-species form number. Sometimes the numbering includes aesthetic-only forms, like Unown, that only affect visuals. But sprites and models both seem to have their own completely separate numberings, which are (of course) baked into the binary.
Incidentally, it turns out that all of the Totem Pokémon in Sun and Moon count as distinct forms! They're just not obtainable. Do I expose them on veekun, then? I guess so?
Encounters are particularly thorny. The data is simple enough: for each map, there's a list of Pokémon that can be encountered by various methods (e.g. walking in grass, fishing, surfing). But each of those Pokémon appears at a different rate, and those rates are somewhere in the code, not in the data. And there are some weird cases like swarms, which have special rules. And there are unique encounters that aren't listed in this data at all, and which veekun has thus never had. And how do you even figure out where a map is anyway, when a named place can span multiple maps, and the encounters are only very slightly different in each map?
Anyway, that's why veekun is taking so long. Also because I've spent several days not working on veekun so I could write this post, which could be much longer but has gone on more than long enough already. I hope some of this was interesting!
Oh, and all my recent code is on the pokedex GitHub. The model extraction stuff isn't up yet, but it will be… eventually? Next time I work on it, maybe?
Post Syndicated from Eevee original https://eev.ee/blog/2017/05/28/introspection/
This month, IndustrialRobot has generously donated in order to ask:
How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?
Whoof. That's incredibly abstract and open-ended — there's a lot I could say, but most of it is hard to turn into words.
The first example to come to mind — and the most conspicuous, at least from where I'm sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it's become all the more pronounced since then.
I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.
Now that I had all the time in the world to work on these things, I… didn't. It turned out they were almost as much of a slog as my job had been!
The problem, I think, was that there was no point.
This was really weird to realize and come to terms with. I do like solving problems for its own sake; it's interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.
But even if I create a really interesting tool, what do I have? I don't have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they'd be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.
Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as "let's solve X problem forever!" — I go to scratch the itch, I do just enough work that it doesn't itch any more, and then I lose interest.
I spent a few months quietly flailing over this minor existential crisis. I'd spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.
Meanwhile, I'd vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might've been too ambitious, despite feeling small, and that might've discouraged me from pursuing other kinds of games earlier.
A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I'm simply disqualified, right? Maybe the same thing applies to games.
Lord knows I had enough trouble when I tried. I'd orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.
I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.
It's been incredibly rewarding — far more so than any "pure" tooling problem I've ever approached. More so than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.
I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should've had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn't tend to bring joy to people, but amateur art can.
I still like doing technical work, but I prefer when it's a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. "Make a library/tool for X" is a nebulous problem that could go in a great many directions; "make a bot that tweets Perlin noise" has a pretty definitive finish line. It was interesting to write a little physics engine, but I would've hated doing it if it weren't for a game I were making and didn't have the clear scope of "do what I need for this game".
It feels like creative work is something I've been wanting to do for a long time. If this were a made-for-TV movie, I would've discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.
That didn't happen. Instead I've found that even something as mundane as having ideas is a skill, and while it's one I enjoy, I've barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.
How do I theme this area? Well, I don't know. How do I think of something? I don't know that either. It's a strange paradox to have an urge to create things but not quite know what those things are.
It's such a new and completely different kind of problem. There's no right answer, or even an answer I can check for "correctness". I can do anything. With no landmarks to start from, it's easy to feel completely lost and just draw blanks.
I've essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven't found them yet. I don't think they're anything that can be told or taught. But I'm starting to get there, and part of it is just accepting that I can't treat these like problems with clear best solutions and clear algorithms to find those solutions.
A particularly glaring irony is that I've had a really tough problem designing abstract spaces, even though that's exactly the kind of architecture I praise in Doom. It's much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.
I suppose it's similar to a struggle I've had with art. I'm drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what's most important. I'm reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would've made the background far too busy, but trees are naturally busy, so how do you represent that?
The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.
It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it's up to you to make some creative leaps. I don't think there's a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.
I think I'm getting a little distracted here, but this is stuff that's been rattling around lately.
If there's a more personal meaning to the tree story, it's that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.
Two and a half years ago, I never would've thought I'd ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don't expect we're capable of, if only we give them a serious shot.
And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip's advice, and on some level I feel like I didn't quite do it myself, even though every stroke was made by my hand. Hell, I don't even look at references nearly as much as I should. It feels like cheating, somehow? I know that's ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I've been doing that for too long with programming. Trust me, it doesn't work quite so well in a brand new field.
I'm getting distracted again!
To answer your actual questions: how do I go about learning about myself? I don't! It happens completely by accident. I'll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.
Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I'm starting to suspect they'd caught on after eight years of watching me lament that I couldn't draw.
I don't know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don't know that I have quite that experience — my grappling usually comes earlier, when I'm still trying to figure the thing out despite not knowing that there's a thing to find out. Once I know it, it's on the table; I can't un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.
This isn't quite 2000 words. Sorry. I've run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.
A few tidbits on networking in games
Post Syndicated from Eevee original https://eev.ee/blog/2017/05/22/a-few-tidbits-on-networking-in-games/
Nova Dasterin asks, via Patreon:
How about do something on networking code, for some kind of realtime game (platformer or MMORPG or something). 😀
Ah, I see. You're hoping for my usual detailed exploration of everything I know about networking code in games.
Well, joke's on you! I don't know anything about networking.
Wait… wait… maybe I know one thing.
Surprise! The thing I know is, roughly, how multiplayer Doom works.
Doom is 100% deterministic. Its random number generator is really a list of shuffled values; each request for a random number produces the next value in the list. There is no seed, either; a game always begins at the first value in the list. Thus, if you play the game twice with exactly identical input, you'll see exactly the same playthrough: same damage, same monster behavior, and so on.
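Here's the idea as a little Python sketch; the class and the table values are mine for illustration, not Doom's actual rndtable or code:

```python
# Toy table-driven RNG in the spirit of Doom's; illustrative values only.
RND_TABLE = [0, 8, 109, 220, 222, 241, 149, 107, 75, 248, 254, 140, 16, 66, 74, 21]

class TableRNG:
    def __init__(self):
        self.index = 0  # no seed: every game starts at the same spot in the table

    def next(self):
        value = RND_TABLE[self.index]
        self.index = (self.index + 1) % len(RND_TABLE)
        return value

# Two peers that make the same number of calls always agree, which is the
# property that makes input-only multiplayer possible in the first place.
a, b = TableRNG(), TableRNG()
assert [a.next() for _ in range(40)] == [b.next() for _ in range(40)]
```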
And that's exactly what a Doom demo is: a file containing a recording of player input. To play back a demo, Doom runs the game as normal, except that it reads input from a file rather than the keyboard.
Multiplayer works the same way. Rather than passing around the entirety of the world state, Doom sends the player's input to all the other players. Once a node has received input from every connected player, it advances the world by one tic. There's no client or server; every peer talks to every other peer.
You can read the code if you want to, but at a glance, I don't think there's anything too surprising here. Only sending input means there's not that much to send, and the receiving end just has to queue up packets from every peer and then play them back once it's heard from everyone. The underlying transport was pluggable (this being the days before we'd even standardized on IP), which complicated things a bit, but the Unix port that's on GitHub just uses UDP. The Doom Wiki has some further detail.
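Sketched in Python, the lockstep loop looks something like this; the names are invented and world.advance stands in for the game's deterministic tick, so treat it as the shape of the thing rather than Doom's real code:

```python
# Sketch of a lockstep peer: buffer input per (tic, peer) and only advance
# the simulation once input for the current tic has arrived from everyone.
class LockstepPeer:
    def __init__(self, peer_ids):
        self.peer_ids = list(peer_ids)
        self.tic = 0
        self.inbox = {}  # (tic, peer_id) -> that peer's input for that tic

    def receive(self, tic, peer_id, player_input):
        self.inbox[(tic, peer_id)] = player_input

    def try_advance(self, world):
        inputs = [self.inbox.get((self.tic, p)) for p in self.peer_ids]
        if any(i is None for i in inputs):
            return False  # still waiting on someone's packet
        world.advance(inputs)  # identical inputs produce identical worlds everywhere
        self.tic += 1
        return True
```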
This approach is very clever and has a few significant advantages. Bandwidth requirements are fairly low, which is important if it happens to be 1993. Bandwidth and processing requirements are also completely unaffected by the size of the map, since map state never touches the network.
Unfortunately, it has some drawbacks as well. The biggest is that, well, sometimes you want to get the world state back in sync. What if a player drops and wants to reconnect? Everyone has to quit and reconnect to one another. What if an extra player wants to join in? It's possible to load a saved game in multiplayer, but because the saved game won't have an actor for the new player, you can't really load it; you'd have to start fresh from the beginning of a map.
It's fairly fundamental that Doom allows you to save your game at any moment… but there's no way to load in the middle of a network game. Everyone has to quit and restart the game, loading the right save file from the command line. And if some players load the wrong save file… I'm not actually sure what happens! I've seen ZDoom detect the inconsistency and refuse to start the game, but I suspect that in vanilla Doom, players would have mismatched world states and their movements would look like nonsense when played back in each other's worlds.
Ah, yes. Having the entire game state be generated independently by each peer leads to another big problem.
Maybe this wasn't as big a deal with Doom, where you'd probably be playing with friends or acquaintances (or coworkers). Modern games have matchmaking that pits you against strangers, and the trouble with strangers is that a nontrivial number of them are assholes.
Doom is a very moddable game, and it doesn't check that everyone is using exactly the same game data. As long as you don't change anything that would alter the shape of the world or change the number of RNG rolls (since those would completely desynchronize you from other players), you can modify your own game however you like, and no one will be the wiser. For example, you might change the light level in a dark map, so you can see more easily than the other players. Lighting doesn't affect the game, only how it's drawn, and it doesn't go over the network, so no one would ever know.
Or you could alter the executable itself! It knows everything about the game state, including the health and loadout of the other players; altering it to show you this information would give you an advantage. Also, all that's sent is input; no one said the input had to come from a human. The game knows where all the other players are, so you could modify it to generate the right input to automatically aim at them. Congratulations; you've invented the aimbot.
I don't know how you can reliably fix these issues. There seems to be an entire underground ecosystem built around playing cat and mouse with game developers. Perhaps the most infamous example is World of Warcraft, where people farm in-game gold as automatically as possible to sell to other players for real-world cash.
Egregious cheating in multiplayer really gets on my nerves; I couldn't bear knowing that it was rampant in a game I'd made. So I will probably not be working on anything with random matchmaking anytime soon.
Let's jump to something a little more concrete and modern.
Starbound is a procedurally generated universe exploration game — like Terraria in space. Or, if you prefer, like Minecraft in space and also flat. Notably, it supports multiplayer, using the more familiar client/server approach. The server uses the same data files as single-player, but it runs as a separate process; if you want to run a server on your own machine, you run the server and then connect to localhost with the client.
I've run a server before, but that doesn't tell me anything about how it works. Starbound is an interesting example because of the existence of StarryPy — a proxy server that can add some interesting extra behavior by intercepting packets going to and from the real server.
That means StarryPy necessarily knows what the protocol looks like, and perhaps we can glean some insights by poking around in it. Right off the bat there's a list of all the packet types and rough shapes of their data.
I modded StarryPy to print out every single decoded packet it received (from either the client or the server), then connected and immediately disconnected. (Note that these aren't necessarily TCP packets; they're just single messages in the Starbound protocol.) Here is my quick interpretation of what happens:
The client and server briefly negotiate a connection. The password, if any, is sent with a challenge and response.
The client sends a full description of its "ship world" — the player's ship, which they take with them to other servers. The server sends a partial description of the planet the player is either on, or orbiting.
From here, the server and client mostly communicate world state in the form of small delta updates. StarryPy doesn't delve into the exact format here, unfortunately. The world basically freezes around you during a multiplayer lag spike, though, so it's safe to assume that the vast bulk of game simulation happens server-side, and the effects are broadcast to clients.
The protocol has specific message types for various player actions: damaging tiles, dropping items, connecting wires, collecting liquids, moving your ship, and so on. So the basic model is that the player can attempt to do stuff with the chunk of the world they're looking at, and they'll get a reaction whenever the server gets back to them.
(I'm dimly aware that some subset of object interactions can happen client-side, but I don't know exactly which ones. The implications for custom scripted objects are… interesting. Actually, those are slightly hellish in general; Starbound is very moddable, but last I checked it has no way to send mods from the server to the client or anything similar, and by default the server doesn't even enforce that everyone's using the same set of mods… so it's possible that you'll have an object on your ship that's only provided by a mod you have but the server lacks, and then who knows what happens.)
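For illustration, here's the bare-bones shape of a logging proxy; this is not StarryPy or its API, just a generic TCP forwarder with placeholder addresses:

```python
# Generic man-in-the-middle logging proxy, in the spirit of what StarryPy does.
# Not StarryPy's code; the addresses below are placeholders.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 21026)  # where the game client connects
SERVER_ADDR = ("127.0.0.1", 21025)  # the real game server

def pump(src, dst, label):
    while True:
        data = src.recv(4096)
        if not data:
            break
        # A real proxy would decode game packets here; this just logs sizes.
        print(f"{label}: {len(data)} bytes")
        dst.sendall(data)

def handle(client):
    server = socket.create_connection(SERVER_ADDR)
    threading.Thread(target=pump, args=(client, server, "client->server"),
                     daemon=True).start()
    pump(server, client, "server->client")

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN_ADDR)
listener.listen(5)
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```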
Hang on, this isn't a video game at all.
Starbound's "fire and forget" approach reminds me a lot of IRC — a protocol I've even implemented, a little bit, kinda. IRC doesn't have any way to match the messages you send to the responses you get back, and success is silent for some kinds of messages, so it's impossible (in the general case) to know what caused an error. The most obvious fix for this would be to attach a message id to messages sent out by the client, and include the same id on responses from the server.
It doesn't look like Starbound has message ids or any other solution to this problem — though StarryPy doesn't document the protocol well enough for me to be sure. The server just sends a stream of stuff it thinks is important, and when it gets a request from the client, it queues up a response to that as well. It's TCP, so the client should get all the right messages, eventually. Some of them might be slightly out of order depending on the order the client does stuff, but that's not a big deal; anyway, the server knows the canonical state.
I bring up IRC because I'm kind of at the limit of things that I know. But one of those things is that IRC is simultaneously very rickety and wildly successful: it's a decade older than Google and still in use. (Some recent offerings are starting to eat its lunch, but those are really because clients are inaccessible to new users and the protocol hasn't evolved much. The problems with the fundamental design of the protocol are only obvious to server and client authors.)
Doom's cheery assumption that the game will play out the same way for every player feels similarly rickety. Obviously it works — well enough that you can go play multiplayer Doom with exactly the same approach right now, 24 years later — but for something as complex as an FPS it really doesn't feel like it should.
So while I don't have enough experience writing multiplayer games to give you a run-down of how to do it, I think the lesson here is that you can get pretty far with simple ideas. Maybe your game isn't deterministic like Doom — although there's no reason it couldn't be — but you probably still have to save the game, or at least restore the state of the world on death/loss/restart, right? There you go: you already have a fragment of a concept of entity state outside the actual entities. Codify that, stick it on the network, and see what happens.
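As a sketch of that "codify the state" idea (the field names here are invented), the same blob can serve as a save file or as a network snapshot:

```python
# Entity state that lives outside the entities themselves: serialize it once,
# then either write it to disk or broadcast it to peers.
import json

def snapshot(entities):
    return json.dumps([
        {"id": e["id"], "x": e["x"], "y": e["y"], "hp": e["hp"]}
        for e in entities
    ]).encode()

def restore(blob):
    return {e["id"]: e for e in json.loads(blob)}

world = [{"id": 1, "x": 3.0, "y": 4.5, "hp": 100}]
assert restore(snapshot(world))[1]["hp"] == 100
```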
I don't know if I'll be doing any significant multiplayer development myself; I don't even play many multiplayer games. But I'd always assumed it would be a nigh-impossible feat of architectural engineering, and I'm starting to think that maybe it's no more difficult than anything else in game dev. Easy to fudge, hard to do well, impossible to truly get right so give up that train of thought right now.
Also now I am definitely thinking about how a multiplayer puzzle-platformer would work.
Predicting a Slot Machine's PRNG
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/predicting_a_sl.html
Wired is reporting on a new slot machine hack. A Russian group has reverse-engineered a particular brand of slot machine — from Austrian company Novomatic — and can simulate and predict the pseudo-random number generator.
The cell phones from Pechanga, combined with intelligence from investigations in Missouri and Europe, revealed key details. According to Willy Allison, a Las Vegas-based casino security consultant who has been tracking the Russian scam for years, the operatives use their phones to record about two dozen spins on a game they aim to cheat. They upload that footage to a technical staff in St. Petersburg, who analyze the video and calculate the machine's pattern based on what they know about the model's pseudorandom number generator. Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative's phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.
"The normal reaction time for a human is about a quarter of a second, which is why they do that," says Allison, who is also the founder of the annual World Game Protection Conference. The timed spins are not always successful, but they result in far more payouts than a machine normally awards: Individual scammers typically win more than $10,000 per day. (Allison notes that those operatives try to keep their winnings on each machine to less than $1,000, to avoid arousing suspicion.) A four-person team working multiple casinos can earn upwards of $250,000 in a single week.
The easy solution is to use a random-number generator that accepts local entropy, like Fortuna. But there's probably no way to easily reprogram those old machines.
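As a minimal illustration of the "accepts local entropy" part, here is a Python sketch that draws outcomes from the operating system's CSPRNG; Fortuna itself does considerably more, and real slot machines are subject to much stricter requirements than this:

```python
# Draw outcomes from OS-provided entropy instead of a fixed-seed PRNG, so
# observing past spins gives no information about future ones.
import secrets

REEL = ["cherry", "bar", "seven", "lemon", "bell"]

def spin():
    # secrets uses the operating system's CSPRNG.
    return [secrets.choice(REEL) for _ in range(3)]

print(spin())
```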
Weekly roundup: National Novelty Writing Month
Post Syndicated from Eevee original https://eev.ee/dev/2016/11/07/weekly-roundup-national-novelty-writing-month/
Inktober is a distant memory.
Now it's time for NaNoWriMo! Almost. I don't have any immediate interest in writing a novel, but I do have plenty of other stuff that needs writing — blog posts, my book, Runed Awakening, etc. So I'm going to try to write 100,000 words this month, spread across whatever.
I'm only measuring, like, works. I'll count this page, as short as it is, because it's still a single self-contained thing that took some writing effort. But no tweets or IRC or the like.
I'm counting with vim's g C-g or wc -w, whichever is more convenient. The former is easier for single files I edit in vim; the latter is easier for multiple files or stuff I edit outside of vim.
I'm making absolutely zero effort to distinguish between English text, code, comments, etc.; whatever the word count is, that's what it is. So code snippets in the book will count, as will markup in blog posts. Runed Awakening is a weird case, but I'm choosing to count it because it's inherently a text-based game, plus it's written in a prose-like language. On the other hand, dialogue for Isaac HD does not count, because it's a few bits of text in what is otherwise just a Lua codebase.
Only daily net change counts. This rule punishes me for editing, but that's the entire point of NaNoWriMo's focus on word count: to get something written rather than linger on a section forever and edit it to death. I tend to do far too much of the latter.
This rule already bit me on day one, where I made some significant progress on Runed Awakening but ended up with a net word count of -762 because it involved some serious refactoring. Oops. Turns out word-counting code is an even worse measure of productivity than line-counting code.
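If I were to automate that bookkeeping, it might look like this rough sketch; the file list and the plain-text log format are my own invention, not part of any real tooling:

```python
# Daily net-change word counting: run wc -w over the tracked files, compare
# against the last logged total, and append today's total to the log.
import datetime
import pathlib
import subprocess

FILES = ["blog-post.md", "book/chapter-pico8.md"]  # placeholder paths

def total_words(paths):
    out = subprocess.run(["wc", "-w", *paths], capture_output=True, text=True)
    # The count is the second-to-last token, for one file or a "total" line.
    return int(out.stdout.split()[-2])

log = pathlib.Path("wordcount.log")
today_total = total_words(FILES)
history = log.read_text().splitlines() if log.exists() else []
previous_total = int(history[-1].split()[-1]) if history else today_total
print(f"net change today: {today_total - previous_total:+d}")
log.write_text("\n".join(history + [f"{datetime.date.today()} {today_total}"]) + "\n")
```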
These rules are specifically crafted to nudge me into working a lot more on my book and Runed Awakening, those two things I'd hoped to get a lot further on in the last three months. And unlike Inktober, blog posts contribute towards my preposterous goal rather than being at odds with it.
With one week down, so far I'm at +8077 words. I got off to a pretty slow (negative, in fact) start, and then spent a day out of action from an ear infection, so I'm a bit behind. Hoping I can still catch up as I get used to this whole "don't rewrite the same paragraph over and over for hours" approach.
art: Last couple ink drawings of Pokémon, hallelujah. I made a montage of them all, too.
I drew Momo (the cat from Google's Halloween doodle game) alongside Isaac and it came out spectacularly well.
I finally posted the loophole commission.
I posted a little "what type am I" meme on Twitter and drew some of the interesting responses. I intended to draw a couple more, but then I got knocked on my ass and my brain stopped working. I still might get back to them later.
blog: I posted an extremely thorough teardown of JavaScript. That might be cheating, but it's okay, because I love cheating.
Wrote a whole lot about Java.
doom: I did another speedmap. I haven't released the last two yet; I want to do a couple more and release them as a set.
blog: I wrote about game accessibility, which touched on those speedmaps.
runed awakening: I realized I didn't need all the complexity of (and fallout caused by) the dialogue extension I was using, so I ditched it in favor of something much simpler. I cleaned up some stuff, fixed some stuff, improved some stuff, and started on some stuff. You know.
book: I'm working on the PICO-8 chapter, since I've actually finished the games it describes. I'm having to speedily reconstruct the story of how I wrote Under Construction, which is interesting. I hope it still comes out like a story and not a tutorial.
As for the three big things, well, they sort of went down the drain. I thought they might; I don't tend to be very good at sticking with the same thing for a long and contiguous block of time. I'm still making steady progress on all of them, though, and I did some other interesting stuff in the last three months, so I'm satisfied regardless.
With November devoted almost exclusively to writing, I'm really hoping I can finally have a draft chapter of the book ready for Patreon by the end of the month. That $4 tier has kinda been languishing, sorry.
Weekly roundup: Inktober 4: A New Hope
Post Syndicated from Eevee original https://eev.ee/dev/2016/11/01/weekly-roundup-inktober-4-a-new-hope/
Inktober is over! Oh my god.
art: Almost the last of the ink drawings of Pokémon, all of them done in fountain pen now. I filled up the sketchbook I'd been using and switched to a 9"×12" one. Much to my surprise, that made the inks take longer.
I did some final work on that loophole commission from a few weeks ago.
irl: I voted, and am quite cross that election news has continued in spite of this fact.
doom: I made a few speedmaps — maps based on random themes and made in an hour (or so). It was a fun and enlightening experience, and I'll definitely do some more of it.
mario maker: One of the level themes I got was "The Wreckage", and I didn't know how to speedmap that in Doom in only an hour, but it sounded like an interesting concept for a Mario level.
I managed to catch up on writing by the end of the month (by cheating slightly), so I'm starting fresh in November. The "three big things" obviously went out the window in favor of Inktober, but I'm okay with that. I've got something planned for this next month that should make up for it, anyway.
On the one dimensional cubic NLS in a critical space
Strong Birkhoff ergodic theorem for subharmonic functions with irrational shift and its application to analytic quasi-periodic cocycles
doi: 10.3934/dcds.2021174
Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them, however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible.
Readers can access Online First articles via the "Online First" tab for the selected journal.
On $ L^1 $ estimates of solutions of compressible viscoelastic system
Yusuke Ishigaki
Department of Mathematics, Tokyo Institute of Technology, Meguro-ku, Ookayama 2-12-1, Tokyo 152-8551, Japan
Received July 2021 Early access November 2021
Fund Project: This work was partially supported by JSPS KAKENHI Grant Number 19J10056
We consider the large time behavior of solutions of the compressible viscoelastic system around a motionless state in a three-dimensional whole space. We show that if the initial data belongs to $ W^{2,1} $ and is sufficiently small in $ H^4\cap L^1 $, then the solutions grow in time at the same rate as $ t^{\frac{1}{2}} $ in $ L^1 $ due to the diffusion wave phenomena of the system caused by the interaction between the sound wave, viscous diffusion and the elastic wave.
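Schematically, writing $ u $ for the perturbation of the solution around the motionless state, the main estimate reads as follows; the precise constants and norms are those of the paper:
\[
  \| u(t) \|_{L^1(\mathbb{R}^3)} \;\sim\; C\, t^{\frac{1}{2}} \qquad \text{as } t \to \infty,
\]
provided the initial data belongs to $ W^{2,1} $ and is sufficiently small in $ H^4 \cap L^1 $.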
Keywords: Compressible viscoelastic system, diffusion wave, large time behavior.
Mathematics Subject Classification: Primary: 76N10, Secondary: 35B40, 35Q35, 76A10.
Citation: Yusuke Ishigaki. On $ L^1 $ estimates of solutions of compressible viscoelastic system. Discrete & Continuous Dynamical Systems, doi: 10.3934/dcds.2021174
Q. Chen and G. Wu, The 3D compressible viscoelastic fluid in a bounded domain, Commun. Math. Sci., 16 (2018), 1303-1323. doi: 10.4310/CMS.2018.v16.n5.a6. Google Scholar
M. E. Gurtin, An Introduction to Continuum Mechanics, Math. Sci. Eng., vol. 158, Academic Press, New York-London, 1981 Google Scholar
D. Hoff and K. Zumbrun, Multi-dimensional diffusion waves for the Navier-Stokes equations of compressible flow, Indiana Univ. Math. J., 44 (1995), 603-676. doi: 10.1512/iumj.1995.44.2003. Google Scholar
X. Hu, Global existence of weak solutions to two dimensional compressible viscoelastic flows, J. Differential Equations, 265 (2018), 3130-3167. doi: 10.1016/j.jde.2018.05.001. Google Scholar
X. Hu and D. Wang, Local strong solution to the compressible viscoelastic flow with large data, J. Differential Equations, 249 (2010), 1179-1198. doi: 10.1016/j.jde.2010.03.027. Google Scholar
X. Hu and D. Wang, Global existence for the multi-dimensional compressible viscoelastic flows, J. Differential Equations, 250 (2011), 1200-1231. doi: 10.1016/j.jde.2010.10.017. Google Scholar
X. Hu and G. Wu, Global existence and optimal decay rates for three-dimensional compressible viscoelastic flows, SIAM J. Math. Anal., 45 (2013), 2815-2833. doi: 10.1137/120892350. Google Scholar
X. Hu and W. Zhao, Global existence of compressible dissipative elastodynamics systems with zero shear viscosity in two dimensions, Arch. Ration. Mech. Anal., 235 (2020), 1177-1243. doi: 10.1007/s00205-019-01443-z. Google Scholar
X. Hu and W. Zhao, Global existence for the compressible viscoelastic system with zero shear viscosity in three dimensions, J. Differential Equations, 268 (2020), 1658-1685. doi: 10.1016/j.jde.2019.09.034. Google Scholar
Y. Ishigaki, Diffusion wave phenomena and $L^p$ decay estimates of solutions of compressible viscoelastic system, J. Differential Equations, 269 (2020), 11195-11230. doi: 10.1016/j.jde.2020.07.020. Google Scholar
S. Kawashima, A. Matsumura and T. Nishida, On the fluid-dynamical approximation to the Boltzmann equation at the level of the Navier-Stokes equation, Commun. Math. Phys., 70 (1979), 97-124. doi: 10.1007/BF01982349. Google Scholar
T. Kobayashi and Y. Shibata, Remark on the rate of decay of solutions to linearized compressible Navier-Stokes equation, Pacific J. Math., 207 (2002), 199-234. doi: 10.2140/pjm.2002.207.199. Google Scholar
Y. Li, R. Wei and Z. Yao, Optimal decay rates for the compressible viscoelastic flows, J. Math. Phys., 57 (2016), 111506, 8 pp. doi: 10.1063/1.4967975. Google Scholar
F.-H. Lin, C. Liu and P. Zhang, On hydrodynamics of viscoelastic fluids, Comm. Pure Appl. Math., 58 (2005), 1437-1471. doi: 10.1002/cpa.20074. Google Scholar
A. Matsumura, T. Nishida and P. Zhang, The initial value problems for the equation of motion of compressible viscous and heat-conductive fluids, Proc. Japan Acad. Ser. A Math. Sci., 55 (1979), 337-342. Google Scholar
X. Pan, J. Xu and P. Zhang, Global existence and optimal decay estimates of the compressible viscoelastic flows in $L^p$ critical spaces, Discrete Contin. Dyn. Syst., 39 (2019), 2021-2057. doi: 10.3934/dcds.2019085. Google Scholar
J. Qian, Initial boundary value problems for the compressible viscoelastic fluid, J. Differential Equations, 250 (2011), 848-865. doi: 10.1016/j.jde.2010.07.026. Google Scholar
J. Qian and Z. Zhang, Global well-posedness for compressible viscoelastic fluids near equilibrium, Arch. Ration. Mech. Anal., 198 (2010), 835-868. doi: 10.1007/s00205-010-0351-5. Google Scholar
Y. Shibata, On the rate of decay of solutions to linear viscoelastic equation, Math. Methods Appl. Sci., 23 (2000), 203-226. doi: 10.1002/(SICI)1099-1476(200002)23:3<203::AID-MMA111>3.0.CO;2-M. Google Scholar
T. C. Sideris and B. Thomases, Global existence for 3D incompressible isotropic elastodynamics via the incompressible limit, Comm. Pure Appl. Math., 58 (2005), 750-788. doi: 10.1002/cpa.20049. Google Scholar
R. Wei, Y. Li and Z. Yao, Decay of the compressible viscoelastic flows, Commun. Pure Appl. Anal., 15 (2016), 1603-1624. doi: 10.3934/cpaa.2016004. Google Scholar
G. Wu, Z. Gao and Z. Tan, Time decay rates for the compressible viscoelastic flows, J. Math. Anal. Appl., 452 (2017), 990-1004. doi: 10.1016/j.jmaa.2017.03.044. Google Scholar
F. Xu, X. Zhang, Y. Wu and L. Liu, The optimal convergence rates for the multi-dimensional compressible viscoelastic flows, ZAMM Z. Angew. Math. Mech., 96 (2016), 1490-1504. doi: 10.1002/zamm.201500095. Google Scholar
Geonho Lee, Sangdong Kim, Young-Sam Kwon. Large time behavior for the full compressible magnetohydrodynamic flows. Communications on Pure & Applied Analysis, 2012, 11 (3) : 959-971. doi: 10.3934/cpaa.2012.11.959
Zhong Tan, Yong Wang, Fanhui Xu. Large-time behavior of the full compressible Euler-Poisson system without the temperature damping. Discrete & Continuous Dynamical Systems, 2016, 36 (3) : 1583-1601. doi: 10.3934/dcds.2016.36.1583
Shifeng Geng, Lina Zhang. Large-time behavior of solutions for the system of compressible adiabatic flow through porous media with nonlinear damping. Communications on Pure & Applied Analysis, 2014, 13 (6) : 2211-2228. doi: 10.3934/cpaa.2014.13.2211
Weike Wang, Xin Xu. Large time behavior of solution for the full compressible navier-stokes-maxwell system. Communications on Pure & Applied Analysis, 2015, 14 (6) : 2283-2313. doi: 10.3934/cpaa.2015.14.2283
Zhong Tan, Yong Wang, Xu Zhang. Large time behavior of solutions to the non-isentropic compressible Navier-Stokes-Poisson system in $\mathbb{R}^{3}$. Kinetic & Related Models, 2012, 5 (3) : 615-638. doi: 10.3934/krm.2012.5.615
Hiroshi Takeda. Large time behavior of solutions for a nonlinear damped wave equation. Communications on Pure & Applied Analysis, 2016, 15 (1) : 41-55. doi: 10.3934/cpaa.2016.15.41
Hai-Yang Jin. Boundedness and large time behavior in a two-dimensional Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity. Discrete & Continuous Dynamical Systems, 2018, 38 (7) : 3595-3616. doi: 10.3934/dcds.2018155
Martin Burger, Marco Di Francesco. Large time behavior of nonlocal aggregation models with nonlinear diffusion. Networks & Heterogeneous Media, 2008, 3 (4) : 749-785. doi: 10.3934/nhm.2008.3.749
Kin Ming Hui, Soojung Kim. Asymptotic large time behavior of singular solutions of the fast diffusion equation. Discrete & Continuous Dynamical Systems, 2017, 37 (11) : 5943-5977. doi: 10.3934/dcds.2017258
Joana Terra, Noemi Wolanski. Large time behavior for a nonlocal diffusion equation with absorption and bounded initial data. Discrete & Continuous Dynamical Systems, 2011, 31 (2) : 581-605. doi: 10.3934/dcds.2011.31.581
Junyong Eom, Kazuhiro Ishige. Large time behavior of ODE type solutions to nonlinear diffusion equations. Discrete & Continuous Dynamical Systems, 2020, 40 (6) : 3395-3409. doi: 10.3934/dcds.2019229
Marco Di Francesco, Yahya Jaafra. Multiple large-time behavior of nonlocal interaction equations with quadratic diffusion. Kinetic & Related Models, 2019, 12 (2) : 303-322. doi: 10.3934/krm.2019013
Jie Zhao. Large time behavior of solution to quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2020, 40 (3) : 1737-1755. doi: 10.3934/dcds.2020091
Ahmed Bonfoh, Cyril D. Enyi. Large time behavior of a conserved phase-field system. Communications on Pure & Applied Analysis, 2016, 15 (4) : 1077-1105. doi: 10.3934/cpaa.2016.15.1077
Zhenhua Guo, Wenchao Dong, Jinjing Liu. Large-time behavior of solution to an inflow problem on the half space for a class of compressible non-Newtonian fluids. Communications on Pure & Applied Analysis, 2019, 18 (4) : 2133-2161. doi: 10.3934/cpaa.2019096
Huicheng Yin, Lin Zhang. The global existence and large time behavior of smooth compressible fluid in an infinitely expanding ball, Ⅱ: 3D Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2018, 38 (3) : 1063-1102. doi: 10.3934/dcds.2018045
Peng Jiang. Global well-posedness and large time behavior of classical solutions to the diffusion approximation model in radiation hydrodynamics. Discrete & Continuous Dynamical Systems, 2017, 37 (4) : 2045-2063. doi: 10.3934/dcds.2017087
Genglin Li, Youshan Tao, Michael Winkler. Large time behavior in a predator-prey system with indirect pursuit-evasion interaction. Discrete & Continuous Dynamical Systems - B, 2020, 25 (11) : 4383-4396. doi: 10.3934/dcdsb.2020102
Weike Wang, Yucheng Wang. Global existence and large time behavior for the chemotaxis–shallow water system in a bounded domain. Discrete & Continuous Dynamical Systems, 2020, 40 (11) : 6379-6409. doi: 10.3934/dcds.2020284
Chao Deng, Tong Li. Global existence and large time behavior of a 2D Keller-Segel system in logarithmic Lebesgue spaces. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 183-195. doi: 10.3934/dcdsb.2018093
Multiple nontrivial solutions to a $p$-Kirchhoff equation
Existence and nonuniqueness of homoclinic solutions for second-order Hamiltonian systems with mixed nonlinearities
January 2016, 15(1): 73-90. doi: 10.3934/cpaa.2016.15.73
Large-time behavior of liquid crystal flows with a trigonometric condition in two dimensions
Jishan Fan 1, and Fei Jiang 2,
Department of Applied Mathematics, Nanjing Forestry University, Nanjing, 210037
College of Mathematics and Computer Science, Fuzhou University, Fuzhou, 361000
Received January 2015 Revised August 2015 Published December 2015
In this paper, we study the large-time behavior of weak solutions to the initial-boundary problem arising in a simplified Ericksen-Leslie system for nonhomogeneous incompressible flows of nematic liquid crystals with a transformation condition of trigonometric functions (called the trigonometric condition for simplicity) posed on the initial direction field in a bounded domain $\Omega\subset \mathbb{R}^2$. We show that the kinetic energy and direction field converge to zero and an equilibrium state, respectively, as time goes to infinity. Further, if the initial density is away from vacuum and bounded, then the density, velocity and direction fields decay exponentially to an equilibrium state. In addition, we show that the weak solutions of the corresponding compressible flows converge to an equilibrium state.
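Schematically, writing $\rho$, $u$ and $d$ for the density, velocity and direction field and $(\rho_\infty, 0, d_\infty)$ for the equilibrium state, the exponential decay statement takes the form
\[
  \big\| \big(\rho - \rho_\infty,\; u,\; d - d_\infty\big)(t) \big\| \;\le\; C e^{-c t}, \qquad t \geq 0,
\]
for some constants $C, c > 0$, in the norms specified in the paper.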
Keywords: Liquid crystals, weak solutions, exponential decay, Navier-Stokes equations, large-time behavior.
Mathematics Subject Classification: Primary: 35B41, 35Q35, 76A15; Secondary: 76D0.
Citation: Jishan Fan, Fei Jiang. Large-time behavior of liquid crystal flows with a trigonometric condition in two dimensions. Communications on Pure & Applied Analysis, 2016, 15 (1) : 73-90. doi: 10.3934/cpaa.2016.15.73
M. A. Abdallah, F. Jiang and Z. Tan, Decay estimates for isentropic compressible magnetohydrodynamic equations in bounded domain, Acta Math. Sci. Ser. B Engl. Ed., 32 (2012), 2211-2220. doi: 10.1016/S0252-9602(12)60171-4. Google Scholar
S. J. Ding, J. R. Huang, F. G. Xia, H. Y. Wen and R. Z. Zi, Incompressible limit of the compressible nematic liquid crystal flow, J. Funct. Anal., 264 (2013), 1711-1756. doi: 10.1016/j.jfa.2013.01.011. Google Scholar
E. Feireisl and H. Petzeltová, Large-time behaviour of solutions to the Navier-Stokes equations of compressible flow, Arch. Ration. Mech. Anal., 150 (1999), 77-96. doi: 10.1007/s002050050181. Google Scholar
M. Grasselli and H. Wu, Long-time behavior for a hydrodynamic model on nematic liquid crystal flows with asymptotic stabilizing boundary condition and external force, SIAM J. Math. Anal., 45 (2013), 965-1002. doi: 10.1137/120866476. Google Scholar
J. L. Hineman and C. Y. Wang, Well-posedness of Nematic liquid crystal flow in $L_{u l o c}^3(R^3)$, Arch. Ration. Mech. Anal., 210 (2013), 177-218. doi: 10.1007/s00205-013-0643-7. Google Scholar
M. C. Hong, Global existence of solutions of the simplified Ericksen-Leslie system in dimension two, Calc. Var. Partial Differential Equations, 40 (2011), 15-36. doi: 10.1007/s00526-010-0331-5. Google Scholar
X. P. Hu and H. Wu, Global solution to the three-dimensional compressible flow of liquid crystals, SIAM J. Math. Analysis, 252 (2013), 2678-2699. doi: 10.1137/120898814. Google Scholar
X. P. Hu and H. Wu, Long-time dynamics of the nonhomogeneous incompressible flow of nematic liquid crystals, Commun. Math. Sci., 11 (2013), 779-806. doi: 10.4310/CMS.2013.v11.n3.a6. Google Scholar
T. Huang, C. Y. Wang and H. Y. Wen, Blow up criterion for compressible nematic liquid crystal flows in dimension three, Arch. Ration. Mech. Anal., 204 (2012), 285-311. doi: 10.1007/s00205-011-0476-1. Google Scholar
T. Huang, C. Y. Wang and H. Y. Wen, Strong solutions of the compressible nematic liquid crystal flow, J. Differential Equations, 252 (2012), 2222-2265. doi: 10.1016/j.jde.2011.07.036. Google Scholar
F. Jiang, S. Jiang and D. H. Wang, On multi-dimensional compressible flows of nematic liquid crystals with large initial energy in a bounded domain, J. Funct. Anal., 265 (2013), 3369-3397. doi: 10.1016/j.jfa.2013.07.026. Google Scholar
F. Jiang, S. Jiang, and D. H. Wang, Global weak solutions to the equations of compressible flow of nematic liquid crystals in two dimensions, Arch. Ration. Mech. Anal., 214 (2014), 403-451. doi: 10.1007/s00205-014-0768-3. Google Scholar
F. Jiang and Z. Tan, Global weak solution to the flow of liquid crystals system, Math. Methods Appl. Sci., 32 (2009), 2243-2266. doi: 10.1002/mma.1132. Google Scholar
F. Jiang and Z. Tan, On the domain dependence of solutions to the Navier-Stokes equations of a two-dimensional compressible flow, Math. Methods Appl. Sci., 32 (2009), 2350-2367. doi: 10.1002/mma.1138. Google Scholar
J. Li, Z. H. Xu and J. W. Zhang, Global well-posedness with large oscillations and vacuum to the three-dimensional equations of compressible nematic liquid crystal flows, preprint. Google Scholar
J. K. Li, Global strong and weak solutions to inhomogeneous nematic liquid crystal flow in two dimensions, Nonlinear Anal., 99 (2014), 80-94. doi: 10.1016/j.na.2013.12.023. Google Scholar
X. L. Li and D. H. Wang, Global solution to the incompressible flow of liquid crystals, J. Differential Equations, 252 (2012), 745-767. doi: 10.1016/j.jde.2011.08.045. Google Scholar
F. H. Lin, Nonlinear theory of defects in nematic liquid crystals: phase transition and flow phenomena, Comm. Pure Appl. Math., 42 (1989), 789-814. doi: 10.1002/cpa.3160420605. Google Scholar
F. H. Lin, J. Y. Lin and C. Y. Wang, Liquid crystal flows in two dimensions, Arch. Rational Mech. Anal., 197 (2010), 297-336. doi: 10.1007/s00205-009-0278-x. Google Scholar
F. H. Lin and C. Liu, Nonparabolic dissipative systems modeling the flow of liquid crystals, Comm. Pure Appl. Math., 48 (1995), 501-537. doi: 10.1002/cpa.3160480503. Google Scholar
F. H. Lin and C. Liu, Partial regularity of the dynamic system modeling the flow of liquid crystals, Discrete Cont. Dyn. S., 2 (1996), 1-22. Google Scholar
F. H. Lin and C. Y. Wang, Global existence of weak solutions of the nematic liquid crystal flow in dimensions three, preprint. Google Scholar
F. H. Lin and C. Y. Wang, Recent developments of analysis for hydrodynamic flow of nematic liquid crystals, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 372 (2014), 20130361, 18 pp. Google Scholar
J. Y. Lin, B. S. Lai and C. Y. Wang, Global finite energy weak solutions to the compressible nematic liquid crystal flow in dimension three, SIAM J. Math. Anal., 47 (2015), 2952-2983. doi: 10.1137/15M1007665. Google Scholar
P. Lions, Mathematical Topics in Fluid Mechanics: Incompressible models, Oxford University Press, New York, 1996. Google Scholar
Q. Liu, On the temporal decay of solutions to the two-dimensional nematic liquid crystal flows, preprint. Google Scholar
X. G. Liu and Z. Y. Zhang, Existence of the flow of liquid crystals system, Chinese Ann. Math. Ser. A, 30 (2009), 1-20. Google Scholar
D. G. Matteis and G. E. Virga, Director libration in nematoacoustics, Physical Review E, 83 (2011), 011703. doi: 10.1103/PhysRevE.83.011703. Google Scholar
A. Novotnỳ and I. Straškraba, Introduction to the Mathematical Theory of Compressible Flow, Oxford University Press, Oxford, 2004. Google Scholar
M. Schonbek, $L^2$ decay for weak solutions of the Navier-Stokes equations, Arch. Ration. Mech. Anal., 88 (1985), 209-222. doi: 10.1007/BF00752111. Google Scholar
L. Simon, Asymptotics for a class of nonlinear evolution equation, with applications to geometri problems, Ann. of Math.(2), 118 (1983), 525-571. doi: 10.2307/2006981. Google Scholar
C. Y. Wang, Well-posedness for the heat flow of harmonic maps and the liquid crystal flow with rough initial data, Arch. Ration. Mech. Anal., 200 (2011), 1-19. doi: 10.1007/s00205-010-0343-5. Google Scholar
C. Y. Wang and X. Xu, On the rigidity of nematic liquid crystal flow on $\mathbbS^2$, J. Funct. Anal., 266 (2014), 5360-5376. doi: 10.1016/j.jfa.2014.02.023. Google Scholar
D. H. Wang and C. Yu, Global weak solution and large-time behavior for the compressible flow of liquid crystals, Arch. Ration. Mech. Anal., 204 (2012), 881-915. doi: 10.1007/s00205-011-0488-x. Google Scholar
H. Y. Wen and S. J. Ding, Solutions of incompressible hydrodynamic flow of liquid crystals, Nonlinear Anal. Real World Appl., 12 (2011), 1510-1531. doi: 10.1016/j.nonrwa.2010.10.010. Google Scholar
H. Wu, Long-time behavior for nonlinear hydrodynamic system modeling the nematic liquid crystal flows, Discrete Contin. Dyn. Syst., 26 (2010), 379-396. doi: 10.3934/dcds.2010.26.379. Google Scholar
H. Wu, X. Xu and C. Liu, Asymptotic behavior for a nematic liquid crystal model with different kinematic transport properties, Calc. Var. Partial Differential Equations, 45 (2012), 319-345. doi: 10.1007/s00526-011-0460-5. Google Scholar
Y. Zhou, J. S. Fan and G. Nakamura, Global strong solution to the density-dependent 2-D liquid crystal flows, Abstr. Appl. Anal., Art. ID 947291 (2013), 5pp. Google Scholar
Takeshi Taniguchi. The exponential behavior of Navier-Stokes equations with time delay external force. Discrete & Continuous Dynamical Systems, 2005, 12 (5) : 997-1018. doi: 10.3934/dcds.2005.12.997
Teng Wang, Yi Wang. Large-time behaviors of the solution to 3D compressible Navier-Stokes equations in half space with Navier boundary conditions. Communications on Pure & Applied Analysis, 2021, 20 (7&8) : 2811-2838. doi: 10.3934/cpaa.2021080
Xinhua Zhao, Zilai Li. Asymptotic behavior of spherically or cylindrically symmetric solutions to the compressible Navier-Stokes equations with large initial data. Communications on Pure & Applied Analysis, 2020, 19 (3) : 1421-1448. doi: 10.3934/cpaa.2020052
Yang Liu. Global existence and exponential decay of strong solutions to the cauchy problem of 3D density-dependent Navier-Stokes equations with vacuum. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1291-1303. doi: 10.3934/dcdsb.2020163
Huicheng Yin, Lin Zhang. The global existence and large time behavior of smooth compressible fluid in an infinitely expanding ball, Ⅱ: 3D Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2018, 38 (3) : 1063-1102. doi: 10.3934/dcds.2018045
Qiwei Wu, Liping Luan. Large-time behavior of solutions to unipolar Euler-Poisson equations with time-dependent damping. Communications on Pure & Applied Analysis, 2021, 20 (3) : 995-1023. doi: 10.3934/cpaa.2021003
Joanna Rencławowicz, Wojciech M. Zajączkowski. Global regular solutions to the Navier-Stokes equations with large flux. Conference Publications, 2011, 2011 (Special) : 1234-1243. doi: 10.3934/proc.2011.2011.1234
Peter E. Kloeden, José Valero. The Kneser property of the weak solutions of the three dimensional Navier-Stokes equations. Discrete & Continuous Dynamical Systems, 2010, 28 (1) : 161-179. doi: 10.3934/dcds.2010.28.161
Marco Di Francesco, Yahya Jaafra. Multiple large-time behavior of nonlocal interaction equations with quadratic diffusion. Kinetic & Related Models, 2019, 12 (2) : 303-322. doi: 10.3934/krm.2019013
Xin Zhong. Global strong solution and exponential decay for nonhomogeneous Navier-Stokes and magnetohydrodynamic equations. Discrete & Continuous Dynamical Systems - B, 2021, 26 (7) : 3563-3578. doi: 10.3934/dcdsb.2020246
Yuning Liu, Wei Wang. On the initial boundary value problem of a Navier-Stokes/$Q$-tensor model for liquid crystals. Discrete & Continuous Dynamical Systems - B, 2018, 23 (9) : 3879-3899. doi: 10.3934/dcdsb.2018115
Kuijie Li, Tohru Ozawa, Baoxiang Wang. Dynamical behavior for the solutions of the Navier-Stokes equation. Communications on Pure & Applied Analysis, 2018, 17 (4) : 1511-1560. doi: 10.3934/cpaa.2018073
Yuming Chu, Yihang Hao, Xiangao Liu. Global weak solutions to a general liquid crystals system. Discrete & Continuous Dynamical Systems, 2013, 33 (7) : 2681-2710. doi: 10.3934/dcds.2013.33.2681
Xiaopeng Zhao, Yong Zhou. Well-posedness and decay of solutions to 3D generalized Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 795-813. doi: 10.3934/dcdsb.2020142
Zdeněk Skalák. On the asymptotic decay of higher-order norms of the solutions to the Navier-Stokes equations in R3. Discrete & Continuous Dynamical Systems - S, 2010, 3 (2) : 361-370. doi: 10.3934/dcdss.2010.3.361
Takeshi Taniguchi. The existence and decay estimates of the solutions to $3$D stochastic Navier-Stokes equations with additive noise in an exterior domain. Discrete & Continuous Dynamical Systems, 2014, 34 (10) : 4323-4341. doi: 10.3934/dcds.2014.34.4323
Zhong Tan, Yong Wang, Xu Zhang. Large time behavior of solutions to the non-isentropic compressible Navier-Stokes-Poisson system in $\mathbb{R}^{3}$. Kinetic & Related Models, 2012, 5 (3) : 615-638. doi: 10.3934/krm.2012.5.615
Daniel Coutand, J. Peirce, Steve Shkoller. Global well-posedness of weak solutions for the Lagrangian averaged Navier-Stokes equations on bounded domains. Communications on Pure & Applied Analysis, 2002, 1 (1) : 35-50. doi: 10.3934/cpaa.2002.1.35
Daniel Pardo, José Valero, Ángel Giménez. Global attractors for weak solutions of the three-dimensional Navier-Stokes equations with damping. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3569-3590. doi: 10.3934/dcdsb.2018279
Fang Li, Bo You, Yao Xu. Dynamics of weak solutions for the three dimensional Navier-Stokes equations with nonlinear damping. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4267-4284. doi: 10.3934/dcdsb.2018137