GigaHedron Scientific Computing
On Fractional Calculus
September 5, 2018 R. Herrmann
Fractional calculus provides us with a set of axioms and methods to extend the concept of a derivative operator from integer order n to arbitrary order α, where α is a real or complex value.
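For orientation, one widely used realization (the post itself does not fix a specific definition, so this particular choice is an illustrative assumption) is the Riemann-Liouville form,
$$ {}_aD_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_a^t \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau, \qquad n-1 < \alpha < n, $$
which recovers the ordinary n-th derivative in the limit α → n.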
Although this concept has been discussed since the days of Leibniz and has occupied the great mathematicians of their times ever since, hardly any other research area has resisted direct application for so many centuries. Consequently, Abel's treatment of the tautochrone problem from 1823 stood for a long time as a singular example of an application of fractional calculus.
Not until Mandelbrot's work on fractal geometry in the early 1980s did the subject attract the interest of physicists, triggering a first wave of publications on fractional Brownian motion and anomalous diffusion processes. These works had only minimal impact on the progress of traditional physics, however, because the results obtained could also be derived using classical methods.
This situation changed drastically with the progress made in the area of fractional wave equations during recent years. Within this process, new questions in fundamental physics have been raised which cannot be formulated adequately using traditional methods. Consequently, a new research area has emerged, which allows for new insights and intriguing results using new methods and approaches.
The interest in fractional wave equations arose in the year 2000 with a publication by Raspini. He demonstrated that a 3-fold factorization of the Klein-Gordon equation leads to a fractional Dirac equation which contains fractional derivative operators of order α = 2/3, and furthermore that the resulting γ-matrices obey an extended Clifford algebra.
To state this result more precisely: the extension of Dirac's linearization procedure, which determines the correct coupling of an SU(2) symmetric charge, to a 3-fold factorization of the d'Alembert operator leads to a fractional wave equation with an inherent SU(3) symmetry. This symmetry is formally deduced by the factorization procedure. In contrast to this formal derivation, a standard Yang-Mills theory is merely a recipe for coupling a phenomenologically deduced SU(3) symmetry.
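Schematically (this compressed notation is ours, not Raspini's original), where Dirac's procedure linearizes the d'Alembert operator as the square of a first-order operator, $\Box \sim (\gamma^\mu \partial_\mu)^2$, the 3-fold factorization seeks matrices satisfying
$$ \Box \sim \left( \gamma^\mu \, \partial_\mu^{2/3} \right)^3, $$
so the γ-matrices can no longer obey the usual anticommutation relations but must fulfill a cubic, i.e. extended, Clifford algebra.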
In 2005 we calculated algebraically the Casimir operators and multiplets of the fractional extension of the standard rotation group SO(n). This may be interpreted as a first approach to a fractional generalization of a standard Lie algebra and as the first nontrivial application of fractional calculus in multidimensional Euclidean space. The classification scheme derived was used for a successful description of the charmonium spectrum. The derived symmetry was used to predict the exact masses of Y(4260) and X(4664), which were later confirmed experimentally.
In 2007 we applied the concept of local gauge invariance to fractional free fields and derived the exact interaction form in first order of the coupling constant. The fractional analogue of the normal Zeeman effect was calculated, and as a first application a mass formula was presented which reproduces the masses of the baryon spectrum with an accuracy better than 1%.
It has been demonstrated that the concept of local gauge invariance determines the exact form of this interaction, which in lowest order coincides with the derived group chain for the fractional rotation group.
Furthermore, we investigated the transformation properties of the fractional Schrödinger equation under rotations, with the result that the particles described carry an additional intrinsic mixed rotational and translational degree of freedom, which we named fractional spin. As a consequence, the transformation properties of a fractional Schrödinger equation are closely related to those of a standard Pauli equation.
Since then, the investigation of the fractional rotation group within the framework of fractional group theory has led to a vast amount of interesting results, e.g. a theoretical foundation of magic numbers in atomic nuclei and metallic clusters.
Besides group theoretical methods, the application of fractional derivatives in multidimensional space and the increasing importance of numerical approaches are major developments of recent years.
Furthermore, as long as the fractional derivative was considered the inverse of a fractional integral, which is nonlocal per se, its nonlocality was a common paradigm. Recent years, however, have seen an increasing number of alternative approaches which are not necessarily founded on nonlocality.
Another growing area of research is the investigation of genetic differential equations with variable-order fractional derivatives, based on an idea of Samko and Ross, where the form and type of a differential operator changes with time or space, respectively, emphasizing evolutionary aspects of dynamic behavior.
The objective of our research program is the realization of a fractional field theory which describes the interaction of particles in a more stringent and accurate way than current theories do. A major contribution in this research area is fractional group theory, which allows problems to be expressed and solved in a very elegant way that cannot be achieved sufficiently using traditional methods.
One reason for the success of this concept is the strategy of interpreting concrete experimental data and strictly verifying the theoretical results against experimental findings.
There are still many open questions and problems to solve on the way to a successful universal fractional quantum field theory. The concept we follow is a promising strategy to reach that goal.
Below follows a list of available preprints and reviewed articles.
Uniqueness of the fractional derivative definition – The Riesz fractional derivative as an example
April 3, 2013 R. Herrmann
For the Riesz fractional derivative, besides the well-known integral representation, two new differential representations are presented which emphasize the local aspects of a fractional derivative. The consequences for a valid solution of the fractional Schrödinger equation are discussed.
download: arXiv:1303.2939
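For context (this standard characterization is ours, not part of the abstract), the Riesz fractional derivative acts in Fourier space as
$$ \mathcal{F}\!\left[ \frac{\partial^\alpha}{\partial |x|^\alpha} f \right](k) = -|k|^\alpha \hat{f}(k), $$
i.e. as a symmetric generalization of the second derivative, which is recovered for α = 2.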
Fractional Calculus – An Introduction for Physicists – 1st Edition
From the cover:
Fractional calculus is undergoing rapid and ongoing development. We can already recognize that, within its framework, new concepts and strategies emerge which lead to new challenging insights and surprising correlations between different branches of physics.
This book is an invitation both to the interested student and to the professional researcher. It presents a thorough introduction to the basics of fractional calculus and guides the reader directly to the current state-of-the-art physical interpretation. It is also devoted to the application of fractional calculus to physical problems, in the subjects of classical mechanics, friction, damping, oscillations, group theory, quantum mechanics, nuclear physics, and hadron spectroscopy up to quantum field theory.
Fractional Calculus – An Introduction for Physicists
by Richard Herrmann,
World Scientific Publishing, Singapore, February 2011, reprinted 2012, 276 pp, 6 x 9 in.
"The book is a solid introduction to fractional calculus that contains, in particular an elucidating section on geometric interpretation of fractional operators… the bulk of the book concentrates on aspects of fractional calculus related to symmetries in quantum mechanics…what is covered is presented in an authoritative, solid style and actually provides very entertaining reading…Overall, Fractional Calculus is an affordable and valuable introduction to the field that will appeal to physicists interested in scientific what-ifs…"
Ralf Metzler, Physics Today
For full details on this review, please visit: Physics Today 65(2), (2012) 55–56;
doi: 10.1063/PT.3.1443
"The book has the property that derived results are directly compared with experimental findings. As a consequence, the reader is guided and encouraged to apply the fractional calculus approach in her/his research area. The reviewer strongly recommends this book for beginners as well as specialists in the fields of physics, mathematics and complex adaptive systems."
E. Ahmed, Zentralblatt MATH
For full details on this review, please visit: Zentralblatt MATH (2012), Zbl 1232.26006
"…the first three chapters actually appear very helpful at the graduate level. Each chapter has a careful precis at the start. There are many analyses illustrating outcomes of fractional analyses…If this [fractional calculus] is the field of your research then this book is essential with numerous references…"
J. E. Caroll, Contemporary Physics
For full details on this review, please visit: Contemporary Physics 53(2), (2012), 187–188; doi:10.1080/00107514.2011.648957
Numerical solution of the fractional quantum mechanical harmonic oscillator based on the Riemann and Caputo derivative
Based on the Riemann and Caputo definitions of the fractional derivative, we tabulate the lowest n = 31 energy levels and generate graphs of the occupation probability of the fractional quantum mechanical harmonic oscillator with a precision of 32 digits for 0.50 < α < 2.00, which corresponds to the transition from U(1) to SO(3).
reference: Gam. Ori. Chron. Phys. (2013) 1(1) 13-176
The fractional Schroedinger equation and the infinite potential well – numerical results using the Riesz derivative
Based on the Riesz definition of the fractional derivative, the fractional Schrödinger equation with an infinite-well potential is investigated. First it is shown analytically that the solutions of the free fractional Schrödinger equation are not eigenfunctions, but good approximations for large k and for α ≈ 2. The lowest eigenfunctions are then calculated numerically and an approximate analytic formula for the level spectrum is derived.
download: arXiv:1210.4410 [math-ph]
reference: Gam. Ori. Chron. Phys. (2013) 1(1) 1-12
Curvature interaction in collective space
November 14, 2012 R. Herrmann
For the Riemannian space, built from the collective coordinates used within nuclear models, an additional interaction with the metric is investigated, using the collective equivalent to Einstein's curvature scalar. The coupling strength is determined using a fit with the AME2003 ground state masses. An extended finite-range droplet model including curvature is introduced, which generates significant improvements for light nuclei and nuclei in the trans-fermium region.
download: arXiv:0801.0298 [nucl-th] [physics.gen-ph]
reference: International Journal of Modern Physics E (2012) 21 1250103
Infrared spectroscopy of diatomic molecules – a fractional calculus approach
September 19, 2012 R. Herrmann
The eigenvalue spectrum of the fractional quantum harmonic oscillator is calculated by numerically solving the fractional Schrödinger equation based on the Riemann and Caputo definitions of a fractional derivative. The fractional approach allows a smooth transition between vibrational and rotational type spectra, which is shown to be an appropriate tool to analyze the IR spectra of diatomic molecules.
download: arXiv:1209.1630 [physics.gen-ph]
reference: International Journal of Modern Physics B (2013) 27(6) 1350019
Covariant fractional extension of the modified Laplace-operator used in 3D-shape recovery
Extending the Liouville-Caputo definition of a fractional derivative to a nonlocal covariant generalization of arbitrary bound operators acting on multidimensional Riemannian spaces, an appropriate approach for the 3D-shape recovery of aperture-afflicted 2D slide sequences is proposed. We demonstrate that the step from a local to a nonlocal algorithm yields an order of magnitude in accuracy, and the use of the specific fractional approach an additional factor of 2 in the accuracy of the derived results.
download: arXiv:1111.1311v1 [cs.CV]
reference: Fract. Calc. Appl. Anal. (2012) 15(2) 332-343
Common aspects of q-deformed Lie algebras and fractional calculus
Fractional calculus and q-deformed Lie algebras are closely related. Both concepts expand the scope of standard Lie algebras to describe generalized symmetries. A new class of fractional q-deformed Lie algebras is proposed, which for the first time allows a smooth transition between different Lie algebras. For the fractional harmonic oscillator, the corresponding fractional q-number is derived. It is shown that the resulting energy spectrum is an appropriate tool to describe e.g. the ground state spectra of even-even nuclei. In addition, the equivalence of rotational and vibrational spectra for fractional q-deformed Lie algebras is shown, and the $B_\alpha(E2)$ values for the fractional q-deformed symmetric rotor are calculated. A first interpretation of half-integer representations of the fractional rotation group is given in terms of a description of $K=1/2^-$ band spectra of odd-even nuclei.
download: arXiv:1007.1084v1 [physics.gen-ph]
reference: Physica A (2010) 389 4613-4622
November 2013, 33(11&12): 5143-5151. doi: 10.3934/dcds.2013.33.5143
Well-posedness results for the Navier-Stokes equations in the rotational framework
Matthias Hieber 1 and Sylvie Monniaux 2
1 Fachbereich Mathematik, Angewandte Analysis, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289 Darmstadt, Germany
2 LATP UMR 6632, CMI, Technopôle de Château-Gombert, 39 rue Frédéric Joliot-Curie, 13453 Marseille Cedex 13, France
Received January 2012; Revised July 2012; Published May 2013
Consider the Navier-Stokes equations in the rotational framework either on $\mathbb{R}^3$ or on open sets $\Omega \subset \mathbb{R}^3$ subject to Dirichlet boundary conditions. This paper discusses recent well-posedness and ill-posedness results for both situations.
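For reference, the system in question (written here in a standard normalized form with viscosity set to 1; the abstract itself does not display it) reads
$$ \partial_t u - \Delta u + \omega \, e_3 \times u + (u \cdot \nabla) u + \nabla p = 0, \qquad \operatorname{div} u = 0, $$
where u is the velocity, p the pressure, ω the speed of rotation around the vertical unit vector $e_3$, and $\omega \, e_3 \times u$ is the Coriolis force.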
Keywords: Navier-Stokes equations, Coriolis force, Dirichlet boundary conditions, Stokes-Coriolis semigroup, mild solutions.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3.
Citation: Matthias Hieber, Sylvie Monniaux. Well-posedness results for the Navier-Stokes equations in the rotational framework. Discrete & Continuous Dynamical Systems, 2013, 33 (11&12) : 5143-5151. doi: 10.3934/dcds.2013.33.5143
Optimized monitoring sites for detection of biodiversity trends in China
Haigen Xu1, Mingchang Cao1, Yi Wu2, Lei Cai3, Yun Cao1, Hui Ding1, Peng Cui1, Jun Wu1, Zhi Wang1, Zhifang Le1, Xiaoqiang Lu1, Li Liu1 & Jiaqi Li1
Biodiversity and Conservation volume 26, pages 1959–1971 (2017)
Properly designed monitoring networks can generate data to understand the status and trends of biodiversity and to assess progress towards conservation targets. However, biodiversity monitoring is often affected by poor sampling design. We proposed an approach to choosing optimized monitoring sites across large areas. Based on comprehensive distribution data of 34,284 vertebrates and vascular plants from 2376 counties in China, we selected 564 optimized monitoring sites (counties) through complementarity analysis and pre-existing knowledge of nature reserves. The optimized monitoring sites are complementary to each other and reasonably distributed, ensuring that maximum species are covered while the total number of sites and monitoring costs are minimized. The incongruence of optimized monitoring sites among different taxa indicates that taxa with different ecological features should be selected for large-scale monitoring programmes. The results of this study have been applied in the design and operation of the China Biodiversity Observation Network.
Biodiversity has continued to decline over the past four decades (Butchart et al. 2010; Tittensor et al. 2014). Parties to the Convention on Biological Diversity (CBD) have adopted the Strategic Plan for Biodiversity 2011–2020 and set the Aichi Targets to 'take effective and urgent action to halt the loss of biodiversity' (CBD 2010). Biodiversity monitoring is useful for identifying species in decline or at risk of extinction (Gerber et al. 1999; Shea and Mangel 2001), determining sustainable levels of utilization (Hauser et al. 2006), and assessing the effectiveness of conservation measures (Campbell et al. 2002). Biodiversity monitoring can provide timely and accurate data for regional or national management needs and policy making (Green et al. 2005; Haughland et al. 2010; Honrado et al. 2016). Lack of monitoring data can reduce the capacity for informed decision-making and timely reporting on progress towards conservation targets (DeWan and Zipkin 2010). It is crucial to detect and understand spatial–temporal biodiversity changes through monitoring for better allocation of conservation efforts and assessment of the progress towards relevant strategies and targets (Pereira and Cooper 2006; Pereira et al. 2013).
The design of a monitoring network requires cost-efficient allocation of monitoring sites across space (Amorim et al. 2014; Vicente et al. 2016), to ensure that monitoring sites are distributed in the most informative areas and the total number of sites is minimized (Amorim et al. 2014; Carvalho et al. 2016; Honrado et al. 2016). In the design of monitoring networks, researchers usually divide the target region into grids and select grids through relevant sampling strategies. For instance, the breeding bird survey (BBS) in the UK adopted approximately 3000 1-km grids that were chosen through stratified random sampling (Harris et al. 2016), and biodiversity monitoring in Switzerland (BDM) implemented systematic sampling to monitor its biodiversity (BDM Coordination Office 2014). Nevertheless, sampling methods for species-level monitoring networks, such as sampling strategies, estimation of effective sample size, standardized field protocols, and statistical models to interpret monitoring data, are still inadequate due to the large number of species involved and the high cost of traditional survey methods (Noon et al. 2012). Monitoring for conservation is challenging, as accurate estimation of species abundance or occurrence may be hampered by large geographic areas or inaccurate detection of species (Yoccoz et al. 2001; MacKenzie 2006; Pereira et al. 2013). Current biodiversity monitoring programmes often suffer from insufficient taxonomic and spatial coverage. The priority task is to significantly expand the coverage of biological taxa, habitats and geographical regions, which will in turn require the design of sampling regimes that are properly selected across space and taxa (Balmford et al. 2003). Complementarity analysis is commonly used for systematic conservation planning (Reyers et al. 2000; Faith et al. 2003; Williams et al. 2006). Through complementarity analysis, minimal areas can be selected to protect maximal biodiversity (species, habitats or ecosystems) (Cabeza and Moilanen 2001). We explored for the first time the utility of complementarity analysis for the selection of monitoring sites.
China is one of the 'megadiversity' countries in the world (Liu et al. 2003; Brooks et al. 2006). However, it faces huge pressures from the world's largest population and from rapid economic growth (Liu and Diamond 2005), which pose threats to biodiversity. China has set up a series of nationwide monitoring networks for ecosystems, such as the China ecosystem research network (CERN) and the China forest ecological research network (CFERN) (Xu et al. 2012). Although a preliminary framework for ecosystem-level monitoring has been basically established, species-level monitoring in China remains rare and faces various challenges, such as poor sampling design and low spatial and species coverage (Xu et al. 2012).
Here, we present an approach to designing monitoring networks for vertebrates and vascular plants in terrestrial and inland water ecosystems across China. The overall objective of the proposed monitoring networks is to detect changes in species composition, distribution and population dynamics, assess major threats to target species and evaluate the efficiency of conservation policy. It contains two steps: (i) selection of monitoring sites; and (ii) implementation of monitoring activities in selected monitoring sites, namely the establishment of plots and line and/or point transects, training of human resources, and standardization of field protocols and quality control. In this study, we identified optimized monitoring sites based on a comprehensive database of 34,284 vertebrates and vascular plants from 2376 counties across China through complementarity analysis and heuristic knowledge of nature reserves. The major criterion in the selection of monitoring sites is to ensure the coverage of maximum species and the minimization of network size (number of sites). We considered all species, threatened species and species endemic to China of mammals, birds, amphibians, reptiles, inland water fishes and vascular plants.
We used a database of the geographical distribution for 561 mammal species, 1347 bird species, 387 reptile species, 359 amphibian species, 1111 inland water fish species, and 30519 vascular plant species from 2376 counties (their mean size: 3908.7 km2; standard deviation: 9287.6 km2) across China (Xu et al. 2013, 2015, 2016). As far as we know, it is the most comprehensive database ever developed in the country. Marine species, cultivated or bred species, and alien species were eliminated from this study. We adopted 'county' as the basic sampling unit in this study (the sampling population across China is 2376 counties) (Xu et al. 2015, 2016). The richness data were collected from (i) species distribution information from over 1000 monographs and representative papers on fauna and flora across China; (ii) record information of specimens in herbaria of the Chinese Academy of Sciences and relevant universities; and (iii) field surveys in different regions (Xu et al. 2013, 2015, 2016). We considered respectively all species, threatened species and species endemic to China. According to the IUCN Red List Categories and Criteria (Version 3.1), threatened species are those species that are critically endangered, endangered, or vulnerable.
Main threats to biodiversity in China include fragmentation and loss of habitats, overexploitation of natural resources, pollution, invasion of alien species and climate change (Ministry of Environmental Protection of China 2014). We selected population density, GDP density and road density to represent major threats to biodiversity. Data on population density and GDP density of counties were obtained from provincial statistical bureaus, and data on road density were obtained from the Ministry of Transport of China (http://www.moc.gov.cn/). Data on the above three major threats were recorded in 2010.
Data on the locality, area, conservation targets and year of establishment of nature reserves were obtained from the statistics of the Ministry of Environmental Protection of China (http://www.mep.gov.cn). The zoning maps of phytogeographic regions and zoogeographical regions were taken from the studies of Wu et al. (2011) and Zhang (2011), respectively. The map of watersheds in China, at a scale of 1:4,000,000, was from the website of the National Administration of Surveying, Mapping and Geoinformation (http://www.sbsm.gov.cn/article/zxbs/dtfw/).
Essential sites for monitoring
Essential sites were selected before complementarity analysis. To identify essential sites for monitoring, we selected nature reserves that harbor representative regional biodiversity and have the capacity to monitor biodiversity as essential nature reserves. The essential nature reserves were assessed in terms of their conservation targets, regional distribution and monitoring capability. Criteria to select essential nature reserves include: (i) embracing major national nature reserves; (ii) focusing on the conservation targets of nature reserves and maintaining regional balance; and (iii) including existing important monitoring sites among CERN and CFERN. We considered counties in which essential nature reserves are distributed as essential sites for monitoring. When an essential nature reserve is distributed across several counties, the county with the highest richness was selected as an essential site for monitoring and noted as the priority county. We then calculated the complementarity score between the priority county and the other counties holding the nature reserve (see details in the next paragraph). If the complementarity score of a county with the priority county was larger than 0.8, this county was also selected as an essential site.
Complementarity analysis
Optimized monitoring sites (counties) were selected by complementarity analysis under scenarios of different distances between counties and species coverage. According to the study of Colwell and Coddington (1994), the complementarity score (C jk ) between county j and county k was defined as follows:
$$ C_{jk} = 1 - {{V_{jk} } \mathord{\left/ {\vphantom {{V_{jk} } {S_{jk} }}} \right. \kern-0pt} {S_{jk} }} $$
where \( S_{jk} = S_{j} + S_{k} - V_{jk} \); S j is the number of species in county j; S k is the number of species in county k; V jk is the number of common species both in county j and county k. The resulting C jk ranges between 0 and 1.
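A minimal Python sketch of this score (illustrative only; the county species lists below are hypothetical):

# Complementarity score after Colwell and Coddington (1994):
# C_jk = 1 - V_jk / S_jk, with V_jk the number of shared species and
# S_jk the number of species in the union of the two county lists.
def complementarity(species_j, species_k):
    union = species_j | species_k      # S_jk
    shared = species_j & species_k     # V_jk
    if not union:                      # guard: two empty species lists
        return 0.0
    return 1.0 - len(shared) / len(union)

# Hypothetical counties: identical lists give 0, disjoint lists give 1.
county_a = {"Panthera tigris", "Ailuropoda melanoleuca"}
county_b = {"Ailuropoda melanoleuca", "Moschus berezovskii"}
print(complementarity(county_a, county_b))  # 1 - 1/3 = 0.67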
Based on the study of Xu (2013), the algorithm of complementarity analysis was as follows:
1. Set goals for the design of the monitoring network: cover the maximal number of species within a minimal number of sampling sites and embrace all nationally protected species;
2. Select essential sites for monitoring;
3. Select other sites:
(a) Set the species coverage to 90% of the species of a taxon;
(b) Set a distance of 50 km between counties (calculated as the distance between the centroids of counties);
(c) For each county belonging to U, which is the set of all counties excluding essential sites, calculate the complementarity score between the county and the essential sites for nationally protected species; select the county with the highest complementarity score and a distance to the essential sites larger than 50 km (ties for complementarity score were broken by selecting the county with the highest species richness) and include this county in the list of essential sites, until the selected counties cover all nationally protected species of the taxon;
(d) For all species of the taxon, repeat the process in (c), until the selected counties cover no less than 90% of the species of the taxon;
(e) Enlarge the distance to 100, 150, 200, 250 and 300 km, progressively, and repeat the processes in (c) and (d), to select the counties meeting condition (a);
4. Repeat the process from (a) to (e), to select counties with a species coverage of 95, 98 or 100% under different distances.
The algorithm first selected essential sites, then selected other sites based on species coverage targets and control distances between counties. There were four species coverage targets: 90, 95, 98 and 100%. If all essential sites meet a given species coverage target, the process is ended. Otherwise, all species found in essential sites were excluded from further consideration, the algorithm searched for other sites (counties) with the greatest number of species that were not already selected (Dobson et al. 1997) and included this selected site (county) in the list of essential sites. This process continues until a given species coverage target is met. Monitoring sites should be apart from each other as much as possible to ensure independence and avoid spatial autocorrelation between sites (Carvalho et al. 2016). Six distance intervals (50, 100, 150, 200, 250 and 300 km) were examined. The distance between selected sites must always be larger than relevant distance value. The algorithm was run respectively for all species, threatened species and endemic species of all six taxa (Reyers et al. 2000).
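Steps (c)-(d) amount to a greedy, distance-constrained set-cover heuristic. The following Python sketch illustrates that core under stated assumptions (county species sets and centroid coordinates in km); the study itself used R, so this is an illustration, not the original implementation:

import math

def greedy_select(counties, selected, coverage_target, min_dist_km):
    # counties: dict name -> (set of species, (x, y) centroid in km)
    # selected: pre-chosen essential sites; extended in place and returned
    all_species = set().union(*(sp for sp, _ in counties.values()))
    covered = set().union(*(counties[c][0] for c in selected)) if selected else set()
    while len(covered) < coverage_target * len(all_species):
        # candidates must stay more than min_dist_km from every selected site
        feasible = [c for c in counties if c not in selected and
                    all(math.dist(counties[c][1], counties[s][1]) > min_dist_km
                        for s in selected)]
        if not feasible:
            return selected            # no solution for this distance/coverage
        # greedy choice: most not-yet-covered species, ties broken by richness
        best = max(feasible, key=lambda c: (len(counties[c][0] - covered),
                                            len(counties[c][0])))
        if not counties[best][0] - covered:
            return selected            # remaining counties add no new species
        selected.append(best)
        covered |= counties[best][0]
    return selected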
Kendall's rank partial correlations were used to analyze the relations between species richness and the number of monitoring sites in a zoning system, removing the effects of area on the number of monitoring sites (Xu et al. 2008). Species richness, the number of monitoring sites plus one, and the area of regions were log10-transformed before analysis. SPSS version 16.0 was used for partial correlation analysis, and the software package R, version 2.15 (R Development Core Team 2012) was used for complementarity analysis.
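A sketch of the partial-correlation step, assuming the standard first-order formula for partial Kendall's tau (the numeric values below are hypothetical, and this reimplementation is not the SPSS routine used in the study):

import numpy as np
from scipy.stats import kendalltau

def partial_kendall(x, y, z):
    # First-order partial Kendall's tau between x and y, controlling for z:
    # tau_xy.z = (tau_xy - tau_xz*tau_yz) / sqrt((1 - tau_xz^2)(1 - tau_yz^2))
    t_xy, _ = kendalltau(x, y)
    t_xz, _ = kendalltau(x, z)
    t_yz, _ = kendalltau(y, z)
    return (t_xy - t_xz * t_yz) / np.sqrt((1 - t_xz**2) * (1 - t_yz**2))

# Per the paper: log10-transform richness, (number of sites + 1) and area first.
richness = np.log10([1200, 800, 450, 300, 950])          # hypothetical values
sites    = np.log10(np.array([30, 18, 9, 4, 22]) + 1.0)
area     = np.log10([5.1e5, 3.2e5, 2.4e5, 1.1e5, 4.0e5])
print(partial_kendall(sites, richness, area))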
Essential monitoring sites
In terms of conservation targets, regional distribution and monitoring capacity of nature reserves, 196 essential nature reserves were selected, which were mostly national nature reserves. Accordingly, 36 essential monitoring sites for mammals, 52 for birds, 21 for reptiles, 24 for amphibians, 40 for inland water fishes and 84 for plants were selected (Fig. S1).
Optimized monitoring sites
Goals for the design of monitoring networks were determined based on the results of complementarity analysis under scenarios of different distances and species coverage (Fig. 1; Tables S1–S6). Meanwhile, costs (number of sites) were taken into account. The number of sites increased rapidly with an increase in coverage of species for plants and fishes. Therefore, we selected a threshold of species coverage of 90% for plants and fishes (Tables S5, S6). The number of sites for mammals, birds, reptiles and amphibians increased relatively slowly compared with that for plants and fishes. Accordingly, we selected the highest species coverage (100%) for mammals, birds, reptiles and amphibians (Tables S1–S4). We set distance thresholds of 50 km for plants and 100 km for most vertebrates, according to changes in the number of sites for all species, threatened species and endemic species. Therefore, the goals for the design of monitoring networks were set as follows: (i) covering 100% of species of mammals, birds, reptiles and amphibians and 90% of species of inland water fishes and vascular plants; (ii) keeping any two sites at least 100 km apart for vertebrates and 50 km apart for vascular plants.
Number of monitoring sites for vertebrates and vascular plants calculated by complementarity analysis under different distance and species coverage scenarios. Missing lines indicate that no solution exists under the given distance and species coverage. Monitoring sites should be as far apart from each other as possible to ensure independence and avoid spatial autocorrelation between sites. The distance between selected sites must always be larger than the relevant distance value. a Mammals, b birds, c reptiles, d amphibians, e inland water fishes, f vascular plants
We obtained monitoring sites for all species, threatened species and endemic species of the six taxa based on the above-mentioned goals (Figs. S2–S7). The monitoring sites for all species, threatened species and endemic species of a taxon were merged into optimized monitoring sites (Fig. 2; Table 1). The overlaps between the optimized monitoring sites of any two taxa range between 8.7 and 20.1% (Table 2). There were more monitoring sites in southern China than in northern China, and more sites in eastern China than in western China (Fig. 2). The total number of optimized monitoring sites for the six taxa is 564 (Table 1), owing to overlaps between some monitoring sites for different taxa. The optimized monitoring sites represent a set of counties that complement each other in terms of species composition with minimized sampling size and monitoring costs. This reflects the different roles of the optimized monitoring sites: in most sites monitoring should be carried out for only one taxon, while in other sites several or all of the six taxa should be monitored.
Optimized monitoring sites for vertebrates and vascular plants in China. a mammals, b birds, c reptiles, d amphibians, e inland water fishes, f vascular plants
Table 1 Number of optimized monitoring sites for vertebrates and vascular plants in China
Table 2 Overlaps between optimized monitoring sites for different taxa
This study demonstrated an approach to allocating minimum monitoring sites to the most informative areas. We showed that spatial data of species richness, endemism and threat can be combined with pre-existing knowledge of nature reserves in order to optimize monitoring networks across large areas. In this study, the number of monitoring sites increased with the increase in species coverage (Tables S1–S6). For instance, the number of monitoring sites for all vascular plants increased from 177 to 871 when the species coverage changed from 90 to 100%, while keeping any two sites more than 50 km apart (Table S6). This indicates that more monitoring sites should be included to cover more species. Meanwhile, distance affects the selection of monitoring sites. The number of monitoring sites for all vascular plants changed from 177 to 210 when the distance increased from 0 to 250 km under 90% species coverage (Table S6). Based on the adopted algorithm, we begin with the county that has the highest complementarity score and sequentially include counties that add the most unrepresented species (Reyers et al. 2000). When the distance is increased, more counties must be selected among counties with lower species richness in order to meet the species coverage target.
Monitoring sites for vascular plants were mainly distributed along large mountains, monitoring sites for reptiles, amphibians and fishes were distributed along the Qinling Mountains and Huaihe River and further south, and monitoring sites for mammals and birds were distributed evenly throughout China (Fig. 2). There are very few monitoring sites in the middle of the Kunlun Mountains and Hoh Xil Mountains of the Qinghai-Tibet Plateau, where few species are distributed owing to extreme cold and arid conditions (Xu et al. 2008). Overlaps between optimized monitoring sites for different taxa were low (less than 21%) (Table 2). This indicates that optimized monitoring sites among different taxa are not congruent. Our findings confirmed the conclusion of low congruence among biodiversity hotspots of different taxa (Orme et al. 2005). The results suggest that we should consider taxa with different ecological requirements in large-scale monitoring schemes.
We tested the correlations between the number of monitoring sites and species richness in different zoogeographical regions (Zhang 2011), phytogeographic regions (Wu et al. 2011) and watersheds (Figs. S8–S10). The correlations were positive and mostly significant for different taxa and zoning systems (Table 3). It indicates the high representativeness of optimized monitoring sites for regional ecological features. The insignificant correlation between the number of bird monitoring sites and bird species richness in zoogeographical regions (Table 3) might result from the special bird fauna in Xinjiang. As this region is located in the Central Asian flyway, most of its bird species are Central Asian and Northern species and thus different from species from other regions. Although bird species richness in the Mongolia-Xinjiang region was the second lowest, it covered the maximum number of bird monitoring sites among seven zoogeographical regions (Fig. S8).
Table 3 Correlations (Kendall's rank) between the number of optimized monitoring sites and species richness in different zoning systems, with the effects of area removed
China has experienced very rapid growth in population and economy. However, its rich biodiversity is suffering from very strong and fast changes in terms of economy, land use, pollution, fragmenting infrastructure, etc. To effectively capture the status and trends of biodiversity, human impact on biodiversity should be incorporated into the design of the biodiversity monitoring network. Here, we used the data on population density, GDP density and road density to represent major threats to biodiversity, and verified the rationality of the designed monitoring network. Each indicator was normalized separately to the range of 0–100 using the minimum–maximum normalization method (Yang et al. 2016), with 100 the largest and 0 the smallest. The average value of the three normalized indicators was taken as the value of the Threat Index (TI) for each county. The mean TI was 6.56 for the 2376 counties across the whole country. The mean TI for the 564 proposed monitoring sites in this study was 4.64, which was clearly larger than that (3.81) of the 246 counties where the 196 essential nature reserves are distributed. Among the 564 proposed monitoring sites, the TI of 98 monitoring sites exceeded the national average of 6.56, and the TI of 230 monitoring sites exceeded the 3.81 of the essential nature reserves (Fig. S11). Moreover, the mean values of population density, GDP density and road density for the 2376 counties were 440.43 people/km2, 1788.64 ten thousand yuan/km2, and 278.70 m/km2, respectively. 75, 62, and 107 monitoring sites showed a higher level than these mean values in terms of population density, GDP density and road density, respectively. Population density, GDP density or road density in half of the monitoring sites exceeded 114.57 people/km2, 158.93 ten thousand yuan/km2, and 173.61 m/km2, respectively. Therefore, we consider that the gradient of stressors has been relatively well addressed within the proposed monitoring network.
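A minimal sketch of the Threat Index computation under the stated assumptions (min-max normalization of each indicator to 0-100 across counties, then averaging; the county values below are hypothetical):

import numpy as np

def threat_index(pop_density, gdp_density, road_density):
    # Min-max normalize each indicator to 0-100 across counties, then average.
    def norm(v):
        v = np.asarray(v, dtype=float)
        rng = v.max() - v.min()
        return np.zeros_like(v) if rng == 0 else 100.0 * (v - v.min()) / rng
    return (norm(pop_density) + norm(gdp_density) + norm(road_density)) / 3.0

# Hypothetical values for three counties (people/km2, 10^4 yuan/km2, m/km2):
print(threat_index([120.0, 950.0, 45.0],
                   [300.0, 5200.0, 90.0],
                   [140.0, 610.0, 80.0]))   # one TI value per county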
The presented sampling framework aims to detect biodiversity status and trends in different habitat types across large-scale areas. It can be used as a reference for the design and operationalization of practical biodiversity monitoring schemes. Theoretically, all mammal, bird, reptile and amphibian species and 90% of inland water fish and vascular plant species can be covered by optimized monitoring sites. However, species coverage is reduced in practice because of low detection probabilities (Kéry and Schmid 2004), and limited budget and human resources. Therefore, the number of actual monitoring sites may exceed 564 to address the impacts of low detection probabilities and other management issues. However, the optimized monitoring sites can be used as a starting point to design and fine tune practical monitoring schemes.
The Group on Earth Observations Biodiversity Observation Network (GEO BON) has developed the concept of Essential Biodiversity Variables (EBVs) (Pereira et al. 2013). The aim of the EBVs is to identify a minimum set of variables that can be used to inform scientists, managers and the public on biodiversity trends (Pereira et al. 2013; Proença et al. 2016). GEO BON aggregated candidate variables into six classes: "genetic composition," "species populations," "species traits," "community composition," "ecosystem structure," and "ecosystem function" (Pereira et al. 2013). EBVs allow for the averaging of trends of multiple species across multiple locations, and their measurement captures ongoing changes in the status of biodiversity (Pereira et al. 2013; Schmeller et al. 2015). An EBV is thus a critical biological variable that characterizes change in an aspect of biodiversity across multiple species and ecosystems, functioning as the interface between raw data and the calculated indicators (Pereira et al. 2013; Brummitt et al. 2016; Proença et al. 2016; Mihoub et al. 2017). For instance, species abundance provides data for indicators such as the Living Planet, Wild Bird, and Red List indices (LPI, WBI, and RLI) (Pereira et al. 2013). In the proposed monitoring network, main raw data were systematically collected, including the name of species, location and number of individuals, type and vegetation of habitats, weather condition, and categories (infra-structure development, resources exploitation, pollution, hunting, tourism, agriculture, husbandry and fishery, etc.) and extent (strong, moderate, low or non) of anthropogenic disturbance. The corresponding EBVs that can be generated by the proposed monitoring network encompass abundance and distribution, taxonomic diversity, habitat structure and quality, and phenology. Thus, the monitoring network here can notably contribute to mapping of EBVs at global level.
The proposed monitoring scheme has received wide support from the Central Government, the Ministry of Environmental Protection (MEP), the Ministry of Finance (MF) and the scientific community of China. With an annual financial allocation of approximately USD 5.8 million from MEP and MF, the monitoring scheme proposed in this study has taken effect as the China Biodiversity Observation Network (China BON). Under the planning and coordination of the Nanjing Institute of Environmental Sciences affiliated to MEP, China BON has attracted approximately 3500 trained biologists, protected area managers and volunteer citizen scientists from over 400 universities, research institutes, protected areas and civil societies to get involved in field monitoring of biodiversity, currently covering mammals, birds and amphibians. The pilot implementation adopted national standards and field protocols for biodiversity monitoring promulgated by MEP. 441 monitoring sites were selected and applied for monitoring with >9000 line transects and point transects. It is noted that one of the key challenges in designing a long-term monitoring framework is program sustainability (Barrows et al. 2014). To enhance the sustainability of China BON, we coupled trained biologists with volunteer citizen scientists. At least one professional biologist was included in each monitoring team, while well-trained volunteers are also involved to extend limited staff and budgets for the long-term monitoring goal (Barrows et al. 2014). At present, China BON's Work Plan has been approved by the State Council of China. In particular, the operationalization of biodiversity monitoring networks based on this study has been listed as one of the key action plans in China's National Economy and Social Development Planning in the 13th Five-Year Plan and approved by the National People's Congress in 2016. It is imperative to continuously maintain national biodiversity monitoring networks. Their success depends on the commitment of the whole society, including scientific communities, private sectors, governments and the public.
Amorim F, Carvalho SB, Honrado J, Rebelo H (2014) Designing optimized multi-species monitoring networks to detect range shifts driven by climate change: a case study with bats in the North of Portugal. PLoS ONE 9:e87291
Balmford A, Green RE, Jenkins M (2003) Measuring the changing state of nature. Trends Ecol Evol 18:326–330
Barrows CW, Hoines J, Fleming KD et al (2014) Designing a sustainable monitoring framework for assessing impacts of climate change at Joshua Tree National Park, USA. Biodivers Conserv 23:3263–3285
BDM Coordination Office (2014) Swiss biodiversity monitoring BDM. Description of methods and indicators. Environmental studies no. 1410. Federal Office for the Environment, Bern
Brooks TM, Mittermeier RA, da Fonseca GAB, Gerlach J, Hoffmann M, Lamoreux JF, Mittermeier CG, Pilgrim JD, Rodrigues ASL (2006) Global biodiversity conservation priorities. Science 313:58–61
Brummitt N, Regan EC, Weatherdon LV et al (2016) Taking stock of nature: essential biodiversity variables explained. Biol Conserv. doi:10.1016/j.biocon.2016.09.006
Butchart SHM, Walpole M, Collen B et al (2010) Global biodiversity: indicators of recent declines. Science 328:1164–1168
Cabeza M, Moilanen A (2001) Design of reserve networks and the persistence of biodiversity. Trends Ecol Evol 16:242–248
Campbell SP, Clark JA, Crampton LH, Guerry AD, Hatch LT, Hosseini PR, Lawler JJ, O'Connor RJ (2002) An assessment of monitoring efforts in endangered species recovery plans. Ecol Appl 12:674–681
Carvalho SB, Gonçalves J, Guisan A, Honrado JP (2016) Systematic site selection for multispecies monitoring networks. J Appl Ecol 53(5):1305–1316
CBD (2010) Decision X/2, the strategic plan for biodiversity 2011–2020 and the Aichi biodiversity targets. Nagoya, Japan, 18–29 Oct 2010
Colwell RK, Coddington JA (1994) Estimating terrestrial biodiversity through extrapolation. Phil Trans R Soc Lond B 345:101–118
R Development Core Team (2012) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria
DeWan AA, Zipkin EF (2010) An integrated sampling and analysis approach for improved biodiversity monitoring. Environ Manag 45:1223–1230
Dobson AP, Rodriguez JP, Roberts WM, Wilcove DS (1997) Geographic distribution of endangered species in the United States. Science 275:550–553
Faith DP, Carter G, Cassis G, Ferrier S, Wilkie L (2003) Complementarity, biodiversity viability analysis, and policy-based algorithms for conservation. Environ Sci Policy 6:311–328
Gerber LR, DeMaster DP, Kareiva PM (1999) Gray whales and the value of monitoring data in implementing the U.S. endangered species act. Conserv Biol 13:1215–1219
Green RE, Balmford A, Crane PR, Mace GM, Reynolds JD, Turner K (2005) A framework for improved monitoring of biodiversity: responses to the world summit on sustainable development. Conserv Biol 19:56–65
Harris SJ, Massimino D, Newson SE, Eaton MA, Marchant JH, Balmer DE, Noble DG, Gillings S, Procter D, Pearce-Higgins JW (2016) The breeding bird survey 2015. BTO research report 687. British Trust for Ornithology, Thetford
Haughland DL, Hero JM, Schieck J, Castley JG, Boutin S, Solymos P, Lawson BE, Holloway G, Magnusson WE (2010) Planning forwards: biodiversity research and monitoring systems for better management. Trends Ecol Evol 25:199–200
Hauser CE, Pople AR, Possingham HP (2006) Should managed populations be monitored every year? Ecol Appl 16:807–819
Honrado JP, Pereira HM, Guisan A (2016) Fostering integration between biodiversity monitoring and modelling. J Appl Ecol 53:1299–1304
Kéry M, Schmid H (2004) Monitoring programs need to take into account imperfect species detectability. Basic Appl Ecol 5:65–73
Liu JG, Diamond J (2005) China's environment in a globalizing world. Nature 435:1179–1186
Liu JG, Ouyang ZY, Pimm SL, Raven PH, Wang XK, Miao H, Han NY (2003) Protecting China's biodiversity. Science 300:1240–1241
MacKenzie D (2006) Modeling the probability of resource use: the effect of, and dealing with, detecting a species imperfectly. J Wildl Manag 70:367–374
Mihoub JB, Henle K, Titeux N et al (2017) Setting temporal baselines for biodiversity: the limits of available monitoring data for capturing the full impact of anthropogenic pressures. Sci Rep 7:41591
Ministry of Environmental Protection of China (2014) China's fifth national report on the implementation of the convention on biological diversity. China Environmental Science Press, Beijing. https://www.cbd.int/doc/world/cn/cn-nr-05-en.pdf
Noon BR, Bailey LL, Sisk TD, McKelvey KS (2012) Efficient species-level monitoring at the landscape scale. Conserv Biol 26:432–441
Orme CDL, Davies RG, Burgess M et al (2005) Global hotspots of species richness are not congruent with endemism or threat. Nature 436:1016–1019
Pereira HM, Cooper HD (2006) Towards the global monitoring of biodiversity change. Trends Ecol Evol 21:123–129
Pereira HM, Ferrier S, Walters M et al (2013) Essential biodiversity variables. Science 339:377–378
Proença V, Pereira HM, Martin LJ et al (2016) Global biodiversity monitoring: from data sources to essential biodiversity variables. Biol Conserv. doi:10.1016/j.biocon.2016.07.014
Reyers B, van Jaarsveld AS, Krüger M (2000) Complementarity as a biodiversity indicator strategy. Proc R Soc Lond B 267:505–513
Schmeller DS, Julliard R, Bellingham PJ et al (2015) Towards a global terrestrial species monitoring program. J Nat Conserv 25:51–57
Shea K, Mangel M (2001) Detection of population trends in threatened coho salmon (Oncorhynchus kisutch). Can J Fish Aquat Sci 58:375–385
Tittensor DP, Walpole M, Hill SL et al (2014) A mid-term analysis of progress toward international biodiversity targets. Science 346:241–244
Vicente J, Alagador D, Guerra C et al (2016) Cost-effective monitoring of biological invasions under global change: a model-based framework. J Appl Ecol 53:1317–1329
Williams P, Faith D, Manne L, Sechrest W, Preston C (2006) Complementarity analysis: mapping the performance of surrogates for biodiversity. Biol Conserv 128:253–264
Wu ZY, Sun H, Zhou ZK, Li DZ, Peng H (2011) Floristics of seed plants from China. Science Press, Beijing
Xu HG (2013) Introduction to monitoring of species resources. Science Press, Beijing
Xu HG, Wu J, Liu Y, Ding H, Zhang M, Wu Y, Xi Q, Wang L (2008) Biodiversity congruence and conservation strategies: a national test. Bioscience 58:632–639
Xu HG, Ding H, Wu J (2012) Introduction to ecological and biodiversity monitoring in China. In: Nakano S, Yahara T, Nakashizuka T (eds) The biodiversity observation network in the Asia-pacific region: toward further development of monitoring. Springer, Tokyo, pp 65–70
Xu HG, Cao MC, Wu J, Ding H (2013) Assessment report on biodiversity baseline in China. Science Press, Beijing
Xu HG, Cao MC, Wu J et al (2015) Determinants of mammal and bird species richness in China based on habitat groups. PLoS ONE 10:e0143996
Xu HG, Cao MC, Wu Y et al (2016) Disentangling the determinants of species richness of vascular plants and mammals from national to regional scales. Sci Rep 6:21988
Yang W, Dietz T, Kramer DB, Ouyang Z, Liu J (2016) An integrated approach to understanding the linkages between ecosystem services and human well-being. Ecosyst Health Sustain 1(5):1–12
Yoccoz NG, Nichols JD, Boulinier T (2001) Monitoring of biological diversity in space and time. Trends Ecol Evol 16:446–453
Zhang RZ (2011) Zoogeography of China. Science Press, Beijing
by Yu · Published 02/12/2018
True or False Problems on Midterm Exam 1 at OSU Spring 2018
The following problems are True or False.
Let $A$ and $B$ be $n\times n$ matrices.
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Find the Vector Form Solution to the Matrix Equation $A\mathbf{x}=\mathbf{0}$
Find the vector form solution $\mathbf{x}$ of the equation $A\mathbf{x}=\mathbf{0}$, where
\[A=\begin{bmatrix}
1 & 1 & 1 & 1 & 2 \\
1 & 2 & 4 & 0 & 5
\end{bmatrix}.\]
Also, find two linearly independent vectors $\mathbf{x}$ satisfying $A\mathbf{x}=\mathbf{0}$.
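For readers who want to check their hand computation, here is a small SymPy sketch (an added illustration, not part of the exam problem); `nullspace()` returns a basis of the solution set, from which the two requested vectors can be read off.

```python
from sympy import Matrix

# Coefficient matrix from the problem statement
A = Matrix([[1, 1, 1, 1, 2],
            [1, 2, 4, 0, 5]])

# A basis of all solutions of A x = 0 (three free variables here)
for v in A.nullspace():
    print(v.T)
```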
If $\mathbf{v}, \mathbf{w}$ are Linearly Independent Vectors and $A$ is Nonsingular, then $A\mathbf{v}, A\mathbf{w}$ are Linearly Independent
Let $A$ be an $n\times n$ nonsingular matrix. Let $\mathbf{v}, \mathbf{w}$ be linearly independent vectors in $\R^n$. Prove that the vectors $A\mathbf{v}$ and $A\mathbf{w}$ are linearly independent.
Find a Nonsingular Matrix $A$ satisfying $3A=A^2+AB$
(a) Find a $3\times 3$ nonsingular matrix $A$ satisfying $3A=A^2+AB$, where
\[B=\begin{bmatrix}
2 & 0 & -1 \\
0 & 2 & -1 \\
-1 & 0 & 1
\end{bmatrix}.\]

(b) Find the inverse matrix of $A$.
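One possible route (a sketch, not necessarily the intended exam solution): factor the equation as $3A=A(A+B)$ and cancel the nonsingular $A$ to get $A=3I-B$. A quick SymPy check of this candidate:

```python
from sympy import Matrix, eye

B = Matrix([[ 2, 0, -1],
            [ 0, 2, -1],
            [-1, 0,  1]])

A = 3 * eye(3) - B   # candidate from cancelling A in 3A = A(A + B)
print(A.det())       # 1, so A is nonsingular
print((3 * A - A**2 - A * B).is_zero_matrix)  # True: 3A = A^2 + AB holds
```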
Determine whether the Matrix is Nonsingular from the Given Relation
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$.
If
\[A\begin{bmatrix}
\end{bmatrix}=B\begin{bmatrix}
\end{bmatrix},\]
is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Find All Symmetric Matrices satisfying the Equation
Find all $2\times 2$ symmetric matrices $A$ satisfying
\[A\begin{bmatrix}
\end{bmatrix}
=\begin{bmatrix}
\end{bmatrix}.\]
Express your solution using free variable(s).
Compute $A^5\mathbf{u}$ Using Linear Combination
Let
\[A=\begin{bmatrix}
-4 & -6 & -12 \\
-2 & -1 & -4 \\
2 & 3 & 6
\end{bmatrix}, \quad \mathbf{u}=\begin{bmatrix}
\end{bmatrix}, \quad \mathbf{v}=\begin{bmatrix}
\end{bmatrix}, \quad \text{ and } \mathbf{w}=\begin{bmatrix}
\end{bmatrix}.\]
(a) Express the vector $\mathbf{u}$ as a linear combination of $\mathbf{v}$ and $\mathbf{w}$.
(b) Compute $A^5\mathbf{v}$.
(c) Compute $A^5\mathbf{w}$.
(d) Compute $A^5\mathbf{u}$.
If the Augmented Matrix is Row-Equivalent to the Identity Matrix, is the System Consistent?
Consider the following system of linear equations:
\begin{align*}
ax_1+bx_2 &=c\\
dx_1+ex_2 &=f\\
gx_1+hx_2 &=i.
\end{align*}
(a) Write down the augmented matrix.
(b) Suppose that the augmented matrix is row equivalent to the identity matrix. Is the system consistent? Justify your answer.
Using Properties of Inverse Matrices, Simplify the Expression
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression
\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\] which matrix do you get?
(a) $A$
(b) $C^{-1}A^{-1}BC^{-1}AC^2$
(c) $B$
(d) $C^2$
(e) $C^{-1}BC$
(f) $C$
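Since this is a multiple-choice question, a numeric spot-check with random (almost surely invertible) matrices is a quick way to see which option survives the cancellations; the sketch below is illustrative only, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

inv = np.linalg.inv
lhs = inv(C) @ inv(A @ inv(B)) @ inv(C @ inv(A)) @ (C @ C)
print(np.allclose(lhs, inv(C) @ B @ C))  # True: the expression equals C^{-1}BC
```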
Elementary Questions about a Matrix
Let
\[A=\begin{bmatrix}
-5 & 0 & 1 & 2 \\
3 & 8 & -3 & 7 \\
0 & 11 & 13 & 28
\end{bmatrix}.\]
(a) What is the size of the matrix $A$?
(b) What is the third column of $A$?
(c) Let $a_{ij}$ be the $(i,j)$-entry of $A$. Calculate $a_{23}-a_{31}$.
Is the Following Function $T:\R^2 \to \R^3$ a Linear Transformation?
Determine whether the function $T:\R^2 \to \R^3$ defined by
\[T\left(\, \begin{bmatrix}
x \\
y
\end{bmatrix} \,\right)
=\begin{bmatrix}
x+y \\
x+1 \\
\end{bmatrix}\] is a linear transformation.
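Whatever the omitted third component was, the second component alone settles the question: a linear transformation must send the zero vector to the zero vector, yet here the second entry of $T(\mathbf{0})$ is $0+1=1\neq 0$, so $T$ cannot be linear.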
Find a Basis of the Subspace Spanned by Four Polynomials of Degree 3 or Less
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.
\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\] where
\begin{align*}
p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\
p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.
\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Determine the Dimension of a Mysterious Vector Space From Coordinate Vectors
Let $V$ be a vector space and $B$ be a basis for $V$.
Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$.
Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form
0 & 0 & 0 & 0 & 0
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
Matrix Representation, Rank, and Nullity of a Linear Transformation $T:\R^2\to \R^3$
Let $T:\R^2 \to \R^3$ be a linear transformation such that
\[T\left(\, \begin{bmatrix}
\end{bmatrix} \,\right)
=\begin{bmatrix}
\end{bmatrix} \text{ and }
T\left(\, \begin{bmatrix}
4\\
\end{bmatrix} \,\right)
=\begin{bmatrix}
\end{bmatrix}.\]
(a) Find the matrix representation of $T$ (with respect to the standard basis for $\R^2$).
(b) Determine the rank and nullity of $T$.
Find Bases for the Null Space, Range, and the Row Space of a $5\times 4$ Matrix
Let
\[A=\begin{bmatrix}
1 & -1 & 0 & 0 \\
0 & 2 & 2 & 2 \\
\end{bmatrix}.\]
(a) Find a basis for the null space $\calN(A)$.
(b) Find a basis of the range $\calR(A)$.
(c) Find a basis of the row space for $A$.
Are the Trigonometric Functions $\sin^2(x)$ and $\cos^2(x)$ Linearly Independent?
Let $C[-2\pi, 2\pi]$ be the vector space of all continuous functions defined on the interval $[-2\pi, 2\pi]$.
Consider the functions \[f(x)=\sin^2(x) \text{ and } g(x)=\cos^2(x)\] in $C[-2\pi, 2\pi]$.
Prove or disprove that the functions $f(x)$ and $g(x)$ are linearly independent.
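A quick numeric sanity check (illustrative only) is to evaluate both functions at two sample points; independence in $C[-2\pi, 2\pi]$ then follows, since any relation $a\,f+b\,g=0$ would have to hold at every point.

```python
import numpy as np

x = np.array([0.0, np.pi / 2])          # two sample points
M = np.array([np.sin(x) ** 2,           # values of f at the sample points
              np.cos(x) ** 2])          # values of g at the sample points
print(np.linalg.matrix_rank(M))         # 2 -> only the trivial relation a = b = 0
```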
Find an Orthonormal Basis of the Given Two Dimensional Vector Space
Let $W$ be a subspace of $\R^4$ with a basis
\[\left\{\, \begin{bmatrix}
\end{bmatrix}, \; \begin{bmatrix}
\end{bmatrix} \,\right\}.\]
Find an orthonormal basis of $W$.
Vector Space of 2 by 2 Traceless Matrices
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.
Let
\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}
a & b\\
c & -a
\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
Linear Algebra Midterm 1 at the Ohio State University (3/3)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).
The time limit was 55 minutes.
This post is Part 3 and contains Problem 7, 8, and 9.
Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}
-3 & -4\\
8 & 9
\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}
\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9.
Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
(e) The vectors
\[\mathbf{v}_1=\begin{bmatrix}
\end{bmatrix}, \mathbf{v}_2=\begin{bmatrix}
\end{bmatrix}\]
are linearly independent.
Problem 4. Let
\[\mathbf{a}_1=\begin{bmatrix}
\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}
\end{bmatrix}, \mathbf{b}=\begin{bmatrix}
\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Find the inverse matrix of
\[A=\begin{bmatrix}
\end{bmatrix}\]
if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Consider the system of linear equations
\begin{align*}
3x_1+2x_2&=1\\
5x_1+3x_2&=2.
\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
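A compact SymPy run-through of all three parts (a sketch for self-checking, not the graded solution):

```python
from sympy import Matrix

A = Matrix([[3, 2], [5, 3]])   # (a) coefficient matrix of the system
b = Matrix([1, 2])

A_inv = A.inv()                # (b) det(A) = -1, so A_inv = [[-3, 2], [5, -3]]
x = A_inv * b                  # (c) x = A^{-1} b
print(A_inv.tolist(), x.T.tolist())  # [[-3, 2], [5, -3]] and [[1, -1]]
```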
How to Find a Formula of the Power of a Matrix
Find All Values of $x$ so that a Matrix is Singular
Ring is a Field if and only if the Zero Ideal is a Maximal Ideal
There is Exactly One Ring Homomorphism From the Ring of Integers to Any Ring
About OpenStax
About OpenStax's Resources
About Principles of Economics
Chapter 1. Welcome to Economics!
1.1 What Is Economics, and Why Is It Important?
The Problem of Scarcity
The Division of and Specialization of Labor
Why the Division of Labor Increases Production
Trade and Markets
Key Concepts and Summary
1.2 Microeconomics and Macroeconomics
1.3 How Economists Use Theories and Models to Understand Economic Issues
1.4 How Economies Can Be Organized: An Overview of Economic Systems
Regulations: The Rules of the Game
The Rise of Globalization
Chapter 2. Choice in a World of Scarcity
Introduction to Choice in a World of Scarcity
2.1 How Individuals Make Choices Based on Their Budget Constraint
The Concept of Opportunity Cost
Identifying Opportunity Cost
Marginal Decision-Making and Diminishing Marginal Utility
Sunk Costs
From a Model with Two Goods to One of Many Goods
2.2 The Production Possibilities Frontier and Social Choices
The Shape of the PPF and the Law of Diminishing Returns
Productive Efficiency and Allocative Efficiency
Why Society Must Choose
The PPF and Comparative Advantage
2.3 Confronting Objections to the Economic Approach
First Objection: People, Firms, and Society Do Not Act Like This
Second Objection: People, Firms, and Society Should Not Act This Way
Chapter 3: Defining Economics: A Pluralistic Approach
Defining Economics: A Pluralistic Approach
3.1 The Importance of Definitions
3.2 Multiple Perspectives Require Multiple Definitions
3.3 A Brief Synopsis of Different Economic Perspectives
3.4 Deconstructing the Orthodox Definition of Economics
3.5 A Critical Examination of the Orthodox Definition of Economics and its Resultant Impacts
3.6 An Alternative Approach to Defining Economics
Chapter 4. Demand and Supply
Introduction to Demand and Supply
4.1 Demand, Supply, and Equilibrium in Markets for Goods and Services
Demand for Goods and Services
Supply of Goods and Services
Equilibrium—Where Demand and Supply Intersect
4.2 Shifts in Demand and Supply for Goods and Services
What Factors Affect Demand?
The Ceteris Paribus Assumption
How Does Income Affect Demand?
Other Factors That Shift Demand Curves
Summing Up Factors That Change Demand
How Production Costs Affect Supply
Other Factors That Affect Supply
Summing Up Factors That Change Supply
4.3 Changes in Equilibrium Price and Quantity: The Four-Step Process
Good Weather for Salmon Fishing
Newspapers and the Internet
The Interconnections and Speed of Adjustment in Real Markets
A Combined Example
4.4 Price Ceilings and Price Floors
Price Ceilings
Price Floors
4.5 Demand, Supply, and Efficiency
Consumer Surplus, Producer Surplus, Social Surplus
Inefficiency of Price Floors and Price Ceilings
Demand and Supply as a Social Adjustment Mechanism
Chapter 5. Labor and Financial Markets
Introduction to Labor and Financial Markets
5.1 Demand and Supply at Work in Labor Markets
Equilibrium in the Labor Market
Shifts in Labor Demand
Shifts in Labor Supply
Technology and Wage Inequality: The Four-Step Process
Price Floors in the Labor Market: Living Wages and Minimum Wages
The Minimum Wage as an Example of a Price Floor
5.2 Demand and Supply in Financial Markets
Who Demands and Who Supplies in Financial Markets?
Equilibrium in Financial Markets
Shifts in Demand and Supply in Financial Markets
The United States as a Global Borrower
Price Ceilings in Financial Markets: Usury Laws
5.3 The Market System as an Efficient Mechanism for Information
Chapter 6. Elasticity
Introduction to Elasticity
6.1 Price Elasticity of Demand and Price Elasticity of Supply
Calculating Price Elasticity of Demand
Calculating the Price Elasticity of Supply
6.2 Polar Cases of Elasticity and Constant Elasticity
6.3 Elasticity and Pricing
Does Raising Price Bring in More Revenue?
Can Costs Be Passed on to Consumers?
Elasticity and Tax Incidence
Long-Run vs. Short-Run Impact
6.4 Elasticity in Areas Other Than Price
Income Elasticity of Demand
Cross-Price Elasticity of Demand
Elasticity in Labor and Financial Capital Markets
Expanding the Concept of Elasticity
Chapter 7. Consumer Choices
Introduction to Consumer Choices
7.1 Consumption Choices
Total Utility and Diminishing Marginal Utility
Choosing with Marginal Utility
A Rule for Maximizing Utility
Measuring Utility with Numbers
7.2 How Changes in Income and Prices Affect Consumption Choices
How Changes in Income Affect Consumer Choices
How Price Changes Affect Consumer Choices
The Foundations of Demand Curves
Applications in Government and Business
7.3 Labor-Leisure Choices
The Labor-Leisure Budget Constraint
Applications of Utility Maximizing with the Labor-Leisure Budget Constraint
7.4 Intertemporal Choices in Financial Capital Markets
Using Marginal Utility to Make Intertemporal Choices
Applications of the Model of Intertemporal Choice
The Unifying Power of the Utility-Maximizing Budget Set Framework
Behavioral Economics: An Alternative Viewpoint
Chapter 8. Challenging the Role of Utilitarianism
The Role of Value(s) in the Economics Discipline
8.1 Economics and Value
8.2 Utilitarianism: The Philosophy Behind Orthodox Economics
8.3 Utility and Pareto Optimality: The Orthodox Economic View of Social Welfare
8.4 Abandoning the Normative Constraints of Utilitarianism
Chapter 9. An Institutional Analysis of Modern Consumption
Introduction to An Institutional Analysis of Modern Consumption
9.1 Institutional Analysis
9.2 Conspicuous Consumption
9.3 The Complex World of Modern Consumption
Chapter 10. Cost and Industry Structure
Introduction to Cost and Industry Structure
10.1 Explicit and Implicit Costs, and Accounting and Economic Profit
10.2 The Structure of Costs in the Short Run
Fixed and Variable Costs
Average Total Cost, Average Variable Cost, Marginal Cost
Lessons from Alternative Measures of Costs
A Variety of Cost Patterns
10.3 The Structure of Costs in the Long Run
Choice of Production Technology
Shapes of Long-Run Average Cost Curves
The Size and Number of Firms in an Industry
Shifting Patterns of Long-Run Average Cost
Chapter 11. Perfect Competition
Introduction to Perfect Competition
11.1 Perfect Competition and Why It Matters
11.2 How Perfectly Competitive Firms Make Output Decisions
Determining the Highest Profit by Comparing Total Revenue and Total Cost
Comparing Marginal Revenue and Marginal Costs
Profits and Losses with the Average Cost Curve
The Shutdown Point
Short-Run Outcomes for Perfectly Competitive Firms
Marginal Cost and the Firm's Supply Curve
11.3 Entry and Exit Decisions in the Long Run
How Entry and Exit Lead to Zero Profits in the Long Run
The Long-Run Adjustment and Industry Types
11.4 Efficiency in Perfectly Competitive Markets
Chapter 12. Monopoly
Introduction to a Monopoly
12.1 How Monopolies Form: Barriers to Entry
Natural Monopoly
Control of a Physical Resource
Legal Monopoly
Promoting Innovation
Intimidating Potential Competition
Summing Up Barriers to Entry
12.2 How a Profit-Maximizing Monopoly Chooses Output and Price
Demand Curves Perceived by a Perfectly Competitive Firm and by a Monopoly
Total Cost and Total Revenue for a Monopolist
Marginal Revenue and Marginal Cost for a Monopolist
Illustrating Monopoly Profits
The Inefficiency of Monopoly
Chapter 13. Monopolistic Competition and Oligopoly
Introduction to Monopolistic Competition and Oligopoly
13.1 Monopolistic Competition
Differentiated Products
Perceived Demand for a Monopolistic Competitor
How a Monopolistic Competitor Chooses Price and Quantity
Monopolistic Competitors and Entry
Monopolistic Competition and Efficiency
The Benefits of Variety and Product Differentiation
13.2 Oligopoly
Why Do Oligopolies Exist?
Collusion or Competition?
The Oligopoly Version of the Prisoner's Dilemma
How to Enforce Cooperation
Tradeoffs of Imperfect Competition
Chapter 14. The Rise of Big Business
Introduction to the Rise of Big Business
The Joint-Stock Corporation and Long Distance Trade
Mercantilism
14.1 Big Business in American History
Business Enterprise and Property Rights
Colonial legal institutions
Canals, Steamboats and Railroads
14.2 Industrialization and the Factory
Large-scale technologies that make up the core of the economic system
Integrated chains of production that link markets and industries
14.3 Big Business and Organized Labor
The Knights of Labor
The Great Railway Strike of 1877
14.4 Regulation of Big Business
The Sherman Act
Chapter 15. Costs and Prices: The Evidence
Introduction to Costs and Prices
15.1 Testing the Neoclassical Theory of the Firm
Testing the Theory
Designing the Average Total Cost Curve
15.2 Costing and Pricing: A Heterodox Alternative
Depreciation and the Going Concern
Prices from Pricing
15.3 Comparing Neoclassical and Heterodox Theory
Chapter 16. The Megacorp
Introduction to the Megacorp
16.1 The Imperatives of Technology
16.2 Business Models, Plural: Aims and Methods of the Megacorp
16.3 Stabilizing Unstable Markets
16.4 The High Price of College Textbooks
Chapter 17. Monopoly and Antitrust Policy
Introduction to Monopoly and Antitrust Policy
17.1 Corporate Mergers
Regulations for Approving Mergers
The Four-Firm Concentration Ratio
The Herfindahl-Hirshman Index
New Directions for Antitrust
17.2 Regulating Anticompetitive Behavior
Restrictive Practices
17.3 Regulating Natural Monopolies
The Choices in Regulating a Natural Monopoly
Cost-Plus versus Price Cap Regulation
17.4 The Great Deregulation Experiment
Doubts about Regulation of Prices and Quantities
The Effects of Deregulation
Chapter 18. Environmental Protection and Negative Externalities
Introduction to Environmental Protection and Negative Externalities
18.1 The Economics of Pollution
Pollution as a Negative Externality
18.2 Command-and-Control Regulation
18.3 Market-Oriented Environmental Tools
Pollution Charges
Marketable Permits
Better-Defined Property Rights
Applying Market-Oriented Environmental Tools
Self-Check Questions
18.4 The Benefits and Costs of U.S. Environmental Laws
Benefits and Costs of Clean Air and Clean Water
Ecotourism: Making Environmentalism Pay
Marginal Benefits and Marginal Costs
18.5 International Environmental Issues
18.6 The Tradeoff between Economic Output and Environmental Protection
Chapter 19. Positive Externalities and Public Goods
Introduction to Positive Externalities and Public Goods
19.1 Why the Private Sector Under Invests in Innovation
The Positive Externalities of New Technology
Why Invest in Human Capital?
Other Examples of Positive Externalities
19.2 How Governments Can Encourage Innovation
Policy #1: Government Spending on Research and Development
Policy #2: Tax Breaks for Research and Development
Policy #3 Cooperative Research
19.3 Public Goods
The Definition of a Public Good
The Free Rider Problem of Public Goods
The Role of Government in Paying for Public Goods
Common Resources and the "Tragedy of the Commons"
Positive Externalities in Public Health Programs
Chapter 20. Poverty and Economic Inequality
Introduction to Poverty and Economic Inequality
20.1 Drawing the Poverty Line
20.2 The Poverty Trap
20.3 The Safety Net
Temporary Assistance for Needy Families
The Earned Income Tax Credit (EITC)
Supplemental Nutrition Assistance Program (SNAP)
20.4 Income Inequality: Measurement and Causes
Measuring Income Distribution by Quintiles
Lorenz Curve
Causes of Growing Inequality: The Changing Composition of American Households
Causes of Growing Inequality: A Shift in the Distribution of Wages
20.5 Government Policies to Reduce Income Inequality
The Ladder of Opportunity
The Tradeoff between Incentives and Income Equality
Chapter 21. Issues in Labor Markets: Unions, Discrimination, Immigration
Introduction to Issues in Labor Markets: Unions, Discrimination, Immigration
21.1 Unions
Facts about Union Membership and Pay
Higher Wages for Union Workers
The Decline in U.S. Union Membership
21.2 Employment Discrimination
Earnings Gaps by Race and Gender
Investigating the Female/Male Earnings Gap
Investigating the Black/White Earnings Gap
Competitive Markets and Discrimination
Public Policies to Reduce Discrimination
An Increasingly Diverse Workforce
21.3 Immigration
Historical Patterns of Immigration
Economic Effects of Immigration
Proposals for Immigration Reform
Chapter 22. Information, Risk, and Insurance
22.1 The Problem of Imperfect Information and Asymmetric Information
"Lemons" and Other Examples of Imperfect Information
How Imperfect Information Can Affect Equilibrium Price and Quantity
When Price Mixes with Imperfect Information about Quality
Mechanisms to Reduce the Risk of Imperfect Information
22.2 Insurance and Imperfect Information
Government and Social Insurance
Risk Groups and Actuarial Fairness
The Moral Hazard Problem
The Adverse Selection Problem
U.S. Health Care in an International Context
Government Regulation of Insurance
The Patient Protection and Affordable Care Act
Chapter 23. Financial Markets
Introduction to Financial Markets
23.1 How Businesses Raise Financial Capital
Early Stage Financial Capital
Profits as a Source of Financial Capital
Borrowing: Banks and Bonds
Corporate Stock and Public Firms
How Firms Choose between Sources of Financial Capital
23.2 How Households Supply Financial Capital
Expected Rate of Return, Risk, and Actual Rate of Return
Housing and Other Tangible Assets
The Tradeoffs between Return and Risk
23.3 How to Accumulate Personal Wealth
Why It Is Hard to Get Rich Quick: The Random Walk Theory
Getting Rich the Slow, Boring Way
How Capital Markets Transform Financial Flows
Chapter 24. Public Economy
Introduction to Public Economy
24.1 Voter Participation and Costs of Elections
24.2 Special Interest Politics
Identifiable Winners, Anonymous Losers
Pork Barrels and Logrolling
24.3 Flaws in the Democratic System of Government
Where Is Government's Self-Correcting Mechanism?
A Balanced View of Markets and Government
Chapter 25. Money and the Theory of the Firm
Introduction to Money and the Theory of the Firm
25.1 The Metallist and the Barter Myth
25.2 Smith, Marx, Keynes, Chartalism and Modern Money Theory
25.3 The Money Hierarchy and the False Duality of the State and Market
25.4 Local Currency Systems: Social Money and Community Currencies
Chapter 26. International Trade
Introduction to International Trade
26.1 Absolute and Comparative Advantage
A Numerical Example of Absolute and Comparative Advantage
Gains from Trade
26.2 What Happens When a Country Has an Absolute Advantage in All Goods
Production Possibilities and Comparative Advantage
Mutually Beneficial Trade with Comparative Advantage
How Opportunity Cost Sets the Boundaries of Trade
Comparative Advantage Goes Camping
26.3 Intra-industry Trade between Similar Economies
The Prevalence of Intra-industry Trade between Similar Economies
Gains from Specialization and Learning
Economies of Scale, Competition, Variety
Dynamic Comparative Advantage
26.4 The Benefits of Reducing Barriers to International Trade
From Interpersonal to International Trade
Chapter 27. Globalization and Protectionism
Introduction to Globalization and Protectionism
27.1 Protectionism: An Indirect Subsidy from Consumers to Producers
Demand and Supply Analysis of Protectionism
Who Benefits and Who Pays?
27.2 International Trade and Its Effects on Jobs, Wages, and Working Conditions
Fewer Jobs?
Trade and Wages
Labor Standards and Working Conditions
27.3 Arguments in Support of Restricting Imports
The Infant Industry Argument
The Anti-Dumping Argument
The Environmental Protection Argument
The Unsafe Consumer Products Argument
The National Interest Argument
27.4 How Trade Policy Is Enacted: Globally, Regionally, and Nationally
The World Trade Organization
Regional Trading Agreements
Trade Policy at the National Level
Long-Term Trends in Barriers to Trade
27.5 The Tradeoffs of Trade Policy
Chapter 28. The Economics of Globalization and Trade: A Pluralistic Approach
Introduction to Globalization and Trade from a Pluralistic Perspective
28.1 The Orthodox Story of Trade: A Synopsis
28.2 A Critical Examination of the Orthodox Depiction of Free Trade
28.3 Challenging Functionality: A More Penetrating Critique
28.4 An Alternative Presentation of International Trade: Path Dependency
Principles of Microeconomics: Scarcity and Social Provisioning
Analyze short-run costs as influenced by total cost, fixed cost, variable cost, marginal cost, and average cost.
Calculate average profit
Evaluate patterns of costs to determine potential profit
The cost of producing a firm's output depends on how much labor and physical capital the firm uses. A list of the costs involved in producing cars will look very different from the costs involved in producing computer software or haircuts or fast-food meals. However, the cost structure of all firms can be broken down into some common underlying patterns. When a firm looks at its total costs of production in the short run, a useful starting point is to divide total costs into two categories: fixed costs that cannot be changed in the short run and variable costs that can be changed.
Fixed costs are expenditures that do not change regardless of the level of production, at least not in the short term. Whether you produce a lot or a little, the fixed costs are the same. One example is the rent on a factory or a retail space. Once you sign the lease, the rent is the same regardless of how much you produce, at least until the lease runs out. Fixed costs can take many other forms: for example, the cost of machinery or equipment to produce the product, research and development costs to develop new products, even an expense like advertising to popularize a brand name. The level of fixed costs varies according to the specific line of business: for instance, manufacturing computer chips requires an expensive factory, but a local moving and hauling business can get by with almost no fixed costs at all if it rents trucks by the day when needed.
Variable costs, on the other hand, are incurred in the act of producing—the more you produce, the greater the variable cost. Labor is treated as a variable cost, since producing a greater quantity of a good or service typically requires more workers or more work hours. Variable costs would also include raw materials.
As a concrete example of fixed and variable costs, consider the barber shop called "The Clip Joint" shown in Figure 1. The data for output and costs are shown in Table 2. The fixed costs of operating the barber shop, including the space and equipment, are $160 per day. The variable costs are the costs of hiring barbers, which in our example is $80 per barber each day. The first two columns of the table show the quantity of haircuts the barbershop can produce as it hires additional barbers. The third column shows the fixed costs, which do not change regardless of the level of production. The fourth column shows the variable costs at each level of output. These are calculated by taking the amount of labor hired and multiplying by the wage. For example, two barbers cost: 2 × $80 = $160. Adding together the fixed costs in the third column and the variable costs in the fourth column produces the total costs in the fifth column. So, for example, with two barbers the total cost is: $160 + $160 = $320.
Labor   Quantity   Fixed Cost   Variable Cost   Total Cost
1       16         $160         $80             $240
2       40         $160         $160            $320
Table 2. Output and Total Costs
Figure 1. How Output Affects Total Costs. At zero production, the fixed costs of $160 are still present. As production increases, variable costs are added to fixed costs, and the total cost is the sum of the two.
The relationship between the quantity of output being produced and the cost of producing that output is shown graphically in the figure. The fixed costs are always shown as the vertical intercept of the total cost curve; that is, they are the costs incurred when output is zero so there are no variable costs.
You can see from the graph that once production starts, total costs and variable costs rise. While variable costs may initially increase at a decreasing rate, at some point they begin increasing at an increasing rate. This is caused by diminishing marginal returns, discussed in the chapter on Choice in a World of Scarcity, which is easiest to see with an example. As the number of barbers increases from zero to one in the table, output increases from 0 to 16 for a marginal gain of 16; as the number rises from one to two barbers, output increases from 16 to 40, a marginal gain of 24. From that point on, though, the marginal gain in output diminishes as each additional barber is added. For example, as the number of barbers rises from two to three, the marginal output gain is only 20; and as the number rises from three to four, the marginal gain is only 12.
To understand the reason behind this pattern, consider that a one-man barber shop is a very busy operation. The single barber needs to do everything: say hello to people entering, answer the phone, cut hair, sweep up, and run the cash register. A second barber reduces the level of disruption from jumping back and forth between these tasks, and allows a greater division of labor and specialization. The result can be increasing marginal returns. However, as other barbers are added, the advantage of each additional barber is less, since the specialization of labor can only go so far. The addition of a sixth or seventh or eighth barber just to greet people at the door will have less impact than the second one did. This is the pattern of diminishing marginal returns. As a result, the total costs of production will begin to rise more rapidly as output increases. At some point, you may even see negative returns as the additional barbers begin bumping elbows and getting in each other's way. In this case, the addition of still more barbers would actually cause output to decrease, as shown in the last row of Table 2.
This pattern of diminishing marginal returns is common in production. As another example, consider the problem of irrigating a crop on a farmer's field. The plot of land is the fixed factor of production, while the water that can be added to the land is the key variable cost. As the farmer adds water to the land, output increases. But adding more and more water brings smaller and smaller increases in output, until at some point the water floods the field and actually reduces output. Diminishing marginal returns occur because, at a given level of fixed costs, each additional input contributes less and less to overall production.
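The diminishing pattern in the barber example is easy to tabulate. The short Python sketch below is an added illustration, using the output figures quoted above for zero through five barbers (the five-barber figure of 80 haircuts comes from Table 3):

```python
# Haircuts per day for 0 through 5 barbers, using the figures quoted in the text
output = [0, 16, 40, 60, 72, 80]

# Marginal product: extra haircuts contributed by each additional barber
marginal = [later - earlier for earlier, later in zip(output, output[1:])]
print(marginal)  # [16, 24, 20, 12, 8] -- rises once, then steadily diminishes
```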
The breakdown of total costs into fixed and variable costs can provide a basis for other insights as well. The first five columns of Table 3 duplicate the previous table, but the last three columns show average total costs, average variable costs, and marginal costs. These new measures analyze costs on a per-unit (rather than a total) basis and are reflected in the curves shown in Figure 2.
Figure 2. Cost Curves at the Clip Joint. The information on total costs, fixed cost, and variable cost can also be presented on a per-unit basis. Average total cost (ATC) is calculated by dividing total cost by the total quantity produced. The average total cost curve is typically U-shaped. Average variable cost (AVC) is calculated by dividing variable cost by the quantity produced. The average variable cost curve lies below the average total cost curve and is typically U-shaped or upward-sloping. Marginal cost (MC) is calculated by taking the change in total cost between two levels of output and dividing by the change in output. The marginal cost curve is upward-sloping.
Labor   Quantity   Fixed Cost   Variable Cost   Total Cost   Marginal Cost   Average Total Cost   Average Variable Cost
1       16         $160         $80             $240         $5.00           $15.00               $5.00
2       40         $160         $160            $320         $3.30           $8.00                $4.00
5       80         $160         $400            $560         $10.00          $7.00                $5.00
Table 3. Different Types of Costs
Average total cost (sometimes referred to simply as average cost) is total cost divided by the quantity of output. Since the total cost of producing 40 haircuts is $320, the average total cost for producing each of 40 haircuts is $320/40, or $8 per haircut. Average cost curves are typically U-shaped, as Figure 2 shows. Average total cost starts off relatively high, because at low levels of output total costs are dominated by the fixed cost; mathematically, the denominator is so small that average total cost is large. Average total cost then declines, as the fixed costs are spread over an increasing quantity of output. In the average cost calculation, the rise in the numerator of total costs is relatively small compared to the rise in the denominator of quantity produced. But as output expands still further, the average cost begins to rise. At the right side of the average cost curve, total costs begin rising more rapidly as diminishing returns kick in.
Average variable cost is obtained when variable cost is divided by quantity of output. For example, the variable cost of producing 80 haircuts is $400, so the average variable cost is $400/80, or $5 per haircut. Note that at any level of output, the average variable cost curve will always lie below the curve for average total cost, as shown in Figure 2. The reason is that average total cost includes average variable cost and average fixed cost. Thus, for Q = 80 haircuts, the average total cost is $7 per haircut, while the average variable cost is $5 per haircut. However, as output grows, fixed costs become relatively less important (since they do not rise with output), so average variable cost sneaks closer to average cost.
Average total and variable costs measure the average costs of producing some quantity of output. Marginal cost is somewhat different. Marginal cost is the additional cost of producing one more unit of output. So it is not the cost per unit of all units being produced, but only the next one (or next few). Marginal cost can be calculated by taking the change in total cost and dividing it by the change in quantity. For example, as quantity produced increases from 40 to 60 haircuts, total costs rise by 400 – 320, or 80. Thus, the marginal cost for each of those marginal 20 units will be 80/20, or $4 per haircut. The marginal cost curve is generally upward-sloping, because diminishing marginal returns implies that additional units are more costly to produce. A small range of increasing marginal returns can be seen in the figure as a dip in the marginal cost curve before it starts rising. There is a point at which marginal and average costs meet, as the following Clear it Up feature discusses.
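These per-unit definitions are straightforward to mechanize. The sketch below is an added illustration reproducing the one- and two-barber rows quoted above (the other rows of Table 3 are omitted here):

```python
fixed_cost = 160
rows = [(16, 80), (40, 160)]  # (quantity of haircuts, variable cost), as in Table 3

prev_q, prev_tc = 0, fixed_cost  # producing nothing still incurs the $160 fixed cost
for q, vc in rows:
    tc = fixed_cost + vc
    atc = tc / q                        # average total cost
    avc = vc / q                        # average variable cost
    mc = (tc - prev_tc) / (q - prev_q)  # marginal cost over this output range
    print(f"Q={q}: ATC=${atc:.2f}  AVC=${avc:.2f}  MC=${mc:.2f}")
    prev_q, prev_tc = q, tc
```

The printed values match Table 3, which rounds the $3.33 marginal cost to $3.30.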
Where do marginal and average costs meet?
The marginal cost line intersects the average cost line exactly at the bottom of the average cost curve—which occurs at a quantity of 72 and cost of $6.60 in Figure 2. The reason why the intersection occurs at this point is built into the economic meaning of marginal and average costs. If the marginal cost of production is below the average cost for producing previous units, as it is for the points to the left of where MC crosses ATC, then producing one more additional unit will reduce average costs overall—and the ATC curve will be downward-sloping in this zone. Conversely, if the marginal cost of production for producing an additional unit is above the average cost for producing the earlier units, as it is for points to the right of where MC crosses ATC, then producing a marginal unit will increase average costs overall—and the ATC curve must be upward-sloping in this zone. The point of transition, between where MC is pulling ATC down and where it is pulling it up, must occur at the minimum point of the ATC curve.
This idea of the marginal cost "pulling down" the average cost or "pulling up" the average cost may sound abstract, but think about it in terms of your own grades. If the score on the most recent quiz you take is lower than your average score on previous quizzes, then the marginal quiz pulls down your average. If your score on the most recent quiz is higher than the average on previous quizzes, the marginal quiz pulls up your average. In this same way, low marginal costs of production first pull down average costs and then higher marginal costs pull them up.
The numerical calculations behind average cost, average variable cost, and marginal cost will change from firm to firm. However, the general patterns of these curves, and the relationships and economic intuition behind them, will not change.
Breaking down total costs into fixed cost, marginal cost, average total cost, and average variable cost is useful because each statistic offers its own insights for the firm.
Whatever the firm's quantity of production, total revenue must exceed total costs if it is to earn a profit. As explored in the chapter Choice in a World of Scarcity, fixed costs are often sunk costs that cannot be recouped. In thinking about what to do next, sunk costs should typically be ignored, since this spending has already been made and cannot be changed. However, variable costs can be changed, so they convey information about the firm's ability to cut costs in the present and the extent to which costs will increase if production rises.
Why are total cost and average cost not on the same graph?
Total cost, fixed cost, and variable cost each reflect different aspects of the cost of production over the entire quantity of output being produced. These costs are measured in dollars. In contrast, marginal cost, average cost, and average variable cost are costs per unit. In the previous example, they are measured as cost per haircut. Thus, it would not make sense to put all of these numbers on the same graph, since they are measured in different units ($ versus $ per unit of output).
It would be as if the vertical axis measured two different things. In addition, as a practical matter, if they were on the same graph, the lines for marginal cost, average cost, and average variable cost would appear almost flat against the horizontal axis, compared to the values for total cost, fixed cost, and variable cost. Using the figures from the previous example, the total cost of producing 40 haircuts is $320. But the average cost is $320/40, or $8. If you graphed both total and average cost on the same axes, the average cost would hardly show.
Average cost tells a firm whether it can earn profits given the current price in the market. If we divide profit by the quantity of output produced we get average profit, also known as the firm's profit margin. Expanding the equation for profit gives:
[latex]\begin{array}{r @{{}={}} l}average\;profit & \frac{profit}{quantity\;produced} \\[1em] & \frac{total\;revenue\;-\;total\;cost}{quantity\;produced} \\[1em] & \frac{total\;revenue}{quantity\;produced}\;-\;\frac{total\;cost}{quantity\;produced} \\[1em] & average\;revenue\;-\;average\;cost \end{array}[/latex]
But note that:
[latex]\begin{array}{r @{{}={}} l}average\;revenue & \frac{price\;\times\;quantity\;produced}{quantity\;produced} \\[1em] & price \end{array}[/latex]
[latex]average\;profit = price\;-\;average\;cost[/latex]
This is the firm's profit margin. This definition implies that if the market price is above average cost, average profit, and thus total profit, will be positive; if price is below average cost, then profits will be negative.
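As a minimal numeric illustration of this identity (the $10 price below is an assumed figure, not one from the text; the $320 total cost for 40 haircuts is from Table 3):

```python
def average_profit(price, total_cost, quantity):
    """Profit margin: price minus average total cost."""
    return price - total_cost / quantity

print(average_profit(10, 320, 40))  # 2.0 -> a $2.00 margin per haircut
```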
The marginal cost of producing an additional unit can be compared with the marginal revenue gained by selling that additional unit to reveal whether the additional unit is adding to total profit—or not. Thus, marginal cost helps producers understand how profits would be affected by increasing or decreasing production.
The pattern of costs varies among industries and even among firms in the same industry. Some businesses have high fixed costs, but low marginal costs. Consider, for example, an Internet company that provides medical advice to customers. Such a company might be paid by consumers directly, or perhaps hospitals or healthcare practices might subscribe on behalf of their patients. Setting up the website, collecting the information, writing the content, and buying or leasing the computer space to handle the web traffic are all fixed costs that must be undertaken before the site can work. However, when the website is up and running, it can provide a high quantity of service with relatively low variable costs, like the cost of monitoring the system and updating the information. In this case, the total cost curve might start at a high level, because of the high fixed costs, but then might appear close to flat, up to a large quantity of output, reflecting the low variable costs of operation. If the website is popular, however, a large rise in the number of visitors will overwhelm the website, and increasing output further could require a purchase of additional computer space.
For other firms, fixed costs may be relatively low. For example, consider firms that rake leaves in the fall or shovel snow off sidewalks and driveways in the winter. For fixed costs, such firms may need little more than a car to transport workers to homes of customers and some rakes and shovels. Still other firms may find that diminishing marginal returns set in quite sharply. If a manufacturing plant tried to run 24 hours a day, seven days a week, little time remains for routine maintenance of the equipment, and marginal costs can increase dramatically as the firm struggles to repair and replace overworked equipment.
Every firm can gain insight into its task of earning profits by dividing its total costs into fixed and variable costs, and then using these calculations as a basis for average total cost, average variable cost, and marginal cost. However, making a final decision about the profit-maximizing quantity to produce and the price to charge will require combining these perspectives on cost with an analysis of sales and revenue, which in turn requires looking at the market structure in which the firm finds itself. Before we turn to the analysis of market structure in other chapters, we will analyze the firm's cost structure from a long-run perspective.
In a short-run perspective, a firm's total costs can be divided into fixed costs, which a firm must incur before producing any output, and variable costs, which the firm incurs in the act of producing. Fixed costs are sunk costs; that is, because they are in the past and cannot be altered, they should play no role in economic decisions about future production or pricing. Variable costs typically show diminishing marginal returns, so that the marginal cost of producing higher levels of output rises.
Marginal cost is calculated by taking the change in total cost (or the change in variable cost, which will be the same thing) and dividing it by the change in output, for each possible change in output. Marginal costs are typically rising. A firm can compare marginal cost to the additional revenue it gains from selling another unit to find out whether its marginal unit is adding to profit.
Average total cost is calculated by taking total cost and dividing by total output at each different level of output. Average costs are typically U-shaped on a graph. If a firm's average cost of production is lower than the market price, a firm will be earning profits.
Average variable cost is calculated by taking variable cost and dividing by the total output at each level of output. Average variable costs are typically U-shaped. If a firm's average variable cost of production is lower than the market price, then the firm would be earning profits if fixed costs are left out of the picture.
The WipeOut Ski Company manufactures skis for beginners. Fixed costs are $30. Fill in Table 4 for total cost, average variable cost, average total cost, and marginal cost.
Quantity   Variable Cost   Fixed Cost
0          0               $30
1          $10             $30
5          $100            $30
Based on your answers to the WipeOut Ski Company in Self-Check Question 1, now imagine a situation where the firm produces a quantity of 5 units that it sells for a price of $25 each.
What will be the company's profits or losses?
How can you tell at a glance whether the company is making or losing money at this price by looking at average cost?
At the given quantity and price, is the marginal unit produced adding to profits?
What is the difference between fixed costs and variable costs?
Are there fixed costs in the long-run? Explain briefly.
Are fixed costs also sunk costs? Explain.
What are diminishing marginal returns as they relate to costs?
Which costs are measured on per-unit basis: fixed costs, average cost, average variable cost, variable costs, and marginal cost?
How is each of the following calculated: marginal cost, average total cost, average variable cost?
Critical Thinking Questions
A common name for fixed cost is "overhead." If you divide fixed cost by the quantity of output produced, you get average fixed cost. Suppose fixed cost is $1,000. What does the average fixed cost curve look like? Use your response to explain what "spreading the overhead" means.
How does fixed cost affect marginal cost? Why is this relationship important?
Average cost curves (except for average fixed cost) tend to be U-shaped, decreasing and then increasing. Marginal cost curves have the same shape, though this may be harder to see since most of the marginal cost curve is increasing. Why do you think that average and marginal cost curves have the same general shape?
Return to Figure 1. What is the marginal gain in output from increasing the number of barbers from 4 to 5 and from 5 to 6? Does it continue the pattern of diminishing marginal returns?
Compute the average total cost, average variable cost, and marginal cost of producing 60 and 72 haircuts. Draw the graph of the three curves between 60 and 72 haircuts.
average profit
profit divided by the quantity of output produced; also known as the profit margin
average total cost
total cost divided by the quantity of output
average variable cost
variable cost divided by the quantity of output
fixed cost
expenditure that must be made before production starts and that does not change regardless of the level of production
marginal cost
the additional cost of producing one more unit
total cost
the sum of fixed and variable costs of production
variable cost
cost of production that increases with the quantity produced
Answers to Self-Check Questions
Quantity   Variable Cost   Fixed Cost   Total Cost   Average Variable Cost   Average Total Cost   Marginal Cost
0          0               $30          $30          –                       –                    –
1          $10             $30          $40          $10.00                  $40.00               $10
4          $70             $30          $100         $17.50                  $25.00               $25
5          $100            $30          $130         $20.00                  $26.00               $30
Total revenues in this example will be a quantity of five units multiplied by the price of $25/unit, which equals $125. Total costs when producing five units are $130. Thus, at this level of quantity and output the firm experiences losses (or negative profits) of $5.
If price is less than average cost, the firm is not making a profit. At an output of five units, the average cost is $26/unit. Thus, at a glance you can see the firm is making losses. At a second glance, you can see that it must be losing $1 for each unit produced (that is, average cost of $26/unit minus the price of $25/unit). With five units produced, this observation implies total losses of $5.
When producing five units, marginal costs are $30/unit. Price is $25/unit. Thus, the marginal unit is not adding to profits, but is actually subtracting from profits, which suggests that the firm should reduce its quantity produced.
Principles of Microeconomics: Scarcity and Social Provisioning by Erik Dean, Justin Elardo, Mitch Green, Benjamin Wilson, Sebastian Berger is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
Leila Schneps on Grothendieck
If you have neither the time nor energy to watch more than one interview or talk about Grothendieck's life and mathematics, may I suggest to spare that privilege for Leila Schneps' talk on 'Le génie de Grothendieck' in the 'Thé & Sciences' series at the Salon Nun in Paris.
I was going to add some 'relevant' time slots after the embedded YouTube-clip below, but I really think it is better to watch Leila's interview in its entirety. Enjoy!
From Weil's foundations to schemes
Last time, we've seen that the first time 'schemes' were introduced was in 'La Tribu' (the internal Bourbaki-account of their congresses) of the May-June 1955 congress in Chicago.
Here, we will focus on the events leading up to that event. If you always thought Grothendieck invented the word 'schemes', here's what Colin McLarty wrote:
"A story says that in a Paris café around 1955 Grothendieck asked his friends "what is a scheme?". At the time only an undefined idea of "schéma" was current in Paris, meaning more or less whatever would improve on Weil's foundations." (McLarty in The Rising Sea)
What were Weil's foundations of algebraic geometry?
Well, let's see how Weil defined an affine variety over a field $k$. First you consider a 'universal field' $K$ containing $k$, that is, $K$ is an algebraically closed field of infinite transcendence degree over $k$. A point of $n$-dimensional affine space is an $n$-tuple $x=(x_1,\dots,x_n) \in K^n$. For such a point $x$ you consider the field $k(x)$ which is the subfield of $K$ generated by $k$ and the coordinates $x_i$ of $x$.
Alternatively, the field $k(x)$ is the field of fractions of the affine domain $R=k[z_1,\dots,z_n]/I$ where $I$ is the prime ideal of all polynomials $f \in k[z_1,\dots,z_n]$ such that $f(x) = f(x_1,\dots,x_n)=0$.
An affine $k$-variety $V$ is associated to a 'generic point' $x=(x_1,\dots,x_n)$, meaning that the field $k(x)$ is a 'regular extension' of $k$ (that is, for all field-extensions $k'$ of $k$, the tensor product $k(x) \otimes_k k'$ does not contain zero-divisors).
The points of $V$ are the 'specialisations' of $x$, that is, all points $y=(y_1,\dots,y_n)$ such that $f(y_1,\dots,y_n)=0$ for all $f \in I$.
Perhaps an example? Let $k = \mathbb{Q}$ and $K=\mathbb{C}$ and take $x=(i,\pi)$ in the affine plane $\mathbb{C}^2$. What is the corresponding prime ideal $I$ of $\mathbb{Q}[z_1,z_2]$? Well, $i$ is a solution to $z_1^2+1=0$ whereas $\pi$ is transcendental over $\mathbb{Q}$, so $I=(z_1^2+1)$ and $R=\mathbb{Q}[z_1,z_2]/I= \mathbb{Q}(i)[z_2]$.
Is $x=(i,\pi)$ a generic point? Well, suppose it were, then the points of the corresponding affine variety $V$ would be all couples $(\pm i, \lambda)$ with $\lambda \in \mathbb{C}$ which is the union of two lines in $\mathbb{C}^2$. But then $i \otimes 1 + 1 \otimes i$ is a zero-divisor in $\mathbb{Q}(x) \otimes_{\mathbb{Q}} \mathbb{Q}(i)$. So no, it is not a generic point over $\mathbb{Q}$ and does not define an affine $\mathbb{Q}$-variety.
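To make the zero-divisor explicit (a one-line verification, using only $i^2=-1$): already in the subring $\mathbb{Q}(i) \otimes_{\mathbb{Q}} \mathbb{Q}(i)$ one has
\[(i \otimes 1 + 1 \otimes i)(i \otimes 1 - 1 \otimes i) = i^2 \otimes 1 - 1 \otimes i^2 = (-1) \otimes 1 - 1 \otimes (-1) = 0,\]
while neither factor is zero, since $1 \otimes 1, i \otimes 1, 1 \otimes i, i \otimes i$ form a $\mathbb{Q}$-basis of this tensor product.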
If we would have started with $k=\mathbb{Q}(i)$, then $x=(i,\pi)$ is generic and the corresponding affine variety $V$ consists of all points $(i,\lambda) \in \mathbb{C}^2$.
If this is new to you, consider yourself lucky to be young enough to have learned AG from Fulton's Algebraic curves, or Hartshorne's chapter 1 if you were that ambitious.
By 1955, Serre had written his FAC, and Bourbaki had developed enough commutative algebra to turn His attention to algebraic geometry.
La Ciotat congress (February 27th – March 6th, 1955)
With a splendid view on the Mediterranean, a small group of Bourbaki members (Henri Cartan (then 51), with two of his former Ph.D. students: Jean-Louis Koszul (then 34) and Jean-Pierre Serre (then 29, and fresh Fields medallist), Jacques Dixmier (then 31), and Pierre Samuel (then 34), a former student of Zariski's) discussed a previous 'Rapport de Geometrie Algebrique' (no. 206) and arrived at some unanimous decisions:
1. Algebraic varieties must be sets of points, which will not change at every moment.
2. One should include 'abstract' varieties, obtained by gluing (fibres, etc.).
3. All necessary algebra must have been previously proved.
4. The main application of purely algebraic methods being characteristic p, we will hide nothing of the unpleasant phenomena that occur there.
(Henri Cartan and Jean-Pierre Serre, photo by Paul Halmos)
The approach the propose is clearly based on Serre's FAC. The points of an affine variety are the maximal ideals of an affine $k$-algebra, this set is equipped with the Zariski topology such that the local rings form a structure sheaf. Abstract varieties are then constructed by gluing these topological spaces and sheaves.
At the insistence of the 'specialistes' (Serre, and Samuel who had just written his book 'Méthodes d'algèbre abstraite en géométrie algébrique') two additional points are adopted, but with some hesitation. The first being a jibe at Weil:
1. …The congress, being a little disgusted by the artificiality of the generic point, does not want $K$ to be always of infinite transcendence degree over $k$. It admits that generic points are convenient in certain circumstances, but refuses to see them put to all the sauces: one could speak of a coordinate ring or of a function field without stuffing it by force into $K$.
2. Trying to include the arithmetic case.
The last point was problematic as all their algebras were supposed to be affine over a field $k$, and they wouldn't go further than to allow the overfield $K$ to be its algebraic closure. Further, (and this caused a lot of heavy discussions at coming congresses) they allowed their varieties to be reducible.
The Chicago congress (May 30th – June 2nd 1955)
Apart from Samuel, a different group of Bourbakis gathered for the 'second Caucus des Illinois' at Eckhart Hall, including three founding members Weil (then 49), Dieudonné (then 49) and Chevalley (then 46), and two youngsters, Armand Borel (then 32) and Serge Lang (then 28).
Their reaction to the La Ciotat meeting (the 'congress of the public bench') was swift:
(page 1) : "The caucus discovered a public bench near Eckhart Hall, but didn't do much with it."
(page 2) : "The caucus did not judge La Ciotat's plan beyond reproach, and proposed a completely different plan."
They wanted to include the arithmetic case by defining as affine scheme the set of all prime ideals (or rather, the localisations at these prime ideals) of a finitely generated domain over a Dedekind domain. They continue:
(page 4) : "The notion of a scheme covers the arithmetic case, and is extracted from the illustrious works of Nagata, themselves inspired by the scholarly cogitations of Chevalley. This means that the latter managed to sell all his ideas to the caucus. The Pope of Chicago, very happy to be able to reject very far projective varieties and Chow coordinates, willingly rallied to the suggestions of his illustrious colleague. However, we have not attempted to define varieties in the arithmetic case. Weil's principle is that it is unclear what will come out of Nagata's tricks, and that the only stable thing in arithmetic theory is reduction modulo $p$ a la Shimura."
"Contrary to the decisions of La Ciotat, we do not want to glue reducible stuff, nor call them varieties. … We even decide to limit ourselves to absolutely irreducible varieties, which alone will have the right to the name of varieties."
The insistence on absolute irreducibility is understandable from Weil's perspective, as only these varieties have a generic point. But why does he go along with Chevalley's proposal of an affine scheme?
In Weil's approach, a point of the affine variety $V$ determined by a generic point $x=(x_1,\dots,x_n)$ determines a prime ideal $Q$ of the domain $R=k[x_1,\dots,x_n]$, so Chevalley's proposal to consider all prime ideals (rather than only the maximal ideals of an affine algebra) seems right to Weil.
However, in Weil's approach there are usually several points corresponding to the same prime ideal $Q$ of $R$, namely all possible embeddings of the ring $R/Q$ in that huge field $K$, so whenever $R/Q$ is not algebraic over $k$, there are infinitely many Weil-points of $V$ corresponding to $Q$ (whence the La Ciotat criticism that points of a variety were not supposed to change at every moment).
According to Ralf Krömer in his book Tool and Object – a history and philosophy of category theory this shift from Weil-points to prime ideals of $R$ may explain Chevalley's use of the word 'scheme':
(page 164) : "The 'scheme of the variety' denotes 'what is invariant in a variety'."
Another time we will see how internal discussion influenced the further Bourbaki congresses until Grothendieck came up with his 'hyperplan'.
The birthplace of schemes
Wikipedia claims:
"The word scheme was first used in the 1956 Chevalley Seminar, in which Chevalley was pursuing Zariski's ideas."
and refers to the lecture by Chevalley 'Les schemas', given on December 12th, 1955 at the ENS-based 'Seminaire Henri Cartan' (in fact, that year it was called the Cartan-Chevalley seminar, and the next year Chevalley set up his own seminar at the ENS).
Items recently added to the online Bourbaki Archive give us new information on time and place of the birth of the concept of schemes.
From May 30th till June 2nd 1955 the 'second caucus des Illinois' Bourbaki-congress was held in 'le grand salon d'Eckhart Hall' at the University of Chicago (Weil's place at that time).
Only six of the Bourbaki members were present:
Jean Dieudonne (then 49), the scribe of the Bourbaki-gang.
Andre Weil (then 49), called 'Le Pape de Chicago' in La Tribu, and responsible for his 'Foundations of Algebraic Geometry'.
Claude Chevalley (then 46), who wanted a better, more workable version of algebraic geometry. He was just nominated professor at the Sorbonne, and was prepping for his seminar on algebraic geometry (with Cartan) in the fall.
Pierre Samuel (then 34), who studied in France but got his Ph.D. in 1949 from Princeton under the supervision of Oscar Zariski. He was a Bourbaki-guinea pig in 1945, and from 1947 attended most Bourbaki congresses. He just got his book Methodes d'algebre abstraite en geometrie algebrique published.
Armand Borel (then 32), a Swiss mathematician who was in Paris from 1949 and obtained his Ph.D. under Jean Leray before moving on to the IAS in 1957. He was present at 9 of the Bourbaki congresses between 1955 and 1960.
Serge Lang (then 28), a French-American mathematician who got his Ph.D. in 1951 from Princeton under Emil Artin. In 1955, he just got a position at the University of Chicago, which he held until 1971. He attended 7 Bourbaki congresses between 1955 and 1960.
The issue of La Tribu of the Eckhart-Hall congress is entirely devoted to algebraic geometry, and starts off with a bang:
"The Caucus did not judge the plan of La Ciotat above all reproaches, and proposed a completely different plan.
I – Schemes
II – Theory of multiplicities for schemes
III – Varieties
IV – Calculation of cycles
V – Divisors
VI – Projective geometry
In the spring of that year (February 27th – March 6th, 1955) a Bourbaki congress was held 'Chez Patrice' at La Ciotat, hosting a different group of Bourbaki members (Samuel was the singleton intersection): Henri Cartan (then 51), Jacques Dixmier (then 31), Jean-Louis Koszul (then 34), and Jean-Pierre Serre (then 29, and fresh Fields medallist).
In the La Ciotat-Tribu, nr. 35, there are also a great number of pages (page 14 – 25) used to explain a general plan to deal with algebraic geometry. Their summary (page 3-4):
"Algebraic Geometry : She has a very nice face.
Chap I : Algebraic varieties
Chap II : The rest of Chap. I
Chap III : Divisors
Chap IV : Intersections"
There's much more to say comparing these two plans, but that'll be for another day.
We've just read the word 'schemes' for the first (?) time. That unnumbered La Tribu continues on page 3 with "where one explains what a scheme is":
So, what was their first idea of a scheme?
Well, you had your favourite Dedekind domain $D$, and you considered all rings of finite type over $D$. Sorry, not all rings, just all domains, because such a ring $R$ had to have a field of fractions $K$ which was of finite type over $k$, the field of fractions of your Dedekind domain $D$.
They say that Dedekind domains are the algebraic geometrical equivalent of fields. Yeah well, as they only consider $D$-rings the geometric object associated to $D$ is the terminal object, much like a point if $D$ is an algebraically closed field.
But then, what is this geometric object associated to a domain $R$?
In this stage, still under the influence of Weil's focus on valuations and their specialisations, they (Chevalley?) take as the geometric object $\mathbf{Spec}(R)$, the set of all 'spots' (taches), that is, local rings in $K$ which are the localisations of $R$ at prime ideals. So, instead of taking the set of all prime ideals, they prefer to take the set of all stalks of the (coming) structure sheaf.
But then, speaking about sheaves is rather futile, as at this stage there is no trace of any topology on this set. Also, they make a big fuss about not wanting to define a general schema by gluing together these 'affine' schemes, but then they introduce a notion of 'apparentement' of spots which basically amounts to the same thing.
It is still very early days, and there's a lot more to say on this, but if no further documents come to light, I'd say that the birthplace of 'schemes', that is, the place where for the first time there was a documented consensus on the notion, is Eckhart Hall in Chicago.
the topos of unconsciousness
Published May 14, 2022 by lievenlb
Since Wednesday, as mentioned last time, the book by Alain Connes and Patrick Gauthier-Lafaye: "A l'ombre de Grothendieck et de Lacan, un topos sur l'inconscient" is available in the better bookshops.
There's no need to introduce Alain Connes on this blog. Patrick Gauthier-Lafaye is a French psychiatrist and psycho-analyst, working in Strasbourg.
The book is a lengthy dialogue in which the authors try to find a use for topos theory in Jacques Lacan's psycho-analytical view of the unconscious.
If you are a complete Lacanian virgin, it may be helpful to browse through "Lacan, a beginners guide" (by Lionel Bailly) first.
If this left you bewildered, for example by Lacan's strange (ab)use of mathematics, rest assured, you're not alone.
It is no coincidence that Lacan's works are the first case-study in the book "Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science" by Alan Sokal (the one of the hoax) and Jean Bricmont. You can download the book from this link.
If now you feel that Sokal and Bricmont are way too harsh on Lacan, I urge you to have a go at the book "Writing the structures of the subject, Lacan and topology" by Will Greenshields.
If you don't have the time or energy for this, let me give you one illustrative example: the topological explanation of Lacan's formula of fantasy:
\[ \$~\diamond~a \]
Loosely speaking this formula says "the barred subject stands within a circular relationship to the objet petit a (the object of desire), one part of which is determined by alienation, the other by separation".
Lacan was obsessed with the immersion of the projective plane $\mathbb{P}^2(\mathbb{R})$ into $\mathbb{R}^3$ as the cross-cap. Here's an image of it from his 1966-67 seminar on 'Logique du fantasme' (213 pages).
This image includes the position of the objet petit $a$ as the end point of the self-intersection curve, which itself is referred to as the 'castration', or the 'phallus', or whatever.
Brace yourself for the 'explanation' of $\$~\diamond~a$: if you walk twice around $a$ this divides the cross-cap into a disk and a Mobius-strip!
The mathematics is correct but I fail to see how this helps the psycho-analyst in her therapy. But hey, everyone will tell you I have absolutely no therapeutic talent.
Let's return to the brand new book by Alain Connes and Patrick Gauthier-Lafaye: "A l'ombre de Grothendieck et de Lacan, un topos sur l'inconscient".
It was to be expected that they would defend Lacan's exploitation of (surface) topology by saying that he was just unfortunate not to have the more general notion of toposes available, as well as their much subtler logic. Perhaps someone should write a fictional parody on Greenshields book: "Lacan and the topos"…
Connes' first attempt to construct the topos of unconsciousness was also not much of a surprise. According to Lacan the unconscious is 'structured like a language'.
So, a natural approach might be to start with a 'dictionary'-category (words and relations between them) or any other known use of a category in linguistics. A good starting point to read up on this is the blog post A new application of category theory in linguistics.
Eventually they settled for a much more ambitious project. To Connes and Gauthier-Lafaye every individual has her own topos and corresponding logic.
They don't specify how to construct these individual toposes, but postulate that they are all connected to a classifying topos, which is their incarnation of the world of 'myths' and 'fantasies'.
Surely an idea Lacan would have liked. Underlying the unconscious must be, according to Connes and Gauthier-Lafaye, a geometric theory! That is, it can be fully described by first order sentences.
Lacan himself already used some first order sentences in his teachings, such as in his logic of sexuation:
\[ \forall x~(\Phi~x) \quad \text{but also} \quad \exists x~\neg~(\Phi~x) \]
where $\Phi~x$ is the phallic function. Quoting from Greenshield's book:
"While all (the sons) are subject to ($\forall x$) the law of castration ($\Phi~x$), we also learn that this law nevertheless resides upon an exception: there exists a subject ($\exists x$) that is not subject to this law ($\neg \Phi~x$). This exception is embodied by the despotic father who, not being subject to the phallic function, experiences an impossible mode of totalised jouissance (he enjoys all the women). He is, quite simply, the exception that proves the law a necessary beyond that enables the law's geometric bounds to be defined."
It will be quite hard (but probably great fun for psycho-analysts) to turn the whole of Lacanian theory on the unconscious into a coherent geometric theory, construct its classifying topos, and apply the Joyal-Reyes theorem to get at the individual cases/toposes.
I'm sure there are much deeper insights to be gained from Connes' and Gauthier-Lafaye's book, but this is what I got from a first, fast, cursory reading of it.
Grothendieck meets Lacan
Published April 21, 2022 by lievenlb
Next month, a weekend-meeting is organised in Paris on Lacan et Grothendieck, l'impossible rencontre?.
Photo from Remembering my father, Jacques Lacan
Jacques Lacan was a French psychoanalyst and psychiatrist who has been called "the most controversial psycho-analyst since Freud".
What's the connection between Lacan and Grothendieck? Here's Stephane Dugowson's take (G-translated):
"As we know, Lacan was passionate about certain mathematics, notably temporal logic and the theory of knots, where he thought he found material for advancing the theory of psychoanalysis. For his part, Grothendieck testifies in his non-strictly mathematical writings to his passion for the psyche, as shown by many pages of his Récoltes et Semailles just published by Gallimard (in January 2022), or even, among the tens of thousands of pages discovered at his death and of which we know almost nothing, the 3700 pages of mathematics grouped under the title 'Structure of the Psyche'.
One might therefore be surprised that the two geniuses never met. In fact, a lunch did take place in the early 1970s organized by the mathematician and psychoanalyst Daniel Sibony. But a lunch does not necessarily make a meeting, and it seems that this one unfortunately did not happen."
As it is 'bon ton' these days in Parisian circles to utter the word 'topos', several titles of the talks given at the meeting contain that word.
There's Stephane Dugowson's talk on "Logique du topos borroméen et autres logiques à trois points".
Lacan used the Borromean link to illustrate his concepts of the Real, Symbolic, and Imaginary (RSI). For more on this, please read chapter 6 of Lionel Bailly's excellent introduction to Lacan's work, Lacan, A Beginner's Guide.
The Borromean topos is an example of Dugowson's toposes associated to his 'connectivity spaces'. From his paper Définition du topos d'un espace connectif I gather that the objects in the Borromean topos consist of a triple of set-maps from a set $A$ (the global sections) to sets $A_x,A_y$ and $A_z$ (the restrictions to three disconnected 'opens').
\[ \xymatrix{& A \ar[rd] \ar[d] \ar[ld] & \\ A_x & A_y & A_z} \]
This seems to be a topos with a Boolean logic, but perhaps there are other 3-point connectivity spaces with a non-Boolean Heyting subobject classifier.
There's Daniel Sibony's talk on "Mathématiques et inconscient". Sibony is a French mathematician, turned philosopher and psychoanalyst, l'inconscient is an important concept in Lacan's work.
Here's a nice conversation between Daniel Sibony and Alain Connes on the notions of 'time' and 'truth'.
In the second part (starting around 57.30) Connes brings up toposes whose underlying logic is much subtler than brute 'true' or 'false' statements. He discusses the presheaf topos on the additive monoid $\mathbb{N}_+$ which leads to statements which are 'one step from the truth', 'two steps from the truth' and so on. It is also the example Connes used in his talk Un topo sur les topos.
Alain Connes himself will also give a talk at the meeting, together with Patrick Gauthier-Lafaye, on "Un topos sur l'inconscient".
It appears that Connes and Gauthier-Lafaye have written a book on the subject, A l'ombre de Grothendieck et de Lacan : un topos sur l'inconscient. Here's the summary (G-translated):
"The authors present the relevance of the mathematical concept of topos, introduced by A. Grothendieck at the end of the 1950s, in the exploration of the structure of the unconscious."
The book will be released on May 11th.
Published March 13, 2022 by lievenlb
Until now, we've looked at actions of groups (such as the $T/I$ or $PLR$-group) or (transformation) monoids (such as Noll's monoid) on special sets of musical elements, in particular the twelve pitch classes $\mathbb{Z}_{12}$, or the set of all $24$ major and minor chords.
Elephant-lovers recognise such settings as objects in the presheaf topos on the one-object category $\mathbf{M}$ corresponding to the group or monoid. That is, we look at contravariant functors $\mathbf{M} \rightarrow \mathbf{Sets}$.
Last time we've encountered the 'Cube Dance Graph' which depicts a particular relation among the major, minor, and augmented chords.
Recall that the twelve major chords (numbered from $1$ to $12$) are the ordered triples of tones in $\mathbb{Z}_{12}$ of the form $(n,n+4,n+7)$ (such as the triangle on the left). The twelve minor chords (numbered from $13$ to $24$) are the ordered triples $(n,n+3,n+7)$ (such as the middle triangle). The four augmented chords (numbered from $25$ to $28$) are the triples of the form $(n,n+4,n+8)$ (such as the rightmost triangle).
The Cube Dance Graph relates two of these chords when they share two tones (pitch classes) whereas the remaining tones differ by a halftone.
Picture modified from this post.
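The relation is easy to enumerate by brute force; here is a minimal sketch (in Python rather than the GAP used below; the chord numbering is as above):

# Sketch: the 28 triads (as pitch-class sets) and the Cube Dance relation.
major = [frozenset({n, (n + 4) % 12, (n + 7) % 12}) for n in range(12)]
minor = [frozenset({n, (n + 3) % 12, (n + 7) % 12}) for n in range(12)]
augmented = [frozenset({n, (n + 4) % 12, (n + 8) % 12}) for n in range(4)]
chords = major + minor + augmented

def related(a, b):
    # share two tones, and the remaining tones differ by a halftone
    if len(a & b) != 2:
        return False
    (x,) = a - b
    (y,) = b - a
    return (x - y) % 12 in (1, 11)

edges = [(i + 1, j + 1) for i in range(28) for j in range(i + 1, 28)
         if related(chords[i], chords[j])]
print(len(edges))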
We can separate this symmetric binary relation into three sub-relations: the extension of the $P$ and $L$-operations on major and minor chords to the augmented ones (these are transformations), and the remaining relation $U$ which connects the major and minor chords to the augmented chords (and which is not a transformation).
Binary relations on the same set can be composed, so we get a monoid $\mathbf{M}$ generated by the three relations $P,L$ and $U$. The action of $\mathbf{M}$ on the $28$ chords no longer gives us an ordinary presheaf (because $U$ is not a transformation), but a relational presheaf as in the paper On the use of relational presheaves in transformational music theory by Alexandre Popoff.
That is, the action defines a contravariant functor $\mathbf{M} \rightarrow \mathbf{Rel}$ where $\mathbf{Rel}$ is the category (actually a $2$-category) of sets, but with binary relations as morphisms (that is, $Hom(X,Y)$ is all subsets of $X \times Y$), and the natural notion of composition of such relations. The $2$-morphism between relations is that of inclusion.
To compute with monoids generated by binary relations in GAP one needs to download, compile and load the package semigroups, and to represent the binary relations as partitioned binary relations as in the paper by Martin and Mazorchuk.
This is a bit more complicated than working with ordinary transformations:
P:=PBR([[-13],[-14],[-15],[-16],[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-9],[-10],[-11],[-12],[-25],[-26],[-27],[-28]],[[13],[14],[15],[16],[17],[18],[19],[20],[21],[22],[23],[24],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[25],[26],[27],[28]]);
L:=PBR([[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-13],[-14],[-15],[-16],[-9],[-10],[-11],[-12],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-25],[-26],[-27],[-28]],[[17],[18],[19],[20],[21],[22],[23],[24],[13],[14],[15],[16],[9],[10],[11],[12],[1],[2],[3],[4],[5],[6],[7],[8],[25],[26],[27],[28]]);
U:=PBR([[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-17,-21,-13,-4,-8,-12],[-5,-1,-9,-18,-14,-22],[-2,-6,-10,-15,-23,-19],[-24,-16,-20,-11,-3,-7]],[[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[25],[25],[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[17,21,13,4,8,12],[5,1,9,18,14,22],[2,6,10,15,23,19],[24,16,20,11,3,7]]);
But then, GAP quickly tells us that $\mathbf{M}$ is a monoid consisting of $40$ elements.
gap> M:=Semigroup([P,L,U]);
gap> Size(M);
40
The Semigroups-package can also compute Green's relations and tells us that there are seven such $R$-classes, four consisting of $6$ elements, two of four, and one of eight elements. These are also visible in the Cayley graph, exactly as last time.
Or, if you prefer the cleaner picture of the Cayley graph from the paper Relational poly-Klumpenhouwer networks for transformational and voice-leading analysis by Popoff, Andreatta and Ehresmann.
This then allows us to compute the Heyting algebra of the subobject classifier, and all the Grothendieck topologies, at least for the ordinary presheaf topos of $\mathbf{M}$-sets, not for the relational presheaves we need here.
We can consider the same binary relation on the larger set of triads when we add the suspended triads. These are the ordered triples in $\mathbb{Z}_{12}$ of the form $(n,n+5,n+7)$, as in the rightmost triangle below.
There are twelve suspended chords (numbered from $29$ to $40$), so we now have a binary relation $T$ on a set of $40$ triads.
The relation $T$ is too coarse, and the art is to subdivide $T$ into disjoint sub-relations which are musically significant, between major and minor triads, between major/minor and augmented triads, and so on.
For each such partition we can then consider the monoids generated by these sub-relations.
In his paper, Popoff suggests relevant sub-relations $P,L,T_U,T_V$ and $T_U \cup T_V$ of $T$ which in our numbering of the $40$ chords can be represented by these PBRs (assuming I made no mistakes… ADDED March 24th: I did make a mistake in the definition of L, see the comment by Alexandre Popoff; below is the correct L):
P:=PBR([[-13],[-14],[-15],[-16],[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-9],[-10],[-11],[-12],[-25],[-26],[-27],[-28],[-36],[-37],[-38],[-39],[-40],[-29],[-30],[-31],[-32],[-33],[-34],[-35]],[[13],[14],[15],[16],[17],[18],[19],[20],[21],[22],[23],[24],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[25],[26],[27],[28],[34],[35],[36],[37],[38],[39],[40],[29],[30],[31],[32],[33]]);
L:=PBR([[-17],[-18],[-19],[-20],[-21],[-22],[-23],[-24],[-13],[-14],[-15],[-16],[-9],[ -10],[-11],[-12],[-1],[-2],[-3],[-4],[-5],[-6],[-7],[-8],[-25],[-26],[-27],[-28],[-29], [-30],[-31],[-32],[-33],[-34],[-35],[-36],[-37],[-38],[-39],[-40]],[[17], [18], [19], [ 20],[21],[22],[23],[24],[13],[14],[15],[16],[9],[10],[11],[12],[1],[2],[3],[4],[5], [6], [7],[8],[25],[26],[27],[28],[29],[30],[31],[32],[33],[34],[35],[36],[37],[38],[39],[40] ]);
TU:=PBR([[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-25],[-26],[-27],[-28],[-4,-8,-12,-13,-17,-21],[-1,-5,-9,-14,-18,-22],[-2,-6,-10,-15,-19,-23],[-3,-7,-11,-16,-20,-24],[],[],[],[],[],[],[],[],[],[],[],[]],[[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[25],[25],[26],[27],[28],[25],[26],[27],[28],[25],[26],[27],[28],[4,8,12,13,17,21],[1,5,9,14,18,22],[2,6,10,15,19,23],[3,7,11,16,20,24],[],[],[],[],[],[],[],[],[],[],[],[]]);
TV:=PBR([[-29],[-30],[-31],[-32],[-33],[-34],[-35],[-36],[-37],[-38],[-39],[-40],[-36],[-37],[-38],[-39],[-40],[-29],[-30],[-31],[-32],[-33],[-34],[-35],[],[],[],[],[-1,-18],[-2,-19],[-3,-20],[-4,-21],[-5,-22],[-6,-23],[-7,-24],[-8,-13],[-9,-14],[-10,-15],[-11,-16],[-12,-17]],[[29],[30],[31],[32],[33],[34],[35],[36],[37],[38],[39],[40],[36],[37],[38],[39],[40],[29],[30],[31],[32],[33],[34],[35],[],[],[],[],[1,18],[2,19],[3,20],[4,21],[5,22],[6,23],[7,24],[8,13],[9,14],[10,15],[11,16],[12,17]]);
TUV:=PBR([[-26,-29],[-27,-30],[-28,-31],[-25,-32],[-26,-33],[-27,-34],[-28,-35],[-25,-36],[-26,-37],[-27,-38],[-28,-39],[-25,-40],[-25,-36],[-26,-37],[-27,-38],[-28,-39],[-25,-40],[-26,-29],[-27,-30],[-28,-31],[-25,-32],[-26,-33],[-27,-34],[-28,-35],[-4,-8,-12,-13,-17,-21],[-1,-5,-9,-14,-18,-22],[-2,-6,-10,-15,-19,-23],[-3,-7,-11,-16,-20,-24],[-1,-18],[-2,-19],[-3,-20],[-4,-21],[-5,-22],[-6,-23],[-7,-24],[-8,-13],[-9,-14],[-10,-15],[-11,-16],[-12,-17]],[[26,29],[27,30],[28,31],[25,32],[26,33],[27,34],[28,35],[25,36],[26,37],[27,38],[28,39],[25,40],[25,36],[26,37],[27,38],[28,39],[25,40],[26,29],[27,30],[28,31],[25,32],[26,33],[27,34],[28,35],[4,8,12,13,17,21],[1,5,9,14,18,22],[2,6,10,15,19,23],[3,7,11,16,20,24],[1,18],[2,19],[3,20],[4,21],[5,22],[6,23],[7,24],[8,13],[9,14],[10,15],[11,16],[12,17]]);
The resulting monoids are huge:
gap> G:=Semigroup([P,L,TU,TV]);
gap> H:=Semigroup([P,L,TUV]);
gap> Size(G);
473293
gap> Size(H);
994624
In Popoff's paper these monoids have sizes respectively $473,293$ and $994,624$. Strangely, with my first (faulty) L the offset was in both cases $144=12^2$. (Added March 24th: with the correct L I get the same sizes as in Popoff's paper.)
Perhaps we should try to transform such relational presheaves to ordinary presheaves.
One approach is to use the Grothendieck construction and associate to a set with such a relational monoid action a directed graph, coloured by the elements of the monoid. That is, an object in the presheaf topos of the category
\[ \xymatrix{C & E \ar[l]^c \ar@/^2ex/[r]^s \ar@/_2ex/[r]_t & V} \]
and then we should consider the slice topos over the one-vertex bouquet graph with one loop for each element in the monoid.
If you want to have more details on the musical side of things, for example if you want to know what the opening twelve chords of "Take a Bow" by Muse have to do with the Cube Dance graph, here are some more papers:
A categorical generalization of Klumpenhouwer networks, A. Popoff, M. Andreatta and A. Ehresmann.
From K-nets to PK-nets: a categorical approach, A. Popoff, M. Andreatta and A. Ehresmann.
From a Categorical Point of View: K-Nets as Limit Denotators, G. Mazzola and M. Andreatta.
From Mamuth to Elephant
Here, MaMuTh stands for Mathematical Music Theory which analyses the pitch, timing, and structure of works of music.
The Elephant is the nickname for the 'bible' of topos theory, Sketches of an Elephant: A Topos Theory Compendium, a two (three?) volume book, written by Peter Johnstone.
How can musical illiterates such as myself get as quickly as possible from MaMuTh to the Elephant?
What Mamuth-ers call a pitch class (sounds that are a whole number of octaves apart), is for us a residue modulo $12$, as an octave is usually divided into twelve (half)tones.
We'll just denote them by numbers from $0$ to $11$, or view them as the vertices of a regular $12$-gon, and forget the funny names given to them, as there are several such encodings, and we don't know a $G$ from a $D\#$.
Our regular $12$-gon has exactly $24$ symmetries. Twelve rotations, which they call transpositions, given by the affine transformations
\[ T_k~:~x \mapsto x+k~\text{mod}~12 \]
and twelve reflexions, which they call involutions, given by
\[ I_k~:~x \mapsto -x+k~\text{mod}~12 \]
What for us is the dihedral group $D_{12}$ (all symmetries of the $12$-gon), is for them the $T/I$-group (for transpositions/involutions).
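For the computationally minded, these 24 maps are a two-liner; a small sketch (Python; the function names are mine) verifying, for instance, that two reflexions compose to a rotation:

# Sketch: the T/I-group as affine maps on Z/12.
def T(k): return lambda x: (x + k) % 12   # transpositions (rotations)
def I(k): return lambda x: (-x + k) % 12  # involutions (reflexions)

# two reflexions compose to a rotation: I_k o I_l = T_{k-l}
k, l = 5, 2
assert all(I(k)(I(l)(x)) == T(k - l)(x) for x in range(12))

# the maps are pairwise distinct, so the group has order 24
maps = {tuple(T(k)(x) for x in range(12)) for k in range(12)} \
     | {tuple(I(k)(x) for x in range(12)) for k in range(12)}
print(len(maps))  # 24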
Let's move from individual notes (or pitch classes) to chords (or triads), that is, three notes played together.
Not all triples of notes sound nice when played together, that's why the most commonly played chords are among the major and minor triads.
A major triad is an ordered triple of elements from $\mathbb{Z}_{12}$ of the form
\[ (n,n+4~\text{mod}~12,n+7~\text{mod}~12) \]
and a minor triad is an ordered triple of the form
\[ (n,n+3~\text{mod}~12,n+7~\text{mod}~12) \]
where the first entry $n$ is called the root of the triad (or chord) and its funny name is then also the name of that chord.
For us, it is best to view a triad as an inscribed triangle in our regular $12$-gon. The triangles of major and minor triads have edges of different lengths, a small one, a middle, and a large one.
Starting from the root, and moving clockwise, we encounter in a major chord-triangle first the middle edge, then the small edge, and finally the large edge. For a minor chord-triangle, we have first the small edge, then the middle one, and finally the large edge.
On the left, two major triads, one with root $0$, the other with root $6$. On the right, two minor triads, also with roots $0$ and $6$.
(Btw. if you are interested in the full musical story, I strongly recommend the alpof blog by Alexandre Popoff, from which the above picture is taken.)
Clearly, there are $12$ major triads (one for each root), and $12$ minor triads.
From the shape of the triad-triangles it is also clear that rotations (transpositions) send major triads to major triads (and minors to minors), and that reflexions (involutions) interchange major with minor triads.
That is, the dihedral group $D_{12}$ (or if you prefer the $T/I$-group) acts on the set of $24$ major and minor triads, and this action is transitive (an element stabilising a triad-triangle must preserve its type (so is a rotation) and its root (so must be the identity)).
Can we hear the action of the very special group element $T_6$ (the unique non-trivial central element of $D_{12}$) on the chords?
This action is not only the transposition by three full tones, but also a point-reflexion with respect to the center of the $12$-gon (see two examples in the picture above). This point reflexion can be compositionally meaningful to refer to two very different upside-down worlds.
In It's $T_6$-day, Alexandre Popoff gives several examples. Here's one of them, the Ark theme in Indiana Jones – Raiders of the Lost Ark.
"The $T_6$ transformation is heard throughout the map room scene (in particular at 2:47 in the video): that the ark is a dreadful object from a very different world is well rendered by the $T_6$ transposition, with its inherent tritone and point reflection."
Let's move on in the direction of the Elephant.
We saw that the only affine map of the form $x \mapsto \pm x + k$ fixing say the major $0$-triad $(0,4,7)$ is the identity map.
But, we can ask for the collection of all affine maps $x \mapsto a x + b$ fixing this major $0$-triad set-wise, that is, such that
\[ \{ b, 4a+b~\text{mod}~12, 7a+b~\text{mod}~12 \} \subseteq \{ 0,4,7 \} \]
A quick case-by-case analysis shows that there are just eight such maps: the identity and the constant maps
\[ x \mapsto x,~x \mapsto 0,~x \mapsto 4, ~x \mapsto 7 \]
and the four maps
\[ \underbrace{x \mapsto 3x+7}_a,~\underbrace{x \mapsto 8x+4}_b,~x \mapsto 9x+4,~x \mapsto 4x \]
Compositions of such maps again preserve the set $\{ 0,4,7 \}$ so they form a monoid, and a quick inspection with GAP learns that $a$ and $b$ generate this monoid.
gap> a:=Transformation([10,1,4,7,10,1,4,7,10,1,4,7]);;
gap> b:=Transformation([12,8,4,12,8,4,12,8,4,12,8,4]);;
gap> gens:=[a,b];;
gap> T:=Monoid(gens);
gap> Size(T);
8
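Independently of GAP, the case-by-case analysis can be brute-forced; a quick sketch (Python):

# Sketch: all affine maps x -> a*x + b on Z/12 preserving {0,4,7} set-wise.
triad = {0, 4, 7}
stabiliser = [(a, b) for a in range(12) for b in range(12)
              if {(a * x + b) % 12 for x in triad} <= triad]
print(len(stabiliser))  # 8: the identity (1,0), the constants (0,0),(0,4),(0,7), and (3,7),(8,4),(9,4),(4,0)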
The monoid $T$ is the triadic monoid of Thomas Noll's paper The topos of triads.
The monoid $T$ can be seen as a one-object category (with endomorphisms the elements of $T$). The corresponding presheaf topos is then the category of all sets equipped with a right $T$-action.
Actually, Noll considers just one such presheaf (and its sub-presheaves) namely $\mathcal{F}=\mathbb{Z}_{12}$ with the action of $T$ by affine maps described before.
He is interested in the sheafifications of these presheaves with respect to Grothendieck topologies, so we have to describe those.
For any monoid category, the subobject classifier $\Omega$ is the set of all right ideals in the monoid.
Using the GAP sgpviz package we can draw its Cayley graph (red coloured vertices are idempotents in the monoid, the blue vertex is the identity map).
gap> DrawCayleyGraph(T);
The elements of $T$ (vertices) which can be connected by oriented paths (in both ways) in the Cayley graph, such as here $\{ 2,4 \}$, $\{ 3,7 \}$ and $\{ 5,6,8 \}$, will generate the same right ideal in $T$, so distinct right ideals are determined by unidirectional arrows, such as from $1$ to $2$ and $3$ or from $\{ 2,4 \}$ to $5$, or from $\{ 3,7 \}$ to $6$.
This gives us that $\Omega$ consists of the following six elements:
$0 = \emptyset$
$C = \{ 5,6,8 \} = a.T \wedge b.T$
$L = \{ 2,4,5,6,8 \}=a.T$
$R = \{ 3,7,5,6,8 \}=b.T$
$P = \{ 2,3,4,5,6,7,8 \}=a.T \vee b.T$
$1 = T$
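This count of six is easy to verify by machine. Below is a minimal sketch (Python rather than GAP, using GAP's convention $(f \ast g)(x) = g(f(x))$; it relies on the fact that every right ideal is a union of principal right ideals $s.T$):

# Sketch: the right ideals of the triadic monoid T, generated by a = 3x+7 and b = 8x+4.
n = 12
def affine(a, b): return tuple((a * x + b) % n for x in range(n))
gens = [affine(3, 7), affine(8, 4)]
ident = tuple(range(n))

def mul(f, g):  # (f*g)(x) = g(f(x))
    return tuple(g[f[x]] for x in range(n))

T, frontier = {ident}, {ident}
while frontier:  # generate the monoid from a and b
    frontier = {mul(f, g) for f in frontier for g in gens} - T
    T |= frontier
print(len(T))  # 8

principal = {frozenset(mul(s, t) for t in T) for s in T}
ideals = {frozenset()}  # close the principal right ideals under unions
changed = True
while changed:
    changed = False
    for p in principal:
        for i in list(ideals):
            if i | p not in ideals:
                ideals.add(i | p)
                changed = True
print(len(ideals))  # 6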
As a subobject classifier $\Omega$ is itself a presheaf, so what is the action of the triad monoid $T$ on it? For all $A \in \Omega$, and $s \in T$ the action is given by $A.s = \{ t \in T | s.t \in A \}$ and it can be read off from the Cayley-graph.
$\Omega$ is a Heyting algebra of which the inclusions, and logical operations can be summarised in the picture below, using the Hexboards and Heytings-post.
In this case, Grothendieck topologies coincide with Lawvere-Tierney topologies, which come from closure operators $j~:~\Omega \rightarrow \Omega$ which are order-increasing, idempotent, and compatible with the $T$-action and with the $\wedge$, that is,
if $A \leq B$, then $j(A) \leq j(B)$
$j(j(A)) = j(A)$
$j(A).t=j(A.t)$
$j(A \wedge B) = j(A) \wedge j(B)$
Colouring all cells with the same $j$-value alike, and remaining cells $A$ with $j(A)=A$ coloured yellow, we have six such closure operations $j$, that is, Grothendieck topologies.
The triadic monoid $T$ acts via affine transformations on the set of pitch classes $\mathbb{Z}_{12}$ and we've defined it such that it preserves the notes $\{ 0,4,7 \}$ of the major $(0,4,7)$-chord, that is, $\{ 0,4,7 \}$ is a subobject of $\mathbb{Z}_{12}$ in the topos of $T$-sets.
The point of the subobject classifier $\Omega$ is that morphisms to it classify subobjects, so there must be a $T$-equivariant map $\chi$ making the diagram commute (vertical arrows are the natural inclusions)
\xymatrix{\{ 0,4,7 \} \ar[r] \ar[d] & 1 \ar[d] \\ \mathbb{Z}_{12} \ar[r]^{\chi} & \Omega} \]
What does the morphism $\chi$ do on the other pitch classes? Well, it sends an element $k \in \mathbb{Z}_{12} = \{ 1,2,\dots,12=0 \}$ to
$1$ iff $k \in \{ 0,4,7 \}$
$P$ iff $a(k)$ and $b(k)$ are in $\{ 0,4,7 \}$
$L$ iff $a(k) \in \{ 0,4,7 \}$ but $b(k)$ is not
$R$ iff $b(k) \in \{ 0,4,7 \}$ but $a(k)$ is not
$C$ iff neither $a(k)$ nor $b(k)$ is in $\{ 0,4,7 \}$
Remember that $a$ and $b$ are the transformations (images of $(1,2,\dots,12)$)
a:=Transformation([10,1,4,7,10,1,4,7,10,1,4,7]);;
b:=Transformation([12,8,4,12,8,4,12,8,4,12,8,4]);;
so we see that
$0,4,7$ are mapped to $1$
$3$ is mapped to $P$
$8,11$ are mapped to $L$
$1,6,9,10$ are mapped to $R$
$2,5$ are mapped to $C$
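A short script confirms this list (Python; the maps $a$ and $b$ as above):

# Sketch: the classifying map chi : Z/12 -> Omega determined by {0,4,7}.
a = lambda x: (3 * x + 7) % 12
b = lambda x: (8 * x + 4) % 12
triad = {0, 4, 7}

def chi(k):
    if k in triad:
        return '1'
    ina, inb = a(k) in triad, b(k) in triad
    return 'P' if ina and inb else 'L' if ina else 'R' if inb else 'C'

print({k: chi(k) for k in range(12)})
# {0:'1', 1:'R', 2:'C', 3:'P', 4:'1', 5:'C', 6:'R', 7:'1', 8:'L', 9:'R', 10:'R', 11:'L'}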
Finally, we can compute the sheafification of the sub-presheaf $\{ 0,4,7 \}$ of $\mathbb{Z}_{12}$ with respect to a Grothendieck topology $j$: it consists of the set of those $k \in \mathbb{Z}_{12}$ such that $j(\chi(k)) = 1$.
The musically interesting Grothendieck topologies are $j_P, j_L$ and $j_R$ with corresponding sheaves:
For $j_P$ we get the sheaf $\{ 0,3,4,7 \}$ which Mamuth-ers call a Major-Minor Mixture as these are the notes of both the major and minor $0$-triads
For $j_L$ we get $\{ 0,3,4,7,8,11 \}$ which is an example of a Hexatonic scale (six notes), here they are the notes of the major and minor $0,~4$ and $8$-triads
For $j_R$ we get $\{ 0,1,3,4,6,7,9,10 \}$ which is an example of an Octatonic scale (eight notes), here they are the notes of the major and minor $0,~3,~6$ and $9$-triads
We could have played the same game starting with the three notes of any other major triad.
Those in the know will have noticed that so far I've avoided another incarnation of the dihedral $D_{12}$ group in music, namely the $PLR$-group, which explains the notation for the elements of the subobject classifier $\Omega$, but this post is already way too long.
Hexboards and Heytings
Published February 26, 2022 by lievenlb
A couple of days ago, Peter Rowlett posted on The Aperiodical: Introducing hexboard – a LaTeX package for drawing games of Hex.
Hex is a strategic game with two players (Red and Blue) taking turns placing a stone of their color onto any empty space. A player wins when they successfully connect their sides together through a chain of adjacent stones.
Here's a short game on a $5 \times 5$ board (normal play uses $11\times 11$ boards), won by Blue, drawn with the LaTeX-package hexboard.
As much as I like mathematical games, I want to use the versatility of the hexboard-package for something entirely different: drawing finite Heyting algebras in which it is easy to visualise the logical operations.
Every full hexboard is a poset with minimal cell $0$ and maximal cell $1$ if cell-values increase if we move horizontally to the right or diagonally to the upper-right. With respect to this order, $p \vee q$ is the smallest cell bigger than both $p$ and $q$, and $p \wedge q$ is the largest cell smaller than $p$ and $q$.
The implication $p \Rightarrow q$ is the largest cell $r$ such that $r \wedge p \leq q$, and the negation $\neg p$ stands for $p \Rightarrow 0$. With these operations, the full hexboard becomes a Heyting algebra.
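The same operations can be computed on the downsets of any finite poset, which is essentially what a hexboard region is; here is a minimal sketch (Python; the toy poset below is my own example, not a hexboard):

# Sketch: Heyting operations on the downsets of a finite poset.
elems = range(6)
covers = {(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)}
le = {(x, x) for x in elems} | set(covers)
for _ in elems:  # reflexive-transitive closure
    le |= {(x, z) for (x, y1) in le for (y2, z) in le if y1 == y2}
down = {x: frozenset(y for y in elems if (y, x) in le) for x in elems}

def meet(p, q): return p & q  # intersection of downsets
def join(p, q): return p | q  # union of downsets
def implies(p, q):            # largest downset r with r meet p contained in q
    return frozenset(x for x in elems if down[x] & p <= q)
def neg(p): return implies(p, frozenset())

p, q = down[1], down[2]
print(sorted(implies(p, q)), sorted(neg(p)))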
Now the fun part. Every filled area of the hexboard, bordered above and below by a string of strictly increasing cells from $0$ to $1$ is also a Heyting algebra, with the induced ordering, and with the logical operations defined similarly.
Note that this need not be a sub-Heyting algebra, as the operations may differ. Here, we have a different value for $p \Rightarrow q$, and $\neg p$ is now $0$.
If you're in for an innocent "Where is Wally?"-type puzzle: $W = (\neg \neg p \Rightarrow p)$.
Click on the image to get the solution.
The downsets in these posets can be viewed as the open sets of a finite topology, so these Heyting algebra structures come from the subobject classifier of a topos.
There are more interesting toposes with subobject classifier determined by such hex-Heyting algebras.
For example, the Topos of Triads of Thomas Noll in music theory has as its subobject classifier the hex-Heyting algebra (with cell-values as in the paper):
Note to self: why not write a couple of posts on this topos?
Another example: the category of all directed graphs is the presheaf topos of the two object category ($V$ for vertices, and $E$ for edges) with (apart from the identities) just two morphisms $s,t : V \rightarrow E$ (for start- and end-vertex of a directed edge).
The subobject classifier $\Omega$ of this topos is determined by the two Heyting algebras $\Omega(E)$ and $\Omega(V)$ below.
These 'hex-Heyting algebras' are exactly what Eduardo Ochs calls 'planar Heyting algebras'.
Eduardo has a very informative research page, containing slides and handouts of talks in which he tries to explain topos theory to "children" (using these planar Heyting algebras) including:
Sheaves for children
Planar Heyting algebras for children
Logic for children
Grothendieck topologies for children
Perhaps now is a good time to revive my old sga4hipsters-project.
End of chapter exercises
Exercise 2.10
Textbook Exercise 2.7
[SC 2003/11] A projectile is fired vertically upwards from the ground. At the highest point of its motion, the projectile explodes and separates into two pieces of equal mass. If one of the pieces is projected vertically upwards after the explosion, the second piece will...
drop to the ground at zero initial speed.
be projected downwards at the same initial speed as the first piece.
be projected upwards at the same initial speed as the first piece.
be projected downwards at twice the initial speed of the first piece.
At the highest point the projectile is momentarily at rest, so the total momentum at the explosion is zero; the two equal-mass pieces must therefore move off with equal and opposite velocities, and the second piece is projected downwards at the same initial speed as the first piece.
[IEB 2004/11 HG1] A ball hits a wall horizontally with a speed of \(\text{15}\) \(\text{m·s$^{-1}$}\). It rebounds horizontally with a speed of \(\text{8}\) \(\text{m·s$^{-1}$}\). Which of the following statements about the system of the ball and the wall is true?
The total linear momentum of the system is not conserved during this collision.
The law of conservation of energy does not apply to this system.
The change in momentum of the wall is equal to the change in momentum of the ball.
Energy is transferred from the ball to the wall.
[IEB 2001/11 HG1] A block of mass M collides with a stationary block of mass 2M. The two blocks move off together with a velocity of \(\vec{v}\). What is the velocity of the block of mass M immediately before it collides with the block of mass 2M?
\(\vec{v}\)
2\(\vec{v}\)
3\(\vec{v}\)
\begin{align*} M\vec{v}_{1i} + 2M\vec{v}_{2i} & = (M+2M)\vec{v} \\ M\vec{v}_{1i} & = 3M\vec{v} \\ \vec{v}_{1i} & = 3\vec{v} \end{align*}
[IEB 2003/11 HG1] A cricket ball and a tennis ball move horizontally towards you with the same momentum. A cricket ball has greater mass than a tennis ball. You apply the same force in stopping each ball.
How does the time taken to stop each ball compare?
It will take longer to stop the cricket ball.
It will take longer to stop the tennis ball.
It will take the same time to stop each of the balls.
One cannot say how long without knowing the kind of collision the ball has when stopping.
Since their momenta are the same, and the stopping force applied to them is the same, it will take the same time to stop each of the balls.
[IEB 2004/11 HG1] Two identical billiard balls collide head-on with each other. The first ball hits the second ball with a speed of V, and the second ball hits the first ball with a speed of 2V. After the collision, the first ball moves off in the opposite direction with a speed of 2V. Which expression correctly gives the speed of the second ball after the collision?
Taking the first ball's initial direction as positive, conservation of momentum gives \(mV - 2mV = -2mV + mv_{2}\), so \(v_{2} = V\).
[SC 2002/11 HG1] Which one of the following physical quantities is the same as the rate of change of momentum?
resultant force
[IEB 2005/11 HG] Cart X moves along a smooth track with momentum p. A resultant force F applied to the cart stops it in time t. Another cart Y has only half the mass of X, but it has the same momentum p.
In what time will cart Y be brought to rest when the same resultant force F acts on it?
\(\frac{1}{2}t\)
\(t\)
\(2t\)
Since the same resultant force \(F\) must remove the same momentum \(p\), and \(F\Delta t = \Delta p = p\), cart Y is brought to rest in the same time \(t\).
[SC 2002/03 HG1] A ball with mass m strikes a wall perpendicularly with a speed, v. If it rebounds in the opposite direction with the same speed, v, the magnitude of the change in momentum will be ...
\(2mv\)
\(mv\)
\(\frac{1}{2}mv\)
The change in momentum is \(\Delta p = mv - (-mv) = 2mv\).
Show that impulse and momentum have the same units.
The units of momentum are \(\text{kg⋅m⋅s$^{-1}$}\):
Impulse is defined as the product of force and the time for which the force acts, so its units are \(\text{N⋅s}\). Since \(\text{1}\text{ N} = \text{1}\text{ kg⋅m⋅s$^{-2}$}\), the units for impulse are: \(\text{kg⋅m⋅s$^{-1}$}\)
This is the same as the units for momentum.
A golf club exerts an average force of \(\text{3}\) \(\text{kN}\) on a ball of mass \(\text{0,06}\) \(\text{kg}\). If the golf club is in contact with the golf ball for \(\text{5} \times \text{10}^{-\text{4}}\) \(\text{seconds}\), calculate
the change in the momentum of the golf ball.
\begin{align*} \Delta p & = F_{\text{net}}\Delta t \\ & = (\text{3} \times \text{10}^{\text{3}})(\text{5} \times \text{10}^{-\text{4}}) \\ & = \text{1,5}\text{ kg⋅m⋅s$^{-1}$} \end{align*}
the velocity of the golf ball as it leaves the club.
\begin{align*} \Delta p & = mv \\ \text{1,5} & = \text{0,06}v \\ v & = \text{25}\text{ m⋅s$^{-1}$} \end{align*}
During a game of hockey, a player strikes a stationary ball of mass \(\text{150}\) \(\text{g}\). The graph below shows how the force on the ball varies with the time.
What does the area under this graph represent?
Calculate the speed at which the ball leaves the hockey stick.
\begin{align*} \text{Impulse } & = F \Delta t \\ & = \Delta p = m \Delta v \end{align*}
The impulse is the area under the graph:
\begin{align*} \text{Impulse } & = (\text{0,5})(150)(\text{0,5}) \\ & = \text{37,5}\text{ N⋅s} \end{align*}
The speed is:
\begin{align*} \Delta v & = \frac{\text{37,5}}{\text{0,150}} \\ & = \text{250}\text{ m⋅s$^{-1}$} \end{align*}
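For those who want to check the arithmetic, a quick script (Python; the graph values as read above):

# Quick numerical check of the impulse (triangle area) and the exit speed.
F_max = 150.0    # peak force in N, read off the graph
t_contact = 0.5  # contact time in s, read off the graph
m = 0.150        # mass of the ball in kg

impulse = 0.5 * F_max * t_contact  # N·s
v = impulse / m                    # m/s
print(impulse, v)  # 37.5 250.0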
The same player hits a practice ball of the same mass, but which is made from a softer material. The hit is such that the ball moves off with the same speed as before. How will the area, the height and the base of the triangle that forms the graph, compare with that of the original ball?
The area will remain the same because the final velocity and the mass are the same. The duration of the contact between the bat and the ball will be longer as the ball is soft, so the base will be wider. In order for the area to be the same, the height must be lower. Therefore, the player can hit the softer ball with less force to impart the same velocity on the ball.
The fronts of modern cars are deliberately designed in such a way that in case of a head-on collision, the front would crumple. Why is it desirable that the front of the car should crumple?
If the front crumples then the force of the collision is reduced. The energy of the collision would go into making the front of the car crumple and so the passengers in the car would feel less force.
[SC 2002/11 HG1] In a railway shunting yard, a locomotive of mass \(\text{4 000}\) \(\text{kg}\), travelling due east at a velocity of \(\text{1,5}\) \(\text{m·s$^{-1}$}\), collides with a stationary goods wagon of mass \(\text{3 000}\) \(\text{kg}\) in an attempt to couple with it. The coupling fails and instead the goods wagon moves due east with a velocity of \(\text{2,8}\) \(\text{m·s$^{-1}$}\).
Calculate the magnitude and direction of the velocity of the locomotive immediately after collision.
\begin{align*} m_{1}v_{i1} + m_{2}v_{i2} & = m_{1}v_{f1} + m_{2}v_{f2} \\ (\text{4 000})(\text{1,5}) + 0 & = (\text{4 000})v_{f1} + (\text{3 000})(\text{2,8}) \\ v_{f1} & = -\text{0,6}\text{ m⋅s$^{-1}$} \\ & = \text{0,6}\text{ m⋅s$^{-1}$} \text{ west} \end{align*}
Name and state in words the law you used to answer the previous question
The principle of conservation of linear momentum. The total linear momentum of an isolated system is constant.
[SC 2005/11 SG1] A combination of trolley A (fitted with a spring) of mass \(\text{1}\) \(\text{kg}\), and trolley B of mass \(\text{2}\) \(\text{kg}\), moves to the right at \(\text{3}\) \(\text{m·s$^{-1}$}\) along a frictionless, horizontal surface. The spring is kept compressed between the two trolleys.
While the combination of the two trolleys is moving at \(\text{3}\) \(\text{m·s$^{-1}$}\), the spring is released and when it has expanded completely, the \(\text{2}\) \(\text{kg}\) trolley is then moving to the right at \(\text{4,7}\) \(\text{m·s$^{-1}$}\) as shown below.
State, in words, the principle of conservation of linear momentum.
The total linear momentum of an isolated system is constant.
Calculate the magnitude and direction of the velocity of the \(\text{1}\) \(\text{kg}\) trolley immediately after the spring has expanded completely.
\begin{align*} (m_{1}+m_{2})\vec{v}_i & = m_{1}\vec{v}_{f1} + m_{2}\vec{v}_{f2} \\ (\text{2}+\text{1})(\text{3}) & = (\text{2})(\text{4,7}) + (\text{1})\vec{v}_{f2} \\ \vec{v}_{f2} & = -\text{0,4}\text{ m·s$^{-1}$} \end{align*}
\(\vec{v}_{f2} = \text{0,4}\text{ m·s$^{-1}$}\) to the left
[IEB 2002/11 HG1] A ball bounces back from the ground. Which of the following statements is true of this event?
The magnitude of the change in momentum of the ball is equal to the magnitude of the change in momentum of the Earth.
The magnitude of the impulse experienced by the ball is greater than the magnitude of the impulse experienced by the Earth.
The speed of the ball before the collision will always be equal to the speed of the ball after the collision.
Only the ball experiences a change in momentum during this event.
[SC 2002/11 SG] A boy is standing in a small stationary boat. He throws his schoolbag, mass \(\text{2}\) \(\text{kg}\), horizontally towards the jetty with a velocity of \(\text{5}\) \(\text{m·s$^{-1}$}\). The combined mass of the boy and the boat is \(\text{50}\) \(\text{kg}\).
Calculate the magnitude of the horizontal momentum of the bag immediately after the boy has thrown it.
\begin{align*} p & = mv \\ & = (2)(5) \\ & = \text{10}\text{ kg·m·s$^{-1}$} \end{align*}
Calculate the velocity (magnitude and direction) of the boat-and-boy immediately after the bag is thrown.
\begin{align*} 0 & = m_{1}\vec{v}_{1f} + m_2\vec{v}_{2f} \\ -\text{10}& = (\text{50})\vec{v}_{2f}\\ \vec{v}_{2f} & = \frac{-\text{10}}{\text{50}}\\ & = -\text{0,2}\text{ m·s$^{-1}$} \end{align*}
\(\text{0,2}\) \(\text{m·s$^{-1}$}\) in the opposite direction to the jetty
2017, 11: 313-339. doi: 10.3934/jmd.2017013
Computation of annular capacity by Hamiltonian Floer theory of non-contractible periodic trajectories
Morimichi Kawasaki 1 and Ryuma Orita 2
Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 790-784, Republic of Korea
Graduate School of Mathematical Sciences, University of Tokyo, Tokyo 153-0041, Japan
Received May 04, 2016; Revised February 14, 2017; Published April 2017
The first author [9] introduced a relative symplectic capacity $C$ for a symplectic manifold $(N,\omega_N)$ and its subset $X$ which measures the existence of non-contractible periodic trajectories of Hamiltonian isotopies on the product of $N$ with the annulus $A_R=(-R,R)\times\mathbb{R}/\mathbb{Z}$. In the present paper, we give an exact computation of the capacity $C$ of the $2n$-torus $\mathbb{T}^{2n}$ relative to a Lagrangian submanifold $\mathbb{T}^n$ which implies the existence of non-contractible Hamiltonian periodic trajectories on $A_R\times\mathbb{T}^{2n}$. Moreover, we give a lower bound on the number of such trajectories.
Keywords: Hamiltonian Floer theory, periodic trajectory of Hamiltonian isotopy, Biran–Polterovich–Salamon capacity.
Mathematics Subject Classification: Primary: 53D40, 37J10; Secondary: 37J45.
Citation: Morimichi Kawasaki, Ryuma Orita. Computation of annular capacity by Hamiltonian Floer theory of non-contractible periodic trajectories. Journal of Modern Dynamics, 2017, 11: 313-339. doi: 10.3934/jmd.2017013
P. Biran, L. Polterovich and D. Salamon, Propagation in Hamiltonian dynamics and relative symplectic homology, Duke Math. J., 119 (2003), 65-118. doi: 10.1215/S0012-7094-03-11913-4.
K. Cieliebak, Handle attaching in symplectic homology and the chord conjecture, J. Eur. Math. Soc. (JEMS), 4 (2002), 115-142. doi: 10.1007/s100970100036.
K. Cieliebak, A. Floer and H. Hofer, Symplectic homology. II. A general construction, Math. Zeit., 218 (1995), 103-122. doi: 10.1007/BF02571891.
A. Floer, Symplectic fixed points and holomorphic spheres, Comm. Math. Phys., 120 (1989), 575-611. doi: 10.1007/BF01260388.
A. Floer and H. Hofer, Symplectic homology. I. Open sets in $\mathbb{C}^n$, Math. Zeit., 215 (1994), 37-88. doi: 10.1007/BF02571699.
A. Floer, H. Hofer and D. Salamon, Transversality in elliptic Morse theory for the symplectic action, Duke Math. J., 80 (1995), 251-292. doi: 10.1215/S0012-7094-95-08010-7.
U. Frauenfelder and F. Schlenk, Hamiltonian dynamics on convex symplectic manifolds, Israel J. Math., 159 (2007), 1-56. doi: 10.1007/s11856-007-0037-3.
H. Ishiguro, Non-contractible orbits for Hamiltonian functions on Riemann surfaces, arXiv:1612.07062, (2016).
M. Kawasaki, Heavy subsets and non-contractible trajectories, arXiv:1606.01964, (2016).
C. Niche, Non-contractible periodic orbits of Hamiltonian flows on twisted cotangent bundles, Discrete Contin. Dyn. Syst., 14 (2006), 617-630. doi: 10.3934/dcds.2006.14.617.
M. Poźniak, Floer homology, Novikov rings and clean intersections, in Northern California Symplectic Geometry Seminar (eds. Y. Eliashberg, D. Fuchs, T. Ratiu, and A. Weinstein), Amer. Math. Soc. Transl. Ser. 2, 196, Amer. Math. Soc., Providence, 1999, 119-181. doi: 10.1090/trans2/196/08.
D. Salamon, Lectures on Floer homology, in Symplectic Geometry and Topology (Park City, Utah, 1997), IAS/Park City Math. Ser. 7, Amer. Math. Soc., Providence, 1999, 143-229.
D. Salamon and E. Zehnder, Morse theory for periodic solutions of Hamiltonian systems and the Maslov index, Comm. Pure Appl. Math., 45 (1992), 1303-1360. doi: 10.1002/cpa.3160451004.
M. Usher, The sharp energy-capacity inequality, Commun. Contemp. Math., 12 (2010), 457-473. doi: 10.1142/S0219199710003889.
C. Viterbo, Functors and computations in Floer homology with applications I, Geom. Funct. Anal., 9 (1999), 985-1033. doi: 10.1007/s000390050106.
J. Weber, Noncontractible periodic orbits in cotangent bundles and Floer homology, Duke Math. J., 133 (2006), 527-568. doi: 10.1215/S0012-7094-06-13334-3.
J. Xue, Existence of noncontractible periodic orbits of Hamiltonian system separating two Lagrangian tori on $T^{\ast}\mathbb{T}^{n}$ with application to non convex Hamiltonian systems, to appear in J. Symplectic Geom., arXiv:1408.5193, (2014).
Figure 1. Outline of the graphs of $f_s$ (for $s\geq 1$ and $s\leq -1$)
Figure 2. Outline of the graphs of $H_s$ (for $s\geq 1$ and $s\leq -1$) in the case $n=1$
Figure 3. Outline of the graph of $H_s$ (for $s\geq 1$) in the direction of $p_0$
Figure 4. Outline of the graph of $H_s$ (for $s\leq -1$) in the direction of $p_0$
Figure 5. Outline of the graph of $H_T\natural(\varepsilon \rho_TF_T)$ in the case $n=1$
Figure 6. Outline of the graph of $\widetilde{H}$ in the case $n=1$
Solving LPN Using Covering Codes
Qian Guo1,2,
Thomas Johansson1 &
Carl Löndahl1
Journal of Cryptology volume 33, pages 1–33 (2020)
We present a new algorithm for solving the LPN problem. The algorithm has a similar form to some previous methods, but includes a new key step that makes use of approximations of random words to a nearest codeword in a linear code. It outperforms previous methods for many parameter choices. In particular, we can now solve the \((512,\frac{1}{8})\) LPN instance with complexity less than \(2^{80}\) operations in expectation, indicating that cryptographic schemes like the HB variants and LPN-C should increase their parameter size for 80-bit security.
In recent years of modern cryptography, much effort has been devoted to finding efficient and secure low-cost cryptographic primitives targeting applications in very constrained hardware environments (such as RFID tags and low-power devices). Many proposals rely on the hardness assumption of Learning Parity with Noise (LPN), a fundamental problem in learning theory, which recently has also gained a lot of attention within the cryptographic society. The LPN problem is well studied, and it is intimately related to the problem of decoding random linear codes, which is one of the most important problems in coding theory. Being a supposedly hard problem, the LPN problem is a good candidate for post-quantum cryptography, where other classically hard problems such as factoring and the discrete log problem fall short. The inherent properties of LPN also make it ideal for lightweight cryptography.
The LPN problem can be informally stated as follows. We have an LPN oracle denoted \(\varPi _{ LPN }\) that returns pairs of the form \((\mathbf{g}, { \left\langle \mathbf {x},\mathbf {g}\right\rangle + e})\), where \(\mathbf{x}\) is an unknown but fixed binary vector, \(\mathbf{g}\) is a binary vector with the same length but sampled from a uniform distribution, \({e}\) is from a Bernoulli distribution, and \(\left\langle \mathbf {x},\mathbf {g}\right\rangle \) denotes the scalar product of vectors \(\mathbf {x}\) and \(\mathbf {g}\). The (search) LPN problem is to find the secret vector \(\mathbf x\) given a fixed number of samples (oracle queries) from \(\varPi _{ LPN }\).
The first time the LPN problem was employed in a cryptographic construction was in the Hopper-Blum (HB) identification protocol [21]. HB is a minimalistic protocol that is secure in a passive attack model. Aiming to secure the HB scheme also in an active attack model, Juels and Weis [22], and Katz and Shin [23] proposed a modified scheme. The modified scheme, which was given the name \(\hbox {HB}^+\), extends HB with one extra round. It was later shown by Gilbert et al. [15] that the \(\hbox {HB}^+\) protocol is vulnerable to active attacks, in particular man-in-the-middle attacks, where the adversary is allowed to intercept and attack an ongoing authentication session to learn the secret. Gilbert et al. [13] subsequently proposed a variant of the Hopper-Blum protocol called \(\hbox {HB}^\#\). Apart from repairing the protocol, the constructors of \(\hbox {HB}^\#\) introduced a more efficient key representation using a variant of LPN called Toeplitz-LPN.
Gilbert et al. [14] proposed a way to use LPN in encryption of messages, which resulted in the cryptosystem LPN-C. Kiltz et al. [24] and Dodis et al. [10] showed how to construct message authentication codes (MACs) using LPN. The existence of MACs allows one to construct identification schemes that are provably secure against active attacks. The most recent contribution to LPN-based constructions is a two-round identification protocol called Lapin, proposed by Heyse et al. [20], and an LPN-based encryption scheme called Helen, proposed by Duc and Vaudenay [11]. The Lapin protocol is based on an LPN variant called Ring-LPN, where the samples are elements of a polynomial ring.
The two major threats against LPN-based cryptographic constructions are generic algorithms that decode random linear codes (information-set decoding (ISD)) and variants of the BKW algorithm, originally proposed by Blum et al. [3]. Being the asymptotically most efficient approach, the BKW algorithm employs an iterated collision procedure on the queries. In each iteration, colliding entries are summed to produce a new entry with smaller dependency on the information bits but with an increased noise level. Once the dependency on sufficiently many information bits is removed, the remaining ones are exhaustively searched to find the secret. Although the collision procedure is the main reason for the efficiency of the BKW algorithm, it leads to a requirement of an immense amount of queries compared to ISD. Notably, for some cases, e.g., when the noise is very low, ISD yields the most efficient attack.
Levieil and Fouque [29] proposed to use the fast Walsh–Hadamard transform in the BKW algorithm when searching for the secret. In an unpublished paper, Kirchner [25] suggested to transform the problem into systematic form, where each information (key) bit then appears as an observed symbol, perturbed by noise. This requires the adversary to only exhaust the biased noise variables rather than the key bits. When the error rate is low, the noise variable search space is very small and this technique decreases the attack complexity. Building on the work by Kirchner [25], Bernstein and Lange [4] showed that the ring structure of Ring-LPN can be exploited in matrix inversion, further reducing the complexity of attacks on, for example, Lapin. None of the known algorithms manage to break the 80-bit security of Lapin. Nor do they break the parameters proposed in [29], which were suggested as design parameters of LPN-C [14] for 80-bit security.
In this paper, we propose a new algorithm for solving the LPN problem based on [4, 25]. We employ a new technique that we call subspace distinguishing, which exploits coding theory to decrease the dimension of the secret. The trade-off is a small increase in the sample noise. Our novel algorithm performs favorably in comparison to the state-of-the-art algorithms and affects the security of HB variants, Lapin and LPN-C. As an example, we attack the common \((512,\frac{1}{8})\)-instance of LPN and question its 80-bit security barrier. A comparison of the complexity of different algorithms is shown in Table 1.
Let us explain the main idea of the paper in an informal way. The BKW algorithm will in each step remove the influence of b secret bits by colliding subvectors, at the cost of increasing the noise. So we can model a single step as reducing an LPN problem of dimension n and bias \(\epsilon \) to an LPN problem of dimension \(n-b\) and bias \(\epsilon ^2\). The new main idea is that one can remove more secret bits if we instead collide subvectors (linear combinations of secret bits) that are close in Hamming distance, but not necessarily the same. This will leave a few secret bits in each expression, but since the secret bits are biased, they can be treated as an additional noise term. Such a step reduces an LPN problem of dimension n and bias \(\epsilon \) to an LPN problem of dimension \(n-B\), where B is much larger than b, and the new bias is a bit smaller than \(\epsilon ^2\). It is shown that LPN solvers that perform this new approach in the last step get an improved performance.
Table 1 Comparison of different algorithms for solving LPN with parameters \((512,\frac{1}{8})\)
Subsequent Work
After the submission of this paper, a number of papers have appeared that further refine and improve upon this work. We mention the work of [5, 6, 32], and [12].
To be specific, Bogos, Tramèr, and Vaudenay [5] presented a unified framework to study the existing LPN algorithms as well as a tight theoretical bound to analyze the data complexity using the Hoeffding bounds. Later, Zhang et al. [32] proposed a new method to analyze the bias introduced by the concatenation of several perfect codes, where the bias average rather than the bias conditioned on certain keys is employed. Bogos and Vaudenay [6] further clarified the underlying heuristic approximation and generalized the average-bias analysis. They considered concrete code constructions using concatenations of perfect and quasi-perfect codes. Note that, firstly, searching for large decodable linear codes with good covering properties can be treated as a pre-computation task; secondly, the analysis using the bias average can produce a lower complexity estimate, which has been verified in our experiments, where our bias estimation conditioned on key patterns matches the experimental data but is slightly conservative. In a recent paper [12], the idea of combining BKW and ISD was further investigated by Esser, Kübler and May.
The organization of the paper is as follows. In Sect. 2, we give some preliminaries and introduce the LPN problem in detail. Moreover, in Sect. 3 we give a short description of the BKW algorithm. We briefly describe the general idea of our new attack in Sect. 4 and more formally in Sect. 5. In Sect. 6, we analyze its complexity. The results when the algorithm is applied on various LPN-based cryptosystems are given in Sect. 7, which is followed by a section showing the experimental results. In Sect. 9, we describe some aspects of the covering-coding technique. Section 10 concludes the paper.
The LPN Problem
We now give a more thorough description of the LPN problem. Let \( \textsf {Ber} _\eta \) be the Bernoulli distribution and let \(X \sim \textsf {Ber} _\eta \) be a random variable with alphabet \(\mathcal {X} = \{0,1\}\). Then, \(\mathbf {Pr}\left[ X = 1\right] = \eta \) and \(\mathbf {Pr}\left[ X = 0\right] = 1 - \mathbf {Pr}\left[ X = 1\right] = 1-\eta \). The bias \(\epsilon \) of X is given from \(\mathbf {Pr}\left[ X = 0\right] = \frac{1}{2}\left( 1+\epsilon \right) \), i.e., \(\epsilon = 1-2\eta \). Let k be a security parameter, and let \(\mathbf {x}\) be a binary vector of length k. We define the Hamming weight of a vector \(\mathbf {v}\) as the number of nonzero elements, denoted by \(w_\text{ H }\left( \mathbf {v}\right) \), and let \(\mathcal {B}_{2}(n,w)\) denote the Hamming ball which contains all the elements in \(\mathbb {F}_{2}^{n}\) whose Hamming weight is no larger than w.
Definition (LPN oracle) An LPN oracle \(\varPi _{ LPN }\) for an unknown vector \(\mathbf {x} \in \{0,1\}^k\) with \(\eta \in (0,\frac{1}{2})\) returns pairs of the form
$$\begin{aligned} \left( \mathbf {g} \mathop \leftarrow \limits ^{\$}\{0,1\}^k, \left\langle \mathbf {x},\mathbf {g}\right\rangle + e\right) , \end{aligned}$$
where \( e \leftarrow \textsf {Ber} _\eta \). Here, \(\left\langle \mathbf {x},\mathbf {g}\right\rangle \) denotes the scalar product of vectors \(\mathbf {x} \) and \(\mathbf {g}\).
We also write \(\left\langle \mathbf {x},\mathbf {g}\right\rangle \) as \(\mathbf {x} \cdot \mathbf {g}^\mathrm{T}\), where \(\mathbf {g}^\mathrm{T}\) is the transpose of the row vector \(\mathbf {g}\). We receive a number n of noisy versions of scalar products of \(\mathbf {x}\) from the oracle \(\varPi _{ LPN }\), and our task is to recover \(\mathbf {x}\).
Definition (LPN) Given an LPN oracle \(\varPi _{ LPN }\), the \((k,\eta )\)-LPN problem consists of finding the vector \(\mathbf {x}\). An algorithm \(\mathcal {A}_{ LPN }(T,n,\delta )\) using time at most T with at most n oracle queries solves \((k,\eta )\)-LPN if
$$\begin{aligned} \mathbf {Pr}\left[ \mathcal {A}_{ LPN }(T,n,\delta ) = \mathbf {x} : \mathbf {x} \mathop \leftarrow \limits ^{\$}\{0,1\}^k\right] \ge \delta . \end{aligned}$$
Let \(\mathbf {y}\) be a vector of length n, and let \(y_i=\left\langle \mathbf {x},\mathbf {g}_i\right\rangle \). For known random vectors \(\mathbf {g}_1, \mathbf {g}_2, \ldots ,\mathbf {g}_n\), we can easily reconstruct an unknown \(\mathbf{x}\) from \(\mathbf {y}\) using linear algebra. In the LPN problem, however, we receive instead noisy versions of \(y_i, i=1,2,\ldots ,n\). Writing the noise in position i as \(e_i, i=1,2,\ldots ,n\), we obtain
$$\begin{aligned} z_i=y_i+e_i = \left\langle \mathbf {x},\mathbf {g}_i\right\rangle +e_i. \end{aligned}$$
In matrix form, the same relation is written as \(\mathbf {z} = \mathbf {x} \mathbf {G} +\mathbf {e},\) where
$$\begin{aligned} \mathbf {z}=\begin{pmatrix} z_1&z_2&\cdots&z_n \end{pmatrix},\quad {\mathbf {e}=\begin{pmatrix} e_1&e_2&\cdots&e_n \end{pmatrix},} \end{aligned}$$
and the matrix \(\mathbf {G}\) is formed as
$$\begin{aligned} \mathbf {G}=\begin{pmatrix} \mathbf {g}_1^\mathrm{T}&\mathbf {g}_2 ^\mathrm{T}&\cdots&\mathbf {g}_n^\mathrm{T}\end{pmatrix}. \end{aligned}$$
This shows that the LPN problem is simply a decoding problem, where \(\mathbf {G}\) is a random \(k\times n\) generator matrix, \(\mathbf {x}\) is the information vector, and \(\mathbf {z}\) is the received vector after transmission of a codeword on the binary symmetric channel with error probability \(\eta \).
Piling-Up Lemma
We recall the piling-up lemma, which is frequently used in analysis of the LPN problem.
Lemma 1
(Piling-up lemma) Let \(X_1,X_2,\ldots ,X_n\) be independent binary random variables where each \(\mathbf {Pr}\left[ X_i = 0\right] = \frac{1}{2}( 1 + \epsilon _i)\), for \(1 \le i \le n\). Then,
$$\begin{aligned} \mathbf {Pr}\left[ X_1 + X_2 + \cdots + X_n = 0\right] = \frac{1}{2}\left( 1+\prod _{i=1}^n\epsilon _i\right) . \end{aligned}$$
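As a concrete illustration, the following minimal Python sketch (our own, not part of the paper) checks the piling-up formula against exact enumeration, using the bias \(\epsilon = 1 - 2\eta = \frac{3}{4}\) of the \(\eta = \frac{1}{8}\) instances discussed later.

```python
# Numerical check of the piling-up lemma (Lemma 1): the XOR of independent
# biased bits has bias equal to the product of the individual biases.
import itertools

def xor_zero_probability(biases):
    """Exact Pr[X_1 + ... + X_n = 0] by enumerating all 2^n outcomes."""
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(biases)):
        p = 1.0
        for b, eps in zip(bits, biases):
            p *= 0.5 * (1 + eps) if b == 0 else 0.5 * (1 - eps)
        if sum(bits) % 2 == 0:
            total += p
    return total

biases = [0.75] * 4                  # eta = 1/8  =>  eps = 1 - 2*eta = 3/4
lhs = xor_zero_probability(biases)
rhs = 0.5 * (1 + 0.75 ** 4)          # prediction of the piling-up lemma
assert abs(lhs - rhs) < 1e-12
```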
Complexity Estimates
The computational complexity of a given algorithm can be given in many different ways. First, we may choose between giving asymptotic expressions or giving more explicit complexity estimates. For example, the BKW algorithm for solving LPN in dimension k is sub-exponential.
In this paper, we are primarily interested in explicit complexity estimates and we will thus try to estimate the number of operations required by an algorithm. We follow a long tradition of counting the number of "simple" bit operations. This includes reading a bit in memory, and the model places no restriction on memory size. Clearly, this model does not match an estimation of the number of clock cycles on some CPU. In general, we expect the number of clock cycles to be smaller, since some word-oriented instructions can perform many bit operations in a single instruction.
The BKW Algorithm
The BKW algorithm, as proposed by Blum et al. [3], is an algorithm that solves the LPN problem in sub-exponential time, requiring \(2^{\mathcal {O}\left( k /\log k\right) }\) queries and time. To achieve this, the algorithm uses an iterative sort-and-match procedure on the columns of the query matrix \(\mathbf {G}\), which iteratively reduces the dimension of \(\mathbf {G}\).
Reduction phase Initially, one searches for all combinations of two columns in \(\mathbf {G}\) that add to zero in the last b entries. Let
$$\begin{aligned} \mathcal {M} {\mathop {=}\limits ^{\mathrm{def}}}\{k-b+1,k-b+2,\ldots , k\} \end{aligned}$$
and define a filtering function \(\phi _{\mathcal {M}} : \mathbb {F}_{2}^{k} \rightarrow \mathbb {F}_{2}^{b}\). Assume that one finds two columns \({\mathbf {g}^\mathrm{T}_{i_0}}, \mathbf {g}^\mathrm{T}_{i_1}\) such that
$$\begin{aligned} \mathbf {g}_{i_0}+ \mathbf {g}_{i_1}=(\begin{matrix} *&*&\cdots&*&\end{matrix}\underbrace{\begin{matrix} 0&0&\cdots&0 \end{matrix}}_{b\ \text {symbols}}), \end{aligned}$$
where \(*\) means any value, i.e., they belong to the same partition (or equivalence class) and fulfill \(\phi _{\mathcal {M}}(\mathbf {g}_{i_0}) = \phi _{\mathcal {M}}(\mathbf {g}_{i_1})\). Then, a new vector
$$\begin{aligned} \mathbf {g}_1^{(1)}= \mathbf {g}_{i_0}+ \mathbf {g}_{i_1} \end{aligned}$$
is computed. Let \(y_1^{(1)} = \left\langle \mathbf {x},\mathbf {g}^{(1)}_{1}\right\rangle \). An observed symbol is also formed, corresponding to this new column by forming
$$\begin{aligned} z_1^{(1)}=z_{i_0}+z_{i_1} =y_1^{(1)}+e_1^{(1)} = \left\langle \mathbf {x},\mathbf {g}^{(1)}_{1}\right\rangle +e_1^{(1)}, \end{aligned}$$
where now \(e_1^{(1)}=e_{i_0}+e_{i_1}\). It can be verified that \(\mathbf {Pr}\left[ e_1^{(1)}=0\right] = \frac{1}{2}\cdot (1+\epsilon ^2)\). The algorithm proceeds by adding the same element, say \(\mathbf {g}_{i_0}\), to the other elements in the partition, forming
$$\begin{aligned} \mathbf {g}_2^{(1)}= \mathbf {g}_{i_0}+ \mathbf {g}_{i_2}, \end{aligned}$$
and so forth. The resulting columns are stored in a matrix \({\mathbf {G}}_1\),
$$\begin{aligned} \mathbf {G}_1= \begin{pmatrix} (\mathbf {g}_1^{(1)})^\mathrm{T}&(\mathbf {g}_2^{(1)})^\mathrm{T}&\ldots&(\mathbf {g}_{n - 2^b}^{(1)})^\mathrm{T}\end{pmatrix}. \end{aligned}$$
If n is the number of columns in \(\mathbf {G}\), then the number of columns in \(\mathbf {G}_1\) will be \(n - 2^b\). Note that the last b entries of every column in \(\mathbf {G}_1\) are all zero. In connection to this matrix, the vector of observed symbols is
$$\begin{aligned} \mathbf {z}_1= \begin{pmatrix} z_1^{(1)}&z_2^{(1)}&\cdots&z_{n-2^b}^{(1)} \end{pmatrix}, \end{aligned}$$
where \(\mathbf {Pr}\left[ z_i^{(1)}=y_i^{(1)}\right] =\frac{1}{2}\cdot (1+\epsilon ^2)\), for \(1 \le i \le n-2^b\). We now iterate the same procedure (with a new \(\phi \) function), picking one column and then adding it to another suitable column in \(\mathbf {G}_i\), giving a sum with an additional b entries being zero and forming the columns of \(\mathbf {G}_{i+1}\). Repeating the same procedure an additional \(t-1\) times will reduce the number of unknown variables to \(k-b \cdot t\) in the remaining problem. For each iteration, the noise level is squared. By the piling-up lemma (Lemma 1), we have that
$$\begin{aligned} \mathbf {Pr}\left[ \sum _{j=1}^{2^t} e_{i_j} = 0\right] = \frac{1}{2}\cdot \left( 1+\epsilon ^{2^t}\right) . \end{aligned}$$
Hence, the bias decreases quickly to low levels as t increases. Therefore, we want to keep t as small as possible.
Solving phase In the final step, the BKW algorithm looks for a column vector in \({\mathbf {G}}_{t}\) such that only the first bit of the vector is nonzero. If the algorithm finds such a vector, then that sample constitutes a very noisy observation of the first bit \(x_1\) of \(\mathbf {x}\). The algorithm stores the observation and repeats the reduction-phase procedure with new samples from the oracle, until sufficiently many observations of the secret bit \(x_1\) have been obtained. Then, it uses a majority decision to determine \(x_1\). The whole procedure is given in Algorithm 1.
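To make the reduction phase concrete, here is a simplified Python sketch of one sort-and-match step (function and variable names are ours; the full Algorithm 1, with oracle re-querying and the majority decision, is not reproduced). It merges columns that collide on their last b entries, which by the piling-up lemma squares the bias of the observed symbols.

```python
from collections import defaultdict

def reduce_step(columns, observed, b):
    """One BKW-style merge: columns are bit tuples, observed their noisy symbols."""
    partitions = defaultdict(list)
    for g, z in zip(columns, observed):
        partitions[g[-b:]].append((g, z))   # phi reads the last b entries

    new_cols, new_obs = [], []
    for members in partitions.values():
        g0, z0 = members[0]                 # representative of the partition
        for g, z in members[1:]:            # add it to all other members
            new_cols.append(tuple(x ^ y for x, y in zip(g0, g)))
            new_obs.append(z0 ^ z)
    return new_cols, new_obs

# Toy usage: after the step, every column ends in b zeros.
import random
random.seed(1)
cols = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(40)]
obs = [random.randint(0, 1) for _ in range(40)]
cols2, obs2 = reduce_step(cols, obs, b=2)
assert all(g[-2:] == (0, 0) for g in cols2)
```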
LF1 and LF2 Variants
The BKW algorithm is a powerful theoretical construction, and because the algorithm operates solely on independent samples, it is possible to provide rigorous analysis using probabilistic arguments without heuristic assumptions. However, the provability comes at quite a high expense—the algorithm discards a lot of samples that could be used in solving the problem. This was first pointed out by Levieil and Fouque in [29]. They suggested that all samples should be kept after the reduction and not only the ones having weight 1. Instead of determining the secret bit by bit using majority decision, the whole \(k-t\cdot b\) bit secret may be determined using the Walsh transform. The authors suggested two methods: LF1 and LF2—the methods are essentially the same, but differ in how the columns to be merged are chosen in the reduction phase.
LF1 picks a column in each partition and then adds it to the remaining samples in the same partition (entries having the same last b entries). This is identical to how the described BKW operates in its merging steps. The number of samples is reduced by \(2^b\) after each merge operation. Hence, after a series of t merges, the number of samples is about
$$\begin{aligned} r(t) = n - t \cdot 2^b. \end{aligned}$$
The algorithm uses the fast Walsh–Hadamard transform to determine the remaining secret of dimension \(k-t \cdot b\). Thus, no samples are discarded, and the algorithm does not, in contrast to BKW, query the oracle a multiple number of times. Therefore, a factor \(2^b\) is gained in terms of query complexity. The LF1 method was subsequently adopted by Bernstein and Lange in [4].
The other method, LF2, computes all pairs within the same partition. It produces more samples at the cost of increased dependency, thereby gaining more efficiency in practice. Given that there are on average \(\frac{n}{2^b}\) samples in one partition, we expect around
$$\begin{aligned} 2^b {n/2^b \atopwithdelims ()2} \end{aligned}$$
possible samples at the end of one merge step in LF2, or more generally
$$\begin{aligned} r'(t) = 2^b\cdot {r'(t-1)/2^b \atopwithdelims ()2}, \end{aligned}$$
after t merging steps, with \(r'(0) = n\). The number of samples is preserved when setting \(m = 3 \cdot 2^b\), and this setting is verified by an implementation in [29]. Like LF1, a fast Walsh–Hadamard transform (FWHT) is used to determine the secret. Combined with a more conservative use of samples, LF2 is expected to be at least as efficient as LF1 in practice. In particular, LF2 has a great advantage when the attacker has restricted access to the oracle.
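The sample-preserving choice \(m = 3\cdot 2^b\) is easy to check numerically; the small sketch below (ours, for illustration only) iterates the LF2 recursion and shows that the count stays at exactly \(3\cdot 2^b\).

```python
def lf2_samples(n, b, t):
    """Iterate r'(t) = 2^b * C(r'(t-1)/2^b, 2), starting from r'(0) = n."""
    r = n
    for _ in range(t):
        per_partition = r / 2 ** b
        r = 2 ** b * per_partition * (per_partition - 1) / 2
    return r

b = 21
assert lf2_samples(3 * 2 ** b, b, t=4) == 3 * 2 ** b   # exactly preserved
```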
We have illustrated the different methods in Fig. 1, which outlines the t merging steps and the sample count after each step for BKW/LF1 (\(r(t)\)) and LF2 (\(r'(t)\)).
Essential Idea
In this section, we try to give a very basic description of the idea used to give a new and more efficient algorithm for solving the \( \textsc {LPN} \) problem. A more detailed analysis will be provided in later sections.
Assume that we have an initial \( \textsc {LPN} \) problem described by
$$\begin{aligned} \mathbf {G}=\begin{pmatrix} \mathbf {g}_1^\mathrm{T}&\mathbf {g}_2^\mathrm{T}&\cdots&\mathbf {g}_n^\mathrm{T}\end{pmatrix} \end{aligned}$$
and \( \mathbf {z} = \mathbf {x} \mathbf {G} +\mathbf {e},\) where \(\mathbf {z}=\begin{pmatrix} z_1&z_2&\cdots&z_n \end{pmatrix}\) and \(\mathbf {e}=\begin{pmatrix} e_1&e_2&\cdots&e_n \end{pmatrix}\).
As previously shown in [25] and [4], we may through Gaussian elimination transform \(\mathbf {G}\) into systematic form. Assume that the first k columns are linearly independent and form the matrix \(\mathbf {D}^{-1}\). With a change of variables \({ \hat{\mathbf {x}}} = \mathbf {x} \mathbf {D}^{-1} \), we get an equivalent problem description with
$$\begin{aligned} \hat{\mathbf {G}}=\begin{pmatrix} \mathbf {I}&\hat{\mathbf {g}}_{k+1}^\mathrm{T}&\hat{\mathbf {g}}_{k+2}^\mathrm{T}&\cdots&\hat{\mathbf {g}}_n^\mathrm{T}\end{pmatrix}. \end{aligned}$$
We compute
$$\begin{aligned} \hat{\mathbf {z}} = \mathbf {z} + \begin{pmatrix} z_1,z_2,\ldots , z_k \end{pmatrix} \hat{\mathbf {G}} = \begin{pmatrix} \mathbf {0}, \hat{z}_{k+1}, \hat{z}_{k+2}, \ldots , \hat{z}_n \end{pmatrix} . \end{aligned}$$
In this situation, one may start performing a number of BKW steps on columns \(k+1\) to n, reducing the dimension k of the problem to something smaller. This will result in a new problem instance where noise in each position is larger, except for the first systematic positions. We may write the problem after performing t BKW steps in the form
$$\begin{aligned} \mathbf {G'}=\begin{pmatrix} \mathbf {I}&{\mathbf {g}'_{1}}^\mathrm{T}&{\mathbf {g}'_{2}}^\mathrm{T}&\cdots&{\mathbf {g}'_{m}}^\mathrm{T}\end{pmatrix} \end{aligned}$$
and
$$\begin{aligned} {\mathbf {z}'}= \begin{pmatrix} \mathbf {0}, z'_1,z'_2,\ldots , z'_m \end{pmatrix} ,\end{aligned}$$
where now \(\mathbf {G'}\) has dimension \(k'\times m\) with \(k'=k-bt\) and m is the number of columns remaining after the t BKW steps. We have \({\mathbf {z}'}={\mathbf {x}'}\mathbf {G'}+{\mathbf {e}'}\),
$$\begin{aligned} \mathbf {Pr}\left[ x'_i=0\right] =\frac{1}{2}(1+\epsilon ) \end{aligned}$$
and
$$\begin{aligned} \mathbf {Pr}\left[ \mathbf {x}'\cdot {\mathbf {g}'_i}^\mathrm{T}=z'_i\right] =\frac{1}{2}(1+\epsilon ^{2^t}). \end{aligned}$$
Now we explain the basics of the new idea proposed in the paper. In a problem instance as above, we may look at the random variables \(y'_i=\mathbf {x}'\cdot {\mathbf {g}'_i}^\mathrm{T}\). The bits in \(\mathbf {x}'\) are mostly zero, but a few are set to one. Let us assume that c bits are set to one. Furthermore, \(\mathbf {x}'\) is fixed for all i. We usually assume that \(\mathbf {g}'_i\) is generated according to a uniform distribution. However, if every column \(\mathbf {g}'_i\) were biased, i.e., if every bit in a column position were zero with probability \(\frac{1}{2}(1+\epsilon ')\), then we would observe that the variables \(y'_i\) are biased, as
$$\begin{aligned} y'_i=\left\langle \mathbf {x}',\mathbf {g}'_i\right\rangle =\sum _{j=1}^{c}[\mathbf {g}'_i]_{k_j}, \end{aligned}$$
where \(k_1,k_2,\ldots ,k_c\) are the bit positions where \(\mathbf {x}'\) has value one (here \([\mathbf {x}]_y\) denotes bit y of vector \(\mathbf {x}\)). In fact, assuming that the variables \([\mathbf {g}'_i]_{k_j}\) are independently distributed, the variables \(y'_i\) will have bias \((\epsilon ')^c\).
So how do we get the columns to be biased in the general case? We could simply hope for some of them to be biased, but if we need to use a larger number of columns, the bias would have to be small, giving a high complexity for an algorithm solving the problem. We propose instead to use a covering code to achieve something similar to what is described above. Vectors \(\mathbf {g}'_i\) are of length \(k'\), so we consider a code of length \(k'\) and some dimension l. Let us assume that a generator matrix of this code is denoted \(\mathbf {F}\). For each vector \( \mathbf {g}'_i\), we now find the codeword in the code spanned by \(\mathbf {F}\) that is closest (in Hamming sense) to \(\mathbf {g}'_i\). Assume that this codeword is denoted \(\mathbf {c}_i\). Then, we can write
$$\begin{aligned} \mathbf {g}'_i = \mathbf {c}_i + \mathbf {e}'_i, \end{aligned}$$
where \(\mathbf {e}'_i\) is a vector with biased bits. It remains to examine exactly how biased the bits in \( \mathbf {e}'_i\) will be, but assume for the moment that the bias is \(\epsilon '\). Going back to our previous expressions, we can write
$$\begin{aligned} y'_i=\left\langle \mathbf {x}',\mathbf {g}'_i\right\rangle =\mathbf {x}'\cdot (\mathbf {c}_i+\mathbf {e}'_i)^\mathrm{T}\end{aligned}$$
and since \(\mathbf {c}_i = \mathbf {u}_i\mathbf {F} \) for some \(\mathbf {u}_i\), we can write
$$\begin{aligned} y'_i=\mathbf {x}' \mathbf {F}^\mathrm{T}\cdot \mathbf {u}_i^\mathrm{T}+\mathbf {x}'\cdot {\mathbf {e}'_i}^\mathrm{T}. \end{aligned}$$
We may introduce \(\mathbf {v}=\mathbf {x}' \mathbf {F}^\mathrm{T}\) as a length l vector of unknown bits (linear combinations of bits from \(\mathbf {x}'\)) and again
$$\begin{aligned} y'_i=\mathbf {v} \cdot \mathbf {u}_i^\mathrm{T}+\mathbf {x}'\cdot {\mathbf {e}'_i}^\mathrm{T}. \end{aligned}$$
Since we have \(\mathbf {Pr}\left[ y'_i=z'_i\right] =1/2(1+\epsilon ^{2^t})\), we get
$$\begin{aligned} \mathbf {Pr}\left[ \mathbf {v} \cdot \mathbf {u}_i^\mathrm{T}= z'_i\right] =\frac{1}{2}(1+\epsilon ^{2^t}(\epsilon ')^c), \end{aligned}$$
where \(\epsilon '\) is the bias determined by the expected distance between \(\mathbf {g}'_i\) and the closest codeword in the code we are using, and c is the number of positions in \(\mathbf {x}'\) set to one. The last step in the new algorithm now selects about \(m=\mathcal {O}\left( l/(\epsilon ^{2^t}\cdot \epsilon '^c)^2\right) \) samples \(z'_1,z'_2,\ldots , z'_m\) and, for each guess of the \(2^l\) possible values of \(\mathbf {v}\), computes how many times \(\mathbf {v} \cdot \mathbf {u}_i^\mathrm{T}=z'_i\) for \(i=1,2,\ldots , m\). As this step is similar to a correlation attack scenario, we know that it can be computed efficiently using the fast Walsh–Hadamard transform. After recovering \(\mathbf {v}\), it is an easy task to recover the remaining unknown bits of \(\mathbf {x}'\).
A Toy Example
In order to illustrate the ideas and convince the reader that the proposed algorithm can be more efficient than previously known methods, we consider an example. We assume an \( \textsc {LPN} \) instance of dimension \(k=160\), where we allow at most \(2^{24}\) received samples and we allow at most around \(2^{24}\) vectors of length 160 to be stored in memory. Furthermore, the error probability is \(\eta =0.1\).
For this particular case, we propose the following algorithm. Note that for an intuitive explanation, we assume the number of required samples to be \(1/\epsilon _{tot}^{2}\), where \(\epsilon _{tot}\) is the total bias. A rigorous complexity analysis of the new algorithm will be presented later.
The first step is to compute the systematic form,
$$\begin{aligned} \hat{{\mathbf {G}}}=\begin{pmatrix} \mathbf {I}&\hat{\mathbf {g}}_{k+1}^\mathrm{T}&\hat{\mathbf {g}}_{k+2}^\mathrm{T}&\cdots&\hat{\mathbf {g}}_n^\mathrm{T}\end{pmatrix} \end{aligned}$$
$$\begin{aligned} {\hat{\mathbf {z}}}=\mathbf {z}+\begin{pmatrix}z_1&z_2&\ldots&z_k\end{pmatrix}\hat{\mathbf {{G}}}=\begin{pmatrix}\mathbf {0}&\hat{z}_{k+1}&\hat{z}_{k+2}&\ldots&\hat{z}_{n}\end{pmatrix}. \end{aligned}$$
Here, \(\hat{\mathbf {{G}}}\) has dimension 160 and \({ \hat{\mathbf {z}}}\) has length at most \(2^{24}\).
In the second step, we perform \(t=4\) merging steps (using the BKW/LF1 approach), the first step removing 22 bits and the remaining three each removing 21 bits. This results in \(\mathbf {G'}=\begin{pmatrix} \mathbf {I}&{\mathbf {g}'_{1}}^\mathrm{T}&{\mathbf {g}'_{2} }^\mathrm{T}&\cdots&{\mathbf {g}'_m}^\mathrm{T}\end{pmatrix}\) and \({\mathbf {z}'}=\begin{pmatrix}\mathbf {0}&z'_1&z'_2&\ldots&z'_m\end{pmatrix}, \) where now \(\mathbf {G'}\) has dimension \(75\times m\) and m is about \(3\cdot 2^{21}\). We have \({\mathbf {z}'}={\mathbf {x}'}\mathbf {G'}\),
$$\begin{aligned} \mathbf {Pr}\left[ x'_i=0\right] =\frac{1}{2}\cdot (1+\epsilon ), \end{aligned}$$
where \(\epsilon =0.8\) and
$$\begin{aligned} \mathbf {Pr}\left[ \left\langle \mathbf {x}', \mathbf {g}'_i\right\rangle =z'_i\right] =\frac{1}{2}\cdot (1+\epsilon ^{16}). \end{aligned}$$
Hence, the resulting problem has dimension 75 and the bias is \(\epsilon ^{2^t}=(0.8)^{16}\).
In the third step, we then select a suitable code of length 75. In this example, we choose a block code which is a direct sum of 25 [3, 1, 3] repetition codes, i.e., the dimension is 25. We map every vector \(\mathbf {g}'_i\) to the nearest codeword by simply selecting chunks of three consecutive bits and replacing them by either 000 or 111. With probability \(\frac{3}{4}\), we will change one position, and with probability \(\frac{1}{4}\) we will not have to change any position. In total, we expect to change \((\frac{3}{4}\cdot 1+\frac{1}{4}\cdot 0)\cdot 25\) positions. The expected weight of the length 75 vector \(\mathbf {e}'_i\) is \(\frac{1}{4} \cdot 75\), so the expected bias is \(\epsilon '=\frac{1}{2}\). As \(\mathbf {Pr}\left[ x'_i=1\right] =0.1\), the expected number of nonzero positions in \(\mathbf {x}'\) is 7.5. Assuming we have only \(c=6\) nonzero positions, we get
$$\begin{aligned} \mathbf {Pr}\left[ \left\langle \mathbf {v},\mathbf {u}_i\right\rangle = z'_i\right] =\frac{1}{2}\cdot \left( 1+0.8^{16}\cdot \left( \frac{1}{2}\right) ^6\right) =\frac{1}{2}\cdot (1+2^{-11.15}). \end{aligned}$$
In the last step, we then run through \(2^{25}\) values of \(\mathbf {v}\) and for each of them we compute how often \(\mathbf {v} \cdot \mathbf {u}_i^\mathrm{T}= z'_i\) for \(i=1,\ldots , 3\cdot 2^{21}\). Again since we use fast Walsh–Hadamard transform, the cost of this step is not much more than \(2^{25}\) operations.
The above four-step procedure forms one iteration of our solving algorithm, and we need to repeat it a few times. The expected number depends on the success probability of one iteration. For this particular repetition code, there are "bad events" that make the distinguisher fail. When two of the errors in \(\mathbf {x}'\) fall into the same concatenation, the bias is zero. If there are three errors in the same concatenation, then the bias is negative. To conclude, we can distinguish successfully if there are no more than 6 ones in \(\mathbf {x}'\) and each of them falls into a distinct concatenation, i.e., the overall bias is at least \(2^{-11.15}\). The success probability is thus
$$\begin{aligned} \sum _{i=0}^{6}{25 \atopwithdelims ()i } \cdot {3 \atopwithdelims ()1}^i\cdot \left( \frac{1}{10}\right) ^i\cdot \left( \frac{9}{10}\right) ^{75-i} \approx 0.28. \end{aligned}$$
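The toy-example numbers are easy to reproduce; the following short computation (ours) confirms the overall bias of \(2^{-11.15}\) and the success probability of roughly 0.28.

```python
from math import comb, log2

print(log2(0.8 ** 16 * 0.5 ** 6))        # total bias: about -11.15 (in log2)

p_success = sum(comb(25, i) * 3 ** i * 0.1 ** i * 0.9 ** (75 - i)
                for i in range(7))       # at most 6 ones, in distinct chunks
print(p_success)                         # about 0.28
```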
In comparison with other algorithms, the best approach we can find is the Kirchner [25] and the Bernstein and Lange [4] approaches, where one can do up to 5 merging steps. Removing 21 bits in each step leaves 55 remaining bits. Using the fast Walsh–Hadamard transform with \(0.8^{-64} = 2^{20.6}\) samples, we can include another 21 bits in this step, but there are still 34 remaining variables that need to be guessed.
Overall, the simple algorithm sketched above outperforms the best previous algorithm using optimal parameter values.
We have verified in simulation that the proposed algorithm works in practice, both in the LF1 and LF2 setting using the rate \(R=\frac{1}{3}\) concatenated repetition code.
Algorithm Description
Having introduced the key idea in a simplistic manner, we now formalize it by stating a new five-step LPN solving algorithm (see Algorithm 2) in detail. Its first three steps combine several well-known techniques on this problem, i.e., changing the distribution of the secret vector [25], sorting and merging to make the dimension of samples shorter [3], and partial secret guessing [4]. The efficiency improvement comes from a novel idea introduced in the last two subsections—if we employ a linear covering code and rearrange samples according to their nearest codewords, then the columns of the matrix, after subtracting their corresponding codewords, become the sparse vectors desired in the distinguishing process. We later propose a new distinguishing technique—subspace hypothesis testing—to remove the influence of the codeword part using the fast Walsh–Hadamard transform. The algorithm consists of five steps, each described in a separate subsection. These steps are graphically illustrated in Figs. 2 and 3.
Fig. 2 Illustration of the different steps of the new algorithm, using the LF1 and the LF2 merging approaches; only the upper systematic part used in hypothesis testing is shown
Fig. 3 After the columns have been merged t times, we have a matrix as shown above; in the upper part, we perform the partial secret guessing, and the remaining part is projected (with distortion) into a smaller space of dimension l using a covering code
Gaussian Elimination
Recall that our LPN problem is given by \(\mathbf {z}= \mathbf {x} \mathbf {G} + \mathbf {e}\), where \(\mathbf {z}\) and \(\mathbf {G}\) are known. We can apply an arbitrary column permutation \(\pi \) without changing the problem (but we change the error locations). A transformed problem is \( \pi (\mathbf {z}) = \mathbf {x} \pi (\mathbf {G}) + \pi (\mathbf {e}).\) This means that we can repeat the algorithm many times using different permutations, which very much resembles the operation of information-set decoding algorithms.
Continuing, we multiply by a suitable \(k\times k\) matrix \(\mathbf {D}\) to bring the matrix \(\mathbf {G}\) to a systematic form, \(\hat{\mathbf {G}}=\mathbf {D}\mathbf {G}.\) The problem remains the same, except that the unknowns are now given by the vector \( \tilde{\mathbf {x}} = \mathbf {x}\mathbf {D}^{-1}\). This is just a change of variables. As a second step, we also add the codeword \(\begin{pmatrix} z_1&z_2&\cdots&z_k \end{pmatrix}\hat{\mathbf {G}}\) to our known vector \(\mathbf {z}\), resulting in a received vector starting with k zero entries. Altogether, this corresponds to the change \(\hat{\mathbf {x}} =\mathbf {x}\mathbf {D}^{-1}+ \begin{pmatrix} z_1&z_2&\cdots&z_k \end{pmatrix}\).
Our initial problem has been transformed, and the problem is now written as
$$\begin{aligned} \hat{\mathbf {z}} = \begin{pmatrix} \mathbf {0}&\hat{z}_{k+1}&\hat{z}_{k+2} \cdots&\hat{z}_{n} \end{pmatrix}= \hat{\mathbf {x}} \hat{\mathbf {G}} + \mathbf {e}, \end{aligned}$$
where now \( \hat{\mathbf {G}}\) is in systematic form. Note that these transformations do not affect the noise level. We still have a single noise variable added in every position.
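A minimal GF(2) sketch of this transformation is given below, assuming (as above) that the first k columns of \(\mathbf {G}\) are linearly independent; all names are ours. It computes \(\mathbf {D}\) by Gauss–Jordan elimination, forms \(\hat{\mathbf {G}}=\mathbf {D}\mathbf {G}\), and adds \(\begin{pmatrix} z_1&\cdots&z_k \end{pmatrix}\hat{\mathbf {G}}\) to \(\mathbf {z}\) so that the received vector starts with k zeros.

```python
import numpy as np

def systematic_form(G, z):
    """Return (G_hat, z_hat) with G_hat = (I | ...) and z_hat starting in k zeros."""
    k, n = G.shape
    A = np.concatenate([G[:, :k], np.eye(k, dtype=np.uint8)], axis=1)
    for col in range(k):                                # Gauss-Jordan over F_2
        pivot = col + np.flatnonzero(A[col:, col])[0]   # IndexError if singular
        A[[col, pivot]] = A[[pivot, col]]
        for row in range(k):
            if row != col and A[row, col]:
                A[row] ^= A[col]
    D = A[:, k:]                                        # D = G_0^{-1}
    G_hat = (D @ G % 2).astype(np.uint8)
    z_hat = ((z + z[:k] @ G_hat) % 2).astype(np.uint8)
    return G_hat, z_hat

rng = np.random.default_rng(0)
k, n = 6, 16
while True:                        # resample (i.e., "re-permute") until invertible prefix
    G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)
    z = rng.integers(0, 2, size=n, dtype=np.uint8)
    try:
        G_hat, z_hat = systematic_form(G, z)
        break
    except IndexError:
        pass
assert (G_hat[:, :k] == np.eye(k, dtype=np.uint8)).all() and not z_hat[:k].any()
```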
Time–Memory Trade-Off
A schoolbook implementation of the above Gaussian elimination procedure requires about \(\frac{1}{2} \cdot n \cdot k^2\) bit operations; we propose, however, to reduce its complexity by using a more sophisticated time–memory trade-off technique. We store intermediate results in tables and then derive the final result by adding several items from the tables together. The detailed description is as follows.
For a fixed s, divide the matrix \(\mathbf {D}\) in \(a = \lceil \frac{k}{s}\rceil \) parts, i.e.,
$$\begin{aligned} \mathbf {D} = \begin{pmatrix} \mathbf {D}_1&\mathbf {D}_2&\ldots&\mathbf {D}_a \end{pmatrix}, \end{aligned}$$
where \(\mathbf {D}_i\) is a sub-matrix with s columns (except possibly the last matrix \({\mathbf {D}}_a\)). Then store all possible values of \(\mathbf {D}_i\mathbf {x}^\mathrm{T}\) for \(\mathbf {x} \in \mathbb {F}_{2}^{s}\) in tables indexed by i, where \(1 \le i \le a\). For a vector \(\mathbf {g} = \begin{pmatrix} \mathbf {g}_1&\mathbf {g}_2&\ldots&\mathbf {g}_a \end{pmatrix} \), the transformed vector is
$$\begin{aligned} \mathbf {D} \mathbf {g}^\mathrm{T}= \mathbf {D}_1 \mathbf {g}_1^\mathrm{T}+ \mathbf {D}_2 \mathbf {g}_2^\mathrm{T}+ \ldots + \mathbf {D}_a \mathbf {g}_a^\mathrm{T}, \end{aligned}$$
where \(\mathbf {D}_i \mathbf {g}_i^\mathrm{T}\) can be read directly from the table.
The cost of constructing the tables is about \(\mathcal {O}\left( 2^{s}\right) \), which can be negligible if memory in the later merge step is much larger. Furthermore, for each column, the transformation costs no more than \(k\cdot a \) bit operations; so, this step requires
$$\begin{aligned} C_1= (n-k)\cdot k \cdot a <n \cdot k \cdot a \end{aligned}$$
bit operations in total if \(2^{s}\) is much smaller than n.
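The following Python sketch (our own naming) illustrates the table-based trade-off for one vector: \(\mathbf {D}\) is split into blocks of s columns, each block is pre-multiplied with all \(2^s\) sub-vectors, and \(\mathbf {D}\mathbf {g}^\mathrm{T}\) is then assembled from a few table reads and XORs.

```python
import numpy as np

def build_tables(D, s):
    """Precompute D_i x^T for every x in {0,1}^s and every block D_i."""
    tables = []
    for start in range(0, D.shape[1], s):
        block = D[:, start:start + s]
        width = block.shape[1]                 # last block may be narrower
        table = []
        for x in range(2 ** width):
            bits = np.array([(x >> j) & 1 for j in range(width)], dtype=np.uint8)
            table.append((block @ bits) % 2)
        tables.append(table)
    return tables

def apply_D(tables, g, s):
    """Compute D g^T as the XOR of one table entry per block."""
    acc = np.zeros_like(tables[0][0])
    for i, start in enumerate(range(0, len(g), s)):
        idx = sum(int(bit) << j for j, bit in enumerate(g[start:start + s]))
        acc ^= tables[i][idx]
    return acc

rng = np.random.default_rng(2)
k, s = 12, 4
D = rng.integers(0, 2, size=(k, k), dtype=np.uint8)
g = rng.integers(0, 2, size=k, dtype=np.uint8)
assert (apply_D(build_tables(D, s), g, s) == (D @ g) % 2).all()
```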
A Minor Improvement
One observation is that only the distribution of the first \(k' = k- t\cdot b\) entries in the secret vector affects the later steps. In other words, we just need to make the first \(k'\) entries biased. Thus, we can skip the Gaussian elimination processing on the bottom \(t\cdot b\) rows of \(\mathbf {G}\). More formally, let the first k columns of \(\mathbf {G}\) be an invertible matrix \(\mathbf {G}_0\), where
$$\begin{aligned} \mathbf {G}_0 = \begin{bmatrix} \mathbf {G}_{01}&\mathbf {G}_{02} \\ \mathbf {G}_{03}&\mathbf {G}_{04} \end{bmatrix}, \end{aligned}$$
then instead of setting \(\mathbf {D} = \mathbf {G}_0^{-1}\), we define
$$\begin{aligned} \mathbf {D} = \begin{bmatrix} \mathbf {G}_{01}^{-1}&\mathbf {0} \\ -\mathbf {G}_{03}\mathbf {G}_{01}^{-1}&\mathbf {I} \end{bmatrix}. \end{aligned}$$
Then, the first \(k'\) columns of \(\mathbf {D} \mathbf {G}\) are of the form
$$\begin{aligned} \begin{bmatrix} \mathbf {I} \\ \mathbf {0} \end{bmatrix}. \end{aligned}$$
Denote the transformed secret vector \(\hat{\mathbf {x}}= \mathbf {x} \mathbf {D}^{-1} + \mathbf {z}_{[1,\ldots ,k']}\) similarly. Then, we have that \( \hat{\mathbf {z}} = \hat{\mathbf {x}} \hat{\mathbf {G}} + \mathbf {e} \), where \(\hat{\mathbf {G}}\) is \(\mathbf {D}\mathbf {G}\) and,
$$\begin{aligned} \hat{\mathbf {z}} = \begin{pmatrix} \mathbf {0}&\hat{z}_{k'+1}&\hat{z}_{k'+2}&\cdots&\hat{z}_{n} \end{pmatrix}= \mathbf {z} + \mathbf {z}_{[1,\ldots ,k']}\hat{\mathbf {G}}. \end{aligned}$$
Using the time–memory trade-off technique, the complexity can be computed as:
$$\begin{aligned} C'_1 = (n - k') \cdot k \cdot \left\lceil \frac{k'}{s}\right\rceil < n\cdot k\cdot \left\lceil \frac{k'}{s}\right\rceil . \end{aligned}$$
Compared with the earlier expression for \(C_1\), we reduce the value a from \(\lceil \frac{k}{s}\rceil \) to \(\lceil \frac{k'}{s}\rceil \), where \(k' = k- t\cdot b\).
Merging Columns
This next step consists of merging columns. The input to this step is \( \hat{\mathbf {z}}\) and \( \hat{\mathbf {G}}\). We write \(\hat{\mathbf {G}}= \begin{pmatrix}\mathbf {I}&\mathbf {L}_0\end{pmatrix}\) and process only the matrix \(\mathbf {L}_0\). As the length of \(\mathbf {L}_0\) is typically much larger than the systematic part of \(\hat{\mathbf {G}}\), this is essentially no restriction at all. We then use the sort-and-match technique as in the BKW algorithm, operating on the matrix \(\mathbf {L}_0\). This process will give us a sequence of matrices denoted \(\mathbf {L}_0, \mathbf {L}_1, \mathbf {L}_2,\ldots , \mathbf {L}_{t}\).
Let us denote the number of columns of \(\mathbf {L}_i\) by r(i), with \(r(0)=r'(0) = n-k'\). Adopting the LF1 type technique, every step operating on columns will reduce the number of samples by \(2^b\), yielding that
$$\begin{aligned} m=r(t)= r(0)- t\cdot 2^b \iff n-k' = m + t\cdot 2^b. \end{aligned}$$
Using the setting of LF2, the number of samples is
$$\begin{aligned} \begin{aligned}&m = r'(t)=2^b\cdot {r'(t-1)/2^b \atopwithdelims ()2} \\&\implies n-k' \approx \root 2^{t} \of {2^{(b+1)(2^{t}-1)} \cdot m}. \end{aligned} \end{aligned}$$
The expression for \(r'(t)\) does not appear in [29], but it can be found in [5]. We see that if m is equal to \(3\cdot 2^b\), the number of samples is preserved during the reductions. Implementations suggest that there is no visible effect on the success of the algorithm, so we adopt this setting.
Apart from the process of creating the \(\mathbf {L}_i\) matrices, we need to update the received vector in a similar fashion. A simple way is to put \(\hat{\mathbf {z}}\) as a first row in the representation of \( \hat{\mathbf {G}}\). This procedure ends with a matrix \( \begin{pmatrix}\mathbf {I}&\mathbf {L}_t\end{pmatrix}\), where the last \(t \cdot b\) entries of every column of \(\mathbf {L}_t\) are zero. By discarding the last \(t \cdot b\) rows, we obtain a matrix of dimension \(k-t\cdot b\) that can be written as \(\mathbf {G}'= \begin{pmatrix}\mathbf {I}&\mathbf {L}_t\end{pmatrix}\), with a corresponding received vector \(\mathbf {z}' = \begin{pmatrix}\mathbf {0}&z_1'&z_2'&\cdots&z_m'\end{pmatrix}\). The first \(k'=k-t\cdot b\) positions are affected by only a single noise variable, so we can write
$$\begin{aligned} \mathbf {z}' = \mathbf {x}' \mathbf {{G}}' + \begin{pmatrix}e_1&e_2&\cdots&e_{k'}&\tilde{e}_1&\tilde{e}_2&\cdots&\tilde{e}_m\end{pmatrix}, \end{aligned}$$
for some unknown \(\mathbf {x}'\) vector (here, we remove the bottom \(t\cdot b\) bits of \(\hat{\mathbf {x}}\) to form the length \(k'\) vector \(\mathbf {x}'\)), where
$$\begin{aligned} \tilde{e}_i=\sum _{i_j\in \mathcal{T}_i,~|\mathcal{T}_i|\le 2^{t}} e_{i_j} \end{aligned}$$
and \(\mathcal{T}_i\) contains the positions that have been added up to form the \((k'+i)\)th column of \(\mathbf {G}'\). By the piling-up lemma, the bias of \(\tilde{e}_i\) becomes \(\epsilon ^{2^{t}}\). We denote the complexity of this step by \(C_2\), where
$$\begin{aligned} C_2 = {\left\{ \begin{array}{ll} \sum _{i=1}^t (k+1-i\cdot b) \cdot (n-k'-i\cdot 2^b), &{} \quad \text {the LF1 setting,}\\ \sum _{i=1}^t (k+1-i\cdot b) \cdot (n-k'), &{} \quad \text {the LF2 setting.}\\ \end{array}\right. } \end{aligned}$$
In both cases,
$$\begin{aligned} C_2\approx (k+1)\cdot t \cdot n. \end{aligned}$$
Partial Secret Guessing
The previous procedure outputs \(\mathbf {G}'\) with dimension \(k' = k-t \cdot b\) and m columns. We now divide \(\mathbf {x}'\) into two parts:
$$\begin{aligned} \mathbf {x}'= \begin{pmatrix}\mathbf {x}'_1&\mathbf {x}'_2\end{pmatrix}, \end{aligned}$$
where \(\mathbf {x}'_1\) is of length \(k''\). In this step, we simply guess all vectors \(\mathbf {x}'_2\in \mathcal {B}_{2}(k'-k'',w_0)\) for some \(w_0\) and update the observed vector \(\mathbf {z}' \) accordingly. This transforms the problem to that of attacking a new smaller \( \textsc {LPN} \) problem of dimension \(k''\) with the same number of samples. Firstly, note that this will only work if \(w_\text{ H }\left( \mathbf {x}'_2\right) \le w_0\), and we denote this probability by \(P(w_0,k'-k'')\). Secondly, we need to be able to distinguish a correct guess from incorrect ones, and this is the task of the remaining steps. The complexity of this step is
$$\begin{aligned} C_3 = m\cdot \sum ^{w_0}_{i=0}{k' - k'' \atopwithdelims ()i}i. \end{aligned}$$
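For illustration, the guesses \(\mathbf {x}'_2\) range over the Hamming ball \(\mathcal {B}_{2}(k'-k'',w_0)\); a simple enumeration sketch (ours) follows, together with a check of the ball size.

```python
from itertools import combinations
from math import comb

def hamming_ball(length, w0):
    """Yield all binary vectors of the given length with weight at most w0."""
    for w in range(w0 + 1):
        for support in combinations(range(length), w):
            v = [0] * length
            for pos in support:
                v[pos] = 1
            yield tuple(v)

assert sum(1 for _ in hamming_ball(10, 2)) == sum(comb(10, i) for i in range(3))
```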
Covering-Coding Method
In this step, we use a \([k'', l]\) linear code \(\mathcal {C}\) with covering radius \(d_C\) to group the columns. That is, we rewrite
$$\begin{aligned} \mathbf {g}'_i = \mathbf {c}_i + \mathbf {e}'_i, \end{aligned}$$
where \(\mathbf {c}_i\) is the nearest codeword in \(\mathcal {C}\), and \(w_\text{ H }\left( \mathbf {e}'_i\right) \le d_C\). The employed linear code is characterized by a systematic generator matrix
$$\begin{aligned} \mathbf {F} = \begin{pmatrix}\mathbf {I}&\mathbf {A}\end{pmatrix} \in \mathbb {F}_{2}^{l\times k''}, \end{aligned}$$
that has the corresponding parity-check matrix
$$\begin{aligned} \mathbf {H} = \begin{pmatrix}\mathbf {A}^\mathrm{T}&\mathbf {I}\end{pmatrix} \in \mathbb {F}_{2}^{(k'' - l) \times k''}. \end{aligned}$$
There are several ways to select a code. An efficient way of realizing the above grouping idea is by a table-based syndrome-decoding technique. The procedure is as follows:
We construct a constant-time query table containing \(2^{k'' - l}\) items, each of which stores a syndrome and its corresponding minimum-weight error vector.
Once the syndrome \(\mathbf {H} {\mathbf {g}'_i}^\mathrm{T}\) is computed, we find its corresponding error vector \(\mathbf {e}'_i\) by a table lookup; adding the two together yields the nearest codeword \(\mathbf {c}_i\).
The remaining task is to calculate the syndrome efficiently. We sort the vectors \(\mathbf {g}'_i\) according to their first l bits, where \(1 \le i \le m\), and group them into \(2^l\) partitions denoted by \(\mathcal {P}_j\) for \(1\le j\le 2^l\). Starting from the partition \(\mathcal {P}_1\) whose first l bits are all zero, we can derive the syndrome by reading its last \(k'' - l\) bits without any additional computational cost. If we know one syndrome in \(\mathcal {P}_j\), we can then compute another syndrome in the same partition within \(2(k'' - l)\) bit operations, and another in a different partition whose first l-bit vector has Hamming distance 1 from that of \(\mathcal {P}_j\) within \(3(k'' - l)\) bit operations. Therefore, the complexity of this step is
$$\begin{aligned} C_4= (k'' -l)\cdot (2m + 2^l). \end{aligned}$$
Notice that the selected linear code determines the syndrome table, which can be pre-computed within complexity \(\mathcal {O}\left( k'' \cdot 2^{k'' -l}\right) \). For some instances, building such a full syndrome table may dominate the complexity, i.e., when \(k'' \cdot 2^{k'' -l}\) becomes too large. Here, we use a code concatenation to reduce the size of the syndrome table, thereby making this cost negligible compared with the total attacking complexity.
We split the search space into two (or several) separate spaces by using a concatenated code construction. As an example, let \(\mathcal {C}'\) be a concatenation of two \([k''/2,l/2]\) linear codes. Then, the syndrome tables can be built in \(\mathcal {O}\left( k'' \cdot 2^{k''/2 -l/2}\right) \) time and memory. We assume that the two codes are identical; both will contribute to the final noise. The decoding complexity then changes to
$$\begin{aligned} C'_4= (k'' -l)\cdot (2m + 2^{l/2}). \end{aligned}$$
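A small Python sketch of the table-based syndrome decoding is given below (names are ours). It brute-forces a minimum-weight error pattern per syndrome, which is only feasible for the small constituent codes of a concatenation, and then finds the nearest codeword by one lookup; the [3, 1] repetition code from the toy example serves as a check.

```python
from itertools import combinations

def build_syndrome_table(H):
    """H as a list of rows (bit tuples); maps each syndrome to a min-weight error."""
    n = len(H[0])
    table = {}
    for w in range(n + 1):                     # increasing weight => first hit is minimal
        for support in combinations(range(n), w):
            e = tuple(1 if i in support else 0 for i in range(n))
            s = tuple(sum(h[i] * e[i] for i in range(n)) % 2 for h in H)
            table.setdefault(s, e)
    return table

def nearest_codeword(g, H, table):
    s = tuple(sum(h[i] * g[i] for i in range(len(g))) % 2 for h in H)
    e = table[s]
    return tuple(a ^ b for a, b in zip(g, e)), e

H = [(1, 1, 0), (1, 0, 1)]                     # parity checks of the [3,1] repetition code
table = build_syndrome_table(H)
c, e = nearest_codeword((1, 1, 0), H, table)
assert c == (1, 1, 1) and sum(e) == 1
```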
Subspace Hypothesis Testing
In the subspace hypothesis testing step, we group the (processed) samples \((\mathbf {g}'_i, z'_i)\) in sets \(L(\mathbf {c}_i)\) according to their nearest codewords and define the function \(f_L(\mathbf {c}_i)\) as
$$\begin{aligned} f_L(\mathbf {c}_i) = \sum _{(\mathbf {g}'_i, z'_i) \in L(\mathbf {c}_i)} (-1)^{z'_i}. \end{aligned}$$
The employed systematic linear code \(\mathcal {C}\) describes a bijection between the linear space \(\mathbb {F}_{2}^{l}\) and the set of all codewords in \(\mathbb {F}_{2}^{k''}\), and moreover, due to its systematic feature, the corresponding information vector appears explicitly in their first l bits. We can thus define a new function
$$\begin{aligned} g(\mathbf {u}) = f_L(\mathbf {c}_i), \end{aligned}$$
such that \(\mathbf {u}\) represents the first l bits of \(\mathbf {c}_i\) and exhausts all points in \(\mathbb {F}_{2}^{l}\).
The Walsh transform of g is defined as
$$\begin{aligned} G(\mathbf {v}) = \sum _{\mathbf {u}\in \mathbb {F}_{2}^{l}}g(\mathbf {u})(-1)^{\left\langle \mathbf {v},\mathbf {u}\right\rangle }. \end{aligned}$$
Here, we exhaust all candidates of \(\mathbf {v} \in \mathbb {F}_{2}^{l}\) by computing the Walsh transform.
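The transform itself can be computed with the standard in-place butterfly; below is a minimal sketch (ours) together with a brute-force check of the definition for \(l=3\).

```python
def fwht(g):
    """Fast Walsh-Hadamard transform: G(v) = sum_u g(u) (-1)^{<v,u>}, in l * 2^l time."""
    G = list(g)
    h = 1
    while h < len(G):
        for start in range(0, len(G), 2 * h):
            for j in range(start, start + h):
                G[j], G[j + h] = G[j] + G[j + h], G[j] - G[j + h]
        h *= 2
    return G

g = [3, -1, 0, 2, 5, -2, 1, 0]                 # a table of g(u) over F_2^3
direct = [sum(g[u] * (-1) ** bin(u & v).count("1") for u in range(8))
          for v in range(8)]
assert fwht(g) == direct
```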
The following lemma illustrates the reason why we can perform hypothesis testing on the subspace \(\mathbb {F}_{2}^{l}\).
Lemma There exists a unique vector \(\mathbf {v} \in \mathbb {F}_{2}^{l}\) s.t.,
$$\begin{aligned} \left\langle \mathbf {v},\mathbf {u}\right\rangle = \left\langle \mathbf {x}',\mathbf {c}_i\right\rangle . \end{aligned}$$
Proof. As \(\mathbf {c}_i = \mathbf {u}\mathbf {F}\), we obtain
$$\begin{aligned} \left\langle \mathbf {x}',\mathbf {c}_i\right\rangle = \mathbf {x}' (\mathbf {u}\mathbf {F})^\mathrm{T}= \mathbf {x}' \mathbf {F}^\mathrm{T}\mathbf {u}^\mathrm{T}= \left\langle \mathbf {x}' \mathbf {F}^\mathrm{T},\mathbf {u}\right\rangle . \end{aligned}$$
Thus, we construct the vector \(\mathbf {v} = \mathbf {x}'\mathbf {F}^\mathrm{T}\) that fulfills the requirement. On the other hand, the uniqueness is obvious. \(\square \)
Before we continue to go deeper into the details of the attack, we will now try to illustrate how the subspace hypothesis test is performed. Consider the following:
$$\begin{aligned} \left( \begin{array}{c} x'_1 \\ \vdots \\ x'_{k''} \\ \hline 0 \\ \vdots \end{array}\right) ^\mathrm{T}\left( \begin{array}{cc|c|c} *&{} *&{} (\mathbf {u} {\mathbf {F}} + \mathbf {e}'_i)_1 &{} *\\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ *&{} *&{} (\mathbf {u} {\mathbf {F}} + \mathbf {e}'_i)_{k''} &{}*\\ \hline &{} &{} 0 &{} \\ &{} &{} \vdots &{} \\ \end{array}\right) = \left( \begin{array}{c} *\\ \vdots \\ *\\ \hline y'_i \\ \hline *\\ \vdots \end{array}\right) ^\mathrm{T}. \end{aligned}$$
As a next step, we can separate the discrepancy \(\mathbf {e}'_i\) from \(\mathbf {u} {\mathbf {F}}\), which yields
$$\begin{aligned} \left( \begin{array}{c} x'_1 \\ \vdots \\ x'_{k''} \\ \hline 0 \\ \vdots \end{array}\right) ^\mathrm{T}\left( \begin{array}{cc|c|c} *&{} *&{} (\mathbf {u} {\mathbf {F}})_1 &{} *\\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ *&{} *&{} (\mathbf {u} {\mathbf {F}})_{k''} &{}*\\ \hline &{} &{} 0 &{} \\ &{} &{} \vdots &{} \\ \end{array}\right) = \left( \begin{array}{c} *\\ \vdots \\ *\\ \hline y'_i + \left\langle \mathbf {x}'_1, \mathbf {e}'_i\right\rangle \\ \hline *\\ \vdots \end{array}\right) ^\mathrm{T}. \end{aligned}$$
We now see that the dimension of the problem has been reduced, i.e., \(\mathbf {x}_1' \mathbf {F}^\mathrm{T}\in \mathbb {F}_2^{l}\), where \(l < k''\). A simple transformation yields
$$\begin{aligned} \left( \begin{array}{c} (\mathbf {x}_1'{\mathbf {F}}^\mathrm{T})_1 \\ \vdots \\ (\mathbf {x}_1' {\mathbf {F}}^\mathrm{T})_{l} \\ \hline 0 \\ \vdots \end{array}\right) ^\mathrm{T}\left( \begin{array}{cc|c|c} *&{} *&{} {u}_1 &{} *\\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ *&{} *&{} {u}_{l} &{}*\\ \hline &{} &{} 0 &{} \\ &{} &{} \vdots &{} \\ \end{array}\right) = \left( \begin{array}{c} *\\ \vdots \\ *\\ \hline y'_i + \left\langle \mathbf {x}_1', \mathbf {e}'_i\right\rangle \\ \hline *\\ \vdots \end{array}\right) ^\mathrm{T}. \end{aligned}$$
Since \(w_\text{ H }\left( \mathbf {e}'_i\right) \le d_C\) and \(w_\text{ H }\left( \mathbf {x}'_1\right) \approx \eta \cdot k''\), the contribution from \(\left\langle \mathbf {x}_1', \mathbf {e}'_i\right\rangle \) is small. Note that \(\mathbf {e}'_i\) is the error from the above procedure, and that we did not yet include the error from the oracle and the merging procedure. Recall that the sequence received from the oracle is \(z_i = y_i + e_i\), which after merging the columns of \({\mathbf {G}}\) becomes \(z'_i = y'_i + \tilde{e}_i\). All things considered (all sources of error piled on the sequence), we have
$$\begin{aligned} z_i' = \mathbf {v} \cdot \mathbf {u}_i^\mathrm{T}+ \tilde{e}_i + \left\langle \mathbf {x}_1', \mathbf {e}'_i\right\rangle . \end{aligned}$$
Given the candidate \(\mathbf {v}\), \(G(\mathbf {v})\) is the difference between the number of predicted 0's and the number of predicted 1's for the bit \(\tilde{e}_i + \left\langle \mathbf {x}_1',\mathbf {e}'_i\right\rangle \). Assume that \(\left\langle \mathbf {x}'_1,\mathbf {e}'_i\right\rangle \) contributes a noise with bias no smaller than \(\epsilon _{set}\). If \(\mathbf {v} \) is the correct guess, then it is Bernoulli distributed with noise parameter
$$\begin{aligned} \frac{1}{2} \cdot \left( 1+\epsilon ^{2^t} \cdot \epsilon _{set}\right) ; \end{aligned}$$
otherwise, it is considered random. Thus, the best candidate \(\mathbf {v}_{opt}\) is the one that maximizes the absolute value of \(G(\mathbf {v})\), i.e.,
$$\begin{aligned} \mathbf {v}_{opt} = \mathop {\mathrm {arg\,max}}\limits _{\mathbf {v} \in \mathbb {F}_2^{l}}|G(\mathbf {v})|, \end{aligned}$$
and we need approximately
$$\begin{aligned} \frac{4 \ln 2 \cdot l}{(\epsilon ^{2^{t}}\cdot \epsilon _{set})^2} \end{aligned}$$
samples to distinguish these two cases.
Note that a false positive can be recognized without much cost. If the distinguisher fails, we then choose another permutation to run the algorithm again. The procedure will continue until we find the secret vector \(\mathbf {x}\).
We use the fast Walsh–Hadamard transform technique to accelerate the distinguishing step. As the hypothesis testing runs for every guess of \(\mathbf {x}'_2\), the overall complexity of this step is
$$\begin{aligned} C_5 {\mathop {=}\limits ^{\mathrm{def}}}l \cdot 2^l \cdot \sum ^{w_0}_{i=0}{k'-k'' \atopwithdelims ()i}. \end{aligned}$$
In the previous section, we already indicated the complexity of each step. We now put it together in a single complexity estimate. We first formulate the formula for the probability P(w, j) of having at most w errors in j positions, which follows from the binomial distribution, i.e.,
$$\begin{aligned} P(w,j) {\mathop {=}\limits ^{\mathrm{def}}}\sum _{i=0}^{w}{j\atopwithdelims ()i} (1-\eta )^{j-i} \cdot \eta ^i. \end{aligned}$$
The complexity consists of three parts:
Inner complexity The complexity of each step in the algorithm, i.e.,
$$\begin{aligned} C_{one-iter} = C'_1+C_2+C_3+C_4+C_5. \end{aligned}$$
These steps are performed in every iteration.
Guessing The probability of making a correct guess on the weight of \(\mathbf {x}_2'\), i.e.,
$$\begin{aligned} P_{guess} {\mathop {=}\limits ^{\mathrm{def}}}\mathbf {Pr}\left[ w_\text{ H }\left( \mathbf {x}'_2\right) \le w_0\right] = P(w_0,k'-k''). \end{aligned}$$
Testing The probability that the constraint on the bias level introduced by coding (i.e., no smaller than \(\epsilon _{set}\)) is fulfilled, denoted by \(P_{test}\).
The success probability in one iteration is \(P(w_0,k'-k'') \cdot P_{test}\). The presented algorithm is of the Las Vegas type, and in each iteration, the complexity accumulates step by step. Hence, we obtain the following theorem.
Theorem 1
(The complexity of Algorithm 2) Let n be the number of samples required and \(a,b,t,k'', l,w_0, \epsilon _{set}\) be algorithm parameters. For the \( \textsc {LPN} \) instance with parameter \((k,\eta )\), the number of bit operations required for a successful run of the new attack, denoted \(C^*(a,b,t,k'',l,w_0, \epsilon _{set})\), is equal to
$$\begin{aligned} \begin{aligned}&P_{guess}^{-1} \cdot P_{test}^{-1} \cdot \left\{ k\cdot n \cdot a + (k+1)\cdot t \cdot n \right. \\&\quad \left. + \sum _{i=0}^{w_0}{k'-k'' \atopwithdelims ()i}(m\cdot i + l \cdot 2^l)+ (k''-l) \cdot (2m + 2^l)\right\} , \end{aligned} \end{aligned}$$
under the condition that
$$\begin{aligned} m \ge \frac{4 \ln 2 \cdot l}{(\epsilon ^{2^{t}}\cdot \epsilon _{set})^2}, \end{aligned}$$
where \(m = n - t2^b\) in the LF1 setting and \(m=n=3\cdot 2^b\) in the LF2 setting (Footnote 10).
Proof The complexity of one iteration is given by \(C'_1+C_2+C_3+C_4+C_5\). The expected number of iterations is the inverse of \(P_{guess} \cdot P_{test}\). Substituting the formulas into the above completes the proof. Condition (45) ensures that we have enough samples to determine the correct guess with high probability. \(\square \)
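The bracketed expression of Theorem 1 can be transcribed directly. In the hedged sketch below, the probabilities \(P_{guess}\) and \(P_{test}\) are passed in (the former can be obtained from P(w, j, eta) above, the latter from the code-dependent estimates derived next); all parameter names mirror the theorem, with kp and kpp standing for \(k'\) and \(k''\).

```python
from math import comb

def theorem1_bit_operations(k, n, a, t, b, kp, kpp, l, w0, P_guess, P_test, lf2=True):
    """Expected number of bit operations according to Theorem 1."""
    m = n if lf2 else n - t * 2 ** b   # LF2: m = n = 3*2^b; LF1: m = n - t*2^b
    inner = (k * n * a + (k + 1) * t * n
             + sum(comb(kp - kpp, i) * (m * i + l * 2 ** l) for i in range(w0 + 1))
             + (kpp - l) * (2 * m + 2 ** l))
    return inner / (P_guess * P_test)
```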
It remains to calculate the value of \(P_{test}\), which is determined by the employed code.
Bias from a Single Perfect Code
If we use a length-\(k''\) perfect code (Footnote 11) with covering radius \(d_C\), the bias \(\epsilon '\) in \(\mathbf {e}_i'\) is determined by the following proposition (Footnote 12).
Proposition 1 (Bias from covering code [31]) If the covering code \(\mathbf {F}\) has an optimal covering radius, then the probability \(\mathbf {Pr}_{w_{\mathrm{H}}\left( \mathbf {x}'_1\right) =c}\left[ \left\langle \mathbf {x}'_1,\mathbf {e}'_i\right\rangle =1\right] \) is given by
$$\begin{aligned} \varphi (c) \overset{\mathrm{def}}{=} |\mathcal {B}_{2}(k'',d_C)|^{-1} \cdot \sum _{i\ \mathrm{odd}}^{\min (c,d_C)} \binom{c}{i}\cdot |\mathcal {B}_{2}(k''-c,d_C-i)| \end{aligned}$$
where \(k''\) is the dimension of \(\mathbf {x}'_1\) and \(d_C\) is the covering radius. Thus, the computed bias \(\epsilon (c)\) conditioned on the weight of \(\mathbf {x}'_1\) is
$$\begin{aligned} \epsilon (c)= 1- 2\varphi (c). \end{aligned}$$
Proof Let the c nonzero positions of \(\mathbf {x}'_1\) represent one set of bins and the \(k''-c\) zero positions another set of bins.
$$\begin{aligned} \underbrace{\sqcup ~\sqcup ~\cdots ~\sqcup }_{c}~\Bigg |~\underbrace{\sqcup ~\sqcup ~\sqcup ~\sqcup ~\cdots ~\sqcup }_{k''-c} \end{aligned}$$
Assume that each bin contains at most one ball. If an odd number of balls lies in the first set of c bins, then \(\left\langle \mathbf {x}'_1,\mathbf {e}'_i\right\rangle =1\). Suppose there are i such balls; then there are \(\binom{c}{i}\) ways to arrange them within those bins, where \(i \le j \overset{\mathrm{def}}{=} \min (c,d_C)\) since at most \(d_C\) balls may be placed in total. Up to \(d_C-i\) balls remain to be placed in the other set of \(k''-c\) bins, which accounts for \(|\mathcal {B}_{2}(k''-c,d_C-i)|\) possibilities. The summation runs over all odd i. \(\square \)
The bias function \(\epsilon (c)\) is monotonically decreasing, so if we preset a bias level \(\epsilon _{set}\), all the possible \(\mathbf {x}'_1\) with weight no more than \(c_{0}\) will be distinguished successfully, where \(c_{0} = \max \{c \mid \epsilon (c)\ge \epsilon _{set}\}\). We can then present a lower bound on \(P_{test}\), i.e.,
$$\begin{aligned} P_{test} = P(c_{0},k''). \end{aligned}$$
Note that this estimation lower bounds the success probability, which is higher in practice, as the distinguisher will still succeed with some probability even if the bias level introduced by coding is smaller than the one we set. We can also make use of the list-decoding idea to increase the success probability by keeping a small list of candidates. A sketch of this computation follows.
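Under the assumptions of Proposition 1, both the conditional bias and the resulting lower bound on \(P_{test}\) reduce to binomial sums. A minimal sketch (the function names are ours; P(w, j, eta) is the helper defined earlier):

```python
from math import comb

def ball(n, r):
    """|B_2(n, r)|: number of binary vectors of length n and weight at most r."""
    return sum(comb(n, i) for i in range(r + 1))

def bias(c, kpp, dC):
    """epsilon(c) = 1 - 2*phi(c) from Proposition 1 for a length-kpp code with
    covering radius dC, conditioned on w_H(x'_1) = c."""
    phi = sum(comb(c, i) * ball(kpp - c, dC - i)
              for i in range(1, min(c, dC) + 1, 2)) / ball(kpp, dC)
    return 1 - 2 * phi

def P_test_single(kpp, dC, eps_set, eta):
    """Lower bound P(c0, kpp), with c0 the largest weight whose bias meets eps_set."""
    c0 = max(c for c in range(kpp + 1) if bias(c, kpp, dC) >= eps_set)
    return P(c0, kpp, eta)
```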
The Concatenated Construction
Until now, we have only considered using a single code for the covering-code part. In some cases, performing syndrome decoding may be too expensive for the optimal parameters; to overcome this, we use a concatenated code construction. As an example, we illustrate the complexity estimation for the concatenation of two codes, which is the optimal code construction for solving several LPN instances.
As in the previous case, we set an explicit lower bound on the bias \(\epsilon ' \ge \epsilon _{set}\) introduced from the covering code part, which is attained only by a certain set \(\mathcal {E}_{\epsilon _{set}}\) of (good) error patterns in the secret. For a concatenation of two codes, we have divided the vector into two parts
$$\begin{aligned} \mathbf {x}'_1 = \begin{pmatrix} \bar{\mathbf {x}}_1&\bar{\mathbf {x}}_2 \end{pmatrix} \end{aligned}$$
and hence,
$$\begin{aligned} \mathbf {e}'_i = \begin{pmatrix} \bar{\mathbf {e}}^{(1)}_i&\bar{\mathbf {e}}^{(2)}_i\end{pmatrix}. \end{aligned}$$
The noise \(\left\langle \mathbf {x}'_1,\mathbf {e}'_i\right\rangle \) can be rewritten as
$$\begin{aligned} \left\langle \mathbf {x}'_1,\mathbf {e}'_i\right\rangle = \left\langle \bar{\mathbf {x}}_1,\bar{\mathbf {e}}^{(1)}_i\right\rangle + \left\langle \bar{\mathbf {x}}_2,\bar{\mathbf {e}}^{(2)}_i\right\rangle , \end{aligned}$$
which implies that the bias \(\epsilon '=\epsilon _1\epsilon _2\), where \(\epsilon _1\) (\(\epsilon _2\)) is the bias introduced by the first (second) code and can be computed by Proposition 1. We then determine all the (good) error patterns \(\mathcal {E}_{\epsilon _{set}}\) in the secret such that the bias \(\epsilon ' \ge \epsilon _{set}\).
We can write the success probability \(P_{test} \overset{\mathrm{def}}{=} \mathbf {Pr}\left[ \mathbf {x}'_1 \in \mathcal {E}_{\epsilon _{set}}\right] \) as
$$\begin{aligned} \sum _{(\bar{\mathbf {x}}_1~ \bar{\mathbf {x}}_2) \in \mathcal {E}_{\epsilon _{set}}} \eta ^{w_{\mathrm{H}}\left( \bar{\mathbf {x}}_1\right) } (1-\eta )^{k''/2-w_{\mathrm{H}}\left( \bar{\mathbf {x}}_1\right) } \cdot \eta ^{w_{\mathrm{H}}\left( \bar{\mathbf {x}}_2\right) } (1-\eta )^{k''/2-w_{\mathrm{H}}\left( \bar{\mathbf {x}}_2\right) }, \end{aligned}$$
since each bit of the transformed secret equals 1 with probability \(\eta \).
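The same bias-level bookkeeping can be carried out numerically by summing the weight distribution over all pairs whose combined bias clears \(\epsilon _{set}\). A sketch, assuming two identical halves of length \(k''/2\) with covering radius \(d_C\) and per-weight bias given by bias() from the earlier sketch:

```python
from math import comb

def P_test_concat(kpp, dC, eps_set, eta):
    """P_test for two identical concatenated codes of length kpp/2 each."""
    half = kpp // 2

    def weight_prob(w):
        # Probability that a Bernoulli(eta) half-vector has weight exactly w.
        return comb(half, w) * eta ** w * (1 - eta) ** (half - w)

    return sum(weight_prob(w1) * weight_prob(w2)
               for w1 in range(half + 1)
               for w2 in range(half + 1)
               if bias(w1, half, dC) * bias(w2, half, dC) >= eps_set)
```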
As discussed in Sect. 6.1, we can expect the algorithm to work slightly better in practice.
The complexity \(C_4\) changes in the concatenated code case; we denote the new value by \(C'_4\). The pre-computation of the syndrome tables also has a lower complexity, since the codes are smaller and can be treated separately. Since the pre-computation complexity \(\mathcal {O}\left( k''\cdot 2^{k''/2-l/2}\right) \) must be less than (Footnote 13) or match the total attacking complexity, the lowered time complexity allows for looser constraints on the algorithm parameters. Apart from these differences, the complexity expression is the same as that for the non-concatenated construction.
It is straightforward to extend the above analysis to a concatenation of multiple linear codes. As before, we choose to preset a lower bound \(\epsilon _{set}\) for the bias and derive a formula to estimate the probability of all the good error patterns in the secret. This type of analysis has actually been done in the toy example from Sect. 4.1.
In that example, we concatenate 25 \([3,1]\) repetition codes \(\mathcal {C}_{i}\), for \(1 \le i \le 25\). For each code \(\mathcal {C}_{i}\), the corresponding bias \(\epsilon \) is determined by the Hamming weight \(w_{\mathcal {C}_{i}}\) of the associated subvector of the secret (as shown in Table 2). In Sect. 4.1, we set the bound for the bias \(\epsilon _{set}\) to be \(2^{-6}\) and then obtain the success probability (Footnote 14) in (12).
Table 2 The bias from a [3,1] repetition code
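The body of Table 2 did not survive extraction, but under the optimal-covering assumption its entries can be recomputed from Proposition 1, reusing bias() from the earlier sketch (the published table may display the magnitudes as powers of two):

```python
# A [3,1] repetition code has length 3 and covering radius 1; the bias
# conditioned on the weight w of the associated 3-bit subvector is:
for w in range(4):
    print(w, bias(w, 3, 1))
# w = 0 -> 1.0, w = 1 -> 0.5, w = 2 -> 0.0, w = 3 -> -0.5
```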
We now present numerical results of the new algorithm attacking three key \( \textsc {LPN} \) instances, as shown in Table 3. All three aim at 80-bit security. The first instance has parameter (\(512,\frac{1}{8}\)), widely accepted in various \( \textsc {LPN} \)-based cryptosystems (e.g., \(\hbox {HB}^+\) [22], \(\hbox {HB}^{\#}\) [13], \( \textsc {LPN-C} \) [14]) after the suggestion from Levieil and Fouque [29]; the second has increased length (\(532,\frac{1}{8}\)), adopted as the parameter of the irreducible \( \textsc {Ring-LPN} \) instance employed in Lapin [20]; and the last is a new design parameter (Footnote 15) that we recommend for future use. The attacking details on the different protocols are given later. We note that the new algorithm is significant not only for the above applications but also for some \( \textsc {LPN} \)-based cryptosystems without explicit parameter settings (e.g., [10, 24]).
Table 3 The complexity for solving different \( \textsc {LPN} \) instances in the LF2 setting
\(\hbox {HB}^+\)
Levieil and Fouque [29] proposed an active attack on \(\hbox {HB}^+\) by choosing the random vector \(\mathbf {a}\) from the reader to be \(\mathbf {0}\). To achieve 80-bit security, they suggested adjusting the lengths of the secret keys to 80 and 512, respectively, instead of both being 224. Its security is based on the assumption that the \( \textsc {LPN} \) instance with parameter \((512,\frac{1}{8})\) can resist attacks using \(2^{80}\) bit operations. But we solve this instance in \(2^{79.64}\) bit operations, indicating that the old parameters are insufficient to achieve 80-bit security.
LPN-C and \(\hbox {HB}^{\#}\)
Using similar structures, Gilbert et al. proposed two different cryptosystems, one for authentication (\(\hbox {HB}^{\#}\)) and the other for encryption (\( \textsc {LPN-C} \)). By setting the random vector from the reader and the message vector both to \(\mathbf {0}\), we obtain an active attack on the \(\hbox {HB}^{\#}\) authentication protocol and a chosen-plaintext attack on \( \textsc {LPN-C} \), respectively. As these protocols come in both a secure version (\( \textsc {random-HB} ^{\#}\) and \( \textsc {LPN-C} \)) and an efficient version (\(\hbox {HB}^{\#}\) and Toeplitz \( \textsc {LPN-C} \)), we need to analyze them separately.
Using Toeplitz Matrices
A Toeplitz matrix is a matrix in which each ascending diagonal from left to right is constant. Thus, when employing a Toeplitz matrix as the secret, once we have recovered its first column, only one bit in its second column remains unknown; determining it amounts to solving a new \( \textsc {LPN} \) instance with parameter \((1, \frac{1}{8})\). We then deduce the third column, the fourth column, and so forth (see the sketch below). The typical settings for the number of columns (denoted by m) are 441 for \(\hbox {HB}^{\#}\), and 80 (or 160) for Toeplitz \( \textsc {LPN-C} \). In either case, the cost of determining all columns other than the first is bounded by \(2^{40}\), negligible compared with that of attacking one \((512, \frac{1}{8})\) \( \textsc {LPN} \) instance. Therefore, to achieve 80-bit security, these efficient versions that use Toeplitz matrices should employ a larger LPN instance.
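The column recurrence exploited here is easy to make explicit. A toy sketch with hypothetical values (not taken from any actual attack run): in a Toeplitz matrix, each column is the previous column shifted down by one position, so only the new top entry is unknown.

```python
def next_column(prev_col, new_top):
    """Column j+1 of a Toeplitz matrix, given column j and the one new entry."""
    return [new_top] + prev_col[:-1]

col1 = [1, 0, 1, 1]                  # recovered by the main attack (hypothetical)
col2 = next_column(col1, new_top=0)  # only new_top remains to be solved for
print(col2)                          # [0, 1, 0, 1]
```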
Random Matrix Case
If the secret matrix is chosen totally at random, then there is no simple connection between different columns to exploit. One strategy is to attack column by column, thereby deriving an algorithm whose complexity is that of attacking a \((512, \frac{1}{8})\) \( \textsc {LPN} \) instance multiplied by the number of columns. That is, if \(m = 441\), then the overall complexity is about \(2^{88.4}\). We may slightly improve the attack by exploiting the fact that the different columns share the same random vector in each round.
Lapin with an Irreducible Polynomial
Heyse et al. [20] use a \((532, \frac{1}{8})\) \( \textsc {Ring-LPN} \) instance with an irreducible polynomial (Footnote 16) to achieve 80-bit security. We show here that this parameter setting is not secure enough for Lapin to thwart attacks on the level of \(2^{80}\). Although the new attack on a \((532, \frac{1}{8})\) \( \textsc {LPN} \) instance requires approximately \(2^{82}\) bit operations, larger than \(2^{80}\), there are two key issues to consider:
\( \textsc {Ring-LPN} \) is believed to be no harder than the standard \( \textsc {LPN} \) problem. For the instance in Lapin using a quotient ring modulo the irreducible polynomial \(x^{532} + x +1\), it is possible to optimize the procedure by further taking advantage of the ring structure, thereby resulting in a more efficient attack than the generic one.
The definition of bit complexity used here poorly characterizes the actual computational hardness, as a computer can perform many bit operations in parallel in one clock cycle. We believe that a better definition would be a vectorized version, i.e., defining the "atomic" operation as the addition or multiplication of two 64 (or 128)-bit vectors. The refined definition is a counterpart of that used for the Advanced Encryption Standard (AES), where 80-bit security means resisting \(2^{80}\) AES encryptions, not just \(2^{80}\) bit operations. If we adopt this vectorized security definition, the considered Lapin instance falls far short of achieving 80-bit security.
We suggest increasing the size of the irreducible polynomial employed in Lapin to reach 80-bit security.
In this part, we present experimental results, using a \([46, 24]\) linear code that is a concatenation of two binary \([23, 12]\) Golay codes (Footnote 17) for the subspace hypothesis testing procedure.
Validation of Success Rates
Starting with \(2^{25.6}\) LPN samples, we run two groups of simulations with \(k\) equal to \(142\) and \(166\), respectively. The noise rate \(\eta \) is varied to achieve a reasonable success probability. We perform 4 BKW steps with size \(24\) for the former and include one more step for the latter. Moreover, we stick to the LF2 type reduction steps for better performance.
The comparison between the simulation results and their theoretical counterparts is shown in Table 4. The simulated values are obtained by running about \(200\) trials for each LPN instance. Meanwhile, as we always keep about \(2^{25.6}\) samples after each reduction step, the number of samples for the statistical testing procedure is also approximately \(2^{25.6}\). Thus, we can compute the theoretical success probabilities according to Proposition 1, Equations (39) and (51).
We conclude from Table 4 that the adopted theoretical estimation is conservative, as discussed in Sect. 6.1, since the simulation results are almost always better than the theoretical ones. On the other hand, the theoretical predictions are fairly close to our experimental results. This understanding is further consolidated in Fig. 4, which plots the success probability comparison for fine-grained choices of the noise rate \(\eta \) and more accurate simulated probabilities, i.e., we run \(1000\) trials for each LPN instance.
Table 4 Success probability in simulation vs. in theory
Fig. 4 Fine-grained success probability comparison
The Largest Instance
We solve the \((136, \frac{1}{4})\) LPN instance in \(12\) h on average using one thread of a server with an Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz and 256 GB of RAM. This solved instance is slightly larger than the \((135,\frac{1}{4})\) one reported in [12], which required \(13.84\) days of \(64\)-thread parallel computation using the Well-Pooled MMT algorithm and \(5.69\) days using the Hybrid algorithm (Footnote 18), on a server with 256 GB of RAM. Though it is tricky to compare implementations of different types of algorithms, our results support the view that the BKW variants are more efficient when the noise rate \(\eta \) is high.
In the implementation, we ask the LPN oracle for around \(2^{31.6}\) samples and then perform three LF2 type BKW steps with size \(30\). After this step, we have zeroed out \(90\) positions, and we perform the subspace hypothesis testing on the remaining \(46\) positions by employing a concatenation of two [23, 12] Golay codes. We run \(12\) trials in approximately \(48\) h and succeed 4 times.
More on the Covering-Coding Method
In this section, we describe more aspects of the covering-coding technique, thus emphasizing the most novel and essential step in the new algorithm.
Sphere-Covering Bound
We use the sphere-covering bound to estimate the bias \(\epsilon '\) contributed by the new technique, for two reasons. Firstly, there is a well-known conjecture [7] in coding theory that the covering density approaches 1 asymptotically as the code length goes to infinity. Thus, it is sensible to assume that the linear code has a good covering radius when the code length \(k''\) is relatively large. Secondly, we saw from the previous example that the desired key feature is a linear code with low average error weight, which is smaller than its covering radius. From this perspective, the covering bound gives a good estimate. A sketch of the bound's computation follows.
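The sketch below reproduces the expected-distance estimate described in Footnote 11: d is the smallest integer whose Hamming ball covers a \(2^{-l}\) fraction of the space. The function name is ours, and the example call uses the [46, 24] code from the experimental section.

```python
from math import comb

def sphere_covering_radius(kpp, l):
    """Smallest d with |B_2(kpp, d)| >= 2^(kpp - l), the sphere-covering
    estimate for a [kpp, l] linear code (cf. Footnote 11)."""
    d, total, target = 0, 1, 2 ** (kpp - l)
    while total < target:
        d += 1
        total += comb(kpp, d)
    return d

print(sphere_covering_radius(46, 24))  # -> 6
```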
Attacking Public-Key Cryptography
We know various decodable covering codes that could be employed in the new algorithm, e.g., table-based syndrome decodable linear codes, concatenated codes built on Hamming codes, Golay codes, repetition codes, etc. For the cryptographic schemes targeted in this paper, i.e., the HB variants, LPN-C, and Lapin with an irreducible polynomial, the first three constructions are efficient. In the realm of public-key cryptography (e.g., the schemes proposed by Alekhnovich [2], Damgård and Park [9], and Duc and Vaudenay [11]), the situation changes: the security of these systems is based on \( \textsc {LPN} \) instances with huge secret length (tens of thousands of bits) and extremely low error probability (less than half a percent). Due to the competitive average weight of the error vector shown by the previous example in Sect. 4.1, a concatenation of repetition codes with much lower rate seems more applicable; with low-rate codes, we remove more bits when using the covering-coding method.
Alternative Collision Procedure
Although the covering-coding method is employed only once in the new algorithm, we can derive numerous variants, and among them one may find a more efficient attack. For example, we could replace several steps in the later stage of the collision procedure by adding together pairs of vectors that decode to the same codeword. This alternative technique is similar to the one invented by Lamberger et al. [27, 28] for finding near-collisions of hash functions. By this procedure, we could eliminate more bits in one step at the cost of increasing the error rate; this is a trade-off, and the concrete parameter setting should be analyzed more thoroughly later.
Actually, with the help of this alternative collision idea, a series of recent papers [1, 18, 19, 26] have greatly reduced the complexity of solving the LWE problem, the \(q\)-ary counterpart of LPN, both asymptotically and concretely. But we failed to find better attacks when applying this idea to the LPN instances of cryptographic interest in the proposed authentication protocols and LPN-C, since the noise rates are high. We believe that this idea could be useful when the noise is relatively small and leave it as an interesting direction for future research.
In this paper, we have described a new algorithm for solving the \( \textsc {LPN} \) problem that employs an approximation technique using covering codes, together with a subspace hypothesis testing technique to determine the values of linear combinations of the secret bits. Complexity estimates show that the algorithm beats all previous approaches, and in particular, we can present academic attacks on instances of LPN that have been suggested in different cryptographic primitives.
There are a few obvious improvements for this new technique, one being the use of strong distinguishers and another one being the use of more powerful constructions of good codes. There are also various modified versions that need to be further investigated. One such idea as described in Sect. 9.3 is to use the new technique inside a BKW step, thereby removing more bits in each step at the expense of introducing another contribution to the bias. An interesting open problem is whether these ideas can improve the asymptotic behavior of the BKW algorithm.
Footnote 1: For a fixed error rate.
Footnote 2: The Bernstein–Lange algorithm was originally proposed for Ring-LPN, and by a slight modification [4], one can also apply it to LPN instances. This modified algorithm shares several beginning steps (i.e., the steps of Gaussian elimination and the collision procedure) with the new algorithm, so we use the same implementation of these steps when computing their complexity, for a fair comparison.
Footnote 3: One critical assumption for the LF1/LF2 variants is that the samples are independent after several reduction steps. This assumption has been verified in [29] and also during our experiments.
Footnote 4: There are various approaches to estimate the bias introduced by coding. As the main goal in this section is to illustrate the gist of the new idea, we adopt the most straightforward one, i.e., the one that assumes the variables, each representing the noise in one position of the error vector, are independent. In a later section, when computing the algorithm complexity, a more accurate value is obtained by calculating the bias numerically (see Proposition 1).
Footnote 5: In the sequel, we denote this code construction as a concatenated repetition code. For this [75, 25, 3] linear code, the covering radius is 25, but we can see from this example that what matters is the average weight of the error vector, which is much smaller than 25.
Footnote 6: This explains why we need a more rigorous analysis. If we were to assume that the noise variables in the error vector are independent, the success probability would be about \(0.37\). This estimation is too optimistic, since if two of the errors in \(\mathbf {x}'\) fall into the same code block, the resulting zero bias totally ruins the statistical distinguishing procedure. We use a more accurate estimation in (12), which is further illustrated in Example 1.
Footnote 7: Adopting the same method to implement their overlapping steps, for the \((160,\frac{1}{10})\) \( \textsc {LPN} \) instance, the Bernstein–Lange algorithm and the new algorithm cost \(2^{39.43}\) and \(2^{35.50}\) bit operations, respectively. Thus, the latter offers an improvement by a factor of roughly 16 for solving this small-scale instance.
Footnote 8: This was first pointed out in [29].
Footnote 9: This estimation follows results from linear cryptanalysis [8, 30]. In the proceedings version [16], we used a too optimistic estimation of the required number of samples, i.e., a constant factor before the term \(\frac{1}{\epsilon ^2}\). This rough estimation also appears in some previous work.
Footnote 10: The accurate value of m should be \(n-k'- t2^b\) in the LF1 setting and \(n-k'\) in the LF2 setting. We take this approximation as \(k'\) is negligible compared with n.
Footnote 11: In the sequel, we assume that when the code length is relatively large, it is reasonable to approximate a perfect code by a random linear code. We replace the covering radius by the sphere-covering bound to estimate the expected distance d, i.e., d is the smallest integer such that \(\sum ^{d}_{i=0}\binom{k''}{i} \ge 2^{k'' - l}\). We give more explanation in Sect. 9.
Footnote 12: We would like to thank Sonia Bogos and Serge Vaudenay for pointing out this accurate bias computation.
Footnote 13: We can make this cost negligible compared with the total complexity.
Footnote 14: When calculating the success probability in (12), we ignore the probability that a nonzero even number of concatenations have \(w_{\mathcal {C}_{i}}=3\), since such events are rare.
Footnote 15: This instance requires \(2^{81}\) bits of memory using the new algorithm and can withstand all existing attacks on the security level of \(2^{80}\) bit operations.
Footnote 16: The Lapin instantiation with a reducible polynomial, designed for 80-bit security, has been broken within about \(2^{71}\) bit operations in [17].
Footnote 17: Binary \([23, 12]\) Golay codes are perfect codes with an optimal covering property. The concatenation of two Golay codes produces a larger linear code with fairly good covering properties and efficient decoding. Moreover, the implementation of Golay codes is simple and well studied.
Footnote 18: The Well-Pooled MMT algorithm is an ISD variant, and the Hybrid algorithm combines solving ideas from ISD and BKW.
M.R. Albrecht, J.C. Faugère, R. Fitzpatrick, L. Perret, Lazy Modulus switching for the BKW algorithm on LWE, in H. Krawczyk, editor, Public-Key Cryptography—PKC 2014. Lecture Notes in Computer Science, vol. 8383 (Springer Berlin, 2014), pp. 429–445
M. Alekhnovich, More on average case versus approximation complexity, in FOCS (IEEE Computer Society, 2003), pp. 298–307
A. Blum, A. Kalai, H. Wasserman, Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50(4), 506–519 (2003)
D. Bernstein, T. Lange, Never trust a bunny, in Radio Frequency Identification Security and Privacy Issues (Springer, Berlin, 2013), pp. 137–148
S. Bogos, F. Tramer, S. Vaudenay, On Solving LPN using BKW and Variants. Tech. rep., Cryptology ePrint Archive, Report 2015/049 (2015)
S. Bogos, S. Vaudenay, Optimization of LPN solving algorithms, in Advances in Cryptology—ASIACRYPT 2016: 22nd International Conference on the Theory and Application of Cryptology and Information Security, Hanoi, Vietnam, December 4–8, 2016, Proceedings, Part I (Springer, 2016), pp. 703–728
G. Cohen, I. Honkala, S. Litsyn, A. Lobstein, Covering Codes (Elsevier, Amsterdam, 1997)
T.M. Cover, J.A. Thomas, Elements of Information Theory (Wiley, New York, 2012)
I. Damgård, S. Park, Is Public-Key Encryption Based on LPN Practical? Cryptology ePrint Archive, Report 2012/699 (2012). http://eprint.iacr.org/
Y. Dodis, E. Kiltz, K. Pietrzak, D. Wichs, Message authentication, revisited, in D. Pointcheval, T. Johansson, editors, EUROCRYPT 2012. LNCS, vol. 7237 (Springer, Heidelberg, 2012), pp. 355–374
A. Duc, S. Vaudenay, HELEN: a public-key cryptosystem based on the LPN and the decisional minimal distance problems, in AFRICACRYPT 2013 (Springer, Berlin, 2013), pp. 107–126
A. Esser, R. Kübler, A. May, LPN decoded, in J. Katz, H. Shacham, editors, Advances in Cryptology—CRYPTO 2017—37th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 20–24, 2017, Proceedings, Part II. Lecture Notes in Computer Science, vol. 10402 (Springer, 2017), pp. 486–514
H. Gilbert, M.J.B. Robshaw, Y. Seurin, \(\text{HB}^{\#}\): Increasing the security and the efficiency of \(\text{ HB }^+\), in N.P. Smart, editors, EUROCRYPT 2008. LNCS, vol. 4965 (Springer, Heidelberg, 2008), pp. 361–378
H. Gilbert, M.J.B. Robshaw, Y. Seurin, How to encrypt with the LPN problem, in L. Aceto, I. Damgård, L.A. Goldberg, M.M. Halldorsson, A. Ingolfsdottir, I. Walukiewicz, editors, ICALP 2008, Part II. LNCS, vol. 5126 (Springer, Heidelberg, 2008), pp. 679–690
H. Gilbert, M.J.B. Robshaw, H. Sibert, An Active Attack Against \(\text{ HB }^+\)—A Provably Secure Lightweight Authentication Protocol. Cryptology ePrint Archive, Report 2005/237 (2005). http://eprint.iacr.org/
Q. Guo, T. Johansson, C. Löndahl, Solving LPN using covering codes, in Advances in Cryptology—ASIACRYPT 2014 (Springer, 2014), pp. 1–20
Q. Guo, T. Johansson, C. Löndahl, A new algorithm for solving ring-LPN with a reducible polynomial. IEEE Trans. Inf. Theory, 61(11), 6204–6212 (2015)
Q. Guo, T. Johansson, P. Stankovski, Coded-BKW: solving LWE using lattice codes, in Advances in Cryptology—CRYPTO 2015 (Springer, 2015), pp. 23–42
Q. Guo, T. Johansson, E. Mårtensson, P. Stankovski, Coded-BKW with sieving, in T. Takagi, T. Peyrin, editors, Advances in Cryptology—ASIACRYPT 2017—23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, December 3–7, 2017, Proceedings, Part I. Lecture Notes in Computer Science, vol. 10624 (Springer, 2017), pp. 323–346
S. Heyse, E. Kiltz, V. Lyubashevsky, C. Paar, K. Pietrzak, Lapin: an efficient authentication protocol based on ring-LPN, in FSE 2012 (2012), pp. 346–365
N.J. Hopper, M. Blum, Secure human identification protocols, in C. Boyd, editor, ASIACRYPT 2001. LNCS, vol. 2248 (Springer, Heidelberg, 2001), pp. 52–66
A. Juels, S.A. Weis, Authenticating pervasive devices with human protocols, in V. Shoup, editor, CRYPTO 2005. LNCS, vol. 3621 (Springer, Heidelberg, 2005), pp. 293–308
J. Katz, J.S. Shin, Parallel and concurrent security of the HB and \(\text{ HB }^+\) protocols, in S. Vaudenay, editor, EUROCRYPT 2006. LNCS, vol. 4004 (Springer, Heidelberg, 2006), pp. 73–87
E. Kiltz, K. Pietrzak, D. Cash, A. Jain, D. Venturi, Efficient authentication from hard learning problems, in K.G Paterson, editor, EUROCRYPT 2011. LNCS, vol. 6632 (Springer, Heidelberg, 2011) pp. 7–26
P. Kirchner, Improved Generalized Birthday Attack. Cryptology ePrint Archive, Report 2011/377 (2011). http://eprint.iacr.org/
P. Kirchner, P.A. Fouque, An improved BKW algorithm for LWE with applications to cryptography and lattices, in Advances in Cryptology—CRYPTO 2015 (Springer, 2015), pp. 43–62
M. Lamberger, F. Mendel, V. Rijmen, K. Simoens, Memoryless near-collisions via coding theory. Des. Codes Cryptogr. 62(1), 1–18 (2012)
M. Lamberger, E. Teufl, Memoryless near-collisions, revisited. Inf. Process. Lett. 113(3), 60–66 (2013)
E. Levieil, P.A. Fouque, An improved LPN algorithm, in Proceedings of SCN 2006. LNCS 4116 (Springer, Heidelberg, 2006), pp. 348–359
A.A. Selçuk, On probability of success in linear and differential cryptanalysis. J. Cryptol. 21(1), 131–147 (2008)
S. Vaudenay, private communication
B. Zhang, L. Jiao, M. Wang, Faster algorithms for solving LPN, in EUROCRYPT 2016 (Springer, 2016), pp. 168–195
Open access funding provided by Lund University. The authors would like to thank the anonymous ASIACRYPT 2014 reviewers for their helpful comments. They also would like to thank Sonia Bogos and Serge Vaudenay for their suggestions on analyzing the complexity more accurately. This work was supported in part by the Swedish Research Council (Grant No. 621-2012-4259 and No. 2015-04528). Qian Guo was also supported in part by the Erasmus Mundus Action 2 Scholarship, by the National Natural Science Foundations of China (Grant No. 61170208) and Shanghai Key Program of Basic Research (Grant No. 12JC1401400), and by the Norwegian Research Council (Grant No. 247742/070).
Qian Guo, Thomas Johansson & Carl Löndahl
Department of Electrical and Information Technology, Lund University, Lund, Sweden
Department of Informatics, Selmer Center, University of Bergen, Bergen, Norway
Correspondence to Qian Guo.
This paper is an extended version of [16] (https://doi.org/10.1007/978-3-662-45611-8_1). This paper was solicited by the Editors-in-Chief as the best paper from ASIACRYPT 2014, based on the recommendation of the program committee.
Communicated by Alon Rosen.
Guo, Q., Johansson, T. & Löndahl, C. Solving LPN Using Covering Codes. J Cryptol 33, 1–33 (2020) doi:10.1007/s00145-019-09338-8
Revised: 01 October 2019
Issue Date: January 2020
Keywords: Covering codes, LPN-C
Proceedings of the American Mathematical Society
Hyers-Ulam-Rassias stability of Jensen's equation and its application
by Soon-Mo Jung
Proc. Amer. Math. Soc. 126 (1998), 3137-3143
The Hyers-Ulam-Rassias stability for the Jensen functional equation is investigated, and the result is applied to the study of an asymptotic behavior of the additive mappings; more precisely, the following asymptotic property shall be proved: Let $X$ and $Y$ be a real normed space and a real Banach space, respectively. A mapping $f: X \rightarrow Y$ satisfying $f(0)=0$ is additive if and only if $\left \| 2f\left [ (x+y)/2 \right ] - f(x) - f(y) \right \| \rightarrow 0$ as $\| x \| + \| y \| \rightarrow \infty$.
Soon-Mo Jung
Affiliation: Mathematics Section, College of Science and Technology, Hong-Ik University, 339-800 Cochiwon, South Korea
Email: [email protected]
Received by editor(s): March 19, 1997
Communicated by: Palle E. T. Jorgensen
Journal: Proc. Amer. Math. Soc. 126 (1998), 3137-3143
MSC (1991): Primary 39B72
Harvard Mathematics Department 2013-Archive (newest date up)
Department of Mathematics FAS Harvard University One Oxford Street Cambridge MA 02138 USA Tel: (617) 495-2171 Fax: (617) 495-5132
CMSA SPECIAL SEMINAR Masaki Oshikawa (U TOKYO) Gauge invariance, polarization, and conductivity May 28, 2019, 10:30 am at CMSA, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR Jörn Dunkel (MIT) Symmetry breaking in active and quantum fluids May 22, 2019, 3:00 pm - 4:00 pm at CMSA, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR Tamer A. Zaki (JOHNS HOPKINS UNIVERSITY) The Onset of Chaos in Shear Flows May 15, 2019, 3:00 pm - 4:00 pm at CMSA, 20 Garden St, G10
CMSA METRIC LIMITS OF CALABI-YAU MANIFOLDS LECTURE SERIES Valentino Tosatti (NORTHWESTERN UNIVERSITY) A priori estimates May 10, 2019, 3:00 pm - 4:00 pm at CMSA, 20 Garden St, G10
CMSA TOPICAL ASPECTS OF CONDENSED MATTER SEMINAR Michael Zaletel (UC BERKELEY) Three-partite entanglement in CFTs and chiral topological orders May 09, 2019, 10:30 am at CMSA, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Igor Krylov (KOREA INSTITUTE FOR ADVANCED STUDY) Birational rigidity of low degree del Pezzo fibrations May 07, 2019, 3:00 pm at Science Center 507
CMSA SPECIAL TALK Ming Hu (PICOWER INSTITUTE FOR LEARNING AND MEMORY, MIT) The Retinotopic Representation of the Visual Field in Early Visual Cortex - A Geometric View May 06, 2019, 2:15 pm at CMSA, 20 Garden St, G10
CMSA MATHEMATICAL PHYSICS SEMINAR Dennis Borisov (AARHUS/CMSA) Global shifted potentials for -2-shifted symplectic structures May 06, 2019, 12 pm - 1 pm at CMSA, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR David Sondak (IACS, HARVARD UNIVERSITY) Towards Machine Learning for Cataloguing Optimal Solutions in Turbulent Convection May 01, 2019, 3:00 pm - 4:00 pm at CMSA, 20 Garden St, G02
NUMBER THEORY SEMINAR Romyar Sharifi (UCLA) Iwasawa modules in higher codimension May 01, 2019, 3:00 - 4:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Shawn Xingshan Cui (VIRGINIA POLYTECHNIC INSTITUTE AND STATE UNIVERSITY) Four dimensional topological quantum field theories from G-crossed braided categories April 30, 2019, 3:00 pm at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Adam Jacob (UC DAVIS) Adiabatic limits of Yang-Mills connections on collapsing K3 surfaces April 30, 2019, 5:45 pm at CMSA, 20 Garden St, G10
OPEN NEIGHBORHOOD SEMINAR Alissa Crans (LOYOLA MARYMOUNT) Matrices, Reflections, and Knots, oh my! April 29, 2019, 4:30 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Zili Zhang (UNIVERSITY OF MICHIGAN) P=W, a strange identity for Dynkin diagrams April 29, 2019, 12:00 pm - 1:00 pm at CMSA, 20 Garden St, G02
CMSA GENERAL RELATIVITY SEMINAR Armando Cabrera Pacheco (UNIVERSITäT TüBINGEN) Asymptotically flat extensions with charge April 26, 2019, 2:30-3:30 pm at CMSA, 20 Garden Street, G02
CMSA SPECIAL SEMINAR Maissam Barkeshli (UNIVERSITY OF MARYLAND) Relative anomalies in (2+1)D symmetry enriched topological states April 26, 2019, 10:30 am at CMSA, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Jennifer Hom (GEORGIA TECH) Heegaard Floer and homology cobordism April 26, 2019, 3:30 pm at Science Center 507
CMSA SPECIAL SEMINAR Zhouli Xu (MIT) The intersection form of spin 4-manifolds and Pin(2)-equivariant Mahowald invariants April 26, 2019, 1:30 pm-2:30 pm at CMSA, 20 Garden St, G10
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Benjamin Fehrman (OXFORD UNIVERSITY) Pathwise well-posedness of nonlinear diffusion equations with nonlinear, conservative noise April 25, 2019, 4:30 pm- 5:30 pm at CMSA, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR Neel Patel (UNIVERSITY OF MICHIGAN) Local Existence and Blow-Up for SQG Patches April 25, 2019, 12 pm - 1 pm at CMSA, 20 Garden St, G10
TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Michael Freedman (MICROSOFT STATION Q) Quantum cellular automata in higher dimensions April 24, 2019, 10:30 am at CMSA Building, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR Heng Xiao (VIRGINIA TECH) Turbulence Modeling in the Age of Data: From Data Assimilation to Machine Learning April 24, 2019, 3:00 - 4:00 pm at CMSA, 20 Garden St, G10
CMSA COLLOQUIUM Shengwu Li (HARVARD UNIVERSITY) Credible Mechanisms April 24, 2019, 4:30 pm- 5:30 pm at CMSA, 20 Garden St, G10
NUMBER THEORY SEMINAR Yiannis Sakellaridis (RUTGERS) A new paradigm for the comparison of trace formulas April 24, 2019, 3:00 - 4:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Yi-Hong Zhang (TSINGHUA UNIVERSITY) Quantum many-body computation on a small quantum computer April 23, 2019, 3:00 pm at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Kalina Mincheva (YALE UNIVERSITY) Tropical Algebra April 23, 2019, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Yu-Wei Fan (HARVARD UNIVERSITY) Systolic inequality for K3 surfaces via stability conditions April 23, 2019, 4:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Yang Zhou (CMSA) Quasimap wall-crossing for GIT quotients April 22, 2019, 12:00 pm - 1:00 pm at CMSA, 20 Garden St, G10
LOGIC SEMINAR Sebastien Vasey (HARVARD UNIVERSITY) Weak factorization systems and stable independence April 22, 2019, 5:40 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Lydia Bieri (UNIVERSITY OF MICHIGAN) Logarithmic or Not Logarithmic April 19, 2019, 9:30 am - 10:30 am at CMSA, 20 Garden St, G02
CMSA METRIC LIMITS OF CALABI-YAU MANIFOLDS LECTURE SERIES Valentino Tosatti (NORTHWESTERN UNIVERSITY) Noncollapsing degenerations II April 19, 2019, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Aliakbar Daemi (SIMONS CENTER) Exotic Structures, Homology Cobordisms and Chern-Simons Functional April 19, 2019, 3:30 pm at Science Center 507
GENDER INCLUSIVITY IN MATHEMATICS SEMINAR Gelonia Dent (BROWN UNIVERSITY) Seeing Myself in Science: How Identity and Representation Impact Retention in STEM April 18, 2019, 3:30 pm at Emerson 105
CMSA FLUID DYNAMICS SEMINAR Maziar Raissi (NVIDIA) Hidden Physics Models: Machine Learning of Non-Linear Partial Differential Equations April 17, 2019, 3:00 - 4:00 pm at CMSA, 20 Garden Street, G02
NUMBER THEORY SEMINAR Brian Smithling (JOHNS HOPKINS UNIVERSITY) On Shimura varieties for unitary groups April 17, 2019, 3:00 pm at Science Center 507
CMSA COLLOQUIUM Yi-Zhuang You (UCSD) Machine Learning Physics: From Quantum Mechanics to Holographic Geometry April 17, 2019, 4:30 pm- 5:30 pm at CMSA, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Connor Mooney (UC IRVINE) Minimizers of convex functionals with small degeneracy set April 16, 2019, 4:00 pm at Science Center 507
COLLOQUIUM Xinwen Zhu (CALTECH) Hilbert's twenty-first problem for p-adic varieties April 16, 2019, 3:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Emil Prodan (YESHIVA UNIVERSITY) Pushing index theorems into the Sobolev April 16, 2019, 3:00 pm at Jefferson 356
LOGIC SEMINAR Rebecca Coulson (WEST POINT) Bipartite Metrically Homogeneous Graphs of Generic Type: Their Generic Theories and the Almost Sure Theories of their Ages April 15, 2019, 5:40 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Junliang Shen (MIT) Perverse sheaves in hyper-Kähler geometry April 15, 2019, 12:00 - 1:00 pm at CMSA, 20 Garden Street, G02
OPEN NEIGHBORHOOD SEMINAR Lauren Williams (HARVARD UNIVERSITY) Combinatorics of shallow water waves (via the KP hierarchy) April 15, 2019, 4:30 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Joel Fish (UMASS BOSTON) Feral pseudoholomorphic curves and minimal sets April 12, 2019, 3:30 pm at Science Center 507
HARVARD-MIT-MSR COMBINATORICS SEMINAR Carly Klivans (BROWN UNIVERSITY) Flow-firing processes April 12, 2019, 4:45 pm- 5:45 pm at Science Center 507
CMSA METRIC LIMITS OF CALABI-YAU MANIFOLDS Valentino Tosatti (NORTHWESTERN UNIVERSITY) Noncollapsing degenerations April 12, 2019, 3:00 pm at CMSA, 20 Garden St, G10
THURSDAY SEMINAR Sander Kupers and Mike Hopkins (HARVARD UNIVERSITY) Discussion of two points April 11, 2019, 3:00 - 5:00 pm at Science Center 507
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Rui Han (GEORGIA TECH) Spectral gaps in graphene structures April 11, 2019, 4:30 pm at CMSA, 20 Garden St, G10
CMSA GENERAL RELATIVITY SEMINAR Amir Babak Aazami (CLARK UNIVERSITY) Kähler metrics via Lorentzian geometry in dimension 4 April 11, 2019, 3:00 - 4:00 pm at Science Center 411
GENDER INCLUSIVITY IN MATHEMATICS SEMINAR Genevieve Walsh (TUFTS UNIVERSITY) Geometric Topology and Group Theory April 11, 2019, 4:30 pm - 6 pm at Science Center Hall C
CMSA SPECIAL SEMINAR Juven Wang (IAS) Quantum 4d Yang-Mills and Time-Reversal Symmetric 5d TQFT: New Higher Anomalies to Anyonic-String/Brane Induced Topological Link Invariants April 10, 2019, 10:30 am at CMSA, 20 Garden St, G10
FLUID DYNAMICS SEMINAR Luc Deike (PRINCETON UNIVERSITY) Wave breaking in ocean atmosphere interactions April 10, 2019, 3:00 pm at Science Center 530
NUMBER THEORY SEMINAR Naser Talebi Zadeh (UNIVERSITY OF WISCONSIN) The Siegel variance formula for quadratic forms April 10, 2019, 3:00 PM at Science Center 507
COLLOQUIUM Pietro Veronesi (UNIVERSITY OF CHICAGO) Inequality Aversion, Populism, and the Backlash Against Globalization April 10, 2019, 2:30-3:30 pm at CMSA, 20 Garden Street, G10
MATHEMATICAL PICTURE LANGUAGE SEMINAR Alina Vdovina (UNIVERSITY OF NEWCASTLE, UK) Higher dimensional picture languages April 09, 2019, 4:30 pm at Jefferson 356
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Giulio Biroli (ENS) Large deviations for the largest eigenvalues and eigenvectors of spiked random matrices April 09, 2019, 12:00 - 1:00 pm at CMSA, 20 Garden Street, G02
MATHEMATICAL PICTURE LANGUAGE SEMINAR Claus Koestler (UNIVERSITY COLLEGE CORK, IRELAND) Markovianity as a distributional system April 09, 2019, 3:00 pm at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Spiro Karigiannis (UNIVERSITY OF WATERLOO) A curious system of second order nonlinear PDEs for U(m)-structures on manifolds April 09, 2019, 3:00 pm at Science Center 232
CMSA MATH PHYSICS SEMINAR Yoosik Kim (BOSTON UNIVERSITY) String polytopes and Gelfand-Cetlin polytopes April 08, 2019, 12:00 - 1:00 pm at CMSA, 20 Garden St, G10
TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Adam Nahum (OXFORD UNIVERSITY) Emergent statistical mechanics of entanglement in random unitary circuits April 08, 2019, 10:00 am at CMSA, 20 Garden St, G10
LOGIC SEMINAR Douglas Blue (HARVARD UNIVERSITY) Equivalence relations and effective cardinality April 08, 2019, 5:40 pm at Science Center 507
COLLOQUIUM Laura DeMarco (NORTHWESTERN UNIVERSITY) Complex dynamics and arithmetic equidistribution April 08, 2019, 3:00 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Melissa Liu (COLUMBIA UNIVERSITY) The Yang-Mills Equations over Klein Surfaces April 05, 2019, 3:30 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Marcus Khuri (STONY BROOK UNIVERSITY) Stationary Vacuum Black Holes in Higher Dimensions April 04, 2019, 3:00 - 4:00 pm at CMSA, 20 Garden Street, G02
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Paul Bourgade (NYU) Log-correlations and branching structures in analytic number theory April 04, 2019, 4:30 pm- 5:30 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Avner Ash (BOSTON COLLEGE) Resolutions of the Steinberg module for GL(n) April 04, 2019, 3:00 pm at Science Center Hall E *note different day and location*
THURSDAY SEMINAR Ben Knudsen (HARVARD UNIVERSITY) Toward the immersion conjecture April 04, 2019, 3:00 - 5:00 pm at Science Center 507
CMSA COLLOQUIUM Sarah Moshary (UNIVERSITY OF CHICAGO) Deregulation through Direct Democracy: Lessons from Liquor April 03, 2019, 2:30-3:30 pm at CMSA, 20 Garden Street, G10
CMSA FLUID DYNAMICS SEMINAR Christopher Rycroft (HARVARD UNIVERSITY) The reference map technique for simulating complex materials and multi-body interactions April 03, 2019, 4:00 pm at CMSA, 20 Garden Street, G10
COLLOQUIUM Christopher Hacon (UNIVERSITY OF UTAH) On the geometry of Algebraic varieties April 03, 2019, 3:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Magdalena Musat (UNIVERSITY OF COPENHAGEN) Quantum Correlations, Factorizable Channels, and the Connes Embedding Problem April 02, 2019, 4:00 pm at Jefferson 356
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY Artan Sheshmani (CMSA) Examples and applications 2 (Part II) April 02, 2019, 3:00 - 4:30 pm at CMSA, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Ao Sun (MIT) Local Entropy and Generic Multiplicity One Singularities of Mean Curvature Flow of Surfaces April 02, 2019, 4:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Mikael Rordam (UNIVERSITY OF COPENHAGEN) Highlights of the Classification of Simple Nuclear C*-Algebras April 02, 2019, 3:00 pm at Jefferson 356
LOGIC SEMINAR Elliot Glazer (HARVARD UNIVERSITY AND ZHEJIANG UNIVERSITY) Set theory with randomness April 01, 2019, 5:40 pm at Science Center 507
OPEN NEIGHBORHOOD SEMINAR John Mackey (CARNEGIE MELLON UNIVERSITY) Tournaments having the most cycles April 01, 2019, 4:30 pm at Science Center 507
CMSA MATH PHYSICS SEMINAR Athanassios S. Fokas (UNIVERSITY OF CAMBRIDGE) Asymptotics: the unified transform, a new approach to the Lindelöf Hypothesis, and the ultra-relativistic limit of the Minkowskian approximation of general relativity April 01, 2019, 12:00 - 1:00 pm at CMSA, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Chris Scaduto (SIMONS CENTER) Instantons and lattices of smooth 4-manifolds with boundary March 29, 2019, 3:30 pm at Science Center 222
GENDER INCLUSIVITY IN MATHEMATICS SEMINAR Eunice Kim (TUFTS UNIVERSITY) Dynamical Systems and Physical Phenomena March 28, 2019, 4:30 pm - 6 pm at Science Center Hall E
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Xuwen Chen (UNIVERSITY OF ROCHESTER) The Derivation of the Energy-critical NLS from Quantum Many-body Dynamics March 28, 2019, 4:30 - 5:30 pm at CMSA, 20 Garden St, G02
CMSA COLLOQUIUM Tatyana Sharpee (SALK INSTITUTE FOR BIOLOGICAL STUDIES) Hyperbolic geometry of the olfactory space March 27, 2019, 5:15 pm - 6:15 pm at CMSA, 20 Garden St, G10
NUMBER THEORY SEMINAR Kai-Wen Lan (UNIVERSITY OF MINNESOTA, TWIN CITIES) Local systems of Shimura varieties: a comparison of two constructions March 27, 2019, 3:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Jinsong Wu (HARBIN INSTITUTE OF TECHNOLOGY, CHINA) Fourier analysis on fusion rings March 26, 2019, 3:00 pm at Jefferson 356
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY Artan Sheshmani (CMSA) Examples and applications 2 (Part II) March 26, 2019, 3:00 - 4:30 pm at CMSA, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Changho Han (HARVARD UNIVERSITY) 'Almost K3' stable log surfaces and curves of genus 4 March 26, 2019, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Tian-Jun Li (MINNESOTA) Geometry of symplectic log Calabi-Yau surfaces March 26, 2019, 4:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Eduardo Gonzalez (UNIVERSITY OF MASSACHUSETTS BOSTON) Stratifications in gauged Gromov-Witten theory March 25, 2019, 12:00 - 1:00 pm at CMSA, 20 Garden St, G10
LOGIC SEMINAR Justin Cavitt (HARVARD UNIVERSITY) Large Cardinals and Simpler Proofs March 25, 2019, 5:40 pm at Science Center 507
HARVARD - MIT - MSR COMBINATORICS SEMINAR Jim Haglund (UNIVERSITY OF PENNSYLVANIA) Three faces of the Delta Conjecture March 22, 2019, 4:15 pm at Science Center 507
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY Artan Sheshmani (CMSA) Examples and applications 2 (Part I) March 21, 2019, 3:00 - 4:30 pm at CMSA, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR Paris Perdikaris (UNIVERSITY OF PENNSYLVANIA) Data-driven modeling of stochastic systems using physics-aware deep learning March 20, 2019, 3:00 PM - 4:00 PM at CMSA, 20 Garden Street, G10
CMSA COLLOQUIUM Sonia Jaffe (MICROSOFT) Quality Externalities on Platforms: The Case of Airbnb March 20, 2019, 4:30 - 5:30 pm at CMSA, 20 Garden St, G10
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY Artan Sheshmani (CMSA) Intersections of Lagrangians March 19, 2019, 3:00 - 4:30 pm at CMSA, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Emmy Murphy (NORTHWESTERN UNIVERSITY) Inductively collapsing Fukaya categories and flexibility March 15, 2019, 3:30 pm at Science Center 507
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Anna Vershynina (UNIVERSITY OF HOUSTON) How fast can entanglement be generated in quantum systems? March 14, 2019, 5:45 pm at Science Center 232 *note different location*
CMSA GENERAL RELATIVITY SEMINAR Peter Hintz (MIT) Stability of Minkowski space and polyhomogeneity of the metric March 14, 2019, 3:30 pm at Science Center 411
BRANDEIS-HARVARD-MIT-NORTHEASTERN JOINT COLLOQUIUM Emmy Murphy (NORTHWESTERN UNIVERSITY) Flexibility in contact and symplectic geometry March 14, 2019, Tea at 4:00 pm; Talk at 4:30 pm at Science Center Hall A
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY Artan Sheshmani (CMSA) Lagrangians and Lagrangian fibrations March 14, 2019, 3:00 - 4:30 pm at CMSA, 20 Garden St, G02 *note different room*
NUMBER THEORY SEMINAR Samit Dasgupta (DUKE UNIVERSITY) On Brumer-Stark units March 13, 2019, 3:00 pm at Science Center 507
CMSA COLLOQUIUM Greg Galloway (UNIVERSITY OF MIAMI) On the geometry and topology of initial data sets in General Relativity March 13, 2019, 5:15 PM - 6:15 PM at CMSA, 20 Garden Street, G10
CMSA SPECIAL SEMINAR Albrecht Klemm (UNIVERSITY OF BONN) D-brane masses and the motivic Hodge conjecture March 13, 2019, 12:15 - 1:15 pm at CMSA, 20 Garden St, G10
MATHEMATICAL PICTURE LANGUAGE SEMINAR Youwei Zhao (UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA) A review of the surface code from an experimental perspective March 12, 2019, 3:00 pm at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Yaiza Canzani (UNC) Understanding the growth of Laplace eigenfunctions March 12, 2019, 4:00 PM at Science Center 507
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY Artan Sheshmani (CMSA) Lagrangians and Lagrangian fibrations March 12, 2019, 3:00 - 4:30 pm at CMSA, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Jake Levinson (UNIVERSITY OF WASHINGTON) Boij-Söderberg Theory for Grassmannians March 12, 2019, 3:00 pm at Science Center 507
OPEN NEIGHBORHOOD SEMINAR Bill Dunham (HARVARD UNIVERSITY) A Recipe for π March 11, 2019, 4:30 pm at Science Center 507
CMSA SPECIAL SEMINAR Juven Wang (IAS) New Higher Anomalies and 3+1D Quantum Matter: From 4d Yang-Mills Gauge Theory to 2d CP^N Sigma Model March 11, 2019, 4:15 pm at CMSA, 20 Garden St, G10
LOGIC SEMINAR Cameron Freer (MIT) Computability of exchangeable sequences, arrays, and graphs March 11, 2019, 5:40 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Yu Pan (MIT) Augmentations and exact Lagrangian cobordisms. March 11, 2019, 12:00 PM - 1:00 PM at CMSA, 20 Garden Street, G10
SOCIAL SCIENCE APPLICATIONS FORUM Pietro Bonaldi (CARNEGIE MELLON UNIVERSITY) Synthetic Regression Discontinuity - Causal Identification with Machine Learning March 11, 2019, 2:30 PM at CMSA, 20 Garden Street, G02
CMSA SOCIAL SCIENCE APPLICATIONS FORUM Wes Pegden (CARNEGIE MELLON) Developing and applying theorems for the rigorous detection of gerrymandering in the real-world March 11, 2019, 4:00 pm at CMSA, 20 Garden St, G02
COLLOQUIUM Bhargav Bhatt (UNIVERSITY OF MICHIGAN) Interpolating p-adic cohomology theories March 11, 2019, 3:00 PM at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Thomas Walpuski (MICHIGAN STATE UNIVERSITY) Super-rigidity and Castelnuovo's bound March 08, 2019, 3:30 PM at Science Center 507
CMSA METRIC LIMITS OF CALABI-YAU MANIFOLDS Valentino Tosatti (NORTHWESTERN UNIVERSITY) Proof of Yau's Theorem March 08, 2019, 3:00 - 4:00 pm at CMSA, 20 Garden St, G10
CMSA GENERAL RELATIVITY SEMINAR Laura Donnay (BLACK HOLE INITIATIVE) Carrollian physics at the black-hole horizon March 07, 2019, 3:00 PM - 4:00 PM at Science Center 411
SOCIAL SCIENCE APPLICATIONS FORUM Jura Liaukonyte (CORNELL UNIVERSITY) Background Noise? TV Advertising Affects Real Time Investor Behavior March 07, 2019, 2:30 PM at CMSA, 20 Garden Street, G02
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY: TUESDAYS & THURSDAYS BEGINNING FEBRUARY 5, 2019 Artan Sheshmani (CMSA) Lecture 4: Cotangent complexes March 07, 2019, 3:00 - 4:30 pm at CMSA Building, 20 Garden St, G10
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Ilya Kachkovskiy (MICHIGAN STATE) Localization and delocalization for interacting 1D quasiperiodic particles March 06, 2019, 4:15 pm at Science Center 411 *note unusual day and location*
CMSA COLLOQUIUM Philippe Sosoe (CORNELL UNIVERSITY) A sharp transition for Gibbs measures associated to the nonlinear Schrödinger equation March 06, 2019, 2:30-3:30 PM at CMSA, 20 Garden Street, G10
CMSA FLUID DYNAMICS SEMINAR Zhong Yi Wan (MIT) Machine learning the kinematics of spherical particles in fluid flows March 06, 2019, 3:00 -4:00 PM at CMSA, 20 Garden Street, G02
NUMBER THEORY SEMINAR Nicholas Triantafillou (MIT) The Method of Chabauty-Coleman-Skolem for Restrictions of Scalars March 06, 2019, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Tristan Collins (MIT) The Inverse Monge-Ampère flow March 05, 2019, 4:00 PM at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Kaifeng Bu (HARVARD UNIVERSITY AND ZHEJIANG UNIVERSITY) Efficient classical simulation of Clifford circuits with nonstabilizer input states March 05, 2019, 3:00 PM at Jefferson 356
COLLOQUIUM Mihnea Popa (NORTHWESTERN UNIVERSITY) D-modules in birational and complex geometry March 04, 2019, 3:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Zhenkun Li (MIT) Cobordism and gluing maps in sutured monopoles and applications March 04, 2019, 12:00 - 1:00 PM at CMSA, 20 Garden Street, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Peter Ozsváth (PRINCETON UNIVERSITY) Computing knot Floer homology March 01, 2019, 3:30 PM at Science Center 507
CMSA METRIC LIMITS OF CALABI-YAU MANIFOLDS LECTURE SERIES Valentino Tosatti (NORTHWESTERN UNIVERSITY) Introduction and Yau's Theorem March 01, 2019, 3:00 -4:00 PM at CMSA, 20 Garden St, G10
BRANDEIS-HARVARD-MIT-NORTHEASTERN JOINT COLLOQUIUM Robert Lazarsfeld (STONY BROOK) How irrational is an irrational variety? February 28, 2019, Tea at 4:00 pm; Talk at 4:30 pm at Science Center Hall A
CMSA COLLOQUIUM Ian Martin (LSE) Sentiment and Speculation in a Market with Heterogeneous Beliefs February 27, 2019, 2:30-3:30 PM at CMSA, 20 Garden Street, G10
NUMBER THEORY SEMINAR Charlotte Chan (PRINCETON UNIVERSITY) Affine Deligne--Lusztig varieties at infinite level for GLn February 27, 2019, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Antoine Song (PRINCETON UNIVERSITY) Existence of infinitely many minimal hypersurfaces in closed manifolds February 26, 2019, 4:00 PM at Science Center 507
COLLOQUIUM Melanie Wood (UNIVERSITY OF WISCONSIN) Random groups from generators and relations, and unramified extensions of global fields February 26, 2019, 3:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Fei Wei (HARVARD UNIVERSITY) Entropy of arithmetic functions and arithmetic compactifications February 26, 2019, 3:00 PM at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Ethan Cotterill (UNIVERSIDADE FEDERAL FLUMINENSE) Real inflection points of real linear series on real (hyper)elliptic curves (joint with I. Biswas and C. Garay López) February 26, 2019, 3:00 pm at Science Center B10 *note different location*
LOGIC SEMINAR Nathanael Ackerman (HARVARD UNIVERSITY) Entropy of Invariant Measures February 25, 2019, 5:40 PM at Science Center 507
CMSA SPECIAL SEMINAR Shinobu Hosono (GAKUSHUIN UNIVERSITY) Double cover family of K3 surfaces and mirror symmetry February 25, 2019, 1:00-2:00 PM at CMSA, 20 Garden Street, G10
CMSA MATH PHYSICS SEMINAR Hossein Movasati (IMPA) Modular vector fields February 25, 2019, 12:00 - 1:00 PM at CMSA, 20 Garden St, G10
OPEN NEIGHBORHOOD SEMINAR Joe Harris (HARVARD UNIVERSITY) Poncelet's theorem and the birth of modern algebraic geometry February 25, 2019, 4:30 PM at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR John Baldwin (BOSTON COLLEGE) Instanton L-space knots February 22, 2019, 3:30 PM at Science Center 507
SOCIAL SCIENCE APPLICATIONS FORUM Bobby Pakzad-Hurson (BROWN UNIVERSITY) Crowdsourcing and Optimal Market Design February 22, 2019, 3:00 PM at CMSA Building, 20 Garden St, G02
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Michael Loss (GEORGIA TECH) Some results for functionals of Aharonov-Bohm type February 21, 2019, 4:30 pm at CMSA, 20 Garden St, G10
CMSA GENERAL RELATIVITY SEMINAR Hsin-Yu Chen (BLACK HOLE INITIATIVE) Measuring the Hubble Constant with Gravitational Waves February 21, 2019, 3:00 -4:00 PM at Sci Center 411
COLLOQUIUM June Huh (PRINCETON UNIVERSITY) Lorentzian polynomials February 21, 2019, 3:00 PM at Science Center 507
THURSDAY SEMINAR Morgan Opie (HARVARD UNIVERSITY) Hopf algebras, Witt vectors, and Brown-Gitler spectra, ctd February 21, 2019, 3:00 -5:00 PM at Sci Center Hall E
CMSA COLLOQUIUM Michael Woodford (COLUMBIA UNIVERSITY) Optimally Imprecise Memory and Biased Forecasts February 20, 2019, 4:30 pm at CMSA, 20 Garden St, G10
CMSA FLUID DYNAMICS SEMINAR Xiaolin Wang (SEAS, HARVARD) The effect of piezoelectric material on the stability of flexible flags February 20, 2019, 3:00-4:00 PM at CMSA, 20 Garden St, G10
NUMBER THEORY SEMINAR Lillian Pierce (DUKE UNIVERSITY) Recent progress on understanding class groups of number fields February 20, 2019, 3:00 PM at Science Center 507
CMSA SPECIAL SEMINAR Quan Wen (HEFEI NATIONAL LABORATORY FOR PHYSICAL SCIENCES AT THE MICROSCALE) Motor Systems: Searching for Principles February 20, 2019, 1:15 PM at CMSA, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Valentino Tosatti (NORTHWESTERN UNIVERSITY) Ricci-flat metrics and dynamics on K3 surfaces February 19, 2019, 4:00 pm at Science Center 507
CMSA SPECIAL SEMINAR Feng Luo (RUTGERS UNIVERSITY) Volume and rigidity of hyperbolic polyhedral 3-manifolds February 19, 2019, 10:30 - 11:15 am at CMSA, 20 Garden St, G02
CMSA SPECIAL SEMINAR Xu Xu (WUHAN UNIVERSITY) Rigidity of sphere packing on triangulated 3-manifolds February 19, 2019, 11:15 am - 12:00 pm at CMSA, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Maksym Fedorchuk (BOSTON COLLEGE) Standard models of low degree del Pezzo fibrations February 19, 2019, 3:00 pm at MIT 2-147
HARVARD-MIT-MSR COMBINATORICS SEMINAR Sylvie Corteel (CNRS/UC BERKELEY) Arctic curves for bounded Lecture Hall Tableaux February 15, 2019, 3:30 pm at Science Center 530 (SC 232 - backup location)
CMSA SPECIAL ALGEBRAIC GEOMETRY SEMINAR Jin Cao (TSINGHUA UNIVERSITY) Elliptic motives and related topics February 15, 2019, 3:00 pm at CMSA, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Cagatay Kutluhan (BUFFALO) Can Floer-theoretic invariants detect overtwisted contact structures? February 15, 2019, 3:30 pm at Science Center 507
THURSDAY SEMINAR Morgan Opie (HARVARD UNIVERSITY) Hopf algebras, Witt vectors, and Brown-Gitler spectra February 14, 2019, 3:00 - 4:30 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Charles Marteau (ECOLE POLYTECHNIQUE) Null hypersurfaces and ultra-relativistic physics in gravity February 14, 2019, 3:00 pm at Science Center 411
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Nike Sun (MIT) Capacity lower bound for the Ising perceptron February 14, 2019, 4:30 pm at CMSA, 20 Garden St, G10
BRANDEIS-HARVARD-MIT-NORTHEASTERN JOINT COLLOQUIUM Allen Knutson (CORNELL UNIVERSITY) Schubert calculus and quiver varieties February 14, 2019, Tea at 4:00 pm; Talk at 4:30 pm at Science Center Hall A
NUMBER THEORY SEMINAR Preston Wake (INSTITUTE FOR ADVANCED STUDY) Variation of Iwasawa invariants in residually reducible Hida families February 13, 2019, 3:00 pm at Science Center 507
CMSA COLLOQUIUM Christian Santangelo (UNIVERSITY OF MASSACHUSETTS AT AMHERST) 4D printing with folding forms February 13, 2019, 4:30 pm at CMSA, 20 Garden St, G10
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Foliations and Hodge loci February 13, 2019, 1:30 - 3:00 pm at CMSA, 20 Garden St, G10
CMSA SPECIAL ALGEBRAIC GEOMETRY SEMINAR Qingyuan Jiang (IAS) Categorical duality between joins and intersections February 13, 2019, 3:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Mitchell Faulk (COLUMBIA UNIVERSITY) Yau's theorem on Asymptotically Conical manifolds with prescribed decay rates February 12, 2019, 4:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Liang Kong (SHENZHEN INSTITUTE FOR QUANTUM SCIENCE AND ENGINEERING AND SOUTHERN UNIVERSITY OF SCIENCE AND TECHNOLOGY) A unified mathematical theory of gapped and gapless boundaries of 2d topological orders February 12, 2019, 3:00 pm at Jefferson 356
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY: TUESDAYS & THURSDAYS BEGINNING FEBRUARY 5, 2019 Artan Sheshmani (CMSA) Lecture 3: Derived Artin stacks February 12, 2019, 3:00 - 4:30 pm at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Dan Abramovich (BROWN UNIVERSITY) Moduli technique in resolution of singularities February 12, 2019, 3:00 pm at Science Center 507
COLLOQUIUM Hector Pasten (PONTIFICIA UNIVERSIDAD CATOLICA DE CHILE) Modular forms and the ABC conjecture February 11, 2019, 3:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Tristan Collins (MIT) Stability and Nonlinear PDE in mirror symmetry February 11, 2019, 12:00 - 1:00 pm at CMSA, 20 Garden St, G10
CMSA GENERAL RELATIVITY SEMINAR Pei-Ken Hung (MIT) The linear stability of the Schwarzschild spacetime in the harmonic gauge: even part February 07, 2019, 3:00 pm at Science Center 411
CMSA COLLOQUIUM Ulrich Mueller (PRINCETON UNIVERSITY) Inference for the Mean February 07, 2019, 4:30 pm at CMSA Building, 20 Garden St, G10
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY: TUESDAYS & THURSDAYS BEGINNING FEBRUARY 5, 2019 Artan Sheshmani (CMSA) Lecture 2: Grothendieck topologies and homotopy descent February 07, 2019, 3:00 - 4:30 pm at CMSA Building, 20 Garden St, G10
JOINT CMSA AND DEPARTMENT OF MATHEMATICS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Ramis Movassagh (IBM RESEARCH) Generic Gaplessness, and Hamiltonian density of states from free probability theory February 07, 2019, 4:30 pm at Science Center 530
NUMBER THEORY SEMINAR Rong Zhou (IAS) Motivic cohomology of quaternionic Shimura varieties and level raising February 06, 2019, 3:00 pm at Science Center 507
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Integrality properties of CY modular forms February 06, 2019, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Mihai Fulger (UNIVERSITY OF CONNECTICUT) Seshadri constants for bundles February 05, 2019, 3:00 pm at MIT 2-142
CMSA SPECIAL LECTURE SERIES ON DERIVED ALGEBRAIC/DIFFERENTIAL GEOMETRY: TUESDAYS & THURSDAYS BEGINNING FEBRUARY 5, 2019 Artan Sheshmani (CMSA) Lecture 1: Model and NA-categories February 05, 2019, 3:00 - 4:30 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Xin Zhou (UNIVERSITY OF CALIFORNIA AT SANTA BARBARA) Multiplicity One Conjecture in Min-max theory February 05, 2019, 4:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Netanel (Nati) Rubin-Blaier (HARVARD & BRANDEIS) Abelian cycles, and homology of symplectomorphism groups February 04, 2019, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
COLLOQUIUM Dana Mendelson (UNIVERSITY OF CHICAGO) Probabilistic methods and long-time dynamics for nonlinear dispersive PDEs February 04, 2019, 3:00 pm at Science Center 507
SNAPSHOTS OF MATH AT HARVARD Ben Knudsen, Alison Miller, Sebastien Picard, Bena Tshishiku (HARVARD UNIVERSITY) February 04, 2019, 4:30 pm at Science Center 507
LOGIC SEMINAR Sebastien Vasey (HARVARD UNIVERSITY) On categoricity in successive cardinals February 04, 2019, 5:40 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Zoltán Szabó (PRINCETON UNIVERSITY) Algebraic methods in knot Floer homology February 01, 2019, 3:30 pm at Science Center 507
HARVARD - MIT COMBINATORICS SEMINAR Michael Wheeler (THE UNIVERSITY OF MELBOURNE) Matrix product expressions for Macdonald polynomials and combinatorial applications February 01, 2019, 3:30 pm at Science Center 530 (SC 309a - backup location)
COLLOQUIUM Simion Filip (HARVARD AND IAS) Discrete groups, Lyapunov exponents, and Hodge theory January 31, 2019, 4:00 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Shahar Hadar (HARVARD UNIVERSITY) Late-time behavior of near-extremal black holes from symmetry January 31, 2019, 3:00 - 4:00 pm at Science Center 232 *note different location*
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Constant Yukawa couplings January 30, 2019, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA COLLOQUIUM Richard B. Freeman (HARVARD UNIVERSITY AND NBER) Innovation in Cell Phones in the US and China: Who Improves Technology Faster? January 30, 2019, 4:30 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Enno Kessler (HARVARD CMSA) Supergeometry, Super Riemann Surfaces and the Superconformal Action January 29, 2019, 4:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Per Berglund (UNIVERSITY OF NEW HAMPSHIRE) A Generalized Construction of Calabi-Yau Manifolds and Mirror Symmetry January 28, 2019, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) A new model for modular curves January 23, 2019, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Algebraic BCOV anomaly equation January 16, 2019, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Lukasz Fidkowski (UNIVERSITY OF WASHINGTON) Non-trivial quantum cellular automata in 3 dimensions January 08, 2019, 2:00 PM at CMSA Building, 20 Garden St, G10
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Fenglong You (UNIVERSITY OF ALBERTA) Mirror theorems for orbifold and relative Gromov-Witten invariants December 14, 2018, 2:00 - 3:15 pm at Science Center 507
CMSA COLLOQUIUM Zhiwei Yun (MIT) Shtukas: what and why December 12, 2018, 4:30 pm at CMSA Building, 20 Garden St, G10
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) A conjectural Hodge locus for cubic tenfold December 12, 2018, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Lino Amorim (KANSAS STATE) Mirror symmetry for an orbifold sphere December 11, 2018, 4:00 pm at Science Center 507
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Anders Sandvik (BOSTON UNIVERSITY AND INSTITUTE OF PHYSICS, CAS, BEIJING) Quantum Monte Carlo simulations of exotic states in 2D quantum magnets December 10, 2018, 10:00 - 11:30 am at CMSA Building, 20 Garden St, G02
CMSA MATHEMATICAL PHYSICS SEMINAR Fenglong You (UNIVERSITY OF ALBERTA) Relative and orbifold Gromov-Witten theory December 10, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Patrick Clarke (DREXEL UNIVERSITY AND CMSA) Homological mirror symmetry for toric Landau-Ginzburg models: dual pairs, T-duality, and curved A-infinity categories December 07, 2018, 2:00 - 3:15 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Raphael Zentner (REGENSBURG) Irreducible SL(2,C)-representations of homology-3-spheres December 07, 2018, 3:30 pm at Science Center 507
CMSA SPECIAL MATHEMATICAL PHYSICS SEMINAR Qingtao Chen () Recent progress of various Volume Conjectures for links as well as 3-manifolds December 06, 2018, 3:30 - 4:30 pm at Science Center 530
THURSDAY SEMINAR Tomer Schlank (HEBREW UNIVERSITY OF JERUSALEM) Ambidexterity in localizations of spectra December 06, 2018, 3:00 - 4:00 pm *note earlier ending time* at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Pengzi Miao (UNIVERSITY OF MIAMI) Localization of the Penrose inequality and variation of quasi-local mass December 05, 2018, 11:00 am - 12:00 pm at CMSA Building, 20 Garden St, G02
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Curtis McMullen (HARVARD UNIVERSITY) The failure of Ratner's theorem for moduli spaces December 05, 2018, 4:00 pm at Science Center 530
CMSA COLLOQUIUM Robert McCann (UNIVERSITY OF TORONTO) Displacement convexity of Boltzmann's entropy characterizes positive energy in general relativity December 05, 2018, 4:30 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Sol Friedberg (BOSTON COLLEGE) Langlands functoriality, the converse theorem, and integral representations of L-functions December 05, 2018, 3:00 pm at Science Center 507
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Some explicit Hodge cycles December 05, 2018, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G02
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Omer Angel (UNIVERSITY OF BRITISH COLUMBIA) Balanced excited random walks December 05, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G02
DIFFERENTIAL GEOMETRY SEMINAR Beomjun Choi (COLUMBIA UNIVERSITY) Evolution of Non-Compact Hypersurface By Inverse Mean Curvature December 04, 2018, 4:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Monica Pate (HARVARD UNIVERSITY) Gravitational Memory in Higher Dimensions December 03, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G02
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Claudio Chamon (BOSTON UNIVERSITY) Many-body scar states with topological properties in 1D, 2D, and 3D December 03, 2018, 10:00 - 11:30 am at CMSA Building, 20 Garden St, G02
LOGIC SEMINAR Tibor Beke (UNIVERSITY OF MASSACHUSETTS - LOWELL) Schanuel functors and the Grothendieck (semi)ring of some theories December 03, 2018, 5:40 pm at Science Center 507
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Daniel Pomerleano (UNIVERSITY OF MASSACHUSETTS - BOSTON) Degenerations from Floer cohomology November 30, 2018, 2:00 - 3:15 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR John Pardon (PRINCETON UNIVERSITY) Structural results in wrapped Floer theory November 30, 2018, 3:30 pm at Science Center 507
THURSDAY SEMINAR Sanath Devalapurkar () Stable splittings for classifying spaces of compact Lie groups November 29, 2018, 3:00 pm - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR John Bergdall (BRYN MAWR COLLEGE) Upper bounds for constant slope p-adic families of modular forms November 28, 2018, 3:00 pm at Science Center 507
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Roberto Villaflor () Periods of Complete Intersection Algebraic Cycles November 28, 2018, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA COLLOQUIUM Robert Haslhofer (UNIVERSITY OF TORONTO) Recent progress on mean curvature flow November 28, 2018, 4:30 pm at CMSA Building, 20 Garden St, G10
LOGIC COLLOQUIUM Charles Parsons (HARVARD UNIVERSITY) Kreisel and Gödel November 28, 2018, 3:00 - 4:00 pm at Logic Center, 2 Arrow St, Rm 420
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Sean O'Rourke () Universality and least singular values of random matrix products November 28, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Bin Guo (COLUMBIA UNIVERSITY) Geometric estimates for complex Monge-Ampere equations November 27, 2018, 4:00 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Aaron Bertram (UTAH) Stability for Regular Coherent Sheaves November 27, 2018, 3:00 pm at MIT 2-142
CMSA MATHEMATICAL PHYSICS SEMINAR Charles Doran (ALBERTA) Feynman Amplitudes from Calabi-Yau Fibrations November 26, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Periods of algebraic cycles November 21, 2018, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA MATHEMATICAL PHYSICS SEMINAR Yusuf Baris Kartal (MIT) Distinguishing symplectic fillings using dynamics of Fukaya categories November 19, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Xiao-Gang Wen (MIT) A classification of 3+1D topological orders November 19, 2018, 10:00 - 11:30 am at CMSA Building, 20 Garden St, G10
CMSA COLLOQUIUM Xiaoqin Wang (JOHNS HOPKINS UNIVERSITY) Computational Principles of Auditory Cortex November 19, 2018, 3:00 pm *note special day and time* at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Vaughan Jones (VANDERBILT UNIVERSITY) A stroll around the subfactor zoo November 19, 2018, 2:00 pm at CMSA Building, 20 Garden St, G10 **note change in day and location**
BRANDEIS - HARVARD - MIT - NORTHEASTERN JOINT COLLOQUIUM Andrei Okounkov (COLUMBIA UNIVERSITY) New worlds for Lie theory November 15, 2018, Tea at 4:00 pm, Talk at 4:30 pm at Science Center Hall A
CMSA GENERAL RELATIVITY SEMINAR Niky Kamran (MCGILL UNIVERSITY) Lorentzian Einstein metrics with prescribed conformal infinity November 14, 2018, 11:00 am at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Eric Urban (COLUMBIA UNIVERSITY) Eisenstein congruences and Euler systems November 14, 2018, 3:00 pm at Science Center 507
CMSA COLLOQUIUM Dusa McDuff (COLUMBIA UNIVERSITY) The virtual fundamental class in symplectic geometry November 14, 2018, 4:00 pm at CMSA Building, 20 Garden St, G10
CMSA HODGE AND NOETHER-LEFSCHETZ LOCI SEMINAR Hossein Movasati (IMPA) Integral Hodge conjecture for Fermat varieties November 14, 2018, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR David Gamarnik (MIT AND HARVARD) Two Algorithmic Hardness Results in Spin Glasses and Compressive Sensing November 14, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
MATHEMATICAL PICTURE LANGUAGE SEMINAR William Norledge (PENN STATE) The adjoint braid arrangement as a free Lie algebra in species via the Steinmann relations November 13, 2018, 4:00 pm at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Siyuan Lu (RUTGERS UNIVERSITY) On a localized Riemannian Penrose inequality November 13, 2018, 4:00 pm at Science Center 507
OPEN NEIGHBORHOOD SEMINAR Laure Flapan (NORTHEASTERN UNIVERSITY) The Luroth problem: algebraic curves, field extensions, and beyond November 12, 2018, 4:30 pm at Science Center 507
LOGIC SEMINAR Rehana Patel (HARVARD UNIVERSITY) Around Cherlin's question on countable universal $H$-free graphs November 12, 2018, 5:40 pm at Science Center 507
SPECIAL DIFFERENTIAL GEOMETRY SEMINAR Simon Donaldson (STONY BROOK UNIVERSITY) Multi-valued harmonic functions and Nash-Moser theory November 09, 2018, 2:00 pm at Science Center 507
CMSA SPECIAL SEMINAR Yang-Hui He (UNIVERSITY OF OXFORD) Deep-learning the Landscape November 09, 2018, 10:30 am - 12:00 pm at Science Center 530
CMSA SPECIAL SEMINAR Nima Arkani-Hamed (INSTITUTE FOR ADVANCED STUDY) Spacetime, Quantum Mechanics and Positive Geometry November 09, 2018, 12:00 - 1:30 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Xun Gao (HARVARD AND MAX PLANCK INSTITUTE) Pictorial language in quantum computation November 09, 2018, 3:30 pm at Jefferson 256
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Mohammed Abouzaid (COLUMBIA UNIVERSITY) A formalism for Floer theory with coefficients November 09, 2018, 3:30 pm at Science Center 507
BRANDEIS - HARVARD - MIT - NORTHEASTERN JOINT COLLOQUIUM Steven Gubser (PRINCETON UNIVERSITY) Number theory and spacetime November 08, 2018, Tea at 4:00 pm, Talk at 4:30 pm at Tea: 100 Goldsmith; Talk: 317 Goldsmith
THURSDAY SEMINAR Jeremy Hahn (HARVARD UNIVERSITY) The construction of BO/I_n November 08, 2018, 3:00 pm - 5:00 pm at Science Center 507
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Curtis McMullen (HARVARD UNIVERSITY) A Panorama of Teichmueller Curves November 07, 2018, 4:00 pm at Science Center 530
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Wilhelm Schlag (YALE UNIVERSITY) On the Bourgain-Dyatlov fractal uncertainty principle November 07, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Florian Herzig (UNIVERSITY OF TORONTO) Ordinary representations and locally analytic vectors for GL_n(Q_p) November 07, 2018, 3:00 pm at Science Center 507
CMSA SPECIAL LECTURE SERIES Hossein Movasati (IMPA) Hodge and Noether-Lefschetz Loci Seminar November 07, 2018, 1:30 - 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA GENERAL RELATIVITY SEMINAR Jordan Keller (HARVARD UNIVERSITY) Linear Stability of Higher Dimensional Schwarzschild Black Holes November 07, 2018, 11:00 am at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Pengfei Guan (MCGILL UNIVERSITY) Interior curvature estimates for immersed hypersurfaces in R^{n+1} November 06, 2018, 4:00 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Alex Perry (COLUMBIA UNIVERSITY) Stability conditions and cubic fourfolds November 06, 2018, 3:00 pm at Science Center 507
CMSA MATH PHYSICS SEMINAR Siqi He (SIMONS CENTER) The Kapustin-Witten Equations, Opers and Khovanov Homology November 05, 2018, 12:00 - 1:00 PM at CMSA Building, 20 Garden St, G10
LOGIC SEMINAR Gabriel Goldberg (HARVARD UNIVERSITY) Descending sequences in the Mitchell order November 05, 2018, 5:40 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Laure Flapan (NORTHEASTERN UNIVERSITY) Monodromy of Kodaira fibrations November 02, 2018, 3:30 pm at Science Center 507
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Jingyu Zhao (HARVARD CMSA) A connection on symplectic cohomology November 02, 2018, 2:00 - 3:15 pm at Science Center 507
THURSDAY SEMINAR Robert Burklund (MIT) Properties of Brown-Gitler spectra November 01, 2018, 3:00 pm - 5:00 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Alex Lupsasca (HARVARD UNIVERSITY) Polarization Whorls from M87 at the Event Horizon Telescope October 31, 2018, 11:00 am at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Chris Skinner (PRINCETON) Some recent results on Euler systems October 31, 2018, 3:00 pm at Science Center 507
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Ronen Mukamel (RICE UNIVERSITY) Teichmueller curves: Theory and exploration October 31, 2018, 4:00 pm at Science Center 530
CMSA COLLOQUIUM Moon Duchin (TUFTS) Exploring the (massive) space of graph partitions October 31, 2018, 4:30 - 5:30 PM at CMSA Building, 20 Garden Street, Room G10
DIFFERENTIAL GEOMETRY SEMINAR Sebastien Picard (HARVARD UNIVERSITY) Anomaly Flows and Calabi-Yau Manifolds with Torsion October 30, 2018, 4:00 pm at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Lauren Williams (HARVARD UNIVERSITY) Introduction to the asymmetric simple exclusion process (from a combinatorialist's point of view) October 30, 2018, 10:30 am at Science Center 507 *note special day, time & location*
MATHEMATICAL PICTURE LANGUAGE SEMINAR Chunlan Jiang (HEBEI NORMAL UNIVERSITY) Similarity Invariants of Geometric Operators October 30, 2018, 4:00 pm at Jefferson 453
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Dominic Williamson (YALE UNIVERSITY) Symmetry and topological order in tensor networks October 29, 2018, 10:00 am at CMSA Building, 20 Garden St, G10
LOGIC SEMINAR Doug Blue (HARVARD UNIVERSITY) Sacks' embedding problem October 29, 2018, 5:40 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Francois Greer (SIMONS CENTER) Rigid Varieties with Lagrangian Spheres October 29, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
CMSA SOCIAL SCIENCE APPLICATIONS FORUM SPECIAL EDITION Daniel Marszalec (UNIVERSITY OF TOKYO) Auctions for complements: theories vs experiments October 26, 2018, 3:00 pm at CMSA Building, 20 Garden St, G02
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR **Canceled** Patrick Clarke (DREXEL UNIVERSITY AND CMSA) **Canceled** Homological mirror symmetry for toric Landau-Ginzburg models: dual pairs, T-duality, and curved A-infinity categories October 26, 2018, 2:00 - 3:15 pm at Science Center 507
CMSA SPECIAL SEMINAR John Loftin (RUTGERS UNIVERSITY) Equivariant minimal surfaces in real hyperbolic 3- and 4-spaces October 26, 2018, 1:30 pm at CMSA Building, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Michael Hutchings (UC BERKELEY) Equivariant symplectic capacities October 26, 2018, 3:30 pm at Science Center 507
THURSDAY SEMINAR Ben Knudsen (HARVARD UNIVERSITY) Braid groups and braid Thom spectra October 25, 2018, 3:00 pm - 5:00 pm at Science Center 507
BRANDEIS - HARVARD - MIT - NORTHEASTERN JOINT COLLOQUIUM Gabor Szekelyhidi (UNIVERSITY OF NOTRE DAME) Gromov-Hausdorff limits of Kahler manifolds October 25, 2018, Tea at 4:00 pm, Talk at 4:30 pm at Science Center Hall A
LOGIC COLLOQUIUM Sam Buss (UC SAN DIEGO) Bounded Arithmetic, Expanders, and Monotone Propositional Proofs October 24, 2018, 3:00 - 4:00 pm at Logic Center, 2 Arrow St, Rm 420
NUMBER THEORY SEMINAR Wei Ho (UNIVERSITY OF MICHIGAN) Integral points on elliptic curves October 24, 2018, 3:00 pm at Science Center 507
HARVARD-MIT COMBINATORICS SEMINAR Bernd Sturmfels (MPI LEIPZIG AND UC BERKELEY) Moment varieties of measures on polytopes October 24, 2018, 4:15 - 5:15 pm at Science Center Hall E
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Tselil Schramm (HARVARD UNIVERSITY AND MIT) (Nearly) Efficient Algorithms for the Graph Matching Problem in Correlated Random Graphs October 24, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G02 *note different location*
DIFFERENTIAL GEOMETRY SEMINAR Mark Stern (DUKE UNIVERSITY) Monotonicity and Betti Number Bounds October 23, 2018, 4:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Shamil Shakirov (HARVARD UNIVERSITY) Rational integrable systems associated with higher genus surfaces October 23, 2018, 4:00 pm at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Tim Magee (INSTITUTO DE MATEMÁTICAS, UNAM) Toric degenerations of cluster varieties October 23, 2018, 3:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Sze Ning Mak (Hazel) (BROWN UNIVERSITY) Tetrahedral geometry in holoraumy spaces of 4D, $\mathcal{N}=1$ and $\mathcal{N}=2$ minimal supermultiplets October 22, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G02
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Yin-Chen He (PERIMETER INSTITUTE) Emergent QED3 and QCD3 in condensed matter system October 22, 2018, 10:00 - 11:30 am at CMSA Building, 20 Garden St, G02
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Yu-Wei Fan (HARVARD UNIVERSITY) Bridgeland stability conditions and mirror symmetry October 19, 2018, 2:00 - 3:15 pm at Science Center 507
HARVARD-MIT COMBINATORICS SEMINAR Bruno Benedetti (UNIVERSITY OF MIAMI) Local constructions of manifolds October 19, 2018, 4:15 - 5:15 pm at MIT 2-147
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Aleksander Doan (STONY BROOK UNIVERSITY) Harmonic Z/2 spinors and wall-crossing in Seiberg-Witten theory October 19, 2018, 3:30 pm at Science Center 507
CMSA MIRROR SYMMETRY SEMINAR Jingyu Zhao (CMSA) Big quantum period theorem for the bulk-deformed potentials of toric surfaces October 19, 2018, 11:00 am - 12:00 pm at Science Center 530
THURSDAY SEMINAR Daniel Álvarez-Gavela (INSTITUTE FOR ADVANCED STUDY) Smale-Hirsch immersion theory via holonomic approximation October 18, 2018, 3:00 pm - 5:00 pm at Science Center 507
CMSA COLLOQUIUM Jeremy England (MIT) Wisdom of the Jumble October 17, 2018, 4:30 - 5:30 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Thomas Chen (UNIVERSITY OF TEXAS AT AUSTIN) Dynamics of a heavy quantum tracer particle in a Bose gas October 17, 2018, 3:30 - 4:30 pm * please note change in time* at CMSA Building, 20 Garden St, G10
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Curtis McMullen (HARVARD UNIVERSITY) Ergodic theory of foliations of surfaces October 17, 2018, 4:00 pm at Science Center 530
CMSA GENERAL RELATIVITY SEMINAR Sébastien Picard (HARVARD UNIVERSITY) The Anomaly flow over Riemann surfaces October 17, 2018, 11:00 am at CMSA Building, 20 Garden St, G02
NUMBER THEORY SEMINAR Sean Howe (STANFORD UNIVERSITY) A unipotent circle action on p-adic modular forms October 17, 2018, 3:00 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Dori Bejleri (MIT) Compact moduli of elliptic fibrations and degree one del Pezzo surfaces October 16, 2018, 3:00 pm at MIT 2-142
CMSA SPECIAL SEMINAR Artan Sheshmani (CMSA) Atiyah class and sheaf counting on local Calabi-Yau four folds October 16, 2018, 2:30 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Teng Fei (COLUMBIA UNIVERSITY) cscK metrics and the coupled flow of Li-Yuan-Zhang October 16, 2018, 4:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Christoph Gorgulla (HARVARD MEDICAL SCHOOL AND THE MATHEMATICAL PICTURE LANGUAGE PROJECT) A quantum mechanical free energy method based on Feynman's path integral formulation October 16, 2018, 4:00 pm at Jefferson 356
LOGIC SEMINAR Will Boney (HARVARD UNIVERSITY) Erdős-Rado Classes October 15, 2018, 5:40 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Chris Gerig (HARVARD UNIVERSITY) A geometric interpretation of the Seiberg-Witten invariants October 15, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
CMSA SPECIAL SEMINAR Xin Wang (COLUMBIA UNIVERSITY) Quasi-modularity and holomorphic anomaly equation for GW invariants of elliptic curve October 15, 2018, 3:00 pm at CMSA Building, 20 Garden St, G10
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Ethan Lake (MIT) A primer on higher symmetries October 15, 2018, 10:00 - 11:30 am at CMSA Building, 20 Garden St, G10
OPEN NEIGHBORHOOD SEMINAR Bena Tshishiku (HARVARD UNIVERSITY) The odd thing about car sunshades October 15, 2018, 4:30 pm at Science Center 507
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Yusuf Baris Kartal (MIT) Distinguishing open symplectic mapping tori via the dynamics of Fukaya categories October 12, 2018, 2:00 - 3:15 pm at Science Center 507
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Paul Feehan (RUTGERS UNIVERSITY) Lojasiewicz inequalities and Morse-Bott functions October 12, 2018, 3:30 PM at Science Center 507
BRANDEIS - HARVARD - MIT - NORTHEASTERN JOINT COLLOQUIUM Zhiwei Yun (MIT) From Kloosterman sums to exceptional groups October 11, 2018, Tea at 4:00 pm, Talk at 4:30 pm at Brandeis - 317 Goldsmith
THURSDAY SEMINAR Emily Saunders (HARVARD UNIVERSITY) The Hirsch-Smale theorem October 11, 2018, 3:00 pm - 5:00 pm at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Alfonso Bandeira (NEW YORK UNIVERSITY) Statistical estimation under group actions: The Sample Complexity of Multi-Reference Alignment October 10, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
CMSA COLLOQUIUM Justin Solomon (MIT) Correspondence and Optimal Transport for Geometric Data Processing October 10, 2018, 4:30 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Max Menzies (HARVARD UNIVERSITY) The p-curvature and Bost's Conjecture for the Gauss-Manin connection on non-abelian cohomology October 10, 2018, 3:00 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Salem Al Mosleh (CMSA) Thin elastic shells and isometric embedding of surfaces in three-dimensional Euclidean space October 10, 2018, 11:00 am at CMSA Building, 20 Garden St, G02
DIFFERENTIAL GEOMETRY SEMINAR Semyon Alesker (TEL AVIV UNIVERSITY) Quaternionic Monge-Ampere equations on HKT manifolds October 09, 2018, 4:00 pm at Science Center 507
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Sagar Vijay (HARVARD UNIVERSITY) Fracton Phases of Matter October 09, 2018, 3:00 PM - 4:30 PM at CMSA Building, 20 Garden St, G10
MATHEMATICAL PICTURE LANGUAGE SEMINAR Jean-Bernard Zuber (SORBONNE UNIVERSITÉ) ADE and all that, 30 years after October 09, 2018, 4:00 pm at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Ignacio Barros (NORTHEASTERN UNIVERSITY) Uniruledness of strata of holomorphic differentials in small genus October 09, 2018, 3:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Pei-Ken Hung (MIT) The linear stability of the Schwarzschild spacetime in the harmonic gauge: odd part October 08, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
GAUGE-TOPOLOGY-SYMPLECTIC SEMINAR Anar Akhmedov (UNIVERSITY OF MINNESOTA AND HARVARD UNIVERSITY) Construction of symplectic 4-manifolds and Lefschetz fibrations via Luttinger surgery October 05, 2018, 3:30 pm at Science Center 507
**CANCELED** STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Yu-Wei Fan (HARVARD UNIVERSITY) **CANCELED** Bridgeland stability conditions and mirror symmetry October 05, 2018, 2:00 - 3:15 pm at Science Center 507 **CANCELED**
CMSA SPECIAL SEMINAR Xiao-Gang Wen (MIT) Classification of topological orders in 2+1D and 3+1D October 05, 2018, 11:00 am - 12:00 pm at Science Center 530
MATHEMATICAL PICTURE LANGUAGE SEMINAR Semyon Dyatlov (UC BERKELEY AND MIT) Fractal Uncertainty Principle and Quantum Chaos October 05, 2018, 4:30 pm at Jefferson 453
THURSDAY SEMINAR Bena Tshishiku (HARVARD UNIVERSITY) Relations among Stiefel-Whitney classes of manifolds October 04, 2018, 3:00 pm - 5:00 pm at Science Center 507
OPEN NEIGHBORHOOD SEMINAR Matt Parker (STANDUP MATHS AND QUEEN MARY UNIVERSITY OF LONDON) Stand-up Maths: using performance to engage people with mathematics October 04, 2018, 5:00 pm at Science Center 507
BRANDEIS - HARVARD - MIT - NORTHEASTERN JOINT COLLOQUIUM Semyon Dyatlov (MIT) Fractal uncertainty principle and quantum chaos October 04, 2018, Tea at 4:00 pm, Talk at 4:30 pm at MIT 2-190
NUMBER THEORY SEMINAR Lynnelle Ye (HARVARD UNIVERSITY) Geometry of eigenvarieties for definite unitary groups over the boundary of weight space October 03, 2018, 3:00 PM at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Christos Mantoulidis (MIT) The Bartnik mass of apparent horizons October 03, 2018, 11:00 am at CMSA Building, 20 Garden St, G02
CMSA COLLOQUIUM Richard Schoen (UC IRVINE) Perspectives on the scalar curvature October 03, 2018, 4:30 - 5:30 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Ian Jauslin (INSTITUTE FOR ADVANCED STUDY) Liquid Crystals and the Heilmann-Lieb model October 03, 2018, 3:00 pm at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Tom Bachmann (MIT) Affine Grassmannians in motivic homotopy theory October 02, 2018, 3:00 pm at MIT 2-142
MATHEMATICAL PICTURE LANGUAGE SEMINAR Victor Kac (MIT) Multiplicative Poisson vertex algebras and differential-difference Hamiltonian equations October 02, 2018, 5:00 pm at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Hansol Hong (CMSA & BRANDEIS UNIVERSITY) Mirror construction via Lagrangian deformation October 02, 2018, 4:00 pm at Science Center 507
SPECIAL SEMINAR Andre Neves (UNIVERSITY OF CHICAGO) Abundance of minimal surfaces October 01, 2018, 3:00 - 4:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Dori Bejleri (MIT) Stable pair compactifications of the moduli space of degree one del Pezzo surfaces via elliptic fibrations October 01, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Yash Deshpande (MIT) Estimating low-rank matrices in noise: phase transitions from spin glass theory September 28, 2018, 3:00 pm at CMSA Building, 20 Garden St, G10
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Yoosik Kim (BOSTON UNIVERSITY) Gelfand-Cetlin systems and their applications September 28, 2018, 2:00 - 3:15 pm at Science Center 507
THURSDAY SEMINAR Jun Hou Fung (HARVARD UNIVERSITY) Immersions up to cobordism September 27, 2018, 3:00 pm - 5:00 pm at Science Center 507
BRANDEIS - HARVARD - MIT - NORTHEASTERN JOINT COLLOQUIUM Peter Bubenik (UNIVERSITY OF FLORIDA) Mathematical Aspects of Topological Data Analysis September 27, 2018, 4:30 pm at Northeastern - 509 Lake Hall
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Curtis McMullen (HARVARD UNIVERSITY) Moduli spaces and dynamical systems September 26, 2018, 4:00 pm at Science Center 530
CMSA GENERAL RELATIVITY SEMINAR Jordan Keller (BHI) Quasi-local Angular Momentum and Center-of-Mass at Future Null Infinity September 26, 2018, 11 AM at CMSA Building, 20 Garden St., G02
CMSA COLLOQUIUM Xiao-Gang Wen (MIT) A classification of low dimensional topological orders and fully extended TQFTs September 26, 2018, 4:30 - 5:30 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Frank Thorne (UNIVERSITY OF SOUTH CAROLINA & TUFTS UNIVERSITY) Error Terms in Arithmetic Statistics September 26, 2018, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Pei-Ken Hung (MIT) The smoothing time of convex inverse mean curvature flows September 25, 2018, 4:00 - 5:00 pm at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Yi Xie (SIMONS CENTER) sl(3) Khovanov module and the detection of planar theta-graph September 24, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Max Metlitski (MIT) Surface Topological Order and a new 't Hooft Anomaly of Interaction Enabled 3+1D Fermion SPTs September 24, 2018, 10:00 - 11:30 am at CMSA Building, 20 Garden St, G10
GAUGE - TOPOLOGY - SYMPLECTIC SEMINAR Weiyi Zhang (UNIVERSITY OF WARWICK) From smooth to almost complex September 21, 2018, 3:30 pm at Science Center 507
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Tim Large (MIT) Steenrod operations and the Floer homotopy type September 21, 2018, 2:00 - 3:15 pm at Science Center 507
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) The Immersion Conjecture, an overview September 20, 2018, 3:00 pm - 5:00 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Pei-Ken Hung (MIT) The linear stability of the Schwarzschild spacetime in the harmonic gauge: odd part September 19, 2018, 11:00 am at Science Center 530
NUMBER THEORY SEMINAR Ben Howard (BOSTON COLLEGE) Moduli spaces of shtukas and a higher derivative Gross-Kohnen-Zagier formula September 19, 2018, 3:00 pm at Science Center 507
MATHEMATICAL PICTURE LANGUAGE SEMINAR Liming Ge (CHINESE ACADEMY OF SCIENCES AND UNIVERSITY OF NEW HAMPSHIRE) On Multiplicative Fourier Transforms September 18, 2018, 4:00 pm at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Austin Conner (TEXAS A&M UNIVERSITY) New approaches to upper bounds on the complexity of matrix multiplication September 18, 2018, 3:00 pm at MIT 4-153
DIFFERENTIAL GEOMETRY SEMINAR William Minicozzi (MIT) Dynamics near singularities in mean curvature flow September 18, 2018, 4:00 pm at Science Center 507
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Adrian Po (MIT) A modern solution to the old problem of symmetries in band theory September 17, 2018, 10:30 am - 12:00 pm at CMSA Building, 20 Garden St, G10
CMSA MATHEMATICAL PHYSICS SEMINAR Gaetan Borot (MAX PLANCK INSTITUTE) A generalization of Mirzakhani's identity, and geometric recursion September 17, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
OPEN NEIGHBORHOOD SEMINAR Moon Duchin (TUFTS UNIVERSITY) Random Everything September 17, 2018, 4:30 pm at Science Center 507
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Andrew Hanlon (UC BERKELEY AND HARVARD) A new perspective on homological mirror symmetry for toric varieties September 14, 2018, 1:30 - 3:00 pm at Science Center 507
GAUGE - TOPOLOGY - SYMPLECTIC SEMINAR Denis Auroux (HARVARD UNIVERSITY) An invitation to homological mirror symmetry September 14, 2018, 3:30 pm at Science Center 507
NUMBER THEORY SEMINAR Daniel Kriz (MIT) A new p-adic Maass-Shimura operator and supersingular Rankin-Selberg p-adic L-functions September 12, 2018, 3:00 pm at Science Center 507
CMSA GENERAL RELATIVITY SEMINAR Aghil Alaee (CMSA) Mass-angular momentum inequality for black holes September 12, 2018, 11:00 am at CMSA Building, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Jack Huizenga (PENN STATE) Moduli of sheaves on Hirzebruch surfaces September 11, 2018, 3:00 pm at Science Center 507
CMSA TOPOLOGICAL ASPECTS OF CONDENSED MATTER SEMINAR Dominic Else (MIT) Phases and topology in periodically driven (Floquet) systems September 10, 2018, 10:30 am - 12:00 pm at CMSA Building, 20 Garden St, G10
CMSA MATHEMATICAL PHYSICS SEMINAR Xiaomeng Xu (MIT) Stokes phenomenon, Yang-Baxter equations and Gromov-Witten theory September 10, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden St, G10
STUDENT/POSTDOC SYMPLECTIC GEOMETRY SEMINAR Denis Auroux (HARVARD UNIVERSITY) An introduction to Fukaya categories and homological mirror symmetry September 07, 2018, 1:30 - 3:00 pm at Science Center 530
CMSA GENERAL RELATIVITY SEMINAR Christos Mantoulidis (MIT) Capacity and quasi-local mass September 07, 2018, 2:00 pm at CMSA Building, 20 Garden St, G02
MATHEMATICAL PICTURE LANGUAGE SEMINAR Bohan Fang (PEKING UNIVERSITY) Mirror symmetry for toric Calabi-Yau 3-folds September 04, 2018, 4:00 pm at Jefferson 356
CMSA SPECIAL SEMINAR Zhengcheng Gu (THE CHINESE UNIVERSITY OF HONG KONG) Towards a complete classification of symmetry protected topological phases for interacting fermions in three dimensions and a general group supercohomology theory August 29, 2018, 3:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL ALGEBRAIC GEOMETRY SEMINAR Netanel Rubin-Blaier (CMSA, HARVARD) The Kontsevich compactification, Abel-Jacobi maps, and symplectomorphism of algebraic varieties May 24, 2018, 3:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY 2-PART LECTURE SERIES Charles Doran (UNIVERSITY OF ALBERTA AND ICERM) Calabi-Yau fibrations: construction and classification May 17, 2018, 1:00 - 3:00 PM at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY 2-PART LECTURE SERIES Charles Doran (UNIVERSITY OF ALBERTA AND ICERM) Picard-Fuchs uniformization and Calabi-Yau geometry May 15, 2018, 1:00 - 3:00 PM at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Renzo Cavalieri (COLORADO STATE) Witten conjecture for Mumford's kappa classes May 08, 2018, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Netanel Rubin-Blaier (CMSA, HARVARD) The Kontsevich compactification, Abel-Jacobi maps, and symplectomorphism of algebraic varieties May 01, 2018, 4:15 PM at Science Center 507
CMSA SPECIAL SEMINAR ON SYMPLECTIC GEOMETRY Weiwei Wu (UNIVERSITY OF GEORGIA) Lagrangian surgery formulae and applications April 30, 2018, 1:30 - 2:30 pm at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Dmitry Tonkonog (UC BERKELEY) Geometry of symplectic flux April 30, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden Street, Room G10
BRANDEIS, HARVARD, MIT, NORTHEASTERN JOINT COLLOQUIUM AT HARVARD Paul Hacking (UNIVERSITY OF MASSACHUSETTS, AMHERST) Mirror Symmetry and Fano manifolds April 26, 2018, Tea at 4 pm, Talk at 4:30 pm at Science Center Hall A
NUMBER THEORY SEMINAR Ali Altug (BOSTON UNIVERSITY) Beyond endoscopy, the trace formula, and its relatives April 25, 2018, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Xi Yin (HARVARD UNIVERSITY) How we can learn what we want to know about M-theory April 25, 2018, 4:30 PM at CMSA Building, 20 Garden St, G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Yongquan Zhang (HARVARD UNIVERSITY) Andreev's theorem on hyperbolic polyhedra and Kleinian reflection groups April 25, 2018, 4:00 pm at Science Center 530
LOGIC SEMINAR Paul Baginski (FAIRFIELD UNIVERSITY) Model Theoretic Advances for Groups With Bounded Chains of Centralizers April 24, 2018, 5:15 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Jiaping Wang (MINNESOTA) Structure at infinity for four dimensional shrinking Ricci Solitons April 24, 2018, 2:00 - 3:00 PM at Science Center 232
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Melody Chan (BROWN UNIVERSITY) Cohomology of M_g and the tropical moduli space of curves April 24, 2018, 3:00 PM at Science Center 507
CMSA MATHEMATICAL PHYSICS SEMINAR Baohua Fu (CHINESE ACADEMY OF SCIENCES) Equivariant compactifications of vector groups April 23, 2018, 12:00 - 1:00 PM at CMSA Building, 20 Garden St., G02
CMSA MIRROR SYMMETRY SEMINAR Alan Thompson (CAMBRIDGE) Threefolds fibred by K3 surfaces and mirror symmetry April 20, 2018, 10:45 AM - 12:00 PM at Sci Center 530
RANDOM MATRIX & PROBABILITY THEORY SEMINAR Carl Lucibello (MIT) The Random Perceptron Problem: thresholds, phase transitions, and geometry April 20, 2018, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G02
RANDOM MATRIX & PROBABILITY THEORY SEMINAR Yash Deshpande (MIT) Phase transitions in estimating low-rank matrices April 20, 2018, 3:00 - 4:00 pm at CMSA Building, 20 Garden St, G02
CMSA MIRROR SYMMETRY SEMINAR Xiaomeng Xu (MIT) Stokes phenomenon, quantum groups and 2d topological field theory April 20, 2018, 9:30 - 10:45 AM at Sci Center 530
HARVARD LOGIC COLLOQUIUM Alekos Kechris (CALTECH) Borel equivalence relations, cardinal algebras, and structurability April 19, 2018, 4:00 - 5:00 PM at Logic Center, Room 420, 2 Arrow Street
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Koszul Duality IV April 19, 2018, 3:00 - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR John Voight (DARTMOUTH COLLEGE) On the paramodularity of typical abelian surfaces April 18, 2018, 3:00 PM at Sci Center 507
CMSA COLLOQUIUM Washington Taylor (MIT) On the fibration structure of known Calabi-Yau threefolds April 18, 2018, 4:30 PM at CMSA Building, 20 Garden St, G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Kathryn Lindsey (BOSTON COLLEGE) Galois conjugates of PCF generalized beta-transformations April 18, 2018, 4:00 PM at Sci Center 530
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR June Park (BROWN UNIVERSITY) Arithmetic of the moduli of quasimaps and the moduli of fibered algebraic surfaces with heuristics for counting curves over global fields April 17, 2018, 3:00 PM at MIT 4-153
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL ALGEBRAIC GEOMETRY SEMINAR Yefeng Shen (UNIVERSITY OF OREGON) LG/CY correspondence for Fermat elliptic curves April 17, 2018, 12:30 - 1:30 pm at CMSA Building, 20 Garden Street, G02
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yuan Gao (STONY BROOK UNIVERSITY) On the extension of the Viterbo functor April 16, 2018, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Fabian Haiden (HARVARD UNIVERSITY) Geometric flows, iterated logarithms, and balanced weight filtrations April 13, 2018, 11:00am - 12:00pm at Science Center 530
CMSA HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Mauricio Romo (INSTITUTE FOR ADVANCED STUDY) Gauged Linear Sigma Models, Supersymmetric Localization and Applications, Part 2 April 12, 2018, 3:00 - 5:00 pm at CMSA Building, 20 Garden Street, G02
HARVARD LOGIC COLLOQUIUM Boris Zilber (UNIVERSITY OF OXFORD) Between Model Theory and Physics April 12, 2018, 4:00 - 5:00 pm at Logic Center, 2 Arrow St, Rm 420
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS Pablo Parrilo (MIT) Graph Structure in Polynomial Systems: Chordal Networks April 11, 2018, 4:30 PM at CMSA Building, 20 Garden St., G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Corey Bregman (BRANDEIS UNIVERSITY) Kaehler groups and surface bundles April 11, 2018, 4:00 PM at Science Center 530
NUMBER THEORY SEMINAR Dino J. Lorenzini (UNIVERSITY OF GEORGIA) Regular models of curves and wild quotient singularities April 11, 2018, 3:00 PM at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Eric Larson (MIT) The Maximal Rank Conjecture April 10, 2018, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Brandon B. Meredith (EMBRY-RIDDLE AERONAUTICAL UNIVERSITY) Mirror Symmetry on Toric Surfaces via Tropical Geometry April 09, 2018, 12:00 PM at CMSA Building, 20 Garden Street, G02
THURSDAY SEMINAR Gijs Heuts (UTRECHT UNIVERSITY) Coalgebras and the Goodwillie tower of the Bousfield-Kuhn functor April 05, 2018, 3:00 - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR Jennifer Balakrishnan (BOSTON UNIVERSITY) Effective aspects of quadratic Chabauty April 04, 2018, 3:00 PM at Science Center 507
INFORMAL GEOMETRY & DYNAMICS SEMINAR Francis Bonahon (UNIVERSITY OF SOUTHERN CALIFORNIA) What to do when you cannot confine your planes? Teichmüller spaces of non-compact Riemann surfaces April 04, 2018, 4:00 pm at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Ramesh Narayan (DEPARTMENT OF ASTRONOMY, HARVARD UNIVERSITY) Black Holes and Naked Singularities *Postponed from 3/21/18* April 04, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Curtis McMullen (HARVARD UNIVERSITY) Billiards, quadrilaterals and moduli spaces April 03, 2018, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Marcel Bischoff (OHIO UNIVERSITY) Quantum Symmetries and Conformal Nets April 03, 2018, 4:00 PM at Jefferson 356
LOGIC SEMINAR Warren Goldfarb () Two Notes on Weak Arithmetic April 03, 2018, 5:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Cheuk-Yu Mak (UNIVERSITY OF CAMBRIDGE) Discrete Legendre transform and tropical multiplicity from symplectic geometry April 02, 2018, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Yu-Wei Fan (HARVARD UNIVERSITY) Systoles, Special Lagrangians, and Bridgeland stability conditions March 30, 2018, 11:00 - 12:00 at Science Center 530
SPECIAL DIFFERENTIAL GEOMETRY SEMINAR D.H. Phong (COLUMBIA UNIVERSITY) The Anomaly Flow and Fu-Yau Hessian Equations March 30, 2018, 3:00 - 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Zhiwei Zheng (TSINGHUA UNIVERSITY) Moduli Spaces of Cubic Fourfolds with Automorphisms March 30, 2018, 9:30 - 11:00 am at Science Center 530
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Koszul Duality II March 29, 2018, 3:00 - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR Chuck Doran (UNIVERSITY OF ALBERTA AND ICERM) Arithmetic Mirror Symmetry for K3 Pencils and Hypergeometric Decomposition March 28, 2018, 3:00 PM at Science Center 507
INFORMAL GEOMETRY & DYNAMICS SEMINAR Curtis McMullen (HARVARD UNIVERSITY) Planes immersed in hyperbolic 3-manifolds March 28, 2018, 4:00 PM at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Andrea Montanari (STANFORD UNIVERSITY) A Mean Field View of the Landscape of Two-Layers Neural Networks March 28, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
LOGIC SEMINAR Marcos Mazari Armida (CARNEGIE MELLON UNIVERSITY) Non-forking w-good frames March 27, 2018, 5:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Allen Knutson (CORNELL UNIVERSITY) Deformations of normal crossings March 27, 2018, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Feng Xu (UNIVERSITY OF CALIFORNIA, RIVERSIDE) On Questions Around the Reconstruction Program March 27, 2018, 4:00 PM at Jefferson 356
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yi Xie (SCGP) Surgery, Polygons and Instanton Floer homology March 26, 2018, 12:00 PM at CMSA Building, 20 Garden Street, G02
MATHEMATICAL PHYSICS SEMINAR Wei Zhang (MIT) Superpositivity of L-functions and 'completion of square' March 23, 2018, 3:00 PM at Jefferson 250
HARVARD LOGIC COLLOQUIUM Donald Martin (UCLA) Cantor's Grundlagen March 22, 2018, 4:00 - 5:00 pm at Logic Center, 2 Arrow St, Rm 420
CMSA SPECIAL LECTURE SERIES ON QUANTUM COHOMOLOGY, NAKAJIMA VARIETIES AND QUANTUM GROUPS Artan Sheshmani (QGM & CMSA) GW Invariants via Quantum Cohomology March 22, 2018, 1:00 - 3:00 pm at CMSA Building, 20 Garden Street, Room G10
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Koszul Duality March 22, 2018, 3:00 - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR Ananth Shankar (MIT) Exceptional splitting of reductions of abelian surfaces March 21, 2018, 3:00 pm at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Kenneth Bromberg (UNIVERSITY OF UTAH) Bounds on renormalized volume and the volume of the convex core March 21, 2018, 4:00 pm at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Ramesh Narayan (DEPARTMENT OF ASTRONOMY, HARVARD UNIVERSITY) Black Holes and Naked Singularities March 21, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
MATHEMATICAL PHYSICS SEMINAR Jürg Fröhlich (ETH ZÜRICH) Physics in 2D - from Kosterlitz-Thouless Transition to Topological Insulators March 20, 2018, 4:00 PM at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Tamás Darvas (UNIVERSITY OF MARYLAND) Complex Monge-Ampere equations with prescribed singularity March 20, 2018, 3:45 pm **change in time** at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Tathagata Basak (IOWA STATE) A complex ball quotient and the monster March 20, 2018, 3:00 PM at MIT 4-153
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Emanuel Scheidegger (ALBERT LUDWIGS UNIVERSITY OF FREIBURG) From Gauged Linear Sigma Models to Landau-Ginzburg orbifolds via central charge functions March 19, 2018, 12:00 PM at CMSA Building, 20 Garden Street, G02
CMSA HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Dmytro Shklyarov (TECHNISCHE UNIVERSITÄT CHEMNITZ) On categories of matrix factorizations and their homological invariants March 15, 2018, 3:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
CMSA SPECIAL LECTURE SERIES ON QUANTUM COHOMOLOGY, NAKAJIMA VARIETIES AND QUANTUM GROUPS Artan Sheshmani (QGM & CMSA) Quantum Cohomology March 15, 2018, 1:00 - 3:00 pm at CMSA Building, 20 Garden Street, Room G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Ariyan Javanpeykar (JOHANNES GUTENBERG UNIVERSITY MAINZ) Arithmetic, algebraic, and analytic hyperbolicity March 13, 2018, 3:00 PM at MIT 4-153
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Guangbo Xu (PRINCETON UNIVERSITY) Open quantum Kirwan map March 09, 2018, 9:30 - 11:00 am at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Shinobu Hosono (GAKUSHUIN UNIVERSITY) Gluing monodromy nilpotent cones of a family of K3 surfaces March 09, 2018, 11:00 - 12:00 at Science Center 530
CMSA HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Adam Jacob (UC DAVIS) The deformed Hermitian-Yang-Mills equation (continued) March 08, 2018, 4:00-5:00 pm at CMSA Building, 20 Garden St., G10
HARVARD LOGIC COLLOQUIUM Dima Sinapova (UNIVERSITY OF ILLINOIS AT CHICAGO) Stronger tree properties and the SCH March 08, 2018, 4:00 - 5:00 pm at Logic Center, 2 Arrow St, Rm 420
THURSDAY SEMINAR Sander Kupers (HARVARD UNIVERSITY) The work of Arone-Dwyer March 08, 2018, 3:00 - 5:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Richard Kenyon (BROWN UNIVERSITY) Harmonic functions and the chromatic polynomial March 07, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Jesse Wolfson & Benson Farb (UC IRVINE & UNIVERSITY OF CHICAGO) The geometry of Hilbert's 13th problem March 07, 2018, 4:00 PM at Science Center 530
NUMBER THEORY SEMINAR Jesse Wolfson (UNIVERSITY OF CALIFORNIA, IRVINE) The theory of resolvent degree, after Hamilton, Klein, Hilbert, and Brauer March 07, 2018, 3:00 PM at Science Center 507 * please also note the talk at the Informal Geometry & Dynamics Seminar*
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY Adam Jacob (UC DAVIS) The deformed Hermitian-Yang-Mills equation March 06, 2018, 4:00 - 5:00 pm at CMSA Building, 20 Garden St., G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Michael Kemeny (STANFORD UNIVERSITY) Betti numbers of the canonical ring of a curve March 06, 2018, 3:00 PM at MIT 4-153
CMSA SPECIAL MATHEMATICAL PHYSICS SEMINAR Emanuel Scheidegger (ALBERT LUDWIGS UNIVERSITY OF FREIBURG) Periods and quasiperiods of modular forms and the mirror quintic at the conifold March 05, 2018, 11:00 - 12:00 at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Shinobu Hosono (GAKUSHUIN UNIVERSITY) Movable vs monodromy nilpotent cones of Calabi-Yau manifolds March 05, 2018, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Chuck Doran (UNIVERSITY OF ALBERTA AND ICERM) Mirror Symmetry for Lattice Polarized del Pezzo Surfaces March 02, 2018, 11:00 - 12:00 at Science Center 530
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Background for the work of Arone-Dwyer March 01, 2018, 3:00 - 5:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Colin Diemer (IHES) Moduli spaces of Landau-Ginzburg models and (mostly Fano) HMS March 01, 2018, 3:00 - 5:00 pm at CMSA Building, 20 Garden Street, Room G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Richard Schwartz (BROWN UNIVERSITY) 5 points on the sphere February 28, 2018, 4:00 PM at Science Center 530
NUMBER THEORY SEMINAR Wei Zhang (MIT) Special cycles on simple Shimura varieties February 28, 2018, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Yu-Shen Lin (COLUMBIA UNIVERSITY) Tropical/Holomorphic Correspondence for HyperKähler February 28, 2018, 4:15 PM at Science Center 232
LOGIC SEMINAR Alexander Van Abel (CITY UNIVERSITY OF NEW YORK) The Feferman-Vaught Theorem and the Product of All Prime Finite Fields February 27, 2018, 5:15 pm at Science Center 507
CMSA SPECIAL LECTURE SERIES ON QUANTUM COHOMOLOGY, NAKAJIMA VARIETIES AND QUANTUM GROUPS Artan Sheshmani (QGM & CMSA) Quantum Cohomology February 27, 2018, 1:00 - 3:00 pm at CMSA Building, 20 Garden Street, Room G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Nick Salter (HARVARD UNIVERSITY) Vanishing cycles for linear systems on toric surfaces February 27, 2018, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Hans Wenzl (UNIVERSITY OF CALIFORNIA, SAN DIEGO) Coideal Algebras and Subfactors February 27, 2018, 4:00 PM at Jefferson 356
CMSA HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Colin Diemer (IHES) Moduli spaces of Landau-Ginzburg models and (mostly Fano) HMS February 27, 2018, 3:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Tom Hou (CALIFORNIA INSTITUTE OF TECHNOLOGY) Computer-assisted analysis of singularity formation of a regularized 3D Euler equation February 26, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Jordan Keller (BHI) Linear Stability of Schwarzschild Black Holes February 26, 2018, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Jingyu Zhao (BRANDEIS & HARVARD) Connection on S^1-equivariant Floer theory February 23, 2018, 11:00 am at Science Center 530
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Mustazee Rahman (MIT) On shocks in the TASEP February 23, 2018, 3:30 pm at CMSA Building, 20 Garden Street, G02
MATHEMATICAL PHYSICS SEMINAR Xinqi Gong (INSTITUTE FOR MATHEMATICAL SCIENCES, RENMIN UNIVERSITY OF CHINA) Mathematical Intelligence Applications in Bio-medical Problems February 23, 2018, 3:00 pm at Jefferson 250
CMSA SPECIAL LECTURE SERIES ON QUANTUM COHOMOLOGY, NAKAJIMA VARIETIES AND QUANTUM GROUPS Artan Sheshmani (QGM & CMSA) Computing GW Invariants February 22, 2018, 1:00 - 3:00 pm at CMSA Building, 20 Garden Street, Room G10
THURSDAY SEMINAR Jeremy Hahn (HARVARD UNIVERSITY) Chromatic convergence of the Goodwillie tower on spheres February 22, 2018, 3:00 - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR Juan Rivera-Letelier (UNIVERSITY OF ROCHESTER) Hecke and Linnik February 21, 2018, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Don Rubin (HARVARD STATISTICS) Essential concepts of causal inference: a remarkable history February 21, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Ian Biringer (BOSTON COLLEGE) Convex cores of thick hyperbolic 3-manifolds with bounded rank February 21, 2018, 4:00 PM at Science Center 530
LOGIC SEMINAR Andrew Brooke-Taylor (UNIVERSITY OF LEEDS) Products of CW complexes: the full story February 20, 2018, 5:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Jens Hoppe (KTH) New Constructions and Quantization of Minimal Surfaces February 16, 2018, 9:30 am at Science Center 530
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Reza Gheissari (NEW YORK UNIVERSITY) Dynamics of Critical 2D Potts Models February 16, 2018, 3:30 pm at CMSA Building, 20 Garden Street, G02
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Man-Wai Cheung (HARVARD UNIVERSITY) Quiver representations and theta functions February 16, 2018, 11:00 am at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Zhengwei Liu (HARVARD PHYSICS) A new program on quantum subgroups February 14, 2018, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Simion Filip (HARVARD UNIVERSITY) Hypergeometric equations and Lyapunov exponents (via Hodge theory) February 14, 2018, 4:00 PM at Science Center 530
NUMBER THEORY SEMINAR Hansheng Diao (PRINCETON UNIVERSITY) Towards a $p$-adic Riemann-Hilbert correspondence February 14, 2018, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Nicholas Early (UNIVERSITY OF MINNESOTA) Canonical Bases for Permutohedral Plates February 13, 2018, 4:00 PM at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Jesse Kass (UNIVERSITY OF SOUTH CAROLINA) How to count lines on a cubic surface arithmetically February 13, 2018, 3:00 PM at Science Center 507
CMSA SPECIAL LECTURE SERIES ON QUANTUM COHOMOLOGY, NAKAJIMA VARIETIES AND QUANTUM GROUPS Artan Sheshmani (QGM & CMSA) Gromov-Witten invariants February 13, 2018, 1:00 - 3:00 pm at CMSA Building, 20 Garden Street, Room G10
LOGIC SEMINAR Alice Medvedev () Unions of chains of signatures February 12, 2018, 5:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Matthew Stoffregen (MIT) Equivariant Khovanov Spaces February 12, 2018, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Dan Xie (CMSA, HARVARD) Three dimensional mirror symmetry February 09, 2018, 11:00 - 12:00 at Science Center 530
THURSDAY SEMINAR Ben Knudsen (HARVARD UNIVERSITY) The derivatives of the identity functor February 08, 2018, 3:00 - 5:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Fan Chung (UNIVERSITY OF CALIFORNIA, SAN DIEGO) Sequences: random, structured or something in between February 08, 2018, 5:00 pm at CMSA Building, 20 Garden Street, Room G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Phil Engel (HARVARD UNIVERSITY) Penrose Tilings of Riemann Surfaces February 07, 2018, 4:00 PM at Science Center 530
NUMBER THEORY SEMINAR Alex Kontorovich (RUTGERS AND IAS) Sphere Packings and Arithmetic February 07, 2018, 3:00 PM at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Bhargav Bhatt (UNIVERSITY OF MICHIGAN) Prisms and deformations of de Rham cohomology February 06, 2018, 3:00 PM at MIT 4-153
MATHEMATICAL PHYSICS SEMINAR Jacob Shapiro (ETH ZÜRICH) Bulk-Edge Duality and Complete Localization for Disordered Chiral Chains February 06, 2018, 4:00 PM at Jefferson 356
DIFFERENTIAL GEOMETRY SEMINAR Linyuan Lu (UNIVERSITY OF SOUTH CAROLINA) Ricci-flat graphs with girth at least five February 06, 2018, 4:30 PM at Science Center 530
LOGIC SEMINAR Jesse Han (MCMASTER UNIVERSITY) Strong conceptual completeness for $\omega$-categorical theories February 06, 2018, 5:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Ivan Losev (NORTHEASTERN UNIVERSITY) BGG category O: towards symplectic duality February 06, 2018, 3:00 PM at CMSA Building, 20 Garden Street, G02
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Hyungchul Kim (CMSA & HARVARD PHYSICS) Seiberg duality and superconformal index in 3d February 05, 2018, 12:00 PM at CMSA Building, 20 Garden Street, G02
MATHEMATICAL PHYSICS SEMINAR Yunxiang Ren (TENNESSEE STATE UNIVERSITY) A New Skein Theory for One-Way Yang-Baxter Planar Algebras February 02, 2018, 4:00 PM at Jefferson 250
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Minxian Zhu (YAU MATHEMATICAL SCIENCES CENTER, TSINGHUA UNIVERSITY) The hyperplane conjecture for periods of Calabi-Yau hypersurfaces in P^n February 02, 2018, 11:00 am at Science Center 530
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Overview February 01, 2018, 3:00 - 5:00 pm at Science Center 507
NUMBER THEORY SEMINAR Chen Wan (INSTITUTE FOR ADVANCED STUDY) The local trace formula for the Ginzburg-Rallis model and the generalized Shalika model January 31, 2018, 3:00 - 4:00 PM at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Curtis McMullen (HARVARD UNIVERSITY) The behavior of planes in confinement January 31, 2018, 4:00 PM at Science Center 530
DIFFERENTIAL GEOMETRY SEMINAR Valentino Tosatti (NORTHWESTERN UNIVERSITY) Estimates for collapsing Calabi-Yau metrics January 30, 2018, 4:15 PM at Science Center 507
LOGIC SEMINAR Rehana Patel () Stable regularity for finite relational structures January 30, 2018, 5:15 - 6:15 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Kaifeng Bu (HARVARD UNIVERSITY) De Finetti Theorems for Braiding Parafermions January 30, 2018, 4:00 PM at Jefferson 356
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Cumrun Vafa (HARVARD PHYSICS) All genus Gromov-Witten invariants for Compact Calabi-Yau threefolds January 30, 2018, 1 - 2:30 pm at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Qiang Wen (YAU MATHEMATICAL SCIENCES CENTER, TSINGHUA UNIVERSITY) Holographic entanglement entropy in general spacetimes January 29, 2018, 12:00 - 1:00 pm at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MIRROR SYMMETRY SEMINAR Matthew Young (CHINESE UNIVERSITY OF HONG KONG) Algebra and geometry of orientifold Donaldson-Thomas theory January 26, 2018, 11:00-12:00pm at Science Center 530
HARVARD LOGIC COLLOQUIUM Aki Kanamori (BOSTON UNIVERSITY) Aspect-Perception and the History of Mathematics January 25, 2018, 4:00-5:00 PM at Logic Center, Room 420, 2 Arrow St
CMSA HOMOLOGICAL MIRROR SYMMETRY FOCUSED LECTURE SERIES Ivan Losev (NORTHEASTERN UNIVERSITY) BGG category O: towards symplectic duality January 25, 2018, 3:00 - 5:00pm at CMSA Building, 20 Garden St., G10
CMSA SPECIAL SEMESTER-LONG LECTURE SERIES Artan Sheshmani (QGM/CMSA) Quantum Cohomology, Nakajima Varieties and Quantum groups January 25, 2018, 1:00 - 3:00pm at CMSA Building, 20 Garden St., G10
MATHEMATICAL PHYSICS SEMINAR Bin Gui (VANDERBILT UNIVERSITY) A Unitary Tensor Product Theory for Unitary Vertex Operator Algebra Modules January 23, 2018, 4:00 PM at Jefferson 356
CMSA SPECIAL SEMINAR Shannon Ray (FLORIDA ATLANTIC UNIVERSITY) Wang and Yau's Quasi-Local Energy for an Extreme Kerr Spacetime January 19, 2018, 4:00pm at CMSA Building, 20 Garden St., G10
MATHEMATICAL PHYSICS SEMINAR Marco De Renzi (UNIVERSITY OF PARIS 7) Renormalized Hennings Invariants and TQFTs January 16, 2018, 4:00 PM at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Marco Gualtieri (UNIVERSITY OF TORONTO) Holomorphic symplectic Morita equivalence and the generalized Kaehler potential December 12, 2017, 3:00 pm at Science Center Hall A
NUMBER THEORY SEMINAR Robert Lemke Oliver (TUFTS UNIVERSITY) Tate-Shafarevich groups in quadratic twist families December 06, 2017, 3:15 pm *sharp at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Philippe Rigollet (MIT) Exact recovery in the Ising blockmodel December 06, 2017, 2:00 - 3:00 PM at CMSA Building, 20 Garden Street, Room G10
DIFFERENTIAL GEOMETRY SEMINAR Eveline Legendre (INSTITUT DE MATHÉMATIQUES DE TOULOUSE, UNIVERSITÉ PAUL SABATIER) An application of the equivariant localization formula in Sasaki geometry December 06, 2017, 4:15 PM at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Ankur Moitra (MIT) A New Approach to Approximate Counting and Sampling December 06, 2017, 3:00 - 4:00 pm at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Sara Venkatesh (COLUMBIA UNIVERSITY) Closed-string mirror symmetry for subdomains December 06, 2017, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Comparison of Stable and Unstable v_n-Periodic Homotopy November 30, 2017, 4:00 - 6:00 PM at Science Center 507
NUMBER THEORY SEMINAR Renee Bell (MIT) Local-to-Global Lifting for Curves in Characteristic p November 29, 2017, 3:15 pm *sharp at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Amitai Zernik (INSTITUTE FOR ADVANCED STUDY) Computing the A∞ algebra of RP^2m ↪ CP^2m using open fixed-point localization November 29, 2017, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Jane Wang (MIT) Finitely many algebraically primitive Teichmüller curves in genus g at least 3 November 29, 2017, 4:00 PM at Science Center 530
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR David Gamarnik (MIT) (Arguably) Hard on Average Constraint Satisfaction Problems November 29, 2017, 3:00 PM at CMSA Building, 20 Garden Street, Room G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Valentijn Karemaker (UNIVERSITY OF PENNSYLVANIA) Dynamics of Belyi maps November 28, 2017, 3:00 PM at MIT 4-237
DIFFERENTIAL GEOMETRY SEMINAR Gabor Szekelyhidi (UNIVERSITY OF NOTRE DAME) New Calabi-Yau metrics on C^n November 28, 2017, 4:15 PM at Science Center 507
SPECIAL SEMINAR ON GEOMETRY Ryosuke Takahashi (TOHOKU UNIVERSITY) A new parabolic flow approach to the Kahler-Einstein problem November 22, 2017, 4:00 - 5:15 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Siu-Cheong Lau (BOSTON UNIVERSITY) Moduli theory of Lagrangian immersions and mirror symmetry November 21, 2017, 4:15 PM at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Dhruv Ranganathan (MIT) Curves, maps, and singularities in genus one November 21, 2017, 3:00 PM at Science Center Hall A
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Yue M. Lu (HARVARD JOHN A. PAULSON SCHOOL OF ENGINEERING AND APPLIED SCIENCES) Asymptotic Methods for High-Dimensional Inference: Precise Analysis, Fundamental Limits, and Optimal Designs November 20, 2017, 12:00 PM at Science Center 232
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Laurent Fargues (INSTITUTE OF MATHEMATICS OF JUSSIEU) Structure of the Picard stack and the Abel-Jacobi morphism for the curve. November 20, 2017, 4:00 - 6:00 PM at Science Center 507
BRANDEIS, HARVARD, MIT, NORTHEASTERN JOINT COLLOQUIUM AT HARVARD Spencer Bloch (UNIVERSITY OF CHICAGO) Periods, motivic gamma functions, and Hodge structures November 16, 2017, Tea at 4 pm, Talk at 4:30 pm at Science Center Hall A
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Daniel Sussman (BOSTON UNIVERSITY) Multiple Network Inference: From Joint Embeddings to Graph Matching November 15, 2017, 4:00 - 5:00 pm *note change in time* at CMSA Building, 20 Garden Street, G02
NUMBER THEORY SEMINAR Lucia Mocz (PRINCETON UNIVERSITY) A New Northcott Property for Faltings Height November 15, 2017, 3:15 pm *sharp at Science Center 507
HARVARD LOGIC COLLOQUIUM Steve Jackson (UNIVERSITY OF NORTH TEXAS) Combinatorics of Definable Sets November 15, 2017, 4:00 - 5:00 pm at Logic Center, 2 Arrow St, Rm 420
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Matthew Morrow (INSTITUTE OF MATHEMATICS OF JUSSIEU) Perfectoid after-party: pro-etale topology November 15, 2017, 5:00 pm *note special day/time* at Science Center 507
LOGIC SEMINAR Linda Brown Westrick (UNIVERSITY OF CONNECTICUT) Towards a notion of computable reducibility for discontinuous functions November 14, 2017, 5:15 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Dennis Gaitsgory (HARVARD UNIVERSITY) Higher Representation Theory November 14, 2017, 4:00 PM at Jefferson 356
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Matthew Morrow (INSTITUTE OF MATHEMATICS OF JUSSIEU) Foundations of perfectoid spaces-IV November 13, 2017, 4:00 - 6:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yusuf Baris Kartal (MIT) Dynamical invariants of categories associated to mapping tori November 13, 2017, 12:30 - 1:30 pm at CMSA Building, 20 Garden Street, G02
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Siqi He (CALIFORNIA INSTITUTE OF TECHNOLOGY) The extended Bogomolny Equations, Generalized Nahm Pole, and SL(2,R) Higgs bundle November 10, 2017, 3:30 - 4:30 PM at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Zhe Wang (NYU) A Driven Tagged Particle in One-dimensional Simple Exclusion Process November 10, 2017, 12:00 PM at Science Center 232
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Coanalytic Functors November 09, 2017, 4:00 - 6:00 PM at Science Center 507
BRANDEIS, HARVARD, MIT, NORTHEASTERN JOINT COLLOQUIUM AT HARVARD Jeremy Quastel (UNIVERSITY OF TORONTO) The KPZ fixed point November 09, 2017, Tea at 4 pm, Talk at 4:30 pm at Math Common Room & Science Center Hall A
NUMBER THEORY SEMINAR Kazim Buyukboduk (UC DUBLIN AND HARVARD UNIVERSITY) Non-ordinary symmetric squares and Euler systems of rank 2 November 08, 2017, 3:15 pm *sharp at Science Center 507
HARVARD LOGIC COLLOQUIUM Victoria Gitman (CUNY GRADUATE CENTER) Virtual Large Cardinal Principles November 08, 2017, 4:00 - 5:00 pm at Logic Center, 2 Arrow St, Rm 420
INFORMAL DYNAMICS & GEOMETRY SEMINAR Eduard Duryev (HARVARD UNIVERSITY) Square-tilings of modular curves X(d) November 08, 2017, 4:00 PM at Science Center 530
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Elchanan Mossel (MIT) Optimal Gaussian Partitions November 08, 2017, 3:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Matthew Morrow (CNRS) (Topological) Hochschild homology and crystalline cohomology November 07, 2017, 3:00 PM at Science Center Hall A
DIFFERENTIAL GEOMETRY SEMINAR Julius Ross (UNIVERSITY OF ILLINOIS AT CHICAGO) The Monge-Ampere Equation and the Hele-Shaw flow November 07, 2017, 4:15 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Liming Ge (UNH AND CHINESE ACADEMY OF SCIENCES) From Riemann and Hilbert to Kadison-Singer November 07, 2017, 4:00 PM at Jefferson 356
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Matthew Morrow (INSTITUTE OF MATHEMATICS OF JUSSIEU) Foundations of perfectoid spaces-III November 06, 2017, 4:00 - 6:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Pietro Benetti Genolini (UNIVERSITY OF OXFORD) Topological AdS/CFT November 06, 2017, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Kevin Sackel (MIT) Perverse sheaves from complex Lagrangian intersections November 03, 2017, 3:30 - 4:30 PM at Science Center 507
CMSA ALGEBRAIC GEOMETRY SEMINAR Alexander Moll (IHES) Hilbert Schemes from Geometric Quantization of Dispersive Periodic Benjamin-Ono Waves November 02, 2017, 3:00 PM at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS ALGEBRAIC GEOMETRY SEMINAR Shamil Shakirov (HARVARD UNIVERSITY) Undulation invariants of plane curves November 01, 2017, 5:00 PM *note time change* at CMSA Building, 20 Garden Street, Room G10
NUMBER THEORY SEMINAR Alexander Smith (HARVARD UNIVERSITY) $2^\infty$-Selmer groups, $2^\infty$-class groups, and Goldfeld's conjecture November 01, 2017, 3:15 pm *sharp at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Wei-Ming Wang (CNRS) Quasi-periodic solutions to nonlinear PDE's November 01, 2017, 3:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
JOINT MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Kay Kirkpatrick (UIUC MATH AND PHYSICS, MIT MATH) Quantum groups, Free Araki-Woods Factors, and a Calculus for Moments November 01, 2017, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Chenglong Yu (HARVARD UNIVERSITY) Picard-Fuchs systems of zero loci of vector bundle sections November 01, 2017, 12:30 - 1:30 pm at CMSA Building, 20 Garden St, G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Bena Tshishiku (HARVARD UNIVERSITY) Cohomology of arithmetic groups and characteristic classes of manifold bundles November 01, 2017, 4:00 PM at Science Center 530
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Nick Rozenblyum (UNIVERSITY OF CHICAGO) Shifted symplectic structures and quantization October 31, 2017, 3:00 PM at MIT 4-237
MATHEMATICAL PHYSICS SEMINAR Zhenghan Wang (MICROSOFT) Reconstructing chiral CFTs/VOAs from 2D TQFTs/MTCs October 31, 2017, 4:00 PM at Jefferson 356
LOGIC SEMINAR Sebastien Vasey (HARVARD UNIVERSITY) Non-elementary classification theory October 31, 2017, 5:15 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Bill Goldman (BROWN UNIVERSITY AND UNIVERSITY OF MARYLAND) Dynamics on moduli spaces of flat connections October 31, 2017, 4:15 PM at Science Center 507
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Matthew Morrow (INSTITUTE OF MATHEMATICS OF JUSSIEU) Foundations of perfectoid spaces-II October 30, 2017, 4:00 - 6:00 PM at Science Center 507
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Piotr Suwara (MIT) The Category of Perverse Sheaves is a Stack, cont'd October 27, 2017, 3:30 - 4:30 PM at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Sander Kupers (HARVARD UNIVERSITY) The metastable homology of mapping class groups October 25, 2017, 4:00 PM at Science Center 530
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX & PROBABILITY THEORY SEMINAR Noga Alon (TEL AVIV UNIVERSITY) Random Cayley Graphs October 25, 2017, 3:00 - 4:00 PM at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Pierre Colmez (UPMC AND IAS) On the cohomology of p-adic analytic curves October 25, 2017, 3:15 pm *sharp at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX & PROBABILITY THEORY SEMINAR Subhabrata Sen (MICROSOFT AND MIT) Partitioning sparse random graphs: connections with mean-field October 25, 2017, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Yevgeniy Liokumovich (MIT) Heegaard splittings isotopic to an index 1 minimal surface October 24, 2017, 4:15 PM at Science Center 507
LOGIC SEMINAR Will Boney (HARVARD UNIVERSITY) Interpolation beyond $\mathbb{L}_{\omega_1, \omega}$ October 24, 2017, 5:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR David Hansen (COLUMBIA UNIVERSITY) Vanishing theorems in rigid analytic geometry October 24, 2017, 1:30 - 2:30 pm *note change in time* at MIT 2-449 *note change in location*
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Matthew Morrow (INSTITUTE OF MATHEMATICS OF JUSSIEU) Review of foundations of perfectoid spaces October 23, 2017, 4:00 - 6:00 PM at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CMSA RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Madhu Sudan (CS, HARVARD) General Strong Polarization October 23, 2017, 12:00 - 1:00 PM at Science Center 232
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Florian Beck (UNIVERSITY OF HAMBURG) Hitchin systems in terms of Calabi-Yau threefolds October 23, 2017, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Piotr Suwara (MIT) The Category of Perverse Sheaves is a Stack October 20, 2017, 3:30 - 4:30 PM at Science Center 507
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) The Bousfield-Kuhn functor October 19, 2017, 4:00 - 6:00 PM at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Rohini Ramadas (HARVARD UNIVERSITY) Algebraic dynamics from topological and holomorphic dynamics October 18, 2017, 4:00 PM at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Nati Blaier (CMSA, HARVARD) Geometry of the symplectic Torelli group October 18, 2017, 12:00 PM at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR V. Kumar Murty (UNIVERSITY OF TORONTO) The Lindelöf class of L-functions October 18, 2017, 3:15 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Gang Liu (NORTHWESTERN) Recent progress of Yau's uniformization conjecture October 17, 2017, 4:00 PM at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Dave Jensen (UNIVERSITY OF KENTUCKY) Linear Systems on General Curves of Fixed Gonality October 17, 2017, 3:00 PM at Science Center Hall A
LOGIC SEMINAR Cameron Freer (BORELIAN AND REMINE) Feedback Computability October 17, 2017, 5:15 pm at Science Center 507
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Zijian Yao (HARVARD UNIVERSITY) Properties of the curve (incl. line bundles) October 16, 2017, 4:00 - 6:00 PM at Science Center 507
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Jianfeng Lin (MIT) Nearby cycles and vanishing cycles October 13, 2017, 3:30 - 4:30 PM at Science Center 507
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) v_n-Periodic Homotopy Groups October 12, 2017, 4:00 - 6:00 PM at Science Center 507
SPECIAL MATHEMATICAL PHYSICS SEMINAR Yasuyuki Kawahigashi (UNIVERSITY OF TOKYO) From Vertex Operator Algebras to Operator Algebras and Back October 12, 2017, 2:00 pm *please note special date, time and location* at Jefferson 453
NUMBER THEORY SEMINAR Joseph Silverman (BROWN UNIVERSITY) Moduli Spaces of Rational Maps and a Dynamical Shafarevich Theorem October 11, 2017, 3:15 pm *sharp at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Man-Wai Cheung (HARVARD UNIVERSITY) Quiver representations and theta functions October 10, 2017, 3:00 PM at Science Center 411
LOGIC SEMINAR Nate Ackerman (HARVARD UNIVERSITY) Trees, Sheaves and Definition by Recursion October 10, 2017, 5:15 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Reflection Positivity and Invertible Topological Phases October 10, 2017, 4:00 PM at Jefferson 356
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Jianfeng Lin (MIT) Constructible sheaves and nearby cycles October 06, 2017, 3:30 PM at Science Center 507
NUMBER THEORY SEMINAR David Corwin (MIT) Etale Homotopy Obstructions for Rational Points Applied to Open Subvarieties October 04, 2017, 3:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Dingxin Zhang (BRANDEIS UNIVERSITY) <1 part of slopes under degeneration October 04, 2017, 12:00 - 1:00 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Nattalie Tamam (TEL AVIV UNIVERSITY) Divergent trajectories in arithmetic homogeneous spaces of rational rank two October 04, 2017, 4:00 PM at Science Center 530
LOGIC SEMINAR Jason Rute () A uniform reducibility in computably presented Polish spaces October 03, 2017, 5:15 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Valentino Tosatti (NORTHWESTERN UNIVERSITY) Special Kahler geometry and collapsing October 03, 2017, 4:15 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Alina Vdovina (UNIVERSITY OF NEWCASTLE AND NEW YORK UNIVERSITY) Buildings, surfaces and quaternions October 03, 2017, 4:00 PM at Jefferson 356
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Yuchen Liu (YALE UNIVERSITY) K-stability of cubic threefolds October 03, 2017, 3:00 PM at MIT 4-237
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Jacob Lurie (HARVARD UNIVERSITY) Constructing Functions on Y: Power Series in the Variable p October 02, 2017, 4:00 - 6:00 PM at Science Center 507
BOSTON GRADUATE TOPOLOGY SEMINAR Cliff Taubes (HARVARD UNIVERSITY) What are Z/2 harmonic forms? September 30, 2017, 9:30 - 10:15 am at Science Center 507
BOSTON GRADUATE TOPOLOGY SEMINAR Melissa Zhang (BOSTON COLLEGE) Annular Khovanov homology of 2-periodic links September 30, 2017, 10:30 - 11:15 am at Science Center 507
BOSTON GRADUATE TOPOLOGY SEMINAR Yu Pan (MIT) Augmentations and Exact Lagrangian cobordisms September 30, 2017, 11:30 am - 12:15 pm at Science Center 507
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Boyu Zhang (HARVARD UNIVERSITY) Orientation and duality September 29, 2017, 3:30 - 4:30 PM at Science Center 507
RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Herbert Spohn (TECHNICAL UNIVERSITY MUNICH & COLUMBIA) Hydrodynamics of integrable classical and quantum systems September 27, 2017, 3:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Yusheng Luo (HARVARD UNIVERSITY) Trees in degenerating families of rational maps and isometric G-actions September 27, 2017, 4:00 PM at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yu-Wei Fan (HARVARD UNIVERSITY) Weil-Petersson geometry on the space of Bridgeland stability conditions September 27, 2017, 12:00 - 1:00 PM at CMSA Building, 20 Garden Street, Room G10
NUMBER THEORY SEMINAR David Rohrlich (BOSTON UNIVERSITY) Counting Artin representations September 27, 2017, 3:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Alina Marian (NORTHEASTERN UNIVERSITY) On the intersection theory of Hilbert schemes of points on a surface September 26, 2017, 3:00 PM at Science Center Hall A
MATHEMATICAL PHYSICS SEMINAR Jinsong Wu (HARVARD UNIVERSITY) The Brascamp-Lieb Inequalities for Planar Algebras September 26, 2017, 4:00 PM at Jefferson 356
LOGIC SEMINAR Gabriel Goldberg (HARVARD UNIVERSITY) The least strongly compact cardinal and the Ultrapower Axiom September 26, 2017, 5:15 pm at Science Center 507
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Dennis Gaitsgory (HARVARD UNIVERSITY) The relative Fargues-Fontaine curve and "untilts", continued September 25, 2017, 4:00 - 5:30 PM at Science Center 507
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Boyu Zhang (HARVARD UNIVERSITY) Derived category of sheaves and the six functors September 22, 2017, 3:30 - 4:30 PM at Science Center 507
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Unstable chromatic localizations September 21, 2017, 4:00 - 6:00 PM at Science Center 507
NUMBER THEORY SEMINAR Koji Shimizu (HARVARD UNIVERSITY) Local constancy of generalized Hodge-Tate weights of a p-adic local system September 20, 2017, 3:15 - 4:15 pm at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Nick Salter (HARVARD UNIVERSITY) Plane curves, (higher) spin structures, and mapping class groups September 20, 2017, 4:00 PM at Science Center 530
LOGIC SEMINAR Will Boney (HARVARD UNIVERSITY) Model-theoretic characterizations of large cardinals September 19, 2017, 5:15 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Peter Smillie (HARVARD UNIVERSITY) Weingarten foliations of three dimensional Lorentzian spaceforms September 19, 2017, 4:00 - 5:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Giulia Sacca (MIT) Degenerations of hyperkahler manifolds September 19, 2017, 3:00 PM at MIT 4-237
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Dennis Gaitsgory (HARVARD UNIVERSITY) The Fargues-Fontaine curve and "untilts" September 18, 2017, 4:00 - 5:30 PM *note earlier ending at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yoosik Kim (BOSTON UNIVERSITY) Monotone Lagrangian tori in cotangent bundles September 18, 2017, 12:00 - 1:00 PM at CMSA Building, 20 Garden Street, Room G10
GAUGE THEORY, TOPOLOGY & SYMPLECTIC GEOMETRY SEMINAR Boyu Zhang (HARVARD UNIVERSITY) Flat SL_2(C) connections and complex Lagrangian intersections September 15, 2017, 3:30 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS ALGEBRAIC GEOMETRY SEMINAR Yu-Wei Fan (HARVARD UNIVERSITY) Entropy of an autoequivalence on Calabi-Yau manifolds September 14, 2017, 3:30 - 4:30 PM at CMSA Building, 20 Garden Street, Room G10
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) The Bousfield-Kuhn functor September 14, 2017, 4:00 - 6:00 PM at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Philip Engel (HARVARD UNIVERSITY) Tilings and Hurwitz Theory September 13, 2017, 4:00 - 6:00 PM at Science Center 530
NUMBER THEORY SEMINAR Ari Shnidman (BOSTON COLLEGE) Ranks of abelian varieties in quadratic twist families September 13, 2017, 3:15 - 4:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Rohini Ramadas (HARVARD UNIVERSITY) Dynamics on the moduli space of point-configurations on the Riemann Sphere September 12, 2017, 3:00 PM at Science Center 411
LOGIC SEMINAR Sebastien Vasey (HARVARD UNIVERSITY) Internal sizes in $\mu$-abstract elementary classes September 12, 2017, 5:15 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Jordan Keller (HARVARD BLACK HOLE INITIATIVE) Linear Stability of Higher Dimensional Schwarzschild Black Holes September 12, 2017, 4:00 - 5:15 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR William Norledge (HARVARD UNIVERSITY) A Construction of Lattices in Buildings September 12, 2017, 4:00 PM at Jefferson 356
LEARNING SEMINAR ON THE FARGUES-FONTAINE CURVE Jacob Lurie (HARVARD UNIVERSITY) An Overview September 11, 2017, 4:00 - 6:00 PM at Science Center 232
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yu-Shen Lin (HARVARD - CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS) From the Decomposition of Picard-Lefschetz Transformation to Tropical Geometry September 11, 2017, 12:00 - 1:00 PM at CMSA Building, 20 Garden Street, Room G10
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) To Rational Homotopy Theory and Beyond September 07, 2017, 4:00 - 6:00 PM *note new time* at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Arthur Jaffe and Zhengwei Liu (HARVARD UNIVERSITY) Introduction to the Mathematical Picture Language Project September 05, 2017, 4:00 PM at Jefferson 356
LOGIC SEMINAR Nate Ackerman (HARVARD UNIVERSITY) Vaught's Conjecture for a Grothendieck topos September 05, 2017, 5:00 - 6:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL ALGEBRAIC GEOMETRY SEMINAR Will Donovan (KAVLI IPMU) Twists and braids for general threefold flops August 31, 2017, 3:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS SPECIAL SEMINAR Juven Wang (IAS) Link Invariants of Topological Quantum Matter and New Topological Boundary Conditions May 24, 2017, 11:30 AM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Zheng-Cheng Gu (CHINESE UNIVERSITY OF HONG KONG) A topological world: from topological material to the origin of elementary particles May 23, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Kwok Wai Chan (CHINESE UNIVERSITY OF HONG KONG) Scattering diagrams from asymptotic analysis on Maurer-Cartan equations May 17, 2017, 3:30 - 4:30 pm at CMSA Building, 20 Garden St, G10
SPECIAL LECTURE SERIES Jean-Pierre Serre (COLLÈGE DE FRANCE) Cohomological invariants mod 2 of Weyl groups, Pt. 2 May 09, 2017, 2:00 pm at Science Center 507
GAUGE THEORY, TOPOLOGY AND SYMPLECTIC GEOMETRY SEMINAR Daniel Cristofaro-Gardiner (HARVARD UNIVERSITY) Two or infinity May 05, 2017, 3:30 - 4:30 pm at Science Center 507
INFORMAL GEOMETRY & DYNAMICS SEMINAR Ahmad Rafiqi (CORNELL UNIVERSITY) Surface homeomorphisms from matrices and typical properties of biPerron numbers May 03, 2017, 4:00 - 6:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Xue-Mei Li (UNIV. OF WARWICK) Perturbation to conservation law and stochastic averaging May 03, 2017, 4:30 PM at Science Center 507
SPECIAL BASIC NOTIONS SEMINAR Jean-Pierre Serre (COLLÈGE DE FRANCE) Some simple facts on lattices and orthogonal group representations May 03, 2017, 3:00 pm at Science Center Hall D
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Ilya Soloveychik (HARVARD SCHOOL OF ENGINEERING & APPLIED SCIENCES) Deterministic Random Matrices May 03, 2017, 3:00 pm at CMSA Building, 20 Garden St, G02
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Simona Cocco (CNRS & ÉCOLE NORMALE SUPÉRIEURE, PARIS, FRANCE) Reverse modeling of protein sequence data: from graphical models to structural and functional predictions. May 02, 2017, 4:00pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS SPECIAL SEMINAR David Carchedi (GEORGE MASON UNIVERSITY) Dg-manifolds as a model for derived manifolds April 27, 2017, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Marc Hoyois (MIT) Bloch-Kato implies Beilinson-Lichtenbaum April 27, 2017, 4:00 - 6:00 pm *note change in time this week* at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Mehran Kardar (MIT) Levitation by Casimir forces in and out of equilibrium April 27, 2017, 3:00 pm at CMSA Building, 20 Garden St, G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Eduard Duryev (HARVARD UNIVERSITY) Dilation surfaces: geometry, dynamics and moduli space. Part 2 April 26, 2017, 4:00 - 6:00 pm at Science Center 530
RANDOM MATRIX & PROBABILITY SEMINAR Ashkan Nikeghbali (UNIVERSITY OF ZURICH) Random Matrix models in number theory: an infinite-dimensional point of view. April 26, 2017, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Henri Darmon (MCGILL UNIVERSITY) Singular moduli for real quadratic fields: a rigid analytic approach. April 26, 2017, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Georgios Daskalopoulos (BROWN UNIVERSITY) Rigidity of Group Actions in General NPC Spaces April 25, 2017, 4:15 pm at Science Center 507
HARVARD MIT ALGEBRAIC GEOMETRY SEMINAR Yaim Cooper (HARVARD UNIVERSITY) A Fock Space Approach to Severi Degrees April 25, 2017, 3:00 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Fei Wei (UNIVERSITY OF NEW HAMPSHIRE) On the divisor function in arithmetic progressions (joint work with Yitang Zhang and Boqing Xue) April 25, 2017, 2:45 PM at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Patrick Jefferson (HARVARD UNIVERSITY) Towards A Classification of 5d N = 1 SCFTs April 24, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Dan Berwick-Evans (UNIVERSITY OF ILLINOIS URBANA-CHAMPAIGN) Field theories and elliptic cohomology from the vantage of characters April 21, 2017, 3:00 pm at MIT 2-131 *note special location*
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (THE QGM AARHUS AND CMSA HARVARD) Proof of S-duality conjecture on quintic threefold II April 21, 2017, 9:00 - 10:30 AM at CMSA Building, 20 Garden St, G10
GAUGE THEORY, TOPOLOGY AND SYMPLECTIC GEOMETRY SEMINAR Jonathan Weitsman (NORTHEASTERN UNIVERSITY) On the geometric quantization of (some) Poisson manifolds April 21, 2017, 3:30 - 4:30 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS COLLOQUIUM Cumrun Vafa (HARVARD PHYSICS) String Swampland April 19, 2017, 4:30 PM at CMSA Building, 20 Garden St, G10
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Weijun Xu (UNIVERSITY OF WARWICK) Meaning of infinities in KPZ and Phi^4_3 April 19, 2017, 3:00 - 4:00 PM *note correction of time* at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Noam Elkies (HARVARD UNIVERSITY) Remarks on isogenies between elliptic curves over low-degree number fields April 19, 2017, 3:00 pm at Science Center 507
INFORMAL GEOMETRY & DYNAMICS SEMINAR Jane Wang (MIT) Dilation surfaces: geometry, dynamics and moduli space. Part 1 April 19, 2017, 4:00 PM at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (THE QGM AARHUS AND CMSA HARVARD) Proof of S-duality conjecture on quintic threefold I April 19, 2017, 9:00 - 10:30 AM at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Fangyang Zheng (OHIO STATE UNIVERSITY) On compact Kahler manifolds with positive or negative holomorphic sectional curvature April 18, 2017, 4:15 pm at Science Center 507
HARVARD MIT ALGEBRAIC GEOMETRY SEMINAR Angelo Vistoli (SCUOLA NORMALE) Motive classes of classifying spaces April 18, 2017, 3:00 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Charles Zhaoxi Xiong (HARVARD UNIVERSITY) What generalized cohomology theories may have to do with symmetry protected topological phases April 18, 2017, 2:45 PM at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Ingmar Saberi (UNIVERSITÄT HEIDELBERG) Holographic lattice field theories April 17, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (AARHUS UNIVERSITY/CMSA) DT versus MNOP invariants and S-duality conjecture on general complete intersections April 14, 2017, 9:00 - 10:30 am at CMSA Building, 20 Garden St, G10
GAUGE THEORY, TOPOLOGY AND SYMPLECTIC GEOMETRY SEMINAR Alvaro Pelayo (UC SAN DIEGO) Introduction to symplectic and spectral geometry of finite dimensional integrable systems April 14, 2017, 3:30 - 4:30 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Lothar Göttsche (ICTP) Virtual refinements of the Vafa-Witten formula April 13, 2017, 9:30 am at CMSA Building, 20 Garden St, G10
MATHEMATICAL PHYSICS SEMINAR Nima Arkani-Hamed (IAS, PRINCETON) The Amplituhedron and Scattering Amplitudes as Binary Code April 13, 2017, 2:45 pm at Jefferson 453
HARVARD, BRANDEIS, MIT, NORTHEASTERN JOINT COLLOQUIUM AT HARVARD Tom Church (STANFORD UNIVERSITY) Asymptotic representation theory over Z April 13, 2017, Tea at 4:00 PM, the Austine & Chilton McDonnell Common Room, Science Center 4th Floor, Talk at 4:30 pm at Science Center Hall A
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (AARHUS UNIVERSITY/CMSA) Stable pair PT invariants on nodal fibrations: perverse sheaves, Wallcrossings, and an analog of fiberwise T-duality April 12, 2017, 9:00 - 10:30 am at CMSA Building, 20 Garden St, G10
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Oanh Nguyen (YALE UNIVERSITY) Roots of random polynomials April 12, 2017, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Shlomo Razamat (ISRAEL INSTITUTE OF TECHNOLOGY) Complicated four dimensional physics and simple mathematics April 12, 2017, 4:30 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Jan Vonk (MCGILL UNIVERSITY) Crystalline cohomology of towers of curves April 12, 2017, 3:00 pm at Science Center 507
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Subhajit Goswami (UNIVERSITY OF CHICAGO) Liouville first-passage percolation and Watabiki's prediction April 12, 2017, 3:00 pm at CMSA Building, 20 Garden St, G10
HARVARD MIT ALGEBRAIC GEOMETRY SEMINAR Mihnea Popa (NORTHWESTERN UNIVERSITY) Hodge ideals April 11, 2017, 3:00 pm at MIT 4-153
DIFFERENTIAL GEOMETRY SEMINAR Luca Spolaor (MIT) Optimal regularity for three dimensional mass minimizing cones in arbitrary codimension April 11, 2017, 4:15 pm at Science Center 507
HARVARD MIT ALGEBRAIC GEOMETRY SEMINAR Mihnea Popa (NORTHWESTERN UNIVERSITY) Families of varieties and Hodge theory April 10, 2017, 12:00 pm **Special Talk** at MIT 2-361
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Burkhard Schwab (CMSA) Large Gauge symmetries in Supergravity April 10, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR John Loftin (RUTGERS UNIVERSITY) Families of projective domains and neck-separating degenerations of convex $RP^2$ surfaces April 06, 2017, 3:00 - 4:30 pm at Science Center 232
NUMBER THEORY SEMINAR Frank Calegari (UNIVERSITY OF CHICAGO) Modularity lifting theorems beyond Shimura varieties April 05, 2017, 3:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (AARHUS UNIVERSITY/CMSA) Stable pair PT invariants on smooth fibrations April 05, 2017, 9:00 - 10:30 am at CMSA Building, 20 Garden St, G10
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Steven Heilman (UCLA) Noncommutative Majorization Principles and Grothendieck's Inequality April 05, 2017, 3:00 pm at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Scott Mullane (HARVARD UNIVERSITY) Extremal effective divisors in $M_{g,n}$ April 04, 2017, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Chiu-Chu Melissa Liu (COLUMBIA UNIVERSITY) GW theory, FJRW theory, and MSP fields April 04, 2017, 4:15 pm *change in time this week* at Science Center 507 *change in location this week*
MATHEMATICAL PHYSICS SEMINAR Zhengwei Liu (HARVARD UNIVERSITY) Quon language in mathematics April 04, 2017, 2:45 pm at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Nathan Haouzi (UC BERKELEY) Little Strings and Classification of surface defects April 03, 2017, 12:00 pm at CMSA Building, 20 Garden St, G02
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Etale motivic cohomology II March 30, 2017, 3:00 - 5:00 PM at Science Center 507
BRANDEIS, HARVARD, MIT, NORTHEASTERN JOINT MATHEMATICS COLLOQUIUM AT HARVARD Alexander Goncharov (YALE UNIVERSITY) Quantum Hodge Field Theory March 30, 2017, 4:30 pm, Tea at 4 pm in the Math Common Room at Science Center Hall A
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Rak-Kyeong Seong (UPPSALA UNIVERSITY) The Mirror and the Elliptic Genus of Brane Brick Models March 30, 2017, 3:00 pm at CMSA Building, 20 Garden St, G02
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Nina Holden (MIT) Percolation-decorated triangulations and their relation with SLE and LQG March 29, 2017, 3:00 pm at CMSA Building, 20 Garden St, G02
NUMBER THEORY SEMINAR Martin Olsson (UC BERKELEY) Local fundamental groups and reduction mod $p$ March 29, 2017, 3:00 pm at Science Center 507
INFORMAL GEOMETRY & DYNAMICS SEMINAR Matt Bainbridge (INDIANA UNIVERSITY) Smooth compactifications of strata of abelian differentials March 29, 2017, 4:00 pm at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Leslie Greengard (COURANT INSTITUTE) Inverse problems in acoustic scattering and cryo-electron microscopy March 29, 2017, 4:00 pm at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR David Stapleton (STONY BROOK UNIVERSITY) Hilbert schemes of points on surfaces and their tautological bundles March 28, 2017, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Jordan Keller (COLUMBIA UNIVERSITY) Linear stability of Schwarzschild spacetime March 28, 2017, 3:00 - 4:00 PM at CMSA Building, 20 Garden St, G02
MATHEMATICAL PHYSICS SEMINAR Hannes Pichler (HARVARD UNIVERSITY) Photonic tensor networks produced by a single quantum emitter March 28, 2017, 2:45 pm at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Agnese Bissi (HARVARD UNIVERSITY) Loops in AdS from conformal symmetry March 27, 2017, 12:00 pm at CMSA Building, 20 Garden St, G02
JOINT DEPT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Chiranjib Mukherjee (COURANT INSTITUTE) Compactness and Large Deviations March 24, 2017, 2:00 at Science Center Room 232
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Etale motivic cohomology March 23, 2017, 3:00 - 5:00 PM at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Alexander Fribergh (UNIVERSITÉ DE MONTRÉAL) The ant in the labyrinth March 22, 2017, 3:00 PM at CMSA Building, 20 Garden St, G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Oleg Ivrii (CALIFORNIA INSTITUTE OF TECHNOLOGY) Differentiating Blaschke products March 22, 2017, 4:00 pm at Science Center 507
NUMBER THEORY SEMINAR Peter Sarnak (PRINCETON UNIVERSITY) Integral points on Markoff type cubic surfaces March 22, 2017, 3:00 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR James Wootton (UNIVERSITY OF BASEL) Topological Error Correction March 21, 2017, 2:45 pm at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Danielle Li (HARVARD UNIVERSITY) Financing Novel Drugs March 21, 2017, 4:30 - 5:30 PM at CMSA Building, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Brian Osserman (UC DAVIS) Limit linear series and the maximal rank conjecture March 21, 2017, 3:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Philippe Sosoe (CMSA) New bounds for the chemical distance in 2D critical percolation March 20, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Aleksey Zinger (STONY BROOK UNIVERSITY) Enumerative geometry of curves: old and new March 14, 2017 **CANCELED DUE TO WEATHER; to be rescheduled**
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (THE QGM AARHUS AND CMSA HARVARD) Conifold Transitions and modularity of DT invariants on Nodal fibrations March 10, 2017, 9:00 - 10:30 am at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Andreas Malmendier (UTAH STATE UNIVERSITY) Kummer surfaces, modular forms, and their roles in string dualities March 08, 2017, 11 am - 12 pm at CMSA Building, 20 Garden St, G10
INFORMAL GEOMETRY & DYNAMICS SEMINAR Dawei Chen (BOSTON COLLEGE & HARVARD UNIVERSITY) Extremal and rigid divisors on moduli spaces of curves March 08, 2017, 4:00 pm at Science Center 530
NUMBER THEORY SEMINAR Taylor Dupuy (VERMONT) On The Integers of CC(t)^{alg} March 08, 2017, 3:00 pm at Science Center 507
CMSA SPECIAL LECTURE SERIES ON DONALDSON-THOMAS AND GROMOV-WITTEN THEORIES Artan Sheshmani (AARHUS UNIVERSITY/CMSA) Modularity of DT invariants on smooth K3 fibrations II March 08, 2017, 9:00 - 10:30 am at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Xiaoqin Guo (STONY BROOK UNIVERSITY) Harnack inequality for a balanced random environment March 08, 2017, 3:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Teng Fei (COLUMBIA UNIVERSITY) A construction of infinitely many solutions to the Strominger system March 07, 2017, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Krishna Pendakur (HARVARD AND SIMON FRASER UNIVERSITY) Infant Mortality and the Repeal of Federal Prohibition March 07, 2017, 4:30 - 5:30 PM at CMSA Building, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Bill Fulton (UNIVERSITY OF MICHIGAN) Degeneracy Loci with a Line Bundle March 07, 2017, 3:00 pm at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Andreas Malmendier (UTAH STATE UNIVERSITY) Kummer surfaces, modular forms, and their roles in string dualities March 07, 2017, 2:45 pm at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Tom Rudelius (HARVARD PHYSICS) 6D SCFTs and Group Theory March 06, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Akhil Mathew (HARVARD UNIVERSITY) The Rost Motive March 02, 2017, 3:00 - 5:00 PM at Science Center 507
BRANDEIS, HARVARD, MIT, NORTHEASTERN JOINT MATHEMATICS COLLOQUIUM AT HARVARD Tony Yue Yu (UNIVERSITÉ PARIS-SUD) Counting open curves via Berkovich geometry March 02, 2017, 4:30 PM at Science Center Hall A
INFORMAL GEOMETRY & DYNAMICS SEMINAR Russell Lodge (STONY BROOK UNIVERSITY) Global dynamics of multi curves in complex dynamics March 01, 2017, 4:00 pm at Science Center 530
CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS COLLOQUIUM Jun Liu (HARVARD UNIVERSITY DEPARTMENT OF STATISTICS) Expansion of biological pathways by integrative Genomics March 01, 2017, 4:30 PM at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Shirshendu Ganguly (UC BERKELEY) Large deviation and counting problems in sparse settings March 01, 2017, 3:00 PM at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR David Hansen (COLUMBIA UNIVERSITY) Some remarks on local Shimura varieties March 01, 2017, 3:00 pm at Science Center 507
HARVARD MIT ALGEBRAIC GEOMETRY SEMINAR Daniel Litt (COLUMBIA UNIVERSITY) Arithmetic Restrictions on Geometric Monodromy February 28, 2017, 3:00 pm at MIT 4-153
DIFFERENTIAL GEOMETRY SEMINAR Nick Edelen (MIT) Quantitative Reifenberg for Measures February 28, 2017, 3:15 PM at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Wenbin Yan (CMSA) Argyres-Douglas Theories, Vertex Operator Algebras and Wild Hitchin Characters February 27, 2017, 12:00 PM at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Voevodsky's condition H90(n) February 23, 2017, 3:00 - 5:00 PM at Science Center 507
NUMBER THEORY SEMINAR Yiwei She (IAS AND COLUMBIA UNIVERSITY) The (unpolarized) Shafarevich conjecture for K3 surfaces February 22, 2017, 3:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Steven Rayan (UNIVERSITY OF SASKATCHEWAN) Higgs bundles and the Hitchin system February 22, 2017, 4:30 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Bob Hough (STONY BROOK UNIVERSITY) Random walk on unipotent groups February 22, 2017, 3:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Alex Waldron (SIMONS CENTER AT STONY BROOK) Long-time existence for Yang-Mills flow February 21, 2017, 3:15 - 4:15 PM at CMSA Building, 20 Garden St, G10
MATHEMATICAL PHYSICS SEMINAR Masahito Yamazaki (UNIVERSITY OF TOKYO) Integrable Lattice Models from Gauge Theory February 21, 2017, 2:45 pm at Jefferson 453
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR David Hyeon (SEOUL NATIONAL UNIVERSITY) Commuting nilpotents modulo simultaneous conjugation and Hilbert scheme February 21, 2017, 3:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Ravi Jagadeesan (HARVARD UNIVERSITY) Complementary inputs and the existence of stable outcomes in large trading networks February 21, 2017, 4:30 - 5:30 PM at CMSA Building, 20 Garden St, G02
GAUGE THEORY, TOPOLOGY AND SYMPLECTIC GEOMETRY SEMINAR Stefan Mueller (GEORGIA SOUTHERN) C^0-characterization and rigidity of symplectic and Lagrangian February 17, 2017, 3:30 - 4:30 pm at Science Center 507
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Voevodsky's application of Steenrod operations to the Milnor conjecture II February 16, 2017, 3:00 - 5:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES & APPLICATIONS COLLOQUIUM Masahito Yamazaki (IPMU) Geometry of 3-manifolds and Complex Chern-Simons Theory February 15, 2017, 4:30-5:30 PM at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Lisa Hartung (COURANT INSTITUTE) The Structure of Extreme Level Sets in Branching Brownian Motion February 15, 2017, 3:00 PM at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Michael Zieve (UNIVERSITY OF MICHIGAN) Uniform boundedness for maps between curves February 15, 2017, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Shamil Shakirov (HARVARD UNIVERSITY) Integrability of Refined Chern-Simons Theory February 14, 2017, 2:45 pm at Jefferson 453
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Mauricio Fernández Duque (HARVARD UNIVERSITY) Pluralistic Ignorance and Preference Complementarities February 14, 2017, 4:30 - 5:30 pm at CMSA Building, 20 Garden St, G02
DIFFERENTIAL GEOMETRY SEMINAR Artan Sheshmani (AARHUS UNIVERSITY/CMSA) Nested Hilbert schemes and DT theory of local threefolds February 14, 2017, 4:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Arnav Tripathy (HARVARD UNIVERSITY) Further counterexamples to the integral Hodge conjecture February 14, 2017, 3:00 pm at MIT 4-153
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Artan Sheshmani (AARHUS UNIVERSITY/CMSA) The theory of Nested Hilbert schemes on surfaces February 13, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Tom Luo (UNIVERSITY OF MINNESOTA) Semidefinite Relaxation of Nonconvex Quadratic Optimization February 09, 2017, 1:30 - 2:30 pm at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Voevodsky's application of Steenrod operations to the Milnor conjecture February 09, 2017, 3:00 pm at Science Center 507
INFORMAL GEOMETRY & DYNAMICS SEMINAR Fabian Haiden (HARVARD UNIVERSITY) Flat surfaces and stability structures on categories February 08, 2017, 4:00 pm at Science Center 530
NUMBER THEORY SEMINAR Joël Bellaïche (BRANDEIS UNIVERSITY) Lacunarity of modular forms February 08, 2017, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Matthew Headrick (BRANDEIS UNIVERSITY) Quantum entanglement, classical gravity, and convex programming: New connections February 08, 2017, 4:30 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Nikhil Naik (MIT) Visual Urban Sensing: Understanding Cities with Computer Vision February 07, 2017, 4:30 - 5:30 PM at CMSA Building, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Aaron Bertram (UNIVERSITY OF UTAH) Counting finite quot schemes February 07, 2017, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Dan Mangoubi (HEBREW UNIVERSITY OF JERUSALEM) Harmonic functions sharing the same zero set February 07, 2017, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Christophe Keller (HARVARD PAULSON SCHOOL OF ENGINEERING AND APPLIED SCIENCES) Mathieu Moonshine and Symmetry Surfing February 06, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
INFORMAL DYNAMICS & GEOMETRY SEMINAR Philip Engel (HARVARD UNIVERSITY) The number of positively curved triangulations on the sphere February 02, 2017, 4:00 pm at Science Center 530
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) Steenrod Operations in Motivic Cohomology February 02, 2017, 3:00 - 5:00 PM at Science Center 507
NUMBER THEORY SEMINAR Bjorn Poonen (MIT) Local arboreal representations February 01, 2017, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Sean Eddy (HARVARD DEPT OF MOLECULAR AND CELLULAR BIOLOGY) Biological sequence homology searches: the future of deciphering the past February 01, 2017, 4:30 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Yu Qiu (CHINESE UNIVERSITY OF HONG KONG) Stability conditions for quivers via exchange graphs January 31, 2017, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Yu Qiu (CHINESE UNIVERSITY OF HONG KONG) Spherical twists on 3-Calabi-Yau categories of quivers with potentials from surfaces and spaces of stability conditions January 30, 2017, 12:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Yong Lin (RENMIN UNIVERSITY) Heat kernel estimate and solution of semi linear heat equations on graphs January 24, 2017, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Anton Kapustin (CALIFORNIA INSTITUTE OF TECHNOLOGY) Fermionic state-sum models and topological field theory January 12, 2017, 4:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Gerard Ben Arous (COURANT INSTITUTE) Complexity of random functions of many variables: from geometry to statistical physics and deep learning algorithms January 11, 2017, 4:00 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Brian Rider (TEMPLE UNIVERSITY) Universality for the random matrix hard edge December 14, 2016, 3:00 PM at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY SEMINAR Lino Campos Amorim (BOSTON UNIVERSITY) Fukaya category of a compact toric manifold December 08, 2016, 2:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Valentino Tosatti (NORTHWESTERN UNIVERSITY) Metric limits of hyperkahler manifolds December 07, 2016, 4:30 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR: Dan Romik (UC DAVIS) A Pfaffian point process for Totally Symmetric Self Complementary Plane Partitions December 07, 2016, 3:00 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Andrew Sutherland (MIT) Sato-Tate in dimension 3 December 07, 2016, 3:00 PM at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Gabriele di Cerbo (COLUMBIA UNIVERSITY) Log birational boundedness of Calabi-Yau pairs December 6, 2016, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Mattias Jonsson (UNIVERSITY OF MICHIGAN) A variational approach to the Yau-Tian-Donaldson conjecture December 06, 2016, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Hansol Hong (CMSA) Mirror construction via formal deformation of Lagrangians December 05, 2016, 12:00 pm at CMSA Building, 20 Garden St, G10
GAUGE THEORY, TOPOLOGY AND SYMPLECTIC GEOMETRY SEMINAR Brendan McLellan (HARVARD UNIVERSITY) Some 1-loop results in contact Chern-Simons theory December 02, 2016, 3:30 - 4:30 pm at Science Center 507
THURSDAY SEMINAR Dennis Gaitsgory (HARVARD UNIVERSITY) Voevodsky's motives December 01, 2016, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Sharad Ramanathan (HARVARD MCB AND SEAS) Finding co-ordinate systems to monitor the development of mammalian embryos November 30, 2016, 4:20 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Akshay Venkatesh (STANFORD) The action of the derived Hecke algebra on weight one forms November 30, 2016, 3:00 pm at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR James Lee (UNIVERSITY OF WASHINGTON) Conformal growth rates, spectral geometry, and distributional limits of graphs November 30, 2016, 3:00 pm at CMSA Building, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Kenny Ascher (COLUMBIA UNIVERSITY) Uniformity of integral points and the Lang-Vojta conjecture November 22, 2016, 3:00 PM at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Jafar Jafarov (STANFORD UNIVERSITY) SU(N) Wilson loop expectations November 22, 2016, 3:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Xiangfeng Gu (STONYBROOK) Differential Geometric Methods for Engineering Applications November 22, 2016, 4:00 - 5:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Hee Cheol Kim (HARVARD PHYSICS) Defects and instantons in 5d SCFTs November 21, 2016, 12:00 pm at CMSA Building, 20 Garden St, G10
LOGIC COLLOQUIUM Haim Horowitz (HEBREW UNIVERSITY OF JERUSALEM) On the non-existence and definability of mad families November 17, 2016, 4:00 pm at 2 Arrow St, Rm 408
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MEMBERS' SEMINAR Henri Guenancia (STONY BROOK UNIVERSITY) Singular varieties with trivial canonical bundle November 17, 2016, 5:00 pm at CMSA Building, 20 Garden St, G10
HARVARD, BRANDEIS, MIT, NORTHEASTERN JOINT COLLOQUIUM AT HARVARD Carlos Simpson (UNIVERSITE DE NICE) Calculating harmonic maps to buildings---a 2-dimensional combinatorial reduction calculus November 17, 2016, Tea at 4:00 PM, the Austine & Chilton McDonnell Common Room, Science Center 4th Floor, Talk at 4:30 pm at Science Center Hall C
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) A^1 Homotopy Theory November 17, 2016, 3:00 - 5:00 PM at Science Center 507
NUMBER THEORY SEMINAR Chao Li (COLUMBIA UNIVERSITY) Goldfeld's conjecture and congruences between Heegner points November 16, 2016, 3:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Tristan Collins (HARVARD UNIVERSITY) Restricted volumes and finite time singularities of the Kahler-Ricci flow November 16, 2016, 3:30 pm *Special time* at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Yu Gu (STANFORD) Local vs global random fluctuations in stochastic homogenization November 16, 2016, 2:30 pm *note change in time* at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Ben Roth (MIT) Keeping the Little Guy Down: A Debt Trap for Informal Lending November 15, 2016, 4:30 - 5:30 PM at CMSA Building, 20 Garden St, G02
DIFFERENTIAL GEOMETRY SEMINAR Sebastien Picard (COLUMBIA UNIVERSITY) Geometric Flows and Strominger systems November 15, 2016, 4:15 pm at Science Center 507
NUMBER THEORY SEMINAR Jared Weinstein (BOSTON UNIVERSITY) The cohomology of local Shimura varieties November 09, 2016, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Norden E. Huang (NATIONAL CENTRAL UNIVERSITY) On Holo-Hilbert Spectral Analysis November 09, 2016, 4:30 pm at CMSA Building, 20 Garden St, G10
RANDOM MATRIX & PROBABILITY THEORY SEMINAR Boaz Barak (HARVARD PAULSON SCHOOL) Computational Bayesianism, sums of squares, cliques, and unicorns November 09, 2016, 3:00 PM at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Nicolaos Kapouleas (BROWN UNIVERSITY) Gluing constructions for minimal surfaces and other geometric objects November 08, 2016, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SOCIAL SCIENCE APPLICATIONS FORUM Sifan Zhou (CMSA) Non-Compete Agreements and the Career of PhDs November 08, 2016, 4:30 PM - 5:30 PM at CMSA Building, 20 Garden St, G02
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Francois Greer (UNIVERSITY OF NOTRE DAME) Noether-Lefschetz Theory and Elliptic Calabi-Yau Threefolds November 08, 2016, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Seung-Joo Lee (VIRGINIA TECH) Multiple Fibrations in Calabi-Yau Geometry and String Dualities November 07, 2016, 12:00 PM at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR An Huang (HARVARD UNIVERSITY) Tautological Systems November 04, 2016, 2:00 - 3:00 pm at CMSA Building, 20 Garden St, G10
LOGIC COLLOQUIUM Theodore A. Slaman (UC BERKELEY) Recursion Theory and Diophantine Approximation November 03, 2016, 4:00 pm at 2 Arrow St, Rm 420
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY Yusuf Baris Kartal (MIT) HMS for Del Pezzo surfaces November 03, 2016, 2:00 PM - 4:00 PM at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Marc Hoyois (MIT) The Nesterenko-Suslin-Totaro theorem November 03, 2016, 3:00 - 5:00 PM at Science Center 507
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Simion Filip (HARVARD UNIVERSITY) From Teichmueller curves to higher-dimensional invariant subvarieties November 02, 2016, 4:00 PM at Science Center 530
SPECIAL SEMINAR Henry Cohn (MICROSOFT) Sphere packing problem in 8 and 24 dimensions November 02, 2016, 10:30 AM - 12:00 PM at Science Center 530
NUMBER THEORY SEMINAR No seminar this week (November 02, 2016)
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Ramon van Handel (PRINCETON UNIVERSITY) Inhomogeneous random matrices November 02, 2016, 3:00 pm at CMSA Building, 20 Garden St, G10
DIFFERENTIAL GEOMETRY SEMINAR Robert Haslhofer (UNIVERSITY OF TORONTO) The moduli space of 2-convex embedded spheres November 01, 2016, 3:00 - 4:00 PM *Special Time* at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Joseph Minahan (UPPSALA UNIVERSITY) Supersymmetric gauge theories on $d$-dimensional spheres October 31, 2016, 12:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS SPECIAL SEMINAR Bong Lian (BRANDEIS UNIVERSITY) Tautological systems October 28, 2016, 12:45 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY SEMINAR Philip Engel (HARVARD UNIVERSITY) Mirror symmetry in the complement of an anticanonical divisor October 27, 2016, 2:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Ananth Shankar (HARVARD UNIVERSITY) Abelian varieties isogenous to Jacobians October 26, 2016, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Henry Cohn (MICROSOFT) Sums of squares, correlation functions, and exceptional geometric structures October 26, 2016, 4:30 pm at CMSA Building, 20 Garden St, G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Wei Wu (NEW YORK UNIVERSITY) Extremal and local statistics for gradient field models October 26, 2016, 3:00 PM at CMSA Building, 20 Garden St, G10
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Ian Shipman (HARVARD UNIVERSITY) Ulrich bundles and a generalization October 25, 2016, 3:00 pm at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Jonathan Zhu (HARVARD UNIVERSITY) Entropy and self-shrinkers of the mean curvature flow October 25, 2016, 3:00 - 4:00 pm *special time* at CMSA Building, 20 Garden St, G10 *different location*
SPECIAL SEMINAR SERIES, JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS Masaki Kashiwara (RIMS, KYOTO UNIVERSITY) Indsheaves and Riemann-Hilbert correspondence of holonomic D-modules, Part 2 October 25, 2016, 4:15 - 5:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Arnav Tripathy (HARVARD UNIVERSITY) Spinning BPS states and motivic Donaldson-Thomas invariants October 24, 2016, 12:00 pm at CMSA Building, 20 Garden St, G10
GAUGE THEORY AND SYMPLECTIC GEOMETRY SEMINAR Colin Adams (WILLIAMS COLLEGE) Volume and volume density for hyperbolic knots and links October 21, 2016, 3:30 - 4:30 pm at Science Center 507
THURSDAY SEMINAR Marc Hoyois (MIT) Weight one motivic cohomology October 20, 2016, 3:00 - 5:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Vaughan Jones (VANDERBILT UNIVERSITY) Scale Invariant Transfer Matrix and Hamiltonian for Quantum Spin Chains Pt. II October 20, 2016, 1:15 pm at Jefferson 256
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY SEMINAR Tim Large (MIT) Symplectic cohomology and wrapped Fukaya categories October 20, 2016, 2:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
LOGIC COLLOQUIUM Joel David Hamkins (CITY UNIVERSITY OF NEW YORK) Recent advances in set-theoretic geology October 20, 2016, 4:00 pm at 2 Arrow St, Rm 420
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Vaughan Jones (VANDERBILT UNIVERSITY) Are the Thompson groups any good as a model for Diff(S^1)? October 19, 2016, 4:30 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Rong Zhou (HARVARD UNIVERSITY) Mod p isogeny classes on Shimura varieties with parahoric level structure. October 19, 2016, 3:00 PM at Science Center 507
INFORMAL DYNAMICS & GEOMETRY SEMINAR Bena Tshishiku (HARVARD UNIVERSITY) Moduli space, surface bundles, and the Atiyah-Kodaira examples October 19, 2016, 4:00 pm at Science Center 530
MATHEMATICAL PHYSICS SEMINAR Vaughan Jones (VANDERBILT UNIVERSITY) Scale Invariant Transfer Matrix and Hamiltonian for Quantum Spin Chains October 18, 2016, 2:45 pm at Jefferson 453
DIFFERENTIAL GEOMETRY SEMINAR Jian Xiao (NORTHWESTERN UNIVERSITY) Positivity in the convergence of the inverse $\sigma_k$ flow October 18, 2016, 3:00 - 4:00 PM *Special Time* at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Claudiu Raicu (UNIVERSITY OF NOTRE DAME) Cohomology of determinantal thickenings October 18, 2016, 3:00 pm at MIT 4-153
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Fabian Haiden (HARVARD UNIVERSITY) Balanced filtrations and asymptotics for semistable objects October 17, 2016, 12:00 pm at CMSA Building, 20 Garden St, G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MEMBERS' SEMINAR Vaughan Jones (VANDERBILT UNIVERSITY) The Thompson groups as scaling symmetries of quantum spin chains October 14, 2016, 5:00 pm at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) Statement of the Milnor conjectures October 13, 2016, 3:00 - 5:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Conan Leung (CHINESE UNIVERSITY OF HONG KONG) Coisotropic A-branes and their SYZ transform October 12, 2016, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
NUMBER THEORY SEMINAR Yihang Zhu (HARVARD UNIVERSITY) The Hasse-Weil zeta functions of the intersection cohomology of minimally compactified orthogonal Shimura varieties October 12, 2016, 3:00 pm at Science Center 507
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Michael Damron (GEORGIA INSTITUTE OF TECHNOLOGY) Bigeodesics in first-passage percolation October 12, 2016, 3:00 PM at CMSA Building, 20 Garden St, G10
MATHEMATICAL PHYSICS SEMINAR Yasuyuki Kawahigashi (UNIVERSITY OF TOKYO) Gapped Domain Walls Between Topological Phases and Subfactors October 11, 2016, 2:45 PM at Jefferson 453
DIFFERENTIAL GEOMETRY SEMINAR Mark Stern (DUKE UNIVERSITY) Instantons on ALF spaces October 11, 2016, 4:15 pm at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Robert Friedman (COLUMBIA UNIVERSITY) Deformations of cusp singularities October 11, 2016, 3:00 pm at Science Center 507
SPECIAL SEMINAR Nike Sun (UC BERKELEY) Phase transitions in random constraint satisfaction problems October 07, 2016, 3:30 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY SEMINAR Hansol Hong (CMSA) Homological mirror symmetry for elliptic curves October 06, 2016, 2:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) K-Theory of Henselian Rings II October 06, 2016, 3:00 - 5:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Alexander Logunov (TEL AVIV UNIVERSITY) Zeroes of harmonic functions and Laplace eigenfunctions October 05, 2016, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
JOINT DEPARTMENT OF MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Edgar Dobriban (STANFORD UNIVERSITY) Computation, statistics and random matrix theory October 05, 2016, 3:00 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR William Yun Chen (INSTITUTE FOR ADVANCED STUDY) Moduli Interpretations for Noncongruence Modular Curves October 05, 2016, 3:00 PM at Science Center 507
MATHEMATICAL PHYSICS SEMINAR Dietmar Bisch (VANDERBILT UNIVERSITY) Single Generated Planar Algebras October 04, 2016, 2:45 pm at Jefferson 453
DIFFERENTIAL GEOMETRY SEMINAR Alexander Logunov (ST. PETERSBURG & TEL-AVIV) Zero set of a non-constant harmonic function in R^3 has infinite surface area October 04, 2016, 4:15 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Masahito Yamazaki (IPMU) Conformal Blocks and Verma Modules October 03, 2016, 12:00 pm at CMSA Building, 20 Garden St, G10
GAUGE THEORY, TOPOLOGY, AND SYMPLECTIC GEOMETRY SEMINAR Vladimir Chernov (DARTMOUTH UNIVERSITY) Minimizing intersection points of loops on a surface and the Andersen-Mattes-Reshetikhin Poisson bracket. (based on a joint work with Patricia Cahn) September 30, 2016, 3:30 - 4:30 PM at Science Center 507
THURSDAY SEMINAR Jacob Lurie (HARVARD UNIVERSITY) K-Theory of Henselian Rings September 29, 2016, 3:00 - 5:00 pm at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY SEMINAR Netanel Blaier (BRANDEIS UNIVERSITY) Intro to HMS 2 September 29, 2016, 2:00 - 4:00 pm at CMSA Building, 20 Garden St, G10
NUMBER THEORY SEMINAR Min Ru (UNIVERSITY OF HOUSTON) Height inequalities in Diophantine approximation September 28, 2016, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Hong Liu (MIT) A new theory of fluctuating hydrodynamics September 28, 2016, 4:30 PM at Science Center 507
JOINT HARVARD MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Antonio Auffinger (NORTHWESTERN UNIVERSITY) Parisi formula for the ground state energy of the Sherrington-Kirkpatrick model September 28, 2016, 3:00 PM at CMSA Building, 20 Garden Street, Room G10
INFORMAL GEOMETRY AND DYNAMICS SEMINAR Curtis McMullen (HARVARD UNIVERSITY) Introduction to Teichmueller curves in genus 2 September 28, 2016, 4:00 pm at Science Center 530
DIFFERENTIAL GEOMETRY SEMINAR Tian-Jun Li (UNIVERSITY OF MINNESOTA) Symplectic divisor compactifications in dimension 4 September 27, 2016, 4:15 pm at Science Center 507
HARVARD MATHEMATICAL PHYSICS SEMINAR Zhengwei Liu (HARVARD UNIVERSITY) A New Diagrammatic Approach to Quantum Information--the four string model September 27, 2016, 2:45 pm at Jefferson 453
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Junliang Shen (ETH ZURICH) Elliptic Calabi-Yau 3-folds, Jacobi forms, and derived categories September 27, 2016, 3:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Can Kozcaz (CMSA) Cheshire Cat Resurgence September 26, 2016, 12:00 PM at CMSA Building, 20 Garden St, G10
GAUGE THEORY, TOPOLOGY, AND SYMPLECTIC GEOMETRY SEMINAR Paul Feehan (RUTGERS UNIVERSITY) The Lojasiewicz-Simon gradient inequality and applications to energy discreteness and gradient flows in gauge theory September 23, 2016, 3:30 - 4:30 PM at Science Center 507
THURSDAY SEMINAR Akhil Mathew (HARVARD UNIVERSITY) Thomason's étale descent theorem in algebraic K-theory September 22, 2016, 3:00 - 5:00 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS HOMOLOGICAL MIRROR SYMMETRY SEMINAR Netanel Blaier (BRANDEIS UNIVERSITY) Intro to HMS September 22, 2016, 2:00 - 4:00 PM at CMSA Building, 20 Garden Street, Room G10
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM L. Mahadevan (SEAS, DEPT OF PHYSICS AND DEPT OF ORGANISMAL AND EVOLUTIONARY BIOLOGY, HARVARD UNIVERSITY) Morphogenesis: Biology, Physics and Mathematics September 21, 2016, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
JOINT HARVARD MATHEMATICS AND CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS RANDOM MATRIX AND PROBABILITY THEORY SEMINAR Stephane Benoist (MIT) Near critical spanning forests September 21, 2016, 3:00 PM at CMSA Building, 20 Garden Street, Room G10
NUMBER THEORY SEMINAR Alexandra Shlapentokh (EAST CAROLINA UNIVERSITY) On first-order definability and decidability problems over number fields and their infinite algebraic extensions September 21, 2016, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Gao Chen (STONY BROOK) Classification of gravitational instantons September 20, 2016, 4:15 PM at Science Center 507
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS MATHEMATICAL PHYSICS SEMINAR Johannes Kleiner (UNIVERSITY OF REGENSBURG) A New Candidate for a Unified Physical Theory September 19, 2016, 12:00 PM at CMSA Building, 20 Garden Street, Room G10
SPECIAL SEMINAR Maryna Viazovska (HUMBOLDT UNIVERSITY OF BERLIN) The sphere packing problem in dimensions 8 and 24 September 19, 2016, 3:00 PM at Science Center 507
GAUGE THEORY, TOPOLOGY, AND SYMPLECTIC GEOMETRY SEMINAR Eli Grigsby (BOSTON COLLEGE) Annular Khovanov-Lee homology, Braids, and Cobordisms September 16, 2016, 3:30 - 4:30 PM at Science Center 507
HARVARD, BRANDEIS, MIT, NORTHEASTERN JOINT COLLOQUIUM AT HARVARD Ziv Ran (UC RIVERSIDE) Generic Projections September 15, 2016, Tea at 4:00 PM, the Austine & Chilton McDonnell Common Room, Science Center 4th Floor, Talk at 4:30 pm at Science Center Hall A
THURSDAY SEMINAR Mike Hopkins (HARVARD UNIVERSITY) The Lichtenbaum-Quillen conjectures September 15, 2016, 3:00 - 5:00 PM at Science Center 507
INFORMAL DYNAMICS AND GEOMETRY SEMINAR Curtis McMullen (HARVARD UNIVERSITY) Almost simple geodesics on the triply-punctured sphere September 14, 2016, 4:00 PM at Science Center 530
CENTER OF MATHEMATICAL SCIENCES AND APPLICATIONS COLLOQUIUM Sze-Man Ngai (GEORGIA SOUTHERN UNIVERSITY) The multifractal formalism and spectral asymptotics of self-similar measures with overlaps September 14, 2016, 4:30 PM at CMSA Building, 20 Garden Street, Room G10
NUMBER THEORY SEMINAR Jennifer Balakrishnan (BOSTON UNIVERSITY) Iterated p-adic integrals and rational points on curves September 14, 2016, 3:00 - 4:00 PM at Science Center 507
HARVARD/MIT ALGEBRAIC GEOMETRY SEMINAR Brooke Ullery (HARVARD UNIVERSITY) Measures of irrationality for hypersurfaces of large degree September 13, 2016, 3:00 PM at Science Center 507
DIFFERENTIAL GEOMETRY SEMINAR Aruna Kesavan (PENN STATE/CMSA) Asymptotic structure of space-time with a positive cosmological constant | CommonCrawl |
For each poster contribution there will be one poster wall (width: 97 cm, height: 250 cm) available. Please do not feel obliged to fill the whole space. Posters can be put up for the full duration of the event.
Anomalous Hall effect due to skew-scattering on rare impurity configurations
Ado, Ivan
The anomalous Hall effect is crucially affected by skew scattering on pairs of closely located impurities. We demonstrate that a proper description of this mechanism requires a calculation beyond the commonly employed non-crossing approximation. Inclusion of X and $\mathrm{\Psi}$ diagrams with a single pair of intersecting disorder lines essentially modifies previously obtained results. These diagrams constitute an inherent part of the full skew-scattering amplitude. Our argument applies to all models of the anomalous Hall effect and related phenomena, e.g. spin-orbit torque and the spin Hall effect. For illustration, we revise the results for 2D massive Dirac fermions [1] and for the Bychkov-Rashba ferromagnet [2]. [1] I. A. Ado, I. A. Dmitriev, P. M. Ostrovsky, and M. Titov, EPL 111, 37004 (2015). [2] I. A. Ado, I. A. Dmitriev, P. M. Ostrovsky, and M. Titov, arXiv:1511.07413 (2015).
Many-body spin echo
Engl, Thomas
We predict a universal echo phenomenon present in the time evolution of many-body states of interacting quantum systems described by Fermi-Hubbard models. It consists of the coherent revival of transition probabilities echoing a sudden flip of the spins that, contrary to its single-particle (Hahn) version, is not dephased by interactions or spin-orbit coupling. The many-body spin echo signal has a universal shape independent of the interaction strength, and an amplitude and sign depending only on combinatorial relations between the number of particles and the number of applied spin flips. Our analytical predictions, based on semiclassical interfering amplitudes in Fock space associated with chaotic mean-field solutions, are tested against extensive numerical simulations confirming that the coherent origin of the echo lies in the existence of anti-unitary symmetries.
Decoherence of a qubit coupled to a disordered system
Hilke, Michael
In most experimental qubit systems, the qubit is weakly coupled to an environment, such as an electronic gate or a collection of nuclear spins. Often this environment is disordered. Our goal is to quantify the effect of the disorder on the dynamics of the qubit. Conversely, we can also use the dynamics of the qubit to probe the disordered system directly. For this, we solved the dynamics of a qubit (two-level system) attached to an infinite random chain. We obtained expressions for the decoherence rate of the qubit as a function of the disorder of the random chain, which can be expressed in terms of the transmission through the chain evaluated at the eigenenergy of the qubit. Hence, we found a direct correspondence between the dynamics of the qubit and the transmission properties of the disordered environment. This work was done in collaboration with H. Eleuch and R. Mackenzie.
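The stated correspondence suggests a simple numerical check: the transmission of a disordered tight-binding chain at the qubit eigenenergy can be computed with standard 2x2 transfer matrices. Below is a minimal sketch (an Anderson chain with uniform on-site disorder; all parameters are illustrative and not taken from the poster):

```python
import numpy as np

def transmission(eps, E, t=1.0):
    """Transmission T(E) through a tight-binding chain with on-site energies eps,
    attached to two clean semi-infinite leads (hopping t, dispersion E = 2t cos k),
    computed via 2x2 transfer matrices."""
    k = np.arccos(E / (2 * t))                 # lead wavevector, requires |E| < 2t
    a = np.exp(1j * k)
    M = np.eye(2, dtype=complex)
    for e_n in eps:                            # total transfer matrix P_N ... P_1
        M = np.array([[(E - e_n) / t, -1.0], [1.0, 0.0]], dtype=complex) @ M
    N = len(eps)
    # Match psi_n = e^{ikn} + r e^{-ikn} (left lead) to psi_n = tau e^{ikn} (right lead)
    A = np.array([[M[0, 0] / a + M[0, 1], -a**(N + 1)],
                  [M[1, 0] / a + M[1, 1], -a**N]])
    b = -np.array([M[0, 0] * a + M[0, 1], M[1, 0] * a + M[1, 1]])
    r, tau = np.linalg.solve(A, b)
    return abs(tau)**2

rng = np.random.default_rng(0)
W, N, E = 1.0, 200, 0.5                        # disorder strength, chain length, qubit energy
T = [transmission(rng.uniform(-W / 2, W / 2, N), E) for _ in range(500)]
print(f"<T(E={E})> = {np.mean(T):.3e}")
```

Averaging T(E) over disorder realizations then sets the typical scale entering the decoherence rate at that qubit energy.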
Influence of dephasing on the thermalization in fermionic systems
Medvedyeva, Mariya
We are interested in the effects of dephasing on fermionic systems. We study the effects of dephasing noise on a prototypical many-body localized system -- the XXZ spin-1/2 chain with a disordered magnetic field. At times longer than the inverse dephasing strength, the dynamics of the system is described by a probabilistic Markov process on the space of diagonal density matrices, while all off-diagonal elements of the density matrix decay to zero. The generator of the Markovian process is a bond-disordered spin chain. The scaling variable is identified, and independence of the relaxation on the interaction strength is demonstrated. We show that purity and von Neumann entropy are extensive, showing no signatures of localization, while the operator space entanglement entropy exhibits a logarithmic growth with time until the final saturation corresponding to localization breakdown, suggesting many-body localized dynamics of the effective Markov process. We also show that the spectrum of a dissipative fermionic system can be found exactly from the Bethe ansatz equations. We analyze the type of Bethe-ansatz excitations that are most relevant for the relaxation of the system towards the non-equilibrium steady state. Investigating the Bethe-ansatz wave function, we find that the system behaves diffusively. We conclude by presenting future directions in which known Bethe-ansatz solvable models can be used to analyze the dynamics of dissipative systems.
Fractional chern insulators in Harper-Hofstadter bands with higher chern number
Möller, Gunnar
We will discuss the many-body physics that is realised by interacting particles occupying topological flat bands of the Harper-Hofstadter model with Chern number $|C|>1$ [1,2]. We formulate the predictions of Chern-Simons or composite fermion theory in terms of the filling factor, $\nu$, defined as the ratio of particle density to the number of single-particle states per unit area. We show that this theory predicts a series of fractional quantum Hall states with filling factors $\nu = r/(r|C| +1)$ for bosons, or $\nu = r/(2r|C| +1)$ for fermions. This series includes a bosonic integer quantum Hall state (bIQHE) in $|C|=2$ bands. We construct specific cases where a single band of the Harper-Hofstadter model is occupied. For these cases, we provide numerical evidence that several states in this series are realized as incompressible quantum liquids for bosons with contact interactions, with characteristics matching the predictions of composite fermion theory. Finally, we discuss how band-geometric measures influence the stability of generic fractional Chern insulator phases, providing evidence that the many-body gap correlates not only with the flatness of the Berry curvature but also with the Fubini-Study metric tensor [3]. [1] G. Möller, N.R. Cooper, Phys. Rev. Lett. 103, 105303 (2009). [2] G. Möller, N.R. Cooper, Phys. Rev. Lett. 115, 126401 (2015). [3] T. Jackson, G. Möller, R. Roy, Nature Communications 6, 8629 (2015).
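The quoted composite-fermion series is straightforward to enumerate; a small sketch listing its first members:

```python
from fractions import Fraction

def cf_series(C, r_max=4, fermions=False):
    """Filling factors nu = r/(r|C|+1) for bosons or nu = r/(2r|C|+1) for fermions."""
    m = 2 if fermions else 1
    return [Fraction(r, m * r * abs(C) + 1) for r in range(1, r_max + 1)]

print(cf_series(C=2))                  # bosons, |C| = 2: 1/3, 2/5, 3/7, 4/9, ...
print(cf_series(C=1, fermions=True))   # fermions, |C| = 1: 1/3, 2/5, 3/7, 4/9 (Jain series)
```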
Magnon kinetics induced by a laser pulse in half-metallic ferromagnets
Murzaliev, Bektur
We investigate the non-equilibrium magnon distribution induced by a short laser pulse in a half-metallic ferromagnet described by the s-d model. Using the non-equilibrium Green's function formalism, we derive and analyse a quantum kinetic equation for magnons. We demonstrate that non-quasiparticle states in the ferromagnet play the key role in the energy transfer from conduction electrons to magnons.
Direct experimental evidence for the scaling hypothesis of localization: length-dependent electrical conductivity, ultra-high phase coherence length, and mesoscopic thermopower fluctuations
Narayan, Vijay
We present the first experimental demonstration of length-dependent conductivity $\sigma$ consistent with the scaling hypothesis of localisation [1,2]. By measuring the length ($L$)-dependent transport characteristics of several (>18) two-dimensional electron gases (2DEGs) of mesoscopic dimensions, we observe a clear and systematic decrease in $\sigma$ as $L$ increases. We then use a single-parameter fit to extract the electronic mean free path $\ell$ of the 2DEGs and find that $k_F \ell \neq \sigma/\sigma_0$, where $\sigma_0 = e^2/h$ is the conductance quantum. We discuss the implications of this result within the framework of the putative two-dimensional metal-insulator transition. An important conclusion of our experimental results is that the 2DEG maintains phase coherence over the 2DEG dimensions, which can be as large as 10 $\mu$m at 0.3 K. This remarkable and unexpected result gains support from thermopower measurements which show anomalous fluctuations and sign changes as a function of carrier concentration [3,4], as well as asymmetric characteristics upon magnetic field reversal [5], both of which are anticipated in phase-coherent systems. We argue that this behaviour arises due to a decoupling of the 2DEG from the lattice bath due to the specific geometry of the devices, and discuss the relevance of our results to recent ideas [6] pertaining to many-body localization [7]. [1] D. Backes, R. Hall, M. Pepper, H. E. Beere, D. Ritchie and V. Narayan, Phys. Rev. B 92, 174426 (2015). [2] D. Backes, R. Hall, M. Pepper, H. E. Beere, D. Ritchie and V. Narayan, J. Phys. Condens. Matter 28, 01LT01 (2016). [3] V. Narayan et al., Phys. Rev. B 86, 125406 (2012). [4] V. Narayan et al., New J. Phys. 16, 085009 (2014). [5] V. Narayan et al., in preparation (2016). [6] S. Banerjee and E. Altman, Phys. Rev. Lett. 116, 116601 (2016). [7] D. M. Basko, I.L. Aleiner, B.L. Altshuler, Ann. Phys. 321, 1126 (2006).
Manifestation of photo-mechanical coupling in vibrating single-electron transistor
Parafilo, Anton
In recent years the effects of the electron spin degree of freedom on the transport properties of nanoelectromechanical (NEM) systems have been studied intensively. The reason for this is the additional possibility to control the electric current by using an external magnetic field and/or spin-orbit interaction. We have suggested a spin-mediated coupling between a high-frequency electromagnetic microwave field and low-frequency ($\omega$) mechanical vibrations in a magnetic NEM device. The system comprises a single-wall carbon nanotube (CNT) suspended between two ferromagnetic electrodes with opposite magnetizations. A magnetic tip placed near the CNT produces a non-homogeneous magnetic field $B_\parallel$, which induces a magnetic force that acts on the suspended part of the nanowire and depends on its deflection and the electron spin projection. Therefore, the magnetic field provides a coupling between the electronic spin and the mechanical degree of freedom. The system is subjected to an external microwave field. The specific orientation of the magnetic component of the microwave field $B_\perp$, together with the condition that the field frequency ($\Omega$) coincides with the Zeeman energy splitting ($\Delta$), results in spin-flip processes and facilitates electron transport through the NEM device. To analyse the effective coupling between the microwave field and the soft mechanical vibrations, we investigate the time evolution of the CNT flexural oscillations with respect to the electron degree of freedom. The vibrational "ground state" becomes unstable in the case of parallel magnetization between the magnetic tip and the source electrode when $\Delta<\hbar\Omega$. This means that the system enters the shuttling regime, where the mechanical instability develops into pronounced self-sustained vibrations of the CNT resonator. The criterion of the shuttle instability is investigated in the case of Coulomb blockade and in the limit of adiabatic motion $\omega<\Gamma$, where $\Gamma$ is the electron energy level width. Different regimes of nano-mechanical oscillations are analysed.
Coherent backscattering in the Fock space of Bose- and Fermi-Hubbard systems
Schlagheck, Peter
Coherent backscattering generally refers to a significant and robust enhancement of the average backscattering probability of a wave within a disordered medium, which from a semiclassical point of view arises due to the constructive interference between backscattered trajectories and their time-reversed counterparts. We recently investigated the manifestation of this wave interference phenomenon in the Fock space of a disordered Bose-Hubbard system of finite extent [1], which can potentially be realized using ultracold bosonic atoms within optical lattices. Preparing the atoms in a well-defined Fock state of the lattice and letting the system evolve for a finite time will, for suitable parameters of the system and upon some disorder average over random on-site energies of the lattice, generally give rise to an equidistribution of the occupation probability within the energy shell of the Fock space that corresponds to the initial energy of the system, in accordance with the quantum microcanonical ensemble. We find, however, that the initial state is twice as often encountered as other Fock states with comparable total energy, which is a consequence of coherent backscattering [1]. Most recently, we showed that this phenomenon also arises in spin 1/2 Fermi-Hubbard rings that involve Rashba hopping terms (which combine inter-site hoppings with spin flips and arise from spin-orbit coupling), for which a newly developed semiclassical theory [2] correctly predicts a coherent enhancement of the occupation probabilities of the initial state and its spin-flipped counterpart. Moreover, performing a global spin flip within this Fermi-Hubbard system will give rise to significant spin echo peaks on those two Fock states, which is again a consequence of quantum many-body interference [3]. The semiclassical predictions of these enhancements and peaks are found to be in very good agreement with numerical findings obtained from the exact quantum time evolution within this Fermi-Hubbard system. [1] T. Engl, J. Dujardin, A. Argüelles, P. Schlagheck, K. Richter, and J. D. Urbina, Phys. Rev. Lett. 112, 140403 (2014). [2] T. Engl, P. Plößl, J. D. Urbina, and K. Richter, Theoretical Chemistry Accounts 133, 1563 (2014). [3] T. Engl, J. D. Urbina, and K. Richter, arXiv:1409.5684.
Thermal transport in the disordered electron liquid
Schwiete, Georg
We study thermal conductivity in the disordered two-dimensional electron liquid in the presence of long-range Coulomb interactions. We present a Renormalization Group (RG) analysis and include scattering processes induced by the Coulomb interaction in the sub-temperature energy range. For the thermal conductivity, unlike for the electric conductivity, these scattering processes yield a logarithmic correction which may compete with the RG corrections. We use the theory to describe thermal transport on the metallic side of the metal-insulator transition in Si MOSFETs. G. Schwiete and A. M. Finkel'stein, PRB 90, 060201(R) (2014), PRB 90, 155441 (2014), arXiv:1509.02519, arXiv:1510.06529
Towards non-relativistic conformal bootstrap
Surowka, Piotr
Several condensed matter systems obey conformal Schroedinger symmetry. These include anyons, non-viscous fluids, and cold atoms. Unlike in the relativistic case, the symmetry group itself is not powerful enough to determine the three-point correlation functions completely. However, for a certain class of operators the symmetry group together with the Schroedinger equation fixes the three-point correlation functions up to a constant. I will show that this observation is a necessary first step towards a non-relativistic bootstrap approach. I will also show how to determine the conformal blocks of non-relativistic CFTs. Finally, I will discuss the example of anyons, a solvable toy model that is useful for testing the above ideas.
Visualisation of dCas9 target search in vivo using an open-microscopy framework
Koen J. A. Martens, Sam P. B. van Beljouw, Simon van der Els, Jochem N. A. Vink, Sander Baas, George A. Vogelaar, Stan J. J. Brouns, Peter van Baarlen, Michiel Kleerebezem & Johannes Hohlbein (Koen J. A. Martens and Sam P. B. van Beljouw contributed equally)
CRISPR-Cas9 is widely used in genomic editing, but the kinetics of target search and its relation to the cellular concentration of Cas9 have remained elusive. Effective target search requires constant screening of the protospacer adjacent motif (PAM) and a 30 ms upper limit for screening was recently found. To further quantify the rapid switching between DNA-bound and freely-diffusing states of dCas9, we developed an open-microscopy framework, the miCube, and introduce Monte-Carlo diffusion distribution analysis (MC-DDA). Our analysis reveals that dCas9 is screening PAMs 40% of the time in Gram-positive Lactococcus lactis, averaging 17 ± 4 ms per binding event. Using heterogeneous dCas9 expression, we determine the number of cellular target-containing plasmids and derive the copy number dependent Cas9 cleavage. Furthermore, we show that dCas9 is not irreversibly bound to target sites but can still interfere with plasmid replication. Taken together, our quantitative data facilitates further optimization of the CRISPR-Cas toolbox.
The discovery of clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated proteins (Cas) as a microbial defence mechanism triggered an ongoing scientific revolution, as CRISPR-Cas can be adapted to perform sequence-specific DNA modification in prokaryotes, archaea, and eukaryotes1,2,3,4. Streptococcus pyogenes Cas9 is a widely used variant5 and an endonuclease activity-deficient version, termed dead Cas9 (dCas9), has been used to visualise endogenous genomic loci in living cells6. The biochemical interaction mechanisms of Cas9 are well understood7,8,9,10,11,12. The DNA-binding protein domain probes the DNA for a specific protospacer adjacent motif (PAM; 5'-NGG-3') via a combination of 3-dimensional diffusion and 1-dimensional sliding on the DNA9. Upon recognition of the PAM, the enzyme starts unwinding the DNA double helix to test for complementarity with a 20 nucleotide-long single guide RNA (sgRNA; R-loop formation). If full complementarity is found, Cas9 continues to cleave the DNA at a fixed position 3 nucleotides upstream of the PAM13.
Optimization of Cas9-mediated genomic engineering in a desired incubation time whilst minimizing off-target DNA cleavage requires exact kinetic information. In the Gram-negative bacterium E. coli, an upper limit for the binding time (30 ms) of dCas9 with DNA has been determined in vivo14, but it is unknown if such binding times are ubiquitous in prokaryotes. In addition, there is a limited understanding of the spatiotemporal relationship between cellular copy numbers of Cas9 proteins, the number of DNA target sites and the duration and dissociation mechanisms of target-bound dCas9. Since genomic engineering of food-related microbes such as Gram-positive lactic acid bacteria15 is becoming increasingly valuable16,17, it is important to assess whether previously determined dCas9 kinetic information can be transferred to food-related microbes.
To study the behaviour of dCas9 in vivo with millisecond time resolution, we used single-particle tracking photo-activated localisation microscopy (sptPALM)18,19. In sptPALM, a photo-activatable fluorescent protein, which is by default not fluorescently active but can be activated via irradiation, is fused to the protein of interest, and the fusion protein is expressed in living cells. By stochastically activating a subset of the available chromophores, the signal of a single emitter is localized with high precision (~30–40 nm)20,21 and, by monitoring its position over time, the movement of the protein fusion is followed and analysed22.
However, sptPALM mostly provides quantitative information if the protein of interest remains in a single diffusional state for the duration of a track (e.g. >40 ms using at least 4 camera frames of 10 ms). As this temporal resolution is insufficient to elucidate in vivo Cas9 dynamic behaviour (<30 ms)14, we developed a Monte-Carlo based variant of diffusion distribution analysis (MC-DDA, for analytical DDA see ref. 23) to extract dynamic information on a timescale shorter than the duration of a single track.
In the experimental realisation, we refine existing single-molecule microscopy frameworks and introduce a new design, the miCube. The miCube is constructed from readily available and custom-made parts, ensuring accessibility for interested laboratories. We then use MC-DDA in combination with the miCube in an assay that employs a heterogeneous expression system in order to explore the dynamic nature of DNA-dCas9 interactions in live bacteria and their dependency on (d)Cas9 protein copy numbers. In particular, we assess dCas9 fused to photo-activatable fluorophore PAmCherry2 in the lactic acid bacterium L. lactis, in the presence or absence of DNA targets. With this assay, we show that dCas9 is screening PAMs 40% of the time, with each binding event having an average duration of 17 ± 4 ms. Moreover, we show a dependency of bound dCas9 fraction on DNA target-binding sites, which allows quantification of plasmid copy numbers. This, in turn, indicates that bound dCas9 interferes with plasmid replication. These results are combined in a model that predicts Cas9 cleavage efficiencies in prokaryotes.
Elucidation of sub-30 ms dynamic interactions with sptPALM
In the absence of cellular target sites, dCas9 is expected to be present in either one of two states (Fig. 1a): bound to DNA (red), which results in low diffusion coefficients (~0.2 µm2/s); or freely diffusing in the cytoplasm (yellow), which results in high diffusion coefficients (~2.2 µm2/s). If the transitioning between these states is slow compared to the length of each track (here: 40 ms), diffusion coefficient histograms can be fitted with two static states (Fig. 1b, top, Supplementary Fig. 1).
Probing cellular dynamics of dCas9 on an open-source microscope using sptPALM. a Simplified expected dynamic behaviour of dCas9 in absence of DNA target sites. The protein can be temporarily bound to DNA (PAM screening), or diffuse freely in cytoplasm, with two kinetic rates governing the dynamics. If the interaction is on a similar timescale as the detection time, a temporal averaging due to transient interactions is expected. b If the dynamic transitions are slow with respect to the camera frame time used in sptPALM, the obtained diffusional data can be fitted with a static model (top), which assumes that every protein is either free (yellow) or DNA-bound (red), but does not interchange. If the dynamic transitions are as fast or faster than the frame time used, Monte-Carlo diffusion distribution analysis (MC-DDA; bottom) can fit the diffusional data. In MC-DDA, dCas9 can interchange between the two states, resulting in a broader distribution. c Render of the open-source miCube super-resolution microscope. The excitation components, main cube, and emission components are indicated in blue, magenta, and green, respectively. Details are provided in the "Methods" section. Scale bar represents 5 cm. d Brightfield images of L. lactis used for computationally obtaining the outline of the cells via watershed (top), and raw single molecule data (bottom; red outline in top is magnified) as obtained on the miCube as part of a typical experiment, overlaid with the determined track where this single molecule belongs to (starting at red, ending at blue). Scale bars represent 2.5 µm (top) or 500 nm (bottom)
However, if transitioning between the states is on a similar or shorter timescale as the length of sptPALM tracks, these transient interactions of dCas9 with DNA (orange) will result in temporal averaging of the diffusion coefficient obtained from a single track. Therefore, we developed a Monte-Carlo diffusion distribution analysis (MC-DDA; Fig. 1b, bottom, Methods, with an analytical approach available elsewhere23) that used the shape of the histogram of diffusion coefficients to infer transitioning rates between diffusional states. The analysis is based on similar approaches used to describe dynamic conformational changes observed with single molecule Förster resonance energy transfer24,25,26. Briefly, MC-DDA consists of simulating the movement and potential interactions of dCas9 inside a cell with a Monte-Carlo approach: the simulated protein is capable of interchanging between interacting with DNA and diffusing freely, defined by kbound→free and kfree→bound. The MC-DDA diffusional data is compared with the experimental data, and by iterating on the kinetic rates and diffusion coefficients, a best fit is obtained.
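To make the simulation step concrete, here is a minimal Python sketch of the MC-DDA idea (our illustration, not the authors' published code): proteins switch between a DNA-bound state (immobile) and free diffusion on a fine time grid, camera localizations with localization error are drawn once per frame, and an apparent per-track diffusion coefficient D* is computed from the frame-to-frame displacements. Rate and diffusion values mirror those reported below; motion blur and confinement within the cell are ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_dda(n_prot=20000, k_fb=40.0, k_bf=60.0, D_free=2.0, sigma=0.038,
           dt_frame=0.010, n_frames=4, dt_sim=1e-4):
    """Simulate two-state (bound/free) diffusion and return apparent per-track
    diffusion coefficients D* (um^2/s). Rates in 1/s, localization error sigma
    in um, D_free in um^2/s, times in s."""
    steps = int(dt_frame / dt_sim)
    D_star = np.empty(n_prot)
    for i in range(n_prot):
        bound = rng.random() < k_fb / (k_fb + k_bf)    # start in steady state
        pos = np.zeros(2)
        locs = [pos + rng.normal(0.0, sigma, 2)]       # localization at t = 0
        for _ in range(n_frames):
            for _ in range(steps):
                rate = k_bf if bound else k_fb         # state switching as a
                if rng.random() < rate * dt_sim:       # Poisson process
                    bound = not bound
                if not bound:                          # free 2D Brownian step
                    pos = pos + rng.normal(0.0, np.sqrt(2 * D_free * dt_sim), 2)
            locs.append(pos + rng.normal(0.0, sigma, 2))
        sq_disp = np.sum(np.diff(np.array(locs), axis=0)**2, axis=1)
        D_star[i] = sq_disp.mean() / (4 * dt_frame)    # 2D MSD-based estimate
    return D_star

D = mc_dda(n_prot=2000)  # small run; histograms of D* are then compared with
print(np.median(D))      # experiment, iterating on k_fb and k_bf until they match
```

A fit then amounts to regenerating such histograms while varying kfree→bound and kbound→free (and, globally, Dfree and σ) until the simulated distribution matches the measured one.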
miCube: an open framework for single-molecule microscopy
For MC-DDA to deduce high kinetic rates, experimental data with high spatiotemporal resolution (< ~50 nm, < ~20 ms) is required. This is challenging, as individual fluorescent proteins have a limited photon budget (<500 photons27), and background fluorescence is introduced by the living cells in which the fluorescent proteins are embedded. While suitable commercial microscopes are available, they often lack accessibility or are prohibitively expensive. This has led to the creation of a plethora of custom-built microscopes in the recent past28,29,30,31,32,33,34,35,36,37,38, ranging from simplified super-resolution microscopes30,31,32,33,34 to additions to commercial microscopes35 or extremely low-cost microscopes36,37.
To increase the accessibility of single-molecule microscopy with high spatiotemporal resolution further, we developed the miCube, an open-source, modular and versatile super-resolution microscope, and provide details to allow interested researchers to build their own miCube or a derivative instrument (Fig. 1c, Supplementary Fig. 2, Methods, https://HohlbeinLab.github.io/miCube). We used 3D-printed components where possible, surrounding a custom aluminium body to minimize thermal drift and provide rigidity. All custom components are supported by technical drawings (Supplementary Figs. 10–18), along with STL files for direct 3D printing. We provide full details on the chosen commercial components, such as lenses, mirrors, and the camera. A detailed description on building a functioning miCube, along with rationale of the design choices, is given in the Methods section. Moreover, we discuss additional options for replacing expensive components with cheaper options.
To facilitate straightforward installation and flexible usability of the miCube, we simplified the alignment of the excitation module by decoupling the movement in the three spatial dimensions (Supplementary Fig. 2e). A variety of imaging modalities are possible on the miCube; super-resolution microscopy in 2D and 3D39, total internal reflection fluorescence (TIRF) microscopy, and LED-based brightfield microscopy. In its current version, the sample area fits a 96-wells plate. The excitation and illumination pathways of the microscope are fitted with 3D-printed enclosures, allowing the instrument to be used under ambient light conditions (including single-particle microscopy). Lastly, we restrained the footprint of the microscope to a 600 × 300 mm breadboard (excluding lasers; Supplementary Fig. 2b), further improving accessibility.
Linear drift calculations indicate that the system experiences a drift of 13 ± 12 nm/min in the lateral plane and 25 ± 15 nm/min in the axial plane without active drift-suppressions systems in place40 (average of three super-resolution measurements performed on three different days). A typical drift measurement is shown in Supplementary Fig. 3.
In vivo sptPALM in L. lactis on the miCube
For our sptPALM assay41, we introduced dCas9 fused to the photo-activatable fluorophore PAmCherry227 in L. lactis under control of the inducible and heterogeneous nisA promotor42 (pLAB-dCas9, Methods). On the same plasmid, a sgRNA with no fully matching targets in the genome is constitutively expressed. We immobilized the L. lactis cells on agarose, and using diffused brightfield LED illumination we computationally separated the cells via the ImageJ watershed43 plugin (Fig. 1d top). Single-particle microscopy was performed with low induction levels (0.1 ng/mL nisin) and low activation intensities (3–620 µW/cm2, 405 nm) to obtain on average PAmCherry2 activation of <1 fluorophore/frame/cell to avoid overlapping tracks (Fig. 1d, bottom). Single particle tracks were limited to individual cells by using the previously obtained cell outlines.
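The cell-segmentation step can also be reproduced outside ImageJ; a minimal scikit-image sketch, assuming cells appear dark on a bright background (the threshold and seeding choices are illustrative, not the authors' exact pipeline):

```python
from scipy import ndimage as ndi
from skimage import filters, segmentation

def segment_cells(brightfield):
    """Watershed segmentation of a brightfield image into a per-cell label map."""
    mask = brightfield < filters.threshold_otsu(brightfield)  # cells darker than background
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.5 * distance.max())   # one crude seed per cell
    return segmentation.watershed(-distance, markers, mask=mask)

# A localization at pixel (row, col) is assigned to cell labels[row, col];
# label 0 marks background, so tracks there are discarded.
```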
dCas9 is PAM-screening for 17 ms
We first assessed the diffusional behaviour of dCas9-PAmCherry2 (hereafter described as dCas9, unless specifically mentioned) in L. lactis in the absence of target sites (pNonTarget plasmid; Methods). Under these conditions, dCas9 is expected to diffuse freely in the cytoplasm and screen PAM sites on the DNA for under 30 ms14. Under this assumption, diffusion ranges from completely immobile (and thereby fully determined by the localization uncertainty: ~40 nm leads to ~0.16 µm2/s) to freely-moving. The expected free-moving diffusion coefficient can be theoretically described: the fusion protein has a hydrodynamic radius of 5–6 nm27,44, resulting in a diffusion coefficient of 36–43 µm2/s45. Cytoplasmic retardation of ~20× due to increased viscosity and crowding effects reduces this to ~1.8–2.2 µm2/s46. We obtained diffusion coefficients in the range of ~0–3 µm2/s (Fig. 2a), which is within the expected range.
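For concreteness, the free-diffusion estimate follows from the Stokes–Einstein relation; assuming T ≈ 300 K and the viscosity of water, η ≈ 1.0 mPa s (values not stated explicitly above), a hydrodynamic radius r = 6 nm gives

$$D = \frac{k_\mathrm{B}T}{6\pi\eta r} = \frac{(1.38\times10^{-23}\,\mathrm{J\,K^{-1}})(300\,\mathrm{K})}{6\pi\,(1.0\times10^{-3}\,\mathrm{Pa\,s})(6\times10^{-9}\,\mathrm{m})} \approx 3.7\times10^{-11}\,\mathrm{m^2\,s^{-1}} = 37\,\mathrm{\mu m^2\,s^{-1}},$$

while r = 5 nm gives ≈44 µm2/s, consistent with the quoted 36–43 µm2/s (small differences reflect the assumed temperature and viscosity). Dividing by the ~20× cytoplasmic retardation indeed recovers the quoted ~1.8–2.2 µm2/s.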
sptPALM of dCas9-PAmCherry2 in pNonTarget L. lactis with increasing dCas9 concentration. a Identified tracks in single pNonTarget L. lactis cells. Tracks are colour-coded based on their diffusion coefficient. Three separate cells are shown with increasing cellular concentration of dCas9. Green dotted outline is an indication for the cell membrane. Scale bars represent 500 nm. b Diffusion coefficient histograms (light green) belonging to 20–200, 400–600, and 800–1000 dCas9 copy numbers, from left to right. Histograms are fitted (dark green line) with a theoretical description of particles transitioning between a mobile and an immobile state (dashed line represents 95% confidence interval based on bootstrapping the original data). Five diffusion coefficient histograms (Supplementary Fig. 4) were globally fitted with a single free diffusion coefficient (2.0 ± 0.1 µm2/s; mean ± standard deviation), a single value for the localization error (σ = 38 ± 3 nm = 0.15 ± 0.03 µm2/s), and 5 sets of kbound→free and kfree→bound values (indicated in the figures). Residuals of the fit are indicated below the respective distribution. c kbound→free (red) and kfree→bound (blue) plotted as function of the apparent cellular dCas9 copy number. Solid dots show the fits of the actual data; filled areas indicate the 95% confidence intervals obtained from the bootstrapped iterations of fitted MC-DDAs with 20,000 simulated proteins. Source data are provided as a Source Data file
We used a heterogeneous promoter (nisA, Methods), causing the apparent cellular dCas9 copy numbers to vary between 20 and ~1000 (Fig. 2a, Supplementary Fig. 4; cells with less than 20 copies were excluded as we corrected for ~7 tracks (~14 apparent dCas9) found in non-induced cells). The value of the cellular dCas9 copy number is an approximation (Discussion), but a relative increase in cellular dCas9 copy number is certain. We then created five diffusional histograms belonging to cells with a particular apparent dCas9 copy number range (ranges of ~200 dCas9 copy number intervals; Fig. 2b, Supplementary Fig. 4). These diffusional histograms are fitted with the aforementioned MC-DDA, where the shape of the MC-DDA is governed by the localization uncertainty, the free-moving diffusion coefficient, and the kinetic rates of PAM-screening. The localization uncertainty and free-moving diffusion coefficient are independent of cellular dCas9 copy number, since they are determined by the number of photons and a combination of hydrodynamic radius and cytoplasm viscosity, respectively. Therefore, the histograms were globally fitted with a combination of 5 MC-DDAs, each consisting of 20,000 simulated dCas9 proteins, containing a single value for free-moving diffusion coefficient (Dfree = 2.0 ± 0.1 µm2/s (average ± standard deviation of 4 experiments over 3 days, in total consisting of 32,971 tracks), in agreement with the theoretical expectation of ~1.8–2.2 µm2/s), a single value for localization uncertainty (σ = 38 ± 3 nm, or Dimmobile* = 0.15 ± 0.03 µm2/s, expected for fluorescent proteins illuminated for 4 ms39,41), and five pairs of kfree→bound and kbound→free (specified in Fig. 2b, c).
The obtained kinetic constants of kfree→bound and kbound→free were 40 ± 12 s−1 and 60 ± 13 s−1 (mean ± 95% CI), respectively, and did not show a significant dependence on apparent cellular dCas9 copy number (Fig. 2c). This indicates that dCas9 is PAM-screening for 17 ± 4 ms in L. lactis, consisting of screening 1 or more PAMs via 1D diffusion. This value is in the same order of magnitude as the upper limit of 30 ms reported earlier for PAM-screening in E. coli14, suggesting that these PAM-screening kinetics are a general feature of dCas9. Additionally, dCas9 is on average diffusing within the cytoplasm for 25 ± 8 ms before finding a new site for PAM screening. This duration is governed by the diffusion coefficient of the fusion protein, along with the average distance between DNA PAM sites. These results also imply that dCas9 is diffusing in the cytoplasm ~60% of the time, while interacting with the DNA ~40% of the time. Removal of the sgRNA resulted in similar diffusional data, which agrees with PAM-screening being a purely protein–DNA interaction (kfree→bound: 34 ± 16 s−1; kbound→free: 62 ± 21 s−1; diffusion time on average 29 ± 18 ms; PAM-screening time on average 16 ± 6 ms; Supplementary Fig. 5). This also indicates that partial sgRNA-DNA matching of dCas9 with non-targets is not prevalent enough in our assay to affect the screening time significantly.
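For clarity, the quoted dwell times and occupancies follow directly from the two-state kinetics; a minimal sketch of the arithmetic (rates from our global fit; values are displayed by omitting the semicolons):

```matlab
% Dwell times and occupancies of the two-state (free <-> PAM-bound)
% model, computed from the fitted rate constants.
k_fb = 40;                       % k_free->bound [1/s]
k_bf = 60;                       % k_bound->free [1/s]
tau_screen = 1/k_bf              % mean PAM-screening time:    ~0.017 s
tau_free   = 1/k_fb              % mean free-diffusion time:   ~0.025 s
f_free     = k_bf/(k_fb + k_bf)  % fraction of time free:      ~0.60
f_bound    = 1 - f_free          % fraction of time DNA-bound: ~0.40
```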
Target-binding of dCas9 can be observed with sptPALM
We then investigated the effect of DNA target sites complementary to the sgRNA-loaded dCas9. To this end, we introduced 5 target sites on a plasmid (pTarget; Methods), which replaced the pNonTarget plasmid used so far. Qualitative visualisation of diffusion in the L. lactis bacteria shows tracks with small diffusion coefficients (Fig. 3a, black tracks), indicative of target-bound dCas9. This immobile population can be observed throughout the dCas9 copy number range but is more prevalent in cells with lower cellular dCas9 copy numbers.
sptPALM of dCas9-PAmCherry2 in pTarget L. lactis shows target-binding behaviour of dCas9. a Identified tracks in individual pTarget L. lactis cells. Tracks are colour-coded based on their diffusion coefficient. Three separate cells are shown with increasing dCas9 concentration. Blue dotted outline is an indication for the cell membrane. Scale bars represent 500 nm. b Diffusion coefficient histograms (light blue) are fitted (dark blue line) with a combination of the respective fit of pNonTarget L. lactis cells (green line), along with a single globally fitted population corresponding to target-bound dCas9 (purple) at 0.38 ± 0.04 µm2/s (mean ± standard deviation). c Left: The population size of the plasmid-bound dCas9 decreases as a function of the cellular dCas9 copy number. The error bar of the measurement is based on the 95% confidence interval determined by bootstrapping; the solid line is a model fit with 20 plasmids, with a 95% confidence interval determined by repeating the model simulation. Right: Occupancy of DNA targets by dCas9 based on 20 target plasmids (100 DNA target sites), based on the same data as presented in the left figure. Source data are provided as a Source Data file
We expect target-bound dCas9 to move with a diffusion coefficient determined by the plasmid size, which is independent of the cellular dCas9 copy number. Therefore, we globally fitted the pTarget-obtained diffusional histograms with a combination of the corresponding pNonTarget MC-DDA fit and an additional single diffusional state belonging to target-bound dCas9 (Fig. 3b, Dplasmid* = 0.38 ± 0.04 µm2/s = Dimmobile* + 0.23 µm2/s, which agrees with the expected diffusion coefficient from plasmids of similar size in bacterial cytoplasm46,47,48; 31,439 total tracks). The plasmid-bound dCas9 population decreases with increasing apparent cellular dCas9 copy numbers from 28 ± 3% at 105 (20–200) copies to 10 ± 5 % at 900 (800–1000) copies (Fig. 3c left, purple squares; mean ± 95% CI). No target-binding behaviour was observed when the sgRNA was removed (Supplementary Fig. 5).
dCas9 does not bind targets irreversibly
This anti-correlation between dCas9 copy number and the size of the plasmid-bound population is indicative of competition for target sites by an increasing amount of dCas9 proteins. To evaluate this hypothesis, we consecutively simulated dCas9 proteins until the cellular dCas9 copy number was reached (Methods). In the simulation, every protein binds or dissociates from a PAM with the kinetic constants determined previously, and will instantly bind to a target site if it binds to a PAM directly adjacent to it. We thus disregard effects of 1D sliding on the DNA, but we believe these effects are limited, as 1D sliding between PAM sites has a low probability when PAMs are randomly positioned on the DNA (< ~10% at 16 bp distance average9). A koff is introduced which dictates removal of dCas9 from the target sites.
This model fully explained the dependency of the target-bound dCas9 fraction on the cellular dCas9 copy number (Fig. 3c left, black line). The slope of the curve towards low cellular dCas9 concentration is dependent on the total cellular number of PAM sites and koff. Assuming on average 1.5 genomes' worth of DNA (haploid genome replicated in half the cells) present in the cell, the koff is ~0.01 ± 0.003 s−1. The number of DNA target sites determines the lower bound of the model, and ~100 ± 50 DNA target sites (~20 ± 10 plasmids) led to the observed bound fraction at 900 cellular dCas9 proteins. The fit of the number of target sites at high cellular dCas9 concentration is independent of koff, since at the modelled concentrations and PAM-screening kinetic parameters, the target sites are essentially fully occupied (Fig. 3c, right). It thus follows that the used pTarget plasmid, a derivative of pNZ123, is present at a lower copy number than expected (~60–80) during measurements47. This could hint towards interference of plasmid replication by dCas9 binding49,50. We investigated this with quantitative polymerase chain reaction (qPCR)51, and indeed observed a decrease in the amount of pTarget DNA with dCas9 production (Supplementary Fig. 6).
These collective results lead to the model presented in Fig. 4a. dCas9 diffuses freely in the cytoplasm for 25 ± 8 ms on average, and will then interact with a PAM site for 17 ± 4 ms. If the PAM site is not directly adjacent to a target site, dCas9 will move back to freely diffusing in the cytoplasm. However, if the PAM site is directly followed by a target site, dCas9 will be bound to this site for 1.6 min on average, before it is removed by intrinsic or extrinsic factors.
Extrapolation of the dCas9 dynamic model to assess single target cleavage by Cas9. a The proposed model of dCas9 interaction with the obtained kinetic rates. Free dCas9 (yellow) in the cytoplasm interacts with PAM sequences (5'-NGG-3') on average every 25 ms. If the PAM is not in front of a target sequence (red), only PAM-screening will occur, for on average 17 ms. If the PAM happens to be in front of a target, the dCas9 will be target-bound (purple). We extend this model to predict Cas9 cleavage under conditions where target-bound Cas9 will always cleave the target DNA. b Predicted probability that a single target in the L. lactis genome is cleaved after a certain period of time with a certain cellular Cas9 copy number, based on the model shown in a. Error bars indicate standard deviation calculated from iterations of the model
A single copy of Cas9 finds a single DNA target in ~4 h
We adapted the computational target-binding model to predict Cas9 cleavage in L. lactis and other prokaryotes with similar DNA content. We assume that all DNA is accessible to Cas9 and that Cas9 behaves identically to dCas9, but will cleave a target directly after binding. Our proposed Cas9 kinetic scheme depends only on the PAM-screening kinetic rates and the ratio of total PAM sites to target sites. We predicted the incubation-time-dependent probability that a certain number of cellular Cas9 proteins will bind a single target site on the L. lactis genome (Fig. 4b).
The model shows that a single Cas9 protein can effectively find a single target with 50% probability in ~4 h. It also shows that an increasing cellular Cas9 copy number quickly decreases this search time: with 10 cellular copies of Cas9, the search time is reduced to ~25 min, and 20 copies reduce the search time to ~10 min. Therefore, a single target is almost certainly found within a typical prokaryotic cell generation time (> ~20 min). This agrees with in vivo data of Cas914 (accounting for E. coli's larger genome (~4.6 Mbp versus ~2.5 Mbp)) and with in vivo data of Cascade in E. coli23, though in different organisms or with different CRISPR-Cas systems.
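The copy-number scaling follows from treating the N cellular Cas9 copies as independent searchers, so that P(cleaved by t) = 1 − exp(−Nt/τ). A minimal sketch of this scaling (with τ chosen to match the single-copy value of Fig. 4b; an illustration rather than a re-run of the full simulation):

```matlab
% Exponential search model: N independent Cas9 copies, each finding the
% target at rate 1/tau, give P(found by t) = 1 - exp(-N*t/tau).
tau = 4*60/log(2);              % mean single-copy search time [min],
                                % set so one copy reaches 50% at ~4 h
for N = [1 10 20]
    t50 = tau*log(2)/N;         % time at which P reaches 50%
    fprintf('N = %2d: t50 = %5.1f min\n', N, t50);
end
% -> 240, 24 and 12 min, close to the ~4 h, ~25 min and ~10 min above.
```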
We have designed a sptPALM assay to probe DNA-protein interactions in vivo, and assessed the kinetic behaviour of dCas9 in L. lactis on the open-hardware, super-resolution microscope miCube. The high spatiotemporal resolution of the experimental data along with the heterogeneity of the used induction protocol allowed us to develop a Monte-Carlo diffusion distribution analysis (MC-DDA) of the diffusional equilibrium.
The obtained dCas9 PAM-screening kinetic rates (kfree→bound = 40 ± 12 s−1, kbound→free = 60 ± 13 s−1) indicate that non-target binding of dCas9 has a mean lifetime of 17 ± 4 ms and that dCas9 spends ~40% of its time on PAM screening. In fact, a 1:1 ratio between diffusing and binding was shown to be optimal for the target search time of DNA-binding proteins52. The MC-DDA further suggests that the kinetic rates governing PAM–dCas9 interactions do not depend on cellular copy number levels of dCas9.
We observed target-binding of dCas9, and showed that higher cellular dCas9 copy numbers resulted in lower probabilities of target-bound dCas9, although absolutely more targets were occupied by dCas9. We linked this finding to the previously found kfree→bound and kbound→free rates and postulate that dCas9 dissociation from target sites is responsible for the obtained probabilities of target binding by dCas9. We made two assumptions when obtaining absolute cellular dCas9 copy numbers. Firstly, we assumed that measurements directly end after all fluorophores in the centre of the microscopy field of view have been imaged once. Secondly, we assumed a maturation grade of 50% (identical to that of PAmCherry1 in Xenopus53). Although an exact determination is possible53,54, this is beyond the scope of this study.
We obtained a dCas9-target koff rate of ~0.01 s−1 that is dependent on the exact cellular dCas9 copy number and total L. lactis genomic content. The biological cause of dissociation of target-bound dCas9 from DNA remains speculative: it could be an intrinsic property, resulting in spontaneous release from target sites, or it could be caused by an extrinsic factor, such as RNA transcription or DNA replication. We do not expect RNA polymerase activity on the DNA target sites, although we did not actively block transcription. It is currently unknown whether genomic target-bound dCas9 dissociates from the DNA due to DNA replication, with contradictory reports showing that dCas9 is removed during cell duplication14 but also that dCas9 hinders genomic DNA replication49 or transcription50. We note that genomic DNA replication substantially differs from the rolling-circle DNA replication of pTarget55.
Our data indicate that dCas9 binding to plasmid DNA hinders rolling-circle DNA replication. The pNZ123 plasmid, of which pTarget is a derivative, is believed to be high-copy47 (60–80 plasmids per cell), although the quantification of plasmid copy numbers is challenging (discussed for the single-cell level in reference51). Our model suggests that pTarget is present at only ~20 copies during our measurements. Although we saw an effect of dCas9 production on pTarget copy number via qPCR, the obtained decrease (~20%) is not as large as observed with sptPALM (~70%). The median cellular dCas9 copy number, however, is low (~40; Supplementary Fig. 6) compared to most of the dCas9 copy number bins evaluated with MC-DDA. Therefore, averaged over the cellular community, not all pTarget sites (60–80 cellular plasmids containing 300–400 target sites) are occupied by a dCas9 protein, which would affect the ensemble qPCR results. The sptPALM plasmid copy number determination, on the other hand, is mostly determined by the L. lactis sub-population with high dCas9 copy numbers, for which pTarget replication is restricted more strongly.
We used our model to make predictions about Cas9 cleavage probabilities, based on kinetic values extracted from the MC-DDA, which are not influenced by the approximated cellular dCas9 copy number. The kinetic parameters of dCas9-PAmCherry2 provide estimates for those of Cas9. We reason that kbound→free will be unchanged, since this rate is based on the duration of the PAM screening, while kfree→bound will be slightly lower for Cas9 compared to dCas9-PAmCherry2, due to the relatively higher diffusion coefficient of Cas9. The model can be expanded to incorporate a protein diffusion coefficient to obtain a modified kfree→bound rate, and to include accessibility of the DNA. These additions would allow the model to predict Cas9 behaviour in more diverse environments such as eukaryotic cells. Other computational models have taken these parameters into account56, but these models were not based on experimental in vivo data, and were based on different assumptions.
Our open microscopy framework enables the study of in vivo protein–DNA interactions with high spatiotemporal resolution, here shown for CRISPR-Cas9 target search, and improves the general accessibility of super-resolution microscopy. Our data show that heterogeneity in an expression system can be used to obtain new insights into any protein–DNA or protein–protein interaction in vivo, here indicating that target-bound dCas9 interferes with rolling-circle DNA replication. The derived kinetic parameters and information on target search times provide valuable practical insights into CRISPR-Cas engineering and gene silencing in lactic acid bacteria specifically, and likely reflect prokaryotic Cas9 search times in general.
miCube design considerations
We designed the miCube to be easy to set up and use, while retaining a high level of versatility. The instrument and its design choices will be described in three parts: the excitation path, the emission path, and the cube connecting the sample with the excitation and emission paths. Throughout this description, we will refer to numbered parts as shown in Supplementary Fig. 2a, c and described in Supplementary Table 1. The information on the miCube presented here can also be found on https://HohlbeinLab.github.io/miCube/component_table.html. The instrument is fully functional in ambient light, due to a fully enclosed sample chamber, illumination pathway and emission pathway. Moreover, the miCube has a small footprint: the final design, excluding the lasers and controllers, fits on a 300 × 600 mm Thorlabs breadboard. We placed the whole ensemble in a transparent polycarbonate box (MayTec Benelux, Doetinchem, The Netherlands) to minimize airflow disturbing the setup during experiments.
miCube excitation path
The excitation path is designed to be both robust and easy to align and adjust. The four laser sources located in an Omicron laser box are combined and guided via a single-mode fibre towards a reflective collimator (nr. 18), ensuring a well-collimated beam. The reflective collimator is attached directly to an aperture (nr. 17), a focusing lens (nr. 16, 200 mm focal length), and an empty spacer (nr. 12). This excitation ensemble is placed in a 3D-printed piece designed to hold the assembly in place (nr. 13). This holder is then attached to a right-angled mounting plate (nr. 14), which is placed on a 25 mm translation stage (nr. 15). The translation stage should be placed at such a position on the breadboard that the focusing lens (nr. 16) is separated by exactly 200 mm from the back-focal plane of the objective when following the laser path.
Easy alignment and adjustment are ensured by isolating the three axes of movement of this excitation ensemble (Supplementary Fig. 2e). Adjustment of the distance from the objective is achieved by moving the collimator ensemble (nrs. 12, 16–18) inside its holder (nr. 13). The height of the path can be adjusted via the bracket clamp that supports the collimator ensemble (nrs. 13 and 14), and the horizontal alignment can be adjusted via the translation stage on which the bracket clamp rests (nr. 15). We note that the excitation pathway is uncoupled from any laser source due to the fibre connection, allowing for freedom of choice for the excitation laser unit.
Additionally, the translation stage (nr. 15) can be used to enable highly inclined illumination (HiLo) or total internal reflection (TIR). The stage allows fine and repeatable adjustment of the excitation beam position on the back focal plane of the objective. By aligning the excitation beam in the centre of the objective, the microscope will act as a standard epifluorescence instrument. If the excitation beam is aligned towards the edge of the back focal plane, the miCube will operate in HiLo or TIR.
miCube cube and sample mount
The central component of the miCube is the cube (nr. 5) that connects excitation path, emission path, and the sample. The cube is manufactured out of a solid aluminium block maximising stability and minimising effects of drift due to thermal expansion. Black anodization of the block prevents stray light and unwanted reflections. The illumination path is further protected from ambient light by screwing a 3D-printed cover (nr. 11) on the side of the cube, as well as a door to close the cube off.
Next, the dichroic mirror/full mirror part is assembled (nrs. 6–10). The dichroic mirror unit (nr. 7) consists of a dichroic mount that is magnetically attached to an outer holder. On the side of the dichroic mirror unit opposing the excitation path, a neutral density filter (nr. 6) is placed to prevent scattered non-reflected light from entering the miCube, thereby minimizing background signal recorded by the camera. At the bottom of the dichroic mount assembly, a TIRF filter (nr. 8) is placed to prevent scattered back-reflected laser light from entering the emission pathway. This assembled dichroic mirror unit is screwed via a coupling element (nr. 9) to a mirror holder containing a mirror placed at a 45° angle (nr. 10), which reflects the emission light from the objective to the camera. The completed dichroic mirror/full mirror part is screwed into the backside of the miCube via two M6 screws, which hold the ensemble in place and directly in line with the excitation path (nrs. 12–18), the objective (nr. 3), and the tube lens (nr. 30).
Then, an objective (nr. 3) (Nikon 100× oil, 1.49 NA, HP, SR) is screwed directly into an appropriate thread on top of the cube. Around the objective, a sample mount (nr. 4) is screwed on top of the cube, which is designed to ensure the correct height of the sample with respect to the parfocal distance of the chosen objective. We opted for a separate sample mount, as it can easily be swapped for another to retain freedom in peripherals. For example, only the height of the sample mount has to be changed if an objective has a different parfocal distance than the one used here. We designed two different sample mounts (nrs. 4a, 4b). The first one can hold an xy-translation stage with z-stage piezo insert (nr. 2), to enable full spatial control of the sample (nr. 4a). The second one only holds the z-stage piezo insert, which decreases instrument cost (nr. 4b). In either case, the xy-translation stage with z-stage piezo insert, or only the z-stage piezo insert, is screwed into corresponding threaded holes in the sample mount. A glass slide holder (nr. 1) is made from aluminium to fit inside a 96-well-plate holder such as the z-stage (nr. 2).
miCube detection path
A tube lens ensemble (nrs. 27–30) houses a 200 mm focal length tube lens (Thorlabs) in a 3D-printed enclosure which provides space to slot in an emission filter (nrs. 27, 28). This ensemble is attached directly to the miCube by screwing it into place with four M6 screws. The alignment of the tube lens is therefore exactly in line with the emission light, as the centre of the full mirror (nr. 10) is at the same height as the tube lens. The direction of the emission light can be aligned simply by tuning the angle of the full mirror (nr. 10).
A cover (nr. 25), connected to the tube lens ensemble by a 3D-printed connector piece (nr. 26), is attached to ensure darkness of the emission path. On the other end of the cover, a 3D-printed holder for two astigmatic lenses (nr. 21) is placed and screwed into the breadboard. Astigmatic lenses (nrs. 22–24) can optionally be used to enable 3D super-resolution microscopy57. They can easily be exchanged for lenses with a different focal length or for empty holders. With this, astigmatism can be enabled or disabled, and a choice can be made between more accurate z-positional information over a smaller total z-range, or less accurate information over a larger range. The Andor Zyla 4.2 PLUS camera (nr. 19) is placed behind the astigmatic lens holder and is positioned in a 3D-printed camera mount (nr. 20) to ensure the correct height and position of the camera, so that the focus of the tube lens is at the camera chip. We chose a scientific Complementary Metal-Oxide Semiconductor (sCMOS) camera to take advantage of a larger field of view and increased temporal resolution compared to the more traditional electron-multiplying charge-coupled device (EMCCD) cameras58.
Note that the length of the cover (nr. 25) and the alignment of the holes at the feet of the 3D-printed astigmatic lens holder (nr. 21) are dependent on the focal length of the tube lens, and of the position of the chosen camera chip with regards to the 3D-printed mount for the camera. The pieces used here were designed for the Andor Zyla 4.2 PLUS, a 200 mm focal length tube lens, and the used custom-designed camera mount (nr. 20).
Strain preparation and plasmid construction
Lactococcus lactis NZ9000 was used throughout the study. NZ9000 is a derivative of L. lactis MG136359 in which the chromosomal pepN gene is replaced by the nisRK genes that allow the use of the nisin-controlled gene expression system42. Cells were grown at 30 °C in GM17 medium (M17 medium (Tritium, Eindhoven, The Netherlands) supplemented with 0.5% (w/v) glucose (Tritium, Eindhoven, The Netherlands)) without agitation.
DNA manipulation and transformation
Vectors used in this study are listed in Supplementary Table 2. Oligonucleotides (Supplementary Table 3) and primers (Supplementary Table 4) were synthesised at Sigma-Aldrich (Zwijndrecht, The Netherlands). Plasmid DNA was isolated and purified using GeneJET Plasmid Prep Kits (Thermo Fisher Scientific, Waltham, MA, USA). Plasmid digestion and ligation were performed with FastDigest enzymes and T4 ligase, respectively, according to the manufacturer's protocol (Thermo Fisher Scientific, Waltham, MA, USA). DNA fragments were purified from agarose gel using the Wizard SV Gel and PCR Clean-Up System (Promega, Leiden, The Netherlands). Electrocompetent L. lactis NZ9000 cells were generated using a previously described method60. Prior to electro-transformation, ligation mixtures were desalted for one hour by drop dialysis on a 0.025 µm VSWP filter (Merck-Millipore, Billerica, US) floating on MQ water. Electro-transformation was performed with a Gene Pulser Xcell (Bio-Rad Laboratories, Richmond, California, USA) at 2 kV and 25 µF for 5 ms. Transformants were recovered for 75 min in GM17 medium supplemented with 200 mM MgCl2 and 2 mM CaCl2. Chemically competent E. coli TOP10 (Invitrogen, Breda, The Netherlands) were used for transformation and amplification of the Pnis-dCas9-PAmCherry2-containing pUC16 plasmid (Supplementary Fig. 7). Antibiotics were supplemented on agar plates to facilitate plasmid selection: 10 µg/ml chloramphenicol (for pTarget/pNonTarget) and 10 µg/ml erythromycin (for pLAB-dCas9). Screening for positive transformants was performed using colony PCR with KOD Hot Start Master Mix according to the manufacturer's instructions (Merck Millipore, Amsterdam, The Netherlands). Electrophoresis gels were made with 1% agarose (Eurogentec, Seraing, Belgium) in tris-acetate-EDTA (TAE) buffer (Invitrogen, Breda, The Netherlands). Plasmid digestions were compared with in silico predicted plasmid digestions (Benchling; https://benchling.com).
pLAB-dCas9 plasmid construction
Construction of the pLAB-dCas9 plasmid41,61 was performed by synthesizing (Baseclear B.V., Leiden, The Netherlands) a codon-optimized fragment containing the sequence of Pnis-dCas9-PAmCherry2, flanked by XbaI/SalI restriction sites (Supplementary Fig. 7, Supplementary Note 1). This fragment was supplied in a pUC16 plasmid. After transformation in E. coli, the plasmid was isolated and digested with XbaI and SalI to obtain the Pnis-dCas9-PAmCherry2 fragment. From the pLABTarget expression vector62, the Cas9 expression module was removed by digestion with XbaI and SalI and replaced by the XbaI-SalI fragment containing Pnis-dCas9-PAmCherry2. The single-stranded guide RNA (sgRNA) for targeting pepN was constructed with the correct overhangs and inserted in the Eco31I-digested sgRNA expression handle to yield the pLAB-dCas9 vector62. The plasmids used in this study, and vector maps for pLABTarget and pLAB-dCas9, are available upon request. pLAB-dCas9-PAmCherry2 was sequenced and confirmed to be intact in the used strains.
pLAB-dCas9 no-sgRNA
The pLAB-dCas9-nosgRNA plasmid was constructed by BoxI/SmaI digestion of the pLAB-dCas9-PAmCherry2 plasmid, and subsequent self-ligation. This resulted in deletion of the sgRNA handle and transcriptional terminator, successfully removing the functional sgRNA. The resulting pLAB-dCas9-nosgRNA plasmid was confirmed via sequencing.
pTarget and pNonTarget plasmid construction
The plasmid with binding sites for dCas9 (pTarget) was established by engineering five pepN target sites into the pNZ123 plasmid63. To this end, two single-stranded oligonucleotides (10 µl of 100 µM each; Supplementary Table 3) that upon hybridization form a single target sequence for the pepN-targeting sgRNA were incubated in 80 µl annealing buffer (10 mM Tris [pH = 8.0] and 50 mM NaCl) for 5 min at 100 °C, followed by gradual cooling to room temperature. The annealed oligonucleotides were cloned into HindIII-digested pNZ123. Afterwards, we selected a derivative that contains five pepN target sites via colony PCR (Supplementary Table 4). HindIII re-digestion was prevented by flanking the pepN DNA product with different base pairs, changing the HindIII site. Plasmids with five pepN target sites were designated pTarget (Supplementary Fig. 8). Plasmids without the pepN target sites (the original pNZ123 plasmids) were designated pNonTarget. The vector maps for pTarget and pNonTarget are shown in Supplementary Fig. 8. Correct insertion of the five pepN target sites was confirmed via sequencing.
Construction of strains with pLAB-dCas9 and p(Non)Target
Electro competent L. lactis NZ9000 cells60 harbouring pLAB-dCas9 were transformed with pTarget or with pNonTarget and subsequently used for sptPALM or stored at −80 °C.
Quantitative polymerase chain reaction (qPCR)
Both L. lactis strains containing pLAB-dCas9 and pTarget or pNonTarget were grown under the same laboratory conditions as in the imaging experiments performed in this study (n = 2). After 3 h of growth, the cultures were split and dCas9 was induced (0 ng/ml, 0.4 ng/ml, and 2 ng/ml nisin). The cells were then harvested after 12 h of growth by centrifugation. The cell pellets were washed, and DNA was extracted using InstaGene Matrix (Bio-Rad Laboratories, Richmond, California, USA).
Oligonucleotides were designed to amplify a region spanning approximately 1000 base pairs on both pTarget and pNonTarget, and a region of similar length on the NZ9000 chromosome (Q3 + Q4 and Q7 + Q8; Supplementary Table 4). These oligonucleotides were used in a PCR reaction to generate templates, which were diluted to serve as a calibration curve in the subsequent qPCR. Both qPCR reactions were performed on each isolated DNA sample (6 technical replicates) and the ratio between measured chromosomal amplicons (Q5 + Q6) and plasmid amplicons (Q1 + Q2) was determined. The samples that were not induced with nisin were used to standardize the estimated pTarget and pNonTarget copy numbers.
Culturing and sample preparation
The strains used for single-molecule microscopy were grown overnight from glycerol stocks at 30 °C in chemically defined medium for prolonged cultivation (CDMPC)64. Then, they were sub-cultured at 5% v/v and grown for 3 h (average duplication time in CDMPC is ~90 min, determined via OD600 measurements), before induction with 0.1 ng/ml nisin. 90 min later, the sample preparation began (see below).
Samples were prepared as described previously41. Briefly, after culturing of the cells, 0.5 µg/mL ciprofloxacin (Sigma-Aldrich, Zwijndrecht, The Netherlands) was added to slightly inhibit further cell division and DNA replication for sgRNA-pTarget and sgRNA-pNonTarget experiments65. Then, cells were centrifuged (3500 RPM for 5 min; SW centrifuge (Froilabo, Mayzieu, France) with a CENSW12000024 swing-out rotor fitted with CENSW12000006 15 ml culture tube adaptors) and washed two times by gentle resuspension in 5 mL phosphate-buffered saline (PBS; Sigma-Aldrich, Zwijndrecht, The Netherlands). After removal of the supernatant, cells were resuspended in ~10–50 µL PBS from which 1–2 µL was immobilized on 1.5% 0.2 µm-filtered agarose (Certified Molecular Biology Agarose; BioRad, Veenendaal, The Netherlands) pads between two heat-treated glass coverslips (Paul Marienfeld GmbH & Co. KG, Lauda-Königshofen, Germany; #1.5H, 170 µm thickness). Heat treatment of glass coverslips involves heating the coverslips to 500 °C for 20 min in a muffle furnace to remove organic impurities.
Experimental settings
All imaging was performed at 20 °C on the miCube as described above. A 561 nm laser with ~0.12 W/cm2 power output was used for HiLo-to-TIRF illumination with 4 ms stroboscopic illumination24 in the middle of 10 ms frames. Low-power UV illumination (µW/cm2 range) was used and increased during experiments to ensure a low and steady number of active fluorophores in the sample until exhaustion of the fluorophores. A UV-increment scheme was consistently used for all experiments (Supplementary Table 5). No emission filter was used except for the TIRF filter (Chroma ZET405/488/561m-TRF). The raw data were acquired using the open-source Micro-Manager software66. During acquisition, 2 × 2 binning was used, which resulted in a pixel size of 128 × 128 nm. The camera image was cropped to the central 512 × 512 pixels (65.64 × 65.64 µm) or smaller. For sptPALM experiments, frames 500–55,000 were used for analysis, corresponding to 5–550 s. This prevented attempted localization of overlapping fluorophores at the beginning and ensured a set end-time. 200–300 brightfield images were recorded by illuminating the sample at the same position as during the measurement. For the brightfield recording, we used a commercial LED light (INREDA, IKEA, Sweden) and a home-made diffuser from weighing paper.
To extract single-molecule localizations, a 50-frame temporal median filter (https://github.com/marcelocordeiro/medianfilter-imagej) was used to correct background intensity in the movies67. In short, the temporal median filter determines the median pixel value over a sliding window of 50 frames to estimate the background intensity for each pixel at a specific position and time point. This value is subtracted from the original data, and any negative values are set to 0. In the process, all pixels are scaled according to the mean intensity of each frame to account for shifts in overall intensity. The first and last 25 frames of every batch of 8096 frames are removed in this process.
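A compact MATLAB stand-in for this background correction (using the built-in movmedian rather than the cited ImageJ plugin; behaviour at batch edges is simplified) could look as follows:

```matlab
% Temporal median background correction of an x-by-y-by-time movie:
% scale frames by the overall mean intensity, estimate the per-pixel
% background as a sliding-window temporal median, subtract and clip.
function corrected = temporalMedianFilter(stack, window)
stack = double(stack);
frameMean = squeeze(mean(mean(stack, 1), 2));   % mean intensity per frame
scale = reshape(mean(frameMean)./frameMean, 1, 1, []);
scaled = stack .* scale;                        % compensate intensity drift
background = movmedian(scaled, window, 3);      % median along time (dim 3)
corrected = max(scaled - background, 0);        % subtract, clip negatives
end
```

It would be called as, e.g., corrected = temporalMedianFilter(movie, 50).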
Single particle localization was performed via the ImageJ68/Fiji69 plugin ThunderSTORM70 with added phasor-based single molecule localization algorithm (pSMLM39). Image filtering was done via a difference-of-Gaussians filter with Sigma1 = 2 px and Sigma2 = 8 px. The approximate localization of molecules was determined via a local maximum with a peak intensity threshold of 8, and 8-neighbourhood connectivity. Sub-pixel localization was done via phasor fitting39 with a fit radius of 3 pixels (region-of-interest of 7-by-7 pixels). Custom-written MATLAB (The MathWorks, Natick, MA, USA) scripts were used to combine the output files from ThunderSTORM (Supplementary Software 1).
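The phasor localization at the heart of pSMLM39 requires only the phase of the first Fourier coefficients of the ROI; a simplified illustration of this idea (not the plugin code itself):

```matlab
% Phasor-based sub-pixel localization of a single emitter in a square
% ROI (e.g. 7x7 px): the emitter position follows from the phase of
% the first Fourier coefficients along x and y.
function [x0, y0] = phasorLocalize(roi)
roi = double(roi);
N = size(roi, 1);
[X, Y] = meshgrid(0:N-1, 0:N-1);
Fx = sum(roi(:) .* exp(-2i*pi*X(:)/N));   % first harmonic along x
Fy = sum(roi(:) .* exp(-2i*pi*Y(:)/N));   % first harmonic along y
x0 = mod(-angle(Fx), 2*pi) * N/(2*pi);    % position in pixels, [0, N)
y0 = mod(-angle(Fy), 2*pi) * N/(2*pi);
end
```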
Cell segmentation
A cell-based segmentation of the localization positions was performed. First, a watershed was performed on the average of 300 brightfield-recorded frames of the cells. The watershed was done via the Interactive Watershed ImageJ plugin (http://imagej.net/Interactive_Watershed). Second, the localizations were filtered on whether they fall within a pixel-accurate cell outline; if they do, a cell ID is added to the localization information.
Estimating the copy number of dCas9
The total copy number of dCas9 in a cell is not identical to the number of tracks found in each cell. Firstly, the UV illumination (405 nm wavelength) on the miCube required to photo-activate PAmCherry2 is not homogeneous over the complete field of view. To correct for this, a value for the average UV illumination experienced by each L. lactis cell is calculated. For this, a map of the UV intensity is made by placing a mirror on top of the objective and measuring the reflected scattering of the UV signal. Then, the mean UV intensity in the pixels corresponding to a cell according to the segmented brightfield images is stored. The cellular apparent dCas9 copy number is corrected for this normalized mean cellular UV intensity. Moreover, the cellular apparent dCas9 copy number was corrected for the average maturation grade of PAmCherry1, which is ~50%53 (shown schematically in Supplementary Fig. 9). We assume the maturation grades of PAmCherry1 and PAmCherry2 to be similar.
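One plausible reading of this correction, shown as a labelled illustration (all variable names and example values are hypothetical):

```matlab
% Apparent cellular dCas9 copy number from the per-cell track count,
% corrected for normalized mean UV intensity and fluorophore maturation.
maturation = 0.5;     % assumed PAmCherry2 maturation grade (~PAmCherry1)
nTracks    = 120;     % tracks detected in one example cell
uvNorm     = 0.8;     % normalized mean UV intensity over that cell
copyNumber = nTracks / (uvNorm * maturation)   % -> 300 apparent dCas9
```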
Tracking and fitting of diffusion coefficient histograms
A tracking procedure was performed in MATLAB, using a modified Particle Point Analysis script71 (https://nl.mathworks.com/matlabcentral/fileexchange/42573-particle-point-analysis) with a tracking window of 8 pixels (1.0 µm) and no memory (Supplementary Software 1). Localizations corresponding to different cells were excluded from being part of the same track. As the tracking window is of similar size as the cells itself, in practice all localizations in a cell are linked together in a track if they appear in successive frames.
An apparent diffusion coefficient, D*, was then calculated for each track from the mean-squared displacement (MSD) of single-step intervals72. In short, for every track with at least four localizations, D* was calculated by averaging the squared displacements of the first four steps. Qualitative tracking information in cells (Fig. 2a, Fig. 3a) shows that diffusion coefficients up to ~4 µm2/s are obtained. These high diffusion coefficients are caused by the inclusion of false-positive localizations in tracks. Therefore, tracks with a diffusion coefficient clearly caused by the inclusion of false-positive localizations (D* > 2.5 µm2/s) were excluded from further analysis. We then binned the diffusion coefficients in 40 logarithmically divided bins from D* = 0.01 to D* = 2.5 µm2/s. The pNonTarget diffusional information was first corrected for the diffusion histogram obtained from a non-induced sample, subtracting the non-induced histogram from the pNonTarget histogram based on the approximated relative size of the non-induced histogram (~7.2 tracks per cell were found in non-induced cells).
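A minimal sketch of this per-track D* calculation and the logarithmic binning (tracks is assumed to be a cell array of n-by-2 positions in µm; note that taking four steps requires five localizations):

```matlab
% Apparent diffusion coefficient per track from the mean of the first
% four squared single-step displacements (2D: MSD = 4*D*dt).
dt = 0.010;                                      % frame time [s]
apparentD = @(xy) mean(sum(diff(xy(1:5,:),1,1).^2, 2)) / (4*dt);

long  = cellfun(@(t) size(t,1) >= 5, tracks);    % four steps available
Dstar = cellfun(apparentD, tracks(long));
Dstar = Dstar(Dstar <= 2.5);                     % drop false-positive tracks
edges  = logspace(log10(0.01), log10(2.5), 41);  % 40 logarithmic bins
counts = histcounts(Dstar, edges);
```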
Then, a Monte-Carlo diffusion distribution analysis (MC-DDA; described below) consisting of 20,000 dCas9 proteins was fitted via a general Levenberg-Marquardt fitting procedure in MATLAB. The error of this fit was determined via a general bootstrapping approach, in which a D*-list with the same length as the original, but randomly filled with values from the original (allowing for more than one entry of the same data point), was fitted via the same procedure. For the pTarget diffusional information, the pNonTarget best-fit model (calculated via the same model, but with 100,000 dCas9 proteins) was fitted and smoothed via a Savitzky-Golay filter with order 3 and length 7, to reduce noise on the fit, alongside a single population following the equation:
$$y = \frac{\left(\frac{n}{D_{plasmid}}\right)^{n} \, x^{\,n-1} \, e^{-n\frac{x}{D_{plasmid}}}}{(n-1)!}$$
where Dplasmid is the D* value corresponding to plasmid-bound dCas9, n the number of steps in the trajectory (set to four in this study), y the count of the histogram, and x the D* value of the histogram. Dplasmid was kept constant in the global fit, while the size of this population and the size of the pNonTarget model were allowed to vary between apparent cellular dCas9 copy number bins. Again, the error of this fit was determined via a general bootstrapping approach.
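The equation above is a gamma distribution in D* with shape n and mean Dplasmid, and can be evaluated directly, e.g.:

```matlab
% Single-population model of the D* histogram for plasmid-bound dCas9:
% a gamma distribution with shape n (steps per track) and mean Dp.
n  = 4;                          % number of steps per track
Dp = 0.38;                       % D* of plasmid-bound dCas9 [um^2/s]
pdfD = @(x) (n/Dp)^n .* x.^(n-1) .* exp(-n.*x./Dp) ./ factorial(n-1);

x = linspace(1e-3, 2.5, 500);
plot(x, pdfD(x));                % mode at (n-1)/n*Dp, mean exactly Dp
xlabel('D* (\mum^2/s)'); ylabel('probability density');
```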
pNonTarget Monte-Carlo diffusion distribution analysis
The pNonTarget data is fitted with a Monte-Carlo diffusion distribution analysis (MC-DDA), in which a variable Dfree, localization error, kfree→bound, and kbound→free need to be provided (Supplementary Software 1). A set number of dCas9 proteins is simulated (20,000 for the fit, 100,000 for visualising the fit). These proteins are randomly placed in a cell, which is simulated as a cylinder with length 0.5 µm and radius 0.5 µm, capped by two half-spheres with radius 0.5 µm, and the current state of the proteins is set to free or immobile, based on the respective kinetic rates (cbound = kfree→bound /(kbound→free + kfree→bound), cfree = 1−cbound). Moreover, the proteins are given a time before they are changed between states (log(rand)/−k, where rand is a random number between 0 and 1, and k is the respective kinetic rate). Then, the movement of the proteins is simulated with over-sampling with regards to the frame time (0.1 ms). The free proteins move a distance drawn from a normal distribution with \(\sigma = \sqrt {2 \cdot D_{free} \cdot steptime}\), where steptime is 0.1 ms. Then, it is checked whether this position is still within the cell outline. If not, a new location is drawn from the distribution and checked against the cell outline. Every time-step, the time until state-change is decreased by the time-step, and if this value becomes ≤ 0, the protein switches states, getting a new diffusion coefficient and state-change time. Every 10 ms after an initial equilibration time of 200 ms, the current location of the proteins is convoluted with a random localization error, drawn from a normal distribution with σ = localization error. The simulation is ended after 5 localization points are acquired for every protein. Further tracking and diffusion coefficient calculations are done in the same way as for the experimental data.
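A condensed sketch of this MC-DDA follows (a simplification of Supplementary Software 1: steps leaving the cell are simply redrawn, and only the lateral coordinates enter D*); it would be called as, e.g., Dstar = mcDDA(20000, 2.0, 0.038, 40, 60):

```matlab
% Monte-Carlo diffusion distribution analysis: state-switching dCas9
% in a spherocylindrical cell; units um and s.
function Dstar = mcDDA(nProt, Dfree, locErr, k_fb, k_bf)
dt   = 1e-4;                   % oversampling step, 0.1 ms
tFrm = 0.010;                  % frame time, 10 ms
nEq  = 2000;                   % equilibration steps (200 ms)
nSub = round(tFrm/dt);         % oversampling steps per frame
nLoc = 5;                      % localizations per protein
L = 0.5; R = 0.5;              % cylinder length and radius [um]
inCell = @(p) (abs(p(1)) <= L/2 && hypot(p(2), p(3)) <= R) || ...
              hypot(abs(p(1)) - L/2, hypot(p(2), p(3))) <= R;

Dstar = zeros(nProt, 1);
for i = 1:nProt
    p = [Inf, Inf, Inf];                     % random start inside the cell
    while ~inCell(p), p = [(L + 2*R)*(rand - 0.5), 2*R*(rand(1,2) - 0.5)]; end
    free = rand < k_bf/(k_fb + k_bf);        % equilibrium initial state
    tSw  = -log(rand)/(free*k_fb + ~free*k_bf);  % time to next state change
    loc  = zeros(nLoc, 2); iLoc = 0;
    for s = 1:(nEq + (nLoc - 1)*nSub)
        if free                              % immobile proteins do not move
            pNew = p + sqrt(2*Dfree*dt)*randn(1, 3);
            while ~inCell(pNew), pNew = p + sqrt(2*Dfree*dt)*randn(1, 3); end
            p = pNew;
        end
        tSw = tSw - dt;
        if tSw <= 0                          % switch state, draw new dwell
            free = ~free;
            tSw = -log(rand)/(free*k_fb + ~free*k_bf);
        end
        if s >= nEq && mod(s - nEq, nSub) == 0
            iLoc = iLoc + 1;                 % observed lateral position
            loc(iLoc, :) = p(1:2) + locErr*randn(1, 2);
        end
    end
    Dstar(i) = mean(sum(diff(loc).^2, 2)) / (4*tFrm);
end
end
```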
Target simulation
For the target simulation, a certain number of dCas9 proteins are simulated (similar to the average of the bins used in experiments), alongside a variable total number of PAM sites (a 1/16 chance at ~7.5 million bases, or 1.5× the double-stranded L. lactis genome73), plasmid copy number, target sites (5 per plasmid), incubation time (90 min), fluorophore maturation time (20 min27), and a koff rate (Supplementary Software 1). The dCas9 proteins are simulated one by one. The first dCas9 has access to all target sites and is simulated for [incubation time], assuming the first dCas9 was made exactly at the start of the nisin incubation. Subsequent dCas9 proteins have access to fewer target sites, depending on whether or not earlier dCas9 proteins ended the simulation bound to target sites. Subsequent dCas9 proteins are also simulated for a shorter time, linearly scaling from [incubation time] to [fluorophore maturation time], which assumes that dCas9 proteins are steadily produced throughout the incubation time, while allowing for the fact that dCas9 proteins that do not yet have a matured PAmCherry2 are not visible during sptPALM.
Then, the dCas9 proteins randomly start in the free, PAM-probing, or target-bound state, based on the previously determined kinetic constants, as in the pNonTarget simulation. The proteins are also given a time until state change, as was done in the pNonTarget simulation. Next, the simulation time of a single dCas9 protein was decreased by this time until state change, whereupon a new state was given to the protein: free proteins changed to PAM-probing or target-bound, with the target-bound chance being equal to \(\frac{\text{nr. of target sites}}{\text{total nr. of PAM sites}}\); PAM-probing or target-bound proteins changed to free proteins. This was continued until the end of the simulation, after which the final state was determined. If the dCas9 was bound to a target, the number of available target sites was decreased by 1 for the next simulated dCas9. The reported values are the mean of 50 repetitions of the simulation, with the 95% confidence interval determined via the standard deviation of these repetitions.
For simulating Cas9 cleavage rates, it was assumed that a single target site was present and that a dCas9 would never be removed from a target site. The fraction of bound dCas9 then indicates whether the target site has been cleaved by Cas9. The other simulation parameters were kept constant.
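Because the number of unproductive PAM probes before hitting the single target is geometrically distributed, the cleavage-time prediction can be approximated compactly (replacing individual dwell-time draws by the mean cycle duration, a good approximation given the very large number of probes involved):

```matlab
% Compact approximation of the single-target Cas9 search time: probes
% until the target is hit are geometric with p = 1/nPAM; each
% diffuse-and-probe cycle lasts on average 1/k_fb + 1/k_bf; the
% fastest of N independent Cas9 copies cleaves the target.
k_fb = 40; k_bf = 60;            % PAM-probing kinetics [1/s]
nPAM = 7.5e6/16;                 % total PAM sites: 1/16 of ~7.5 million bases
cyc  = 1/k_fb + 1/k_bf;          % mean cycle duration [s]
nRep = 1e4;                      % repetitions of the simulation
for N = [1 10 20]
    K = ceil(log(rand(nRep, N)) ./ log(1 - 1/nPAM));  % probes to success
    t = min(K, [], 2) * cyc;     % search time of the fastest copy [s]
    fprintf('N = %2d: median search time %6.1f min\n', N, median(t)/60);
end
% -> roughly 3.8 h, 23 min and 11 min, in line with Fig. 4b.
```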
miCube drift quantification
We characterised the positional stability of the miCube via super-resolution measurements of GATTA-PAINT 80R DNA-PAINT nanorulers (GATTAquant GmbH, Germany). We imaged the nanorulers in total internal reflection (TIR) mode using a 561 nm laser (~7 mW) with a frame time of 50 ms using 2 × 2 pixel binning on the Andor Zyla 4.2 PLUS sCMOS. Astigmatism was enabled by placing a 1000 mm focal length astigmatic lens (Thorlabs) 51 mm away from the camera chip. A video of 10,000 frames was recorded via the MicroManager software66.
After recording the movie, we first localized the x, y, and z-positions of the point spread functions of excited DNA-PAINT nanoruler fluorophores with the ThunderSTORM software70 for ImageJ68 with the phasor-based single molecule localization (pSMLM) add-on39. The ThunderSTORM software was used with the standard settings, and a 7 by 7 pixel region of interest around the approximate centre of the point spread functions was used for pSMLM. To determine the z-position, we compared the astigmatism of the point-spread function to a pre-recorded calibration curve recorded using immobilized fluorescent latex beads (560 nm emission peak, 50 nm diameter).
After data analysis, we performed drift-correction in the lateral plane using the cross-correlation method of the ThunderSTORM software. The cross-correlation images were calculated using 10x magnified super-resolution images from a sub-stack of 1000 original frames. The fit of the cross-correlation was used as drift of the lateral plane. Drift of the axial plane was analysed by taking the average z-position of all fluorophores, assuming that all DNA-PAINT nanorulers are fixed to the bottom of the glass slide.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The source data underlying Figs. 2 and 3 and Supplementary Figs. 4–6 and 8 are provided as a Source Data file.
Code availability
All code necessary to perform this study is made available as Supplementary Information (Supplementary Software 1, which further contains an accompanying programming flowchart).
References
Qi, L. S. et al. Repurposing CRISPR as an RNA-guided platform for sequence-specific control of gene expression. Cell 152, 1173–1183 (2013).
Komor, A. C., Badran, A. H. & Liu, D. R. CRISPR-based technologies for the manipulation of eukaryotic genomes. Cell 168, 20–36 (2017).
Jiang, W., Bikard, D., Cox, D., Zhang, F. & Marraffini, L. A. RNA-guided editing of bacterial genomes using CRISPR-Cas systems. Nat. Biotechnol. 31, 233–239 (2013).
Liu, J.-J. et al. CasX enzymes comprise a distinct family of RNA-guided genome editors. Nature 566, 218–223 (2019).
Sapranauskas, R. et al. The Streptococcus thermophilus CRISPR/Cas system provides immunity in Escherichia coli. Nucleic Acids Res. 39, 9275–9282 (2011).
Chen, B. et al. Dynamic imaging of genomic loci in living human cells by an optimized CRISPR/Cas system. Cell 155, 1479–1491 (2013).
Bonomo, M. E. & Deem, M. W. The physicist's guide to one of biotechnology's hottest new topics: CRISPR-Cas. Phys. Biol. 15, 041002 (2018).
Anders, C., Niewoehner, O., Duerst, A. & Jinek, M. Structural basis of PAM-dependent target DNA recognition by the Cas9 endonuclease. Nature 513, 569–573 (2014).
Globyte, V., Lee, S. H., Bae, T., Kim, J.-S. & Joo, C. CRISPR/Cas9 searches for a protospacer adjacent motif by lateral diffusion. EMBO J. 38, e99466 (2018).
Knight, S. C. et al. Dynamics of CRISPR-Cas9 genome interrogation in living cells. Science 350, 823–826 (2015).
Sternberg, S. H., Redding, S., Jinek, M., Greene, E. C. & Doudna, J. A. DNA interrogation by the CRISPR RNA-guided endonuclease Cas9. Nature 507, 62–67 (2014).
Singh, D., Sternberg, S. H., Fei, J., Doudna, J. A. & Ha, T. Real-time observation of DNA recognition and rejection by the RNA-guided endonuclease Cas9. Nat. Commun. 7, 12778 (2016).
Gasiunas, G., Barrangou, R., Horvath, P. & Siksnys, V. Cas9–crRNA ribonucleoprotein complex mediates specific DNA cleavage for adaptive immunity in bacteria. Proc. Natl Acad. Sci. USA 109, E2579–E2586 (2012).
Jones, D. L. et al. Kinetics of dCas9 target search in Escherichia coli. Science 357, 1420–1424 (2017).
Machielsen, R., Siezen, R. J., van Hijum, S. A. F. T. & van Hylckama Vlieg, J. E. T. Molecular description and industrial potential of Tn6098 conjugative transfer conferring alpha-galactoside metabolism in Lactococcus lactis. Appl. Environ. Microbiol. 77, 555–563 (2011).
Hidalgo-Cantabrana, C., O'Flaherty, S. & Barrangou, R. CRISPR-based engineering of next-generation lactic acid bacteria. Curr. Opin. Microbiol. 37, 79–87 (2017).
Zhang, C., Wohlhueter, R. & Zhang, H. Genetically modified foods: a critical review of their promise and problems. Food Sci. Hum. Wellness 5, 116–123 (2016).
Manley, S. et al. High-density mapping of single-molecule trajectories with photoactivated localization microscopy. Nat. Methods 5, 155–157 (2008).
Uphoff, S., Reyes-Lamothe, R., Garza de Leon, F., Sherratt, D. J. & Kapanidis, A. N. Single-molecule DNA repair in live bacteria. Proc. Natl Acad. Sci. USA 110, 8063–8068 (2013).
Smith, C. S., Joseph, N., Rieger, B. & Lidke, K. A. Fast, single-molecule localization that achieves theoretically minimum uncertainty. Nat. Methods 7, 373–375 (2010).
Rieger, B. & Stallinga, S. The lateral and axial localization uncertainty in super-resolution light microscopy. ChemPhysChem. 15, 664–670 (2014).
Shen, H. et al. Single particle tracking: from theory to biophysical applications. Chem. Rev. 117, 7331–7376 (2017).
Vink, J. N. A. et al. Direct visualization of native CRISPR target search in live bacteria reveals Cascade DNA surveillance mechanism. Preprint at https://www.biorxiv.org/content/10.1101/589119v1 (2019).
Farooq, S. & Hohlbein, J. Camera-based single-molecule FRET detection with improved time resolution. Phys. Chem. Chem. Phys. 17, 27862–27872 (2015).
Santoso, Y. et al. Conformational transitions in DNA polymerase I revealed by single-molecule FRET. Proc. Natl Acad. Sci. USA 107, 715–720 (2010).
Santoso, Y., Torella, J. P. & Kapanidis, A. N. Characterizing single-molecule FRET dynamics with probability distribution analysis. ChemPhysChem. 11, 2209–2219 (2010).
Subach, F. V. et al. Photoactivatable mCherry for high-resolution two-color fluorescence microscopy. Nat. Methods 6, 153–159 (2009).
Arsenault, A. et al. Open-frame system for single-molecule microscopy. Rev. Sci. Instrum. 86, 033701 (2015).
Nicovich, P. R., Walsh, J., Böcking, T. & Gaus, K. NicoLase—an open-source diode laser combiner, fiber launch, and sequencing controller for fluorescence microscopy. PLoS ONE 12, e0173879 (2017).
Auer, A. et al. Nanometer-scale multiplexed super-resolution imaging with an economic 3D-DNA-PAINT microscope. ChemPhysChem 19, 3024–3034 (2018).
Babcock, H. P. Multiplane and spectrally-resolved single molecule localization microscopy with industrial grade CMOS cameras. Sci. Rep. 8, 1726 (2018).
Diekmann, R. et al. Characterization of an industry-grade CMOS camera well suited for single molecule localization microscopy–high performance super-resolution at low cost. Sci. Rep. 7, 14425 (2017).
Holm, T. et al. A blueprint for cost-efficient localization microscopy. ChemPhysChem 15, 651–654 (2014).
Ma, H., Fu, R., Xu, J. & Liu, Y. A simple and cost-effective setup for super-resolution localization microscopy. Sci. Rep. 7, 1542 (2017).
Kwakwa, K. et al. easySTORM: a robust, lower-cost approach to localisation and TIRF microscopy. J. Biophotonics 9, 948–957 (2016).
Zhang, Y. S. et al. A cost-effective fluorescence mini-microscope for biomedical applications. Lab. Chip 15, 3661–3669 (2015).
Diederich, B., Then, P., Jügler, A., Förster, R. & Heintzmann, R. cellSTORM—Cost-effective super-resolution on a cellphone using dSTORM. PLOS ONE 14, e0209827 (2019).
Aristov, A., Lelandais, B., Rensen, E. & Zimmer, C. ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range. Nat. Commun. 9, 2409 (2018).
Martens, K. J. A., Bader, A. N., Baas, S., Rieger, B. & Hohlbein, J. Phasor based single-molecule localization microscopy in 3D (pSMLM-3D): an algorithm for MHz localization rates using standard CPUs. J. Chem. Phys. 148, 123311 (2017).
Coelho, S. et al. Single molecule localization microscopy with autonomous feedback loops for ultrahigh precision. Preprint at https://www.biorxiv.org/content/10.1101/487728v1 (2018).
van Beljouw, S. P. B. et al. Evaluating single-particle tracking by photo-activation localization microscopy (sptPALM) in Lactococcus lactis. Phys. Biol. 16, 035001 (2019).
Mierau, I. & Kleerebezem, M. 10 years of the nisin-controlled gene expression system (NICE) in Lactococcus lactis. Appl. Microbiol. Biotechnol. 68, 705–717 (2005).
Vincent, L. & Soille, P. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 13, 583–598 (1991).
Nishimasu, H. et al. Crystal structure of Staphylococcus aureus Cas9. Cell 162, 1113–1126 (2015).
Edward, J. T. Molecular volumes and the Stokes-Einstein equation. J. Chem. Educ. 47, 261 (1970).
Trovato, F. & Tozzini, V. Diffusion within the cytoplasm: a mesoscale model of interacting macromolecules. Biophys. J. 107, 2579–2591 (2014).
de Vos, W. M. Gene cloning and expression in lactic streptococci. FEMS Microbiol. Rev. 3, 281–295 (1987).
Prazeres, D. M. F. Prediction of diffusion coefficients of plasmids. Biotechnol. Bioeng. 99, 1040–1044 (2008).
Whinn, K. et al. Nuclease dead Cas9 is a programmable roadblock for DNA replication. Preprint at https://www.biorxiv.org/content/10.1101/455543v2 (2018).
Vigouroux, A., Oldewurtel, E., Cui, L., Bikard, D. & van Teeffelen, S. Tuning dCas9's ability to block transcription enables robust, noiseless knockdown of bacterial genes. Mol. Syst. Biol. 14, e7899 (2018).
Tal, S. & Paulsson, J. Evaluating quantitative methods for measuring plasmid copy numbers in single cells. Plasmid 67, 167–173 (2012).
Slutsky, M. & Mirny, L. A. Kinetics of protein-DNA Interaction: facilitated target location in sequence-dependent potential. Biophys. J. 87, 4021–4035 (2004).
Durisic, N., Laparra-Cuervo, L., Sandoval-Álvarez, Á., Borbely, J. S. & Lakadamyali, M. Single-molecule evaluation of fluorescent protein photoactivation efficiency using an in vivo nanotemplate. Nat. Methods 11, 156 (2014).
Nagai, T. et al. A variant of yellow fluorescent protein with fast and efficient maturation for cell-biological applications. Nat. Biotechnol. 20, 87 (2002).
Khan, S. A. Rolling-circle replication of bacterial plasmids. Microbiol Mol. Biol. Rev. 61, 442–455 (1997).
Farasat, I. & Salis, H. M. A biophysical model of CRISPR/Cas9 activity for rational design of genome editing and gene regulation. PLOS Comput. Biol. 12, e1004724 (2016).
Huang, B., Wang, W., Bates, M. & Zhuang, X. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319, 810–813 (2008).
Almada, P., Culley, S. & Henriques, R. PALM and STORM: into large fields and high-throughput microscopy with sCMOS detectors. Methods 88, 109–121 (2015).
Kuipers, O. P., de Ruyter, P. G. G. A., Kleerebezem, M. & de Vos, W. M. Quorum sensing-controlled gene expression in lactic acid bacteria. J. Biotechnol. 64, 15–21 (1998).
Wells, J. M., Wilson, P. W. & Le Page, R. W. F. Improved cloning vectors and transformation procedure for Lactococcus lactis. J. Appl. Bacteriol. 74, 629–636 (1993).
Campelo, A. B. et al. A bacteriocin gene cluster able to enhance plasmid maintenance in Lactococcus lactis. Microb. Cell Factor. 13, 77 (2014).
Els, S. van der, James, J. K., Kleerebezem, M. & Bron, P. A. Development of a versatile Cas9-driven subpopulation-selection toolbox in Lactococcus lactis. Appl. Environ. Microbiol. 84, 02752–17 (2018).
van Asseldonk, M. et al. Cloning of usp45, a gene encoding a secreted protein from Lactococcus lactis subsp. Lact. MG1363. Gene 95, 155–160 (1990).
Goel, A., Santos, F., Vos, W. M. de, Teusink, B. & Molenaar, D. A standardized assay medium to measure enzyme activities of Lactococcus lactis while mimicking intracellular conditions. Appl. Environ. Microbiol. AEM. 05276–11 (2011).
Drlica, K., Malik, M., Kerns, R. J. & Zhao, X. Quinolone-mediated bacterial death. Antimicrob. Agents Chemother. 52, 385–392 (2008).
Edelstein, A. D. et al. Advanced methods of microscope control using μManager software. J. Biol. Methods 1, e10 (2014).
Hoogendoorn, E. et al. The fidelity of stochastic single-molecule super-resolution reconstructions critically depends upon robust background estimation. Sci. Rep. 4, 3854 (2014).
Schneider, C. A., Rasband, W. S. & Eliceiri, K. W. NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671 (2012).
Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).
Ovesný, M., Křížek, P., Borkovec, J., Švindrych, Z. & Hagen, G. M. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics 30, 2389–2390 (2014).
Crocker, J. C. & Grier, D. G. Methods of digital video microscopy for colloidal studies. J. Colloid Interface Sci. 179, 298–310 (1996).
Stracy, M. & Kapanidis, A. N. Single-molecule and super-resolution imaging of transcription in living bacteria. Methods 120, 103–114 (2017).
Linares, D. M., Kok, J. & Poolman, B. Genome sequences of Lactococcus lactis MG1363 (Revised) and NZ9000 and comparative physiological studies. J. Bacteriol. 192, 5806–5812 (2010).
K.J.A.M. is funded by a VLAG PhD-fellowship grant awarded to J.H. J.H. acknowledges funding from the Innovation Program Microbiology Wageningen (IPM-3). S.v.d.E is funded by the BE-Basic R&D program, which was granted a FES subsidy from the Dutch Ministry of Economic affairs. The authors thank the WOSM (Warwick Open Source Microscope, see www.wosmic.org) for inspiration.
These authors contributed equally: Koen J.A. Martens, Sam P.B. van Beljouw.
Laboratory of Biophysics, Wageningen University and Research, Stippeneng 4, 6708 WE, Wageningen, The Netherlands
Koen J. A. Martens, Sam P. B. van Beljouw, Sander Baas, George A. Vogelaar & Johannes Hohlbein
Laboratory of Bionanotechnology, Wageningen University and Research, Bornse Weilanden 9, 6708 WG, Wageningen, The Netherlands
Koen J. A. Martens
Host-Microbe Interactomics Group, Animal Sciences, Wageningen University and Research, De Elst 1, 6708 WD, Wageningen, The Netherlands
Simon van der Els, Peter van Baarlen & Michiel Kleerebezem
NIZO food research, Kernhemseweg 2, 6718 ZB, Ede, The Netherlands
Simon van der Els
Kavli Institute of Nanoscience, Department of Bionanoscience, Delft University of Technology, Van der Maasweg 9, 2629 HZ, Delft, The Netherlands
Jochem N. A. Vink & Stan J. J. Brouns
Microspectroscopy Research Facility, Wageningen University and Research, Stippeneng 4, 6708 WE, Wageningen, The Netherlands
Johannes Hohlbein
K.J.A.M., S.B., and J.H. designed, built and characterised the miCube setup. K.J.A.M., S.P.B.v.B. and G.A.V. recorded and analysed the experimental single molecule data. J.H., S.v.d.E. and P.v.B. envisioned using L. lactis, dCas9, fluorescent proteins and p(Non-)Target cells to conduct super-resolution single molecule studies. S.P.B.v.B., S.v.d.E., P.v.B., and M.K. designed the DNA vectors used in this study. S.P.B.v.B. and S.v.d.E. assembled the DNA vectors and transformed cells. K.J.A.M., J.N.A.V., and J.H. developed DDA. K.J.A.M. and J.N.A.V. wrote software for data analysis. J.N.A.V. and S.J.J.B. provided reagents and expertise for setting up the single molecule assays. K.J.A.M. and J.H. wrote the manuscript with input from all authors. J.H. initialised the study and the collaborations, and supervised all aspects of the study.
Correspondence to Johannes Hohlbein.
Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Martens, K.J.A., van Beljouw, S.P.B., van der Els, S. et al. Visualisation of dCas9 target search in vivo using an open-microscopy framework. Nat Commun 10, 3552 (2019). https://doi.org/10.1038/s41467-019-11514-0
What is the difference between posterior and posterior predictive distribution?
I understand what a posterior is, but I'm not sure what the latter means.
How are the two different?
Kevin P. Murphy writes in his textbook, Machine Learning: A Probabilistic Perspective, that the posterior is "an internal belief state". What does that really mean? I was under the impression that a prior represents your internal belief or bias; where am I going wrong?
posterior definition
A.D
The simple difference between the two is that the posterior distribution depends on the unknown parameter $\theta$, i.e., the posterior distribution is: $$p(\theta|x)=c\times p(x|\theta)p(\theta)$$ where $c$ is the normalizing constant.
While on the other hand, the posterior predictive distribution does not depend on the unknown parameter $\theta$ because it has been integrated out, i.e., the posterior predictive distribution is: $$p(x^*|x)=\int_\Theta p(x^*,\theta|x)\,d\theta=\int_\Theta p(x^*|\theta)\,p(\theta|x)\,d\theta$$ (no further normalizing constant is needed here, since $p(\theta|x)$ is already normalized)
where $x^*$ is a new unobserved random variable and is independent of $x$.
I won't dwell on the posterior distribution explanation since you say you understand it, but the posterior distribution "is the distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained" (Wikipedia). So basically it's the distribution that explains your unknown, random, parameter.
On the other hand, the posterior predictive distribution has a completely different meaning in that it is the distribution for future predicted data based on the data you have already seen. So the posterior predictive distribution is basically used to predict new data values.
If it helps, here is an example graph of a posterior distribution and a posterior predictive distribution:
Jinhua Wang
That posterior predictive distribution graph needs new axis labels and a caption or something. I get the idea because I know what a posterior predictive distribution is, but someone who's just figuring it out could get seriously confused. – Cyan Sep 25 '13 at 20:24
Thanks @BabakP, could you also point me to what distribution you used to plot the pmf of theta, and $P(x^*|\theta)$ – A.D Sep 26 '13 at 3:42
...cause I would like to work out the full example. – A.D Sep 26 '13 at 3:48
I just pretended that my posterior was a Beta(3,2). I did not actually work out anything. But of course, if you want an example, assume the likelihood is a Binomial(n,p) and the prior on p is a Beta(a,b); then you should be able to obtain that the posterior is once again a beta distribution. – user25658 Sep 26 '13 at 5:03
There is a wording problem. The posterior distribution does not depend on the parameter. It is a function of the parameter but one does not need to know the true value of the parameter. The distinction is whether you want a distribution of $\theta$ or a distribution of data. – Frank Harrell Jun 24 '18 at 13:39
The predictive distribution is usually used when you have learned a posterior distribution for the parameter of some sort of predictive model. For example in Bayesian linear regression, you learn a posterior distribution over the w parameter of the model y=wX given some observed data X.
Then when a new unseen data point x* comes in, you want to find the distribution over possible predictions y* given the posterior distribution for w that you just learned. This distribution over possible y*'s given the posterior for w is the prediction distribution.
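To make that recipe concrete, the conjugate Beta-Binomial model suggested in the comments above can be worked out in a few lines. The following is a minimal sketch (the hyperparameters and data are illustrative assumptions, not values from the thread); note how the posterior is a distribution over the parameter $p$, while the posterior predictive is a distribution over new data $x^*$ with $p$ integrated out:

```python
import numpy as np
from scipy import stats

# Beta-Binomial model: x ~ Binomial(n, p), prior p ~ Beta(a, b).
a, b = 3, 2          # prior hyperparameters (illustrative)
n, x = 10, 7         # observed data: 7 successes in 10 trials (illustrative)

# Posterior over the PARAMETER p; by conjugacy it is Beta(a + x, b + n - x).
posterior = stats.beta(a + x, b + n - x)
print("posterior mean of p:", posterior.mean())

# Posterior predictive over NEW DATA x* out of m future trials:
#   p(x* | x) = integral of Binomial(x*; m, p) * Beta(p; a+x, b+n-x) dp,
# i.e. p is integrated out, leaving the Beta-Binomial distribution.
m = 5
ppd = stats.betabinom(m, a + x, b + n - x)
print("posterior predictive pmf:", np.round(ppd.pmf(np.arange(m + 1)), 3))
```

The first print concerns a parameter; the second concerns observable data, which is exactly the distinction drawn in the answers here.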
They refer to distributions of two different things.
The posterior distribution refers to the distribution of the parameter, while the predictive posterior distribution (PPD) refers to the distribution of future observations of data.
SPMQET
So, if I understand this correctly, if the true likelihood distribution (or true distribution where the data comes from) is Gaussian, then as we gather more and more data (observations), the PPD should converge towards a Gaussian (for some parameter $\theta$), while the posterior distribution should converge towards a spike at the true parameter $\theta$? – SimpleProgrammer Oct 29 '20 at 20:03
Why, precisely, do mathematicians think the Collatz conjecture is true? [closed]
I noticed Wikipedia says that
most mathematicians who have looked into the problem think the conjecture is true because experimental evidence and heuristic arguments support it
(seen on Wikipedia, late 2017).
Is this just wrong? i.e. it's simply not true that "most" mathematicians who have looked into Collatz think Collatz is true - ?
My question then, if they do tend to think it is true: why, very specifically, do they think that?
The fact that it's been tested up to about 10^61 doesn't mean much; many such conjectures turn out to be wrong in such cases.
The "probabilistic" "3/4" observation is not helpful (we already know it's "probably" true from any cursory examination. So what?)
As I understand it Krasikov and Lagarias showed that (in a word) "most" numbers definitely go to one, but we already know that.
(The work of Kurtz and Simon is beyond me but I believe it comes closer to showing - if anything - that the problem is "proven unsolvable", but that if anything would seem to dismiss the idea that "most mathematicians think it is true".)
Is there anything else?
What's the deal with all this? Have there been any recent breakthroughs (beyond my laughable knowledge level!) which would suddenly mean "most mathematicians who have looked into Collatz think" it is true?
Again, my question is, and thanks in advance, why very specifically do these folks think that??
formal-proofs collatz
Fattie
closed as primarily opinion-based by Yves Daoust, Professor Vector, user99914, I am Back, 5xum Nov 2 '17 at 8:48
Do you want one answer per mathematician? – Yves Daoust Nov 1 '17 at 12:50
hi Yves, that's extremely silly. With other similar situations, one can give specific, actual reasons why "we tend to think the conjecture is true, even if there is no proof yet". What, then, are these reasons in the Collatz case? – Fattie Nov 1 '17 at 12:52
I think it's a very good question -- what are the heuristic arguments that lead some mathematicians to believe that the Collatz conjecture is likely true? – littleO Nov 1 '17 at 13:07
The style of the question might be antagonizing people a bit, but it's often useful to understand heuristic arguments in favor of a conjecture. For example, Terence Tao discusses the probabilistic heuristic justification of the ABC conjecture here: terrytao.wordpress.com/2012/09/18/… . In fact, Tao discusses why the Collatz conjecture is plausible here: terrytao.wordpress.com/2011/08/25/… – littleO Nov 1 '17 at 13:19
@Fattie I doubt that the Collatz conjecture has been tested up to $10^{61}$. You probably mean $2^{61}$ – Peter Jul 5 '18 at 13:04
I cannot speak for all mathematicians; however, after roughly a year of research into the Collatz Conjecture, I can describe in detail why I personally believe the Conjecture could be true and why the supporting evidence $87*2^{60}$ and the $"3/4"$ argument are significant.
The belief that Collatz sequences always reach one may stem from mathematicians and others connecting this evidence to what they understand about the Conjecture. Most people who were either introduced to the Collatz Conjecture or heard about it and became curious at some point worked out some of the Collatz sequences by hand or with code. In doing so, they get a sense of why there's so much confusion and gain first-hand experience of the "randomness" generated by the algorithm.
When looking at the $87*2^{60}$ evidence or the $"3/4"$ argument they may conclude, "This supports what I worked out on paper or on my computer already, so this evidence must make some sense and therefore the Collatz Conjecture could be true." If they choose to spend more time on the Conjecture, this idea may be reinforced over time. This could also lead to the opposite idea where some people believe there must be some gigantic number out there that disproves the Conjecture. Personally, every time I worked on the problem, I became more and more convinced that the Conjecture is true, but it needs to be proven and the algorithm needs to be dissected and explained.
Of course, this only explains where the initial perspective of my own and possibly others came from, and one reason some people may have for believing this evidence has some (or no) meaning. However, interpretation is not the only reason this evidence may mean more than just that.
Without context, any statistic, ratio, or really big number does not mean or suggest anything, and just because a big number or well-received ratio was derived from the problem it came from does not mean the context of the original problem supports it. This may not be the case for these two pieces of evidence.
The $87*2^{60}$ evidence and $"3/4"$ make more sense as evidence when modified Collatz rules are considered where, if a number $x$ is odd, it is replaced by $ax+b$, and if $x$ is even, by $x/2$. While tweaking these rules can sometimes lead to drastically different results, these are the closest rules that can be referenced as additional context, since these algorithms share some basic fundamental rules despite not being simpler generalizations most of the time. As a result, we can look at rules that would have 'false conditions' such as another cycle or wandering off to infinity.
Modified Collatz rules such as $3x+5$, $3x+7$, and $3x+11$ have multiple loops for $x>0$. What is interesting here is that these loops are fairly accessible; most of them can easily be found by hand. As far as we know, none of these rules have strange loops that start in the millions, or trillions, or anything like that. Another rule, $3x+3$, also seems to share the same loop behavior as $3x+1$, but instead of going to 1 it goes to 3. After seeing some other examples of Collatz loops, this raises the following question: If such a loop existed for $3x+1$ among the googleplexquadrillionsmillions or whatever, why does it exist, and why have we come across more examples of loops with smaller numbers in the modified Collatz rules?
A similar approach can be applied to the $"3/4"$ evidence. The modified rule $5x+1$ seems to have a bizarre [1-6-3-16-8-4-2-1] loop, and then once you start with 7, the numbers seem to explode towards infinity, occasionally shrinking every so often along the way. This speculation makes more sense given that the drift $\log(5)-2\log(2)=\log(5/4)$ is positive (on average, each odd step multiplies by 5 and is followed by two halvings, so the expected growth factor $5/4$ exceeds one), supporting the observed "infinite" behavior. Having this as something to compare to, the $"3/4"$ argument now makes much more sense as a possible explanation for why the Collatz Conjecture on average does not explode where $5x+1$ seems to do so.
I know it may seem like I am cheating; a modified Collatz rule is certainly not the same as the original $3x+1$ problem. However, at least for the $3x+b$ rules, I believe there may be a relevant connection aside from convenience. For instance, the Collatz Conjecture seems to be embedded into some of the positive integers of rules where $b$ is odd and $b>1$. For example, apply $3x+5$ for $x = 65$. The resulting trajectory will be a multiple of the trajectory for 13 iterated by the algorithm $3x+1$. Therefore, I assume it may be possible there is at least some relation between the Collatz Conjecture and these modified rules.
[$3x+5$] 65->200->100->50->25->80->40->20->10->5->…
[$3x+1$] 13-> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2-> 1->…
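Both observations are easy to check numerically; here is a minimal sketch (the helper function is mine, not from the thread):

```python
import math

def trajectory(x, a, b, max_steps=30):
    """Iterate the generalised rule: x -> a*x + b if x is odd, x -> x/2 if even."""
    path = [x]
    for _ in range(max_steps):
        x = a * x + b if x % 2 else x // 2
        path.append(x)
        if x == 1:
            break
    return path

# Trajectory scaling: 65 = 5*13 under 3x+5 tracks 5 times the 3x+1 path of 13.
print(trajectory(65, 3, 5, 9))   # [65, 200, 100, 50, 25, 80, 40, 20, 10, 5]
print(trajectory(13, 3, 1, 9))   # [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]

# Drift heuristic: each odd step multiplies by a and is, on average, followed
# by two halvings, giving expected log-growth log(a) - 2*log(2) per odd step.
for a in (3, 5):
    print(f"{a}x+1 drift: {math.log(a) - 2 * math.log(2):+.3f}")
```

The first list is exactly five times the second, and the drift constants come out negative for $3x+1$ (contraction on average) and positive for $5x+1$ (expansion on average), matching the $"3/4"$ discussion above.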
Obviously, all of this is speculation until a formal and correct proof comes out or provable explanations for these patterns emerge. I hope this at least gives the impression of where I stand on this issue without sounding too much like a crank, and makes these pieces of evidence seem more appropriate when representing a possible case for what the proof of the Conjecture might or might not look like.
(note: Sorry for the long response. I did my best to be specific.)
Griffon Theorist697
Astounding answer, thanks - ingesting! – Fattie Nov 2 '17 at 21:19
Two additions. 1) For the $3x+r$ case: if we allow fractional numbers, then in the $3x+1$ problem we can have cycles on such fractional numbers. Let the smallest one be $a_1=p/q$ with $p,q$ different primes. Then with $r=q$ the $3x+r$ case has a cycle at $a_r=a_1 \cdot r$, which is then an integer. – Gottfried Helms Nov 27 '17 at 21:54
2) The idea that if there is no cycle in small integers then perhaps there is one in the zentillions... The Collatz cycle problem has the nice property that there is an upper bound for the members of a cycle - this bound is (a bit "fuzzy") depending on the cycle length $N$, so it conceptually works against a big-numbers argument. Those two conditions might make it easier for some mathematicians to trust that the conjecture is true once there is no cycle with $a_1 < 87 \cdot 2^{60}$. (The question of divergence seems to have been considered less frequently, beyond the statistical reasoning.) – Gottfried Helms Nov 27 '17 at 21:56
Climate change, fisheries management and fishing aptitude affecting spatial and temporal distributions of the Barents Sea cod fishery
Arne Eide
Ambio volume 46, pages 387–399 (2017)
Climate change is expected to influence spatial and temporal distributions of fish stocks. The aim of this paper is to compare climate change impact on a fishery with other factors impacting the performance of fishing fleets. The fishery in question is the Northeast Arctic cod fishery, a well-documented fishery where data on spatial and temporal distributions are available. A cellular automata model is developed for the purpose of mimicking possible distributional patterns and different management alternatives are studied under varying assumptions on the fleets' fishing aptitude. Fisheries management and fishing aptitude, also including technological development and local knowledge, turn out to have the greatest impact on the spatial distribution of the fishing effort, when comparing the IPCC's SRES A1B scenario with repeated sequences of the current environmental situation over a period of 45 years. In both cases, the highest profits in the simulation period of 45 years are obtained at low exploitation levels and moderate fishing aptitude.
It is difficult to predict future development of Arctic marine ecosystems and, even more so, how these are affected by human interactions. Immediate effects of such interactions are not only functions of the level and profile of the human activity but also of current state and dynamics of the natural system. Spatial and temporal distributions of prey and predator species vary, depending both on external drivers (e.g. climate and fisheries; Murawski 1993) and internal dynamics (e.g. spawning migrations; Rose 1993; Carvalho 1993). Long-term effects are by nature more difficult to predict than short-term perturbations, being functions of previous interactions and poorly known dynamics causing variations in spatial and temporal distributions of the system.
This describes the complexity of a marine ecosystem in its natural state, including the environmental variation which may occur within the natural sample space of the system (often referred to as natural variation). Climate change could cause system perturbations, redistributing some, or large, parts of the systems sample space. A dramatic change in the sample space of the system may be referred to as an ecosystem shift (Scheffer et al. 2001).
On the other hand, Arctic marine ecosystems are highly specialised to cope with significant environmental fluctuations, both between seasons within years and between years. The resilience of the system may be regarded as the evolutionary outcome of significant natural system variations, where only those species capable of adapting and coping in the long run have survived. This may suggest that systems exposed to highly fluctuating environmental conditions, such as the boreal marine ecosystems, are less vulnerable than others to changes caused by climate change.
When looking at the exploitation of the cod (Gadus morhua) stock in the Barents Sea, the resilience of the Northern cod fishery is confirmed by archaeological fishbone analyses (Barrett et al. 2008, 2011) showing that dried cod has continuously been exported from the remote sub-Arctic region to other European countries over a period of more than a thousand years. This period includes both the Medieval Warm Period (about 900–1400, Stocker et al. 2013), with a significantly warmer climate than today, and the Little Ice Age (1450–1850, Stocker et al. 2013), which, temperature-wise, we are still recovering from (Bianchi and McCave 1999).
While it may seem like a paradox that the fishery holding the longest documented trade history is found within the extreme, naturally fluctuating environment of the sub-Arctic, the reasoning above rather indicates that the sub-Arctic is a place where we could expect to find resilient marine ecosystems. Both the human system and the marine ecosystems in this area are highly adapted to cope with extreme natural fluctuations.
This paper focuses the Northeast Arctic (NEA) cod fishery, the most important fishery in the Barents Sea. The NEA cod stock employs a variety of different coping strategies to adapt to a fluctuating physical and biological environment such as spawning and feeding migratory patterns, cannibalism, maturation dynamics and opportunistic feeding strategies (Sætersdal and Loeng 1987; Brown et al. 1989; Jørgensen et al. 2008; Kjesbu et al. 2014). The study employs a simulation model emphasising the migratory patterns (spawning and feeding migrations) constituting the most important spatial and temporal model variables. The aim of the study is to compare model results when assuming the current conditions to prevail (zero scenario) versus corresponding results under climate change conditions (climate change scenario).
Given the difficulties of fully understanding the system dynamics in its natural state, the difficulties of predicting the effects of a possible system perturbation caused by climate change become even more challenging. But more so, also observing the actual configurations of a marine ecosystem or mapping its recent history in all details is virtually impossible. The aim of this paper therefore is not to predict or forecast the NEA cod fishery under the two scenarios but rather to present possible outcomes within the sample spaces of the two scenarios (which certainly turn out to also have large overlapping areas, though not being the focus of this study). The climate scenario is based on the IPCC AR4 SRES A1B scenario (Anon. 2007) which at that time (2007) was considered being reasonably realistic. The A1 storyline assumes political focus on economics rather than environmental issues and a globalised economy. Among the different scenarios within the A1 family, the A1B scenario assumes a balanced development of energy technologies. The recent assessment report indicates that the A1B scenario may be too optimistic and less realistic than first anticipated (Stocker et al. 2013).
The focus on spatial distributions and fleet diversity is motivated from the widespread expectation that northern fish species will shift to a more northern distribution caused by increased water temperatures (Perry et al. 2005). The modelling approach utilised in this study has been developed and presented in two previous papers (Eide 2014, 2016). While the previous studies focused on the problems of identifying impacts climate change may have on the Barents Sea cod fishery, this paper provides a comparative study of a selected climate scenario and a zero scenario where no climate effects are considered.
The study makes use of a cellular automata model (CAb: Cellular Automata biological model) covering biological growth and spatial and temporal distribution of the cod stock. This model is run together with an agent-based model (ABe: Agent-Based economic model) defined within the same lattice, covering the economic exploitation of the stock. The flow chart of the combined CAb-ABe-model and the connected SinMod model is shown in Fig. 1. While the SinMod model (Slagstad et al. 2015) is a 3D model with a temporal resolution of 6 h (or less), the CAb module is a 2D spatial model with time unit 1 month.
Model flow chart also indicating the one-way direction from the SinMod model to the CAb-ABe model. The automatised management module processes information about the state of the fish stock (the grey arrow) and set quotas based on given exploitation rates
CAb follows a standard setup with a uniform lattice of squared cells (80 × 80 km) and rules based on a Moore neighbourhood of range two. Each cell is defined in terms of geographical coordinates, and the state variable of the cell is the cod biomass in the water column at the geographical position of the cell. Hence, the spatial distribution of the cod biomass at one point in time is given by the matrix of state variables in the lattice. Following the definition of Moore neighbourhoods (Hogeweg 1988), the rules are given as the percentage-wise distribution of the centre cell of a 5 × 5 cell matrix into all 25 cells (at range = 2). With a time unit of 1 month, the cod distribution is recalculated monthly on the basis of the current state variables, month-specific rules and the cell-specific growth properties. Biomass within each cell grows linearly towards the environmental carrying capacity level, at which local stock collapse occurs so that only the fractional part of the biomass is left (while standardising the carrying capacity level to one). The natural mortality in the model is mainly covered by these local collapses, depending on monthly variation in carrying capacity levels and biomass levels in each cell after redistribution of biomass and biomass growth.
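A minimal sketch of one such monthly update may look as follows (the uniform 5 × 5 rule, the grid size and the growth term are placeholder assumptions; the fitted month-specific rules and cell-specific carrying capacities of the actual model are not reproduced here):

```python
import numpy as np
from scipy.signal import convolve2d

def monthly_step(biomass, rule, capacity, growth=0.05):
    """One CAb update: redistribute biomass with a 5x5 rule (Moore
    neighbourhood of range 2), grow, then model local collapse by keeping
    only the excess over full multiples of capacity when a cell overshoots."""
    b = convolve2d(biomass, rule, mode="same", boundary="fill")  # redistribution
    b = b * (1 + growth)       # proportional stand-in for the linear growth term
    over = b > capacity
    b[over] -= np.floor(b[over] / capacity[over]) * capacity[over]
    return b

rule = np.full((5, 5), 1 / 25)        # placeholder: spread centre cell evenly
capacity = np.ones((40, 40))          # carrying capacity standardised to one
biomass = np.zeros((40, 40)); biomass[20, 20] = 0.8
biomass = monthly_step(biomass, rule, capacity)
```

Because the placeholder rule is symmetric, convolving with it coincides with distributing each cell's biomass outwards; an asymmetric month-specific rule would need to be flipped first, since distribution is a cross-correlation rather than a convolution.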
In Fig. 1, two arrows from SinMod point into the fish stock box in the CAb-ABe module, representing the two datasets of monthly average ocean temperatures of each cell at 50-metre depth and the monthly biomass of small zooplankton species contained in each cell's water column. In addition to these two datasets, SinMod also provides bathymetric data, which by nature are fixed for the considered time period. The SinMod time series utilised in this study covers the 45-year period 2012–2057, aggregated to monthly intervals. SinMod data have in this study been converted from their original grid resolution of 20 km times 20 km to the CAb-ABe model resolution of 80 km times 80 km (see Eide 2014 for further details).
Information on spatial distribution of NEA cod for the period 2004–2010 has been provided through the FishExChange projectFootnote 1 by courtesy of the project staff. Catches in the database are registered on a quarterly basis while two surveys are carried out each year, winter survey (during April/May) and ecosystem survey (during August/September). Age-structured data from these data sources have been aggregated for the purpose of parameterising the CAb-ABe model. Registered catches and survey data have been spatially interpolated by Radial Basis Function interpolation (Myers 1994) followed by integration of the interpolated biomass surface. The integration has been performed over a geographic grid drawn as an equal size Lambert Azimuthal projection (corresponding to the projection used in the SinMod model with origin coordinates in 60°N, 58°E).
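As a sketch of this interpolate-and-integrate step, using SciPy's RBF interpolator on invented stand-in stations (station positions and densities are purely illustrative, and the per-cell integral is approximated by evaluating the surface at cell centres and multiplying by cell area):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
stations = rng.uniform(0, 800, size=(50, 2))   # survey positions (km), invented
density = np.exp(-((stations - 400) ** 2).sum(axis=1) / 2e5)  # toy density field

# Radial basis function interpolation of the observed density surface.
surface = RBFInterpolator(stations, density)

# Evaluate on the 80 km x 80 km grid and convert to biomass per cell.
centres = np.arange(40, 800, 80)
gx, gy = np.meshgrid(centres, centres)
cells = np.column_stack([gx.ravel(), gy.ravel()])
cell_biomass = surface(cells).clip(min=0) * 80 * 80   # density * cell area
print(cell_biomass.reshape(10, 10).round(1))
```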
The data sample from the period 2004 to 2010 was considered to represent the current environmental situation, rather than reflecting ongoing changes in climate. There are several reasons for this. The period is rather short and the datasets, although displaying significant variations in the distributional patterns, do not show any significant trends or systematic changes. The seasonal variations are extreme, but the seasonal biomass centres of gravity are almost identical each year during the period. In terms of weighted biomass distances for each quarter from a given geographical point (in the calculations, the coordinates of Tromsø were chosen), cluster analyses did not reveal any systematic changes, and different years constituted the main cluster for each of the four quarters.
Based on this, the data sample was considered a representative distribution related to the current climatic conditions. The average monthly spatial distributions of NEA cod stock biomasses during the period 2004–2010 were found by merging the different sources of information relevant for each month, as explained in Eide (2014). The resulting distributional maps for each month are shown in Fig. 2. All modifications made to the raw data received from SinMod and FishExChange are made publicly available through UiT Open Research Data.Footnote 2
Monthly NEA cod distribution charts and cells of gravitation centres of biomasses, blue cells from the integrated biomass data from 2004 to 2010 and red cells corresponding model outputs. The two charts to the right provide the annual sample of monthly gravitation centres for the empirical observations (blue) and the model representation
Spatial and temporal distributions of the NEA cod environmental carrying capacity levels for each scenario have been estimated on the basis of constraining physical and biological factors in addition to the observed distributional patterns (Fig. 2). The NEA cod distribution is assumed to be constrained to ocean depths less than a thousand metres and ocean temperatures higher than −1.5 °C (the monthly average at 50 m depth) (Eide 2014). In addition, a cell's environmental carrying capacity is reduced by 80 % when small zooplankton densities fall below 2 g carbon per square metre, considering the density of small zooplankton to be a proxy for food availability in the area.
Monthly estimated current carrying capacities are then modified according to SinMod datasets of bathymetry, temperatures and zooplankton biomasses over the simulation period, representing the changes in environmental carrying capacities corresponding to the A1B scenario. A stochastic element is added to the estimated environmental carrying capacities. As the mean deviation of the carrying capacity of each cell varies between 20 and 30 % (following the seasonal pattern of cod availability) during the period of observations (2004–2010), a normally distributed stochastic element with a mean value of one and a standard deviation of 10 % is assumed. The stochastic element also serves to establish the zero scenario monthly carrying capacities, repeating the current climate with the minor perturbations caused by the stochastic process.
Figure 3 displays the total NEA cod environmental carrying capacity anomalies of the two scenarios throughout the simulation period. The A1B scenario is essentially as presented in Eide (2014) while the zero scenario is defined as repeated sequences of the first 6 years of the A1B scenario which is presented in Eide (2014, 2016). Both the zero and the A1B scenario show ±10 % fluctuations related to the base year (2012), while the A1B scenario (upper panel in Fig. 3) in the mid 2030s displays a shift upwards, resulting in almost a 10 % increase in carrying capacity compared with the base year.
Monthly aggregates of normalised (base year 2012) carrying capacities for NEA cod based on initial distribution data from the FishExChange project (2004–2010). The upper panel shows the carrying capacity development over the period when utilising data from the SinMod A1B simulations while the lower panel shows the corresponding zero scenario, repeating the environmental conditions of the first 6 years throughout the simulation period
Having established the cellular automata lattice with cell-specific carrying capacities, which develop according to environmental variables and the observed distributional pattern of the cod population, the next step is to establish the cellular automata distributional rules. Essentially, the rules describe how individual cod move in terms of directions and distances within the time frame of 1 month. According to Rose et al. (1995), NEA cod may have a range between 210 and 720 km over a period of 30 days, indicating that three cells in all directions from a given cell in an 80 × 80 km grid represent a reasonable range (range = 2, assuming a Moore neighbourhood).
The rules should in principle be able to move the cod biomasses over time according to previous observations. This boils down to a straightforward statistical problem: minimising the sum of squared distances between the observed centres of gravity of the cod biomass and the centres of gravity of the by-rule distributed biomass (described in detail in Eide 2016). The best model fit is indicated by the red cells in Fig. 2, while the observed centres of gravity (based on surveys and catch information) are shown as blue cells in the same figure. The minimised sum of squares of the 12 observations equals 6.62 (measured in square units) within a distribution of monthly centres of gravity spanning 8 (horizontally) times 2 (vertically) cells (Eide 2014). This means that the rules perform sufficiently well in replicating the observed migratory pattern of the NEA cod stock. The rules are month specific and identical for all cells for each month.
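The fitting criterion can be sketched as follows (hypothetical helper names; the actual search over month-specific rule matrices is omitted):

```python
import numpy as np

def centre_of_gravity(biomass):
    """Biomass-weighted mean cell coordinates of a 2-D distribution."""
    rows, cols = np.indices(biomass.shape)
    total = biomass.sum()
    return (rows * biomass).sum() / total, (cols * biomass).sum() / total

def fit_error(observed, modelled):
    """Sum of squared distances (in cell units) between monthly centres of
    gravity of observed and rule-distributed biomass fields (12 pairs)."""
    err = 0.0
    for obs, mod in zip(observed, modelled):
        (r1, c1), (r2, c2) = centre_of_gravity(obs), centre_of_gravity(mod)
        err += (r1 - r2) ** 2 + (c1 - c2) ** 2
    return err
```

A fit error of 6.62 square units over 12 months, as reported above, corresponds to an average monthly centre-of-gravity displacement of well under one cell.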
The shifting carrying capacity distributions constitute the model environment and mimic the changes both in the physical and biological environment in which the cod stock lives, defining rich areas allowing the cod stock to expand and poor areas in which saturation levels are reached at low biomass levels. By affecting the distribution of biomasses also the migration pattern is affected, even though the cellular automata distribution rules are fixed for the whole simulation period (Eide 2014).
The ABe model includes four North-Norwegian fishing ports (Svolvær, Tromsø, Hammerfest and Vardø) and two fleet types (small and large vessels) placed in each of these ports (Fig. 4). The small vessels represent coastal fishing vessels with an assumed monthly range of four cells, while the large vessels may operate in the high sea, having a monthly range of eight cells.
The map illustrates the geographical areas covered by each of the eight fleet in the model. The ranges of the high sea vessels are indicated by solid circles while ranges of the small-scale coastal vessels are indicated with dashed circles. The two vessel types are placed in four different ports along the North-Norwegian coast (Svolvær, Tromsø, Hammerfest and Vardø)
Hannesson (1983) and Eide et al. (2003) suggest that the stock-output elasticities in harvest production differ significantly between fleet groups in the NEA cod fishery. In order to accommodate different stock-output elasticities for coastal and high sea fishing vessels, a Cobb–Douglas production function is used to express the monthly fleet harvest (\( h_{i} \)) in cell i when fishing effort is \( e_{i} \) and stock biomass \( x_{i} \),
$$ h_{i} \left( {e_{i} ,x_{i} } \right) = q e_{i} x_{i}^{\beta } , $$
where q is the catchability coefficient and β is the stock-output elasticity of the fleet, \( 0 \le \beta \le 1 \).
Similarly to Heen and Flaaten (2007), Hannesson (1975), and Eide (2007, 2008, 2016), we assume the cod fleets to be price takers. Following this approach, this study assumes a fixed price (p) per unit of harvest. The fleet revenue (re) obtained in cell i is
$$ re_{i} \left( {e_{i} ,x_{i} } \right) = p h_{i} \left( {e_{i} ,x_{i} } \right) $$
and corresponding variable cost (vc) of the fishing operation is
$$ vc_{i} (e_{i} ,d_{i} ) = (c_{\text{e}} + c_{\text{d}} d_{i} ) e_{i}, $$
where the variable \( d_{i} \) is the distance from homeport to cell i. \( c_{\text{e}} \) and \( c_{\text{d}} \) are cost parameters, the unit cost of effort and the per-unit-of-effort unit cost of distance, respectively. Apart from being operated from four different ports (causing differences in variable costs due to different distances to home ports), each of the two types of vessels (small-scale and high sea vessels) is assumed to be homogeneous in terms of technology and economy. However, the two types of vessels differ from each other in both of these dimensions.
The fleet contribution margin is found from Eqs. (2) and (3) by summing revenues and costs over all cells. A negative contribution margin will cause the fleet not to fish, since the revenue is not sufficient to cover running costs. After adjusting fishing effort accordingly, the total annual fleet contribution margin (cm) over all cells is
$$ cm\left( \boldsymbol{e}, \boldsymbol{x} \right) = \sum_{m=1}^{12} \sum_{i=1}^{n} \left\{ re_{m,i}\left( e_{m,i}, x_{m,i} \right) - vc_{m,i}\left( e_{m,i}, d_{i} \right) \right\}. $$
The matrices e and x give the fishing effort of the fleet and stock biomasses distributed on cells and months. Index m indicates month number and n is the total number of cells available for the given fleet. Number of available cells depends both on the physical range of the vessel (Fig. 4) and the regulatory divisions of sea areas. In Norway, the high sea vessels are not allowed to fish inside four nautical miles from the baseline.
Annual profit is found by withdrawing the fixed cost (fc) from the contribution margin described in Eq. (4):
$$ \pi \left( \boldsymbol{e}, \boldsymbol{x} \right) = cm\left( \boldsymbol{e}, \boldsymbol{x} \right) - \text{fc}. $$
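A compact sketch of Eqs. (1)–(5) for a single fleet (all parameter values are placeholders, not the Table 1 values):

```python
import numpy as np

def monthly_margin(e, x, d, p=10.0, q=0.001, beta=0.7, c_e=1.0, c_d=0.01):
    """Eqs. (1)-(3) per cell, summed as in Eq. (4) for one month.
    e, x, d: arrays of effort, biomass and port distance per cell."""
    harvest = q * e * x**beta              # Eq. (1), Cobb-Douglas
    revenue = p * harvest                  # Eq. (2)
    var_cost = (c_e + c_d * d) * e         # Eq. (3)
    margin = revenue - var_cost
    return np.where(margin > 0, margin, 0.0).sum()   # no fishing at a loss

def annual_profit(monthly_margins, fc=50.0):
    """Eq. (5): annual contribution margin minus fixed cost."""
    return sum(monthly_margins) - fc
```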
Total fleet fishing effort at time t (a given month in a given year) is the sum of the fishing effort distributed on all available cells:
$$ E_{t} = \sum_{i=1}^{n} e_{i,t}. $$
The fleet capacity in terms of maximum fishing effort which may be produced during a single month is V. The relation between absolute fleet size, V, and utilised fishing effort, E, is
$$ 0 \le E_{t} \le V_{t}. $$
This study assumes either a pure open access fishery or a quota-regulated one. Entry to and exit from the fishery are driven by profits above the normal level or negative profits, respectively. While Vernon Smith in his seminal paper (Smith 1968) assumed the flow of capital into a fishery to be proportional to profit, this study assumes fixed entry and exit rates of vessels. The varying degree of fleet utilisation (E/V) may, however, bring the resulting dynamics closer to the dynamics assumed by Smith, since fleet utilisation also varies in space and time (e.g. negative contribution margins keep vessels in harbour). After introducing the entry (fg) and exit (fd) rates, the fleet dynamics are given by
$$ \begin{array}{ll} \text{If } \pi_{t}\left( \boldsymbol{e}, \boldsymbol{x} \right) < 0 & \text{then } V_{t+1} = (1 - \text{fd})\, V_{t} \\ \text{If } \pi_{t}\left( \boldsymbol{e}, \boldsymbol{x} \right) > 0 & \text{then } V_{t+1} = (1 + \text{fg})\, V_{t}. \end{array} $$
Entry rates are often expected to be higher than exit rates as in Eide (2007).
A reasonable assumption is that the fishers attempt to maximise their economic performance by fishing in the most profitable areas (e.g. within the most profitable cells). The problem, however, is to identify where the most profitable cells are positioned. The fishers attempt to solve this problem using their best knowledge, experience and skills, including the use of fish-finding technology, the information that may be obtained within the fishing community and from other sources, attitude towards risk and the economic factors constraining their activity. How successful the fishers are in identifying the most profitable areas depends in this model on the value of a single parameter, the smartness parameter s. The core expression for each vessel group in the model is given by
$$ e_{j,t} = \frac{\left( re_{j,t} / vc_{j,t} \right)^{s}}{\sum_{i=1}^{n} \left( re_{i,t} / vc_{i,t} \right)^{s}}\, E_{t}, $$
where the distribution of fishing effort is determined by the ratios of Eqs. (2) and (3) and the value of smartness parameter s, reflecting the fleets aptitude of identifying the most profitable (in terms of the revenue/cost ratio) fishing grounds. The smartness parameter (s) is a lump-based parameter where a number of features are reduced down to the value of this single parameter. The two extremes (s = 0 and s = ∞) go from a uniform distribution of fishing activities in the area available for the fleet (s = 0, representing total ignorance) to placing all fishing activities into one single cell (s = ∞, perfect knowledge). For the special situation s = 1, the distribution of fishing activities exactly follows the distribution of profit opportunities (expressed by revenue/cost ratios).
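A sketch of Eq. (9) makes the role of s transparent (the numbers below are illustrative):

```python
import numpy as np

def allocate_effort(revenue, var_cost, E, s):
    """Eq. (9): distribute total fleet effort E over cells in proportion to
    (revenue/cost)**s; s = 0 is uniform, large s piles effort into one cell."""
    weight = (revenue / var_cost) ** s
    return E * weight / weight.sum()

ratios = np.array([1.0, 1.2, 1.5])     # three cells' revenue/cost ratios
for s in (0, 1, 3, 10):
    print(s, np.round(allocate_effort(ratios, np.ones(3), 100, s), 1))
```

With s = 0 the 100 effort units spread evenly (33.3 each); with s = 10 almost 90 of them end up in the most profitable cell.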
In the following, s = 1 is regarded as the lowest smartness level of interest, while a possibly unrealistically high level of s = 10 is the highest smartness level included in the study. The range \( s \in \{ 1, 10\} \) spans a large variety of distributional patterns and is considered to cover actual levels of knowledge and insight into possible distributional patterns forming the basis of rational decisions on where to fish. A smartness parameter value equal to one is clearly far below the expected smartness levels of today's fisheries, while a smartness value equal to 10 appears to be too optimistic with respect to level of insight and fishing aptitude. A qualified guess is that the most realistic smartness value is somewhere in the range of 2–3, depending on individual experience, knowledge, technical measures as well as social factors. In this study, the smartness value is assumed to be global within each simulation. The model parameter setting is presented in Table 1.
Table 1 Values used for fleet parameters and variables between model simulations [from Eide (2016)]
The study includes different governmental constraints represented by four different management regimes, of which one is no management (open access). The other three management regimes are all in principle structured similarly to the current management system, assuming different exploitation rates and perfect management control. A total allowable catch (TAC) is set according to a given target level of the fishing mortality rate (F), assuming perfect stock information.
The NEA cod stock is equally shared between Norway and Russia, and a Russian catch of the same quantity as the Norwegian catch is included without explicitly defining a Russian fleet. The Russian catch is assumed to be taken by high sea vessels, following the distribution of cod biomasses in areas available to Russian vessels.
The total Norwegian quota is shared between coastal (small) vessels and high sea (large) vessels in a fixed ratio (60/40), which is a slight simplification of the current quota allocation system. The high sea vessels are not allowed to fish inside four nautical miles from the baseline, which is implemented by limited access (25 % of total area) to cells along the coast.
This study investigates and compares distribution and variability in the two scenarios, in particular emphasising fleet diversity and spatial distribution of the fishing activity. While previous studies (Eide 2007, 2008) suggest that fisheries management may have a greater impact than climate change on the biological development and economic performance of Arctic groundfish fisheries, these studies did not, however, include spatial distributions of biomasses and fishing effort.
Twenty-four simulations within each scenario were performed, each combining one of six smartness levels (represented by the s-values 1, 1.5, 2, 3, 5 and 10) with one of four management regimes (F = 0.1, F = 0.2, F = 0.4 and open access). The fleet dynamics are in all simulations controlled as described above, inducing a total fleet capacity (number of vessels that may participate in the fishery, V) which is larger than or equal to the active fleet E: \( V \ge E \). Over time, the fleet size (V) and the fishing effort (E) follow different paths in different fishing ports and for the two types of fleets. The Shannon function H is used as a fleet diversity index (Eide 2016), mapping how fleet diversity develops in the different simulations.
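A generic sketch of such an index (the exact share definition used by Eide (2016) is not reproduced here; fleet shares of total capacity are an assumption):

```python
import numpy as np

def shannon_h(fleet_sizes):
    """Shannon diversity H = -sum(p_i * ln p_i) over fleet shares,
    skipping empty fleets."""
    p = np.asarray(fleet_sizes, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Eight fleets (two vessel types x four ports): equal sizes maximise H (ln 8),
# while a fishery dominated by a single fleet drives H towards zero.
print(shannon_h([10] * 8))                     # ~2.079
print(shannon_h([70, 10, 5, 5, 4, 3, 2, 1]))   # much lower
```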
Figure 5 shows monthly samples of biomass distribution outputs for all simulations over a period of 2 years (2030 and 2031), indicating how both scenarios follow the seasonal pattern in the cod stock available for exploitation. The expected season profile is displayed as a thick yellow curve, drawn from the mathematical expression for the season profile found in Eide et al. (2003). The variations indicated by each Box–Whisker item show the monthly variation within the 24 simulations performed within each of the two scenarios where the blue bars represent the zero scenario and the red bars the A1B scenario.
The Box–Whisker chart gives monthly values and variations over a period of 2 years (2030–2031) in the CAb-ABe model for all simulations, separated on the zero scenario (blue) and the A1B scenario (red). The thick, yellow curve is the catchability function found for the trawl fishery on the NEA cod stock in Eide et al. (2003)
Even though the 2 years captured in Fig. 5 are just prior to the occurrence of a striking shift in the development of the A1B carrying capacity anomaly (as seen in Fig. 3, this happened around 2034), the A1B scenario biomasses shown in Fig. 5 are significantly higher than the corresponding biomasses representing the zero scenario. To a large degree, however, the two scenarios overlap each other and both describe seasonal paths in close accordance with the expected seasonal profile.
The shift suggested to occur around 2034 is also visible in Figs. 6 and 7, showing the biomass and catch developments for all the simulations. These figures reveal several interesting features. The seasonal profiles of the two scenarios follow to a large degree the same pattern up to the mid-thirties, after which a significantly higher stock biomass appears in the cases of an exploitation rate based on a fishing mortality rate (F) equal to 0.2 and 0.4. This effect is however not apparent in the case of F = 0.1, in which available stock biomass is already stabilised at a quite high level (around 3 million tons according to Fig. 6) in both scenarios. The effect of the shift in environmental carrying capacity is however reflected in increased monthly catches also in the case of F = 0.1, though significantly less than the increases seen in the cases of F = 0.2 and F = 0.4 (Fig. 7).
Total monthly NEA cod biomasses available for fishing (by the modelled fleets) from 2013 to 2052 for different combinations of the smartness parameter s and the exploitation rate. The thick solid curves give the annual averages while the thin curves connect the monthly biomasses. The blue colour represents the zero scenario and the red colour the A1B scenario. The vertical axes give the stock biomass in million tons
Monthly NEA cod total catches from 2013 to 2052 for different combinations of the smartness parameter s and the exploitation rate. The thick solid curves give the average monthly catches while the thin curves connect the actual monthly catches. The blue colour represents the zero scenario and the red colour the A1B scenario. The vertical axes give the catches in million tons
The unregulated fishery differs from the other three in Figs. 6 and 7, particularly after the shift in carrying capacity, where stock biomasses, catches and seasonal peaks are clearly lower in the open access fishery. At high smartness levels and open access fishery, monthly available biomasses and obtained catches in peak season even reach higher levels in the zero scenario than in the A1B scenario.
The years after the environmental shift in the mid-thirties do, however, provide the fleets with considerably higher profits in the A1B scenario than what is obtained in the zero scenario (Fig. S1 of the Electronic Supplementary Material). When comparing the profit surfaces of the two scenarios for all the years (top left in Fig. S1) with the last 25 years of the simulation (top right in Fig. S1 of the Electronic Supplementary Material), it becomes visible how the environmental effect contributes to lifting the whole profit surface of the A1B scenario.
Open access fishery combined with high smartness levels results in higher profits in the zero scenario than in the A1B scenario throughout the simulation period. At lower smartness levels, however, the profit surface within the open access area reaches surprisingly high levels, as seen in the lower-right table in Fig. S1, where the profit obtained over the last 25 years in open access when s = 2 is close to the maximum overall profit (at F = 0.2 and s = 5). In general, the A1B scenario seems to give relatively larger benefits to higher smartness levels than the zero scenario does. In both cases, the highest profits are found at the fishing mortality rate (F) 0.2, but while the profit-maximising smartness level is 5 in the A1B scenario, it is 1.5 in the zero scenario.
Figure S1 also indicates, for both scenarios, that the largest profits are obtained at moderate levels of the smartness parameter s, in the range 1–3 in the zero scenario and 1–5 in the A1B scenario. The exception is the last 25 years of the simulation period, when also higher exploitation levels contribute to large profits in the A1B scenario.
Eide (2016) introduces a fleet diversity index based on the Shannon function H (Spellerberg and Fedor 2003), which is utilised in Fig. 8 and Fig. S2 of the Electronic Supplementary Material, distinguishing between vessels belonging to the coastal and high sea fishing fleets. As higher values of the diversity index indicate higher diversity, the coastal fleet clearly exhibits the highest diversity at low exploitation levels and for low smartness values at all levels of exploitation. The two scenarios follow the same pattern in this respect and also regarding the trends with increasing smartness levels. While higher smartness levels seem to contribute to increased fleet diversity for the high sea vessels (for exploitation rates at F = 0.2 and above), the opposite is the case for the coastal fleet, though not as pronounced at F = 0.2 as for the highest exploitation levels. Although the fleet diversity for high smartness levels and open access seems to drop below the corresponding levels of the zero scenario, and the opposite holds for F = 0.4, the general impression is that the fleet diversity in the A1B scenario corresponds very closely to the fleet diversities found in the zero scenario simulations.
Fleet diversity indexes (based on the Shannon Function H, see Eide 2016) found for the zero scenario (below) and the A1B scenario (above) for the different management regimes and varying smartness (s) values
The declining diversity for small-scale vessels at higher smartness values and exploitation rates is also clearly visible in Fig. S2. Each graphical plot in Fig. S2 is divided by the diagonal into two sectors, where the upper sector is the area where the high sea fleet exhibits a higher diversity than the coastal fleet, while it is the opposite in the sector below. In Fig. S2, for the zero scenario as well as for the A1B scenario, at high exploitation rates (open access and when F = 0.4) and for s-values higher than 1.5, all points lie above the diagonal and hence indicate that the high sea fleet is more diverse than the coastal fleet.
The fact that the high sea fleet has a wider range than the coastal fleet may be the simple explanation of the higher diversity at high levels of exploitation. The advantage of higher mobility combined with a minimum level of fishing aptitude becomes a relatively more important advantage as the exploitation level increases. As seen in Fig. 8, however, the diversity of the high sea fleet may also decrease at sufficiently high smartness levels when the exploitation level is high, while the decline in fleet diversity occurs at lower exploitation levels in the coastal fleet. It should however be noted that this picture may be completely reversed when including alternative fisheries, which first of all provide the coastal fleet with different options that could contribute to a higher fleet diversity.
At low s-values in open access, the fleet diversities of the two scenarios are virtually identical. Overall, the highest fleet diversities are found at low exploitation levels and high smartness levels. The fleet diversities of the two scenarios follow each other closely but from the zero to the A1B scenario, the tendency is increasing diversity in the coastal fleet at low smartness while it is the high sea vessel diversity which increases at high smartness levels.
Figure S3 of the Electronic Supplementary Material shows the vertical and horizontal distributional ranges of the gravity centres of stock biomass and fishing effort distributions over the 45 simulated years. The stock distribution in terms of centres of gravity turns out to be very stable, hardly affected by fishing intensity and levels of smartness. A slight North-eastern movement is indicated for the A1B scenario compared with the zero scenario, but the main impression is that the stock biomass distribution does not change. In the case of open access, the two scenarios are practically equal in terms of stock biomass distribution.
Significantly larger changes are seen in the distribution of fishing effort, also reflecting the changing fleet compositions due to stock properties, exploitation levels and smartness. At increasing levels of smartness, there is a slight tendency towards a more South-western distribution of fishing effort in both scenarios, even in the A1B scenario where the stock distribution moves slightly in the opposite direction. This indicates that the effect of reducing costs related to distance from port may be a more important factor than the possibly more North-eastern stock distribution.
Some details of the information embedded in Fig. S3 emerge in Fig. S4 of the Electronic Supplementary Material, which shows how the stock biomass distribution clusters for the two scenarios and their different combinations of fishing intensity and smartness levels. The two scenarios emerge as independent clusters for all smartness levels at the lowest exploitation rate (F = 0.1), while a more mixed picture is seen at higher exploitation levels. At the higher exploitation levels, the differences between the scenarios and combinations are smaller, but a distinct clustering between scenarios is still visible.
This is however not the case for the distributions of catch and effort (Fig. S5 and S6 of the Electronic Supplementary Material), which for natural reasons are closely related. In these cases, the lowest exploitation level and low smartness levels cluster independently of climate scenario. There also seems to be a combined clustering tendency for both scenarios at higher exploitation levels and higher smartness levels, suggesting that the distribution of effort, and hence catches, is more dependent on smartness levels and fishing intensities than on marginal changes in the distribution of stock biomass.
The idea of the NEA cod moving into a more northern distribution area is not supported by the findings of this study. On the contrary, the centres of gravity of the cod biomass distribution are surprisingly stable throughout the simulation period. While the distribution area to the north, south, and west is largely constrained by the ocean bathymetry, which is unaffected by climate change, a further easterly distribution is constrained by temperatures that are still below the levels preferred by cod (Eide 2014). It is reasonable to expect this to be the case also for other benthic species in the Barents Sea, while pelagic species are less constrained in their spatial distributions.
The SinMod simulation based on the A1B climate scenario suggests a significant environmental shift in the mid-2030s, causing a corresponding increase of about 10% in the environmental carrying capacity for the NEA cod stock. The shift also leads to a significant increase in the cod stock biomass, most visible at medium exploitation rates and low smartness levels. Under open access, the increased carrying capacity is not fully utilised due to higher fishing effort and extended seasons. At low exploitation levels, the environmental effect is also less visible, since the cod stock has already reached a high level.
Previous conclusions suggesting that fisheries management decisions have a greater impact on the development of fisheries than climate change (Eide 2007) seem to hold also after including the spatial dimension. Technological and other changes captured by the smartness parameter are also of great importance, and both management regimes and smartness levels clearly affect profits and fleet diversities. Given sound combinations of management and smartness levels, the climate change impacts on the NEA cod fishery could, however, significantly enhance the economic utilisation of this natural resource.
http://www.imr.no/fishexchange/fishexchangedatabase/nb-no.
doi: 10.18710/B8VW6H.
Anon. 2007. Climate change 2007—Impacts, adaptation and vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the IPCC. ISBN 978 0521 88010-7.
Barrett, J.H., C. Johnstone, J. Harland, W. Van Neer, A. Ervynck, D. Makowiecki, D. Heinrich, A.K. Hufthammer, et al. 2008. Detecting the medieval cod trade: A new method and first results. Journal of Archaeological Science 35: 850–861.
Barrett, J.H., D. Orton, C. Johnstone, J. Harland, W. Van Neer, A. Ervynck, C. Roberts, A. Locker, et al. 2011. Interpreting the expansion of sea fishing in medieval Europe using stable isotope analysis of archaeological cod bones. Journal of Archaeological Science 38: 1516–1524.
Bianchi, G.G., and I.N. McCave. 1999. Holocene periodicity in North Atlantic climate and deep-ocean flow south of Iceland. Nature 397: 515–517.
Brown, J.A., P. Pepin, D.A. Methven, and D.C. Somerton. 1989. The feeding, growth and behaviour of juvenile cod, Gadus morhua L., in cold environments. Journal of Fish Biology 35: 373–380. doi:10.1111/j.1095-8649.1989.tb02989.x.
Carvalho, G.R. 1993. Evolutionary aspects of fish distribution: genetic variability and adaptation. Journal of Fish Biology 43: 53–73.
Eide, A. 2007. Economic impacts of global warming: The case of the Barents Sea fisheries. Natural Resource Modeling 20: 199–221.
Eide, A. 2008. An integrated study of economic effects of and vulnerabilities to global warming on the Barents Sea cod fisheries. Climatic Change 87: 251–262.
Eide, A. 2014. Modelling spatial distribution of the Barents Sea cod fishery. In Lecture notes in computer science (LNCS) 8751, ed. J. Was, G.C. Sirakoulis, and S. Bandini, 288–299. ACRI.
Eide, A. 2016. Causes and consequences of fleet diversity in fisheries—The case of the Norwegian Barents Sea cod fishery. Elementa Science of the Anthropocene 4: 1–18. doi:10.12952/journal.elementa.000110.
Eide, A., F. Skjold, F. Olsen, and O. Flaaten. 2003. Harvest functions: The Norwegian bottom trawl cod fisheries. Marine Resource Economics 18: 81–93.
Hannesson, R. 1975. Fishery dynamics: A North Atlantic cod fishery. Canadian Journal of Economics 8: 151–173.
Hannesson, R. 1983. Bioeconomic production function in fisheries: Theoretical and empirical analysis. Canadian Journal of Fisheries and Aquatic Science 40: 968–982.
Heen, K., and O. Flaaten. 2007. Spatial employment impacts of fisheries management: A study of the Barents Sea and the Norwegian Sea fisheries. Fisheries Research 85: 74–83.
Hogeweg, P. 1988. Cellular automata as a paradigm for ecological modeling. Applied Mathematics and Computation 27: 81–100.
Jørgensen, C., E.S. Dunlop, A.F. Opdal, and Ø. Fiksen. 2008. The evolution of spawning migrations: State dependence and fishing-induced changes. Ecology 89: 3436–3448.
Kjesbu, O.S., A.F. Opdal, K. Korsbrekke, J.A. Devine, and J.E. Skjæraasen. 2014. Making use of Johan Hjort's "unknown" legacy: reconstruction of a 150-year coastal time-series on northeast Arctic cod (Gadus morhua) liver data reveals long-term trends in energy allocation patterns. ICES Journal of Marine Science: Journal du Conseil 71: 2053–2063.
Murawski, S.A. 1993. Climate change and marine fish distributions: Forecasting from historical analogy. Transactions of the American Fisheries Society 122: 647–658. doi:10.1577/1548-8659.
Myers, D.E. 1994. Spatial interpolation: An overview. Geoderma 62: 17–28.
Perry, A.L., P.J. Low, J.R. Ellis, and J.D. Reynolds. 2005. Climate change and distribution shifts in marine fishes. Science 308: 1912–1915.
Rose, G.A. 1993. Cod spawning on a migration highway in the north-west Atlantic. Nature 366: 458–461. doi:10.1038/366458a0.
Rose, G.A., B. deYoung, and E.B. Colbourne. 1995. Cod (Gadus morhua) migration speeds and transport relative to currents on the North-East Newfoundland Shelf. ICES Journal of Marine Science 52: 903–914.
Sætersdal, G., and H. Loeng. 1987. Ecological adaptation of reproduction in Northeast Arctic cod. Fisheries Research 5: 253–270. doi:10.1016/0165-7836(87)90045-2.
Scheffer, M., S. Carpenter, J.A. Foley, C. Folke, and B. Walker. 2001. Catastrophic shifts in ecosystems. Nature 413: 591–596. doi:10.1038/35098000.
Slagstad, D., P.F. Wassmann, and I. Ellingsen. 2015. Physical constrains and productivity in the future Arctic Ocean. Frontiers in Marine Science 2: 85.
Smith, V.L. 1968. Economics of production from natural resources. American Economic Review 58: 409–431.
Spellerberg, I.F., and P.J. Fedor. 2003. A tribute to Claude Shannon (1916–2001) and a plea for more rigorous use of species richness, species diversity and the 'Shannon–Wiener' Index. Global Ecology and Biogeography 12: 177–179.
Stocker, T.F., D. Qin, G.-K. Plattner, M.M.B. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, et al. 2013. Climate change 2013: The physical science basis. Working Group I contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press. doi:10.1017/CBO9781107415324.
The research leading to these results has received funding from the European Union under Grant Agreement n° 265863 within the Ocean of Tomorrow call of the European Commission Seventh Framework Programme.
Faculty of Biosciences, Fisheries and Economics, UiT – The Arctic University of Norway, Breivika, 9037, Tromsø, Norway
Arne Eide
NOFIMA, 9291, Tromsø, Norway
Correspondence to Arne Eide.
Below is the link to the electronic supplementary material.
Supplementary material 1 (PDF 542 kb)
Eide, A. Climate change, fisheries management and fishing aptitude affecting spatial and temporal distributions of the Barents Sea cod fishery. Ambio 46 (Suppl 3), 387–399 (2017). https://doi.org/10.1007/s13280-017-0955-1
Issue Date: December 2017
Fisheries economics
Fleet diversity
Spatial distribution | CommonCrawl |
Mathematical Biosciences and Engineering
2019, Volume 16, Issue 4: 2168-2188. doi: 10.3934/mbe.2019106
Research article Special Issues
Epidemics and underlying factors of multiple-peak pattern on hand, foot and mouth disease in Wenzhou, China
Chenxi Dai 1, Zhi Wang 1, Weiming Wang 2, Yongqin Li 1, Kaifa Wang 1
School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing 400038, P.R. China
School of Mathematical Science, Huaiyin Normal University, Huaian 223300, P.R. China
Received: 27 December 2018 Accepted: 21 February 2019 Published: 12 March 2019
Background: Several outbreaks of severe hand-foot-mouth disease (HFMD) in East Asia and Southeast Asia in recent years have had a serious impact on the affected countries. However, the factors that contribute to the annual multiple-peak pattern of HFMD outbreaks, and how and when these factors play a decisive role in HFMD transmission, remain unclear. Methods: Based on the surveillance data of HFMD from 1 January 2010 to 31 December 2015 in Wenzhou, China, the daily model-free basic reproduction number and its annual average were first estimated by incorporating incubation and infection information; then the annual model-based basic reproduction number was computed from the proposed kinetic model; and finally the potential impact factors behind the multiple-peak pattern were assessed through global and time-varying sensitivity analyses. Results: All annual model-based and model-free basic reproduction numbers were significantly higher than one. School opening in both the spring and fall semesters, the meteorological effect in the spring semester, and the interactions among them were strongly correlated with the annual model-based basic reproduction number; these were the main underlying factors behind the annual multiple-peak pattern of HFMD outbreaks. Conclusions: School opening was primarily responsible for peaks of HFMD outbreaks, and meteorological factors in the spring semester should also be of high concern. The optimum timing for implementing social distancing is at the beginning of every school semester, and health education focusing on personal hygiene and good sanitation should be highlighted in the spring semester.
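The "model-free" daily reproduction number referred to above is typically computed from a renewal-equation estimator driven by the incidence series and a discretised infectiousness (serial-interval) distribution. The R sketch below illustrates this general idea only; the incidence counts and interval weights are hypothetical, and the authors' actual estimator, which incorporates incubation and infection information, may differ in detail.

```r
# Sketch of a renewal-equation reproduction number:
#   R_t = I_t / sum_s( w_s * I_{t-s} ),
# where w is a discretised serial-interval distribution.
# Incidence series and weights here are hypothetical.
incidence <- c(2, 3, 5, 8, 12, 15, 14, 18, 20, 17, 15, 12)
w <- dgamma(1:7, shape = 4, rate = 1)  # assumed serial-interval weights
w <- w / sum(w)

R_t <- sapply(seq_along(incidence), function(t) {
  s <- seq_len(min(t - 1, length(w)))
  if (length(s) == 0) return(NA)          # undefined at the first time point
  lambda <- sum(w[s] * incidence[t - s])  # total infectiousness at time t
  incidence[t] / lambda
})
round(R_t, 2)
```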
Keywords: hand-foot-mouth disease, multiple-peak pattern, underlying factor, mathematical modeling, basic reproduction number
Citation: Chenxi Dai, Zhi Wang, Weiming Wang, Yongqin Li, Kaifa Wang. Epidemics and underlying factors of multiple-peak pattern on hand, foot and mouth disease in Wenzhou, China[J]. Mathematical Biosciences and Engineering, 2019, 16(4): 2168-2188. doi: 10.3934/mbe.2019106
© 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
Figure 1. Flowchart of HFMD transmission in a population
Figure 2. Estimated daily model-free basic reproduction number ($ R_{0}^{free}(day) $) and the reported daily HFMD infectious cases in Wenzhou, 2010-2015. The blue solid line and its gray shade represent the mean and 95% confidence interval (CI) of the daily model-free basic reproduction number. The black dashed line represents the critical basic reproduction number threshold of one. The histogram represents the daily number of reported infectious cases; the orange bars denote school opening days, while the green bars represent days during school vacation
Figure 3. The tendency of the daily model-free basic reproduction number during school opening days. (A) Spring semester. (B) Fall semester. Blue dots represent the daily model-free basic reproduction number $ R_{0}^{free}(day) $ per year (from 2010 to 2015). The red dotted line and the orange error bars represent the mean and corresponding standard error of the daily model-free basic reproduction number. The black dashed line represents the critical basic reproduction number threshold of one
Figure 4. Illustration of the fitting result of model (2.1) from January 1st, 2010 to December 31st, 2015 under the estimated parameters. The solid blue line represents the model-predicted HFMD infected cases and the reported cases are shown as red dots. The gray areas represent the 95% confidence interval of the model prediction
Figure 5. Illustration of the sensitivity analysis of the annual model-based basic reproduction number to the five factors in the contact rate. The gray area represents PRCC values between 0.2 and 0.4, indicating moderate correlation. PRCC values above the gray area denote strong correlations, while PRCC values below the gray area represent no significant correlations
Figure 6. Illustration of the time-varying sensitivity analysis. (A) Time series of the estimated daily model-free basic reproduction number $ R_{0}^{free}(day) $ and the model-based contact rate $ \beta(t) $. The blue solid line and the gray shade represent the mean value and 95% confidence interval (CI) of $ R_{0}^{free}(day) $ from 2010 to 2015. Orange, yellow, green, purple, and blue shades represent different parts of the daily contact rate, $ \beta_{s\times m}(t) $, $ \beta_{f\times m}(t) $, $ \beta_{ms}(t) $, $ \beta_{mf}(t) $ and $ \beta_{mr}(t) $, respectively. (B) PRCCs between the estimated daily contact rate and all five factors over time
Figure 7. Probability density function of incubation, infection and serial interval
Figure 8. The Spearman correlation coefficient between the daily model-free basic reproduction number $ R_{0}^{free}(day) $ and the measured mean relative humidity with different time lag days $ \bar{H}_{R}(t-\tau) $, $ \tau = 0, 1, \cdots, 30 $
Figure 9. Distribution of each parameter $ b_{i} $ in the contact rate based on MCMC analysis, $ i = 1, 2, \cdots, 19 $. The algorithm runs for 1,000,000 iterations with a burn-in of 500,000 iterations, and the Geweke convergence diagnostic method was employed to assess convergence of the chains. The initial values of each parameter were randomly selected within their feasible ranges
May 2013, 12(3): 1299-1306. doi: 10.3934/cpaa.2013.12.1299
An anisotropic regularity criterion for the 3D Navier-Stokes equations
Xuanji Jia 1 and Zaihong Jiang 2
Department of Mathematics, Zhejiang Normal University, Jinhua 321004, Zhejiang, P. R. China
Received: January 2012; Revised: June 2012; Published: September 2012
In this paper, we establish an anisotropic regularity criterion for the 3D incompressible Navier-Stokes equations. It is proved that a weak solution $u$ is regular on $[0,T]$, provided $\frac{\partial u_3}{\partial x_3} \in L^{t_1}(0,T;L^{s_1}(R^3))$, with $\frac{2}{t_1}+\frac{3}{s_1}\leq 2$, $s_1\in(\frac{3}{2},+\infty]$ and $\nabla_h u_3 \in L^{t_2}(0, T; L^{s_2}(R^3))$, with either $\frac{2}{t_2}+\frac{3}{s_2}\leq \frac{19}{12}+\frac{1}{2s_2}$, $s_2\in(\frac{30}{19},3]$ or $ \frac{2}{t_2}+\frac{3}{s_2}\leq \frac{3}{2}+\frac{3}{4s_2}$, $s_2\in(3,+\infty]$. Our result in fact improves a regularity criterion of Zhou and Pokorný [Nonlinearity 23 (2010), 1097--1107].
Keywords: Anisotropic regularity criterion, Navier-Stokes equations.
Mathematics Subject Classification: Primary: 35Q35, 35B65; Secondary: 76D05.
Citation: Xuanji Jia, Zaihong Jiang. An anisotropic regularity criterion for the 3D Navier-Stokes equations. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1299-1306. doi: 10.3934/cpaa.2013.12.1299
J. Leray, Sur le mouvement d'un liquide visqueux emplissant l'espace, Acta Math., 63 (1934), 193. doi: 10.1007/BF02547354.
E. Hopf, Über die Anfangswertaufgabe für die hydrodynamischen Grundgleichungen, Math. Nachr., 4 (1951), 213. doi: 10.1002/mana.3210040121.
G. Prodi, Un teorema di unicità per le equazioni di Navier-Stokes, Ann. Mat. Pura Appl., 48 (1959), 173. doi: 10.1007/BF02410664.
J. Serrin, On the interior regularity of weak solutions of the Navier-Stokes equations, Arch. Rat. Mech. Anal., 9 (1962), 187. doi: 10.1007/BF00253344.
L. Escauriaza, G. Seregin and V. Šverák, Backward uniqueness for parabolic equations, Arch. Rat. Mech. Anal., 169 (2003), 147. doi: 10.1007/s00205-003-0263-8.
H. Beirão da Veiga, A new regularity class for the Navier-Stokes equations in $\mathbf{R}^n$, Chin. Ann. Math., 16 (1995), 407.
J. Neustupa, A. Novotný and P. Penel, An interior regularity of a weak solution to the Navier-Stokes equations in dependence on one component of velocity, (2002), 163.
Y. Zhou, A new regularity criterion for weak solutions to the Navier-Stokes equations, J. Math. Pures Appl., 84 (2005), 1496. doi: 10.1016/j.matpur.2005.07.003.
Y. Zhou, A new regularity result for the Navier-Stokes equations in terms of the gradient of one velocity component, Methods Appl. Anal., 9 (2002), 563.
M. Pokorný, On the result of He concerning the smoothness of solutions to the Navier-Stokes equations, Electron. J. Diff. Eqns., 11 (2003), 1.
C. Cao and E. S. Titi, Regularity criteria for the three-dimensional Navier-Stokes equations, Indiana Univ. Math. J., 57 (2008), 2643. doi: 10.1512/iumj.2008.57.3719.
I. Kukavica and M. Ziane, One component regularity for the Navier-Stokes equations, Nonlinearity, 19 (2006), 453. doi: 10.1088/0951-7715/19/2/012.
I. Kukavica and M. Ziane, Navier-Stokes equations with regularity in one direction, J. Math. Phys., 48 (2007). doi: 10.1063/1.2395919.
Y. Zhou and M. Pokorný, On a regularity criterion for the Navier-Stokes equations involving gradient of one velocity component, J. Math. Phys., 50 (2009). doi: 10.1063/1.3268589.
Y. Zhou and M. Pokorný, On the regularity of the solutions of the Navier-Stokes equations via one velocity component, Nonlinearity, 23 (2010), 1097. doi: 10.1088/0951-7715/23/5/004.
A multimethod approach for county-scale geospatial analysis of emerging infectious diseases: a cross-sectional case study of COVID-19 incidence in Germany
Christopher Scarpone1,
Sebastian T. Brinkmann2,
Tim Große2,
Daniel Sonnenwald2,
Martin Fuchs2 &
Blake Byron Walker 2 (ORCID: orcid.org/0000-0002-1983-3147)
As of 13 July 2020, 12.9 million COVID-19 cases have been reported worldwide. Prior studies have demonstrated that local socioeconomic and built environment characteristics may significantly contribute to viral transmission and incidence rates, thereby accounting for some of the spatial variation observed. Due to uncertainties, non-linearities, and multiple interaction effects observed in the associations between COVID-19 incidence and socioeconomic, infrastructural, and built environment characteristics, we present a structured multimethod approach for analysing cross-sectional incidence data within an Exploratory Spatial Data Analysis (ESDA) framework at the NUTS3 (county) scale.
By sequentially conducting a geospatial analysis, an heuristic geographical interpretation, a Bayesian machine learning analysis, and parameterising a Generalised Additive Model (GAM), we assessed associations between incidence rates and 368 independent variables describing geographical patterns, socioeconomic risk factors, infrastructure, and features of the built environment. A spatial trend analysis and Local Indicators of Spatial Association were used to characterise the geography of age-adjusted COVID-19 incidence rates across Germany, followed by iterative modelling using Bayesian Additive Regression Trees (BART) to identify and measure candidate explanatory variables. Partial dependence plots were derived to quantify and contextualise BART model results, followed by the parameterisation of a GAM to assess correlations.
A strong south-to-north gradient of COVID-19 incidence was identified, facilitating an empirical classification of the study area into two epidemic subregions. All preliminary and final models indicated that location, densities of the built environment, and socioeconomic variables were important predictors of incidence rates in Germany. The top ten predictor variables' partial dependence exhibited multiple non-linearities in the relationships between key predictor variables and COVID-19 incidence rates. The BART, partial dependence, and GAM results indicate that the strongest predictors of COVID-19 incidence at the county scale were related to community interconnectedness, geographical location, transportation infrastructure, and labour market structure.
The multimethod ESDA approach provided unique insights into spatial and aspatial non-stationarities of COVID-19 incidence in Germany. BART and GAM modelling indicated that geographical configuration, built environment densities, socioeconomic characteristics, and infrastructure all exhibit associations with COVID-19 incidence in Germany when assessed at the county scale. The results suggest that measures to implement social distancing and reduce unnecessary travel may be important methods for reducing contagion, and the authors call for further research to investigate the observed associations to inform prevention and control policy.
Since the initial outbreak in late 2019 in Wuhan, China [1], the novel coronavirus SARS-CoV-2 has spread to 207 countries worldwide, causing an estimated 12.9 million cases and 569,128 deaths due to coronavirus disease 2019 (COVID-19) as of 13 July 2020 [2]. In Germany, the first case was recorded on 27 January 2020 [3], in Bavaria. Most recently, there were 198,963 reported cases and 9064 deaths in Germany as of 13 July 2020 [4]. Federal social distancing measures neared their peak on 28 March 2020, with curfews implemented independently at the NUTS-3 (county) level as early as 20 March [5].
Local person-to-person transmission of the virus is attributable to shedding on the nasopharyngeal, turbinate, and oropharyngeal surfaces [6, 7], with the virus transmitted primarily via airborne droplets ejected from the nose or mouth [6]. Owing to an estimated average incubation period of 5–6 days, ranging up to two weeks [8,9,10,11], the virus can be transmitted to multiple persons by asymptomatic individuals [7]. Up to 78% of individuals who test positive are asymptomatic at the time of testing (Day, 2020), therefore likely accounting for the majority of new cases [7]. Research and public health guidelines have accordingly emphasised interpersonal proximity as a key risk factor, advising a minimum interpersonal distance of 1.5 m to reduce the risk of transmission [11].
Meta-population framework
In order to identify spatial patterns and accurately model viral contagion, a minimum number of infected individuals must be established. This threshold allows for the identification of the transmission parameters necessary for deterministic modelling [12, 13]. Once patterns can be detected, the meta-population theory for epidemiology [14] provides a valuable framework for modelling and analysis. A meta-population is the aggregate of all global populations (Fig. 1). In the context of global SARS-CoV-2 spread, each country can be considered an individual population [12]. The transmission of SARS-CoV-2 is therefore broadly characterised by inter-population transmission and intra-population contagion.
The meta-population framework describes the global and local transmission of emerging infectious diseases (EIDs) by inter-population invasion and intra-population contagion [15]
Intra-population contagion [15, 16] can be locally driven, where individual members inside the extent of the initial outbreak boundary (Wuhan, China) begin to transmit the disease to other members of the local population. Should a threshold number of individuals be diagnosed, the local socioeconomic, built environment, and spatial patterns can then be analysed [13, 15]. The examination of these types of patterns and associations assists researchers and public health officials to define the spatial diffusion and reproduction of a disease, and accordingly, target prevention measures and direct interventions [17, 18].
The subsequent horizontal transmission is referred to as inter-population invasion [15, 19] and is characterised by a semi-stochastic process that acts on a global scale [15]. Infected members of the population transmit the virus from the outbreak extent to new, uninfected cities via nodes of transportation networks such as airports and train stations [20, 21]. Global transmission of emerging infectious diseases (EIDs) is therefore the iterative process of intra-population contagion within a population that then allows a stochastic jump to inter-population invasion. We hypothesise that socioeconomic characteristics of a population and features of the built environment comprise important factors in both intra-population contagion and inter-population invasion (e.g., employment rates, social assistance, airports, and major train stations). By examining geospatial patterns of incidence and associated social- and built-environmental features across Germany, this cross-sectional study frames Germany as a population and each constituent county (NUTS-3) as an individual member of the population.
Socioeconomic and built environment factors
Socioeconomic status (SES) is well understood to play a significant role in the transmission of infectious disease, for example, through intra-population contagion among socioeconomically homogeneous subpopulations [22]. For example, age plays a role both in individual risk of respiratory infection and in the frequency and nature of interpersonal contact [23]. More broadly, higher rates of infectious diseases such as influenza, invasive group A streptococcal infections, and pneumococcal infections have been observed among socioeconomically deprived subpopulations (e.g., low-income, high unemployment) [22]. Spatial analysis of SES has thus been widely used to investigate social and economic risk factors, predict high-risk areas, and target interventions [24, 25].
It is well understood that the built environment exerts an influence on patterns of human mobility and social interaction, which are in turn key factors in the transmission and prevalence of infectious disease [26]. For example, the aforementioned study on the risk of respiratory infections indicates that the location of contact is important for the risk of transmission [23]. Furthermore, the spatial configuration of buildings can have an impact on disease transmission, for example, by affecting the density of persons moving through a confined space [26]. However, the density of features of the built environment has, to our knowledge, not yet been comprehensively modelled for spatial-epidemiological analysis of infectious disease, presenting an important avenue for investigation which this study seeks to begin to address.
Spatial epidemiology emphasises the importance of geographical patterns in understanding disease risk factors, incidence, and outcomes [17, 18]. For example, incidence rates of an infectious disease often exhibit spatial associations with SES and the built environment [18], which function as possible determinants of interpersonal contact and vulnerability to infection. The identification and investigation of geospatial patterns and high-/low-rate clusters is therefore a key process for characterising aetiologies, identifying high-risk populations, and targeting interventions [27].
The use of geographic information systems (GIS) facilitates empirical representation of the spatial associations between socioeconomic- and built environments and infectious disease incidence [17, 28]. Many studies focus on spatial autocorrelation, which provides a means of estimating the influence of proximity on the interactions between nearby features [28], both in that proximal features are more likely to interact and are more likely to be similar in composition [17, 29]. GIS thus provide a platform for modelling and analysing spatial autocorrelation within a spatial epidemiology framework [18], for example, by interpolating and examining spatiotemporal patterns of infectious disease [30] and identified associations with socioeconomic characteristics of subpopulations and relevant prevention and control measures [31].
Conversely, strictly mathematical approaches to epidemiological modelling focus predominantly on the simulation of propagation dynamics under various defined conditions [12, 15, 32]. These models focus on identifying transmission vectors and simulating transmission scenarios [32], and may include a spatial component [33]. As computational processing power continues to rapidly improve, researchers are increasingly able to incorporate sophisticated mathematical techniques, such as Bayesian machine learning, to model both geospatial patterns and socioeconomic/environmental data within a spatial-epidemiological framework [18, 34]. These efforts are key to identifying otherwise concealed geographical patterns and associations, an important initial step towards advancing our understanding of risk factors and transmission dynamics [27].
A rapid increase in the quantity of socioeconomic, environmental, and health data is further driving modern statistical methodologies for epidemiology modelling [35], as a growing number of variables must be modelled in order to more comprehensively explain spatial patterns of disease. Consequently, such methods are able to account for more complexity and thus have immense value for developing more informed decisions in health care and disease control [34]. Of particular prominence in recent years is the use of geospatially-explicit artificial intelligence for environmental epidemiology [36], including the combined use of machine learning, GIS, precision incidence data, and exposure modelling.
This cross-sectional study presents an empirical exploration and interpretation of the spatial patterns exhibited by COVID-19 incidence rates across Germany. A combination of epidemiological and machine learning techniques are used to identify associations between COVID-19 incidence rates and socioeconomic and built-environment characteristics at the county scale.
Methodology to examine patterns of COVID-19 incidence as defined by spatial, socioeconomic, and built environment features and characteristics. RKI: Robert Koch Institut; INKAR: Indikatoren und Karten zur Raum- und Stadtentwicklung [Indicators and maps for land and urban development]; OSM: OpenStreetMap; EEA: European Environment Agency; BART: Bayesian Additive Regression Trees; GAM: Generalised Additive Models; PDP: Partial Dependence Plot
We followed a linear methodology, as shown in Fig. 2, comprising data acquisition and preprocessing, spatial modelling, and aspatial modelling. County-level COVID-19 incidence data published by the Robert-Koch-Institute were downloaded through the publicly-accessible NPGEO-DE platform [37]. Socioeconomic data for Germany were collected through the INKAR (Indikatoren und Karten zur Raum- und Stadtentwicklung) data portal [38]. Built environment features were downloaded from OpenStreetMap [39] and the German Bundesamt für Kartographie und Geodäsie [40]. Population densities were derived from the European Environment Agency's 100-metre resolution Population Density Grid. Exploratory analysis of the geographic patterns was then undertaken using a geographical trend analysis and Local Indicators of Spatial Association (LISA). Finally, variable selection was conducted using Bayesian Additive Regression Trees (BART), where the most influential spatial, socioeconomic, and built environment variables were selected for further interpretation in the context of the COVID-19 epidemic in Germany as of 1 April 2020. A 40-fold cross-validation was conducted on the final BART outputs to assess prediction accuracy and model fit.
Data acquisition and preprocessing
Incidence rates
COVID-19 incidence data were downloaded on 1 April 2020, comprising a table of confirmed cases (N = 57,298) by county (N = 401) from the first case on 28 January until 31 March, including patient age group, sex, county of primary residence (NUTS-3), and the date at which the confirmed case was reported to the local health authority. Neither the date, location, nor means of infection were recorded.
Due to high spatial variation of age distributions in Germany, this analysis uses age-adjusted incidence rates. The age groupings used by the Robert-Koch-Institute for COVID-19 case reporting differ from those reported in population datasets; we therefore estimated age distributions for every county in the study area (N = 401). Based on the existing INKAR data, samples for each of the original age groups with sample sizes corresponding to the group's proportion of the total county population were simulated. Those samples were then used to approximate an empirical cumulative distribution function for the entire age distribution, from which the probabilities for the new age groups congruent with those of the RKI were derived. These estimated probabilities were then multiplied by the municipality population to acquire an estimated absolute number of persons per age group. Our R code is available on GitHub [41]. The results were manually cross-checked against INKAR population data for validation and exhibited less than 2% error. With the resulting base population distributions we directly adjusted municipal incidence rates to the German standard population and natural-log-transformed the result to improve the distribution of rates for statistical analysis. The resulting rates were mapped for visualisation and spatial analysis.
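As a rough illustration of this re-binning step, the R sketch below simulates individual ages from one set of group counts and rebins them via an empirical cumulative distribution function; the group boundaries and counts are hypothetical, and the authors' released code (see the GitHub link above) documents the actual implementation.

```r
# Sketch: re-bin county age-group counts into different reporting bins
# using a simulated empirical CDF. Bounds and counts are hypothetical.
inkar_breaks <- c(0, 6, 18, 25, 30, 50, 65, 75, 100)  # source bins
inkar_counts <- c(5000, 9000, 6000, 4000, 20000, 15000, 8000, 5000)
rki_breaks   <- c(0, 5, 15, 35, 60, 80, 100)          # target bins

# Simulate ages proportional to group counts, then rebin via the ECDF.
ages  <- unlist(mapply(function(lo, hi, n) runif(n, lo, hi),
                       head(inkar_breaks, -1), tail(inkar_breaks, -1),
                       inkar_counts))
F_hat <- ecdf(ages)
p_rki <- diff(F_hat(rki_breaks))        # probability per target group
round(p_rki * sum(inkar_counts))        # estimated counts per target group
```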
Socioeconomic data
The socioeconomic datasets were acquired using the INKAR data access tool, which comprises social, demographic, and economic characteristics of counties collected by various ministries, the federal states, and the municipal governments, and is validated and managed by the German federal government. The dataset includes a diverse set of indicators in the fields of economics, demography, education, and other social data.
Built environment densities
OpenStreetMap data for Germany for each selected built environment feature type were downloaded in April 2020 as separate vector files from Geofabrik [39] and were used as the primary dataset for constructing our built environment variables. For modelling purposes, we separately computed a peak density value for each feature type in each county (e.g., airports, train stations, grocery stores, parks). To calculate the peak densities, we constructed a novel spatial density function to account for each feature type's unique spatial structure, based on an heuristic approximation of geographical accessibility for each county population. This algorithm accounts for both the number and relative proximity of features of each type in each county [42], which were calculated using the Kernel Density Estimates function in the R package spatstat [43]. We created a custom parameterisation for each built environment feature within each county, calculated as the optimal bandwidth \(h_{opt}\):
$$\begin{aligned} h_{opt} = \bigl [\frac{2}{3n}\bigr ]^{1/4}\sigma \end{aligned}$$
where \(\sigma \) is the standard distance of all features within a given county and n is the total count of the selected feature type within that county [44]. A logit link function was then applied to estimate the optimal bandwidth for each county, selected in order to reduce biased weighting of spatially dense clusters of features at the expense of smaller clusters, e.g., in small towns and villages where person-to-person transmission is also likely to occur.
The calculated densities were then summarised for each feature type across each county, and each respective maximum density value was extracted for statistical modelling, based on the assumption that maximum densities provide a better approximation of person-to-person transmission than means or medians (e.g., in mostly rural counties with a small, yet very dense town, as is common in many regions of the study area).
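A minimal sketch of the per-county peak-density computation is given below, using the spatstat package with the bandwidth formula above; the point pattern is simulated, and the logit-link adjustment described in the text is omitted for brevity.

```r
# Sketch: kernel density for one feature type in one county, with
# bandwidth h_opt = (2 / (3n))^(1/4) * sigma, where sigma is the
# standard distance of the points. The point pattern is simulated.
library(spatstat)

set.seed(1)
X <- runifpoint(50, win = owin(c(0, 10), c(0, 10)))  # hypothetical features

n       <- npoints(X)
sigma_d <- sqrt(mean((X$x - mean(X$x))^2 + (X$y - mean(X$y))^2))
h_opt   <- (2 / (3 * n))^(1/4) * sigma_d

dens <- density(X, sigma = h_opt)  # kernel density surface
max(dens)                          # peak density entered into the models
```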
Exploratory spatial modelling
Local indicators of spatial association (LISA) were used to assess whether there was spatial clustering of log-adjusted incidence rates across Germany. LISA is an exploratory tool used to statistically assess geographical clustering of high and low values in a dataset [45]. LISA calculates local spatial autocorrelation at each individual county using a single variable, enabling the quantitative estimation of local spatial clustering [45], essentially indicating how similar an observation is to all other observations within a defined radius [46]. We used LISA to identify statistically significant hot spots (clusters of high values), cold spots (clusters of low values), and spatial outliers (e.g., a county with high rates within a low-rate cluster). LISA was calculated using ArcGIS 10.7.1 [47]. The distance band (radius of the spatial weight function) was determined by calculating the average distance between all county centroids, and an inverse distance squared parameter was used to define the spatial weighting function, selected to ensure that higher weights were given to nearer counties.
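The study computed LISA in ArcGIS; the following R sketch reproduces the same idea, Local Moran's I with a distance band and inverse-distance-squared weights, using the spdep package. The coordinates and rates are simulated placeholders.

```r
# Sketch: Local Moran's I (LISA) with a distance band and inverse
# distance squared weights, mirroring the ArcGIS settings described
# above. Coordinates and rates are simulated placeholders.
library(spdep)

set.seed(1)
coords <- cbind(runif(100, 6, 15), runif(100, 47, 55))  # planar centroids
rate   <- rnorm(100)                                    # log-adjusted rates

d_band <- mean(dist(coords))             # average inter-centroid distance
nb     <- dnearneigh(coords, 0, d_band)  # distance-band neighbours
glist  <- lapply(nbdists(nb, coords), function(d) 1 / d^2)
lw     <- nb2listw(nb, glist = glist, style = "W", zero.policy = TRUE)

lisa <- localmoran(rate, lw)  # Ii, expectation, variance, z-score, p-value
head(lisa)
```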
Exploratory spatial trend analysis of adjusted incidence rates was conducted to identify spatial structure in the data. Trend analysis is the identification and description of a univariate spatial pattern using multiple regression, where the response variable is the variable of interest (adjusted incidence rate) and the predictor variables are longitude and latitude [29, 48, 49]. The results can be interpreted as a global indicator of the spatiality of response variable [50].
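Such a first-order trend surface reduces to an ordinary multiple regression, as in the following R sketch with synthetic data containing a north-south gradient.

```r
# Sketch: first-order spatial trend surface via multiple regression of
# the response on centroid coordinates. Data are synthetic.
set.seed(1)
counties <- data.frame(lon = runif(401, 6, 15), lat = runif(401, 47, 55))
counties$log_adj_rate <- 4 - 0.3 * counties$lat + rnorm(401)  # N-S gradient

trend <- lm(log_adj_rate ~ lon + lat, data = counties)
summary(trend)  # a significant latitude coefficient indicates a N-S trend
```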
We elected to use a Bayesian modelling approach, which has the advantage (among others) of not being bound to the assumption of parametric parameter distributions, while facilitating model parameterisation based on prior data and/or iterative selective sampling of observed data distributions [51]. This approach allows for a reduction of bias and variance and for minimizing error when analysing small samples for inferential and prediction/classification problems [34, 52].
In order to identify important socioeconomic and built environment covariates of the COVID-19 incidence rate, a Bayesian Additive Regression Trees (BART) model was selected. BART is a machine learning tool that iteratively creates regression trees with variable hyperparameter distributions (e.g., number of nodes, tree depth) [53]. The parameter distributions are recorded from multiple iterations using a Metropolis-Hastings sampling algorithm, as all parameters and hyperparameters are not assumed to be parametric [53]. Unlike most ensemble methods, BART computes Bayesian posterior distributions to approximate the nonparametric model parameters and selects a strict error variance parameter to reduce the risk of overfitting. Additionally, BART has been shown to be effective at finding structure in high-dimensional data [54], lending itself to exploratory analysis, and offers further insights through an internal variable reduction method that emphasises important variables [53]. We used further measures to prevent overfitting and to select the optimal independent variables and hyperparameters by running iterative k-fold cross-validations with 5 to 20 folds. The BART models were run in RStudio (v.1.2) using R (v.3.6.3) [55] with the bartMachine package [53].
For model specification, we entered the natural-log-transformed age-adjusted incidence rates as the response (dependent) variable and all socioeconomic and built environment variables as candidate explanatory (independent) variables. Explanatory variable inclusion was determined through iterative cross-validations, in which each successive permutation of a BART model was assessed according to its error variance and RMSE to derive the model with the highest prediction performance. Overfitting is penalised in the BART model via its prior on the error variance, which limits the weight given to trees with small \({\sigma ^2}\) values [53].
Variable importance plots were generated from the BART model, which displays a quantitative metric of a variable's relative influence on model predictions, compared to all other variables [53]. We also generated Partial Dependence Plots (PDPs), which are graphical outputs that illustrate the marginal effect of each independent variable on the response variable [56,57,58]. A PDP only displays the marginal effect of each independent variable in relation to the influence of all other independent variables, and should be interpreted as exploratory [53].
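A condensed sketch of this workflow with the bartMachine package is shown below; the simulated data and predictor names are placeholders for the study's candidate variables.

```r
# Sketch of the BART workflow with bartMachine. Data and predictor
# names are hypothetical placeholders.
library(bartMachine)
set_bart_machine_num_cores(2)

set.seed(1)
covars <- data.frame(matrix(rnorm(401 * 5), ncol = 5))
names(covars) <- c("commuters", "unemployment", "transit_density",
                   "retail_density", "median_age")
log_adj_rate <- 0.5 * covars$commuters - 0.3 * covars$unemployment + rnorm(401)

bm <- bartMachine(X = covars, y = log_adj_rate, num_trees = 50)

investigate_var_importance(bm)  # variable inclusion proportions
pd_plot(bm, j = "commuters")    # partial dependence for one predictor
k_fold_cv(covars, log_adj_rate, k_folds = 10)  # out-of-sample performance
```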
To assess how well the final COVID-19 BART model generalises to an independent data set, out-of-sample cross-validation was conducted using the 31 variables retained in the final model. The original data were randomly split into training (n = 301) and testing (n = 100) subsets, and a new BART model with the 31 variables was computed on the training subset, which was then used to predict the out-of-sample values of the testing subset. The actual and predicted values were compared with a linear regression analysis, and the resulting RMSE and \({R^2}\) were calculated. This step was iterated 40 times, and an average RMSE was computed across all 40 runs to internally validate our predictions [56, 59].
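A hedged sketch of this validation loop (`X31` being the 31 retained variables; this is an illustration, not the authors' exact code, which is linked in the data availability section):

```r
# Sketch: 40 random 301/100 splits, refit, predict on held-out counties, average RMSE.
rmses <- replicate(40, {
  test  <- sample(nrow(X31), 100)
  bm_tr <- bartMachine(X31[-test, ], y[-test], verbose = FALSE)
  pred  <- predict(bm_tr, X31[test, ])
  sqrt(mean((y[test] - pred)^2))
})
mean(rmses)   # average out-of-sample RMSE across the 40 runs
```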
Because the model covariates express nonlinear relationships, Generalized Additive Models (GAMs) provide a useful semiparametric technique for modelling nonlinear associations [60]. GAMs operate as an extension of GLMs but allow for the inclusion of smoothing terms, as expressed by the following general form [61]:
$$\begin{aligned} \quad g (\mu _{i}) = A_{i}\gamma + \Sigma _{j}f_{j}(x_{ji}), y_{i}\sim EF(\mu _{i},\phi ) \end{aligned}$$
where \(A_{i}\) is the \({i^{th}}\) row of the parametric model matrix of the model with parameters \(\gamma \), and the smooth terms \(\Sigma _{j}f_{j}(x_{ji})\) constitute the nonparametric part of the model. The response variable \(y_{i}\) with the expected value \(\mu _{i}\) follows a distribution from the exponential family, for which a link function \(g (\mu _{i})\) can be specified [61]. The GAM predictor variables were the top ten variables determined from the BART model's variable importance plots, and the natural log-transformed age-adjusted incidence rate was selected as the response variable. Since the transformed incidence rates are approximately normally distributed, a Gaussian model with an identity link function was used. The applied GAM equation can be written as:
$$\begin{aligned} \log (AdjRate)_{i} = \beta _{0} + \Sigma _{j}\beta _{j}x_{ji} + \Sigma _{j}f_{j}(x_{ji}) + f(x_{1i},x_{2i}) \end{aligned}$$
where \(\log (AdjRate)\) is the expected value of the natural log-transformed age-adjusted incidence rate, and the intercept is given by \(\beta _{0}\). \(\Sigma _{j}\beta _{j}x_{ji}\) accounts for the parametric model part to assess linear effects. For the nonlinear predictors \(\Sigma _{j}f_{j}(x_{ji})\), thin plate splines were used as basis functions. For the county centroid coordinates, a bivariate, isotropic smoothing term \(f(x_{1i},x_{2i})\) was used, containing latitude and longitude as variables \(x_{1}\) and \(x_{2}\) respectively. A second GAM model was fitted without the latitude and longitude variables to reduce the concurvity amongst the socioeconomic and built environment variables.
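A minimal sketch of this specification in R's mgcv package (variable names hypothetical; the thin plate basis `bs = "tp"` is mgcv's default, and the bivariate isotropic smooth is expressed as `s(lon, lat)`):

```r
# Sketch: final-form GAM with thin plate splines and a bivariate spatial smooth.
library(mgcv)

g1 <- gam(log_rate ~ voter_participation +     # parametric (linear) term
            s(church_density, bs = "tp") +     # univariate thin plate smooths
            s(pop_potential, bs = "tp") +
            s(lon, lat),                       # bivariate isotropic smooth of centroids
          family = gaussian(link = "identity"), data = counties)

summary(g1)
concurvity(g1)    # high concurvity here motivates a second model without s(lon, lat)
```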
There are 401 counties in Germany; as shown in Fig. 3, these vary in size, such that the counties in Southern Germany are generally smaller with higher population densities. Natural log-transformed age-adjusted incidence rates are shown, indicating spatial variation between the northeast and south-southwest of the study area.
Natural log-transformed age-adjusted incidence rates of COVID-19 as of April 1st
Spatial trend and LISA
Trend analysis and LISA of age-adjusted COVID-19 incidence rates: map (a) displays the LISA results, indicating significant spatial clustering in the study area. High-High (HH) indicates clusters of high rates, and Low-Low (LL) indicates clusters of low rates. High-Low (HL) values represent individual counties with a high rate that are surrounded by counties with low rates, and Low-High (LH) is the inverse. The scatter plots with Pearson's correlation coefficients indicate an association with (b) latitude, but not (c) longitude
The results of the trend analysis (Fig. 4b, c) indicate no apparent correlation between longitude and incidence rates, as can also be observed in the map (Fig. 4a). However, latitude does exhibit a weak-to-moderate correlation (R = −0.46) with incidence rates (shown as vertical extrusions on the map), such that rates are higher in the south. The LISA results (choropleth map in Fig. 4a) indicate a large cluster of high rates in the south, whereas the northern and eastern regions exhibit a cluster of low rates. These constitute two major clusters with several outliers; for example, some counties (e.g., Erlangen-Höchstadt and Oberallgäu) are low-rate outliers. An east-west corridor with no significant spatial clustering is observed, dividing the north-eastern and southern clusters.
These trend analysis and LISA results indicate the presence of two distinct spatial patterns within Germany, enabling the classification of all federal states into two regions for the subsequent analysis: High-Rate Regions (HRR, referring to the southern cluster) and Low-Rate Regions (LRR, referring to the northern cluster). These regions are separated by a thick black line in Fig. 3.
Regional comparison
The North/LRR accounts for 48.5% (173,287 \(\hbox {km}^2\)) of the total land area and 35.6% of the population, and the South/HRR for 51.5% (183,887 \(\hbox {km}^2\)) of the total land area and 64.4% of the total population of Germany.
The adjusted incidence rates exhibit two distinct distributions when regionally classified into LRR and HRR (Fig. 5), confirming that the two regions follow distinct patterns. For ease of interpretation, further examination of the two regions is described using untransformed, age-adjusted values (Table 1).
Histogram of the high rates region and low rates region subsets of COVID-19 incidence rates
The south-western region has a greater representation of higher incidence rates, with \(\overline{\mathrm{X}}\) = 98.96 cases per 100,000, \(\sigma \) = 70.73, and minimum and maximum incidence rates of 20.60 and 673.93. The northern region has a smaller proportion of high-rate counties, with \(\overline{\mathrm{X}}\) = 41.92 and \(\sigma \) = 25.95, and county-level rates ranging from 5.76 to 139.10. The LRR maximum of 139.10 (Mühldorf a. Inn) was lower than the rates of 42 counties in the HRR, where the maximum was 673.93 (Tirschenreuth).
Table 1 Descriptive statistics for untransformed age-adjusted incidence rates per 100,000 for Germany and for the low rate and high rate subregions, and the differences between subregions
BART results and validation
The initial BART model included 366 independent variables (longitude, latitude, federal state (Bundesland) and NUTS2 region, and all socioeconomic and built environment variables). The response variable was the age-adjusted incidence rate per 100,000 residents.
Table 2 BART model summary statistics with internal validation
Two BART models (Table 2) were produced to predict COVID-19 incidence rates. The preliminary model (366 variables) produced a root mean square error (RMSE) of 0.23 on the natural-log scale of age-adjusted incidence rates per 100,000 (whose values range from approximately 2 to 6) and a pseudo \(R^2\) of 0.886. This cross-validated model accounts for 88.6% of the variability in incidence rates, indicating a robust prediction.
To decide on the subset of variables contributing the largest proportion of model influence, the variable selection function in the bartMachine package was implemented [53]. Of the 366 variables, this variable reduction method removed all but 31, which were deemed the most important to the model's predictions. This saw a reduction in pseudo \(R^2\) from 0.886 to 0.734, equating to a 15-percentage-point reduction in explained variability. The RMSE correspondingly increased to 0.36, indicating that the final model predicted age-adjusted incidence rates of COVID-19 for German counties with an accuracy of approximately +/− 1.43 cases per 100,000. The residuals of both models were found to be normally distributed and exhibited no geographical clustering. The out-of-sample validation was repeated 40 times, and the resulting \(R^2\) was 0.57 with an RMSE of 0.46, equating to a mean error of 1.58 cases per 100,000.
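These per-100,000 figures appear to be back-transformations of the log-scale RMSE values by exponentiation; strictly speaking, the exponentiated RMSE is a multiplicative error factor on the rate rather than an additive case count:

$$\begin{aligned} e^{0.36} \approx 1.43, \qquad e^{0.46} \approx 1.58 \end{aligned}$$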
The density of Christian churches contributed the greatest number of tree splits in the final BART model. Latitude and longitude ranked second and third respectively, indicating the importance of spatiality in predicting incidence rates, as also observed in the trend analysis and LISA results. This spatial pattern is based on the x and y coordinates of the county centroids, which the BART model used to split decision trees for rate prediction. Socioeconomic variables thereafter account for a considerable proportion of the variability in rates, the strongest of which was the voter participation rate. The remaining socioeconomic and built environment variables are described in rank order in Additional file 1 (appendix).
Partial dependence
Partial Dependence Plots (PDP) of the 10 most prevalent variables in the final Bayesian Additive Regression Tree (BART) model. Histograms are shown for the entire country (green), for only the low rates region (LRR, teal), and for only the high rates region (HRR, purple). The PDPs indicate marginal changes in the predicted (log-transformed, age-adjusted) incidence rate per 100,000 residents (upper y-axis) for different values of each independent variable (x-axis)
The ten most important variables from the BART model were selected for further description. All variables and their summary statistics are listed in Additional file 1 (appendix). The partial dependence plots and region-specific histograms are shown in Fig. 6. We observed that an increase in latitude (Fig. 6a) is associated with a strong marginal decrease in COVID-19 incidence rate, indicating that the model is accounting for the spatial pattern observed in the trend analysis. The partial dependence for longitude (Fig. 6b) indicated that more easterly longitudes are associated with higher incidence rates. This trend is nonlinear, appearing roughly quadratic. High rates along the Austrian border appear to account for this partial dependence.
The LRR was observed to feature lower densities of Christian churches than the HRR (Fig. 6c), and a higher density is associated with an increase in COVID-19 incidence rates. The voter participation rate (2017 national election) features minor differences between the two subregions (Fig. 6d), and the PDP indicates a positive relation between voter participation and incidence rates, with a gradient increase between the 20th and 40th percentiles (73.5% and 74.3% participation). The histograms of the proportion of foreign guest overnight stays relative to the total number of overnight stays (Fig. 6e) show slight differences between the two subregions, accompanied by a positive association in the corresponding PDP. Conversely, there appear to be no significant differences in the distributions, nor any significant observable partial dependence, for long-distance train stations (Fig. 6f).
The regional population potential (Fig. 6h) measures the likelihood of direct interactions occurring between inhabitants [38]. The PDP indicates small marginal changes in incidence rates for low values of regional population potential, which can be interpreted as evidence that in counties with a lower probability of human interaction, there is a lower probability of viral contagion. The greatest increase in partial dependence is observed between the 20th and 80th percentiles of regional population potential index scores (14,016 to 47,067), indicating a strong non-linear effect of this variable on incidence rates. Both the long-term unemployment rate and the unemployment rate for ages 15 to 30 exhibit differences between the study subregions, and both indicate minor partial dependence, such that higher unemployment rates correspond with lower observed COVID-19 incidence rates.
GAM results and validation
Initially, two base models were fitted: one with the ten variables that attained the highest variable importance in the BART model, and one with eight variables, excluding longitude and latitude. In both models the residuals showed no association with the response variable. The model including latitude and longitude showed high concurvity values and suffered from lower significance for the non-spatial variables (except church density). Further modelling was therefore conducted on the eight non-spatial variables, and the final GAM was chosen by selecting the model with the lowest RMSE (as validated by a 1000-fold cross-validation) and the lowest AIC score. Among the final candidates, the non-spatial base model and the model including the employment rate of persons ages 15 to 30 and the unemployment rate under 25 as single terms displayed the lowest AIC scores and the lowest RMSE (0.485), with an \(R^2\) of 0.557; the minimum varied slightly between the two test runs. Because the latter model reduced concurvity and model complexity while performing equally well across all criteria examined here, it was chosen as the final model.
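A sketch of this selection step (reusing `g1` from the earlier GAM block; the hold-out loop below is a simplified stand-in for the authors' 1000-fold cross-validation):

```r
g2 <- update(g1, . ~ . - s(lon, lat))      # non-spatial base model
AIC(g1, g2)                                # compare candidates by AIC

cv_rmse <- function(model, data, reps = 100, n_test = 40) {
  mean(replicate(reps, {
    test <- sample(nrow(data), n_test)
    fit  <- update(model, data = data[-test, ])     # refit on the training split
    sqrt(mean((data$log_rate[test] - predict(fit, data[test, ]))^2))
  }))
}
cv_rmse(g2, counties)                      # compare candidates by held-out RMSE
```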
Intra-population contagion
The level of response to COVID-19 has been adapted to the evolving outbreak with increasing severity, with several initial steps taken in May 2020 to reduce restrictions [62]. Local measures have included encouraging and/or mandating a minimum interpersonal distance of 1.5 m [11]; closing schools, colleges, universities, community centres, and daycare centres; and a widespread implementation of "work-from-home" arrangements. These policies have almost certainly reduced the potential spread of SARS-CoV-2 in the study area, although this study focuses on a snapshot of data from 1 April 2020. The results presented herein may be valuable not only for improving our current understanding of transmission dynamics and population vulnerability, but also for informing outbreak control measures and targeting high-risk areas.
The transmission of COVID-19 is facilitated through interactions occurring at multiple scales, which interact with vertical and lateral transmission. Each scale of interaction is defined by its own set of distinct spatial patterns [63]. By examining the assemblage of each pattern, researchers can eventually define the structure of an otherwise prohibitively complex process [64, 65]. In this case, the underlying process of interest is the vertical transmission of intra-population contagion in Germany at population and sub-population scales. The population's members in this instance can be defined by social, cultural, economic, and spatial factors [66, 67], as expressed by the county units.
The LRR and HRR groups defined in this study exhibited very distinct and contrasting characteristics that were observed to influence the higher observed rates in the South-West and the lower rates in the North-East. This regional distinction and the variable selection generated using BART enabled us to achieve high model accuracy and define a spatial pattern related to intra-population contagion as expressed by the sub-population observations.
The most important variables identified through our methodology merit further discussion. Higher densities of churches were observed in the HRR, and church density was identified as the most important environmental variable for predicting COVID-19 incidence rates. However, this does not necessarily indicate that the churches themselves are the loci of transmission. Rather, we suggest that this feature of the built environment indicates locales with higher walkability, where more interpersonal interactions may take place, for example, due to higher social connectivity and community engagement, particularly among senior and elderly populations, who comprise the majority of Christian church attendees in our study area and are more likely to be diagnosed with COVID-19.
Similarly, features of transportation networks such as long-distance train stations may serve predominantly as an indicator of a community's connectedness (inter-population invasion), as well as serving as nodes where high densities of travelling persons increase the probability of intra-population contagion [21].
SES and built environment
The transmission of COVID-19 can occur through both direct and indirect interpersonal contact [68, 69]. The frequency and proximity of interactions between individuals are therefore a primary determinant of infection risk. The nature and configuration of the social and built environments are thus likely to be significant covariates of infection risk and, consequently, of the resulting geographical distribution of incidence.
A key driving assumption in this study is that higher built environment densities correspond with increased direct and indirect contact between persons, and decreased proximities [70]. However, our analysis revealed only one built environment variable that contributed a heuristically significant proportion of the variability explained in our models: the density of Christian churches. It is therefore crucial to underscore the generalised nature of how built environments are assessed in contemporary methodologies; specifically, individual features do not necessarily constitute precise loci of transmission, but rather may serve as proxies for understanding the configuration of the built environment and difficult-to-measure characteristics of local populations (e.g., community connectivity amongst elderly populations).
Similarly, the socioeconomic variables highlighted in section 3.4 and listed in Additional file 1 (appendix) may serve to characterise local inter- and intra-connectedness, in addition to describing measurable characteristics of a population (e.g., age distributions). In counties where the incidence rate of COVID-19 is high, we postulate that the variables proxying social interactions also exhibit high values, because such interactions increase the potential of spreading the virus through local instances of viral transmission.
Interestingly, three variables related to labour market structure emerged as highly important in predicting COVID-19 incidence rates: unemployment rate, unemployment rate among persons ages 25 or younger, and the employment rate of persons ages 15 to 30. The spatial distributions of these variables also reflect the geographical distribution of labour market participation across Germany, and our models and the resulting partial dependence plots indicate a negative correlation between employment rates and COVID-19 incidence rates. This may be explained by the mechanics of social exclusion and stratification, such that employed persons are more likely to have a more differentiated social network than unemployed persons [71]. However, social exclusion and relative isolation caused by unemployment may lead to a more closely knit socio-spatial milieu [72,73,74]. Accordingly, we would expect that higher employment rates and lower unemployment rates are both correlated with a higher number of social interactions and reduced interpersonal proximities, consequently amplifying the potential spread of SARS-CoV-2. Very recent research is poised to illuminate how and when the actors engaged in social service work are addressing changes in the social settings and consequent vulnerability experienced by socially excluded members of society [75, 76].
Spatial interconnection is represented in our final model primarily by access to long-distance train stations, the proportion of foreign guests, and the regional population potential variables. We therefore hypothesised that the socio-spatial variables would be important in the resulting BART models. The partial dependence plots for these variables also correspond to our heuristic expectations, for example, that voter participation and access to intercity train stations would exhibit positive partial dependence. However, these variables did not exhibit differences in their distributions between the two study regions, except for the proportion of foreign guests, which provides weak correspondence to a differentiation between the regions.
The concept of parsimony is central to new modelling studies, particularly within an exploratory framework [77, 78]. However, when examining large, multidimensional datasets in an exploratory fashion, more complex methods are necessary in order to detect potential patterns and associations [79]. The observations made through simpler, often parametric models are critical for interpreting and contextualising results from modern exploratory data-mining models, which are often obscured behind the black box of machine learning [32, 34]. In this context, the robustness thesis can be considered a companion to parsimony, in that it asserts that a method is robust if observations made with a simpler model are also present in a different or more complex model [32].
This study demonstrates a novel methodology for systematically exploring geospatial patterns of EIDs while building the ideas of the robustness thesis into our methodology. Early exploratory analysis (as seen with the trend analysis) enabled us to gain confidence in the subsequent, more complex model's explanation of the spatial pattern [32]. These early exploratory tests can also be used to validate assumptions about the spatial nature of a dataset while providing a method for separately validating trends observed in machine learning results. Latitude and longitude represent simple spatial variables that can help define global functions of an observed spatial pattern of an epidemic, and enable researchers to parameterise models accordingly. For the purposes of this study, we assumed there were no causal effects associated with the X and Y variables; instead, these variables were used to validate assumptions arising from our trend model. This approach emphasises the necessity of critically interrogating data and methods in order to be confident in model outputs. As we try to ensure that our data heuristically correspond with the process or target under examination [27, 34, 80], we provide space for hypotheses to be generated that question these intricate data and process relations.
The BART modelling demonstrated that although many variables can be used as inputs, the majority of explained variability is largely determined by a subset of all variables, after which only a marginal decrease in accuracy is observed [58]. The variable set decreased from 366 in the preliminary model to only 31 in the final model, and the \(R^2\) exhibited a proportionally small decrease from 0.89 to 0.73, with very minimal differences in variable importance among the top 10 variables shown herein. Because the removal of 335 variables contributed only a 0.15 reduction in \(R^2\), a more thorough investigation can be conducted on the remaining 31 variables. The cross-validation results indicate that even when we further subsetted the data (n = 301 and n = 100), the resulting \(R^2\) values remained relatively high (\(R^2\) = 0.57), with an RMSE of 1.58 cases per 100,000.
The inclusion of the GAM models allowed for a comparison of the efficiency and accuracy of the BART model. As an exploratory tool, GAM was overburdened by the complexity of the data and the number of variables: the BART analysis was required to pre-select variables for modelling, and the GAM would often exhibit excessive concurvity when purely spatial variables such as latitude and longitude were included. Nevertheless, the inclusion of latitude and longitude was a key means of expressing the patterns described by the trend analysis. Once exploration is conducted, we suggest that future studies use GAM modelling to further examine the associations surfaced by the BART modelling.
Another important feature of this study is the use of partial dependence plots to assess marginal effects on the response variable for different values of an independent variable. For example, a visual examination of the PDPs uncovered patterns that were not evident from the maps and trend analysis. The use of PDPs for spatial-epidemiological analysis is therefore recommended as a means of adding a layer of interpretability to machine learning models.
Study limitations
The modelling approaches selected for this study feature several key limitations that may have impacted our results. These limitations are explored in more detail in the methodology papers referenced herein, but several merit mention.
The use of administrative boundaries still requires that our results be considered in light of limitations such as the Modifiable Areal Unit Problem (MAUP) [18]. For example, it is unclear whether there are significant differences in COVID-19 rates and population characteristics between high-incidence counties on either side of Germany's borders with France, Switzerland, and Austria. The next phase of this project intends to expand this methodology to include cross-border effects, using NUTS3 data from multiple countries in continental Europe. In addition, this study is unable to determine whether each new COVID-19 case was locally or internationally acquired. We have discussed variables that can serve as indicators of global (proportion of foreign guest stays) or local (unemployment rates) transmission; however, the origin of individual cases remains unknown.
A significant challenge in the modelling of many EIDs is that the true population incidence and prevalence are unknown, largely due to asymptomatic individuals, different testing rates and protocols, misdiagnosis, and differences in reporting protocols. This limitation may provide additional challenges when seeking to conduct analyses that include multiple countries, and must be taken into consideration during comparative or multi-site studies.
Although BART provides a useful non-parametric means of exploring potential associations in large, multidimensional datasets, its use of Markov chain Monte Carlo to sample the posterior distributions of all parameters and hyperparameters requires a strong penalty against overfitting; it is unclear whether the built-in prior on the error variance \({\sigma ^2}\) is sufficient. This study used an internal cross-validation approach to account for overfitting; however, an independent validation dataset could be used in future studies to assess these effects. Additionally, because the Metropolis-Hastings algorithm uses a random seed, some variation across model repetitions is observed, and exact replication of results requires additional parameterisation. In order to address this limitation, we provide pre-set seeds in our code, linked in this article. The use of regression trees with many nodes also increases the probability of spurious splits occurring, although BART has the advantage of summing over multiple iterations to reduce these effects. However, these instabilities require that BART be used as an exploratory tool, and not in a confirmatory manner. For this reason, the use of GAMs or other robust regression techniques is vital for assessing and confirming BART results.
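As a sketch of the replication fix described here: bartMachine exposes a seed argument, and per the package documentation, seed-based reproducibility additionally requires running on a single core (`X31` and `y` as in the earlier sketches).

```r
set_bart_machine_num_cores(1)                 # seed-based replication requires one core
bm_fixed <- bartMachine(X31, y, seed = 1234)  # pins the Metropolis-Hastings sampler
```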
Although exploratory results determined that no other patterns existed on other administrative scales (NUTS1 and NUTS2), this study focussed primarily on the NUTS3 (county) level of geography, limiting our model interpretations and ability to generalise from the data. Additionally, it has been shown that the spatial scale of data analysed dictates the spatial granularity of a study, which could in turn limit the ability to identify the correct scale for the process under investigation [18].
This study provides a first step towards understanding the spatial, socioeconomic, and built-environment structure of COVID-19 incidence across Germany. Through the BART modelling and variable importance measures, ten variables were identified as most important for explaining variance in incidence rates: church density, latitude, longitude, voter participation, foreign guests, accessibility by intercity rail, employment rate for ages 15–30, population potential, long-term unemployment rate, and unemployment under age 25. When split spatially into northeastern (LRR) and southwestern (HRR) regions, clear trends and patterns emerged that assisted with interpreting the most important independent variables and their respective influence on the prediction of COVID-19 incidence rates.
Additionally, this study provides an example of the utility of partial dependence plots for gaining more detailed insights from machine learning models. Especially when combined with other spatial tools, integrating these approaches holds strong potential for elucidating a more complete explanation of epidemiological patterns with greater precision and accuracy. However, a broader movement is required to establish process-based methods for disease and pandemic mapping [27] in order to ultimately improve outbreak prevention and control measures.
We encourage future machine learning studies to follow a similar level of data exploration as shown herein. This procedure facilitated a better understanding of how the produced model interpreted the input data by enabling the observation of spatial patterns in three increasingly complex representations (trend analysis to LISA to BART). This satisfied assumptions defined by the robustness thesis [32], while the splitting of the study area into geospatially relevant regions allowed for increased interpretability of machine learning model results and the partial dependence plots.
INKAR data are available at https://www.inkar.de (gathered and aggregated from the BBSR and the ongoing spatial monitoring of the federal German institutions, https://www.bbsr.bund.de). Built environment data are available at https://download.geofabrik.de/. COVID-19 case data are published daily by the Robert-Koch-Institut and are available at https://npgeo-corona-npgeo-de.hub.arcgis.com/. R code is available at https://github.com/CHEST-Lab/BART_Covid-19.
World Health Organization: Novel Coronavirus (2019-nCoV) situation reports (06.04.2020). https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports Accessed 6 Apr 2020
Dong E, Du H, Gardner L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect Dis. 2020;. https://doi.org/10.1016/S1473-3099(20)30120-1.
Tagesschau.de: Erster Coronavirus-Fall in Deutschland bestätigt 2020. https://www.tagesschau.de/inland/coronavirus-deutschland-erster-fall-101.html Accessed 6 May 2020
Robert Koch Institute: Coronavirus SARS-CoV-2 - COVID-19: Fallzahlen in Deutschland und weltweit (06.04.2020). https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Fallzahlen.html Accessed 6 Apr 2020
Bayerische Staatskanzlei: BayMBl. 2020 Nr. 152 - Verkündungsplattform Bayern (20.03.2020). https://www.verkuendung-bayern.de/baymbl/2020-152/ Accessed 6 May 2020
Zou L, Ruan F, Huang M, Liang L, Huang H, Hong Z, Yu J, Kang M, Song Y, Xia J, Guo Q, Song T, He J, Yen H-L, Peiris M, Wu J. SARS-CoV-2 viral load in upper respiratory specimens of infected patients. N Engl J Med. 2020;382(12):1177–9. https://doi.org/10.1056/NEJMc2001737.
Rothe C, Schunk M, Sothmann P, Bretzel G, Froeschl G, Wallrauch C, Zimmer T, Thiel V, Janke C, Guggemos W, Seilmaier M, Drosten C, Vollmar P, Zwirglmaier K, Zange S, Wölfel R, Hoelscher M. Transmission of 2019-nCoV infection from an asymptomatic contact in Germany. N Engl J Med. 2020;382(10):970–1. https://doi.org/10.1056/NEJMc2001468.
Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith HR, Azman AS, Reich NG, Lessler J. The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Ann Intern Med. 2020;. https://doi.org/10.7326/M20-0504.
Backer JA, Klinkenberg D, Wallinga J. Incubation period of 2019 novel coronavirus (2019-nCoV) infections among travellers from Wuhan, China, 20–28 January 2020. Euro Surveill. 2020;. https://doi.org/10.2807/1560-7917.ES.2020.25.5.2000062.
Linton NM, Kobayashi T, Yang Y, Hayashi K, Akhmetzhanov AR, Jung S-M, Yuan B, Kinoshita R, Nishiura H. Incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data. J Clin Med. 2020;9(2):538. https://doi.org/10.3390/jcm9020538.
Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y, Ren R, Leung KSM, Lau EHY, Wong JY, Xing X, Xiang N, Wu Y, Li C, Chen Q, Li D, Liu T, Zhao J, Liu M, Tu W, Chen C, Jin L, Yang R, Wang Q, Zhou S, Wang R, Liu H, Luo Y, Liu Y, Shao G, Li H, Tao Z, Yang Y, Deng Z, Liu B, Ma Z, Zhang Y, Shi G, Lam TTY, Wu JT, Gao GF, Cowling BJ, Yang B, Leung GM, Feng Z. Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N Engl J Med. 2020;382(13):1199–207. https://doi.org/10.1056/NEJMoa2001316.
Colizza V, Vespignani A. Epidemic modeling in metapopulation systems with heterogeneous coupling pattern: theory and simulations. J Theor Biol. 2008;251(3):450–67. https://doi.org/10.1016/j.jtbi.2007.11.028.
Hartfield M, Alizon S. Introducing the outbreak threshold in epidemiology. PLoS Pathogens. 2013;9(6):1003277. https://doi.org/10.1371/journal.ppat.1003277.
Hanski I, Gilpin ME. Metapopulation theory. In: Hanski I, Gilpin ME, editors. Metapopulation biology. San Diego: Academic Press; 2010. p. 63–7. https://doi.org/10.1016/B978-012323445-2/50006-7.
Li Q, Zhou L, Zhou M, Chen Z, Li F, Wu H, Xiang N, Chen E, Tang F, Wang D, Meng L, Hong Z, Tu W, Cao Y, Li L, Ding F, Liu B, Wang M, Xie R, Gao R, Li X, Bai T, Zou S, He J, Hu J, Xu Y, Chai C, Wang S, Gao Y, Jin L, Zhang Y, Luo H, Yu H, He J, Li Q, Wang X, Gao L, Pang X, Liu G, Yan Y, Yuan H, Shu Y, Yang W, Wang Y, Wu F, Uyeki TM, Feng Z. Epidemiology of human infections with avian influenza A(H7N9) virus in China. N Engl J Med. 2014;370(6):520–32. https://doi.org/10.1056/NEJMoa1304617.
Ajelli M, Gonçalves B, Balcan D, Colizza V, Hu H, Ramasco JJ, Merler S, Vespignani A. Comparing large-scale computational approaches to epidemic modeling: agent-based versus structured metapopulation models. BMC Infect Dis. 2010;10(1):190. https://doi.org/10.1186/1471-2334-10-190.
Elliott P, Wartenberg D. Spatial epidemiology: current approaches and future challenges. Environ Health Perspect. 2004;112(9):998–1006. https://doi.org/10.1289/ehp.6735.
Kirby RS, Delmelle E, Eberth JM. Advances in spatial epidemiology and geographic information systems. Ann Epidemiol. 2017;27(1):1–9. https://doi.org/10.1016/j.annepidem.2016.12.001.
Wang L, Zhang Y, Huang T, Li X. Estimating the value of containment strategies in delaying the arrival time of an influenza pandemic: a case study of travel restriction and patient isolation. Phys Rev. 2012;86(3 Pt 1):032901. https://doi.org/10.1103/PhysRevE.86.032901.
Colizza V, Barrat A, Barthélemy M, Vespignani A. The role of the airline transportation network in the prediction and predictability of global epidemics. Proc Natl Acad Sci USA. 2006;103(7):2015–20.
Preciado, V.M, Zargham M. Traffic optimization to control epidemic outbreaks in metapopulation models. In: 2013 IEEE Global Conference on Signal and Information Processing, pp. 847–850 2013. https://doi.org/10.1109/GlobalSIP.2013.6737024
Pini A, Stenbeck M, Galanis I, Kallberg H, Danis K, Tegnell A, Wallensten A. Socioeconomic disparities associated with 29 common infectious diseases in Sweden, 2005–14: an individually matched case-control study. Lancet Infect Dis. 2019;19(2):165–76. https://doi.org/10.1016/S1473-3099(18)30485-7.
Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Tomba GS, Wallinga J, Heijne J, Sadkowska-Todys M, Rosinska M, Edmunds WJ. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS medicine. 2008;5(3):381–91. https://doi.org/10.1371/journal.pmed.0050074.
Kraemer MUG, Hay SI, Pigott DM, Smith DL, Wint GRW, Golding N. Progress and challenges in infectious disease cartography. Trends Parasitol. 2016;32(1):19–29. https://doi.org/10.1016/j.pt.2015.09.006.
Kistemann T, Schweikart J, Butsch C. Medizinische Geographie; 2019.
Pinter-Wollman N, Jelić A, Wells NM. The impact of the built environment on health behaviours and disease transmission in social systems. Philosophical transactions of the Royal Society of London. Series B Biol Sci. 2018;. https://doi.org/10.1098/rstb.2017.0245.
Mclafferty S. Disease cluster detection methods: recent developments and public health implications. Annals of GIS. 2015;21(2):127–33. https://doi.org/10.1080/19475683.2015.1008572.
Glick B. The spatial autocorrelation of cancer mortality. Soc Sci Med Part D. 1979;13(2):123–30. https://doi.org/10.1016/0160-8002(79)90058-3.
Auchincloss AH, Gebreab SY, Mair C, Diez Roux AV. A review of spatial methods in epidemiology, 2000–2010. Annu Rev Public Health. 2012;33:107–22. https://doi.org/10.1146/annurev-publhealth-031811-124655.
Bhunia GS, Kesari S, Chatterjee N, Kumar V, Das P. Spatial and temporal variation and hotspot detection of kala-azar disease in Vaishali district (Bihar). India. BMC Infectious Diseases. 2013;13(1):64. https://doi.org/10.1186/1471-2334-13-64.
Cuadros DF, Branscum AJ, Miller FD, Abu-Raddad LJ. Spatial epidemiology of hepatitis C virus infection in Egypt: analyses and implications. Hepatology. 2014;60(4):1150–9. https://doi.org/10.1002/hep.27248.
Huppert A, Katriel G. Mathematical modelling and prediction in infectious disease epidemiology. Clin Microbio Infect. 2013;19(11):999–1005. https://doi.org/10.1111/1469-0691.12308.
North AR, Godfray HCJ. The dynamics of disease in a metapopulation: the role of dispersal range. J Theor Biol. 2017;418:57–65. https://doi.org/10.1016/j.jtbi.2017.01.037.
Wiens J, Shenoy ES. Machine learning for healthcare: on the verge of a major shift in healthcare epidemiology. Clin Infect Dis. 2018;66(1):149–53. https://doi.org/10.1093/cid/cix731.
Bellinger C, Mohomed Jabbar MS, Zaïane O, Osornio-Vargas A. A systematic review of data mining and machine learning for air pollution epidemiology. BMC Public Health. 2017;17(1):907. https://doi.org/10.1186/s12889-017-4914-3.
VoPham T, Hart JE, Laden F, Chiang Y-Y. Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology. Environ Health. 2018;17(1):40. https://doi.org/10.1186/s12940-018-0386-x.
Robert Koch Institute, ESRI: RKI Corona Landkreise (06.04.2020). https://npgeo-corona-npgeo-de.hub.arcgis.com/datasets/917fc37a709542548cc3be077a786c17_0?selectedAttribute=cases_per_population. Accessed 6 Apr 2020
Bundesinstitut für Bau-, Stadt- und Raumforschung: INKAR-Daten, erhoben aus der laufenden Raumbeobachtung, basierend auf dem Zensus 2011 BRD: verändert durch Martin Fuchs und Daniel Sonnenwald i.A.v. Dr. Blake Byron Walker, Bonn (2020). https://www.inkar.de/ Accessed 26 Mar 2020
OpenStreetMap [Databank]. 2020. http://www.openstreetmap.org.
Bundesamt für Kartographie und Geodäsie: Digitales Landschaftsmodell 1:250 000 (Ebenen): verändert durch Sebastian Brinkmann und Tim Große i.A.v. Dr. Blake Byron Walker, Frankfurt am Main (2018). https://gdz.bkg.bund.de/index.php/default/open-data/digitales-landschaftsmodell-1-250-000-ebenen-dlm250-ebenen.html. Accessed 26 Mar 2020
CHEST Lab GitHub Repository. https://github.com/CHEST-Lab/BART_Covid-19
Lawson, A., Ugarte, M.D., Haining, R.P., Banerjee, S. (eds.): Handbook of Spatial Epidemiology. Handbooks of modern statistical methods. CRC Press, Boca Raton and London and New York (2016). https://www.taylorfrancis.com/books/9781482253023
Baddeley A, Rubak E, Turner R. Spatial Point Patterns: Methodology and Applications with R. Boca Raton, London, New York: A Chapman & Hall book, CRC Press, Taylor & Francis; 2015.
Anselin L, Rey SJ. Perspectives on Spatial Data Analysis. Advances in Spatial Science, The Regional Science Series. Springer-Verlag Berlin Heidelberg, Berlin, Heidelberg 2010. https://doi.org/10.1007/978-3-642-01976-0. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10359692
Anselin L. Local indicators of spatial association-lisa. Geogr Anal. 1995;27(2):93–115. https://doi.org/10.1111/j.1538-4632.1995.tb00338.x.
Fu WJ, Jiang PK, Zhou GM, Zhao KL. Using Moran's I and GIS to study the spatial pattern of forest litter carbon density in a subtropical region of southeastern China. Biogeosciences. 2014;11(8):2401–9. https://doi.org/10.5194/bg-11-2401-2014.
ArcGIS [GIS software], Version 10.7.1. Redlands, CA: Environmental Systems Research Institute, Inc., 2019.
Che D, Decludt B, Campese C, Desenclos JC. Sporadic cases of community acquired legionnaires' disease: an ecological study to identify new sources of contamination. J Epidemiol Commun Health. 2003;57(6):466–9. https://doi.org/10.1136/jech.57.6.466.
Webster R, Oliver MA. Geostatistics for Environmental Scientists, 2nd ed. edn. Statistics in practice. Wiley, Chichester 2007. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10257638
O'Sullivan D, Unwin DJ. Geographic information analysis. 2nd ed. Hoboken: Wiley; 2010. https://doi.org/10.1002/9780470549094.
McElreath R. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Texts in statistical science. CRC Press, Boca Raton, FL 2015. http://proquest.tech.safaribooksonline.de/9781482253481
Best N, Richardson S, Thomson A. A comparison of Bayesian spatial models for disease mapping. Stat Methods Med Res. 2005;14(1):35–59. https://doi.org/10.1191/0962280205sm388oa.
Kapelner A, Bleich J. bartMachine: machine learning with Bayesian additive regression trees. J Stat Softw. 2016;70(4):1–40. https://doi.org/10.18637/jss.v070.i04.
Chipman HA, George EI, McCulloch RE. BART: Bayesian additive regression trees. Ann Appl Stat. 2010;4(1):266–98.
R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria 2020. R Foundation for Statistical Computing. https://www.R-project.org/
Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29(5):1189–232. https://doi.org/10.1214/aos/1013203451.
Hastie T, Tibshirani R, Friedman JH. The elements of statistical learning: data mining, inference, and prediction. 2nd ed. New York: Springer; 2009 Springer series in statistics.
Scarpone C, Schmidt MG, Bulmer CE, Knudby A. Semi-automated classification of exposed bedrock cover in British columbia's southern mountains using a random forest approach. Geomorphology. 2017;285:214–24. https://doi.org/10.1016/j.geomorph.2017.02.013.
Berrar D. Cross-validation. In: Ranganathan, S., Gribskov, M., Nakai, K., Schönbach, C. (eds.) Encyclopedia of Bioinformatics and Computational Biology, pp. 542–545. Academic Press, Oxford 2019. https://doi.org/10.1016/B978-0-12-809633-8.20349-X
Hunter PR, Colón-González FP, Brainard J, Majuru B, Pedrazzoli D, Abubakar I, Dinsa G, Suhrcke M, Stuckler D, Lim T-A, Semenza JC. Can economic indicators predict infectious disease spread? A cross-country panel analysis of 13 European countries. Scand J Public Health. 2020;48:351–61.
Wood SN. Generalized additive models: an introduction with R. 2nd ed. London, Boca Raton, New York: Chapman & Hall/CRC texts in statistical science. CRC Press/Taylor & Francis Group; 2017.
deutschland.de: Coronavirus Timeline Germany 2020. https://www.deutschland.de/de/die-timeline-coronavirus-germany-deutschland. Accessed 17 Apr 2020
Wiens JA. Spatial scaling in ecology. Funct Ecol. 1989;3(4):385. https://doi.org/10.2307/2389612.
Fortin MJ, Dale MRT. Spatial analysis: a guide for ecologists. 7th ed. Cambridge: Cambridge Univ. Press; 2009. https://doi.org/10.1017/CBO9780511542039.
Wheatley M, Johnson C. Factors limiting our understanding of ecological scale. Ecol Complex. 2009;6(2):150–9. https://doi.org/10.1016/j.ecocom.2008.10.011.
Hethcote HW, van Ark JW. Epidemiological models for heterogeneous populations: proportionate mixing, parameter estimation, and immunization programs. Math Biosci. 1987;84(1):85–118. https://doi.org/10.1016/0025-5564(87)90044-7.
Kuperman M, Abramson G. Small world effect in an epidemiological model. Physical Review Letters. 2001;86(13):2909–12. https://doi.org/10.1103/PhysRevLett.86.2909.
Van Doremalen N, Bushmaker T, Morris DH, Holbrook MG, Gamble A, Williamson BN, Tamin A, Harcourt JL, Thornburg NJ, Lloyd-Smith JO, de Wit E, Munster VJ. Aerosol and surface stability of SARS-CoV-2 as compared with SARS-CoV-1. N Engl J Med. 2020;. https://doi.org/10.1056/NEJMc2004973.
Kampf G, Todt D, Pfaender S, Steinmann E. Persistence of coronaviruses on inanimate surfaces and their inactivation with biocidal agents. J Hosp Infect. 2020;104(3):246–51. https://doi.org/10.1016/j.jhin.2020.01.022.
Frank LD, Engelke PO. The built environment and human activity patterns: exploring the impacts of urban form on public health. J Plan Lit. 2001;16(2):202–18. https://doi.org/10.1177/08854120122093339.
Puhr K. (ed.): Inklusion und Exklusion Im Kontext Prekärer Ausbildungs- und Arbeitsmarktchancen: Biografische Portraits, 1. aufl. edn. VS Verlag für Sozialwissenschaften / GWV Fachverlage GmbH Wiesbaden, Wiesbaden 2009. https://doi.org/10.1007/978-3-531-91824-2
Jugendarbeitslosigkeit und soziale Ausgrenzung: Ergebnisse einer qualitativen Analyse in Ost- und Westdeutschland. In: Zempel, J., Bacher, J., Moser, K. (eds.) Erwerbslosigkeit. Psychologie sozialer Ungleichheit, pp. 133–148. VS Verlag für Sozialwissenschaften, Wiesbaden and s.l. 2001. https://doi.org/10.1007/978-3-663-09986-4_7
Thomas Kieselbach, G.B.: Arbeitslosigkeit als Risiko sozialer Ausgrenzung bei Jugendlichen in Europa | APuZ. Bundeszentrale für politische Bildung (6.5.2003). Accessed 15 Apr 2020
Steuerwald C. (ed.): Die Sozialstruktur Deutschlands Im Internationalen Vergleich. Springer Fachmedien Wiesbaden, Wiesbaden 2016. https://doi.org/10.1007/978-3-531-94101-1
Schmitt C. COVID-19. Sozial Extra. 2020;. https://doi.org/10.1007/s12054-020-00284-5.
He J, He L, Zhou W, Nie X, He M. Discrimination and social exclusion in the outbreak of covid-19. Int J Environ Res Public Health. 2020;. https://doi.org/10.3390/ijerph17082933.
Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: Parsimonious parametric modelling. J Royal Stat Soc. 1994;43:429–67.
Royston P. A strategy for modelling the effect of a continuous covariate in medicine and epidemiology. Stat Med. 2000;19:1831–47.
Kreatsoulas C, Subramanian SV. Machine learning in social epidemiology: learning from experience. SSM Popul Health J. 2018;4:347–9.
Scarpone C, Schmidt MG, Bulmer CE, Knudby A. Modelling soil thickness in the critical zone for Southern British Columbia. Geoderma. 2016;282:59–69. https://doi.org/10.1016/j.geoderma.2016.07.012.
The authors wish to thank the Robert Koch Institute, the Bundesinstitut für Bau-, Stadt- und Raumforschung, the R Foundation and the volunteers who spend their free time building and maintaining R and OpenStreetMap. Above all, the authors wish to publicly thank all medical and public health personnel worldwide for their immense efforts to provide health care and infection control during the SARS-CoV-2 outbreak.
No specific funding was provided for this study. BBW is supported by the German Ministry for Education and Research. Open access funding provided by Projekt DEAL.
Urban Forest Research and Ecological Disturbance (UFRED) Lab: Department of Geography, Ryerson University, 350 Victoria Street, Toronto, M5B 2K3, Canada
Christopher Scarpone
Community Health Environments and Social Terrains (CHEST) Lab, Institut für Geographie, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91052, Erlangen, Germany
Sebastian T. Brinkmann, Tim Große, Daniel Sonnenwald, Martin Fuchs & Blake Byron Walker
Sebastian T. Brinkmann
Tim Große
Daniel Sonnenwald
Blake Byron Walker
Project conceptualisation (BBW,CS); literature review and monitoring (DS,MF); data acquisition/preprocessing (DS,SB,MF,TG); data modelling (BBW,CS,SB,TG); interpretation of results and manuscript preparation (CS,BBW,SB,DS,MF,TG). All authors read and approved the final manuscript.
Correspondence to Blake Byron Walker.
Ethical approval and consent to participate
This study is categorised as low-risk, as only aggregated, publicly-available incidence data were analysed.
Not required.
Scarpone, C., Brinkmann, S.T., Große, T. et al. A multimethod approach for county-scale geospatial analysis of emerging infectious diseases: a cross-sectional case study of COVID-19 incidence in Germany. Int J Health Geogr 19, 32 (2020). https://doi.org/10.1186/s12942-020-00225-1
Received: 20 May 2020
Accepted: 05 August 2020
What is the complexity of (possibly succinct) Nurikabe?
Nurikabe is a constraint-based grid-filling puzzle, loosely similar to Minesweeper/Nonograms; numbers are placed on a grid which is to be filled with on/off values for each cell, with each number indicating a region of connected 'on' cells of that size, and some minor constraints on the region of 'off' cells (it must be connected and can't contain any contiguous 2x2 regions). The Wikipedia page has more explicit rules and sample puzzles.
Generically, puzzles of this sort tend to be NP-complete, and Nurikabe is no exception; they fall into NP because the solution itself serves as a (polynomially-verifiable) witness to the problem. But unlike most similar puzzles, Nurikabe instances might be succinct: Sudoku on an $n\times n$ grid requires $\Theta(n)$ givens to be solvable (if less than $n-1$ givens are offered, then there's no way of distinguishing between the missing symbols), Nonograms obviously require at least one given for each row or column, and Minesweeper must have givens on at least $1\over16$ of the cells or there will be cells not next to a given (and whose status therefore can't be determined). But while the givens of a Nurikabe puzzle have to sum to $\Theta(n^2)$, it's possible to have $\mathrm{O}(1)$ givens each of that size, so that $\Theta(\log(n))$ bits might be enough to specify a Nurikabe puzzle of size $n$ - or inverting, $k$ bits may be enough to specify a Nurikabe instance of size exponential in $k$, meaning that the only guarantee is that the problem lies in NEXP.
Unfortunately, the proofs of Nurikabe's hardness I've found all use constructions with $\Theta(n^2)$ givens of constant size, so their instances are polynomial in the grid size rather than logarithmic, and I can't rule out that all solvable 'succinct' Nurikabe puzzles have additional structure such that solutions can be described and verified just as succinctly; for instance, the one example I know of a puzzle with 2 givens of size $\Theta(n^2)$ leads to regions of both on and off cells that are each the union of $\mathrm{O}(1)$ rectangles, and so have a succinct description of their own. Does anyone know of additional research that's been done into this puzzle beyond the basic NP-completeness result, and in particular any further complexity results for the possibly-succinct cases?
(note: this was originally asked over at math.SE, but there haven't been any answers there yet and this seems appropriately research-level for this site)
cc.complexity-theory np-hardness puzzles
Steven Stadnicki
$\begingroup$ Stadnicki: perhaps you could clarify your question in light of the answer below, or otherwise accept the answer? (Also: thanks for posting this, thinking about the question helped me to understand my unease about decision problems based on puzzles.) $\endgroup$
– András Salamon
You seem to be really asking: is Nurikabe in NP?
Nurikabe is NP-hard, since one can build polynomial-size gadgets that can be used to reduce an NP-complete problem to a Nurikabe decision problem. This is what Holzer, Klein, and Kutrib do, and also McPhail and Fix in their poster (both referenced from the Wikipedia article).
Both groups of authors assume that the problem is trivially in NP, and wave away the question of membership. Your unease about succinct instances seems spot on -- I do not believe the problem is in NP. Consider the following way to formalise the decision problem:
BINARY NURIKABE
Input: integers m and n in binary, representing a Nurikabe board, and a list of triples, each indicating a position on the board and a positive integer written in that position.
Question: can the remaining board positions be coloured with two colours, respecting the Nurikabe constraints?
If $m$ and $n$ are instead specified in unary, then the decision problem of determining if there exists some (not necessarily unique) solution to a given Nurikabe instance is in NP, as a solution can be specified in at most $mn$ bits which is then polynomial in the input size.
In contrast, with a binary encoding, there is a problem (as you point out): if one has just one large constraint (say, the number $(m-2)(n-2)$ placed in the middle of the $m \times n$ board), then a solution will require $mn-1$ bits to represent, which is exponential in the size of the input $\Theta(\log\, m + \log\, n)$. This means that the problem is not in NP, unless small certificates always exist.
Your question then becomes: do there exist polynomial-sized certificates for all binary Nurikabe instances, which can be checked in polynomial time?
It is not obvious to me that such certificates necessarily exist. Nor is it obvious how one would go about proving that succinct, quickly-verifiable certificates cannot exist.
However, the restriction to unique solutions means that the problem is actually US-hard, so co-NP-hard, and therefore unlikely to be in NP. The point is that if one regards "has a unique solution" as a Nurikabe constraint (as opposed to a desirable feature of instances that are presented to humans), then it is not sufficient to demonstrate that there is a solution, but one must also demonstrate that no other solutions are possible. This requirement alone is then enough to ensure the problem is probably not in NP. This is true even for the unary version of the problem.
In summary: if one relaxes the unique solutions requirement, and specifies the board size in unary, then the decision problem is in NP; with non-unique solutions and binary board size, it is unclear whether the decision problem is in NP; and with unique solutions the decision problem is US-hard and therefore unlikely to be in NP, for either encoding of the board size.
András Salamon
E. Counting Rectangles
memory limit per test: 256 megabytes
input: standard input
You have $$$n$$$ rectangles, the $$$i$$$-th rectangle has height $$$h_i$$$ and width $$$w_i$$$.
You are asked $$$q$$$ queries of the form $$$h_s \ w_s \ h_b \ w_b$$$.
For each query output, the total area of rectangles you own that can fit a rectangle of height $$$h_s$$$ and width $$$w_s$$$ while also fitting in a rectangle of height $$$h_b$$$ and width $$$w_b$$$. In other words, print $$$\sum h_i \cdot w_i$$$ for $$$i$$$ such that $$$h_s < h_i < h_b$$$ and $$$w_s < w_i < w_b$$$.
Please note, that if two rectangles have the same height or the same width, then they cannot fit inside each other. Also note that you cannot rotate rectangles.
Please note that the answer for some test cases won't fit into 32-bit integer type, so you should use at least 64-bit integer type in your programming language (like long long for C++).
The first line of the input contains an integer $$$t$$$ ($$$1 \leq t \leq 100$$$) — the number of test cases.
The first line of each test case contains two integers $$$n, q$$$ ($$$1 \leq n \leq 10^5$$$; $$$1 \leq q \leq 10^5$$$) — the number of rectangles you own and the number of queries.
Then $$$n$$$ lines follow, each containing two integers $$$h_i, w_i$$$ ($$$1 \leq h_i, w_i \leq 1000$$$) — the height and width of the $$$i$$$-th rectangle.
Then $$$q$$$ lines follow, each containing four integers $$$h_s, w_s, h_b, w_b$$$ ($$$1 \leq h_s < h_b,\ w_s < w_b \leq 1000$$$) — the description of each query.
The sum of $$$q$$$ over all test cases does not exceed $$$10^5$$$, and the sum of $$$n$$$ over all test cases does not exceed $$$10^5$$$.
For each test case, output $$$q$$$ lines, the $$$i$$$-th line containing the answer to the $$$i$$$-th query.
Sample input (truncated in this excerpt; only two query lines survive):
1 1 100 100
1 1 1000 1000
In the first test case, there is only one query. We need to find the sum of areas of all rectangles that can fit a $$$1 \times 1$$$ rectangle inside and that fit into a $$$3 \times 4$$$ rectangle.
Only the $$$2 \times 3$$$ rectangle works, because $$$1 < 2$$$ (comparing heights) and $$$1 < 3$$$ (comparing widths), so the $$$1 \times 1$$$ rectangle fits inside, and $$$2 < 3$$$ (comparing heights) and $$$3 < 4$$$ (comparing widths), so it fits inside the $$$3 \times 4$$$ rectangle. The $$$3 \times 2$$$ rectangle is too tall to fit in a $$$3 \times 4$$$ rectangle. The total area is $$$2 \cdot 3 = 6$$$. | CommonCrawl |
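Since all heights and widths are at most 1000, the queries can be answered in O(1) each after building a 2D prefix sum of total areas over the grid of sizes. The following Python sketch illustrates this standard approach (one natural solution, not necessarily the official one; Python integers are unbounded, so the 64-bit caveat from the statement is handled automatically):

import sys

def solve():
    data = sys.stdin.buffer.read().split()
    pos = 0
    t = int(data[pos]); pos += 1
    out = []
    M = 1000
    for _ in range(t):
        n, q = int(data[pos]), int(data[pos + 1]); pos += 2
        # grid[h][w]: total area of owned rectangles of exact size h x w
        grid = [[0] * (M + 1) for _ in range(M + 1)]
        for _ in range(n):
            h, w = int(data[pos]), int(data[pos + 1]); pos += 2
            grid[h][w] += h * w
        # in-place 2D prefix sum: grid[h][w] = total area with h_i <= h and w_i <= w
        for h in range(1, M + 1):
            for w in range(1, M + 1):
                grid[h][w] += grid[h - 1][w] + grid[h][w - 1] - grid[h - 1][w - 1]
        for _ in range(q):
            hs, ws, hb, wb = (int(x) for x in data[pos:pos + 4]); pos += 4
            # strict inequalities: h_s < h_i <= h_b - 1 and w_s < w_i <= w_b - 1
            out.append(str(grid[hb - 1][wb - 1] - grid[hs][wb - 1]
                           - grid[hb - 1][ws] + grid[hs][ws]))
    sys.stdout.write('\n'.join(out) + '\n')

solve()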
Estimating the mean of a Normal with unknown variance and then predicting a future observation
I am trying to estimate the population mean of 9 observations when the variance is unknown. I marginalized the posterior and understand that the t-distribution gives me the distribution of the population mean. I am stuck at this point. Normally, if I had to estimate something, I would generate 1000 or more random samples from the given distribution and then compute point or interval estimates from its values. But the t-distribution has confused me. Matlab's tpdf generates only 8 samples, but when I sum them up they do not add up to 1, which looks weird, so is it generating actual values? If these are actual values, then where is the distribution? How do I estimate the mean from it (substitute these values into the standardization formula to find values of the mean?).
PS: I have been studying stats recently and though I understand the mathematical part of it, I feel miserable when doing simulation in Matlab. So I would appreciate any pointers towards learning the computational side of it.
EDIT: I understand the mathematical or derivation part of it. It is the computational simulation that confuses me. I use tpdf for the t-distribution, but it needs data and the degrees of freedom; how do I then go about finding the point estimate of the mean in Matlab? Also, tpdf needs to be translated to my data values.
Tags: probability, self-study, bayesian, normal-distribution, t-distribution
self-study?! Please add the tag. And explain more clearly why the $t$ distribution confuses you. It has a mean that you can use as an estimator and you can produce a credible interval by using the t pdf. – Xi'an Apr 3 '15 at 7:18
Why should they add to 1? – Glen_b -Reinstate Monica Apr 3 '15 at 8:33
Shouldn't a probability distribution integrate to 1? – Sudh Apr 3 '15 at 15:41
Values sampled from a distribution are not (usually) probabilities, and even if they were, a set of sampled probabilities do not form a probability distribution. Consider if they did - then the first value (itself a sample of size 1) would have to be "1" to make that initial "distribution" integrate to 1, so all subsequent values would have to be "0". You seem to have a very mistaken notion of what's going on, but there are several errors at once, so it's difficult to start untangling your notions. – Glen_b -Reinstate Monica Apr 4 '15 at 0:23
Quoting from our Bayesian Essentials with R book,
if $\mathscr{D}_n$ denotes a normal $\mathscr{N}\left(\mu,\sigma^{2}\right)$ sample of size $n$, if $\mu$ has a prior equal to a $\mathscr{N}\left(0,\sigma^{2}\right)$ distribution, and $\sigma^{-2}$ an exponential $\mathscr{E}(1)$ distribution, the posterior is given by \begin{align*} \pi((\mu,\sigma^2)|\mathscr{D}_n) &\propto \pi(\sigma^2)\times\pi(\mu|\sigma^2)\times f(\mathscr{D}_n|\mu,\sigma^2)\\ & \propto (\sigma^{-2})^{1/2+2}\, \exp\left\{-(\mu^2 + 2)/2\sigma^2\right\}\\ & \quad \times (\sigma^{-2})^{n/2}\,\exp \left\{-\left(n(\mu-\overline{x})^2 + s^2 \right)/2\sigma^2\right\} \\ &\propto (\sigma^2)^{-(n+5)/2}\exp\left\{-\left[(n+1) (\mu-n\bar x/(n+1))^2+(2+s^2)\right]/2\sigma^2\right\}\\ &\propto (\sigma^2)^{-1/2}\exp\left\{-(n+1)[\mu-n\bar x/(n+1)]^2/2\sigma^2\right\}\\ & \quad \times (\sigma^2)^{-(n+2)/2-1}\exp\left\{-(2+s^2)/2\sigma^2\right\}\,. \end{align*} Therefore, the posterior on $(\mu,\sigma^2)$ can be decomposed as the product of an inverse gamma distribution on $\sigma^2$, $$\mathscr{IG}((n+2)/2,[2+s^2]/2)$$ which is the distribution of the inverse of a gamma $$\mathscr{G}((n+2)/2,[2+s^2]/2)$$ random variable and, conditionally on $\sigma^2$, a normal distribution on $\mu$, $$\mathscr{N} (n\bar x/(n+1),\sigma^2/(n+1)).$$ The marginal posterior in $\mu$ is then a Student's $t$ distribution $$ \mu|\mathscr{D}_n \sim \mathscr{T}\left(n+2,n\bar x\big/(n+1),(2+s^2)\big/(n+1)(n+2)\right)\,, $$ with $n+2$ degrees of freedom, a location parameter proportional to $\bar x$ and a scale parameter almost proportional to $s$.
From this distribution, you get the expectation $n\bar x/(n+1)$ that acts as your point estimator of $\mu$. And a credible interval on $\mu$ $$\left(n\bar x/(n+1)-((2+s^2)/(n+1)(n+2))^{1/2}q_{n+2}(\alpha),\ n\bar x/(n+1)+((2+s^2)/(n+1)(n+2))^{1/2}q_{n+2}(\alpha)\right)$$ where $q_{n+2}(\alpha)$ is the $t_{n+2}$ quantile.
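To make the computational side concrete, here is a minimal Python/scipy sketch (standing in for Matlab, with made-up data in place of the nine observations) that evaluates this closed-form posterior directly, without any random sampling:

import numpy as np
from scipy import stats

# hypothetical data standing in for the n = 9 observations
x = np.array([4.2, 5.1, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7, 4.3])
n = len(x)
xbar = x.mean()
s2 = ((x - xbar) ** 2).sum()   # s^2 as it appears in the derivation above

# Student-t posterior for mu, with the three parameters from the answer
df = n + 2
loc = n * xbar / (n + 1)
scale = np.sqrt((2 + s2) / ((n + 1) * (n + 2)))
post = stats.t(df=df, loc=loc, scale=scale)

print("point estimate of mu:", post.mean())           # posterior expectation
print("95% credible interval:", post.interval(0.95))  # equal-tailed interval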
Hey thanks, probably I wasn't clear enough. I understand the derivation of it. It is the computational part that confuses me. I mean, if I have say 10 data samples, how do I estimate the population mean using Matlab? – Sudh Apr 3 '15 at 15:44
You have all the statistical elements above; you only have to replace the three parameters of the Student's $t$ density with the values obtained from your sample. If this is a Matlab question, it should be asked on Stack Overflow, not here. – Xi'an Apr 3 '15 at 19:37
Not specifically Matlab, but any computational method. How would it be done in practice? For example, Matlab can generate random numbers from a t distribution based on the degrees of freedom provided, but how would that correspond to my data? I mean [5 6 8] and [56 54 57] both have 2 degrees of freedom, but their means are in very different ranges. – Sudh Apr 4 '15 at 0:25
Why would you need a random generator? If I have $n=10$ observations with $\bar x=2.3$ and $s^2=312$, the posterior on $\mu$ is a $t(12,2.09,2.38)$ distribution. End of the story. – Xi'an Apr 4 '15 at 8:41
Area of a square inscribed in a circle of radius r, if area of the square inscribed in the semicircle is given.
If a square is inscribed in a semicircle of radius r and the square has an area of 8 square units, find the area of a square inscribed in a circle of radius r.
I started by working out that the side of the square is $2\sqrt{2}$. But I did not know how this relates to what its dimensions would be if it were inscribed in a full circle. Could someone help? Thank you.
Tags: geometry, circles, area
Given that the small inscribed square has an area of $8$, it has a side length of $2\sqrt2$. The radius $r$ of the semicircle and circle is equal to the distance between the midpoint of the bottom side of the small inscribed square and one of the top vertices. This forms a right triangle with side lengths $2\sqrt2$, $\sqrt2$, and hypotenuse $r$. Using the Pythagorean theorem $$(2\sqrt2)^2+(\sqrt2)^2=r^2$$ $$8+2=r^2$$ $$r=\sqrt{10}$$ Now that we have the radius of the circle, we know that the side length of the large inscribed square is $\frac{2r}{\sqrt2} = r\sqrt2$ ($2r$ is the diagonal of the large inscribed square, also the diameter of the circle). The side length of the large inscribed square is $\sqrt{20}=2\sqrt5$, so its area is $20$ square units.
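A quick numeric check of the two steps (a Python sketch, just to confirm the arithmetic):

import math

a = math.sqrt(8)              # side of the small square, 2*sqrt(2)
r = math.hypot(a, a / 2)      # legs 2*sqrt(2) and sqrt(2) -> hypotenuse r
print(r ** 2)                 # 10.0, so r = sqrt(10)

side = 2 * r / math.sqrt(2)   # diagonal of the big square is the diameter 2r
print(side ** 2)              # 20.0 square units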
Phylogeny reconstruction based on the length distribution of k-mismatch common substrings
Burkhard Morgenstern (ORCID: orcid.org/0000-0002-7431-2862), Svenja Schöbel & Chris-André Leimeister
Algorithms for Molecular Biology volume 12, Article number: 27 (2017)
Various approaches to alignment-free sequence comparison are based on the length of exact or inexact word matches between pairs of input sequences. Haubold et al. (J Comput Biol 16:1487–1500, 2009) showed how the average number of substitutions per position between two DNA sequences can be estimated based on the average length of exact common substrings.
In this paper, we study the length distribution of k-mismatch common substrings between two sequences. We show that the number of substitutions per position can be accurately estimated from the position of a local maximum in the length distribution of their k-mismatch common substrings.
Phylogenetic distances between DNA or protein sequences are usually estimated based on pairwise or multiple sequence alignments. Since sequence alignment is computationally expensive, alignment-free phylogeny approaches have become popular in recent years, see Vinga [1] for a review. Some of these approaches compare the word composition [2,3,4,5] or spaced-word composition [6,7,8,9] of sequences using a fixed word length or patterns of match and don't-care positions, respectively. Other approaches are based on the matching statistics [10], that is on the length of common substrings of the input sequences [11, 12]. All these methods are much faster than traditional alignment-based approaches. A disadvantage of most word-based approaches to phylogeny reconstruction is that they are not based on explicit models of molecular evolution. Instead of estimating distances in a statistically rigorous way, they only return rough measures of sequence similarity or dissimilarity.
The average common substring (ACS) approach [11] calculates for each position in one sequence the length of the longest substring starting at this position that matches a substring of the other sequence. The average length of these substring matches is then used to quantify the similarity between two sequences based on information-theoretical considerations; these similarity values are finally transformed into symmetric distance values. More recently, we generalized this approach by using common substrings with k mismatches instead of exact substring matches [13]. To assign distance values to sequence pairs, we used the same information-theoretical approach that is used in ACS. Since there is no exact solution to the k-mismatch longest common substring problem that is fast enough to be applied to long genomic sequences, we proposed a simple heuristic: we first search for longest exact matches and then extend these matches until the k + 1st mismatch occurs. Distances are then calculated from the average length of these k-mismatch common substrings similarly as in ACS; the implementation of this approach is called kmacs. Various algorithms have been proposed in recent years to calculate exact or approximate solutions for the k-mismatch average common substring problem and have been applied to phylogeny reconstruction [14,15,16,17,18,19,20]. Like ACS and kmacs, these approaches are not based on stochastic models.
To our knowledge, the first alignment-free approach to estimate the phylogenetic distance between two DNA sequences in a statistically rigorous way was the program kr by Haubold et al. [21]. These authors showed that the average number of nucleotide substitutions per position between two DNA sequences can be estimated by calculating for each position i in one sequence the length of the shortest substring starting at i that does not occur in the other sequence, see also [22, 23]. This way, phylogenetic distances between DNA sequences can be accurately estimated for up to around 0.5 substitutions per position. Some other, more recent, alignment-free approaches also estimate phylogenetic distances based on stochastic models of molecular evolution, namely Co-phylog [24], andi [25], an approach based on the number of (spaced) word matches [7] and Filtered Spaced Word Matches [26].
In this paper, we propose an approach to estimate phylogenetic distances based on the length distribution of k-mismatch common substrings. The manuscript is organized as follows. In the next section, we introduce some notation and the stochastic model of sequence evolution that we are using. In the following two sections, we recapitulate a result from [21] on the length distribution of longest common substrings, we generalize this to k-mismatch longest common substrings, and we study the length distribution of k-mismatch common substrings returned by the kmacs heuristic [13]. Then, we introduce our new approach to estimate phylogenetic distances and explain some implementation details. In the final sections, we report on benchmarking results, discuss these results and address some possible future developments. We should mention that the "k-mismatch longest common substrings" and "Heuristic used in kmacs" sections are not necessary to understand our new approach that is introduced in the "Distance estimation" section. We added these two sections for completeness, and since they may be used for alternative ways of phylogenetic distance estimation. But readers who are mainly interested in our approach to distance estimation can skip these sections.
Sequence model and notation
We use standard notation such as used in [27]. For a sequence S of length L over some alphabet, S(i) is the ith character in S. S[i..j] denotes the (contiguous) substring from i to j; we say that S[i..j] is a substring at i. In the following, we consider two DNA sequences \(S_1\) and \(S_2\) that are thought to have descended from an unknown common ancestor under the Jukes-Cantor model [28]. That is, we assume that substitutions at different positions are independent of each other, that we have a constant substitution rate at all positions and that all substitutions occur with the same probability. We therefore have a match probability p and a background probability q such that \(P\left( S_1(i) = S_2(j)\right) = p\) if \(S_1(i)\) and \(S_2(j)\) descend from the same position in the hypothetical ancestral sequence—in which case \(S_1(i)\) and \(S_2(j)\) are called 'homologue'—and \(P\left( S_1(i) = S_2(j)\right) = q\) otherwise ('background').
Moreover, we use a gap-free model of evolution where \(S_1\) and \(S_2\) have the same length L, to simplify the considerations below. With this model, \(S_1(i)\) and \(S_2(j)\) are 'homologue' if and only if \(i=j\), so we have
$$\begin{aligned} P\left( S_1(i) = S_2(j)\right) = \left\{ \begin{array}{ll} p \quad{} \text { if } i = j\\ q \quad {} \text { else } \\ \end{array} \right. \end{aligned}$$
Similarly, we call a pair of equal-length substrings of \(S_1\) and \(S_2\) homologue if they start at the same respective positions in \(S_1\) and \(S_2\), and background otherwise. The background match probability q can be easily estimated from the relative frequencies of the four nucleotides. The main goal of the present study is to estimate the probability p. The distance between \(S_1\) and \(S_2\), defined as the number of substitutions per position since two sequences diverged from their last common ancestor, can then be obtained from p by the usual Jukes-Cantor correction. Note that, with our gap-free model, it is trivial to estimate p as the relative frequency of positions i where \(S_1(i)\) equals \(S_2(i)\). However, we will apply our results to real-world sequences with insertions and deletions where such a trivial approach is not possible.
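To make the model concrete, the following Python sketch (with arbitrary parameter choices) simulates such a gap-free pair and applies the trivial estimate of p together with the Jukes-Cantor correction:

import math
import random

NUC = "ACGT"

def evolve_pair(L, p):
    """Gap-free pair: position i of S2 matches S1 with probability p;
    otherwise one of the three other nucleotides is chosen uniformly."""
    s1 = [random.choice(NUC) for _ in range(L)]
    s2 = [c if random.random() < p else random.choice(NUC.replace(c, ""))
          for c in s1]
    return "".join(s1), "".join(s2)

def jukes_cantor(p_hat):
    """Substitutions per position from an estimated match probability."""
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * (1.0 - p_hat))

s1, s2 = evolve_pair(100_000, p=0.6)
p_hat = sum(a == b for a, b in zip(s1, s2)) / len(s1)   # trivial estimate
print(p_hat, jukes_cantor(p_hat))   # ~0.6 and ~0.57 substitutions per position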
k-mismatch longest common substrings
For positions i and j in sequence \(S_1\) and \(S_2\), respectively, we define random variables
$$\begin{aligned} X_{i,j} = \max \{l: S_1[i..i+l-1] = S_2[j..j+l-1] \} \end{aligned}$$
That is, \(X_{i,j}\) is the length of the longest substring at i that exactly matches a substring at j. Next, we define
$$\begin{aligned} X_i = \max _{1\le j\le L} X_{i,j} \end{aligned}$$
as the length of the longest substring at i that matches a substring of \(S_2\) anywhere in the sequence, see Fig. 1 for an example.
k-mismatch common substrings with \(k=2\). For position \(i=5\) in \(S_1\), kmacs searches the longest substring of \(S_1\) starting at i that exactly matches a substring of \(S_2\). This is the substring starting at \(j^*=2\) in \(S_2\) (matching substrings shown in red). It then extends this match without gaps until the \(k+1\)st mismatch is reached. In this example, the k-mismatch common substring would consist of the red, blue and green substrings and has length 12. In the paper, the lengths of these k-mismatch common substrings are modelled by the random variables \(X_i^{(k)}\), defined in (1). The original version of kmacs uses the average length of these k-mismatch common substrings to assign a distance value to a pair of sequences. In our modified implementation of kmacs, we consider the k-mismatch extension of the longest common substring at i. That is, the program would return the length of the k-mismatch substring match that starts after the first mismatch following the longest common substring. In our example, for \(i=5,\) this would be the substring match starting with 'T' at position 11 in \(S_1\) and at position 8 in \(S_2\), consisting of the blue, green and orange matches; the length of this k-mismatch substring extension would be 9. The lengths of these k-mismatch extensions are modelled by the random variable \(\hat{X}_i^{(k)},\) defined in (16)
In the following, we ignore edge effects, which is justified if long sequences are compared since the probability of k-mismatch common substrings of length m decreases rapidly if m increases. With this simplification, we have
$$\begin{aligned} P(X_{i,j} < n ) = 1 - P(X_{i,j} \ge n ) = \left\{ \begin{array}{ll} 1-p^n {} \quad \text { if } i = j \\ 1-q^n {} \quad \text { else } \\ \end{array} \right. \end{aligned}$$
If, in addition, we assume equilibrium frequencies for the nucleotides, i.e. if we assume that each nucleotide occurs at each sequence position with probability 0.25, the random variables \(X_{i,j}\) and \(X_{i',j'}\) are independent of each other whenever \(j-i\not =j'-i'\) holds. In this case, we have for \(n\le L-i+1\)
$$\begin{aligned} P(X_{i} < n) &= P(X_{i,1} < n \wedge \cdots \wedge X_{i,L} < n) \\ &= P(X_{i,1} < n) \cdot \ldots \cdot P(X_{i,L} < n) \\ &= P(X_{i,1} < n) \cdot \ldots \cdot P(X_{i,L-n+1} < n) \\ &= (1 - q^{n})^{L-n} \cdot (1 - p^{n}) \end{aligned}$$
$$\begin{aligned} P(X_{i} = n) &= P(X_{i} < n+1) - P(X_{i} < n) \\ &= (1 - q^{n+1})^{L-n-1} \cdot (1 - p^{n+1}) - (1 - q^{n})^{L-n} \cdot (1 - p^{n}) \end{aligned}$$
so the expected length of the longest common substring at a given sequence position is
$$\begin{aligned} \sum _{n=1}^L n \cdot \left( (1-q^{n+1})^{L-n-1} \cdot (1-p^{n+1}) - \left(1-q^n \right)^{L-n} \cdot \left(1-p^n \right) \right) \end{aligned}$$
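For illustration, this expectation can be evaluated numerically from the cumulative distribution in the previous display; a minimal Python sketch (edge effects ignored, as in the text):

def expected_longest_match(L, p, q):
    """E[X_i] from P(X_i < n) = (1 - q^n)^(L-n) * (1 - p^n)."""
    def cdf(n):
        return (1.0 - q ** n) ** (L - n) * (1.0 - p ** n)
    return sum(n * (cdf(n + 1) - cdf(n)) for n in range(1, L))

# e.g. L = 100 kb, homologous match probability p = 0.6, background q = 0.25
print(expected_longest_match(100_000, 0.6, 0.25))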
Next, we generalize the above considerations by looking at the average length of the k-mismatch longest common substrings between two sequences for some integer \(k \ge 0\). That is, for a position i in one of the sequences, we consider the longest substring starting at i that matches some substring in the other sequence with a Hamming distance \(\le k.\) Generalizing the above notation, we define random variables
$$\begin{aligned} X_{i,j}^{(k)} = \max \left\{ l: d_H\left( S_1[i..i+l-1],\;S_2[j..j+l-1]\right) \le k \right\} \end{aligned}$$
where \(d_H(\cdot ,\cdot )\) is the Hamming distance between two sequences. In other words, \(X_{i,j}^{(k)}\) is the length of the longest substring starting at position i in sequence \(S_1\) that matches a substring starting at position j in sequence \(S_2\) with k mismatches. Accordingly, we define
$$\begin{aligned} X_{i}^{(k)} = \max _j X^{(k)}_{i,j} \end{aligned}$$
as the length of the longest k-mismatch substring at position i. As pointed out by Apostolico et al. [18], \(X^{(k)}_{i,j}\) follows a negative binomial distribution, and we can write
$$\begin{aligned} P\left( X^{(k)}_{i,j} = n \right) = \left\{ \begin{array}{ll} {n \atopwithdelims ()k} p^{n-k} (1-p)^{k+1} {} \quad \text { if } i=j \\ {n \atopwithdelims ()k} q^{n-k} (1-q)^{k+1} {} \quad \text { else } \\ \end{array} \right. \end{aligned}$$
$$\begin{aligned} P\left( X^{(k)}_{i,j} \ge n \right) = \left\{ \begin{array}{ll} \sum _{k'\le k} {n \atopwithdelims ()k'} p^{n-k'} (1-p)^{k'} {} \quad \text { if } i=j \\ \sum _{k'\le k} {n \atopwithdelims ()k'} q^{n-k'} (1-q)^{k'} {} \quad \text { else } \\ \end{array} \right. \end{aligned}$$
Generalizing (3), we obtain for \(n>k\)
$$\begin{aligned} P\left( X_{i}^{(k)} < n\right) = \left( 1 - \sum _{k'\le k} {n \atopwithdelims ()k'} q^{n-k'} (1-q)^{k'}\right)^{L-n} \cdot \left( 1 - \sum _{k'\le k} {n \atopwithdelims ()k'} p^{n-k'} (1-p)^{k'}\right) \end{aligned}$$
while we have
$$\begin{aligned} P\left( X_{i}^{(k)} < n\right) = \left\{ \begin{array}{ll} 1 {} \quad \text { if } n > L-i+1\\ 0 {} \quad \text { if } n \le k \\ \end{array} \right. \end{aligned}$$
Finally, we obtain
$$\begin{aligned} P\left( X_i^{(k)}=n\right) &= \left( 1 - \sum _{k'\le k} {n+1 \atopwithdelims ()k'} q^{n+1-k'} (1-q)^{k'}\right)^{L-n-1} \cdot \left( 1 - \sum _{k'\le k} {n+1\atopwithdelims ()k'} p^{n+1-k'} (1-p)^{k'}\right) \\ &\quad - \left( 1 - \sum _{k'\le k} {n \atopwithdelims ()k'} q^{n-k'} (1-q)^{k'}\right)^{L-n} \cdot \left( 1 - \sum _{k'\le k} {n \atopwithdelims ()k'} p^{n-k'} (1-p)^{k'}\right) \end{aligned}$$
from which one can obtain the expected length of the k-mismatch longest substrings.
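The same can be done for the k-mismatch case; a Python sketch evaluating the last two displays directly:

from math import comb

def tail(n, k, r):
    """P(one fixed substring pair matches with <= k mismatches over length n)."""
    return sum(comb(n, j) * r ** (n - j) * (1 - r) ** j for j in range(k + 1))

def pmf_longest_k_mismatch(n, k, L, p, q):
    """P(X_i^(k) = n), edge effects ignored as in the text."""
    def cdf(m):   # P(X_i^(k) < m)
        if m <= k:
            return 0.0
        return (1 - tail(m, k, q)) ** (L - m) * (1 - tail(m, k, p))
    return cdf(n + 1) - cdf(n)

print(pmf_longest_k_mismatch(60, 20, 100_000, 0.6, 0.25))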
Heuristic used in kmacs
Since exact solutions for the average k-mismatch common substring problem are too time-consuming for large sequence sets, the program kmacs [13] uses a heuristic. In a first step, the program calculates for each position i in one sequence, the length of the longest substring starting at i that exactly matches a substring of the other sequence. kmacs then calculates the length of the longest gap-free extension of this exact match to the right-hand side with k mismatches. Using standard indexing structures, this can be done in \(O (L\cdot k)\) time.
For sequences \(S_1, S_2\) as above and a position i in \(S_1\), let \(j^*\) be a position in \(S_2\) such that the \(X_i\)-length substring starting at i matches the \(X_i\)-length substring at \(j^*\) in \(S_2\). That is, the substring
$$\begin{aligned} S_2[j^*..j^* + X_i -1] \end{aligned}$$
is the longest substring of \(S_2\) that matches a substring of \(S_1\) at position i. In case there are several such positions in \(S_2\), we assume for simplicity that \(j^* \not = i\) holds (in the following, we only need to distinguish the cases \(j^*=i\) and \(j^*\not = i\), otherwise it does not matter how \(j^*\) is chosen). Now, let the random variable \(\tilde{X}^{(k)}_i\) be defined as the length of the k-mismatch common substring starting at i and \(j^*\), so we have
$$\begin{aligned} \tilde{X}^{(k)}_i = X_{i,j^*}^{(k)} = X_i + X^{(k-1)}_{i+X_i,j^*+X_i} + 1 \end{aligned}$$
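For illustration, the extension step of this heuristic can be written in a few lines; a simplified Python sketch with made-up sequences (kmacs itself works on enhanced suffix arrays and performs this step in \(O(L\cdot k)\) time overall):

def k_mismatch_extension(s1, s2, i, j, k):
    """Extend a match starting at (i, j) gap-free to the right until the
    (k+1)-st mismatch; returns the length of the k-mismatch common substring."""
    mismatches, l = 0, 0
    while i + l < len(s1) and j + l < len(s2):
        if s1[i + l] != s2[j + l]:
            mismatches += 1
            if mismatches > k:
                break
        l += 1
    return l

print(k_mismatch_extension("ACGTACGT", "ACGAACTT", 0, 0, 2))   # -> 8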
Theorem 1 For a pair of sequences as above, \(1 \le i \le L\) and \(m\le L - i + 1\), the probability that the heuristic kmacs hit has length m is given as
$$\begin{aligned} P\left( \tilde{X}^{(k)}_i = m\right) &= p^{m-k+1}(1-p)^{k+1} \sum _{m_1+m_2=m-1} (1 - q^{m_1+1})^{L-m_1} {m_2 \atopwithdelims ()k-1} \\ &\quad + \sum _{m_1+m_2=m-1} \left[ (1-q^{m_1+1})^{L-m_1} - (1-q^{m_1})^{L-m_1} \right] (1-p^{m_1})\, {m_2 \atopwithdelims ()k-1}\, q^{m_2-k+1}(1-q)^k \end{aligned}$$
Proof Distinguishing between 'homologous' and 'background' matches, and using the law of total probability, we can write
$$\begin{aligned} P\left( \tilde{X}^{(k)}_i = m\right) = P\left( \tilde{X}^{(k)}_i = m \,\big|\, j^*=i\right) P(j^*=i) + P\left( \tilde{X}^{(k)}_i = m \,\big|\, j^*\not =i\right) P(j^*\not =i) \end{aligned}$$
and with (5), we obtain
$$\begin{aligned} P\left( \tilde{X}^{(k)}_i = m \,\big|\, j^*=i\right) &= \sum _{m_1+m_2=m-1} P(X_i = m_1 \,|\, j^*=i)\, P\left( X^{(k-1)}_{i+m_1+1,\,i+m_1+1}=m_2\right) \\ &= \sum _{m_1+m_2=m-1} P(X_i = m_1 \,|\, j^*=i)\, {m_2 \atopwithdelims ()k-1} p^{m_2-k+1}(1-p)^k \end{aligned}$$
$$\begin{aligned} P(X_i = m_1 \,|\, j^*=i) &= \frac{P(X_{i,i}=m_1 \wedge j^*=i)}{P(j^*=i)} \\ &= \frac{P(X_{i,i}=m_1 \wedge X_{i,i} \ge X_{i,j} \text{ for all } j\not = i)}{P(j^*= i)} \\ &= \frac{P(X_{i,i}=m_1 \wedge X_{i,j}\le m_1 \text{ for all } j\not = i)}{P(j^*= i)} \\ &= \frac{p^{m_1}(1-p) \cdot (1 - q^{m_1+1})^{L-m_1}}{P(j^*= i)} \end{aligned}$$
so with (11) and (12), the first summand in (10) becomes
$$\begin{aligned} P\left( \tilde{X}^{(k)}_i = m \,\big|\, j^*=i\right) P(j^*=i) &= \sum _{m_1+m_2=m-1} P(X_i = m_1 \,|\, j^*=i)\, {m_2 \atopwithdelims ()k-1} p^{m_2-k+1}(1-p)^k \cdot P(j^*=i) \\ &= \sum _{m_1+m_2=m-1} (1 - q^{m_1+1})^{L-m_1}\, {m_2 \atopwithdelims ()k-1}\, p^{m_1+m_2-k+1}(1-p)^{k+1} \\ &= p^{m-k+1}(1-p)^{k+1} \sum _{m_1+m_2=m-1} (1 - q^{m_1+1})^{L-m_1} {m_2 \atopwithdelims ()k-1} \end{aligned}$$
Similarly, for the second summand in (10), we note that
$$\begin{aligned}&P\left( \tilde{X}^{(k)}_i = m| j^*\not =i\right) \nonumber \\&= \sum _{m_1+m_2=m{-1}} P(X_i = m_1 | j^*\not =i) {m_2 \atopwithdelims ()k-1} q^{m_2-k+1}(1-q)^k \end{aligned}$$
$$\begin{aligned} P(X_i = m_1 \,|\, j^*\not =i) &= \frac{P(X_{i,j^*}=m_1 \wedge j^*\not =i)}{P(j^*\not =i)} \\ &= \frac{P(X_{i,j^*}=m_1 \wedge X_{i,i} < X_{i,j^*})}{P(j^*\not = i)} \\ &= \frac{P(X_{i,j^*}=m_1 \wedge X_{i,i} < m_1)}{P(j^*\not = i)} \\ &= \frac{P(\max _{j\not =i}X_{i,j}=m_1 \wedge X_{i,i} < m_1)}{P(j^*\not = i)} \\ &= \frac{P(\max _{j\not =i}X_{i,j}=m_1) \cdot P(X_{i,i} < m_1)}{P(j^*\not = i)} \\ &= \frac{\left[ (1-q^{m_1+1})^{L-m_1} - (1-q^{m_1})^{L-m_1} \right] \cdot (1-p^{m_1})}{P(j^*\not = i)} \end{aligned}$$
Thus, the second summand in (10) is given as
$$\begin{aligned} P\left( \tilde{X}^{(k)}_i = m \,\big|\, j^*\not =i\right) P(j^*\not =i) &= \sum _{m_1+m_2=m-1} P(X_i = m_1 \,|\, j^*\not =i)\, {m_2 \atopwithdelims ()k-1} q^{m_2-k+1}(1-q)^k \cdot P(j^*\not = i) \\ &= \sum _{m_1+m_2=m-1} \left[ (1-q^{m_1+1})^{L-m_1} - (1-q^{m_1})^{L-m_1} \right] (1-p^{m_1})\, {m_2 \atopwithdelims ()k-1}\, q^{m_2-k+1}(1-q)^k \end{aligned}$$
For \(1\le m \le L\), the expected number of k-mismatch common substrings of length m returned by the kmacs heuristic is given as \(L \cdot P\left( \tilde{X}^{(k)}_i = m\right)\) and can be calculated using Theorem 1. Moreover, one can use the above considerations to calculate the length distributions of the homologous and background k-mismatch common substrings returned by kmacs. (Remember that, with our simple gap-free model, two substrings of \(S_1\) and \(S_2\), respectively, are called homologous if they start at the same positions and background otherwise.) The probabilities on the right-hand side of Eq. (10) can be used to calculate the expected number of homologous and background k-mismatch common substrings of length m returned by kmacs. In Fig. 2, these expected numbers are plotted against m for \(L=100\) kb, \(p=0.6\) and \(k=20\).
Theoretical length distribution of k-mismatch longest common substrings. The expected number of homologous and background k-mismatch longest common substrings of length m, returned by the kmacs heuristic, was calculated for \(20 \le m \le 80\) using Theorem 1 for an indel-free pair of sequences of length \(L=100\) kb, a match probability \(p=0.6\) (corresponding to 0.57 substitutions per position) and \(k=20\)
Distance estimation
Using Theorem 1, one could estimate the match probability p—and thereby the average number of substitutions per position—from the empirical average length of the k-mismatch common substrings returned by kmacs in a moment-based approach, similar to the approach proposed in [21].
A problem with this moment-based approach is that, for realistic values of L and p, one has \(P(j^*=i) \ll P(j^*\not =i)\), so the above sum is heavily dominated by the 'background' part, i.e. by the second summand in (10). For the parameter values used in Fig. 2, for example, only 1% of the matches returned by kmacs represent homologies while 99% are background noise. There are, in principle, two ways to circumvent this problem. First, one could try to separate homologous from background matches using a suitable threshold value, similarly as we have done in our Filtered Spaced Word Matches approach [29]. But this is more difficult for k-mismatch common substrings, since there can be much more overlap between homologous and background matches than for Spaced-Word matches, see Fig. 2.
There is an alternative to this moment-based approach, however. As can be seen in Fig. 2, the length distribution of the k-mismatch longest common substrings is bimodal, with a first peak in the distribution corresponding to the background matches and a second peak corresponding to the homologous matches. We show that the number of substitutions per position can be easily estimated from the position of this second peak.
Enhanced suffix array. For sequences 'banana' and 'ananas', the enhanced suffix array is shown. Suffixes of the concatenated sequence are lexicographically ordered; a longest common prefix (LCP) array indicates the length of the longest common prefix of a suffix with its predecessor in the list (figure taken from [13])
To simplify the following calculations, we ignore the longest exact match in Eq. (9), and consider only the length of the gap-free 'extension' of this match, see Fig. 1 for an illustration. To model the length of these k-mismatch extensions, we define random variables
$$\begin{aligned} \hat{X}^{(k)}_i = \tilde{X}_{i}^{(k+1)} - X_i - 1 = X^{(k)}_{i+X_i+1,\,j^*+X_i+1} \end{aligned}$$
In other words, for a position i in sequence \(S_1\), we are looking for the longest substring starting at i that exactly matches a substring of \(S_2\). If \(j^*\) is the starting position of this substring of \(S_2\), we define \(\hat{X}^{(k)}_i\) as the length of the longest possible substring of \(S_1\) starting at position \(i+ X_i + 1\) that matches a substring of \(S_2\) starting at position \(j^* + X_i + 1\) with a Hamming distance of k.
Theorem 2 Let \(\hat{X}^{(k)}_i\) be defined as in (16). Then the distribution of \(\hat{X}^{(k)}_i\) is the sum of two unimodal contributions, a 'homologous' and a 'background' contribution, and the maximum of the 'homologous' contribution is reached at
$$\begin{aligned} m_H = \left\lceil \frac{k}{1-p} -1 \right\rceil \end{aligned}$$
and the maximum of the 'background' contribution is reached at
$$\begin{aligned} m_B = \left\lceil \frac{k}{1-q} -1 \right\rceil \end{aligned}$$
Proof As in (5), the distribution of \(\hat{X}^{(k)}_i\) conditional on \(j^*=i\) or \(j^*\not =i\), respectively, can be easily calculated as
$$\begin{aligned} P\left( \hat{X}^{(k)}_i = m | j^*=i \right) = P\left( X^{(k)}_{i+ X_i+1,i+ X_i+1} = m \right) = {m \atopwithdelims ()k} p^{m-k} (1-p)^{k+1} \end{aligned}$$
$$\begin{aligned} P\left( \hat{X}^{(k)}_i = m\left| j^*\not = i \right) = {m \atopwithdelims ()k} q^{m-k} (1-q)^{k+1} \right. \end{aligned}$$
$$\begin{aligned} P\left( \hat{X}^{(k)}_i = m\right)&= P(j^* = i) {m \atopwithdelims ()k} p^{m-k} (1-p)^{k+1} \nonumber \\& \quad+ P(j^*\not = i) {m \atopwithdelims ()k} q^{m-k} (1-q)^{k+1} \end{aligned}$$
For the homologous part
$$\begin{aligned} H_k(m) = {P(j^*= i)} {m \atopwithdelims ()k} p^{m-k} (1-p)^{k+1} \end{aligned}$$
we obtain the recursion
$$\begin{aligned} H_k(m+1)= { \frac{m+1}{m+1-k}\cdot p \cdot H_k(m) } \end{aligned}$$
so we have \(H_k(m) \le H_k(m+1)\) if and only if
$$\begin{aligned} \frac{ m+1-k }{m+1} \le p \end{aligned}$$
Similarly, the 'background contribution'
$$\begin{aligned} B_k(m) = P(j^*\not = i) {m \atopwithdelims ()k} q^{m-k} (1-q)^{k+1} \end{aligned}$$
is increasing until
$$\begin{aligned} \frac{ m+1-k }{m+1} \le q \end{aligned}$$
holds, which concludes the proof of the theorem. □
The proof of Theorem 2 gives us lower and upper bounds for p and an easy approach to estimate p from the empirical length distribution of the k-mismatch extensions calculated by kmacs. Let \(m_{\max }\) be the maximum of the homologous part of the distribution \(\hat{X}^{(k)}_i\), i.e. we define
$$\begin{aligned} m_{\max } =\mathop{\text{argmax}}_m {m \atopwithdelims ()k} p^{m-k} (1-p)^{k+1} \end{aligned}$$
Then, by inserting \(m_{\max }-1\) and \(m_{\max }\) into inequality (18), we obtain
$$\begin{aligned} \frac{m_{\max }-k}{m_{\max }} \le p \le \frac{ m_{\max }+1-k }{m_{\max }+1} \end{aligned}$$
Finally, we use (18) to estimate p from the second maximum \(m_E\) of the empirical distribution of \(\hat{X}^{(k)}_i\) as
$$\begin{aligned} \hat{p} \approx \frac{ m_E +1-k }{m_E+1} \end{aligned}$$
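As a minimal illustration of this estimator (the numbers reproduce the worked example in the Discussion below, where a peak at \(m_E=179\) with \(k=90\) yields an estimate of 0.5):

import math

def p_hat_from_peak(m_e, k):
    """Eq. above: match probability from the homologous peak position m_E."""
    return (m_e + 1 - k) / (m_e + 1)

def jukes_cantor(p_hat):
    return -0.75 * math.log(1 - (4.0 / 3.0) * (1 - p_hat))

print(p_hat_from_peak(179, 90))                 # -> 0.5
print(jukes_cantor(p_hat_from_peak(179, 90)))   # -> ~0.824 substitutions/position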
For completeness, we calculate the probability \(P(j^* = i)\). First we note that, by definition, for all i, we have
$$\begin{aligned} P(j^* = i) = P\left( X_{i,j}\,< \,X_{i,i} \quad \text { for all } j\not =i\right) \end{aligned}$$
so with the law of total probability and Eq. (2), we obtain
$$\begin{aligned} P(j^* = i) & = P\left( X_{i,j} < X_{i,i} \quad \text { for all } j\not =i\right) \nonumber \\ &= \sum _m P\left( X_{i,j} < X_{i,i} \quad \text { for all } j\not =i| X_{i,i} = m \right) P( X_{i,i} = m) \nonumber \\ & = \sum _m P\left( X_{i,j} < m \quad \text { for all } j\not =i \right) P( X_{i,i} = m) \nonumber \\ & = \sum _m \prod _{j\not =i}\, P( X_{i,j} < m)\, P( X_{i,i} = m) \nonumber \\ & = \sum _m (1-q^m)^{L -1} p^m (1-p) \end{aligned}$$
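Numerically, this series can be truncated after a few hundred terms, since \(p^m\) decays geometrically; the following Python sketch reproduces the roughly 1% homology rate quoted above for \(L=100\) kb and \(p=0.6\):

def prob_homologous_anchor(L, p, q, m_max=500):
    """P(j* = i) = sum_m (1 - q^m)^(L-1) * p^m * (1 - p), truncated at m_max."""
    return sum((1 - q ** m) ** (L - 1) * p ** m * (1 - p)
               for m in range(m_max + 1))

print(prob_homologous_anchor(100_000, 0.6, 0.25))   # ~0.01 for Fig. 2's setting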
Empirical length distribution of k-mismatch common substring extensions. The number of k-mismatch extensions of length m was calculated with kmacs for a pair of simulated DNA sequences of length \(L=500\) kb with \(k=90\) and \(80 \le m \le 240\). The plot shows the raw frequencies and the smoothed distribution with different values for the width w of the smoothing window. The height of the 'homologous' peak is > 50,000
For each position i in one of the two input sequences, kmacs first searches the longest substring starting at i that exactly matches a substring of the other sequence. For a user-defined parameter k, the program then calculates the length of the longest possible gap-free extension with k mismatches of this exact hit. The original version of the program uses the average length of these k-mismatch common substrings (the initial exact match plus the \(k-1\)-mismatch extension after the first mismatch) to assign a distance value to a pair of sequences. We modified kmacs to output the length of the extensions of the identified matches only, ignoring these initial exact matches. Thus, to find k-mismatch common substrings, we ran kmacs with parameter \(k+1\), and we consider the length of the k-mismatch extension after the first mismatch. For each possible length m, the modified program outputs the number N(m) of k-mismatch extensions of length m, starting after the first mismatch after the respective longest exact match.
To find for each position i in one sequence the length of the longest substring at i matching a substring of the other sequence, kmacs uses a standard procedure based on enhanced suffix arrays [30], see Fig. 3. The algorithm first identifies the corresponding position in the suffix array. It then goes in both directions, up and down, in the suffix array until the first entry from the respective other sequence is found. In both cases, the minimum of the LCP values is recorded. The maximum of these two minima is the length of the longest substring in the other sequence matching a substring starting at i. In Fig. 3, for example, if i is position 3 in the string ananas, i.e. the 10th position in the concatenated string, the minimum LCP value until the first entry from banana is found is 3 if one goes up the array and 0 if one goes down. Thus, the longest string in banana matching a substring starting at position 3 in ananas has length 3.
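The following naive Python sketch mimics this lookup on the toy example (illustration only, far from kmacs' efficient enhanced-suffix-array implementation; it exploits the fact that the direct longest common prefix of two suffixes equals the minimum over the adjacent LCP values between them in suffix-array order):

def longest_match_lengths(s1, s2):
    """For each position i of s1: length of the longest substring at i
    that occurs somewhere in s2 (naive O(n^2 log n) suffix-array toy)."""
    t = s1 + "\x00" + s2                      # separator blocks spurious matches
    sa = sorted(range(len(t)), key=lambda a: t[a:])
    rank = {start: r for r, start in enumerate(sa)}

    def lcp(a, b):                            # direct longest common prefix
        n = 0
        while a + n < len(t) and b + n < len(t) and t[a + n] == t[b + n]:
            n += 1
        return n

    res = []
    for i in range(len(s1)):
        best = 0
        for step in (-1, 1):                  # walk up, then down
            j = rank[i] + step
            while 0 <= j < len(sa) and sa[j] <= len(s1):
                j += step                     # skip suffixes not from s2
            if 0 <= j < len(sa):
                best = max(best, lcp(i, sa[j]))
        res.append(best)
    return res

print(longest_match_lengths("ananas", "banana")[2])   # position 3 -> 3 ('ana')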
Note that, for a position i in one sequence, it is possible that there exists more than one maximal substring in the other sequence matching a substring at i. In this case, our modified algorithm uses all of these maximal substring matches, i.e. all maximal exact string matches are extended as described above. All these hits can be easily found in the suffix array by extending the search in upwards or downwards direction until the minimum of the LCP entries decreases. In the above example, there is a second occurrence of ana in banana which is found by moving one more position upwards (the corresponding LCP value is still 3).
In addition, we modified the original kmacs to ensure that, for each pair \((i',j')\) of positions from the two input sequences, the extended k-mismatch common substring starting at \((i',j')\) is counted only once. This is necessary for the following reason: if \(S_1\) and \(S_2\) share a long common substring S, then there will be many positions i in \(S_1\) within S such that \(j^*\) is at the corresponding position of S in \(S_2\). In Fig. 1, for example, the red substring starting at positions 5 and 2, respectively, would be such a string S. Here, there are three positions i in \(S_1\)—positions 5, 6 and 7—such that the respective \(j^*\) would be at the corresponding positions in \(S_2\)—at positions 2, 3 and 4, in this case. As a consequence, all maximal exact matches starting at these positions end before the first mismatch after the red substring—at positions 10 and 7—, so the k-mismatch extensions of all these exact matches start at positions \(i'=11\) and \(j'=8\) in \(S_1\) and \(S_2\), respectively. If all k-mismatch extensions returned by kmacs were counted, the extension starting after the red exact substring match would be counted three times. In real-world genomic sequences, such situations are common. Without the above correction, we observed isolated values m in the length distribution of the k-mismatch extensions such that the number N(m) of k-mismatch extensions of length m is very high, while \(N(m')\) is zero for neighbouring values \(m'\).
Theoretical length distribution of k-mismatch common substring extensions. The expected number of k-mismatch extensions of length m returned by kmacs was calculated using Eq. (17), distinguishing between 'homologous' and 'background' matches, for a pair of sequences of length \(L=500\) kb with a match probability of \(p=0.5\) for \(k=10\) (top) and \(k=70\) (bottom) for \(20\le m \le 160\). A large enough value of k is necessary to detect the second peak in the distribution that corresponds to the 'homologous' matches
To further process the length distribution returned by the modified kmacs, we implemented a number of Perl scripts. First, the length distribution of the k-mismatch common substrings is smoothed using a window of length w. Next, we search for the second local maximum in this smoothed length distribution. This second peak should represent the homologous k-mismatch common substrings, while the first, larger peak represents the background matches, see Figs. 4 and 5. A simple script identifies the position \(m^*\) of the second highest local peak under two side constraints: we require the height \(N(m^*)\) of the second peak to be substantially smaller than the global maximum, and we require that \(N(m^*)\) is larger than \(N(m^*-x)\) for some suitable parameter x. Quite arbitrarily, we required the second peak to be 10 times smaller than the global maximum peak, and we used a value of \(x=4\). These constraints were introduced to prevent the program from identifying small side peaks within the background peak. Finally, we use the position \(m^*\) of the second largest peak in the smoothed length distribution to estimate the match probability p in an alignment of the two input sequences using expression (19). The usual Jukes-Cantor correction is then used to estimate the number of substitutions per position that have occurred since the two sequences separated from their last common ancestor.
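A simplified Python sketch of this post-processing pipeline (the side constraints of our Perl scripts are only approximated here; the factor 10 and the simple window handling reflect the arbitrary choices described above):

import math

def estimate_distance(counts, k, w=31):
    """counts[m] = number of k-mismatch extensions of length m (kmacs output).
    Smooth, find the second ('homologous') peak, apply Eq. (19) + Jukes-Cantor."""
    half = w // 2
    sm = [sum(counts[max(0, m - half):m + half + 1]) / w
          for m in range(len(counts))]
    peaks = [m for m in range(1, len(sm) - 1)
             if sm[m - 1] < sm[m] >= sm[m + 1]]          # local maxima
    if not peaks:
        return None
    m_bg = max(peaks, key=lambda m: sm[m])               # background peak
    # the homologous peak sits at larger m (p > q) and is much smaller
    candidates = [m for m in peaks
                  if m > m_bg and sm[m] <= sm[m_bg] / 10]
    if not candidates:
        return None                                      # no second peak found
    m_e = max(candidates, key=lambda m: sm[m])           # homologous peak
    p_hat = (m_e + 1 - k) / (m_e + 1)
    return -0.75 * math.log(1 - (4.0 / 3.0) * (1 - p_hat))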
We should mention that our algorithm is not always able to output a distance value for two input sequences. It is possible that the algorithm fails to find a second maximum in the length distribution of the k-mismatch common substrings. This can happen, for example, for distantly related sequences where the 'homologue' and the 'background' peak are too close together such that the 'homologous' peak is obscured by the 'background' peak, see Fig. 5 for an example. In this case no distance can be calculated by our algorithm.
To evaluate our approach, we used simulated and real-world genome sequences. As a first set of test data, we generated pairs of simulated DNA sequences with varying evolutionary distances and compared the distances estimated with our algorithm—i.e. the estimated number of substitutions per position—to their 'real' distances. For each distance value, we generated 100 pairs of sequences of length 500 kb each and calculated the average and standard deviation of the estimated distance values. Figure 6 shows the results of these test runs with a parameter \(k=90\) and a smoothing window size of \(w=31\), with error bars representing standard deviations. A program run on a pair of sequences of length 500 kb took less than a second.
Estimated distances—i.e. estimated average number of substitutions per position—for simulated sequence pairs, plotted against the 'real' distances—i.e. substitution probabilities used in the simulations, for pairs of sequences of length \(L=500\) kb. We applied our own approach with parameters \(k=90\) and \(w=31\) (top) as well as Filtered Spaced Word Matches (middle) and andi (bottom)
Figure 4 shows the length distribution for one of these sequence pairs with various values for w. In Fig. 6, the results are reported for a given distance value, if distances could be computed for at least 75 out of the 100 sequence pairs (as mentioned above, it is possible that our program does not output a distance value since no second maximum could be found in the length distribution of the k-mismatch common substrings). As can be seen in the figure, our approach accurately estimates evolutionary distances up to around 0.9 substitutions per position. For larger distances, the program did not return a sufficient number of distance values, so no results are reported here. To demonstrate the influence of the parameter k, we plotted in Fig. 5, for a given set of parameters, the expected number of k-mismatch common substring extensions of length m, calculated with Eq. (17), using varying values of k.
Evaluation of alignment-free methods for phylogeny reconstruction. Various methods were evaluated on a set of 27 primate mitochondrial genomes. Robinson-Foulds distances (top) and branch scores (bottom) were calculated to measure the difference between the resulting trees and a reference tree obtained with Clustal \(\Omega\) and Neighbour Joining
As a real-world test case, we used a set of 27 mitochondrial genomes from primates that has been used as benchmark data in previous studies on alignment-free sequence comparison. We applied our method with different values of k and with different window lengths w for the smoothing. In addition, we ran the programs andi [25] and our previously published program Filtered Spaced-Word Matches (FSWM) [29] on these data. As a reference tree, we used a tree calculated with Clustal \(\Omega\) [31] and Neighbour Joining [32]. To compare the produced trees with this reference tree, we used the Robinson-Foulds distance [33] and the branch score distance [34] as implemented in the PHYLIP program package [35]. Figure 7 shows the performance of our approach with different parameter values and compares them to the results of andi and FSWM. For the parameter values shown in the figure, our program was able to calculate distances for all \({27 \atopwithdelims ()2}=351\) pairs of sequences. The total run time to calculate the 351 distance values for the 27 mitochondrial genomes was less than 6 s. Note that the time and memory consumption of our approach essentially depend on kmacs; the scripts that process the output of kmacs are negligible. For a discussion of the time and space complexity of our software, we therefore refer to our previous paper on kmacs [13].
In this paper, we introduced a new way of estimating phylogenetic distances between genomic sequences. We showed that the average number of substitutions per position since two sequences have separated from their last common ancestor can be accurately estimated from the position of a local maximum in the smoothed length distribution of k-mismatch common substrings. To find this local maximum, we used a naive search procedure. Two parameter values have to be specified in our approach, the number k of mismatches and the size w of the smoothing window for the length distribution. Table 1 shows that our distance estimates are reasonably stable for a range of values of k and w.
Table 1 Distance values calculated with our algorithm for a pair of simulated sequences of length \(L=500\) kb with a match rate of \(p=0.5\), corresponding to a distance of 0.824 substitutions per position
A suitable value of the parameter k is important to separate the 'homologous' peak from the 'background' peak in the length distribution of the k-mismatch common substrings. As follows from Theorem 2, the distance between these two peaks is proportional to k. The value of k must be large enough to ensure that the homologous peak has a sufficient distance to the background peak to be detectable, see Fig. 5. On the other hand, k should not be too large. All considerations in this paper are based on the assumption that k-mismatch common substrings are either homologous or background, which is the case under our indel-free model of sequence evolution. For sequences with insertions and deletions, however, an un-gapped segment pair may contain both homologous and background regions if it involves indels. If k is large, k-mismatch common substrings tend to be long, and 'mixed' k-mismatch common substrings, including both background and homologous segments, will distort our distance estimates. This seems to be the reason why in Fig. 7 our results deteriorate if k becomes too large. One possible solution to this problem would be to recognize 'mixed' k-mismatch common substrings by the distribution of their mismatches and to exclude them from the length statistics. This might allow us to increase k without running into the above-mentioned problems, so one could achieve a better separation of 'background' and 'homologous' peaks. We are planning to investigate the effect of indels on our approach in a subsequent study.
Specifying a suitable size w of the smoothing window is also important to obtain accurate distance estimates; a large enough window is necessary to avoid ending up in a local maximum of the raw length distribution. For the data shown in Fig. 4, for example, our approach finds the second maximum of the length distribution at 179 if a window width of \(w=31\) is chosen. From this value, the match probability p is estimated as
$$\begin{aligned} \hat{p} = \frac{179+1-90}{179+1} = 0.5 \end{aligned}$$
using Eq. (19), corresponding to 0.824 substitutions per position according to the Jukes-Cantor formula. This was exactly the value that we used to generate this pair of sequences. With window lengths of \(w=21\) and \(w=1\) (no smoothing at all), however, the second local maxima of the length distribution would be found at 181 and 171, respectively, leading to estimates of 0.808 (\(w=21\)) and 0.897 (\(w=1\)) substitutions per position. If the width w of the smoothing window is too large, on the other hand, the second peak may be obscured by the first 'background' peak. In this case, no peak is found and no distance can be calculated. In Fig. 4, for example, this happens if a window width of \(w=51\) is used. Further studies are necessary to find out suitable values for w and k, depending on the length of the input sequences.
Finally, we should say that we used a rather naive way to identify possible homologies that are then extended to find k-mismatch common substrings. As becomes obvious from the size of the homologous and background peaks in our plots, our approach finds far more background matches than homologous matches. Reducing the noise of background matches should help to find the position of the homologous peak in the length distributions. We will therefore explore alternative ways to find possible homologies that can be used as starting points for k-mismatch common substrings.
Vinga S. Editorial: Alignment-free methods in computational biology. Brief Bioinform. 2014;15:341–2.
Höhl M, Rigoutsos I, Ragan MA. Pattern-based phylogenetic distance estimation and tree reconstruction. Evol Bioinform Online. 2006;2:359–75.
Sims GE, Jun S-R, Wu GA, Kim S-H. Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc Natl Acad Sci USA. 2009;106:2677–82.
Chor B, Horn D, Levy Y, Goldman N, Massingham T. Genomic DNA \(k\)-mer spectra: models and modalities. Genome Biol. 2009;10:108.
Vinga S, Carvalho AM, Francisco AP, Russo LMS, Almeida JS. Pattern matching through Chaos Game Representation: bridging numerical and discrete data structures for biological sequence analysis. Algorithms Mol Biol. 2012;7:10.
Leimeister C-A, Boden M, Horwege S, Lindner S, Morgenstern B. Fast alignment-free sequence comparison using spaced-word frequencies. Bioinformatics. 2014;30:1991–9.
Morgenstern B, Zhu B, Horwege S, Leimeister C-A. Estimating evolutionary distances between genomic sequences from spaced-word matches. Algorithms Mol Biol. 2015;10:5.
Hahn L, Leimeister C-A, Ounit R, Lonardi S, Morgenstern B. Rasbhari: optimizing spaced seeds for database searching, read mapping and alignment-free sequence comparison. PLOS Comput Biol. 2016;12(10):1005107.
Noé L. Best hits of 11110110111: model-free selection and parameter-free sensitivity calculation of spaced seeds. Algorithms Mol Biol. 2017;12:1.
Chang WI, Lawler EL. Sublinear approximate string matching and biological applications. Algorithmica. 1994;12:327–44.
Ulitsky I, Burstein D, Tuller T, Chor B. The average common substring approach to phylogenomic reconstruction. J Comput Biol. 2006;13:336–50.
Comin M, Verzotto D. Alignment-free phylogeny of whole genomes using underlying subwords. Algorithms Mol Biol. 2012;7:34.
Leimeister C-A, Morgenstern B. kmacs: the \(k\)-mismatch average common substring approach to alignment-free sequence comparison. Bioinformatics. 2014;30:2000–8.
Aluru S, Apostolico A, Thankachan SV. Efficient alignment free sequence comparison with bounded mismatches. In: International conference on research in computational molecular biology; 2015. p. 1–12
Thankachan SV, Chockalingam SP, Liu Y, Apostolico A, Aluru S. ALFRED: a practical method for alignment-free distance computation. J Comput Biol. 2016;23:452–60.
Pizzi C. MissMax: alignment-free sequence comparison with mismatches through filtering and heuristics. Algorithms Mol Biol. 2016;11:6.
Thankachan SV, Apostolico A, Aluru S. A provably efficient algorithm for the \(k\)-mismatch average common substring problem. J Comput Biol. 2016;23:472–82.
Apostolico A, Guerra C, Landau GM, Pizzi C. Sequence similarity measures based on bounded hamming distance. Theor Comput Sci. 2016;638:76–90.
Thankachan SV, Chockalingam SP, Liu Y, Krishnan A, Aluru S. A greedy alignment-free distance estimator for phylogenetic inference. BMC Bioinform. 2017;18:238.
Petrillo UF, Guerra C, Pizzi C. A new distributed alignment-free approach to compare whole proteomes. Theor Comput Sci. 2017;698:100–12.
Haubold B, Pfaffelhuber P, Domazet-Loso M, Wiehe T. Estimating mutation distances from unaligned genomes. J Comput Biol. 2009;16:1487–500.
Haubold B, Pierstorff N, Möller F, Wiehe T. Genome comparison without alignment using shortest unique substrings. BMC Bioinform. 2005;6:123.
Haubold B, Wiehe T. How repetitive are genomes? BMC Bioinform. 2006;7:541.
Yi H, Jin L. Co-phylog: an assembly-free phylogenomic approach for closely related organisms. Nucleic Acids Res. 2013;41:75.
Haubold B, Klötzl F, Pfaffelhuber P. andi: Fast and accurate estimation of evolutionary distances between closely related genomes. Bioinformatics. 2015;31:1169–75.
Leimeister CA, Dencker T, Morgenstern B. Anchor points for genome alignment based on filtered spaced word matches. arXiv:1703.08792 [q-bio.GN]; 2017.
Gusfield D. Algorithms on strings, trees, and sequences: computer science and computational biology. Cambridge: Cambridge University Press; 1997.
Jukes TH, Cantor CR. Evolution of protein molecules. New York: Academy Press; 1969.
Leimeister C-A, Sohrabi-Jahromi S, Morgenstern B. Fast and accurate phylogeny reconstruction using filtered spaced-word matches. Bioinformatics. 2017;33:971–9.
Manber U, Myers G. Suffix arrays: a new method for on-line string searches. In: Proceedings of the first annual ACM-SIAM symposium on discrete algorithms SODA '90; 1990. p. 319–27.
Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, Lopez R, McWilliam H, Remmert M, Söding J, Thompson JD, Higgins DG. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol. 2011;7:539.
Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987;4:406–25.
Robinson D, Foulds L. Comparison of phylogenetic trees. Math Biosci. 1981;53:131–47.
Kuhner MK, Felsenstein J. A simulation comparison of phylogeny algorithms under equal and unequal evolutionary rates. Mol Biol Evol. 1994;11:459–68.
Felsenstein J. PHYLIP-phylogeny inference package (version 3.2). Cladistics. 1989;5:164–6.
BM conceived the approach, implemented the scripts to estimate phylogenetic distances from the lengths of the k-mismatch common substrings, did some of the program evaluation and wrote the manuscript. SS contributed to the program evaluation. CL adapted the program kmacs as described in the manuscript. All authors read and approved the final manuscript.
Our software is freely available under the GNU license at http://www.gobics.de/burkhard/lendis.tar.
The project was partially funded by the VW Foundation, project VWZN3157. We acknowledge support by the German Research Foundation and the Open Access Publication Funds of the Göttingen University.
Department of Bioinformatics, Institute of Microbiology and Genetics, University of Goettingen, Goldschmidtstr. 1, 37077, Göttingen, Germany
Burkhard Morgenstern, Svenja Schöbel & Chris-André Leimeister
Correspondence to Burkhard Morgenstern.
Morgenstern, B., Schöbel, S. & Leimeister, CA. Phylogeny reconstruction based on the length distribution of k-mismatch common substrings. Algorithms Mol Biol 12, 27 (2017). https://doi.org/10.1186/s13015-017-0118-8
Alignment-free
Kmacs
Average common substring
Existence and uniqueness results to positive solutions of integral boundary value problem for fractional q-derivatives
Furi Guo1,
Shugui Kang1 and
Fu Chen1 (corresponding author)
Advances in Difference Equations 2018, 2018:379
Received: 23 April 2018
Accepted: 4 September 2018
In this paper, we are interested in the existence and uniqueness of positive solutions for the integral boundary value problem with fractional q-derivative:
$$\begin{aligned} &D_{q}^{\alpha}u(t)+f\bigl(t,u(t),u(t)\bigr)+g\bigl(t,u(t) \bigr)=0, \quad 0< t< 1, \\ & u(0)=D_{q}u(0)=0, \qquad u(1)=\mu \int_{0}^{1}u(s)\,d_{q}s, \end{aligned}$$
where \(D_{q}^{\alpha}\) is the fractional q-derivative of Riemann–Liouville type, \(0< q<1\), \(2<\alpha\leq3 \), and μ is a parameter with \(0<\mu<[\alpha]_{q}\). By virtue of fixed point theorems for mixed monotone operators, we obtain some results on the existence and uniqueness of positive solutions.
Positive solution
Mixed monotone operator
Fractional q-difference equation
Existence and uniqueness
Fractional differential equations arise in many fields of science and engineering, such as physics, chemistry, mechanics, economics, and the biological sciences; see, for example, [1–6]. The q-difference calculus, or quantum calculus, is an old subject that was put forward by Jackson [7, 8]. The essential definitions and properties of q-difference calculus are collected in [9, 10]. Early developments in q-fractional calculus can be found in the papers by Al-Salam [11] and Agarwal [12]. In recent years, fractional q-difference equations have attracted increasing attention, and a number of works have considered the existence of positive solutions for nonlinear q-fractional boundary value problems [13–32]. For example, Ferreira [13] studied the existence of positive solutions to the fractional q-difference equation
$$ \textstyle\begin{cases} D_{q}^{\alpha}u(t)+ f(t,u(t))=0, \quad 0< t< 1, 1< \alpha\leq2,\\ u(0)=u(1)=0. \end{cases} $$
Ferreira [14] also considered the existence of positive solutions to the nonlinear q-difference boundary value problem
$$ \textstyle\begin{cases} D_{q}^{\alpha}u(t)+ f(t,u(t))=0, \quad 0< t< 1, 1< \alpha\leq3,\\ u(0)=D_{q}u(0)=0, \qquad D_{q}u(1)=\beta\geq0. \end{cases} $$
EI-Shahed and AI-Askar [15] studied the existence of a positive solution to the fractional q-difference equation
$$ \textstyle\begin{cases} {}_{c} D_{q}^{\alpha}u(t)+a(t)f(t)=0, \quad 0\leq t \leq1, 2< \alpha\leq 3,\\ u(0)=D_{q}^{2}(0)=0, \qquad\gamma D_{q} u(1)+ \beta D_{q}^{2} u(1)=0, \end{cases} $$
where \(\gamma, \beta \leqslant0\), and \({}_{c} D_{q}^{\alpha}\) is the fractional q-derivative of Caputo type.
Darzi and Agheli [16] studied the existence of a positive solution to the fractional q-difference equation
$$ \textstyle\begin{cases} D_{q}^{\alpha}u(t)+a(t)f(t)=0, \quad 0\leq t \leq1, 3< \alpha\leq4,\\ u(0)=D_{q}u(0)=D_{q}^{2}u(0)=0, \qquad D_{q}^{2} u(1)= \beta D_{q}^{2} u(\eta), \end{cases} $$
where \(0 <\eta<1\) and \(1-\beta\eta^{\alpha-3}>0 \).
The methods used in the papers mentioned above are mainly the Krasnoselskii fixed point theorem, the Schauder fixed point theorem, the Leggett–Williams fixed point theorem, and so on. In contrast to these methods, and motivated by the works [17, 18, 26], we use fixed point theorems for mixed monotone operators to establish the existence and uniqueness of positive solutions for integral boundary value problems of the form
$$ \textstyle\begin{cases} D_{q}^{\alpha}u(t)+f(t,u(t),u(t))+g(t,u(t))=0, \quad 0< t< 1,\\ u(0)=D_{q}u(0)=0, \qquad u(1)=\mu\int_{0}^{1}u(s)\,d_{q}s, \end{cases} $$
where \(D_{q}^{\alpha}\) is the fractional q-derivative of Riemann–Liouville type, \(0< q<1\), \(2<\alpha\leq3\), \(0<\mu<[\alpha]_{q}\). Our results ensure the existence of a unique positive solution. Moreover, an iterative scheme is constructed for approximating the solution. As far as we know, there are still very few works utilizing the fixed point results for mixed monotone operators to study the existence and uniqueness of a positive solution for fractional q-derivative integral boundary value problems.
The plan of the paper is as follows. In Sect. 2, we give the basic definitions of the q-fractional integral and derivative, as well as some properties of the associated Green's function, which play a fundamental role in the proofs. In Sect. 3, under suitable sufficient conditions, we obtain results on the existence and uniqueness of positive solutions to problem (1.5). Finally, in Sect. 4, two examples are given to demonstrate the applicability of our main results.
2 Preliminaries
For the convenience of the reader, we recall some well-known facts on q-calculus, together with some notation and lemmas that will be used in the proofs of our theorems.
A nonempty closed convex set \(P\subset E\) is a cone if (1) \(x\in P, r\geq0\Rightarrow r x\in P\) and (2) \(x\in P,-x\in P\Rightarrow x=\theta\) (θ is the zero element of E), where \((E, \|\cdot\| )\) is a real Banach space. For all \(x,y\in E\), if there exist \(\mu,\nu >0 \) such that \(\mu x\leq y\leq \nu x\), then we write \(x\sim y\). Obviously, ∼ is an equivalence relation. Let \(P_{h}=\{x\in E| x\sim h, h> \theta\}\).
Let \({q}\in(0,1)\). Then the q-number is given by
$$[a]_{q}= \frac{1-q^{a}}{1-q},\quad a\in R. $$
The q-analogue of the power function \((a-b)^{(n)}\) with \(n\in N_{0}\) is
$$(a-b)^{(0)}=1, \qquad(a-b)^{(n)}=\prod _{k=0} ^{n-1} \bigl(a-bq^{k}\bigr),\quad n\in N, a,b \in R. $$
More generally, if \(\alpha\in R\), then
$$(a-b)^{(\alpha)}=a^{\alpha} \prod_{k=0}^{\infty}\frac {a-bq^{k}}{a-bq^{\alpha+k}}, \quad\alpha\neq0. $$
Note that if \(b=0\), then \(a^{(\alpha)}=a^{\alpha}\). The q-gamma function is defined by
$$\Gamma_{q} (x)= \frac{(1-q)^{(x-1)}}{(1-q)^{x-1}}, \quad x \in R^{+}, $$
and satisfies \(\Gamma_{q} (x+1)=[x]_{q}\Gamma_{q} (x)\).
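These definitions are directly computable. The following Python sketch (purely illustrative and not part of the original paper; the helper names and the truncation length of the infinite product are our own choices) implements the q-number, the q-power \((a-b)^{(\alpha)}\), and \(\Gamma_{q}\), and verifies the recurrence \(\Gamma_{q} (x+1)=[x]_{q}\Gamma_{q} (x)\) numerically:

```python
import numpy as np

def q_number(a, q):
    # [a]_q = (1 - q**a) / (1 - q)
    return (1.0 - q**a) / (1.0 - q)

def q_power(a, b, alpha, q, terms=200):
    # (a - b)^(alpha) = a**alpha * prod_{k>=0} (a - b*q**k) / (a - b*q**(alpha + k));
    # for 0 < q < 1 the tail factors tend to 1, so truncating the product is accurate
    k = np.arange(terms)
    return a**alpha * np.prod((a - b * q**k) / (a - b * q**(alpha + k)))

def q_gamma(x, q, terms=200):
    # Gamma_q(x) = (1 - q)^((x - 1)) / (1 - q)**(x - 1)
    return q_power(1.0, q, x - 1.0, q, terms) / (1.0 - q)**(x - 1.0)

q, x = 0.5, 2.5
# sanity check of the recurrence Gamma_q(x + 1) = [x]_q * Gamma_q(x)
print(np.isclose(q_gamma(x + 1.0, q), q_number(x, q) * q_gamma(x, q)))  # True
```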
The q-derivative of a function f is defined by
$$(D_{q} f) (x)=\frac{f(qx)-f(x)}{(q-1)x}, \qquad(D_{q} f) (0)= \lim _{x \rightarrow0} (D_{q} f) (x), $$
and q-derivatives of higher order by
$$\bigl(D_{q}^{0} f\bigr) (x)=f(x), \qquad \bigl(D_{q}^{n} f\bigr) (x)=D_{q} \bigl(D_{q}^{n-1} f\bigr) (x), \quad n\in N. $$
The q-integral of a function f defined in the interval \([0,b]\) is given by
$$(I_{q} f) (x)= \int_{0} ^{x} f(s) \,d_{q}s = x(1-q)\sum _{k=0} ^{\infty}f\bigl(xq^{k} \bigr)q^{k},\quad x \in[0,b]. $$
If \(a \in[0,b]\) and f is defined in the interval \([0,b]\), then its integral from a to b is defined by
$$\int_{a} ^{b} f(s) \,d_{q}s= \int_{0} ^{b} f(s) \,d_{q}s - \int_{0} ^{a} f(s) \,d_{q}s. $$
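The q-derivative and the Jackson q-integral can be coded in the same spirit; the sketch below (again our own illustration, with the Jackson series truncated) checks the fundamental-theorem identity \((D_{q} I_{q} f)(x)=f(x)\) on a sample function:

```python
import numpy as np

def q_derivative(f, x, q):
    # (D_q f)(x) = (f(q x) - f(x)) / ((q - 1) x); at x = 0, approximate the limit
    if x == 0.0:
        x = 1e-8
    return (f(q * x) - f(x)) / ((q - 1.0) * x)

def q_integral(f, x, q, terms=200):
    # Jackson integral: (I_q f)(x) = x (1 - q) sum_{k>=0} f(x q^k) q^k (truncated)
    k = np.arange(terms)
    return x * (1.0 - q) * np.sum(f(x * q**k) * q**k)

q = 0.5
f = lambda t: t**2 + 1.0
F = lambda x: q_integral(f, x, q)
print(np.isclose(q_derivative(F, 0.7, q), f(0.7)))  # True: (D_q I_q f)(x) = f(x)
```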
Similarly to the derivatives, the operator \(I_{q}^{n}\) is given by
$$\bigl(I_{q}^{0} f\bigr) (x)=f(x),\qquad\bigl(I_{q}^{n} f\bigr) (x)=I_{q}\bigl(I_{q}^{n-1} f\bigr) (x), \quad n \in N. $$
The fundamental theorem of calculus applies to the operators \(I_{q}\) and \(D_{q}\), that is,
$$(D_{q} I_{q} f) (x)=f(x), $$
and if f is continuous at \(x=0\), then
$$(I_{q} D_{q} f) (x)=f(x)-f(0). $$
The following formulas will be used later (\({}_{t}D_{q}\) denotes the derivative with respect to variable t):
$$\begin{gathered} {}_{t}D_{q}(t-s)^{(\alpha)}=[\alpha]_{q} (t-s)^{(\alpha-1)}, \\ \biggl({}_{x} D_{q} \int_{0} ^{x} f(x,t)\,d_{q}t \biggr) (x) = \int_{0}^{x} {}_{x}D_{q} f(x,t)\,d_{q}t +f(qx,x). \end{gathered} $$
Definition 2.1
(see [4])
Let \(\alpha\geq0\), and let f be a function defined on \([0,1]\). The fractional q-integral of the Riemann–Liouville type is defined by \((I_{q}^{0} f)(x)=f(x)\) and
$$\bigl(I_{q}^{\alpha} f\bigr) (x)= \frac{1}{\Gamma_{q} (\alpha)} \int_{0} ^{x} (x-qt)^{(\alpha-1)} f(t) \,d_{q}t, \quad\alpha>0, x \in[0,1]. $$
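Numerically, Definition 2.1 reduces to a weighted Jackson sum; the sketch below (our own, reusing q_power and q_gamma from the sketches above) evaluates \(I_{q}^{\alpha}\) and checks it against the known identity \((I_{q}^{\alpha}1)(x)=x^{\alpha}/\Gamma_{q}(\alpha+1)\):

```python
import numpy as np
# reuses q_power and q_gamma from the earlier sketch

def frac_q_integral(f, x, alpha, q, terms=200):
    # (I_q^alpha f)(x) = 1/Gamma_q(alpha) * int_0^x (x - q t)^((alpha-1)) f(t) d_q t,
    # with the q-integral expanded as a truncated Jackson sum over t = x q^k
    k = np.arange(terms)
    t = x * q**k
    kern = np.array([q_power(x, q * ti, alpha - 1.0, q) for ti in t])
    return x * (1.0 - q) * np.sum(kern * f(t) * q**k) / q_gamma(alpha, q)

q, alpha, x = 0.5, 2.5, 0.8
one = lambda t: np.ones_like(t)
# sanity check against the closed form (I_q^alpha 1)(x) = x^alpha / Gamma_q(alpha + 1)
print(np.isclose(frac_q_integral(one, x, alpha, q), x**alpha / q_gamma(alpha + 1.0, q)))
```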
Definition 2.2
(see [10])
The fractional q-derivative of the Riemann–Liouville type is defined by
$$\bigl(D_{q}^{0} f\bigr) (x)=f(x), \qquad \bigl(D_{q}^{\alpha} f\bigr) (x)= \bigl(D_{q}^{p} I_{q}^{p-\alpha }f\bigr) (x), \quad\alpha>0, $$
where p is the smallest integer greater than or equal to α.
Lemma 2.1
Let \(\alpha, \beta\geq0\), and let f be a function defined on \([0,1]\). Then the following formulas hold:
\((I_{q}^{\beta}I_{q}^{\alpha}f)(x)=(I_{q}^{\beta+\alpha} f)(x)\),
\((D_{q}^{\alpha}I_{q}^{\alpha}f)(x)=f(x)\).
Lemma 2.2
Let \(\alpha>0\), and let p be a positive integer. Then the following equality holds:
$$\bigl(I_{q}^{\alpha}D_{q}^{p} f\bigr) (x)= \bigl(D_{q}^{p} I_{q}^{\alpha}f \bigr) (x) - \sum_{k=0} ^{p-1} \frac{x^{\alpha-p+k}}{\Gamma_{q} (\alpha+k-p+1)} \bigl(D_{q}^{k} f\bigr) (0). $$
Lemma 2.3
Let \(2<\alpha\leq3\) and \(0<\mu<[\alpha]_{q}\), and let \(x\in C[0,1]\). Then the boundary value problem
$$\begin{aligned} &D_{q}^{\alpha}u(t)+ x(t)=0, \quad 0< t< 1, \\ \end{aligned}$$
$$\begin{aligned} &u(0)=D_{q}u(0)=0, \qquad u(1)=\mu \int_{0}^{1}u(s)\,d_{q}s, \end{aligned}$$
has a unique solution
$$u(t)= \int_{0} ^{1} G(t,qs)x(s)\,d_{q}s, $$
$$ G(t,s)= \textstyle\begin{cases} \frac{t^{\alpha-1}(1-s)^{(\alpha-1)} ([\alpha]_{q}-\mu+\mu q^{\alpha-1}s )- ([\alpha]_{q}-\mu )(t-s)^{\alpha -1}}{ ([\alpha]_{q}-\mu )\Gamma_{q} (\alpha)},& 0 \leq s\leq t \leq1,\\ \frac{t^{\alpha-1}(1-s)^{(\alpha-1)} ([\alpha]_{q}-\mu+\mu q^{\alpha-1}s )}{ ([\alpha]_{q}-\mu )\Gamma_{q} (\alpha)},& 0 \leq t\leq s\leq1. \end{cases} $$
Lemma 2.4
The function \(G(t,qs)\) defined by (2.3) has the following properties:
\(G(t,qs)\) is a continuous function and \(G(t,qs) \geq0\);
\(\frac{\mu q^{\alpha}t^{\alpha-1}(1-qs)^{(\alpha-1)}s}{ ([\alpha]_{q}-\mu )\Gamma_{q} (\alpha)} \leq G(t,qs) \leq\frac {M_{0} t^{\alpha-1}}{ ([\alpha]_{q}-\mu )\Gamma_{q} (\alpha )}\), \(t, s \in[0,1]\),
where \(M_{0}=\max \{ [\alpha-1]_{q} ([\alpha]_{q}-\mu)+\mu q^{\alpha}, q^{\alpha-1} [\alpha]_{q} \}\).
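To make Lemmas 2.3 and 2.4 concrete, the Green's function (2.3) can be implemented and its bounds spot-checked on a grid. In the sketch below (our own; note that we read the power \((t-s)^{\alpha-1}\) in (2.3) as the q-power \((t-s)^{(\alpha-1)}\), consistent with the notation of this section), we test property (2) of Lemma 2.4 numerically:

```python
import numpy as np
# reuses q_number, q_power, q_gamma from the earlier sketches

def green(t, s, alpha, mu, q):
    # Green's function G(t, s) of (2.3); the case split is s <= t versus t <= s
    c = (q_number(alpha, q) - mu) * q_gamma(alpha, q)
    g = t**(alpha - 1.0) * q_power(1.0, s, alpha - 1.0, q) \
        * (q_number(alpha, q) - mu + mu * q**(alpha - 1.0) * s)
    if s <= t:
        g -= (q_number(alpha, q) - mu) * q_power(t, s, alpha - 1.0, q)
    return g / c

alpha, mu, q = 2.5, 0.5, 0.5
c = (q_number(alpha, q) - mu) * q_gamma(alpha, q)
M0 = max(q_number(alpha - 1.0, q) * (q_number(alpha, q) - mu) + mu * q**alpha,
         q**(alpha - 1.0) * q_number(alpha, q))
ok = True
for t in np.linspace(0.05, 1.0, 20):
    for s in np.linspace(0.05, 1.0, 20):
        g = green(t, q * s, alpha, mu, q)
        low = mu * q**alpha * t**(alpha - 1.0) * q_power(1.0, q * s, alpha - 1.0, q) * s / c
        ok &= (low - 1e-9 <= g <= M0 * t**(alpha - 1.0) / c + 1e-9)
print("Lemma 2.4(2) bounds hold on the grid:", ok)
```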
An operator \(A: P \times P \rightarrow P\) is said to be a mixed monotone operator if \(A(x, y)\) is increasing in x and decreasing in y, that is, \(x_{i}, y_{i}\in P\) (\(i = 1, 2\)), \(x_{1} \leq x_{2}\), \(y_{1}\geq y_{2}\) imply \(A(x_{1}, y_{1})\leq A(x_{2}, y_{2})\). An element \(x\in P\) is called a fixed point of A if \(A(x, x) = x\).
An operator \(A: P\rightarrow P \) is said to be subhomogeneous if
$$\begin{aligned} A (tx)\geq t A(x)\quad \mbox{for any } t\in(0,1), x\in P. \end{aligned}$$
Let \(D = P \), and let γ be a real number with \(0\leq\gamma< 1 \). An operator \(A:D\rightarrow D\) is said to be γ-concave if it satisfies
$$\begin{aligned} A(tx)\geq t^{\gamma}A(x)\quad \mbox{for any } t\in(0,1),x\in D. \end{aligned}$$
Lemma 2.5
Let \(h > \theta\) and \(\gamma\in(0,1)\).
Let \(A: P \times P \rightarrow P\) be a mixed monotone operator satisfying
$$\begin{aligned} A\bigl(tx,t^{-1}y\bigr)\geq t^{\gamma}A(x,y) \quad \textit{for any } t \in(0,1), x,y \in P, \end{aligned}$$
and let \(B:P \rightarrow P \) be an increasing subhomogeneous operator. Assume that
(i) there is \(h_{0} \in P_{h} \) such that \(A(h_{0},h_{0})\in P_{h}\) and \(B h_{0} \in P_{h} \);
(ii) there exists a constant \(\delta_{0}\) such that \(A(x,y )\geq\delta_{0} B x\) for any \(x,y \in P \).
Then:
\(A: P_{h} \times P_{h} \rightarrow P_{h} \) and \(B:P_{h} \rightarrow P_{h}\);
there exist \(u_{0},v_{0} \in P_{h}\) and \(r \in(0,1)\) such that
$$r v_{0} \leq u_{0} < v_{0}, \qquad u_{0} \leq A(u_{0},v_{0})+ B u_{0} \leq A ( v_{0},u_{0})+ B v_{0}\leq v_{0}; $$
the operator equation \(A(x,x )+ B x =x \) has a unique solution \(x^{*} \in P_{h}\);
for any initial values \(x_{0},y_{0} \in P_{h}\), constructing successively the sequences
$$x_{n} = A(x_{n-1},y_{n-1})+ Bx_{n-1}, \qquad y_{n} = A(y_{n-1},x_{n-1})+ B y_{n-1}, \quad n=1,2,\ldots, $$
we have \(x_{n}\rightarrow x^{*}\) and \(y_{n}\rightarrow x^{*} \) as \(n \rightarrow\infty\).
Remark 2.1
When \(B= \theta\) in Lemma 2.5, then the corresponding conclusion still holds.
Lemma 2.6
Let \(A:P \times P \rightarrow P\) be a mixed monotone operator satisfying
$$\begin{aligned} A\bigl(tx,t^{-1}y\bigr)\geq t A(x,y), \quad \textit{for any } t \in(0,1), x, y \in P, \end{aligned}$$
and let \(B:P \rightarrow P \) be an increasing γ-concave operator. Assume that
(i) there is \(h_{0} \in P_{h} \) such that \(A(h_{0},h_{0})\in P_{h}\) and \(B h_{0} \in P_{h} \);
(ii) there exists a constant \(\delta_{0}\) such that \(A(x,y )\leq\delta_{0} B x\) for any \(x,y \in P \).
Then:
there exist \(u_{0},v_{0} \in P_{h}\) and \(r \in(0,1)\) such that
$$r v_{0} \leq u_{0} < v_{0}, \qquad u_{0} \leq A(u_{0},v_{0})+ B u_{0} \leq A( v_{0},u_{0})+ B v_{0}\leq v_{0}; $$
the operator equation \(A (x,x )+ B x =x \) has a unique solution \(x^{*} \in P_{h}\);
for any initial values \(x_{0},y_{0} \in P_{h}\), constructing successively the sequences
$$x_{n} = A (x_{n-1},y_{n-1})+ Bx_{n-1}, \qquad y_{n} =A(y_{n-1},x_{n-1})+B y_{n-1}, \quad n=1,2,\ldots, $$
we have \(x_{n}\rightarrow x^{*}\) and \(y_{n}\rightarrow x^{*} \) as \(n \rightarrow\infty\).
Remark 2.2
When \(A= \theta\) in Lemma 2.6, then the corresponding conclusion still holds.
3 Main results
In this section, we state and prove our main results by applying Lemmas 2.5 and 2.6. We consider the Banach space \(X=C[0,1]\) endowed with the standard norm \(\|x\|=\sup\{|x(t)|:{t\in[0,1]}\}\). Clearly, this space can be equipped with a partial order given by
$$x,y \in C[0,1], \quad x\leq y \quad \Leftrightarrow \quad x(t)\leq y(t) \quad \mbox{for }t \in[0,1]. $$
We define the cone \(P=\{x\in X:x(t) \geq 0,t\in[0,1]\}\). Notice that P is a normal cone in \(C[0,1]\) and the normality constant is 1.
Theorem 3.1
Suppose that
\((F_{1})\) :
a function \(f(t,x,y):[0,1]\times[0, +\infty)\times[0,+\infty)\rightarrow[0,+\infty)\) is continuous, increasing with respect to the second variable, and decreasing with respect to the third variable;
\((F_{2})\) :
a function \(g(t,x): [0,1]\times[0, +\infty)\rightarrow [0,+\infty) \) is continuous and increasing with respect to the second variable;
\((F_{3})\) :
there exists a constant \(\gamma\in(0,1)\) such that \(f(t,\lambda x,\lambda^{-1} y)\geq\lambda^{\gamma}f(t,x,y)\) for any \(t \in[0,1]\), \(\lambda\in(0,1)\), \(x, y \in [0,+\infty)\), and \(g(t,\lambda x)\geq\lambda g(t,x)\) for \(\lambda\in(0,1)\), \(t\in[0,1]\), \(x\in[0, +\infty)\), and \(g(t,0)\not\equiv0 \);
\((F_{4})\) :
there exists a constant \(\delta_{0} > 0\) such that \(f(t,x,y)\geq\delta_{0} g(t,x)\), \(t\in[0,1]\), \(x,y\geq0\).
Then:
there exist \(x_{0},y_{0}\in P_{h}\) and \(r\in(0,1)\) such that \(r y_{0}\leq x_{0} < y_{0}\) and
$$\begin{gathered} x_{0}\leq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,x_{0}(s),y_{0}(s) \bigr)+g\bigl(s,x_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \\ y_{0}\geq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,y_{0}(s),y_{0}(s) \bigr)+g\bigl(s,y_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \end{gathered} $$
where \(G(t,qs)\) is defined by (2.3), and \(h(t)= t^{\alpha-1}\), \(t \in[0,1]\);
the boundary value problem (1.5) has a unique positive solution \(u^{*} \) in \(P_{h}\), and for any \(x_{0},y_{0} \in P_{h}\), constructing successively the sequences
$$\begin{gathered} x_{n+1}= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,x_{n}(s),y_{n}(s) \bigr)+g\bigl(s,x_{n}(s)\bigr)\bigr]\,d_{q}s,\quad n=0,1,2,\ldots, \\ y_{n+1}= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,y_{n}(s),x_{n}(s) \bigr)+g\bigl(s,y_{n}(s)\bigr)\bigr]\,d_{q}s,\quad n=0,1,2,\ldots, \end{gathered} $$
we have \(\|x_{n}-u^{*}\|\rightarrow0\) and \(\|y_{n}-u^{*}\| \rightarrow0\) as \(n \rightarrow\infty\).
Proof
We note that if u is a solution of boundary value problem (1.5), then
$$\begin{aligned} u(t)= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,u(s),u(s)\bigr)+g \bigl(s,u(s)\bigr)\bigr]\,d_{q}s, \quad0 \leq t \leq1. \end{aligned}$$
Define two operators \(T_{1}:P\times P \rightarrow E\) and \(T_{2}: P \rightarrow E \) by
$$\begin{aligned} \begin{aligned} &T_{1}(u,v) (t)= \int_{0}^{1} G(t,qs)f\bigl(s,u(s),v(s) \bigr)\,d_{q}s, \\ &(T_{2}u) (t)= \int _{0}^{1} G(t,qs)g\bigl(s,u(s) \bigr)\,d_{q}s. \end{aligned} \end{aligned}$$
We transform the boundary value problem (1.5) into a fixed point problem \(u = T_{1}(u,u)+ T_{2} u\). From \((F_{1})\), \((F_{2})\), and Lemma 2.4 it is easy to see that \(T_{1}: P\times P \rightarrow P\) and \(T_{2}: P \rightarrow P\). Next, we want to prove that \(T_{1}\) and \(T_{2}\) satisfy the conditions of Lemma 2.5.
To begin with, we prove that \(T_{1}\) is a mixed monotone operator. In fact, for \(u_{1},u_{2},v_{1},v_{2} \in P\) with \(u_{1}\geq u_{2}\) and \(v_{1}\leq v_{2}\), it is easy to see that \(u_{1}(t) \geq u_{2}(t)\), \(v_{1}(t)\leq v_{2}(t)\), \(t \in[0,1]\), and by Lemma 2.4 and \((F_{1})\),
$$\begin{aligned} T_{1}(u_{1},v_{1}) (t)&= \int_{0}^{1} G(t,qs)f\bigl(s,u_{1}(s),v_{1}(s) \bigr)\,d_{q}s \\ &\geq \int_{0}^{1} G(t,qs)f\bigl(s,u_{2}(s),v_{2}(s) \bigr)\,d_{q}s =T_{1}(u_{2},v_{2}) (t). \end{aligned}$$
For any \(\lambda\in(0,1)\) and \(u,v \in P \), by \((F_{3}) \) we have
$$ \begin{aligned}[b] T_{1}\bigl(\lambda u, \lambda^{-1}v\bigr) (t)&= \int_{0}^{1}G(t,qs)f\bigl(s,\lambda u(s), \lambda^{-1}v(s)\bigr)\,d_{q}s \\ &\geq\lambda^{\gamma}\int_{0}^{1}G(t,qs)f\bigl(s, u(s),v(s) \bigr)\,d_{q}s \geq\lambda^{\gamma}T_{1}(u,v) (t). \end{aligned} $$
So, the operator \(T_{1}\) satisfies (2.6).
For any \(u_{1}(t)\geq u_{2}(t)\), \(t \in[0,1]\), from \(G(t,qs) \geq0\) and \((F_{2})\) we know that
$$T_{2}u_{1}(t)= \int_{0}^{1} G(t,qs)g\bigl(s,u_{1}(s) \bigr)\,d_{q}s \geq \int_{0}^{1} G(t,qs)g\bigl(s,u_{2}(s) \bigr)\,d_{q}s=T_{2}u_{2}(t). $$
So \(T_{2}\) is increasing. Further, for any \(\lambda\in(0,1)\) and \(u \in P\), from hypothesis \((F_{3})\) we get
$$\begin{aligned} T_{2}(\lambda u) (t)= \int_{0}^{1}G(t,qs)g\bigl(s,\lambda u(s) \bigr)\,d_{q}s\geq\lambda \int_{0}^{1}G(t,qs)g\bigl(s,u(s)\bigr)\,d_{q}s =\lambda T_{2}u(t), \end{aligned}$$
that is, the operator \(T_{2}\) is subhomogeneous. By \((F_{1})\) and Lemma 2.4, for any \(t \in[0,1]\), we have
$$ \begin{aligned}[b] T_{1}(h,h) (t)&= \int_{0}^{1}G(t,qs)f\bigl(s,h(s),h(s) \bigr)\,d_{q}s \\ &= \int_{0}^{1}G(t,qs)f\bigl(s, s^{\alpha-1},s^{\alpha-1} \bigr)\,d_{q}s \\ &\leq\frac{M_{0}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )}h(t) \int_{0}^{1} f(s,1,0)\,d_{q}s \end{aligned} $$
$$ \begin{aligned}[b] T_{1}(h,h) (t)&= \int_{0}^{1}G(t,qs)f\bigl(s,h(s),h(s) \bigr)\,d_{q}s \\ &= \int_{0}^{1}G(t,qs)f\bigl(s, s^{\alpha-1},s^{\alpha-1} \bigr)\,d_{q}s \\ &\geq\frac{\mu q^{\alpha}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )}h(t) \int_{0}^{1}s (1-qs)^{(\alpha-1)}f(s,0,1)\,d_{q}s. \end{aligned} $$
From \((F_{2})\) and \((F_{4})\) we have the inequality
$$f(s,1,0)\geq f(s,0,1)\geq\delta_{0} g(s,0)\geq0. $$
Since \(g(t,0)\not\equiv0 \), we also obtain
$$ \int_{0}^{1} f(s,1,0)\,d_{q}s\geq \int_{0}^{1} f(s,0,1)\,d_{q}s \geq \delta_{0} \int _{0}^{1} g(s,0)\,d_{q}s > 0. $$
$$\begin{aligned} &M_{1}=\frac{M_{0}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1} f(s,1,0)\,d_{q}s, \\ &M_{2}=\frac{\mu q^{\alpha}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1}s (1-qs)^{(\alpha-1)}f(s,0,1)\,d_{q}s, \\ &M_{3}=\frac{\mu q^{\alpha}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1}s (1-qs)^{(\alpha-1)}g(s,0)\,d_{q}s, \\ &M_{4}=\frac{M_{0} }{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1} g(s,1)\,d_{q}s. \end{aligned}$$
Thus we have \(M_{2}h(t)\leq T_{1}(h,h) \leq M_{1}h(t)\), \(M_{3}h(t)\leq T_{2}h\leq M_{4}h(t)\), \(t\in[0,1]\). So, \(T_{1}(h,h)\in P_{h}\). From \(g(t,0)\not\equiv0 \) it is easy to see that \(T_{2}h\in P_{h}\). So, there is \(h(t)=t^{\alpha-1}\in P_{h} \) such that \(T_{1}(h,h)\in P_{h}\) and \(T_{2}h \in P_{h} \).
Next, we prove that the operators \(T_{1}\) and \(T_{2}\) satisfy condition (ii) of Lemma 2.5. In fact, for \(u,v \in P \) and any \(t \in[0,1]\), by \((F_{4}) \) we have
$$ \begin{aligned}[b] T_{1}(u,v) (t)&= \int_{0}^{1}G(t,qs)f\bigl(s,u(s),v(s) \bigr)\,d_{q}s\\ &\geq\delta_{0} \int _{0}^{1}G(t,qs)g\bigl(s,u(s) \bigr)\,d_{q}s=\delta_{0} (T_{2}u) (t). \end{aligned} $$
Then we have \(T_{1}(u,v)\geq\delta_{0} T_{2} u\) for \(u,v\in P\). By Lemma 2.5 we can deduce: there exist \(u_{0},v_{0}\in P_{h}\) and \(r \in (0,1)\) such that \(rv_{0}\leq u_{0}\leq v_{0}\), \(u_{0}\leq T_{1}(u_{0},v_{0})+ T_{2} u_{0}\leq T_{1}(v_{0},u_{0})+T_{2}v_{0}\leq v_{0}\); the operator equation \(T_{1}(u,u)+T_{2}u=u\) has a unique solution \(u^{*} \in P_{h}\); and for any initial values \(x_{0}, y_{0} \in P_{h}\), constructing successively the sequences
$$x_{n} = T_{1} (x_{n-1},y_{n-1})+T_{2}x_{n-1}, \qquad y_{n} =T_{1}(y_{n-1},x_{n-1})+T_{2} y_{n-1},\quad n=1,2,\ldots, $$
we get \(x_{n}\rightarrow u^{*}\) and \(y_{n}\rightarrow u^{*} \) as \(n \rightarrow\infty\). We have the following two inequalities:
$$\begin{gathered} u_{0}(t)\leq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,u_{0}(s),v_{0}(s) \bigr)+g\bigl(s,u_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \\ v_{0}(t)\geq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,v_{0}(s),u_{0}(s) \bigr)+g\bigl(s,v_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1]. \end{gathered} $$
Thus problem (1.5) has a unique positive solution \(u^{*} \in P_{h}\); for any \(u_{0},v_{0} \in P_{h}\), constructing successively the sequences
$$\begin{gathered} x_{n+1}(t)= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,x_{n}(s),y_{n}(s) \bigr)+g\bigl(s,x_{n}(s)\bigr)\bigr]\,d_{q}s,\quad n=0,1,2, \ldots, \\ y_{n+1}(t)= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,y_{n}(s),x_{n}(s) \bigr)+g\bigl(s,y_{n}(s)\bigr)\bigr]\,d_{q}s,\quad n=0,1,2, \ldots, \end{gathered} $$
we have \(\|x_{n}-u^{*}\|\rightarrow0\) and \(\|y_{n}-u^{*}\|\rightarrow0 \) as \(n\rightarrow\infty\). □
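The constructive part of the theorem translates directly into a numerical scheme: starting from any \(x_{0},y_{0} \in P_{h}\), one iterates the two coupled integral operators. The sketch below is our own discretization (the grid, truncation length, and iteration count are arbitrary choices); it exploits the fact that the Jackson sum over \([0,1]\) samples the integrand only at the points \(q^{k}\), so the iteration closes on the q-grid:

```python
import numpy as np
# reuses green(t, s, alpha, mu, q) from the Green's function sketch above

def solve_bvp(f, g, alpha, mu, q, terms=60, iters=80):
    # Discretize on the q-grid t_i = q^i; the iteration of Theorem 3.1 is
    #   x_{n+1}(t) = int_0^1 G(t, qs) [f(s, x_n, y_n) + g(s, x_n)] d_q s
    tg = q**np.arange(terms)
    G = np.array([[green(ti, q * sk, alpha, mu, q) for sk in tg] for ti in tg])
    w = (1.0 - q) * tg                    # Jackson weights (1 - q) q^k
    x = 0.5 * tg**(alpha - 1.0)           # initial values in P_h, h(t) = t^(alpha - 1)
    y = 2.0 * tg**(alpha - 1.0)
    for _ in range(iters):
        x, y = (G @ (w * (f(tg, x, y) + g(tg, x))),
                G @ (w * (f(tg, y, x) + g(tg, y))))
    return tg, x, y
```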
Corollary 3.1
Suppose that f satisfies the conditions of Theorem 3.1 and \(g\equiv 0\), \(f(t,0,1)\not\equiv0\). Then:
there exist \(u_{0},v_{0} \in P_{h} \) and \(r\in(0,1)\) such that \(r v_{0} \leq u_{0} < v_{0}\), and
$$\begin{gathered} u_{0}(t)\leq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,u_{0}(s),v_{0}(s) \bigr)\bigr]\,d_{q}s, \quad t\in [0,1], \\ v_{0}(t)\geq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,v_{0}(s),u_{0}(s) \bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \end{gathered} $$
the BVP
$$ \textstyle\begin{cases} D_{q}^{\alpha}u(t)+ f(t,u(t),u(t))=0, \quad 0< t< 1, 2< \alpha\leq3,\\ u(0)=D_{q}u(0)=0, \qquad u(1)=\mu\int_{0}^{1}u(s)\,d_{q}s, \end{cases} $$
has a unique positive solution \(u^{*} \) in \(P_{h}\);
for any \(x_{0},y_{0} \in P_{h}\), the sequences
$$\begin{gathered} x_{n+1}= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,x_{n}(s),y_{n}(s) \bigr)\bigr]\,d_{q}s, \quad n=0,1,2,\ldots, \\ y_{n+1}= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,y_{n}(s),x_{n}(s) \bigr)\bigr]\,d_{q}s, \quad n=0,1,2,\ldots, \end{gathered} $$
satisfy \(\|x_{n}-u^{*}\|\rightarrow0\) and \(\|y_{n}-u^{*}\|\rightarrow0\) as \(n \rightarrow\infty\).
Theorem 3.2
Suppose that \((F_{1})\)–\((F_{2}) \) hold. In addition, suppose that f and g satisfy the following conditions:
\((F_{5})\) :
there exists a constant \(\gamma\in(0,1)\) such that \(g(t,\lambda u)\geq\lambda^{\gamma}g(t,u)\) for any \(t \in[0,1] \), \(\lambda\in(0,1)\), \(u \in[0, +\infty)\), and \(f(t,\lambda u,\lambda^{-1} v)\geq\lambda f(t,u,v)\) for \(\lambda\in (0,1)\), \(t\in[0,1]\), \(u,v\in[0,+\infty)\);
\((F_{6})\) :
\(f(t,0,1)\not\equiv0\) for \(t \in[0,1]\), and there exists a constant \(\delta_{0} > 0\) such that \(f(t,u,v)\leq\delta_{0} g(t,u)\), \(t\in[0,1]\), \(u,v \geq0\).
Then:
there exist \(u_{0},v_{0}\in P_{h}\) and \(r\in(0,1)\) such that \(r v_{0}\leq u_{0} < v_{0}\) and
$$\begin{gathered} u_{0}\leq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,u_{0}(s),v_{0}(s) \bigr)+g\bigl(s,u_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \\ v_{0}\geq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,v_{0}(s),u_{0}(s) \bigr)+g\bigl(s,v_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \end{gathered} $$
the boundary value problem (1.5) has a unique positive solution \(u^{*} \) in \(P_{h}\); and for any \(x_{0},y_{0} \in P_{h}\), the sequences
$$\begin{gathered} x_{n+1}= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,x_{n}(s),y_{n}(s) \bigr)+g\bigl(s,x_{n}(s)\bigr)\bigr]\,d_{q}s,\quad n=0,1,2, \ldots, \\ y_{n+1}= \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,y_{n}(s),x_{n}(s) \bigr)+g\bigl(s,y_{n}(s)\bigr)\bigr]\,d_{q}s,\quad n=0,1,2, \ldots, \end{gathered} $$
Proof
Similarly to the proof of Theorem 3.1, the operators \(T_{1}\) and \(T_{2} \) are given by (3.2). From \((F_{1})\) and \((F_{2})\) we know that \(T_{1}: P\times P \rightarrow P \) is a mixed monotone operator and \(T_{2}: P\rightarrow P\) is increasing. By \((F_{5})\) we obtain
$$T_{1}\bigl(\lambda u,\lambda^{-1}v\bigr)\geq\lambda T_{1}(u,v), \qquad T_{2}(\lambda u)\geq\lambda^{\gamma}T_{2}u, \quad \mbox{for }\lambda\in(0,1), u,v \in P. $$
According to \((F_{2})\) and \((F_{6})\), we have
$$f(s,0,1)\leq\delta_{0} g(s,0), \qquad f(s,0,1)\leq f(s,1,0),\quad s \in[0,1]. $$
From \(f(t,0,1) \not\equiv0\) we get
$$\begin{aligned} &0< \int_{0}^{1}f(s,0,1)\,d_{q}s\leq \int_{0}^{1}f(s,1,0)\,d_{q}s, \\ &0< \frac{1}{\delta_{0}} \int_{0}^{1}f(s,0,1)\,d_{q}s\leq \int_{0}^{1}g(s,0)\,d_{q}s\leq \int_{0}^{1} g(s,1)\,d_{q}s, \end{aligned} $$
and the following inequalities hold:
$$\begin{aligned} & \begin{aligned}[b] 0&< \frac{\mu q^{\alpha}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1}s (1-qs)^{(\alpha-1)}f(s,0,1)\,d_{q}s \\ &\leq\frac{M_{0} }{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1} f(s,1,0)\,d_{q}s, \end{aligned} \end{aligned}$$
$$\begin{aligned} &\begin{aligned}[b] 0&< \frac{\mu q^{\alpha}}{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1}s (1-qs)^{(\alpha-1)}g(s,0)\,d_{q}s \\ &\leq\frac{M_{0} }{\Gamma_{q} (\alpha) ([\alpha]_{q}-\mu )} \int_{0}^{1} g(s,1)\,d_{q}s. \end{aligned} \end{aligned}$$
Hence we can easily check that \(T_{1}(h,h) \in P_{h}\) and \(T_{2}h \in P_{h}\), and, by using \((F_{6})\), we have
$$\begin{aligned} T_{1}(u,v) (t)&= \int_{0}^{1} G(t,s)f\bigl(s,u(s),v(s) \bigr)\,d_{q}s \\ &\leq\delta_{0} \int_{0}^{1} G(t,s)g\bigl(s,u(s)\bigr)\,d_{q}s= \delta_{0} T_{2}u(t). \end{aligned}$$
Then we have \(T_{1}(u,v)\leq\delta_{0} T_{2}u\) for \(u,v\in P\). Thus, from Lemma 2.6 we get that there exist \(u_{0},v_{0}\in P_{h}\) and \(r \in(0,1)\) such that \(rv_{0}\leq u_{0}\leq v_{0},u_{0}\leq T_{1}(u_{0},v_{0})+ T_{2} u_{0}\leq T_{1}(v_{0},u_{0})+T_{2}v_{0}\leq v_{0}\); the operator equation \(T_{1}(u,u)+T_{2}u=u\) has a unique solution \(u^{*} \in P_{h}\); and for any initial values \(x_{0}\), \(y_{0} \in P_{h}\), the sequences
satisfy \(x_{n}\rightarrow u^{*}\) and \(y_{n}\rightarrow u^{*} \) as \(n \rightarrow\infty\). That is,
$$\begin{gathered} u_{0}(t)\leq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,u_{0}(s),v_{0}(s) \bigr)+g\bigl(s,u_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \\ v_{0}(t)\geq \int_{0}^{1} G(t,qs)\bigl[f\bigl(s,v_{0}(s),u_{0}(s) \bigr)+g\bigl(s,v_{0}(s)\bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \end{gathered} $$
The boundary value problem (1.5) has a unique positive solution \(u^{*} \in P_{h}\); for \(u_{0},v_{0} \in P_{h}\), the sequences
satisfy \(\|x_{n}-u^{*}\|\rightarrow0\) and \(\|y_{n}-u^{*}\|\rightarrow0 \) as \(n\rightarrow\infty\). □
Corollary 3.2
Suppose that g satisfies the conditions of Theorem 3.2, \(f\equiv0\), and \(g(t,0)\not\equiv0\) for \(t \in[0,1]\). Then:
$$\begin{gathered} u_{0}(t)\leq \int_{0}^{1} G(t,qs)\bigl[g\bigl(s,u_{0}(s) \bigr)\bigr]\,d_{q}s, \\ v_{0}(t)\geq \int_{0}^{1} G(t,qs)\bigl[g\bigl(s,v_{0}(s) \bigr)\bigr]\,d_{q}s, \quad t\in[0,1], \end{gathered} $$
$$ \textstyle\begin{cases} D_{q}^{\alpha}u(t)+ g(t,u(t))=0, \quad 0< t< 1, 2< \alpha\leq3,\\ u(0)=D_{q}u(0)=0, \qquad u(1)=\mu\int_{0}^{1}u(s)\,d_{q}s, \end{cases} $$
has a unique positive solution \(u^{*} \) in \(P_{h}\); and for any \(x_{0},y_{0} \in P_{h}\), the sequences
$$\begin{gathered} x_{n+1}= \int_{0}^{1} G(t,qs)g\bigl(s,x_{n}(s) \bigr)\,d_{q}s,\quad n=0,1,2,\ldots, \\ y_{n+1}= \int_{0}^{1} G(t,qs)g\bigl(s,y_{n}(s) \bigr)\,d_{q}s,\quad n=0,1,2,\ldots, \end{gathered} $$
satisfy \(\|x_{n}-u^{*}\|\rightarrow0\) and \(\| y_{n}-u^{*}\|\rightarrow0\) as \(n \rightarrow\infty\).
4 Examples
Now we give two examples to illustrate our results.
Example 4.1
Consider the following boundary value problem:
$$ \textstyle\begin{cases} -D_{\frac{1}{2}}^{\frac{5}{2}} u(t)=u(t)^{\frac{1}{3}}+[u(t)+1]^{- \frac{1}{2}}+{\frac{u(t)}{1+u(t)}}t^{3}+t^{2}+4, \quad 0< t< 1,\\ u(0)=D_{\frac{1}{2}}u(0)=0, \qquad u(1)=\mu\int_{0}^{1}u(s)\,d_{\frac{1}{2}}s. \end{cases} $$
In this example, we let
$$\begin{aligned} &f(t,u,v )=u^{\frac{1}{3}}+[v+1]^{- \frac{1}{2}}+t^{2}+2, \qquad g(t,u)={\frac{u}{1+u}}t^{3}+2, \\ &\gamma={\frac{1}{2}}, \qquad\mu=\frac{1}{2}. \end{aligned}$$
It is not difficult to find that \(f(t,x,y):[0,1]\times[0, +\infty )\times[0,+\infty)\rightarrow[0,+\infty)\) is continuous, increasing with respect to the second variable, and decreasing with respect to the third variable and that \(g(t,x): [0,1]\times[0, +\infty)\rightarrow[0,+\infty) \) is continuous with \(g(t,0)=2>0\) and increasing with respect to the second variable. We also have
$$\begin{aligned}& g(t,\lambda u )={\frac{{\lambda}u}{1+\lambda{u}}}t^{3}+2 \geq{ \frac {{\lambda}u}{1+ {u}}}t^{3}+2 \lambda=\lambda g(t,u), \quad \lambda\in(0,1), \\& \begin{aligned} f\bigl(t,\lambda u,\lambda^{-1}v \bigr)&=\lambda^{\frac{1}{3}}u^{\frac {1}{3}}+ \lambda^{\frac{1}{2}}[v+\lambda]^{-\frac{1}{2}}+t^{2}+2 \\ &\geq\lambda^{\frac{1}{2}} \bigl\{ u^{\frac{1}{3}}+[v+1]^{-\frac {1}{2}}+t^{2}+2 \bigr\} \\ &=\lambda^{\gamma}f(t,u,v). \end{aligned} \end{aligned}$$
Further, if we take \(\delta_{0} \in(0,\frac{2}{3}]\), then we easily get
$$\begin{aligned} f(t,u,v )&=u^{\frac{1}{3}}+[v+1]^{- \frac{1}{2}}+t^{2}+2\geq2= \frac {2}{3} \cdot3 \\ &\geq\delta_{0}\biggl[{\frac{u}{1+u}}t^{3}+2\biggr]= \delta_{0} g(t,u). \end{aligned}$$
So f and g satisfy the conditions of Theorem 3.1. Thus by Theorem 3.1 the boundary value problem (4.1) has a unique positive solution in \(P_{h}\), where \(h(t)=t^{\alpha-1}=t^{\frac{3}{2}}\), \(t\in[0,1]\).
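For illustration, the iterative scheme can be run on Example 4.1 with the solve_bvp sketch given after the proof of Theorem 3.1 (again, the tolerances and iteration counts below are illustrative choices of our own):

```python
import numpy as np
# uses solve_bvp from the sketch after the proof of Theorem 3.1

alpha, mu, q = 2.5, 0.5, 0.5
f = lambda t, u, v: u**(1.0 / 3.0) + (v + 1.0)**(-0.5) + t**2 + 2.0
g = lambda t, u: (u / (1.0 + u)) * t**3 + 2.0

tg, x, y = solve_bvp(f, g, alpha, mu, q)
# the two sequences squeeze together around the unique positive solution u* in P_h
print("max |x_n - y_n| on the q-grid:", np.max(np.abs(x - y)))
```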
Example 4.2
Consider the following boundary value problem:
$$ \textstyle\begin{cases} -D_{\frac{1}{2}}^{\frac{5}{2}} u(t)={({\frac{u(t)}{1+u(t)}})}^{\frac {1}{4}}+[u(t)+1]^{- \frac{1}{3}}+t^{3}+u(t)^{\frac{1}{3}}+t^{2}+1, \quad 0< t< 1,\\ u(0)=D_{\frac{1}{2}}u(0)=0, \qquad u(1)=\mu\int_{0}^{1}u(s)\,d_{\frac{1}{2}}s. \end{cases} $$
We let
$$f(t,u,v )={\biggl({\frac{u}{1+u}}\biggr)}^{\frac{1}{4}}+[v+1]^{- \frac {1}{3}}+t^{3}, \qquad g(t,u)=u^{\frac{1}{3}}+t^{2}+1, \qquad \gamma={ \frac {1}{3}},\qquad \mu=\frac{1}{2}. $$
It is not difficult to find that \(f(t,x,y):[0,1]\times[0, +\infty )\times[0,+\infty)\rightarrow[0,+\infty)\) is continuous, increasing with respect to the second variable, and decreasing with respect to the third variable and that \(g(t,x): [0,1]\times[0, +\infty)\rightarrow[0,+\infty)\) is continuous and increasing with respect to the second variable. We also have
$$\begin{aligned} &g(t,\lambda u )=\lambda^{\frac{1}{3}}u^{\frac{1}{3}}+t^{2}+1 \geq \lambda^{\frac{1}{3}}\bigl[u^{\frac{1}{3}}+t^{2}+1\bigr] = \lambda^{\gamma}g(t,u), \quad \lambda\in(0,1), \\ & \begin{aligned} f\bigl(t,\lambda u,\lambda^{-1}v \bigr)&={\biggl({ \frac{\lambda u}{1+\lambda u}}\biggr)}^{\frac{1}{4}}+\bigl[\lambda^{-1}v+1 \bigr]^{- \frac{1}{3}}+t^{3} \\ &\geq\lambda^{\frac{1}{3}} \biggl\{ {\biggl({\frac{ u}{1+ u}} \biggr)}^{\frac {1}{4}}+[v+\lambda]^{- \frac{1}{3}}+t^{3} \biggr\} \\ &\geq\lambda \biggl\{ {\biggl({\frac{ u}{1+ u}}\biggr)}^{\frac{1}{4}}+[v+1]^{- \frac{1}{3}}+t^{3} \biggr\} \\ &=\lambda f(t,u,v). \end{aligned} \end{aligned}$$
If we take \(\delta_{0} =1>0\), then we have
$$f(t,u,v)={\biggl({\frac{u}{1+u}}\biggr)}^{\frac{1}{4}}+[v+1]^{- \frac {1}{3}}+t^{3} \leq u^{\frac{1}{4}}+t^{2}+1\leq u^{\frac{1}{3}}+t^{2}+1= \delta_{0} g(t,u). $$
So f and g satisfy the conditions of Theorem 3.2. Thus by Theorem 3.2 the boundary value problem of this example has a unique positive solution in \(P_{h}\), where \(h(t)=t^{\alpha-1}=t^{\frac{3}{2}}\), \(t\in[0,1]\).
The authors are very grateful to the reviewers for their valuable suggestions and useful comments, which led to an improvement of this paper.
This project was supported by the National Natural Science Foundation of China (Grant No. 11271235).
FG carried out the molecular genetic studies, participated in the sequence alignment, and drafted the manuscript. SK conceived the study and participated in its design and coordination. FC helped to draft the manuscript. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
School of Mathematics and Computer Science, Shanxi Datong University, Datong, P.R. China
Oldham, K.B., Spanier, J.: The Fractional Calculus. Academic Press, New York (1974)
Miller, K.S., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993)
Glockle, W.G., Nonnenmacher, T.F.: A fractional calculus approach of self-similar protein dynamics. Biophys. J. 68, 46–53 (1995)
Podlubny, I.: Fractional Differential Equations. Mathematics in Science and Engineering. Academic Press, New York (1999)
Field, C., Joshi, N., Nijhoff, F.: q-Difference equations of KdV type and Chazy-type second-degree difference equations. J. Phys. A, Math. Theor. 41, 1–13 (2008)
Abreu, L.: Sampling theory associated with q-difference equations of the Sturm–Liouville type. J. Phys. A 38(48), 10311–10319 (2005)
Jackson, F.: On q-functions and a certain difference operator. Trans. R. Soc. Edinb. 46, 253–281 (1908)
Jackson, F.: On q-definite integrals. Q. J. Pure Appl. Math. 41, 193–203 (1910)
Rajković, P., Marinković, S., Stanković, M.: Fractional integrals and derivatives in q-calculus. Appl. Anal. Discrete Math. 1(1), 311–323 (2007)
Annaby, M.H., Mansour, Z.S.: q-Fractional Calculus and Equations. Lecture Notes in Mathematics, vol. 2056. Springer, Berlin (2012)
Al-Salam, W.A.: Some fractional q-integrals and q-derivatives. Proc. Edinb. Math. Soc. 15, 135–140 (1966)
Agarwal, R.P.: Certain fractional q-integrals and q-derivatives. Proc. Camb. Philos. Soc. 66, 365–370 (1969)
Ferreira, R.A.C.: Nontrivial solutions for fractional q-difference boundary value problems. Electron. J. Qual. Theory Differ. Equ. 2010, 70 (2010)
Ferreira, R.A.C.: Positive solutions for a class of boundary value problems with fractional q-differences. Comput. Math. Appl. 61(2), 367–373 (2011)
El-Shahed, M., Al-Askar, F.: Positive solution for boundary value problem of nonlinear fractional q-difference equation. ISRN Math. Anal. 2011, Article ID 385459 (2011)
Darzi, R., Agheli, B.: Existence results to positive solution of fractional BVP with q-derivatives. J. Appl. Math. Comput. 55, 353–367 (2017)
Zhai, C.B., Hao, M.R.: Fixed point theorems for mixed monotone operators with perturbation and applications to fractional differential equation boundary value problems. Nonlinear Anal. 75, 2542–2551 (2012)
Zhai, C., Yang, C., Zhang, X.: Positive solutions for nonlinear operator equations and several classes of applications. Math. Z. 266, 43–63 (2010)
Ahmad, B., Ntouyas, S.K., Purnaras, I.K.: Existence results for nonlocal boundary value problems of nonlinear fractional q-difference equations. Adv. Differ. Equ. 2012, 140 (2012)
Graef, J.R., Kong, L.: Positive solutions for a class of higher order boundary value problems with fractional q-derivatives. Appl. Math. Comput. 218, 9682–9689 (2012)
Almeida, R., Martins, N.: Existence results for fractional q-difference equations of order \(\alpha\in[2,3]\) with three-point boundary conditions. Commun. Nonlinear Sci. Numer. Simul. 19, 1675–1685 (2014)
Yang, W.: Positive solution for fractional q-difference boundary value problems with Φ-Laplacian operator. Bull. Malays. Math. Sci. Soc. 36, 1195–1203 (2013)
Ahmad, B., Etemad, S., Ettefagh, M., Rezapour, S.: On the existence of solutions for fractional q-difference inclusions with q-antiperiodic boundary conditions. Bull. Math. Soc. Sci. Math. Roum. 59, 119–134 (2016)
Agarwal, R.P., Ahmad, B., Alsaedi, A., Al-Hutami, H.: Existence theory for q-antiperiodic boundary value problems of sequential q-fractional integro-differential equations. Abstr. Appl. Anal. 2014, Article ID 207547 (2014)
Wang, J.R., Zhang, Y.R.: On the concept and existence of solutions for fractional impulsive systems with Hadamard derivatives. Appl. Math. Lett. 39, 85–90 (2015)
Zhai, C.B., Yan, W.P., Yang, C.: A sum operator method for the existence and uniqueness of positive solution to Riemann–Liouville fractional differential equation boundary value problems. Commun. Nonlinear Sci. Numer. Simul. 18, 858–866 (2013)
Zhao, Y., Ye, G., Chen, H.: Multiple positive solutions of a singular semipositone integral boundary value problem for fractional q-derivatives equation. Abstr. Appl. Anal. 2013, Article ID 643571 (2013). https://doi.org/10.1155/2013/643571
Ahmad, B., Ntouyas, S.K., Alsaedi, A., Al-Hutami, H.: Nonlinear q-fractional differential equations with nonlocal and sub-strip type boundary conditions. Electron. J. Qual. Theory Differ. Equ. 2014, 26 (2014)
Sitthiwirattham, T.: On nonlocal fractional q-integral boundary value problems of fractional q-difference equations and fractional q-integrodifference equations involving different numbers of order and q. Bound. Value Probl. 2016, Article ID 12 (2016)
Patanarapeelert, N., Sriphanomwan, U., Sitthiwirattham, T.: On a class of sequential fractional q-integrodifference boundary value problems involving different numbers of q in derivatives and integrals. Adv. Differ. Equ. 2016, Article ID 148 (2016)
Sriphanomwan, U., Tariboon, J., Patanarapeelert, N., Sitthiwirattham, T.: Existence results of nonlocal boundary value problems for nonlinear fractional q-integral difference equations. J. Nonlinear Funct. Anal. 2017, Article ID 28 (2017)
Proceedings of the International Conference in Mathematics and Applications
MeDeCom: discovery and quantification of latent components of heterogeneous methylomes
Pavlo Lutsik1,4,
Martin Slawski2,3,5,
Gilles Gasparoni1,
Nikita Vedeneev2,
Matthias Hein2 &
Jörn Walter1 (ORCID: orcid.org/0000-0003-0563-7417)
Genome Biology volume 18, Article number: 55 (2017)
It is important for large-scale epigenomic studies to determine and explore the nature of hidden confounding variation, most importantly cell composition. We developed MeDeCom as a novel reference-free computational framework that allows the decomposition of complex DNA methylomes into latent methylation components and their proportions in each sample. MeDeCom is based on constrained non-negative matrix factorization with a new biologically motivated regularization function. It accurately recovers cell-type-specific latent methylation components and their proportions. MeDeCom is a new unsupervised tool for the exploratory study of the major sources of methylation variation, which should lead to a deeper understanding and better biological interpretation.
DNA methylation is one of the most extensively studied epigenetic marks in the human genome. Methods of detection and quantification are relatively robust and methylation data can be obtained at single-base resolution. DNA methylation closely mirrors the functional state of a cell [1]. Each human cell type has a characteristic methylation profile (methylome) covering its roughly 27 million CpG dinucleotides [2, 3]. DNA methylomes undergo significant global and lineage-related changes during development [4] and form cell-type-specific patterns upon differentiation [3, 5, 6]. They also reflect the individual (genetic) constitution [7], are influenced by gender, are subject to environmental influences [8, 9], and change with age [10]. In aging cells and in diseased cells, they accumulate errors over time and DNA replications [11, 12]. DNA methylation can, therefore, be used to infer the developmental origin, the cell-type specificity, and many other biological and sampling variables contributing to individual epigenetic profiles. A knowledge of these confounding effects and their consequences for methylome changes are of utmost importance for a biological interpretation of DNA-methylation changes in comparative studies.
For practical reasons, comparative epigenomic studies often use tissue samples or cells extracted from body fluids (mostly blood) [3, 13]. All these sources are composed of several major and minor cell types with variable composition [14]. Blood, for example, includes up to ten major and many more minor cell types. Cell type-attributed heterogeneity was shown to be a major source of variation in comparative blood-based DNA methylome studies [15]. The same holds for studies performed with brain tissue, where the compositional changes of cells are strongly influenced by age, gender, and disease state [16–19]. Overall, genetic variation, variable cell composition, and age appear to be the strongest confounders in DNA methylome analysis [20–22].
To overcome the compositional confounding, DNA methylation studies increasingly make use of cell enrichment or cell separation techniques [23, 24] to decompose samples experimentally prior to methylation analysis [25, 26]. These methods clearly enhance the signal interpretability, but they come at the risk of introducing new experimental variation caused by cell-sorting methods, tissue dissection approaches etc. [24, 27]. In the worst case, cell separation may even exclude unknown – but informative – cell populations. Single-cell methylome analyses would be an alternative. However, comprehensive single-cell methylome data are still difficult to obtain and too costly for studies in which large sample numbers have to be compared [28–31]. Moreover, non-uniform cell separation or sampling prior to single-cell approaches may also introduce additional uncontrollable confounding effects. Finally, the sequencing depth has to be high to recover important changes in rare or difficult-to-recover cell populations [31].
Possible approaches for dealing with the heterogeneity problems include computational estimation or correction (adjustment) methods [32]. Houseman et al. were the first to develop a systematic approach that used reference DNA methylation profiles of purified cell types to infer the cell-type proportions in blood via a constrained projection procedure [33–36]. Similar reference-based correction approaches have also been used for complex tissues such as brain [37, 38]. Recently, a series of reference-free methods were developed that adjust for DNA methylation changes caused by cell heterogeneity, allowing for the quantification of direct methylation effects [39–41].
Here we present a novel computational framework called MeDeCom, which uses a special form of regularized non-negative matrix factorization (NMF) to decompose methylome data into a set of underlying latent DNA methylation components (LMCs) and their proportions in each sample. A similar NMF-based approach, RefFreeCellMix, has recently been proposed [42]. However, a key feature distinguishing MeDeCom from RefFreeCellMix and other standard NMF approaches is the incorporation of a biologically motivated regularizer that favors LMCs with per-CpG values close to zero (unmethylated) or one (methylated). In various experiments, we demonstrate that this form of regularization is the key element for an accurate estimation of LMCs corresponding to cell-type-specific methylomes and their associated proportions. Unlike RefFreeCellMix and other NMF-based methods, which infer a correct decomposition only if measurements of pure cell-types are implicitly present in the data set, MeDeCom also works when only measurements of mixtures of different cell types and no purified references are available. We demonstrate the performance of MeDeCom in controlled experimental settings and its application in more complex scenarios of cell populations and tissues. We show that MeDeCom can be used for adjustment in an epigenome-wide association study (EWAS) with excellent performance on par with the most advanced methods [39–41]. Finally, we demonstrate that the unsupervised decomposition of complex methylation data into LMCs and their proportions can be used as a new exploratory tool to obtain novel biological insights going beyond the analysis of confounding factors.
MeDeCom: introduction to the computational framework
We developed MeDeCom, a novel computational framework for methylation data decomposition. The conceptual background of MeDeCom is illustrated in Fig. 1 a. DNA methylation profiles of complex tissues and cell mixtures are a composite mix of patterns of individual cell types with discrete (binary) position-specific methylation values (Fig. 1 a). In other words, the DNA-methylation pattern generated, e.g., using 450K or EPIC bead arrays, is the product of the cell-specific pattern variation C and the frequencies in which individual cells are present in tissues or cell mixes, F (Fig. 1 a, top). MeDeCom decomposes such mixed methylome patterns into two matrices, T and A. T describes the LMCs and reflects an average methylation pattern of an underlying cell type, while A contains the proportions of LMCs in each sample (Fig. 1 a, bottom).
Computational framework of MeDeCom. a The conceptual background of MeDeCom. The measured methylomes (e.g., as 450K data, shown in the center) can be seen as a composition of binary single-cell methylome signatures (C) with their frequencies in each sample (F). Single-cell signatures of a particular cell type form a cell-type specific cluster in C. MeDeCom decomposes the measured methylation data into a matrix T, representing latent methylation components (LMCs), which in turn correspond to the averaged cell methylomes of a cell-type-specific cluster in C, and into A, the relative proportions of LMCs (respectively, cell types) in the sample. b Histograms of the values in the estimated T matrices for the 500 most varying CpG sites for the cell reconstruction experiment of neuronal cells (see text). We observe that both MeDeCom with no regularization (λ=0) and RefFreeCellMix are unable to match the distribution of the reference profiles (ground truth), which is biased towards zero and one. However, MeDeCom with our regularizer (parameter λ is chosen by cross-validation) biases the entries of the LMCs towards zero (unmethylated) and one (methylated). Thus, the distribution of the entries of the estimated LMCs approximately matches the ground truth, leading to a significantly better estimation of T as well as A. c-d Geometric intuition about the different methods for a fully synthetic example of two CpGs (n=30, k=3). Each LMC corresponds to a column of T and, thus, is a point in [0,1]^2. c shows the estimated LMCs (squares) of RefFreeCellMix and MeDeCom with λ=0 and λ=10^−2, and the ground truth (black squares) together with the data (blue dots). The data points are mixtures of the ground truth points and, thus, lie in the convex hull of the latter. The factorization problem (2) (see "Methods") is ill-posed, as the solution is not unique. MeDeCom with appropriate regularization estimates T (red squares) very accurately as the solution is biased towards zero or one, whereas RefFreeCellMix and MeDeCom with λ=0 are unable to find the correct LMCs. This also leads to huge errors in the estimation of the proportions as visualized by the ternary plot for ten randomly selected data points (d). In contrast, MeDeCom with appropriate regularization estimates A very accurately
To estimate the LMC matrix T and the proportion matrix A, MeDeCom uses a constrained NMF algorithm together with a regularization function on T. The regularization shifts the estimated matrix of methylation patterns T towards biologically plausible binary values close to zero (unmethylated) or one (methylated). The regularization of T is key to yielding accurate estimates of cell-type-specific methylation patterns and their proportions (see below). MeDeCom has two parameters: i) the number k of LMCs to be estimated and ii) the amount of regularization λ. We show that both parameters can be reliably estimated by cross-validation. The details and the mathematical background of MeDeCom are outlined in "Methods."
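To make the setup concrete, the following small sketch illustrates this type of regularized factorization. It is our own toy illustration, not MeDeCom's actual implementation (the exact regularizer and optimization algorithm are specified in "Methods"): we use one plausible penalty, λ·Σ_ij T_ij(1−T_ij), which is minimized at binary entries, and alternate projected gradient steps with T clipped to [0,1] and the columns of A projected onto the probability simplex:

```python
import numpy as np

def project_simplex_cols(V):
    # Euclidean projection of each column of V (k x n) onto {a >= 0, sum(a) = 1}
    k, n = V.shape
    U = -np.sort(-V, axis=0)                       # columns sorted in decreasing order
    css = np.cumsum(U, axis=0)
    j = np.arange(1, k + 1)[:, None]
    rho = k - 1 - np.argmax((U + (1.0 - css) / j > 0)[::-1, :], axis=0)
    tau = (css[rho, np.arange(n)] - 1.0) / (rho + 1.0)
    return np.maximum(V - tau, 0.0)

def decompose(D, k, lam, iters=500, seed=0):
    # D: m x n matrix of beta values; returns LMCs T (m x k, entries in [0,1])
    # and proportions A (k x n, columns on the simplex)
    rng = np.random.default_rng(seed)
    m, n = D.shape
    T = rng.uniform(size=(m, k))
    A = project_simplex_cols(rng.uniform(size=(k, n)))
    for _ in range(iters):
        # projected gradient step in T for ||D - TA||_F^2 + lam * sum T*(1-T)
        gT = 2.0 * (T @ A - D) @ A.T + lam * (1.0 - 2.0 * T)
        T = np.clip(T - gT / (2.0 * np.linalg.norm(A @ A.T, 2) + 2.0 * lam), 0.0, 1.0)
        # projected gradient step in A (columns constrained to the simplex)
        gA = 2.0 * T.T @ (T @ A - D)
        A = project_simplex_cols(A - gA / (2.0 * np.linalg.norm(T.T @ T, 2) + 1e-12))
    return T, A
```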
To facilitate the interpretation of the MeDeCom results, we designed an exploratory interactive visualization tool called FactorViz. This tool allows the user to visualize the performance of MeDeCom, explore the LMCs, and obtain various kinds of information for further biological interpretation. MeDeCom and FactorViz are publicly available as a web resource at [43].
In the following sections, we will demonstrate the use of MeDeCom on synthetic and real Infinium 450k data sets of increasing complexity. We also demonstrate the usefulness of MeDeCom to decompose complex blood and tissue methylation data (also in comparison to reference-based methods) and provide examples showing how the obtained LMCs can help explore the origin of variation. We will adjust these parameters and provide novel ideas for the biological interpretation of methylation data.
Illustration of the effect of regularization
While conceptually simple, the introduction of our biologically motivated regularizer is the major determinant of the superior decomposition achieved by MeDeCom (Fig. 1 b). The histograms of the estimated T matrices are shown for an unregularized model and the regularized model chosen by cross-validation (a more detailed description of the corresponding cell reconstruction experiment follows below). The histogram of T for the regularized model is very close to the histogram of the true methylomes, while the histograms of the unregularized model and of RefFreeCellMix are far from the ground truth, reflecting the lack of bias towards biologically plausible values of T. The correct estimation of T via regularization allows us also to recover the correct proportions (Fig. 1 c, d). In our model scenario, all data points (blue dots) lie in the convex hull of the three estimated LMCs (squares), showing that there exist multiple solutions with virtually the same fit to the data. MeDeCom breaks this ambiguity in the solution as the regularizer shifts the values of the LMCs towards zero and one. We see that the regularized model fits the ground truth well (Fig. 1 c). A misestimation of T also leads to a misestimation of the proportions in A (Fig. 1 d). The proportions of the three LMCs in each sample as estimated by MeDeCom are very close to the true ones for the regularized model while they are completely wrong for the unregularized model and RefFreeCellMix.
Decomposition of simulated methylation data
To examine the performance of MeDeCom in a controlled setting, we analyzed synthetic DNA methylation mixtures generated by simulation (see "Methods" for details). The controlled data sets varied in the numbers of cell-type-specific patterns (LMCs), the inter-LMC similarity, and the variability of the mixture proportions (see Additional file 1: Table S1).
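The general recipe behind such simulations is to draw near-binary component profiles T, mixture proportions A on the probability simplex, and additive noise; the parameter choices in the sketch below are our own and do not reproduce the exact settings of the study (those are given in "Methods" and Additional file 1: Table S1):

```python
import numpy as np

def simulate_mixtures(m=2000, k=5, n=50, noise=0.02, conc=1.0, seed=1):
    # near-binary component methylomes T, Dirichlet proportions A, Gaussian noise;
    # small conc -> highly variable proportions, large conc -> nearly uniform ones
    rng = np.random.default_rng(seed)
    T = rng.beta(0.3, 0.3, size=(m, k))             # beta(0.3, 0.3) piles up at 0 and 1
    A = rng.dirichlet(conc * np.ones(k), size=n).T  # k x n, columns sum to one
    D = np.clip(T @ A + noise * rng.standard_normal((m, n)), 0.0, 1.0)
    return D, T, A

D, T_true, A_true = simulate_mixtures()
```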
Figure 2 a–f summarizes the results for moderately variable mixture proportions of five pure blood-derived cell-type profiles (see below). FactorViz inspections show that the cross-validation error (CVE) levels out at k≥5, indicating that MeDeCom identified the correct number of underlying LMCs (Fig. 2 a). The optimal regularization parameter λ was found to be λ=0.01. The estimated LMCs unambiguously match the source DNA methylation profiles (Fig. 2 b). The individual methylation profiles were reconstructed with an overall root-mean-square error (RMSE) of 0.064. MeDeCom also accurately reproduced the mixing coefficients (proportions) with mean absolute error (MAE) of 0.0296 (Fig. 2 c–f). We obtained similar results for other cases with a varying number of underlying components and mixture proportions (see the MeDeCom web resource).
Testing MeDeCom on simulated and artificial cell mixture data. a–f Results for the simulated data example with five methylation components, moderately variable mixing proportions, and medium noise level. a Selection of parameters k and λ by cross-validation. b Matching of the recovered LMCs to the true underlying profiles. The dendrogram visualizes the agglomerative hierarchical clustering analysis with correlation-based distance measure and average linkage. c–f Recovery of the mixing proportions. Truth stands for true mixing proportions and regression denotes the reference-based proportion estimation as described in "Methods." In each line plot, the synthetic samples are sorted by ascending true mixing proportion. g, h Results for the ArtMixN data set. g Selection of parameters k and λ by cross-validation. h Recovery of mixing proportions (only NeuN + is shown) for MeDeCom and RefFreeCellMix. RefFreeCellMix misinterprets the most extreme mixtures as pure cell types and, thus, estimates T (see Fig. 1b) as well as the proportions in A wrongly. Notation is the same as in c–f
The summary plots of the LMC recovery rate (Additional file 2: Figure S1) show that, given a low number of samples, the choice of the model and the variability level of the mixture proportions were key factors in the performance of LMC reconstruction in MeDeCom. However, decomposition became impossible when the variability of the mixture proportions was very low and, at the same time, the noise level was high (see an example in Additional file 2: Figure S2 and the MeDeCom web resource). In this case, the variation in the data due to uneven cell-type composition is comparable to or smaller than the noise, and, thus, it becomes impossible to estimate LMCs and their proportions. We also ran the same experiments with RefFreeCellMix. In simple cases it performed similarly to MeDeCom, but it was consistently outperformed by MeDeCom as the setting became more difficult (Additional file 2: Figure S1).
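One practical detail when benchmarking is that factorizations are only identifiable up to the order of the components, so recovered LMCs must be matched to the ground truth before computing errors. A simple way to do this (our own sketch; the figures in this paper use hierarchical clustering of correlations for the same purpose) is a maximum-correlation assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(T_hat, T_true):
    # reorder the columns of T_hat to best match T_true (maximum total correlation)
    k = T_true.shape[1]
    C = np.corrcoef(T_hat.T, T_true.T)[:k, k:]   # k x k cross-correlation block
    row, col = linear_sum_assignment(-C)         # assignment maximizing correlation
    perm = np.empty(k, dtype=int)
    perm[col] = row
    return T_hat[:, perm]

# after matching, RMSE on T and MAE on the correspondingly permuted rows of A
# are comparable across methods
```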
Decomposition of reconstructed cell mixtures
Next, we analyzed the performance of MeDeCom on publicly available 450K data sets of cell mixtures with known proportions [37] (data set ArtMixN in Table 1). In this study, brain cell nuclei were separated using a neuron-specific marker NeuN, and fluorescence activated cell sorting (FACS) into NeuN + (neuronal) and NeuN − (non-neuronal) fractions. These fractions were mixed incrementally (Additional file 1: Table S2) and methylomes measured on a 450K array. We were interested in finding out how well MeDeCom could recover the source NeuN +/− methylomes and their mixing ratios. We show the results for five mixtures: {(0.3,0.7),(0.4,0.6),(0.5,0.5), (0.6,0.4),(0.7,0.3)}. The results for all nine mixtures can be found in Additional file 2: Figures S6 and S7.
Table 1 Public Infinium 450k data sets used in the study
MeDeCom indeed identified two major LMCs at a CVE minimum close to λ=5×10^−4 (Fig. 2 g; Additional file 2: Figure S3). Each of the recovered LMCs showed high CpG-wise correlation to the average profile of either the NeuN + or NeuN − fractions (Additional file 2: Figure S4) and reproduced it with high accuracy (RMSE 0.029). The mixture proportions were accurately recovered as well (MAE 0.025; Fig. 2 h). As in the artificial example of Fig. 1 c, RefFreeCellMix is inferior in the estimation of both T (RMSE 0.037) and A (MAE 0.162) due to the lack of a bias towards biologically plausible values (Fig. 2 h and Additional file 2: Figure S5). The difference in the results for MeDeCom and RefFreeCellMix becomes even more pronounced if one computes the RMSE for T limited to the 500 most varying CpG sites, where MeDeCom has an RMSE of 0.082 compared to 0.190 for RefFreeCellMix and 0.194 for MeDeCom with no regularizer (λ=0). In Fig. 1 b, we visualize the difference by showing the histogram of the estimated entries of T for the 500 most varying CpG sites. Our estimated histogram is close to the ground truth, whereas the histograms of RefFreeCellMix and the unregularized model are much further off, which is then reflected in the wrong estimation of the proportions for RefFreeCellMix. Since the synthetic experiments as well as the artificial mixture experiment show that RefFreeCellMix cannot reliably recover cell-type LMCs and their proportions when only mixtures are available as samples, we do not compare to it in the analysis of complex mixtures from blood or brain tissue.
Methylome decomposition of whole-blood cell samples
Following the successful test of MeDeCom on synthetic data and artificial cell mixtures, we applied our method to whole-blood Infinium 450k samples from two independent studies (Table 1). Our aim was to test the performance, reproducibility, and robustness of our method in a side-by-side comparison. We first applied MeDeCom to control samples from a large rheumatoid arthritis study [35]. To avoid known technical confounding effects (Additional file 2: Figure S9), we confined our first analysis to 87 samples forming a technically homogeneous batch (data set WB1).
The CVE continued to decline until k=20, implying a large number of distinct variation components, that is, LMCs (Fig. 3 a). We, therefore, examined the factorization results at increasing values of k to understand the relation between LMC recovery and the underlying major and minor sources of variation (i.e., cell types, subtypes etc.). For a biological interpretation, we compared the LMCs at increasing values of k to published reference methylomes of FACS-sorted major blood cell types [44] (data set PureBC).
Results for blood cell methylomes. a–e WB1 data set. a Selection of parameters k and λ by cross-validation. b Matching the WB1 LMCs to PureBC methylomes (k=20, λ=0.001). Here and below the dendrogram visualizes agglomerative hierarchical clustering analysis with a correlation-based distance measure and average linkage. c Matching the LMCs from the WB2 data set (k=20, λ=0.001) to the PureBC methylomes. d Matching the WB1 and WB2 LMCs to each other. Pairs of reproducible LMCs also matching to the reference profiles are highlighted by red segments. Green segments mark reproducible LMCs that do not directly match any of the reference profiles. e Adjustment of the association analysis for rheumatoid arthritis in the full Liu et al. data set [35]. Each curve is a Q-Q plot of P values observed in the corresponding analysis versus the expected P values sampled from a uniform distribution. f–h PureBC data. f Selection of parameters k and λ by cross-validation. g Heat map of recovered proportions in PureBC data (k=15, λ=0.001). Rows represent LMCs while columns correspond to individual purified samples. The order of blood donors is the same within column sets, corresponding to one cell type. h Methylation differences in naive versus memory B cells at CpGs differentially methylated between LMC2 and LMC13 from the PureBC data set. WGBS methylation profiles of naive and memory B cells were obtained from BLUEPRINT. The value for memory B cells is an average of three WGBS samples. A Wilcoxon rank sum test was used to test the null hypothesis that WGBS methylation calls are the same in naive and memory cells at their respective CpG positions
From k=2 on, the recovered LMCs distinguish the cell populations of the myeloid and the lymphoid lineages. This split into the two lineage clusters is maintained at increasing values of k, e.g., k=20, λ=1.0×10−3 (Fig. 3 b; Additional file 2: Figure S12). Altogether, 11 LMCs in the myeloid arm of the cluster show greater similarity to the FACS-sorted reference profiles of monocytes, eosinophils, and neutrophils, while the remaining nine LMCs cluster with CD4+ T cells, CD8+ T cells, NK cells, and B cells. In the myeloid cluster, we failed to detect direct sub-lineage-specific LMC matches. In the lymphoid cluster, however, we observed one LMC closely matching the CD4+ T-cell profile, and one LMC corresponding to the sub-cluster of CD8+ T cells and NK cells, indicating a better separability of the T-cell signatures based on the 450k data used. Finally, our analysis directly identified a number of LMCs with high proportions in single donors, most probably reflecting sites with genetic variation (Additional file 2: Figures S13 and S14).
The results from the first data set were reproduced on a second independent whole-blood data set (WB2) from the EPIC Italy study [45], for which we recovered a highly similar clustering of LMCs (Fig. 3 b and c). A direct comparison of the LMC clustering between both whole-blood data sets reveals considerable agreement, with LMCs matching side by side, suggesting that MeDeCom recovers robust and reproducible LMC signatures (Fig. 3 d).
An aggregated comparison of LMCs matching reference cell types in both blood analyses showed good correspondence to the regression-based estimates of cell proportions (Additional file 2: Figure S15). For several LMCs in WB1 and WB2, we observed that their proportions correlated with age, e.g., LMC12 in WB1, which is related to CD4+ T cells (Additional file 2: Figure S17). Although the total number of CD4+ T cells was reported to change non-significantly with age [46], T-cell-specific immunological senescence is a well-known phenomenon characterized by depletion of the naive T-cell sub-populations [47, 48]. This might imply that LMC12 rather reflects the methylation pattern of naive CD4+ T cells. Indeed, a comparison to reference methylomes of isolated T cells supports this suggestion (Additional file 2: Figure S16).
Correction of the phenotype association analysis
Next, we examined whether the LMCs estimated by MeDeCom can be used efficiently for data adjustment in phenotype association analyses. We first verified the potential of the adjustment in a fully synthetic setting mimicking a typical EWAS in blood (see "Methods" for details). We added true methylation effects at the level of a single cell type as well as confounding by cell-type proportions (Additional file 2: Figure S29a and b). Adjustment for LMC proportions indeed helped to decrease the confounding and recover the true methylation effects, with performance close to that of reference-based adjustment and of the best available third-party methods (Additional file 2: Figure S29c–e).
We then applied this approach to a large rheumatoid arthritis data set (WB1), which has previously been used by others for confounding corrections [35, 39–41]. We started by selecting CpGs significantly associated with rheumatoid arthritis status using linear modeling (see "Methods" for details) with and without adjustment for common covariates. The deviations of the observed P value distribution from the expected uniform distribution indicated a large inflation of significance (Additional file 2: Figure S19a). This effect is due to the confounding caused by the unequal distribution of cell types in rheumatoid arthritis patients and controls [35]. We then performed an independent correction for cell composition variability using one reference-based [33] and four reference-free methods [39–42], and compared these results to the results obtained with MeDeCom. Comparative Q-Q plots of the P values show that the methods indeed decrease the inflation of significance (Fig. 3 e and Additional file 2: Figure S19). In this test, the adjustment using the LMCs estimated by MeDeCom showed performance comparable to the reference-based analysis and to the results of ReFACTor. We conclude that the LMCs generated by MeDeCom are useful for covariate correction.
Purified blood cell populations
Our whole-blood analyses revealed a limitation in the unambiguous assignment of reference cell types to single LMCs, which may have several causes. One possible explanation is that the methylomes of FACS-sorted CD marker-positive purified cells, which we and others use as references, constitute composite methylomes of donors with a varying content of cell subtypes. First, a recent single-cell-resolution study of transcriptional heterogeneity in mammalian hematopoiesis [49] revealed that the potential of the canonical cell-surface markers to discriminate fine blood cell populations is limited, and their use as FACS gates for cell separation is prone to errors. Second, in particular for B and T cells, it is known that the proportion of cell subtypes may vary, and different types of quiescent or dividing cells, such as naive, effector, or memory sub-populations, may confound a clear LMC assignment. We addressed this question by performing a MeDeCom analysis on the seven purified blood cell populations derived from six donors (data set PureBC) [44].
In this analysis, the CVE stabilized at k=16 and λ=10−3 (Fig. 3 f; Additional file 2: Figures S21 and S22). A matrix of mixture proportions (Fig. 3 g) showed that the recovered 16 LMCs could be classified into two distinct groups. Six LMCs (LMCs 6, 7, 8, 10, 15, and 16) could be associated with individual donors, most likely reflecting donor-specific genetic variation at the informative CpG positions underlying these LMCs. In a second group, LMCs 1, 3, 4, 5, 7, 9, and 11 corresponded to the enriched cell-type-associated profiles; e.g., LMC4 was predominantly present in CD4+ T cells, LMC11 in neutrophils etc. Nevertheless, we also observed that several LMCs were shared by related cell types. For instance, eosinophil samples show enrichment of the neutrophil-specific LMC11, and CD8+ T cells (LMC9) show overlaps with CD4+ T cells (LMC5). Finally, we observed LMCs that were associated with more than one cell type, but which were not a dominating LMC in any of them. For instance, LMC14 was present at low proportions both in CD8+ T cells and NK cells. The co-occurrence of two or more LMCs within one isolated cell population, as well as the sharing of LMCs between populations, suggests that these cell populations are either mixtures of still unseparated distinct cell types or share epigenetic features that co-occur in different cell types.
A clear case of sub-population heterogeneity was observed for CD19+ B cells. Here, two LMCs, LMC2 and LMC13, apparently separate naive and memory B cells. To support this conclusion, we selected 401 CpG positions with a methylation difference of more than 0.33 between LMC2 and LMC13. First, we saw that many of these CpGs were located in the vicinity of known B-cell-associated genes (Additional file 3), such as PTPRCAP (Additional file 2: Figure S23). We then compared the LMC2- and LMC13-specific CpG 450k values to reference WGBS methylome profiles of memory and naive B-cell samples obtained by the BLUEPRINT project [50]. Of these, 44 CpGs (Additional file 3) directly correspond to the methylation state differences reported by Kulis et al. [50] in memory and naive B-cell sub-populations, respectively (Fig. 3 h). We would like to note that LMC2 and LMC13 have almost inverse proportions for individual donors, indicating that the MeDeCom analysis directly reflects the differences in sample-specific abundance of memory and naive B cells, which suggests individual- or isolation-attributed variation.
In our blood analysis, we observed that CpGs which clearly discriminated cell types in purified myeloid and lymphoid lineages did not exhibit this power in complex samples. To understand this better, we preselected 15,000 marker CpGs with the highest discriminative power between cell types (largest CpG-wise P value 2.91×10−44, ANOVA F test). A visual comparison of these CpGs between individual reference populations and whole-blood data (Additional file 2: Figure S18) clearly showed that they have a rather low variation across whole-blood samples. Indeed, only a relatively small proportion of marker CpGs also showed a high variance across whole-blood samples detectable by MeDeCom (see the row color code in Additional file 2: Figure S18). We conclude that CpGs which can be assigned to isolated cell types in purified myeloid and lymphoid lineages are less informative in complex samples, since their level of informative variation in an NMF-based analysis of whole blood is low. This may be a second reason why a series of LMCs recovered in whole blood and in the extreme cases of our simulations do not unambiguously match the reference methylomes.
Decomposition of the brain tissue methylomes
Next, we applied MeDeCom to examine the heterogeneity of tissue methylomes. The human brain is composed of many neuronal and glial cell types. Current studies apply FACS-based methods to separate glial cells and neurons. The RBFOX3 protein (also known as NeuN), localized in the nuclear membrane of most neuronal cells, is used as a selection marker. While the NeuN-enriched and NeuN-depleted cell fractions serve as references in methylome analysis, the question remains to which extent these separated methylomes represent the composition of whole-brain tissue.
We applied MeDeCom to 20 frontal cortex methylomes from a major depression disorder study [37] (data set FC1 in Table 1). The data set also included NeuN + and NeuN − cell fractions (data set PureN), which we analyzed in comparison to total brain tissue. In addition, we examined an independent bulk frontal cortex methylome data set from a recent large-scale Alzheimer's disease (AD) study [19] (data set FC2).
For both the FC1 and FC2 data sets, the inspection of the CVEs showed a substantial change at k≥3, strongly suggesting the existence of at least three main epigenetically distinct cell components (LMCs) (Fig. 4 a and Additional file 2: Figure S24). We carefully examined the factorization results and compared the three main LMCs at k=3 and λ=5×10−3 to the NeuN + and NeuN − profiles. Clustering analysis (Fig. 4 b) showed that the average NeuN − reference profile is related to LMC3, while the NeuN + profile is more similar to LMC2. The third component, LMC1, was clearly distinct from both reference methylomes, retaining a slightly higher similarity to LMC2 and the NeuN + methylome. All three LMCs were remarkably well reproduced in the independent FC2 data set at k=3 (Fig. 4 c).
Results for brain methylomes. a–d Decomposition of the FC1 data set. a Selection of parameters k and λ by cross-validation. b Matching frontal cortex LMCs to the reference NeuN +/− profiles. The dendrogram visualizes agglomerative hierarchical clustering analysis with a correlation-based distance measure and average linkage. c Matching of LMCs between FC1 and FC2. d Example of an LMC1-specific CpG (k=3) in the PAX6 locus. e, f AD-associated LMCs in the FC2 data set. e LMC2 is associated with the AD phenotype (Wilcoxon rank sum test P=3.1×10−4). f LMC2 is also significantly associated with the Braak stage (P=4.8×10−3, T test of the linear regression coefficient). g Clustering of the recovered LMCs for k=9 with the LMCs for k=3 and reference profiles. LMC2 belongs to the NeuN −-associated cluster. h Most significant gene ontology terms from the biological process category for the LMC2-associated hypermethylated genes
This finding indicates that the FACS separation of brain tissues into NeuN + and NeuN − cells introduces a new confounding variable. In most cases, the NeuN + and NeuN − fractions together do not fully recapitulate the methylomes of total brain tissue. To get more insight into the biological nature of the LMCs, we asked which loci differ in their methylation between the LMCs and examined the biological annotation of genes associated with LMC-specific CpGs. LMC-specific CpGs were selected to have methylation differences of more than 0.33 between one LMC and the two others (Additional file 4). We then mapped LMC-specific CpG positions to their neighboring genes (Additional file 4; see "Methods") and performed a functional annotation of the associated genes using GREAT [51] (Additional file 2: Figure S25). LMC2 (NeuN +)-specific CpGs map to genes with a clear enrichment for neuronal-related terms, while LMC3 (NeuN −)-specific CpGs were close to genes associated with non-neuronal, mostly oligodendrocyte-related, categories. LMC1-specific CpGs map close to genes associated with developmental and stem-cell-related terms. Strikingly, among the genes associated with LMC1, we found several markers of the neuronal stem-cell lineage, such as PAX6, ZIC1, ZIC4, and NEUROG1 (Additional file 4). Notably, the DNA methylation patterns at LMC1-specific CpGs showed significantly higher or lower methylation levels in crude brain tissue than in the NeuN + and NeuN − reference methylomes (see PAX6 as an example in Fig. 4 d and Additional file 2: Figure S26). Furthermore, a recent study on neuronal heterogeneity in the mouse brain [52] provided a reference for the fine cellular subtypes possibly present in the mammalian frontal cortex. We found several of the most significant LMC1-specific genes among the DMRs reported in [52] (Additional file 1: Table S3).
As outlined earlier, LMC proportions tended to be biased when k was significantly lower than optimal (see the WB1 analysis with k=2 above). We, therefore, explored the MeDeCom results at k=4 and λ=0.005 (Additional file 2: Figure S27). The analysis revealed that the NeuN +-specific LMC3 rather accurately reproduced the reference-estimated NeuN + content in most brain samples (Additional file 2: Figures S27a and S28a). However, samples with the highest deviation from the reference-based proportions had the highest proportion of co-purified cells (and methylomes) characteristic of LMC2 (equivalent to LMC1 for k=3; Additional file 2: Figures S27b and S28b). For k=4, two LMCs match NeuN −. For each of them, the proportions recovered by MeDeCom deviated significantly from the reference-based estimates for NeuN − (Additional file 2: Figure S28c and d). Nevertheless, the combined proportions largely reflected the reference-estimated NeuN − content across all samples (Additional file 2: Figure S28e). Again, we observed that samples with the lowest correspondence had a high contribution of LMC2 (Additional file 2: Figure S28f). The proportion analysis shows that, using MeDeCom, we can infer realistic LMC proportions for NeuN + and NeuN − in individual samples, as well as for a third separate LMC with a distinct cell composition. The latter LMC is variably convoluted into the other main NeuN + and NeuN − cell fractions in the reference-based analysis. We conclude that reference-independent decomposition is a very helpful approach for exploring, identifying, and quantifying heterogeneity effects across composite tissue samples and will allow us to obtain important and unbiased correction parameters for epigenetic studies of the brain.
Discovery and annotation of the AD-related LMCs
Finally, we applied MeDeCom to a phenotype-related analysis on samples for which reference methylome adjustment is impossible. To demonstrate the exploratory potential of MeDeCom over other methods in such a setting, we first tested MeDeCom on a simulated data set with an admixture of a rare cell population in one of the compared sample groups as the only phenotype-related effect (see "Methods"). In this example, MeDeCom correctly estimated the number of underlying methylation components and revealed the enrichment of the rare LMC only in the case group (Additional file 2: Figure S30). Encouraged by these results, we applied MeDeCom to an association analysis of the AD phenotype in the FC2 data set. The authors of the original study used a canonical CpG-wise approach to identify methylation changes associated with AD, with Braak staging as the main phenotypic readout. Standard linear modeling, using Braak stage as the response variable and correcting for sex and age at death, revealed residual inflation of significance, arguing for the presence of an unknown confounding variability component (Additional file 2: Figure S31a). A search for the strongest associations with LMC proportions across all obtained factorization solutions revealed that, for the decomposition with k=9 and λ=0.09, the proportions of some LMCs, in particular LMC2, were significantly correlated with both the AD phenotype and the Braak stage (Fig. 4 e and f). When we included the proportions of the three most significant LMCs as covariates in the association analysis, the remaining P value inflation was eliminated (Additional file 2: Figure S31b). When compared to the LMCs recovered at k=3, LMC2 was the closest to the NeuN −-related cluster (Fig. 4 g). We used GREAT to annotate the LMC2-specific CpG positions. Gene ontology terms with significant enrichment included rhombomere development, brain segmentation, nerve morphogenesis etc. (Fig. 4 h). We also observed an enrichment for gene promoters overlapping the vitamin D receptor and MEIS1 binding motifs (Additional file 2: Figure S32). LMC2 might, therefore, represent one or several cell populations that are enriched in AD samples; however, a more in-depth biological analysis and validation would be necessary to confirm this finding.
DNA methylomes of multicellular samples can be modeled as mixtures of several latent variables. Here we present a novel computational framework called MeDeCom, which decomposes complex DNA methylation data into latent components and sample-dependent proportions based on a mixture model for methylomes. We show that the method performs reproducibly and with high sensitivity on both synthetic and biological data sets.
MeDeCom provides significant advances compared to existing methods. First of all, our method does not require reference cell-type measurements. It can be applied to any DNA methylation data set to explore the composition of mixtures. Note that reference methylome data are not yet available for many cell types, and MeDeCom offers the possibility of exploring non-standard data in a reference-free manner. Second, MeDeCom differs conceptually from other reference-free methods, such as the surrogate variable analysis (SVA) methods [53, 54], EWASHER [40], or the SVA-inspired RefFreeEWAS [39] method. All these methods focus on correcting the significance analysis for a phenotypic trait of interest by calculating and eliminating confounding heterogeneity effects. In contrast, MeDeCom uses a variant of NMF specifically designed to recover latent DNA methylomes by using biologically motivated constraints and regularization. The imposed constraints on the factorization integrate biological prior knowledge, such as the non-negativity of the estimated methylation profiles and their proportions. However, we show that these constraints alone are not sufficient to obtain biologically meaningful methylation profiles and accurate estimates of their proportions. A key element distinguishing MeDeCom from other methods based on naive matrix factorization, in particular RefFreeCellMix [42], is that we add a regularizer encoding the prior expectation that most sites in the methylation profiles are close to zero or one. This expectation reflects the fact that, at the level of a single cell, methylation profiles are binary, and that for most CpG sites this also holds at the level of a homogeneous population of cells, such as a particular cell type. This allows us to estimate methylation profiles and their proportions simultaneously, without any reference profiles. In contrast to RefFreeCellMix, the employed regularizer enables MeDeCom to identify methylation profiles even in blood and brain tissue, where each sample is a heterogeneous mixture of different cell types.
Our proof-of-concept analysis shows that MeDeCom acts robustly and reliably on complex artificial and natural methylome mixtures measured by Infinium 450k arrays. MeDeCom identifies key signatures of major cell populations present in complex whole-blood and brain methylomes without any prior knowledge of references or data adjustment. However, our analysis also reveals the limits of a MeDeCom analysis. The method strongly depends on a sufficient number of discriminatory methylation positions and a sufficient level of sample-to-sample variation (Additional file 2: Figure S18). In complex 450k whole-blood methylomes, both parameters are affected such that a clean separation and assignment of LMCs specific for blood cell subtypes becomes challenging. Two major aspects are the likely causes of this difficulty. First, the Infinium 450k platform covers only a limited number of CpGs informative for the minor cell subtypes, which can easily become indistinguishable from the remaining technical noise of the 450k arrays. Second, the proportions of most cell subtypes in blood are too low. MeDeCom factorization requires a certain degree of sample-to-sample variation to identify component (cell type)-specific CpG signals. We noticed both of these limitations in our simulation analysis with artificial mixtures. In the future, these problems may be partially overcome by using WGBS/RRBS or extended array platforms such as the Methylation EPIC array, covering additional cell-type-specific variable CpG positions. Furthermore, cell-enrichment or cell-depletion strategies may help to obtain deeper sample-specific compositional insights.
Since MeDeCom does not require predefined references, it can be flexibly applied to any level of methylome analysis. We show that MeDeCom can facilitate a deeper insight into cell composition if the sample complexity is experimentally reduced. As one example, we investigated the composition of methylomes generated after cell preselection, e.g., by surface marker-based separation [44]. Our results on pre-sorted CD4+ (T cell) or CD19+ (B cell) blood cells clearly show that their methylomes still maintain a substantial level of heterogeneity. We identify a number of additional separable DNA methylation components, some of which we can associate with age-dependent changes in T-cell populations or show that they discriminate naive from primed B cells. In both cases, the characteristic CpG signatures vary in their sample-by-sample proportions. Such observations are very important for the biological interpretation of methylation changes across populations of samples. Many of the components identified by MeDeCom are likely to carry such biological information, which can be extracted for further exploration. Furthermore, we show that MeDeCom can, in principle, be extended to include prior information, such as known cell-type profiles and the approximate range of cell-type proportions for certain cell types (Additional file 5: Supplementary Note 2).
The decomposition of brain methylomes provided by MeDeCom further supports the usefulness of unsupervised exploratory decomposition for the analysis of complex methylome data. The separation of brain cells into neuronal and non-neuronal fractions has become a new standard procedure for brain-specific epigenetic studies in human postmortem samples. Our first finding shows that NeuN +/− mixture models do not fully capture the composition of the full brain tissue. In fact, we identify an additional component that differs from the NeuN + (neuron) and NeuN − (non-neuron)-specific components in full brain tissues. This new component is apparently sorted out or even lost in the enrichment procedure. Our analysis shows that the samples denominated as NeuN + and NeuN − contain variable contributions of this unknown cell fraction. Here, MeDeCom opens a new possibility for identifying the differences in cell composition and, hence, making data from different NeuN separations more comparable. Moreover, a biological analysis of the CpGs and genes associated with this new component reveals a strikingly different association of biological terms compared to the NeuN + and NeuN − fractions. Finally, we show that a phenotypic re-analysis of complex brain data sets using LMCs allows us to identify novel associations with cellular origin (neurons) and disease-state progression.
In summary, our analysis demonstrates that MeDeCom is a broadly applicable reference-free tool allowing us to explore complex data sets for confounding variables and, thus, to improve the biological interpretation of large-scale DNA methylation data sets. For this pilot demonstration, we exclusively used Infinium 450k data. In principle, MeDeCom is applicable to any complex methylome data set. However, since MeDeCom requires a low level of technical noise and a high level of biological variation, we suggest applying the method to carefully controlled data sets that fulfill these requirements. High-standard technical preprocessing of 450k array data minimizes possible pitfalls of quality, technical batch effects, or other non-biological issues. We, therefore, recommend using data after passing them through available bioinformatic pipelines (see, e.g., [55] or [56]).
MeDeCom element I: mixture model for DNA methylation measurements
Let \(D \in [0,1]^{m \times n}\) be the matrix of absolute methylation values of m CpGs obtained from n multicellular specimens, with m typically being much larger than n. Here, entry \(D_{ij}\) represents the methylation level of CpG i in specimen j, with i=1,…,m and j=1,…,n. We consider an approximate low-rank model for D, assuming that the cell populations of the samples consist of a finite number of sub-populations, each contributing a distinctive methylation profile. We also assume that the population mixtures are similar but slightly variable across biological samples collected in the same manner. Both assumptions suggest that the methylation profiles of the samples are a weighted average (mixture) of the methylation profiles associated with the underlying cell types, where the weights equal the proportions of these cell types. Note that we verified this concept in our analysis with artificial cell mixtures. Our matrix factorization model,
$$ D = TA + E, \tag{1} $$
represents this concept, where \(T \in [0,1]^{m \times k}\) represents the methylation profiles of k cell prototypes or other recurrent variables (in most cases representing a specific cell type) and \(A \in \mathbb{R}_{+}^{k \times n}\) such that \(A^{\top}\mathbf{1}_{k}=\mathbf{1}_{n}\) (i.e., the entries of A are non-negative and its columns sum to one). Entry \(T_{is}\) equals the methylation level of CpG i in prototype s, with i=1,…,m and s=1,…,k, while \(A_{sj}\) equals the relative abundance (proportion) of prototype s in specimen j. The matrix E represents errors, capturing model misspecification and noise arising from the measurement process. Note that the biologically motivated constraints for T and A distinguish our model from other low-rank models as they are used for adjustment of the phenotype association analysis [39–41]. Notably, (1) can be seen as an approximation of a more general constructive or exact model, while the emerging approximation error can be estimated analytically (see Fig. 1 a and Additional file 5: Supplementary Note 1).
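For illustration, the following minimal R sketch generates a data matrix according to model (1); all dimensions and distributional choices are illustrative and not part of the MeDeCom implementation:

```r
# Minimal sketch of the mixture model D = TA + E (all values illustrative).
set.seed(1)
m <- 1000; n <- 50; k <- 3                          # CpGs, specimens, LMCs
T_true <- matrix(rbeta(m * k, 0.3, 0.3), m, k)      # near-binary LMC profiles
A_true <- matrix(rgamma(k * n, shape = 1), k, n)
A_true <- sweep(A_true, 2, colSums(A_true), "/")    # columns sum to one
E <- matrix(rnorm(m * n, sd = 0.05), m, n)          # measurement noise
D <- pmin(pmax(T_true %*% A_true + E, 0), 1)        # clip to [0, 1]
```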
MeDeCom element II: model fitting
Using a straightforward least-squares approach to fit model (1) yields the optimization problem:
$$ \begin{array}{rl} \min_{T,A} & \|D - TA\|_{F}^{2} = \sum_{i=1}^{m} \sum_{j=1}^{n} \left(D_{ij} - (TA)_{ij}\right)^{2} \\ \text{subject to} & 0 \leq T_{is} \leq 1 \;\; \forall i,s \\ & A_{sj} \geq 0 \;\; \forall s,j \\ & \sum_{s=1}^{k} A_{sj} = 1 \;\; \forall j. \end{array} \tag{2} $$
Here and in the following, \(\|\cdot\|_{F}\) denotes the Frobenius norm of a matrix, defined as the square root of the sum of squares of its entries. We may think of the above problem as an instance of blind source separation, a task that has been well studied in signal processing [57]. The attribute blind expresses that the source signals, represented by the columns of the matrix T, are unknown, as opposed to being given in advance so that only the mixture coefficients in A need to be recovered.
The minimization problem in (2) is not jointly convex in T and A. As a result, one cannot hope to always converge to the global optimum; in fact, it has been shown that constrained matrix factorization problems of this form are computationally hard in general [58].
Once T or A is fixed, problem (2) reduces to a convex quadratic program. This property is the basis of alternating minimization, a common (heuristic) approach for fitting matrix factorization models, where one alternates minimization with respect to T for fixed A and vice versa [59]. While lacking theoretical guarantees, alternating minimization often works very well in practice.
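As an illustration of these two convex sub-steps, the following R sketch implements one alternating pass using the quadprog package; the function names and the small ridge term (added here only to keep the quadratic forms numerically positive definite) are our own and do not reproduce the actual MeDeCom code:

```r
library(quadprog)  # solve.QP solves small convex quadratic programs

# A-step: for each sample, solve a QP over the probability simplex.
update_A <- function(D, T, eps = 1e-8) {
  k <- ncol(T)
  H <- crossprod(T) + eps * diag(k)        # ridge keeps H positive definite
  Amat <- cbind(rep(1, k), diag(k))        # sum-to-one (equality) and A >= 0
  apply(D, 2, function(d)
    solve.QP(H, crossprod(T, d), Amat, c(1, rep(0, k)), meq = 1)$solution)
}

# T-step: for each CpG (row), solve a box-constrained QP with 0 <= T <= 1.
update_T <- function(D, A, eps = 1e-8) {
  k <- nrow(A)
  H <- tcrossprod(A) + eps * diag(k)
  Amat <- cbind(diag(k), -diag(k))         # t >= 0 and -t >= -1
  t(apply(D, 1, function(d)
    solve.QP(H, A %*% d, Amat, c(rep(0, k), rep(-1, k)))$solution))
}
```

Alternating update_T and update_A until the objective stops decreasing yields a local solution of (2).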
Note that, independently of our work, Houseman et al. [42] recently proposed RefFreeCellMix, an approach similar to (2). A rather minor difference is that in RefFreeCellMix the equality constraint \(\sum_{s=1}^{k} A_{sj} = 1\) is replaced with the inequality constraint \(\sum_{s=1}^{k} A_{sj} \leq 1\). Thus, the components of A estimated by RefFreeCellMix cannot be interpreted as the proportions of the corresponding methylation profiles. Moreover, we will argue in the following that the direct use of approach (2) ignores valuable prior biological information about the problem, which has an adverse effect on the estimation of the proportions A and the methylation profiles T and leads to considerably worse solutions.
The main problem of (2) is ill-posedness. In general, there are multiple optimal solutions to (2) (excluding those generated by simultaneous permutations of the columns of T and the rows of A), as can easily be seen from geometric considerations (see Fig. 1 c). In geometric terms, problem (2) can be rephrased as follows: find a set of k points \(\{t_{1},\ldots,t_{k}\} \subset [0,1]^{m}\) corresponding to the columns of T such that their convex hull \(\mathcal{T} = \{y \in \mathbb{R}^{m}: y = \sum_{s=1}^{k} \lambda_{s} t_{s}, \; \lambda_{s} \geq 0 \; \forall s, \; \sum_{s=1}^{k} \lambda_{s} = 1\}\) minimizes the sum of squared Euclidean distances of the data points \(\{D_{:,1},\ldots,D_{:,n}\}\) to that convex hull. As shown in Fig. 1 c, one can easily construct problem instances for which it is possible to extend or shrink \(\mathcal{T}\) while keeping the least-squares objective (essentially) unchanged. Note that a solution from RefFreeCellMix or one from our model without the regularizer (λ=0) will be far away from the ground truth and, thus, have gross errors both in the estimation of the proportions A and the profiles T.
To deal with this ambiguity, we suggest complementing the least-squares objective with a biologically plausible regularizing term pushing the points {t 1,…,t k } towards the vertex set of [ 0,1]m, i.e., the set of binary vectors {0,1}m. The rationale behind this is as follows. Recall that the columns of T take the role of methylation profiles of prototypes, which in typical cases represent a (near) homogeneous sub-population of cells. Depending on the homogeneity of the sub-population, the methylation profile of the corresponding prototype may be close to binary since at the level of a single cell, methylation profiles are exactly binary (methylated vs unmethylated) when ignoring the comparatively rare case of half-methylation. Incorporating this structure contributes significantly to the success in finding biologically meaningful matrices T and A. Specifically, we consider the following regularized least-squares criterion:
$$ \begin{array}{rl} \min_{T,A} & \|D - TA\|_{F}^{2} + \lambda \sum_{i=1}^{m} \sum_{s=1}^{k} \omega(T_{is}), \quad \text{with } \omega(x) = x(1-x), \\ \text{subject to} & 0 \leq T_{is} \leq 1 \;\; \forall i,s \\ & A_{sj} \geq 0 \;\; \forall s,j \\ & \sum_{s=1}^{k} A_{sj} = 1 \;\; \forall j, \end{array} \tag{3} $$
where λ≥0 is a hyperparameter. Note that \(\omega: [0,1] \to [0,1]\) is a quadratic function symmetric around its mode 0.5 (i.e., ω(x)=ω(1−x)) that vanishes at the boundary points 0 and 1. The additional regularization term in (3) acts as a soft binary constraint depending on the parameter λ. For λ sufficiently large, any minimizer \((\widehat{T}, \widehat{A})\) of (3) must satisfy \(\widehat{T}_{is} \in \{0,1\}\) for all i,s. We stress that this form of regularization is much better suited to the given problem than the popular lasso (ℓ1 regularization with ω(x)=|x|), which promotes zeros but discourages ones and, thus, has little meaning for the given problem from a biological perspective.
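In code, the regularized objective of (3) is a one-liner; the following R sketch is a direct transcription for illustration:

```r
# Regularizer omega and the full objective of (3); larger lambda pushes
# the entries of T towards the binary values 0 and 1.
omega <- function(x) x * (1 - x)   # vanishes at 0 and 1, peaks at x = 0.5

objective <- function(D, T, A, lambda) {
  sum((D - T %*% A)^2) + lambda * sum(omega(T))
}
```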
We would like to stress again that the introduction of this regularizer constitutes a key prerequisite for obtaining biologically meaningful solutions for the matrices T and A. While (2) and RefFreeCellMix work reasonably well if the methylation profiles of the pure cell types are present as samples in the data matrix D, this approach can fail completely if the measured samples consist only of mixtures of cell types, as shown in the artificial NeuN +/− mixture experiment. The reason for the poor performance of RefFreeCellMix is that it essentially interprets the mixtures (0.3,0.7) and (0.7,0.3) as columns of T, whereas the regularizer proposed in the present paper pushes the entries of T towards 0 or 1 and, thus, can estimate the correct profiles and their proportions accurately.
From the computational standpoint, the extra term in (3) poses an additional challenge compared to (2), as the function ω is non-convex (in fact, it is concave). As a consequence, when using the alternating minimization scheme mentioned above, one has to bear in mind that optimizing T for fixed A is no longer a convex quadratic program, but a so-called difference-of-convex program by virtue of the concavity of ω. The concave–convex procedure [60, 61] can be employed to generate a sequence of iterates ensuring monotonic descent of the objective function until a stationary point is reached. As detailed in Algorithm 1, it is straightforward to integrate this approach into the alternating optimization scheme.
The main computational effort goes into the successive solution of the convex quadratic optimization problems optT and optA, which can be handled by a variety of efficient solvers. Updating T follows the concave–convex procedure, in which the concave part of the objective (here given by h(T)) is repeatedly linearized, yielding a sequence of convex surrogate minimization problems.
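To illustrate the linearization, the following R sketch performs a single surrogate update for one row of T; it reuses the quadprog-based QP setup from the sketch above, and the function name and the small ridge term are our illustrative additions, not part of the MeDeCom implementation:

```r
# One concave-convex surrogate step for a single row of T (sketch):
# the concave penalty omega is linearized at the current iterate t0,
# which leaves a convex box-constrained QP in t.
ccp_row_update <- function(d, A, t0, lambda, eps = 1e-8) {
  k <- nrow(A)
  H <- tcrossprod(A) + eps * diag(k)             # quadratic part
  dvec <- A %*% d - (lambda / 2) * (1 - 2 * t0)  # gradient of omega is 1 - 2x
  Amat <- cbind(diag(k), -diag(k))               # box constraints 0 <= t <= 1
  quadprog::solve.QP(H, dvec, Amat, c(rep(0, k), rep(-1, k)))$solution
}
# Iterating this update row by row decreases the objective monotonically.
```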
MeDeCom element III: parameter selection
The mixture model (1) and the fitting algorithm (Algorithm 1) involve two free parameters to be provided by the user. The inner dimension k of the matrix product TA, k≤ min{m,n} in (1), equals the number of DNA methylation prototypes used to model the given data. The regularization parameter λ determines how strongly the entries of \(\widehat {T}\) are encouraged to take values in {0,1}. The choice of k can be guided by prior (biological) knowledge about the possible composition of the underlying mixture. However, to select the optimal values of k and λ, we developed a cross-validation procedure.
Cross-validation
Typical approaches to cross-validation in matrix factorization are (a) leaving out columns, (b) leaving out rows, and (c) leaving out both rows and columns [62]. We decided to use (a) since it leads to a straightforward scheme, as displayed in Algorithm 2. For each fold, a subset of the samples is left out, and the column-reduced data matrix \(D^{\text{in}}\) is factorized as if one were given the full matrix. The resulting left factor \(\widehat{T}^{\text{in}}\) is used to fit the left-out columns in \(D^{\text{out}}\) as \(D^{\text{out}} \approx \widehat{T}^{\text{in}} \widehat{A}^{\text{out}}\). The squared error of that approximation, the CVE, is saved and finally combined with the errors from the other folds.
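A schematic R version of this leave-columns-out procedure is shown below; factorize and fit_proportions are hypothetical stand-ins for the fitting steps of Algorithm 1, not functions of the MeDeCom package:

```r
# Schematic leave-columns-out cross-validation (in the spirit of Algorithm 2).
# `factorize` and `fit_proportions` are hypothetical stand-ins.
cv_error <- function(D, k, lambda, n_folds = 10) {
  folds <- sample(rep(seq_len(n_folds), length.out = ncol(D)))
  errs <- sapply(seq_len(n_folds), function(f) {
    D_in  <- D[, folds != f, drop = FALSE]
    D_out <- D[, folds == f, drop = FALSE]
    T_in  <- factorize(D_in, k, lambda)$T      # fit on the retained columns
    A_out <- fit_proportions(D_out, T_in)      # convex QP with T_in fixed
    sum((D_out - T_in %*% A_out)^2)            # squared error on held-out data
  })
  sum(errs)                                    # combined CVE over all folds
}
```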
Selecting k
The choice of k is critical for the good performance of our model. In some instances, such as for the synthetic mixtures, the number of cell populations is known and the optimal selection of k is straightforward. However, for most biological samples, prior knowledge of the cell-type composition and other variables is not available or can only be estimated. Moreover, a number of other variable effects, such as age, gender, genetic background, allelic variation etc., have to be included to obtain an interpretable LMC separation. We observe that k should be chosen such that the estimation error and the approximation error in model (1) are roughly balanced. The former results from noise and is incurred when fitting the model to the data, while the latter is a consequence of model misspecification, which, as discussed above, is inevitable for limited k given the many possible sources and levels of variance.
Statistically, the selection of k is related to the selection of the number of components in a principal component analysis (PCA). In fact, the matrix factorization model (1) can be seen as a method of linear dimension reduction applied to D. A common computational approach to PCA is singular value decomposition (SVD), which yields a rank-k matrix factorization of D by discarding all singular vectors not corresponding to the top k singular values. A notable advantage of our scalable model (1) over the truncated SVD/PCA is its direct interpretability at a biological level, which is achieved by putting suitable constraints on the two factors T and A.
For a fixed value of the parameter λ, the data-fitting term of the factorization problem (3) decreases as k increases. The approximation error of the factorization model decreases since with more columns in T, one has a better chance of capturing differences between the cluster methylomes. At the same time, the estimation error increases as the additional degrees of freedom favor over-adaptation to noise. A suitable choice of k balances both effects. The use of cross-validation is intended to achieve this balance by tracing the CVE over a grid of values for k and selecting the one corresponding to the minimum. The final choice of k is made by combining visual inspection of the cross-validation results and available prior information about the most likely number of underlying methylation signatures.
Selecting λ
As shown in our example in Fig. 1 b, the regularization parameter λ, which balances the trade-off between the data fidelity term and the data-independent regularization term, has a crucial influence on the solution of the factorization problem (3) delivered by Algorithm 1. Since there is, in general, no objective criterion to assess the suitability of each solution at a biological level, we use the CVE, as for the parameter k. Determining a minimum CVE for λ is difficult as this parameter takes values in a continuous domain, namely the non-negative real line. To approach this, we perform a two-stage grid search, starting with a coarse grid and then concentrating on a smaller range covered by a finer grid. Details of the procedure are outlined in Algorithm 3. In each of the two rounds of the grid search, Algorithm 1 is run for each grid point of λ using multiple (≈50) random initializations. As the solutions corresponding to nearby grid points can be expected to be similar, we complement random initializations with a smoothing scheme in which the solutions of the five preceding and the five subsequent grid points are used for initialization.
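The two-stage search can be sketched as follows, reusing the cv_error stand-in from above with an illustrative fixed k; the grid boundaries are arbitrary examples:

```r
# Two-stage grid search over lambda (sketch; cv_error as defined above).
coarse <- 10^seq(-5, -1)                        # coarse grid over lambda
cve_coarse <- sapply(coarse, function(l) cv_error(D, k = 5, lambda = l))
centre <- coarse[which.min(cve_coarse)]
fine <- centre * 2^seq(-3, 3, by = 0.5)         # finer grid around the minimum
cve_fine <- sapply(fine, function(l) cv_error(D, k = 5, lambda = l))
lambda_opt <- fine[which.min(cve_fine)]
```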
Computational performance
When m≫n, the computational burden is dominated by the optimization step over T, which scales in the worst case as \(O(nmk^{3})\), where \(O(k^{3})\) is the worst-case complexity of solving a quadratic program of size k, which in practice often behaves better. However, the optimization of the rows of T can be done independently and, thus, we have parallelized this step, leading to an almost linear speed-up on multi-core machines. Moreover, we have parallelized all the independent runs performed for cross-validation and for finding a good regularization parameter. While still computationally demanding, the method is in this way scalable to large data sets, both in the number of CpG sites m and the number of samples n. RefFreeCellMix is faster than MeDeCom as it does not have to test different regularization parameters. However, a single factorization for λ=0 is faster in MeDeCom.
LMC matching
As a first interpretation level, we propose matching MeDeCom LMC results of unknown samples to reference profiles, which can be either methylomes of purified cell types or other LMCs. Given a matrix of k LMCs \(\widehat{T}\) estimated from a data set D and a matrix of \(k^{\star}\) reference profiles \(T^{\star}\), we first select a set of rows \(\mathcal{R}\) corresponding to the overlap of CpGs present in both \(\widehat{T}\) and \(T^{\star}\). We then compute the matrix \(S=(S_{i,j})\) of Pearson correlation coefficients between all pairs of vectors \(\widehat{T}_{\mathcal{R},i}\) and \(T^{\star}_{\mathcal{R},j}\). We consider LMC \(\bar{i}\) a match to reference profile \(\bar{j}\) if \(S_{\bar{i},\bar{j}}=\max_{i} S_{i,\bar{j}}\). We consider the matching unambiguous when \(S_{\bar{i},\bar{j}}=\max_{j} S_{\bar{i},j} = \max_{i} S_{i,\bar{j}}\) for all such matching pairs \((\bar{i},\bar{j})\). In most cases, we observe better matching when both \(\widehat{T}\) and \(T^{\star}\) are centered, i.e., \((1/k)\widehat{T}\mathbf{1}_{k}\) (respectively \((1/k^{\star})T^{\star}\mathbf{1}_{k^{\star}}\)) is subtracted from each column. To compare sets of prototypes corresponding to different parameter settings, we normalize the total number of unambiguously matching prototypes by the achievable maximum, which yields a score \(\epsilon \in [0,1]\) given by \(\epsilon = \frac{1}{\min(k,k^{\star})} \left|\left\{ (\bar{i},\bar{j}) \in \{1,\ldots,k\} \times \{1,\ldots,k^{\star}\} : S_{\bar{i},\bar{j}}=\max_{j} S_{\bar{i},j}\ \text{and}\ S_{\bar{i},\bar{j}} = \max_{i} S_{i,\bar{j}} \right\}\right|\).
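The matching score can be computed in a few lines of R; the following sketch assumes that the rows of both matrices have already been restricted to the common CpG set \(\mathcal{R}\):

```r
# Mutual-best-hit matching of LMCs to reference profiles (sketch).
match_score <- function(T_hat, T_ref) {
  # optional row-centering, as described in the text:
  # T_hat <- T_hat - rowMeans(T_hat); T_ref <- T_ref - rowMeans(T_ref)
  S <- cor(T_hat, T_ref)                  # k x k_star Pearson correlations
  best_ref <- apply(S, 1, which.max)      # best reference hit for each LMC
  best_lmc <- apply(S, 2, which.max)      # best LMC hit for each reference
  mutual <- sum(best_lmc[best_ref] == seq_along(best_ref))  # unambiguous pairs
  mutual / min(ncol(T_hat), ncol(T_ref))  # score epsilon in [0, 1]
}
```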
At the next level, we propose a combined clustering analysis of LMC prototypes and reference profiles. For that, we compose a matrix \(T^{\dagger} = [\widehat{T}_{\mathcal{R},:} \; T^{\star}_{\mathcal{R},:}]\). We compute a correlation matrix \(S^{\dagger}\) analogously to S and use the corresponding correlation-based distances for agglomerative hierarchical clustering with average linkage (function hclust from the R package stats).
Functional annotation of LMC-specific CpG positions
On a third level, we propose a functional annotation of the recovered LMCs by selecting component-specific CpG positions using a fixed methylation difference threshold θ. We consider a CpG position l∈{1,…,m} to be specific to component i if \(|\widehat{T}_{l,i}-\frac{1}{k-1}\sum_{j \ne i} \widehat{T}_{l,j}| > \theta\). We investigate each set \(\mathcal{L}_{i}\) of all such CpGs with respect to enrichment of annotation categories using GREAT [51]. In general, we use the default definition of the functional domain of a gene, with a maximal distance of 10 kb upstream or downstream of the transcriptional start site (the "two closest genes" option in GREAT).
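Under this definition (with the difference taken against the mean of the remaining components), the selection reduces to a simple thresholding step; a sketch:

```r
# CpGs specific to component i: methylation difference of more than theta
# between LMC i and the mean of the remaining LMCs (sketch).
lmc_specific_cpgs <- function(T_hat, i, theta = 0.33) {
  others <- rowMeans(T_hat[, -i, drop = FALSE])
  which(abs(T_hat[, i] - others) > theta)
}
```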
Reference-based estimation of cell-type proportions
If a matrix T of k prototype methylomes is available, e.g., experimentally obtained using cell separation methods, one can estimate a corresponding matrix of mixture proportions by solving sub-problem optA in Algorithm 1. From here onwards, we refer to this method as regression, and we apply it for reference-based estimation of mixture proportions whenever the reference methylomes are available. This form of proportion estimation is similar to a method called constrained projection proposed for the same purpose in [33]. The important difference is, however, that the analogue of the matrix T in that method is constructed from a comparatively small selection of cell-type-specific marker CpGs. In the following, we compare to its proportion estimates whenever appropriate.
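In terms of the alternating-minimization sketch shown earlier, this amounts to a single A-step with the reference profiles held fixed (update_A as defined in that sketch; T_ref is a matrix of reference methylomes):

```r
# Reference-based "regression": one A-step of Algorithm 1 with fixed T.
A_ref <- update_A(D, T_ref)   # columns hold per-sample proportion estimates
```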
Application of RefFreeCellMix
We performed reference-free deconvolution with the method RefFreeCellMix by Houseman et al. [42] using the R package RefFreeEWAS. In accordance with the original publication of the method [42], we applied it to the 20,000 most variable CpG positions from the methylation matrix, unless the total number of rows was smaller, in which case we used the full matrix. In the former case, we used the available option to obtain the estimates of the methylation components for all CpGs as the final step of the deconvolution procedure (supplying the complete data matrix as argument Yfinal).
Simulated DNA methylation data
For the performance analysis, we generated simulated DNA methylation data by mixing measured profiles of isolated cell types in controlled proportions and adding varying levels of Gaussian noise. An m×n matrix of DNA methylation values \(D_{\text{sim}}\) was generated according to the model in (1).
The underlying matrix of LMCs \(T \in [0,1]^{m \times k_{\text{sim}}}\) was obtained by averaging methylation profiles for \(k_{\text{sim}}\) purified blood cell types from six donors in the Reinius et al. study [44]. We tested four different constellations of blood cell types:
\(k_{\text{sim}}=2\) with two distant cell types (neutrophils and CD4+ T cells).
\(k_{\text{sim}}=2\) with two similar cell types (neutrophils and monocytes).
\(k_{\text{sim}}=3\) with two similar cell types and one distant from the first two (neutrophils, monocytes, and CD4+ T cells).
\(k_{\text{sim}}=5\) with all major blood cell types, excluding eosinophils and B cells.
The columns of the matrix of mixture proportions A were sampled from a Dirichlet distribution, commonly used to model distributions over the probability simplex. The distribution had \(k_{\text{sim}}\) parameters \(v\alpha_{1},\dots,v\alpha_{k_{\text{sim}}}\). The simplex base \(\alpha_{1},\dots,\alpha_{k_{\text{sim}}}\), \(\sum_{i} \alpha_{i}=1\), was chosen to model the prior expectation for the mixing proportions in a typical individual. We tested two scenarios: on average equal (uniform) proportions across individuals, i.e., \(\alpha_{i}=1/k_{\text{sim}}\), i=1,…,\(k_{\text{sim}}\), and a setting where some concentration parameter values were much larger than others, which comes closer to the situation one encounters for whole blood. The scaling factor v was used to control the variability of the mixing proportions, with v=1 yielding highly variable, v=10 moderately variable, and v=100 marginally variable proportions across individuals. Finally, the additive noise term E was generated by sampling mn values from a Gaussian distribution with mean 0 and standard deviation 0.05, 0.1, and 0.2 to simulate low, moderate, and high levels of noise, respectively.
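For illustration, Dirichlet-distributed proportions with the three variability levels can be drawn via independent gamma variables; the helper below is a standard construction, not part of the MeDeCom code:

```r
# Dirichlet sampling of mixing proportions; the scaling factor v controls
# the sample-to-sample variability of the proportions.
rdirichlet <- function(n, alpha) {              # Dirichlet via gamma draws
  g <- matrix(rgamma(n * length(alpha), shape = alpha), ncol = n)
  sweep(g, 2, colSums(g), "/")                  # normalize columns to sum one
}
k_sim <- 5; n <- 100
alpha <- rep(1 / k_sim, k_sim)                  # uniform simplex base
A_high <- rdirichlet(n, 1   * alpha)            # v = 1: highly variable
A_mod  <- rdirichlet(n, 10  * alpha)            # v = 10: moderately variable
A_low  <- rdirichlet(n, 100 * alpha)            # v = 100: marginally variable
```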
To simulate true methylation effects of average size δ for \(m_{e} \ll m\) CpGs in cell type l under a simple case vs control setting, the source cell-type-specific methylation profiles of the affected samples (cases) were changed to mimic DNA methylation differences. More specifically, a set \(\mathcal{C}_{e}\) of affected CpGs was randomly sampled from 1,…,m, and a matrix \(T^{e}\) was obtained so that \(T^{e}_{l,u} = T_{l,u} + \mathcal{N}(\delta,\sigma)\, \mathcal{I}_{\mathcal{N}(0,1)>0}\) for \(u \in \mathcal{C}_{e}\). The simulated effect on the proportion of cell type l was introduced by changing the parameter \(\alpha_{l}\) of the Dirichlet distribution for one sample group only.
Infinium 450k data
Public Infinium 450k data sets
The publicly available data sets used to validate the factorization approach are summarized in Table 1. To test MeDeCom for blood-based data, we used one reference data set and data from two large whole-blood-based studies. The data set from Reinius et al. contains profiles of purified blood cell types, as well as mixed samples with known cell counts [44]. In addition, we used data from a large rheumatoid arthritis EWAS with 354 cases and 337 controls [35]. Finally, we validated the whole-blood results in the data from the EPIC Italy prospective cohort, which provided 845 Infinium 450k measurements [45]. Neuronal data sets were obtained from one reference study and one large AD cohort. As a reference, we used data from the CETS study [37], which contained in total 145 Infinium 450k profiles of various neuronal samples from major depression disorder patients and healthy controls, such as cortical NeuN +- and NeuN −-enriched cell populations, nine artificial NeuN +/− titration mixtures, as well as 20 intact frontal cortex samples. For validation, we used data from a recent AD study [19].
Processing and preparation of the Infinium 450k data
The raw Infinium 450k data were collected as IDAT files or, if the latter were not available, from probe-wise intensity matrices (Illumina Genome Studio reports). Loading and primary processing, such as intensity summarization and methylation ratio (β value) calling, were performed with the RnBeads package [55]. We used dasen as the primary normalization method [63]. We used several layers of filtering criteria to eliminate low-quality probes. We required each methylation call to be supported by at least five Infinium beads. Since too low and too high probe intensities may indicate measurement problems, we discarded CpGs where the raw intensity of either the methylated or the unmethylated probe was below the 0.1 or above the 0.9 quantile of the total intensity distribution in the respective channel. To diminish the effects of genetic variation, we also discarded CpGs with probes that overlapped annotated single-nucleotide polymorphism positions (dbSNP132 entries with MAF >0.05, as defined in the RnBeads.hg19 annotation) along the whole probe sequence.
Adjustment of the phenotype association analysis
For consistency with the published results, we performed the association analysis using the code that we obtained from the authors of the ReFACTor paper [41]. For the unadjusted analysis, a logistic linear model was fitted for each CpG site, with the phenotype (rheumatoid arthritis status) as the response variable and the methylation level as the only predictor. The T test of the predictor variable coefficient being different from zero was used as the test of association. For the adjusted analysis, an ordinary linear model was first fitted to the methylation data for each CpG using the common covariates, such as age, gender, smoking status, and the experimental batch, as predictors. The residuals of this model were then used to fit the phenotype model instead of the actual methylation values. The adjustment for cell composition was performed either via a specialized statistical procedure (RefFreeEWAS [39] and Fast-LMM-EWASher [40]) or by including additional covariate variables reflecting the compositional variation. In the reference-based adjustment, unconstrained cell-type contribution estimates obtained with the Houseman et al. method [33] were added to the covariate list. For RefFreeEWAS and Fast-LMM-EWASher, no custom modeling was performed; instead, the data and common covariates were supplied directly to the published implementations and the output P values were used for the comparison. When adjusting using ReFACTor and RefFreeCellMix, the columns of the recovered matrices R and Ω were included, respectively. For MeDeCom, the LMC proportions were used as covariates. To decrease the complexity for large k, we considered including only the k′ LMCs with, on average, the largest proportions across all samples. The efficiency of the adjustment was assessed by comparing the observed distribution of P values to the expected one under the assumption that none of the tested null hypotheses is false (which corresponds to a uniform distribution).
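As a sketch of the adjusted per-CpG test (object names are illustrative; the actual analysis used the ReFACTor authors' code): residuals of the covariate model, which here includes the LMC proportions, are tested against the phenotype in a logistic model.

```r
# Adjusted per-CpG association test (sketch): common covariates and LMC
# proportions are regressed out of the methylation values, and the
# residuals are tested against the binary phenotype.
adjusted_pvals <- function(D, pheno, covars, lmc_props) {
  X <- cbind(covars, t(lmc_props))     # covariates plus LMC proportions
  apply(D, 1, function(meth) {
    resid_meth <- resid(lm(meth ~ X))  # remove covariate effects
    fit <- glm(pheno ~ resid_meth, family = binomial)
    summary(fit)$coefficients["resid_meth", 4]   # P value of the association
  })
}
```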
CVE: Cross-validation error
EWAS: Epigenome-wide association study
FACS: Fluorescence-activated cell sorting
GEO: Gene Expression Omnibus
LMC: Latent (DNA) methylation component
MAE: Mean absolute error
MDD: Major depression disorder
NMF: Non-negative matrix factorization
PCA: Principal component analysis
RMSE: Root-mean-square error
SVA: Surrogate variable analysis
SVD: Singular value decomposition
MACS: Magnetic-activated cell sorting
WGBS: Whole-genome bisulfite sequencing
DMR: Differentially methylated region
MAF: Minor allele frequency
Schübeler D. Function and information content of DNA methylation. Nature. 2015; 517(7534):321–6. doi:10.1038/nature14192.
Pelizzola M, Ecker JR. The DNA methylome. FEBS Lett. 2011; 585(13):1994–2000. doi:10.1016/j.febslet.2010.10.061.
Roadmap Epigenomics Consortium, Kundaje A, Meuleman W, Ernst J, Bilenky M, Yen A, et al. Integrative analysis of 111 reference human epigenomes. Nature. 2015; 518(7539):317–30. doi:10.1038/nature14248.
Reik W, Dean W, Walter J. Epigenetic reprogramming in mammalian development. Science. 2001; 293(5532):1089–93. doi:10.1126/science.1063443.
Baron U, Türbachova I, Hellwag A, Eckhardt F, Berlin K, Hoffmuller U, et al. DNA methylation analysis as a tool for cell typing. Epigenetics. 2006; 1(1):55–60. doi:10.4161/epi.1.1.2643.
Ji H, Ehrlich LIR, Seita J, Murakami P, Doi A, Lindau P, et al. Comprehensive methylome map of lineage commitment from haematopoietic progenitors. Nature. 2010; 467(7313):338–42. doi:10.1038/nature09367.
Shoemaker R, Deng J, Wang W, Zhang K. Allele-specific methylation is prevalent and is contributed by CpG-SNPs in the human genome. Genome Res. 2010; 20(7):883–9. doi:10.1101/gr.104695.109.
Christiansen J, Kolte AM, Hansen TO, Nielsen FC. IGF2 mRNA-binding protein 2: biological function and putative role in type 2 diabetes. J Mol Endocrinol. 2009; 43(5):187–95. doi:10.1677/JME-09-0016.
Lee KWK, Pausova Z. Cigarette smoking and DNA methylation. Front Genet. 2013; 4:132. doi:10.3389/fgene.2013.00132.
Horvath S. DNA methylation age of human tissues and cell types. Genome Biol. 2013; 14(10):115. doi:10.1186/gb-2013-14-10-r115.
Baylin SB. DNA methylation and gene silencing in cancer. Nat Clin Pract Oncol. 2005; 2 Suppl 1:4–11. doi:10.1038/ncponc0354.
Esteller M. Cancer epigenomics: DNA methylomes and histone-modification maps. Nat Rev Genet. 2007; 8(4):286–98. doi:10.1038/nrg2005.
Bernstein BE, Stamatoyannopoulos JA, Costello JF, Ren B, Milosavljevic A, Meissner A, et al. The NIH Roadmap Epigenomics Mapping Consortium. Nat Biotechnol. 2010; 28(10):1045–8. doi:10.1038/nbt1010-1045.
Michels KB, Binder AM, Dedeurwaerder S, Epstein CB, Greally JM, Gut I, et al. Recommendations for the design and analysis of epigenome-wide association studies. Nat Methods. 2013; 10(10):949–55. doi:10.1038/nmeth.2632.
Lam LL, Emberly E, Fraser HB, Neumann SM, Chen E, Miller GE, et al. Factors underlying variable DNA methylation in a human community cohort. Proc Natl Acad Sci. 2012; 109(Supplement_2):17253–60. doi:10.1073/pnas.1121249109.
Zhang D, Cheng L, Badner JA, Chen C, Chen Q, Luo W, et al. Genetic control of individual differences in gene-specific methylation in human brain. Am J Hum Genet. 2010; 86(3):411–9. doi:10.1016/j.ajhg.2010.02.005.
Zhang Z, Tang H, Wang Z, Zhang B, Liu W, Lu H, et al. MiR-185 targets the DNA methyltransferases 1 and regulates global DNA methylation in human glioma. Mol Cancer. 2011; 10(1):124. doi:10.1186/1476-4598-10-124.
Kaut O, Schmitt I, Wüllner U. Genome-scale methylation analysis of Parkinson's disease patients' brains reveals DNA hypomethylation and increased mRNA expression of cytochrome P450 2E1. Neurogenetics. 2012; 13(1):87–91. doi:10.1007/s10048-011-0308-3.
Lunnon K, Smith R, Hannon E, De Jager PL, Srivastava G, Volta M, et al. Methylomic profiling implicates cortical deregulation of ANK1 in Alzheimer's disease. Nat Neurosci. 2014; 17(9):1164–70. doi:10.1038/nn.3782.
Adalsteinsson BT, Gudnason H, Aspelund T, Harris TB, Launer LJ, Eiriksdottir G, et al. Heterogeneity in white blood cells has potential to confound DNA methylation measurements. PLOS ONE. 2012; 7(10):46705. doi:10.1371/journal.pone.0046705.
Jaffe AE, Irizarry RA. Accounting for cellular heterogeneity is critical in epigenome-wide association studies. Genome Biol. 2014; 15(2):31. doi:10.1186/gb-2014-15-2-r31.
Houseman EA, Kelsey KT, Wiencke JK, Marsit CJ. Cell-composition effects in the analysis of DNA methylation array data: a mathematical perspective. BMC Bioinform. 2015; 16(1):95. doi:10.1186/s12859-015-0527-y.
Dainiak MB, Kumar A, Galaev IY, Mattiasson B. Methods in cell separations. Adv Biochem Eng Biotechnol. 2007; 106:1–18. doi:10.1007/10_2007_069.
Tomlinson MJ, Tomlinson S, Yang XB, Kirkham J. Cell separation: terminology and practical considerations. J Tissue Eng. 2013; 4:2041731412472690. doi:10.1177/2041731412472690.
Rakyan VK, Beyan H, Down TA, Hawa MI, Maslau S, Aden D, et al. Identification of type 1 diabetes-associated DNA methylation variable positions that precede disease diagnosis. PLOS Genet. 2011; 7(9):1002300. doi:10.1371/journal.pgen.1002300.
Bundo M, Kato T, Iwamoto K. Epigenetic methods in neuroscience research. In: Karpova N, editor. Neuromethods. New York: Springer; 2016. p. 115–23. doi:10.1007/978-1-4939-2754-8.
Kumar A, Bhardwaj A. Methods in cell separation for biomedical application: cryogels as a new tool. Biomed Mater. 2008; 3(3):034008. doi:10.1088/1748-6041/3/3/034008.
Kantlehner M, Kirchner R, Hartmann P, Ellwart JW, Alunni-Fabbroni M, Schumacher A. A high-throughput DNA methylation analysis of a single cell. Nucleic Acids Res. 2011; 39(7):44–68. doi:10.1093/nar/gkq1357.
Fang G, Munera D, Friedman DI, Mandlik A, Chao MC, Banerjee O, et al. Genome-wide mapping of methylated adenine residues in pathogenic Escherichia coli using single-molecule real-time sequencing. Nat Biotechnol. 2012; 30(12):1232–9. doi:10.1038/nbt.2432.
Schadt EE, Banerjee O, Fang G, Feng Z, Wong WH, Zhang X, et al. Modeling kinetic rate variation in third generation DNA sequencing data to detect putative modifications to DNA bases. Genome Res. 2013; 23(1):129–41. doi:10.1101/gr.136739.111.
Schwartzman O, Tanay A. Single-cell epigenomics: techniques and emerging applications. Nat Rev Genet. 2015; 16(12):716–26. doi:10.1038/nrg3980.
Lowe R, Rakyan VK. Correcting for cell-type composition bias in epigenome-wide association studies. Genome Med. 2014; 6(3):23. doi:10.1186/gm540.
Houseman EA, Accomando WP, Koestler DC, Christensen BC, Marsit CJ, Nelson HH, et al. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinform. 2012; 13(1):86. doi:10.1186/1471-2105-13-86.
Koestler DC, Christensen BC, Karagas MR, Marsit CJ, Langevin SM, Kelsey KT, et al. Blood-based profiles of DNA methylation predict the underlying distribution of cell types: a validation analysis. Epigenetics. 2013; 8(8):816–26. doi:10.4161/epi.25430.
Liu Y, Aryee MJ, Padyukov L, Fallin MD, Hesselberg E, Runarsson A, et al. Epigenome-wide association data implicate DNA methylation as an intermediary of genetic risk in rheumatoid arthritis. Nat Biotechnol. 2013; 31(2):142–7. doi:10.1038/nbt.2487.
Accomando WP, Wiencke JK, Houseman EA, Nelson HH, Kelsey KT. Quantitative reconstruction of leukocyte subsets using DNA methylation. Genome Biol. 2014; 15(3):50. doi:10.1186/gb-2014-15-3-r50.
Guintivano J, Aryee MJ, Kaminsky ZA. A cell epigenotype specific model for the correction of brain cellular heterogeneity bias and its application to age, brain region and major depression. Epigenetics. 2013; 8(3):290–302. doi:10.4161/epi.23924.
Montaño CM, Irizarry RA, Kaufmann WE, Talbot K, Gur RE, Feinberg AP, et al. Measuring cell-type specific differential methylation in human brain tissue. Genome Biol. 2013; 14(8):94. doi:10.1186/gb-2013-14-8-r94.
Houseman EA, Molitor J, Marsit CJ. Reference-free cell mixture adjustments in analysis of DNA methylation data. Bioinformatics. 2014; 30(10):1431–9. doi:10.1093/bioinformatics/btu029.
Zou J, Lippert C, Heckerman D, Aryee M, Listgarten J. Epigenome-wide association studies without the need for cell-type composition. Nat Methods. 2014; 11(3):309–11. doi:10.1038/nmeth.2815.
Rahmani E, Zaitlen N, Baran Y, Eng C, Hu D, Galanter J, et al. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies. Nat Methods. 2016. doi:10.1038/nmeth.3809.
Houseman EA, Kile ML, Christiani DC, Ince TA, Kelsey KT, Marsit CJ. Reference-free deconvolution of DNA methylation data and mediation by cell composition effects. BMC Bioinform. 2016; 17:259. doi:10.1186/s12859-016-1140-4.
Lutsik P, Slawski M, Gasparoni G, Hein M, Walter J. MeDeCom web resource. http://public.genetik.uni-sb.de/medecom.
Reinius LE, Acevedo N, Joerink M, Pershagen G, Dahlén SE, Greco D, et al. Differential DNA methylation in purified human blood cells: implications for cell lineage and studies on disease susceptibility. PLOS ONE. 2012; 7(7):41361. doi:10.1371/journal.pone.0041361.
Palli D, Berrino F, Vineis P, Tumino R, Panico S, Masala G, et al. A molecular epidemiology project on diet and cancer: the EPIC-Italy Prospective Study. Design and baseline characteristics of participants. Tumori. 2003; 89(6):586–93.
Fahey JL, Schnelle JF, Boscardin J, Thomas JK, Gorre ME, Aziz N, et al. Distinct categories of immunologic changes in frail elderly. Mech Ageing Dev. 2000; 115(1–2):1–20. doi:10.1016/S0047-6374(00)00094-4.
Cossarizza A, Ortolani C, Paganelli R, Barbieri D, Monti D, Sansoni P, et al. CD45 isoforms expression on CD4+ and CD8+ T cells throughout life, from newborns to centenarians: implications for T cell memory. Mech Ageing Dev. 1996; 86(3):173–95. doi:10.1016/0047-6374(95)01691-0.
Romanyukha AA, Yashin AI. Age related changes in population of peripheral T cells: towards a model of immunosenescence. Mech Ageing Dev. 2003; 124(4):433–43.
Paul F, Arkin Y, Giladi A, Jaitin D, Kenigsberg E, Keren-Shaul H, et al. Transcriptional heterogeneity and lineage commitment in myeloid progenitors. Cell. 2015; 163(7):1663–7. doi:10.1016/j.cell.2015.11.013.
Kulis M, Merkel A, Heath S, Queirós AC, Schuyler RP, Castellano G, et al. Whole-genome fingerprint of the DNA methylome during human B cell differentiation. Nat Genet. 2015; 47(7):746–56. doi:10.1038/ng.3291.
McLean CY, Bristor D, Hiller M, Clarke SL, Schaar BT, Lowe CB, et al. GREAT improves functional interpretation of cis-regulatory regions. Nat Biotechnol. 2010; 28(5):495–501. doi:10.1038/nbt.1630.
Mo A, Mukamel EA, Davis FP, Luo C, Henry GL, Picard S, et al. Epigenomic signatures of neuronal diversity in the mammalian brain. Neuron. 2015; 86(6):1369–84. doi:10.1016/j.neuron.2015.05.018.
Leek JT, Storey JD. Capturing heterogeneity in gene expression studies by surrogate variable analysis. PLOS Genet. 2007; 3(9):1724–35. doi:10.1371/journal.pgen.0030161.
Teschendorff AE, Zhuang J, Widschwendter M. Independent surrogate variable analysis to deconvolve confounding factors in large-scale microarray profiling studies. Bioinformatics. 2011; 27(11):1496–505. doi:10.1093/bioinformatics/btr171.
Assenov Y, Müller F, Lutsik P, Walter J, Lengauer T, Bock C. Comprehensive analysis of DNA methylation data with RnBeads. Nat Methods. 2014; 11(11):1138–40. doi:10.1038/nmeth.3115.
Aryee MJ, Jaffe AE, Corrada-Bravo H, Ladd-Acosta C, Feinberg AP, Hansen KD, et al. Minfi: a flexible and comprehensive Bioconductor package for the analysis of Infinium DNA methylation microarrays. Bioinformatics. 2014. doi:10.1093/bioinformatics/btu049.
Choi S, Cichocki A, Park H-M, Lee S-Y. Blind source separation and independent component analysis: a review. Neural Inf Process Lett Rev. 2005; 6(1):1–57.
Vavasis SA. On the complexity of nonnegative matrix factorization. SIAM J Optim. 2007; 20(3):1–12. doi:10.1137/070709967.
Lin CJ. Projected gradient methods for nonnegative matrix factorization. Neural Comput. 2007; 19(10):2756–79. doi:10.1162/neco.2007.19.10.2756.
Tao P, An L. Convex analysis approach to dc programming: theory, algorithms and applications. Acta Mathematica Vietnamica. 1997; 22(1):289–355.
Yuille AL, Rangarajan A. The concave-convex procedure. Neural Comput. 2003; 15(4):915–36. doi:10.1162/08997660360581958.
Owen AB, Perry PO. Bi-cross-validation of the SVD and the nonnegative matrix factorization. Ann Appl Stat. 2009; 3(2):564–94. doi:10.1214/08-AOAS227.
Pidsley R, Wong CCY, Volta M, Lunnon K, Mill J, Schalkwyk LC, et al. A data-driven approach to preprocessing Illumina 450K methylation array data. BMC Genomics. 2013; 14(1):293. doi:10.1186/1471-2164-14-293.
Lutsik P, Slawski M, Gasparoni G, Hein M, Walter J. MeDeCom: R package for decomposition of heterogeneous methylomes. 2016. doi:10.5281/zenodo.208195.
Our implementation of the matrix factorization Algorithm 1 in this paper is based on extensions of code originally developed by Qinqing Zheng in collaboration with MS and MH. We would like to thank Karl Nordström, Abdulrahman Salhab, and the DEEP project for providing and processing the WGBS data of the CD4+ T cells. We are thankful to Elior Rahmani for providing the source code for the adjustment method comparison, and to the authors of the Liu et al. paper for providing additional information about their samples.
PL obtained funding from the European Union's Seventh Framework Programme (FP7/2007-2013) grant agreement 267038 (NOTOX). MS was supported by DFG Cluster of Excellence MMCI. Support was also provided by German Science Ministry grant 01KU1216A (DEEP).
The data sets supporting the conclusions of this article are available in the GEO repository under accessions: GSE35069 (PureBC), GSE42861 (WB1), GSE51032 (WB2), GSE15745 (PureN, ArtMixN, and FC1), and GSE15745 (FC2). WGBS data of CD4+ T cells are deposited in EGA as a part of the DEEP project submission (accession EGAS00001001624). WGBS profiles of naive and memory B cells were downloaded in bedGraph format from the IHEC data portal (sample names S001JP51, C003K951, C003N351, and C0068L51). The MeDeCom source code has been deposited at Zenodo [64] and is available from GitHub (http://github.com/lutsik/MeDeCom) under the GPL-3 license.
PL, MS, MH, and JW together conceived the project. PL did the R implementation and analyzed the data. MS and MH designed the MeDeCom models (with PL) and developed the algorithms implemented and prototyped by MS. GG provided conceptual advice on neuronal data analysis. NV implemented the improved optimization algorithms and ported factorization code to C++. MH suggested improvements for primary data processing and co-supervised the project. JW provided key expertise in the interpretation of biological results and co-supervised the project. All authors discussed the results at all stages and contributed to writing the manuscript. All authors read and approved the final manuscript.
Department of EpiGenetics, Saarland University, Campus A2.4, Saarbrücken, 66123, Germany
Pavlo Lutsik, Gilles Gasparoni & Jörn Walter
Machine Learning Group, Saarland University, Campus E1.1, Saarbrücken, 66123, Germany
Martin Slawski, Nikita Vedeneev & Matthias Hein
Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, 110 Frelinghuysen Rd, Piscataway, 08854, NJ, USA
Present address: Division of Cancer Epigenetics, German Cancer Research Center, Im Neuenheimerfeld 280, Heidelberg, 69120, Germany
Present address: Department of Statistics, Volgenau School of Engineering, George Mason University, 4400 University Drive, MS 4A7, Fairfax, VA 22030-4444, USA
Corresponding authors
Correspondence to Matthias Hein or Jörn Walter.
Additional file 1 Supplementary Tables. PDF document with supplementary tables. (PDF 120 kb)
Additional file 2 Supplementary Figures. PDF document with supplementary figures. (PDF 2088 kb)
Additional file 3 CpGs used for the analysis of memory and naive B cells. A comma-separated value table file. (CSV 35 kb)
Additional file 4 LMC-specific CpG positions of the FC1 data set. A comma-separated value table file. (CSV 256 kb)
Additional file 5 Supplementary Text. PDF document with supplementary notes. (PDF 158 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Lutsik, P., Slawski, M., Gasparoni, G. et al. MeDeCom: discovery and quantification of latent components of heterogeneous methylomes. Genome Biol 18, 55 (2017) doi:10.1186/s13059-017-1182-6
DNA methylome
Cell heterogeneity
Deconvolution
Matrix factorization
Wilczek Goes Anthropic
A few weeks ago one Nobel prize winner put out an article promoting the idea of adopting anthropic reasoning as a new paradigm of how to do theoretical physics. More recently another Nobelist, Frank Wilczek, has to some degree followed suit. Wilczek is one of four authors on a new paper entitled Dimensionless constants, cosmology and other dark matters which first appeared on the arXiv November 29th, then in a slightly revised version on December 8. The other authors are Tegmark, Aguirre and Rees, with Tegmark's name appearing first indicating it's more his work than that of his co-authors.
I wasn't sure quite what to make of this paper when it first came out, especially how much it reflected Wilczek's own point of view on anthropism. Last Friday I attended talks by Wilczek and Tegmark at the 6th Northeast String Cosmology Meeting organized by the Institute for Strings, Cosmology and Astroparticle Physics here at Columbia.
Wilczek's talk was entitled "Enlightenment, Knowledge, Ignorance, Temptation". He explained that these corresponded to categorizing parameters of physical theories according to whether life depended on them or not and whether we have a good idea for what determines them or not. Choosing the two possible answers to these two questions gives four cases:
Enlightenment: Parameters that life depends on, and we think we have a good idea about what determines them. Here his example was the proton mass, very small on the Planck scale, but we think we know why: logarithmic running of coupling constants.
Knowledge: Parameters that life doesn't depend on, and we think we have a good idea about what determines them. One example he gave was strong CP violation, which is irrelevant to life, but very small, perhaps because of axions.
Ignorance: Parameters that life doesn't depend on, and we don't have a good idea about what determines them. This includes most of the standard model parameters, as well as just about all parameters in theories that go beyond the standard model.
Temptation: Parameters that life depends on, and we don't have a good idea about what determines them. The examples he gave were the electron and up and down quark masses.
He said that his talk would concentrate on "Temptation", the temptation being that of using anthropic argumentation. He noted that David Gross believes this is a dangerous opiate, causing people to just give up instead of really solving problems. The one anti-anthropic point he made was to put up a graphic showing agreement of the lattice QCD spectrum calculations with experiment, saying the lesson was that sometimes real calculations turned out to be possible even though people had at times doubted this. So one should try and "limit the damage", not go wild and use anthropics inappropriately, trying to save as much beautiful physics as one can even when anthropic reasoning is forced on us.
The rest of his talk though showed a significant amount of enthusiasm for the new anthropism. He referred to people like his co-author Rees who have been promoting the anthropic point of view for years as "unhonored prophets". Given the paucity of experimental data relevant to explaining where things like standard model parameters come from, he said that at least anthropics gives lots of new questions so one has something to do when one gets up each day which might be fruitful. He attacked the idea of using "pure thought", without consulting the physical world, saying this hasn't worked, not 20 years ago, not now, not in the future. I presume he had string theory in mind when he said this, noting out loud that it might annoy some people in the room.
The main idea about anthropics he was trying to push is that anthropic calculations were "just conditional probability", making much of the equation
$$f(p) = f_{\text{prior}}(p)\,f_{\text{selec}}(p)$$
for the probability of observing some particular value p of parameters, given some underlying theory in which they are only determined probabilistically by some probability distribution $f_{\text{prior}}(p)$. The second factor $f_{\text{selec}}(p)$ is supposed to represent "selection effects", and it is here that anthropic calculations supposedly have their role. In the paper the authors argue that "Including selection effects is no more optional than the correct use of logic". In the standard way physics has traditionally been done, one hopes that the underlying theory determines p (i.e., $f_{\text{prior}}(p)$ is a delta function), making selection effects irrelevant in this context. The authors attack this point of view, writing:
to elevate this hope into an assumption would, ironically, be to push the anthropic principle to a hedonistic extreme, suggesting that nature must be devised so as to make mathematical physicists happy.
At no point in his or Tegmark's talks, or anywhere in their paper, do they address the central problem with the anthropic principle: there's a huge issue about whether you can get falsifiable predictions out of it, and thus whether you're really doing science. In this context, the nature of the problem is that if $f_{\text{prior}}(p)$ is not peaked somewhere but is flat (or more or less flat), then everything just depends on $f_{\text{selec}}(p)$, but if you calculate it anthropically, all you are doing is seeing what you can conclude from known laws of physics and the fact that we exist. In the end what will come out of this kind of calculation is some probability distribution that had better be non-zero for the values of the parameters we observe, otherwise you've done the calculation wrong.
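To see the point numerically, here is a toy Monte Carlo (purely illustrative; both distributions are invented for the example): with a flat prior, accept/reject sampling against the selection factor alone reproduces the entire observed distribution, so the "prediction" contains nothing beyond $f_{\text{selec}}$.

import numpy as np

rng = np.random.default_rng(0)

# Toy parameter p with a flat prior on [0, 1].
p = rng.uniform(0.0, 1.0, size=1_000_000)

# Invented selection factor: "observers" only arise for p near 0.3.
f_selec = np.exp(-((p - 0.3) / 0.05) ** 2)

# Observed f(p) = f_prior(p) * f_selec(p); with a flat prior this is
# proportional to f_selec, so selection alone fixes the accepted sample.
observed = p[rng.uniform(size=p.size) < f_selec]

print(observed.mean(), observed.std())  # ~0.30 and ~0.035, set entirely by f_selec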
There is a particular sort of physical model one can hope to falsify this way. If one assumes our universe is a randomly chosen point in a "multiverse" of possibilities, and looks at an observable that is supposed to have a more or less flat probability distribution in the ensemble given by the multiverse, then one can argue that we should be at some region of parameter space containing the bulk of the probability in the anthropically determined $f_{\text{selec}}(p)$, not far out in some tail where the probability distribution is vanishingly small. There are plenty of examples of this already. The proton lifetime is absurdly long compared to bounds from anthropic constraints, so any model of a multiverse that doesn't have some structure built into it to generically sufficiently suppress proton decay is ruled out. This includes the string theory landscape, so one of the many mysteries of the whole anthropic landscape story is why its proponents don't take their own arguments seriously and admit that their model has been falsified already. It also applies to Tegmark's favorite idea, that of the existence of a Level IV multiverse of all possible mathematical structures, an idea he also promotes in the paper with Wilczek.
Wilczek also discussed one particular axion cosmology model in which $f_{\text{prior}}(p)$ can be calculated. In these models one has the relation
$$\xi_c\sim f_a^4\sin ^2\frac{\theta_0}{2}$$
for the axion dark matter density in terms of the Peccei-Quinn symmetry breaking scale and the misalignment angle of the axion field at the Peccei-Quinn symmetry breaking phase transition. To make this agree with the observed dark matter density, if one assumes the misalignment angle is some random angle then the Peccei-Quinn scale has to be about 1012GeV. If one wants to make the Peccei-Quinn scale the GUT or Planck scale, one has to find some reason for the misalignment angle to be very small. The proposal here is that this happens for anthropic reasons, since if the angle were not small it would cause an amount of dark matter incompatible with our existence. For these small angles the above formula implies that the probability distribution for the dark matter density caused by such axions satisfies
$$f_{\text{prior}}(\xi)\sim \frac{1}{\sqrt \xi}$$
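To spell out the change of variables behind this: for small angles $\xi\propto\sin^2(\theta_0/2)\approx\theta_0^2/4$, so $\theta_0\propto\sqrt{\xi}$, and for a uniformly distributed misalignment angle

$$f_{\text{prior}}(\xi) = f(\theta_0)\left|\frac{d\theta_0}{d\xi}\right| \propto \frac{d\sqrt{\xi}}{d\xi} = \frac{1}{2\sqrt{\xi}}$$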
The Tegmark et al. paper contains an elaborate calculation of $f_{\text{selec}}$ for the dark matter density, involving all sorts of "anthropic" considerations, which goes on for eleven pages or so and involves a bafflingly long list of considerations about galaxy, star and planet formation, as well as many possible dangers that could have disrupted the evolution of life, such as disruption of the Oort cloud of comets. I'll freely admit to not having taken the time to follow this argument. The end result for $f_{\text{selec}}$ as a function of $\sqrt{\xi}$ is a probability distribution with the measured dark matter density corresponding to something close to the peak.
I'm not sure exactly what conclusions one can or should draw from this calculation. So many different facts about our specific universe are being folded into this that it's not clear to me that there isn't some circular reasoning going on. This is a general problem with "anthropic" arguments: if you assume that life couldn't exist if the universe was much different than it is, you smuggle all sorts of information about the way the world is into your "anthropic" calculation, after which it is not too surprising that it "predicts" the universe has more or less the properties you observe.
What we really care about in these arguments is whether they can be used to extract any information whatsoever about $f_{\text{prior}}$, the physics we are trying to get at. In this axion cosmology case we have a prediction for this distribution and the calculation shows this is consistent with the observed dark matter density, but as far as I can tell, all sorts of other quite different distributions would work too. So, I'm still confused about exactly what this calculation has told us about the underlying axion cosmology physics that it is supposed to address, other than that it is not obviously completely inconsistent.
Tegmark's talk at Columbia was titled "Measuring and Predicting Cosmological Parameters". The "measuring" part was a summary of some of the impressive experimental evidence for the standard cosmological model. The "predicting" part was pretty much pure promotion of anthropism, including a long section on reasons why the electroweak symmetry breaking scale is anthropic and some comments making fun of David Gross ("even he couldn't predict the distance from the earth to the sun. Laughter…"). The only actual "predictions" mentioned were the results about the axion cosmology model mentioned above and described in detail in the Tegmark et al. paper, as well as the well-known Weinberg anthropic "prediction" for the cosmological constant.
All in all, I found these two talks and the Tegmark et al. paper pretty disturbing. They seem to me to be part of a highly ideological effort to sell the Anthropic Principle as science. The paper devotes two pages to a detailed list of standard model parameters, and makes various statements about the probability distribution function on this large number of parameters, even though it has nothing to say about almost all of them, and I think there's a strong argument that the anthropic program inherently will never have anything useful to say about most of these parameters. Many of Wilczek's remarks were more modest, but the paper he has signed his name to is highly immodest in its claims for anthropism. Together with Weinberg and Susskind's anthropic campaigns, it seems to me that more and more theorists are going to join this bandwagon. Neither Wilczek nor Tegmark are string theorists (and Wilczek is clearly somewhat skeptical about the whole idea), but there seems to be an unholy alliance brewing between them and Susskind and his followers. The only prominent person in the field standing up to this publicly is David Gross, and it is very worrying to see how little support he is getting.
Update: A preprint by Frank Wilczek corresponding to his talk last week entitled Enlightenment, Knowledge, Ignorance, Temptation has appeared. It is a contribution to the same conference as the one Weinberg contributed Living in the Multiverse to, I gather in honor of Martin Rees. Wilczek's preprint announces a "new zeitgeist", that anthropic arguments are in the ascendancy. One quite strange thing in the preprint is that he suggests an anthropic explanation for the long proton lifetime in terms of doing anthropic calculations involving future observers.
He does say there are drawbacks to the new order (a loss of precision and of targets to calculate), but on the whole he seems to embrace the new anthropic paradigm rather whole-heartedly, seeing it as a lesson in humility for those who had the hubris to believe it was possible to understand more about the universe through "pure thought."
Update: Two of the authors of the paper discussed here (Aguirre and Tegmark) wrote in with some comments that are well worth reading (as well as those from Smolin and others about his own proposal). Aguirre points to an interesting paper of his On making predictions in a multiverse (see also an earlier paper with Tegmark), which addresses some of the conceptual issues that were bothering me about this sort of calculation. It points out many of the problems with this kind of calculation, and I don't really share the author's optimism that they can be overcome.
Lee Smolin mentioned to me a somewhat related workshop that was held this past summer at the Perimeter Institute, on the topic of Evolving Laws, especially "do the laws of nature evolve in time?" Audio of the discussions at the workshop is available
107 Responses to Wilczek Goes Anthropic
Anthony Aguirre says:
To "who":
I agree with Aaron Bergman that CNS is subsumed in 'multiverse' (and this is why I did not think it required separate listing either in the present paper or my earlier one with a similar list), though I also agree with you that, as I noted before, there is an aspect of 'fluke' in that the conditions for life must coincide with the conditions for black-hole formation (or else all life lives in rare universes and we are back to anthropics to explain why we do not observe the most common type of universe).
Lee:
For me, and I think for many, the argument goes:
1) observations imply inflation.
2) Inflation implies at least the possibility of eternal inflation.
Breaking either link would be extremely interesting. Breaking the first, by creating a viable alternative to inflation for explaining the CMB fluctuations, etc., would obviously be interesting, but has not, in my opinion, been even nearly done, with all due respect to the cyclic and VSL folks.
Breaking the second link, which you suggest, would also be interesting and I would be happy to entertain such a possibility — I just do not see how to do it. You may be right that somehow eternal inflation will 'go away' if we understand vacuum energy, but I'm not sure how this could happen without inflation itself going away.
There are some, I think, who enthusiastically embrace the eternal inflation picture. But I think many others, like myself, find it interesting that observations seem to imply it, perhaps despite our wishes.
Anthony, what about Luminet's "hall of mirrors"?
http://arxiv.org/abs/physics/0509171
Lee Smolin says:
Dear Anthony,
Thanks, I am aware of this point of view but I haven't found the literature completely convincing. Perhaps you could tell me the paper where the argument "inflation implies eternal inflation" is most clearly and convincingly presented, and I'll study it.
Worry about the realness of the vacuum energy is not the only issue I've had with this argument. Another is how well defined the frameworks are in which the calculations are done. Some are very heuristic, others depend on assumptions about the interpretation of the wave function of the universe and measures on infinite numbers of universes that don't seem to make sense when scrutinized.
Another kind of worry is that extending from inflation to eternal inflation requires believing that the mechanism is reliable at scales presently outside our horizon that were formerly way below the Planck scale. If there is a universal Planck scale cutoff, as in DSR, it could alter the physics presently outside of our horizon. In fact, there are apparent anomalies in the CMB spectra near present Hubble scales, both the low power and the axis of evil. If real, they support the idea that Planck scale effects have a non-trivial effect on inflation, which would change things outside our present horizon.
On the other hand, we know our universe produces large numbers of astrophysical black holes, a fact that depends on features of star formation and galactic dynamics that do not seem otherwise necessary.
ps to Who, thanks for pressing the argument.
I tried to get a response over at CV and none so far, so I might as well point it out here: there's a new paper from Reza Mansouri, http://arxiv.org/abs/astro-ph/0512605, which claims that properly accounting for inhomogeneities in the universe eliminates the need for dark energy or a cosmological constant to explain the data. It looks reasonable to me and much more convincing than the Kolb stuff earlier this year. Any thoughts? This is somewhat off-topic, but on the other hand if this sort of reasoning is shown to be correct I think much of the reason for supporting anthropic arguments will go away fairly quickly.
Absolutely agree that many of the arguments for eternal 'stochastic' inflation (driven by quantum fluctuations) are somewhat unrigorous, even at the level of effective field theory in curved spacetime. I find them reasonably compelling but consider it possible that there is something fundamentally wrong with them. The argument for eternal inflation with multiple minima, however, seems extremely strong to me: the spacetime can be described more-or-less exactly, and a very straightforward computation implies that the (physical) inflating volume increases when spatial sections are chosen so as to make the background spacetime homogeneous. One must only accept the Coleman-De Luccia picture of the decay of the false vacuum for this to be true for at least some multiple-well potentials.
Now, the question of how *generic* eternal inflation is, is I think an interesting one: are there non-fine-tuned inflation models that explain the observations and are not eternal? I don't know of any decent study on this question (and may well undertake one myself). In the context of CNS, however, even the *possibility* of eternal inflation is very dangerous, because unless eternal inflation is *forbidden* by the 'meta' laws that govern which types of inflation are possible in the landscape of possibilities, it must only be realized once to 'take over' the ensemble by creating an infinite number of black holes out of one. The only escape from this, it would seem, would be to have other channels to create infinitely many black holes from one 'parent'; but then you would be in the same extremely thorny boat of comparing infinities that eternal inflation is already in; further, which type of universe would come out 'winning' in this competition between infinities would seem quite independent of mundania such as the IMF…
One interesting way that EI might be essentially wrong is if the QFT in curved spacetime description is essentially wrong, for example for 'holographic' reasons (see the recent Banks paper). In this view, all of the zillions of 'other universes' are just refigurings of the same degrees of freedom inside the horizon (this realizes, in some sense, a version of the 'hall of mirrors' mentioned by 'dissident' above.) I find this view very hard to understand, but perhaps it is right and would change the way we think about EI, perhaps in a way less troubling (to me at least) in regard to CNS.
Chronos says:
Would it be fair to say that the Anthropic Principle rules out the impossible, but is not predictive beyond that?
Anthony and others,
what do you think about Penrose's objections to inflation?
I have not seen a good counter-argument to them.
Dual effect of TiO2 and Co3O4 co-semiconductors and nanosensitizer on dye-sensitized solar cell performance
F. A. Taher1,
Galila M. El-sayed1,
N. M. Khattab2 and
N. Almohamady3
Renewables: Wind, Water, and Solar 2015, 2:15
© Taher et al. 2015
Accepted: 16 October 2015
A dye-sensitized solar cell (DSSC) was fabricated using a nanosized form of the dye sensitizer (Alizarin Yellow, AY) prepared by ball milling. The particle size and composition of nano-Alizarin Yellow (nAY) were investigated using TEM and 1H- and 13C-NMR spectra, respectively. The effect of sensitizer size reduction on DSSC efficiency was studied. Co3O4 was prepared as a semiconductor for the DSSC and confirmed by XRD. A composite of TiO2 and Co3O4 was also used to improve the DSSC efficiency. In addition, the effect of terpineol as a solvent was tested. Photocurrent–photovoltage curves of all prepared DSSCs were investigated. Finally, to test the validity of the results, the standard error was calculated.
DSSC
Co3O4@TiO2 nanocomposites
Nanosensitizer
Alizarin Yellow
DSSCs are an alternative solution for the future energy crisis as a productive source of renewable energy (Kato et al. 2011; Zhuiykov 2014; Ludin et al. 2014). Excitation of a dye sensitizer doped onto a semiconductor or co-semiconductor by solar radiation, generating an electron and leaving behind a hole, is the initial photon-induced electron reaction in a DSSC (Yum et al. 2014). After transfer of the excited electron from the semiconductor conduction band to the counter electrode via the working electrode, the ground state of the dye is restored by electrolyte oxidation (Choi et al. 2013; Han and Ho 2014). The main issue is that some electrons return to the dye ground state or to the electrolyte, increasing the electron–hole recombination rate and thereby reducing DSSC efficiency (Lai et al. 2008; Akpan and Hameed 2009; Yamaguchi et al. 2010; Reda 2010; Kato et al. 2011; Tian et al. 2010; Kantonis et al. 2011; Sharma et al. 2010; Basheer et al. 2014a, b). Since the efficiency of the DSSC relies on the sensitizer and the semiconductor, the idea here is to increase the absorption band of the sensitizer by increasing its surface area, or to decrease the electron–hole recombination rate using a darker co-semiconductor, in order to achieve higher solar conversion efficiency.
Im and his co-workers used the cocktail effect of TiO2 and Fe2O3 to increase the performance of a DSSC; the efficiency was improved by over 300 % (Im et al. 2011). NiO/TiO2 nanocomposites were also prepared and used as modified photoelectrodes in a quasi-DSSC with 2.29 % conversion efficiency by Mekprasart et al. (2011). To the best of our knowledge, the effect of Co3O4 as a co-semiconductor has not been reported previously. In this work, the dye sensitizer was converted to nanosize to investigate the effect of its size reduction on the DSSC efficiency. Also, a composite of TiO2 and Co3O4 was prepared for use as a semiconductor in the DSSC. In addition, the effect of terpineol as a solvent was tested via I–V characteristic curves.
Preparation of nanodye
The chemical structure of Alizarin Yellow (AY, DyeStar) is shown in Fig. 1. Nano-Alizarin Yellow (nAY) was prepared by ball milling for 8 h (RETSCH PM 400, Germany) and then heated at 40 °C for 24 h. The chemical composition of nAY was confirmed by 1H- and 13C-NMR spectroscopy (Bruker High-Performance Digital FT-NMR Spectrometer Avance III 400 MHz). TEM was used to determine the particle size of nAY and its distribution (JEOL, TEM-1230).
Chemical structure of Alizarin Yellow (AY)
Preparation of nanocobalt oxide
Cobalt oxide nanopowder was obtained by the co-precipitation method. Drops of 0.3 M aqueous sodium hydroxide (98 %, Adwic) were added to an aqueous solution of 0.01 M cobaltous chloride hexahydrate (98 %, Indiamart) under stirring for 2 h at room temperature. The resulting green precipitate, Co(OH)2, was washed several times with distilled water. Nanocobalt oxide (Co3O4) was obtained after drying at 80 °C and sintering at 900 °C. All these steps can be represented by the following equations:
$$2\text{NaOH}_{(\text{aq})} + \text{CoCl}_2{\cdot}6\text{H}_2\text{O}_{(\text{aq})} \rightarrow 2\text{NaCl}_{(\text{aq})} + \text{Co(OH)}_2{\cdot}6\text{H}_2\text{O}_{(\text{aq})}$$

$$\text{Co(OH)}_2{\cdot}6\text{H}_2\text{O} \xrightarrow{80\ ^{\circ}\text{C}} \text{Co(OH)}_2 + 6\text{H}_2\text{O}$$

$$3\text{Co(OH)}_2 + \tfrac{1}{2}\text{O}_2 \xrightarrow{900\ ^{\circ}\text{C}} \text{Co}_3\text{O}_4 + 3\text{H}_2\text{O}$$
Accordingly, the crystalline structure of Co3O4 was confirmed by powder X-ray diffraction (XRD: Empyrean, Holland). To obtain the particle size of Co3O4 and its distribution, TEM measurement was conducted (JEOL, TEM-1230).
Preparation of Co3O4@TiO2 composite
1.6 g Co3O4 and 5 g TiO2 (anatase 99.7 %, P25, Sigma-Aldrich) were mixed with 25 ml distilled water and stirred for 48 h at room temperature. The resulting composite was sintered at 600 °C for 1 h.
Preparation of TiO2 and Co3O4@TiO2 pastes
To prepare the pastes, 2 g TiO2 and 2 g Co3O4@TiO2 composite were separately added to a solution of 0.5 g polyethylene glycol (20,000 g/mol, Sisco) dissolved in 7 ml of distilled water (as a binder to prevent the film from cracking during drying), 5 ml ethanol, and 15 ml terpineol (Sigma-Aldrich). The two resulting mixtures were heated at 100 °C for 6 h.
Preparation of the working electrode
Fluorine-doped tin oxide glass (FTO, Pilkington Kappa Energy, 18 Ω/cm2) was cleaned with 95 % ethanol, 1-propanol, and distilled water, then left to dry in open air. Before applying the TiO2 and Co3O4@TiO2 pastes, the FTO glass was treated in 0.2 M TiCl4 solution (99 %, Merck) at 70 °C for 30 min to form a compact nanocrystalline TiO2 film that keeps the electrolyte away from the conductive layer and thereby suppresses the dark current. The pastes were coated onto the FTO by the doctor-blade technique using Scotch adhesive tape (thickness: 50 μm). The film was air dried for 10 min at room temperature and then annealed and sintered at 450 °C for 30 min. The paste-loaded FTO substrates were separately immersed in aqueous solutions of 1 × 10−4 M AY and 1 × 10−4 M nAY. The resulting working electrodes were dried at room temperature overnight.
Preparation of the counter electrode
FTO glass was coated with Pt paste (Platisol, Solaronix) then dried at 70 °C for 3 h and sintered for 30 min at 450 °C under airflow of 30 ml/min. The counter electrode was then left to cool down to room temperature before usage.
Assembly of the DSSC
Between the counter and the working electrodes, the iodide/iodine electrolyte solution (0.5 M potassium iodide mixed with 0.05 M iodine in water-free ethylene glycol) was placed, and the assembly was then held together with binder clips to immobilize each part. The area of the DSSC was fixed at 2.25 cm2.
Measurement of the photophysical and electrochemical properties
A UV–Vis spectrophotometer was used to record the absorption spectra of AY, nAY, TiO2, and Co3O4@TiO2 solutions; the emission spectra of AY and nAY solutions; and the photoluminescence spectra of AY, nAY, AY–TiO2, and nAY–TiO2 solutions (Perkin Elmer, Lambda 35, USA). I–V characteristics were measured using a photocurrent–voltage (I–V) curve analyzer (Peccell Technologies, Inc., PECK2400-N, version 2.1) under AM 1.5 (950 mW/cm2) irradiation with a solar simulator (Peccell Technologies, PEC-L11).
Effect of the size reduction on the characteristics of nAY
The effect of size reduction on the particle size, chemical composition, and spectral behavior was investigated by TEM imaging, NMR, and UV–Vis spectroscopy, respectively. Figure 2 shows a TEM photograph of the as-prepared nAY. A homogeneous rod-like structure was observed, with diameters and lengths below 20 and 100 nm, respectively.
TEM image of a AY and b nAY
To confirm whether the ball milling process resulted in partial decomposition of some AY molecules, the 1H- and 13C-NMR spectra of nAY were measured (Fig. 3). As can be seen in Fig. 3a, the aromatic ring proton multiplet of the nAY molecule was observed from 7.11 to 8.70 ppm, and the resonance signal of the OH proton singlet was observed at 9.80 ppm. In the corresponding carbon spectrum (Fig. 3b), the carbonyl carbon signal appears at a characteristic 146.02 ppm, which confirms that there is no partial chemical decomposition of the nAY molecule and that the ball milling process does not affect its composition.
Experimental a 1H and b 13C NMR spectra of nAY in CDCl3
UV–Vis absorption spectra were obtained for nAY and AY solutions (Fig. 4). Comparing the maximum absorption wavelengths, λ max, a slight bathochromic shift from 400 nm for AY to 445 nm for nAY was observed, corresponding to the transition from HOMO to LUMO. This red shift of the λ max of nAY can be attributed to its smaller particle size (17–35 nm), which reflects a strong electron-donation ability of nAY (increasing the delocalization of the π* orbital of AY); i.e., the absorption energy shifts to lower frequency as the particle diameter decreases. This was readily observed in the color change of AY from brilliant yellow to the mustard yellow of nAY, passing through canary yellow, where each color corresponds to a different particle size of nAY. After sensitization of AY and nAY on TiO2, λ max was shifted to the red by 100 and 55 nm, respectively, due to J-aggregation on the TiO2 surface (curves not inserted). In addition, the emission spectra of AY and nAY at 300 nm showed a broad spectral peak at the same position, corresponding to relaxation to a lower energy level (Fig. 5). The lower emission intensity of nAY indicates slower recombination of e− and h+, which emphasizes the advantage of using nAY in DSSC fabrication. This can also be seen in the hypochromic shift of the photoluminescence spectra from AY to the lower intensity of nAY (Fig. 6). This hypochromic effect indicates a decrease in the number of photons produced by electron–hole recombination (Balraju et al. 2010). In addition, adsorption of AY and nAY on TiO2 further decreased this photon count.
UV–Vis absorption spectra of AY and nAY
Emission spectra of AY and nAY
Photoluminescence spectra for AY, nAY, AY–TiO2 and nAY–TiO2
Characteristics of Co3O4
The X-ray diffraction peaks in Fig. 7a were analyzed to determine the structure and crystallite size of the as-prepared Co3O4. The XRD peaks of the Co3O4 nanopowder are consistent with the JCPDS data (card no. 78-1970) of phase-pure Co3O4 with cubic spinel structure (Co2+ ions occupy the tetrahedral sites and Co3+ ions the octahedral sites), showing the main Bragg reflection peak in the (311) plane. The peaks at 2θ values of 18.9°, 31.2°, 36.9°, 38.6°, 44.7°, 55.6°, 59.4° and 65.1° correspond to the (111), (220), (311), (222), (400), (422), (511) and (440) crystal planes of well-crystallized Co3O4, respectively (Xiao et al. 2014; Kong et al. 2014). It is clear that Co3O4 is the only phase after decomposition of the green precipitate, Co(OH)2, at 900 °C, with no other diffraction peaks. The crystal size of Co3O4 was deduced from the (311) plane using the Scherrer equation \(D = 0.94\,\lambda/(\beta\cos\theta)\), where D is the crystal size of Co3O4, λ is the wavelength of the incident X-rays (0.154 nm), θ is the half diffraction angle of the peak in degrees, and β is the full width at half maximum of the reflection located at 2θ. The average crystal size of Co3O4 is 154 nm, in good agreement with the TEM result for the cubic Co3O4 nanoparticles (Fig. 7b). Furthermore, the UV–Vis absorption spectra of TiO2 and the Co3O4@TiO2 composite (Fig. 8) show a band gap of 3.2 eV for the TiO2 absorption band. The Co3O4@TiO2 composite exhibits a continuous absorption band of the dark composite toward higher wavelengths in the range 300–750 nm (Kim et al. 2014). This red shift of the absorption edge implies band-gap narrowing. The smaller band gap reflects the advantage of Co3O4 in decreasing the recombination rate of e− and h+: the electrons that would otherwise transfer to the electrolyte or dye (AY or nAY) can be confined in the conduction band of the Co3O4@TiO2 composite (i.e., the composite captures electrons from the TiO2 conduction band).
a X-ray diffraction pattern of nano-Co3O4 b TEM of cubic Co3O4
UV–Vis absorption spectra of TiO2 and Co3O4@TiO2 composite
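As a quick arithmetic check of the Scherrer estimate above, a minimal sketch in Python (the FWHM value below is a placeholder chosen to illustrate the order of magnitude, not the measured peak width):

import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154, k=0.94):
    # Crystallite size D = k * lambda / (beta * cos(theta)), beta in radians.
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * wavelength_nm / (beta * np.cos(theta))

# (311) reflection of Co3O4 at 2*theta = 36.9 deg; the FWHM here is illustrative.
print(f"D ~ {scherrer_size_nm(36.9, 0.057):.0f} nm")  # ~153 nm, near the quoted 154 nm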
Photocurrent–voltage behavior of the DSSCs
The effects of photosensitizer size reduction (nAY), of the Co3O4 oxide co-semiconductor, and of terpineol as a solvent on the efficiency of the prepared DSSCs were investigated under 950 mW/cm2 by analyzing their photocurrent density–voltage curves in Fig. 9, with error bars of the photocurrent density. As a comparison, the photovoltaic properties of the solar cells fabricated with the commercial dye, AY, and its nanosized form, nAY, in the absence and presence of Co3O4 on the working electrode were measured. The values of the open-circuit photovoltage Voc, short-circuit photocurrent Isc, fill factor FF, and overall energy conversion efficiency η are presented in Table 1.
I–V curves of DSSCs (A–E)
Table 1 The cell performance parameters of the prepared DSSCs: Voc (V), Isc (mA), SE (±), and η (%) for each cell, including the Co3O4@TiO2 devices.
The reduction of the original macro-size AY to nAY of less than 100 nm (Fig. 2) has a great effect on the DSSC efficiency, which increased by 70 % (C). This can be related to the nanosize of AY: as the size of a dye crystal decreases to the nanometer regime, the particle size begins to modify the properties of the crystal, so the electronic structure is altered from continuous electronic bands to discrete, quantized electronic levels. The material properties therefore become size dependent, the electronic excitations shift to higher energy, and the oscillator strength is concentrated into just a few transitions. The presence of Co3O4 as a co-semiconductor in the DSSC electrode (B and D) increased the efficiency by 165 and 620 times in comparison with DSSC (A), respectively. This can be explained by (1) the TiO2–Co3O4 composite being darker in color, with high absorption in the visible region of the solar spectrum; (2) the cubic spinel structure of Co3O4, a p-type semiconductor due to O2 deficiency in its lattice, into whose conduction band (CB) electron charge is quickly injected; and (3) Co3O4 showing several types of band gaps: direct allowed 2.06 and 1.44 eV, direct forbidden 1.38 and 1.26 eV, indirect allowed 1.10 eV (energy of the phonon assisting the indirect transition = 0.02) and indirect forbidden 0.75 eV (energy of the phonon assisting the indirect transition = 0.27) (Kabre 2011). The efficiency of DSSC (D) increased 13-fold in the presence of terpineol as a solvent (E). The film prepared using terpineol as the solvent exhibited the highest energy conversion efficiency. When water was used as the solvent, only polyethylene glycol gave a stable film on the FTO conductive glass, and this film showed a cracked surface recognizable by naked-eye observation. On the other hand, when terpineol, an organic solvent with hydroxyl functionality that can accept or donate hydrogen bonds, was used as the solvent, most of the semiconductor dispersions formed uniform thin films. Accordingly, the increased gap between the ground state of the dye and the redox potential of the electrolyte leads to higher DSSC performance in the following order: (A) < (C) < (B) < (D) < (E). Figure 10 shows the standard error (SE) of the photocurrent density mean of DSSCs (A–E) to test the validity of the results. A mean is considered reliable if it is at least 2.5 times the standard deviation (Bevington and Robinson 2002). By calculating the DSSC efficiencies and taking these standard error values into account, it was found that the efficiency values did not change to four digits.
Standard error analysis of photocurrent density of DSSCs (A–E)
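For reference, the overall conversion efficiency in Table 1 follows from the measured cell parameters as η = Voc·Isc·FF/(Pin·A); a minimal sketch in Python (the numerical inputs are illustrative placeholders, not the measured values from Table 1):

def dssc_efficiency_percent(voc_v, isc_ma, ff, p_in_mw_cm2=950.0, area_cm2=2.25):
    # eta = (Voc * Isc * FF) / (P_in * A) * 100; note that V * mA = mW.
    p_out_mw = voc_v * isc_ma * ff
    return 100.0 * p_out_mw / (p_in_mw_cm2 * area_cm2)

# Placeholder inputs; the measured Voc, Isc, and FF for each cell are in Table 1.
print(f"eta = {dssc_efficiency_percent(0.45, 6.0, 0.55):.3f} %")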
The predicted mechanism for the conversion of photons to current in the DSSCs can be interpreted as passing through the following stages (Fig. 11). Electrons are excited by solar energy from the HOMO to the LUMO level of the dye (AY/AY+) adsorbed on the TiO2–Co3O4 composite surface, owing to the intramolecular π–π* transition (Stage 1). These excited electrons diffuse immediately into the CB of TiO2 (Stage 2) and then move to the CB of Co3O4 (Stage 3), which decreases the flow of electrons back to the HOMO (Stage 4) or to the \(I_3^-\) electrolyte in recombination (Stage 5), i.e., it reduces the electron trapping effect by increasing the contact surface area of the TiO2–Co3O4 composite with AY or the electrolyte (Anta 2012). The electrons proceed to the FTO of the working electrode (Stage 6) and then reach the counter electrode through the external wiring (Stage 7). The oxidized dye (AY+) accepts an electron from the I− redox mediator, regenerating the HOMO of the dye (AY), while I− is oxidized to \(I_3^-\). The oxidized redox mediator, \(I_3^-\), is reduced back to I− at the counter electrode.
DSSCs' predicted conversion mechanism
Five DSSCs were prepared to investigate the effects of their construction on their solar conversion efficiency. The nanosize of AY (less than 100 nm) had a great effect on the DSSC efficiency, which increased by 70 %. The presence of Co3O4 as a co-semiconductor in the DSSC electrode increased the efficiency by 165 and 620 times for the cells modified with TiO2 + Co3O4 only and with TiO2 + Co3O4 plus nAY, respectively. The presence of the solvent (terpineol) increased the DSSC efficiency 13-fold. Finally, the predicted mechanism for the conversion of photons to current in the DSSCs was discussed.
FAT carried out the electrochemical studies of the DSSCs, participated in the sequence alignment, drafted the manuscript, and handled the revision process. GME conceived the study, participated in its design, and helped to draft the manuscript. NK measured all the photophysical properties of the DSSCs and also participated in the study design and coordination. NA prepared all the as-obtained compounds and assembled the DSSCs. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Department of Physical Chemistry, Faculty of Science (Girls), Al-Azhar University, Youssif Abbas St., Nasr city, Cairo, Egypt
Solar Energy Department, National Research Center, El-bohooth St., Dokki, Giza, Egypt
Faculty of Science (Girls), Youssif Abbas St., Nasr city, Cairo, Egypt
Akpan, U. G., & Hameed, B. H. (2009). Parameters affecting the photocatalytic degradation of dyes using TiO2-based photocatalysts: a review. Journal of Hazardous Materials, 170(2), 520–529.
Anta, J. A. (2012). Electron transport in nanostructured metal–oxide semiconductors. Current Opinion in Colloid & Interface Science, 17(3), 124–131.
Balraju, P., Kumar, M., Deol, Y. S., Roy, M. S., & Sharma, G. D. (2010). Photovoltaic performance of quasi-solid state dye sensitized solar cells based on perylene dye and modified TiO2 photo-electrode. Synthetic Metals, 160(1), 127–133.
Basheer, B., Mathew, D., George, B. K., & Nair, C. R. (2014a). An overview on the spectrum of sensitizers: the heart of dye sensitized solar cells. Solar Energy, 108, 479–507.
Basheer, B., Mathew, D., George, B. K., & Nair, C. R. (2014b). An overview on the spectrum of sensitizers: the heart of dye sensitized solar cells. Solar Energy, 108, 479–507.
Bevington, P. R., & Robinson, D. K. (2002). Data reduction and error analysis for the physical sciences (III ed.). New York: McGraw–Hill.
Choi, H., Nahm, C., Kim, J., Kim, C., Kang, S., Hwang, T., & Park, B. (2013). Review paper: toward highly efficient quantum-dot and dye-sensitized solar cells. Current Applied Physics, 13, S2–S13.
Han, N., & Ho, J. C. (2014). One-dimensional nanomaterials for energy applications. In S. C. Tjong (Ed.), Nanocrystalline materials: their synthesis-structure-property relationships and applications (II ed., pp. 75–120). USA: Elsevier.
Im, J. S., Lee, S. K., & Lee, Y. S. (2011). Cocktail effect of Fe2O3 and TiO2 semiconductors for a high performance dye-sensitized solar cell. Applied Surface Science, 257(6), 2164–2169.
Kabre, T. S. (2011). Co3O4 thin films: sol-gel synthesis, electrocatalytic properties and photoelectrochemistry. Ohio: M.Sc. thesis.
Kantonis, G., Stergiopoulos, T., Katsoulidis, A. P., Pomonis, P. J., & Falaras, P. (2011). Electron dynamics dependence on optimum dye loading for an efficient dye-sensitized solar cell. Journal of Photochemistry and Photobiology A: Chemistry, 217(1), 236–241.
Kato, N., Higuchi, K., Tanaka, H., Nakajima, J., Sano, T., & Toyoda, T. (2011). Improvement in long-term stability of dye-sensitized solar cell for outdoor use. Solar Energy Materials and Solar Cells, 95(1), 301–305.
Kim, H. S., Kim, D., Kwak, B. S., Han, G. B., Um, M. H., & Kang, M. (2014). Synthesis of magnetically separable core@shell structured NiFe2O4@TiO2 nanomaterial and its use for photocatalytic hydrogen production by methanol/water splitting. Chemical Engineering Journal, 243, 272–279.
Kong, C., Min, S., & Lu, G. (2014). Dye-sensitized cobalt catalysts for high efficient visible light hydrogen evolution. International Journal of Hydrogen Energy, 39(10), 4836–4844.
Lai, W. H., Su, Y. H., Teoh, L. G., & Hon, M. H. (2008). Commercial and natural dyes as photosensitizers for a water-based dye-sensitized solar cell loaded with gold nanoparticles. Journal of Photochemistry and Photobiology A: Chemistry, 195(2), 307–313.
Ludin, N. A., Mahmoud, A. A. A., Mohamad, A. B., Kadhum, A. A. H., Sopian, K., & Karim, N. S. A. (2014). Review on the development of natural dye photosensitizer for dye-sensitized solar cells. Renewable and Sustainable Energy Reviews, 31, 386–396.
Mekprasart, W., Noonuruk, R., Jarernboon, W., & Pecharapa, W. (2011). Quasi-solid-state dye-sensitized solar cells based on TiO2/NiO core-shell nanocomposites. Journal of Nanoscience and Nanotechnology, 11(7), 6483–6489.
Reda, S. M. (2010). Synthesis of ZnO and Fe2O3 nanoparticles by sol–gel method and their application in dye-sensitized solar cells. Materials Science in Semiconductor Processing, 13(5–6), 417–425.
Sharma, G. D., Suresh, P., & Mikroyannidis, J. A. (2010). Quasi solid state dye-sensitized solar cells with modified TiO2 photoelectrodes and triphenylamine-based dye. Electrochimica Acta, 55(7), 2368–2372.
Tian, H., Yang, X., Cong, J., Chen, R., Teng, C., Liu, J., et al. (2010). Effect of different electron donating groups on the performance of dye-sensitized solar cells. Dyes and Pigments, 84(1), 62–68.
Xiao, S., Cui, J., Yi, P., Yang, Y., & Guo, X. (2014). Insight into electrochemical properties of Co3O4-modified magnetic polymer electrolyte. Electrochimica Acta, 144, 221–227.
Yamaguchi, T., Tobe, N., Matsumoto, D., Nagai, T., & Arakawa, H. (2010). Highly efficient plastic-substrate dye-sensitized solar cells with validated conversion efficiency of 7.6 %. Solar Energy Materials and Solar Cells, 94(5), 812–816.
Yum, J. H., Lee, J. W., Kim, Y., Humphry-Baker, R., Park, N. G., & Grätzel, M. (2014). Panchromatic light harvesting by dye- and quantum dot-sensitized solar cells. Solar Energy, 109, 183–188.
Zhuiykov, S. (2014). Nanostructured semiconductor composites for solar cells. In S. Zhuiykov (Ed.), Nanostructured semiconductor oxides for the next generation of electronics and functional devices: properties and applications (pp. 267–320). Cambridge: Woodhead Publishing Limited.
Is the Hilbert space spanned by both bound and continuous hydrogen atom eigenfunctions?
As e.g. Griffiths says (p. 103, Introduction to Quantum Mechanics, 2nd ed.), if the spectrum of a linear operator is continuous, the corresponding eigenfunctions are not normalizable, and therefore the operator has no eigenfunctions in the Hilbert space.
On the other hand, both bound and continuous eigenfunctions are required to form a complete set, so that an arbitrary wave function can be expanded in terms of the eigenfunctions (Landau & Lifshitz, Quantum Mechanics, p. 19). How are these results connected, and how can the apparent contradiction be explained?
Is a formulation that the Hilbert space is spanned by both bound and continuous hydrogen atom eigenfunctions correct?
Update: I just found Scattering states of Hydrogen atom in non-relativistic perturbation theory which is related (but only partially answers this question).
quantum-mechanics mathematical-physics operators hilbert-space
wondering
$\begingroup$ Related: physics.stackexchange.com/q/68639/2451 and links therein. $\endgroup$ – Qmechanic♦ Mar 9 '15 at 10:55
$\begingroup$ I answered this at physicsoverflow.org/30657 $\endgroup$ – Arnold Neumaier Jun 28 '16 at 20:23
If a self-adjoint operator has pure point spectrum, then (by one definition of pure point spectrum) its eigenfunctions form a complete basis for the Hilbert space.
However, an operator may also have continuous spectrum, in which case to get a decomposition of the identity one must use the spectral measure of the self-adjoint operator.
The usual thing physicists are used to in quantum mechanics is a decomposition of the identity when the operator has only pure point spectrum: $$ I = \sum_{n}|\psi_n\rangle\langle\psi_n|\,. $$
$|\psi_n\rangle$ are the eigenfunctions of the operator: $H|\psi_n\rangle=E_n|\psi_n\rangle$.
The way to generalize this relation to operators that also have other types of spectrum is as follows: for fixed $n$, think of $|\psi_n\rangle\langle\psi_n|$ like a projection so that the above displayed equation can just as well be written as: $$ I = \sum_{\lambda \in \text{spectrum of }H}P_\lambda$$ where $P_{\lambda_n}=|\psi_n\rangle\langle\psi_n|$ is the projection operator onto the particular eigenstate that has the corresponding eigenvalue. Now, operators that have continuous rather than discrete spectrum call for an integral, and so we get $$ I = \int_{\lambda \in \text{spectrum of }H} \mathrm{d}P_\lambda\,. $$
This might seem a bit intimidating: Now $P$ is what is called the projection-valued spectral measure of the operator $H$. It plays the same role that was played before by the same symbol, but now it's not limited to being an atomic measure. If the kind of continuous spectrum we have is only absolutely continuous (which is often the case) then this may be further decomposed as $$ I = \sum_{\lambda_n \in \text{point spectrum of }H}|\psi_n\rangle\langle\psi_n|+\int_{\lambda \in \text{continuous spectrum of }H} F(\lambda)\mathrm{d}\lambda\,. $$
$F$ is called the projection-valued density of the spectral measure of $H$ w.r.t. the Lebesgue measure (it is the Radon-Nikodym derivative of the spectral measure $P$ w.r.t. the Lebesgue measure).
The point is you really need both parts to get a decomposition of the identity, and the hydrogen atom (as well as a particle in a finite box) has both types of spectrum indeed.
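To make this concrete for the hydrogen atom, the resolution of the identity can be written schematically (a sketch, suppressing normalization conventions for the delta-normalized continuum states) as a sum over the bound states plus an integral over the Coulomb scattering states: $$ I = \sum_{n,\ell,m} |\psi_{n\ell m}\rangle\langle\psi_{n\ell m}| + \int \mathrm{d}^{3}k\, |\psi_{\mathbf{k}}\rangle\langle\psi_{\mathbf{k}}|\,, $$ where $|\psi_{\mathbf{k}}\rangle$ are the continuum (scattering) eigenfunctions. Neither the bound-state sum nor the continuum integral alone resolves the identity, which is exactly the sense in which the Hilbert space is "spanned" by both families together.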
If you want more details about the construction of these spectral measures I recommend Rudin's book on functional analysis.
PPR
Decreasing the execution time of reducers by revising clustering based on the futuristic greedy approach
Ali Bakhthemmat & Mohammad Izadi
Journal of Big Data volume 7, Article number: 6 (2020)
MapReduce is used within the Hadoop framework, which handles two important tasks: mapping and reducing. Data clustering in mappers and reducers can decrease the execution time, as similar data can be assigned to the same reducer with one key. Our proposed method decreases the overall execution time by clustering and by lowering the number of reducers. The proposed algorithm is composed of five phases. In the first phase, data are stored in the Hadoop structure. In the second phase, we cluster data using the MR-DBSCAN-KD method in order to determine all of the outliers and clusters. The outliers are then assigned to the existing clusters using the futuristic greedy method, and at the end of the second phase similar clusters are merged together. In the third phase, clusters are assigned to the reducers; fewer reducers are required for this task because approximated load balancing is applied across the reducers. In the fourth phase, the reducers execute their jobs in each cluster. Finally, in the fifth phase, the reducers return the output. Decreasing the number of reducers and revising the clustering allow the reducers to perform their jobs almost simultaneously. Our experimental results indicate that the proposed algorithm reduces the execution time by about 3.9% compared with the fastest algorithm in our experiments.
The amount of data generated on the internet grows every day at a high rate, and this rate of data generation requires rapid processing. The MapReduce technique is applied for distributed computing over huge data; its main idea is job parallelization. The MapReduce algorithm deals with two important tasks, namely Map and Reduce. First, the Map task takes a set of data and breaks it down into tuples (key/value pairs). Second, the Reduce task takes the map output as its input, and the reducers run the tasks. Job clustering can determine an allocation of jobs to the reducers and mappers. In recent years, this method has been used frequently for job allocation in MapReduce to shorten the execution time of big data processing [1].
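To make the key/value structure concrete, the following is a minimal sketch of the canonical Hadoop word-count job in Java (the implementation language used later in this paper); it is illustrative only and is not part of the proposed algorithm.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {
    // Map: emit (word, 1) for every token of an input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: all values sharing one key arrive at the same reducer and are summed.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }
}
```

Because every value with the same key is routed to one reducer, grouping similar data under one key is exactly what lets clustering shorten the reduce stage.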
Previous research has shown that clustering methods can be useful for big data analysis. K-means, a partitioning-based clustering method, is simple and fast and outperforms many other methods. Another approach is hierarchical clustering, which is performed by splitting or merging data; however, its time complexity is often unsuitable in practice, and the number of clusters it produces is not fixed. Grid-based clustering is typically used on spatial data, and the EM algorithm functions effectively in big data clustering. Finally, density-based clustering offers adequate precision and a proper execution time [2].
Decreasing the execution time of jobs is the main motivation of clustering methods. Therefore, the purpose of this paper is to present a new clustering-based method for big data processing in the Hadoop framework using the MapReduce programming model. We use the MR-DBSCAN-KD method as it is one of the fastest density-based clustering methods. However, MR-DBSCAN-KD has two main drawbacks. First, the allocation of outliers to reducers is not determined in this method; we propose the FGC algorithm in order to solve this challenge. Second, MR-DBSCAN-KD creates clusters with significantly different densities, and this style of clustering does not by itself lead to load balancing across the clusters [3]. Accordingly, we propose an algorithm to solve this problem as well. Our proposed method is based on MR-DBSCAN clustering, the futuristic greedy approach, and approximated load balancing.
Clustering operations on big data involve expensive computation, so the execution time of sequential algorithms is very long. Parallelization of clustering algorithms is recommended for processing big data, which can be fulfilled by MapReduce programming. MapReduce programming can decrease the execution time provided that it uses proper density-based clustering techniques. In this section, we focus on new approaches to big data processing. We also categorize these approaches according to the ideas behind prior research and discuss their strengths and weaknesses. We have classified the new approaches into five categories: clustering based on pure parallelizing, clustering based on load balancing, clustering based on traffic awareness, clustering based on innovative methods, and clustering based on cluster optimization.
The first category is density-based clustering based on pure parallelizing. Zhao et al. [3] proposed a clustering algorithm of this kind: they applied parallel k-means clustering in MapReduce for big data processing, and their results showed that the proposed algorithm runs in a reasonable time as the data grow. Srivastava et al. [4] proposed a parallel K-medoid clustering algorithm in Hadoop that is accurate in clustering; when the number of reducers increases in this method, the makespan decreases as it correlates with data growth. Dai et al. [5] stated that parallel DBSCAN algorithms are not efficient for big data processing in MapReduce when the number of reducers holding small data increases; they presented the MR-DBSCAN-KD algorithm for bulky data, in which the execution time for small data in reducers is negligible. Most pure-parallelizing methods in density-based clustering create heterogeneous clusters. The jobs in heterogeneous clusters are never executed simultaneously, which prolongs the overall run of the jobs.
We place the load balancing approach in the second category, aimed at solving the heterogeneous-cluster problem in parallelized clustering. A number of researchers investigated the total execution time by load balancing in parallel clustering. He et al. [6] used this approach for big data processing: they proposed the MR-DBSCAN algorithm based on load balancing for heavily skewed data. Their method was implemented completely in parallel; they achieved load balancing on heavily skewed data, and their results verified the efficiency and scalability of MR-DBSCAN. Also, Verma et al. [7] studied job scheduling in MapReduce in order to minimize the makespan and improve clustering; they presented an innovative heuristic structure for job scheduling that generates a balanced workload, thereby reducing the completion time of jobs. The load-balancing methods of Ramakrishnan et al. [8] and Fan et al. [9], also based on MapReduce, have been reviewed as well. Xia et al. [10] used a new greedy algorithm with load balancing for MapReduce programming. In this method, data were allocated to the reducers based on iterative calculation over sample data; a greedy algorithm was used instead of a hash algorithm, since its execution time was shorter than that of hash partitioning algorithms. Clustering methods based on load balancing have not focused much on the issues of online job arrival and clustering accuracy. In clustering methods, job traffic changes irregularly when jobs arrive online, and thus the load balance between clusters disappears.
In recent years, a third category has been introduced, based on traffic awareness, to handle the arrival of irregular jobs. Xia et al. [11] proposed an algorithm based on traffic awareness: they applied an efficient MapReduce-based parallel clustering algorithm for distributed traffic subarea division. In this research, a new distance metric is introduced for the parallelized k-means algorithm, and evaluation of the experimental results indicates efficiency in execution time and high clustering accuracy. Ke et al. [12] and Reddy et al. [13] also proposed traffic-aware partition and aggregation in big data applications; they classified data based on job traffic. Also, Venkatesh et al. [14] investigated MapReduce based on traffic-aware partition and aggregation for huge data via an innovative approach. This method considerably reduces the response time for big data in the Hadoop framework and consists of three layers. The first layer performs partitioning and mapping on big data. In the second layer, data are shuffled based on traffic-aware mapping. In the third layer, data are reduced; this layer lowers the network traffic in the traffic-aware clustering algorithm, in response to which the execution time diminishes. However, in spite of the available methods, the clustering time on some data sets remains very high.
Indeed, recent methods could still run in a shorter time by decreasing the clustering computation. Hence, the innovative methods of the fourth category help to reduce the clustering computations. For example, HajKacem et al. [15] presented a one-pass MapReduce-based clustering method, called the AMRKP method, for mixed large-scale data. The AMRKP method reduces the computation required for calculating the distance between clusters. Also, the data are read and written only once; consequently, the number of I/O operations on the disk is reduced, and the smaller number of operation iterations improves the execution time. Sudhakar Ilango et al. [16] developed a clustering approach for big data processing based on an artificial bee colony; it minimized the execution time but did not always provide good clustering precision. Fan et al. [17] focused on multimedia big data; they observed that the Canopy + K-means algorithm operates faster than k-means as the amount of data increases. The Canopy algorithm is composed of two steps. In the first step, the data are grouped using a new distance function calculation, which gives greater precision in clustering; these groups are called canopies. Thereafter, the groups are assigned to clusters. This structure can improve the total execution time. Jane et al. [18] proposed a sorting-based K-means and median-based clustering algorithm that uses a multi-machine technique for big data processing (SBKMMA); it also reduces the number of iterations of the k-means algorithm. The drawback of this algorithm is the determination of the number of clusters, as the initial number of clusters affects the execution time of the algorithm. Kaur et al. [19] presented SUBSCALE, a novel clustering algorithm, to find non-trivial subspace clusters in a k-dimensional data set; their algorithm is suited to high-dimensional data. Parallelism in this method is independent of the multiple dimensions, and thus the number of iterations of the SUBSCALE algorithm diminishes. Kanimozhi et al. [20] proposed a clustering approach based on bivariate n-gram frequent items; this approach reduces the amount of big data to be processed in reducers, leading to an increase in the processing speed of big data. Nevertheless, with innovative methods, many clusters are not clustered precisely because of the border points (outliers) within them. Accordingly, it is better to optimize the clusters.
Finally, the fifth category of algorithms is designed to optimize clusters in order to improve clustering accuracy. Zerabi et al. [21] developed a new clustering method using a conditional entropy index. This method involves a process with three tasks, each dealing with MapReduce operations; these tasks operate based on the conditional entropy index, whereby the clusters are optimized. Hosseini et al. [22] proposed a scalable and robust fuzzy weighted clustering based on MapReduce for microarray gene expressions. This method merges data based on a similarity index; data are processed in a parallelized and distributed platform, and the method offers a reasonable execution time. Hemant Kumar Reddy et al. [23] improved MapReduce performance with a novel entropy-based data placement strategy (EDPS). They extract data groups based on dependencies among datasets; the data groups are then assigned to the heterogeneous data centres and finally to clusters based on their relative entropy, whereby the clusters are optimized approximately. Beck et al. [24] applied mean-shift clustering for grouping big data, using the NNGA+ algorithm for dataset pre-processing; they improved the quality of clustering and the execution time via the mean-shift model for big data clustering. Gates et al. [25] showed that random models can have an impact on similar clustering pairs; these models can be applied for evaluating several methods in MapReduce. Heidari et al. [26] discussed clustering with variable density for huge data and presented MR-VDBSCAN; their idea is to search the local density of points in order to avoid connecting clusters with different densities. In this way, clustering optimization is performed.
Researchers have tried to improve execution time through approaches such as parallelism, load balancing, job categorization based on traffic awareness, reduction of clustering computation, and cluster optimization. Parallelism creates heterogeneous clusters, which significantly affect the runtime in the reducers; in this way, the total execution time of jobs in clusters increases. Load balancing in clustering can create approximately homogeneous clusters; nevertheless, jobs arriving online disrupt the load balance and also generate heavy computations in load-balancing-based clustering. For this reason, clustering has been performed based on job traffic, but this approach has not solved the problem of high computation in clustering either. In the proposed method we consider parallelization and reduction of calculations, as well as optimization of clusters and load balancing, respectively. Innovative methods have reduced runtime by reducing computation and using local options; nonetheless, because of the boundary points, the clusters are still not clustered carefully. Cluster optimization can be done with the minimum number of clusters suitable for the reducers. Lowering the number of reducers, with proper clustering and load balancing, can diminish the total runtime, since the reducers can then function almost simultaneously. Because the execution time decreases with fewer reducers, we can consider maximizing the usage of reducers under load balancing. Hence, we present a new method that decreases the number of reducers by clustering jobs and balancing the load across reducers. The main challenge is the bounded points (outliers) created in density-based clustering. We cluster the data based on density and subsequently apply approximated load balancing to the clusters. The proposed idea introduces a distance function, called the Futuristic Greedy Index, for appending outliers to clusters. It also shortens the execution time by correcting the clusters: cluster correction is done by discovering similar data and assigning them to clusters, provided that the interdependence between clusters is minimized.
We consider two main goals in designing the proposed algorithm: decreasing the number of reducers and balancing their load. We design the proposed algorithm so that the mapping and reducing operations are performed in the Hadoop structure. In the proposed method, the jobs are stored in the HDFS structure in order and without heavy computation, and they are distributed equally among the file systems. Thereafter, the file systems are assigned to mappers sequentially. Each mapper is clustered by the MR-DBSCAN algorithm; accordingly, clusters and outliers are generated. The generated outliers are then merged into existing clusters, or with one another, based on FGI. Next, the generated clusters are merged based on the distance between their centroids. Subsequently, new clusters are created under load balancing and are assigned to reducers. Finally, the results of the reducers are combined together, and the output is returned. The proposed method is composed of five phases.
In the first phase, jobs are stored in the HDFS structure \((V_i)\) and are assigned to the file systems equally. Each file system (fs) can store a limited number of jobs because each file system has a limited capacity.
In the second phase, mapping operations are performed. This phase consists of three steps. In the first step, the data are assigned to the mappers, and the data in each mapper are clustered using the MR-DBSCAN method. The output of this operation is an uncertain number of heterogeneous clusters and outliers \((C_{ij}, O_{ij})\). In the second step, the FGC algorithm is employed to assign outliers to existing clusters or to one another. In the final step, some of the generated clusters are merged based on centroid distance. The output of the second phase contains the new clusters \((C'_{ij})\). Hence, the number of reducers diminishes.
In the third phase, clusters must be assigned to reducers. The clusters contain broadly similar jobs, but the clusters themselves are heterogeneous. Therefore, if the clusters were assigned to reducers directly, the reducers would have variable workloads, which can increase the total execution time. Thus, clusters are grouped based on the average cluster workload \((ETA_{k})\), and the grouped clusters are assigned to reducers under approximated load balancing.
In the fourth phase, jobs are assigned to reducers, and then each reducer executes the related jobs. We expect that the execution time decreases as the clusters are being assigned to fewer reducers with load balancing. It results in diminished communication cost of data transmission.
In the fifth phase, the outputs of the reducers are combined together, and then the final outputs are displayed.
The phases of the proposed method are illustrated in Fig. 1. Table 1 presents the notations utilized in the proposed algorithm.
Block diagram of proposed method phases
Table 1 Notations
Phase 1: Storing the data set in HDFS
Data are stored in the Hadoop structure as a set of data nodes, where each data node presents a data point. Clustering operations are performed on each data point separately. Data points are denoted by \(V_1, V_2, V_3, \ldots, V_n\); each data point is stored in a file of the distributed file system, denoted by fs. Algorithm 1 presents the storing operations. The time complexity of storing the data set in HDFS is \(O(N)\).
Algorithm 1. Storing operations in Hadoop.
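As a minimal illustration of this phase (plain Java with hypothetical names; the paper's Algorithm 1 itself is not reproduced here), the sketch below splits the data points into blocks of almost equal size, one block per file system:

```java
import java.util.ArrayList;
import java.util.List;

class Phase1Storing {
    /**
     * Splits the data points V_1..V_N into 'numFileSystems' blocks of
     * (almost) equal size, mimicking the equal assignment of jobs to
     * file systems described for phase 1.
     */
    static <T> List<List<T>> partitionEqually(List<T> dataPoints, int numFileSystems) {
        List<List<T>> fileSystems = new ArrayList<>();
        for (int i = 0; i < numFileSystems; i++) fileSystems.add(new ArrayList<>());
        for (int i = 0; i < dataPoints.size(); i++) {
            // Round-robin assignment keeps the block sizes within one of each other.
            fileSystems.get(i % numFileSystems).add(dataPoints.get(i));
        }
        return fileSystems;
    }
}
```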
Phase 2: FGC-Mapping
FGC-Mapping is performed in the following steps (clustering mappers, assigning outliers to clusters, merging clusters).
Step 1. Clustering mappers
In the previous phase, big data were split equally among the data nodes \(V_1, V_2, V_3, \ldots, V_n\), which were assigned to the mappers. In this step, the MR-DBSCAN-KD algorithm is first applied to the mappers, in parallel and separately. This step is illustrated in Fig. 2, where the data are split among three mappers. The data points assigned to each mapper are clustered, so each mapper is left with clusters and outliers. The clusters are denoted by \(C_{ij}\), and each of them has a different density. For example, mapper 1 includes clusters \(c_{11}\), \(c_{12}\), \(c_{13}\), and four outliers. Outliers are jobs that do not fall in any cluster. Algorithm 2 presents the first step of FGC-Mapping. The time complexity of the MR-DBSCAN algorithm equals \(\left( \frac{N}{n} \right)^{2}\), where \(\frac{N}{n}\) is the number of jobs in each mapper, n is the number of mappers, and N is the total number of data points. In the parallel structure, the time complexity of step 1 is \(O\left( \frac{N^{2}}{n} \right)\).
Mapping with MR-DBSCAN-KD (step 1 of phase 2)
Algorithm 2. Assigning data points to mappers.
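For reference, a minimal single-machine DBSCAN in plain Java is sketched below. It uses a brute-force neighbour search, whereas MR-DBSCAN-KD accelerates this step with a k-d-tree-style data partition, so treat it as an illustration of the clustering each mapper performs, not as the exact algorithm:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class Dbscan {
    /** Labels: -1 = noise/outlier, 1..C = cluster id. Brute-force neighbour search. */
    static int[] cluster(double[][] pts, double eps, int minPts) {
        int[] label = new int[pts.length];   // 0 = not yet visited
        int c = 0;
        for (int i = 0; i < pts.length; i++) {
            if (label[i] != 0) continue;
            List<Integer> nb = neighbors(pts, i, eps);
            if (nb.size() < minPts) { label[i] = -1; continue; }  // provisional outlier
            c++;
            label[i] = c;
            Deque<Integer> seeds = new ArrayDeque<>(nb);
            while (!seeds.isEmpty()) {
                int j = seeds.pop();
                if (label[j] == -1) label[j] = c;   // noise reachable from a core point: border
                if (label[j] != 0) continue;
                label[j] = c;
                List<Integer> nb2 = neighbors(pts, j, eps);
                if (nb2.size() >= minPts) seeds.addAll(nb2);  // j is a core point: expand
            }
        }
        return label;
    }

    static List<Integer> neighbors(double[][] pts, int i, double eps) {
        List<Integer> out = new ArrayList<>();
        for (int j = 0; j < pts.length; j++) {
            double s = 0.0;
            for (int d = 0; d < pts[i].length; d++) {
                double diff = pts[i][d] - pts[j][d];
                s += diff * diff;
            }
            if (Math.sqrt(s) <= eps) out.add(j);
        }
        return out;
    }
}
```

The points left with label -1 are exactly the outliers \(O_{ij}\) that the next step assigns with FGI.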
Step 2. Assigning outliers to clusters.
Each cluster includes a cluster centre, called a centroid, and each mapper may include outliers, as illustrated in Fig. 2. The accuracy of the MR-DBSCAN-KD algorithm is high; however, the creation of outliers and heterogeneous clusters is one of its drawbacks [3]. The proposed algorithm appends outliers to the existing clusters, or to other outliers, using the Futuristic Greedy Index (FGI) function. FGI is a new distance function that calculates the distance between the outliers and the clusters; each outlier is assigned to the closest cluster according to this function. Algorithm 3 presents the steps of assigning the outliers to the clusters in the second step of FGC-Mapping.
Algorithm 3. Assigning outliers by FGA.
The FGI function is calculated using Eqs. 1–3. Equation 1 calculates the Euclidean distance, and Eq. 2 computes the futuristic index. The FGI function in Eq. 3 is composed of these two parts (futuristic and greedy).
$$dist\left(O_{ij}, \hat{C}_{ij}\right) = \sqrt{\sum_{j=1}^{n} \left(O_{ij} - \hat{C}_{ij}\right)^{2}} \quad \text{for } Mapper_{i}$$

$$Futuristic\_Index = \frac{1}{\sum_{j=1}^{n} dist\left(O_{ij}, \hat{C}_{ij}\right)}$$

$$FGI\left(O_{ij}, C_{ij}, Mapper_{i}\right) = \frac{dist\left(O_{ij}, \hat{C}_{ij}\right)}{\sum_{j=1}^{n} dist\left(O_{ij}, \hat{C}_{ij}\right)}$$
FGI assigns each outlier to a cluster. In some cases, the distances between an outlier and the different clusters may not differ significantly. The greedy part of the FGI function, the distance \(dist(O_{ij}, \hat{C}_{ij})\) in Eq. 1, identifies the cluster closest to the outlier point, while the futuristic part in Eq. 2 accounts for how far the outlier is from all the other clusters. The greedy distance index alone may produce improper clustering, since a boundary point may not be assigned to the proper cluster; this is why the futuristic factor of Eq. 2 is included. Equation 3 presents the futuristic greedy index; indeed, Eq. 3 is the product of Eq. 1 and Eq. 2.
Finally, FGI is calculated by Eq. 3. We append the outlier points to the clusters provided that the clustering does not deteriorate in the next iteration of the greedy algorithm, since greedy selections do not guarantee appropriate selections in subsequent steps. Figure 3 illustrates the output of Algorithm 3, where the value 'j' is the number of jobs per mapper and the value 'c' denotes the number of mappers. The time complexity of step 2 is:
$$iii.\; O\left( b \cdot n \cdot j + b \cdot j \right) = O\left( n \cdot j \right), \quad j = \frac{N}{c},\; c < N \;\Rightarrow\; O\left( n \cdot \frac{N}{c} \right) = O\left( N \right)$$
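A minimal sketch of the outlier-assignment rule in plain Java follows (hypothetical names, not the paper's Algorithm 3 verbatim). Note that for a single outlier the denominator of Eq. 3 is the same for every candidate cluster, so the argmin of FGI coincides with the argmin of the raw distance; the normalization matters when FGI values computed for different outliers, or in different mappers, are compared with one another, which is our reading of the futuristic factor's role.

```java
/**
 * Sketch of the FGI rule of Eqs. (1)-(3): compute the Euclidean distance
 * from an outlier to every cluster centroid, normalize by the sum of all
 * distances (the futuristic factor of Eq. (2)), and return the index of
 * the centroid with the smallest FGI value. Names are illustrative.
 */
class FgiAssignment {
    static int closestClusterByFgi(double[] outlier, double[][] centroids) {
        double[] dist = new double[centroids.length];
        double sum = 0.0;
        for (int k = 0; k < centroids.length; k++) {
            double s = 0.0;
            for (int d = 0; d < outlier.length; d++) {
                double diff = outlier[d] - centroids[k][d];
                s += diff * diff;
            }
            dist[k] = Math.sqrt(s);          // Eq. (1)
            sum += dist[k];
        }
        int best = 0;
        for (int k = 1; k < centroids.length; k++) {
            // Eq. (3): FGI = dist / (sum of all distances).
            if (dist[k] / sum < dist[best] / sum) best = k;
        }
        return best;
    }
}
```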
Step 3. Merging clusters.
Outlier allocation to clusters (step 2 of phase 2)
In this step, clusters that lie close together in the mappers are merged. Algorithm 4 presents the merging method used in the third step of FGC-Mapping. Initially, the centroids themselves are clustered by MR-DBSCAN-KD; the outputs of this clustering are denoted by \((C''_{11}(1), C''_{12}(2), \ldots, C''_{ij}(k))\), where \(C''_{ij}(t)\) denotes the centroid of the t-th cluster. Clusters linked by their centroids are merged according to the centroids' clustering, and the new set of clusters is updated in the existing mappers. Hence, new clusters \((C'_{11}(1), C'_{12}(2), \ldots, C'_{ij}(p))\) with different densities are obtained. Figure 4 illustrates Algorithm 4. Note that when clusters are merged, a new cluster is formed, so the number of merged clusters is smaller than the number of previous clusters; clustering based on centroids therefore lowers the number of clusters. The time complexity of step 3 is \(O(N)\).
Merge of clusters to new clusters in mappers (step 3 of phase 2)
Algorithm 4. Merging clusters based on centroids.
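The sketch below (plain Java, illustrative names) shows one way to realize this step, reusing the Dbscan.cluster sketch given earlier: the centroids are clustered, every point inherits the meta-cluster of its original cluster's centroid, and centroids left as noise keep their own cluster.

```java
import java.util.Arrays;

class CentroidMerge {
    /**
     * pointLabel[i] in 1..K is the original cluster of point i; centroids[k]
     * is the centroid of cluster k+1. Returns merged cluster labels per point.
     */
    static int[] mergeByCentroids(int[] pointLabel, double[][] centroids,
                                  double eps, int minPts) {
        int[] meta = Dbscan.cluster(centroids, eps, minPts); // cluster the centroids
        int next = Arrays.stream(meta).max().orElse(0);
        for (int k = 0; k < meta.length; k++) {
            if (meta[k] == -1) meta[k] = ++next; // isolated centroid: keep cluster as-is
        }
        int[] merged = new int[pointLabel.length];
        for (int i = 0; i < pointLabel.length; i++) {
            merged[i] = meta[pointLabel[i] - 1]; // clusters linked by centroids merge
        }
        return merged;
    }
}
```

Because only the k centroids are clustered (k ≪ N), this step avoids recomputing distances over every node of every cluster.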
Phase 3: Load balancing in clusters
The result of the MR-DBSCAN-KD algorithm is a set of heterogeneous clusters. In the previous phase, the number of clusters was reduced by merging some of them; in phase 3, the clusters are modified to balance the load. The number of clusters is denoted by P and the proper number of reducers by F. \(ETA_{k}\) denotes the average density of the k-th cluster in MR-DBSCAN. We design Algorithm 5 (the third phase of the proposed method) around \(ETA_{k}\): we assign reducers to clusters with a density approximately similar to \(ETA_{k}\), so that load balancing is accomplished in the cluster densities.
In Algorithm 5, clusters with densities greater than \(ETA_{k}\) are split into equal clusters of density \(ETA_{k}\) and are then assigned to reducers. The remaining clusters are assigned to the other reducers using the Best Fit algorithm [27]. Accordingly, load balancing across the reducers leads to a balanced distribution of traffic.
The time complexity of phase 3 is \(O(P^{2} \log P)\).
Algorithm 5. Cluster revising based on load balancing.
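A compact sketch of this phase in plain Java follows (illustrative names; the paper's Algorithm 5 uses plain Best Fit [27], whereas this sketch sorts the small clusters first, i.e. a best-fit-decreasing variant). Oversize clusters are carved into ETA-sized partitions, each given its own reducer, and the remainder are packed best-fit into reducers of capacity ETA:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Phase3Balance {
    /**
     * clusterSizes: job count per cluster; eta: target reducer workload (ETA).
     * Returns, per reducer, the list of workload pieces assigned to it.
     */
    static List<List<Integer>> assign(List<Integer> clusterSizes, int eta) {
        List<List<Integer>> reducers = new ArrayList<>();
        List<Integer> capacity = new ArrayList<>();   // remaining capacity per open reducer
        List<Integer> small = new ArrayList<>();
        for (int size : clusterSizes) {
            while (size >= eta) {                     // carve oversize clusters into ETA blocks
                List<Integer> full = new ArrayList<>();
                full.add(eta);
                reducers.add(full);
                capacity.add(0);                      // a full reducer accepts nothing more
                size -= eta;
            }
            if (size > 0) small.add(size);
        }
        small.sort(Collections.reverseOrder());       // decreasing order helps best fit
        for (int s : small) {
            int best = -1;                            // fullest open reducer that still fits s
            for (int i = 0; i < capacity.size(); i++) {
                if (capacity.get(i) >= s && (best == -1 || capacity.get(i) < capacity.get(best))) {
                    best = i;
                }
            }
            if (best == -1) {                         // nothing fits: open a new reducer
                reducers.add(new ArrayList<>());
                capacity.add(eta);
                best = capacity.size() - 1;
            }
            reducers.get(best).add(s);
            capacity.set(best, capacity.get(best) - s);
        }
        return reducers;
    }
}
```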
Phase 4: Job execution in reducers
In phase 4, the clusters are assigned to the reducers, and the reducers then execute the jobs in parallel. Load balancing leads the reducers to finish their jobs almost simultaneously, so the execution time of the jobs decreases; in addition, the smaller number of reducers results in a lower communication cost. Consequently, the total execution time decreases. Algorithm 6 illustrates phase 4.
Algorithm 6. Assigning clusters to reducers.
The time complexity of step 4 is:
$$vi. O\left( {p + p} \right) = O\left( p \right)$$
Phase 5: Determining the outputs
In the last phase, the outputs of the reducers are combined together, and the final output is returned. Algorithm 7 depicts the last phase of the proposed algorithm. The time complexity of step 5 is:
$$vii. O\left( {p * \frac{N}{p}} \right) = O\left( N \right)$$
Algorithm 7. Combining the outputs of reducers.
The overall time complexity of the proposed method is obtained by combining the complexities of phases 1 through 5 (Sect. 3 of this paper), as shown in Table 2. Thus, the complexity of the proposed algorithm is:
Table 2 Complexity of the algorithm phases
$$O\left( N \right) + \left( O\left( \frac{N^{2}}{n} \right) + O\left( N \right) + O\left( N \right) \right) + O\left( P^{2} \log P \right) + O\left( P \right) + O\left( N \right) = O\left( \frac{N^{2}}{n} \right)$$
The time complexity of phase 3 (load balancing in clusters) is \(O(P \log P)\), where P is the number of clusters, which is far smaller than N. Phase 3 is an additional phase that we add to the main steps of MapReduce, and it introduces a waiting time with complexity \(O(P^{2} \log P)\). Note that this is negligible in contrast to \(O\left( \frac{N^{2}}{n} \right)\), since P ≪ N. Also, the approximated load balancing in the reducers improves the execution; the experimental results confirm this claim.
The experimental platform is implemented using Hadoop and is composed of one master machine and eight slave machines. All of the machines had the following specification: Intel Xeon E7-2850 @ 2.00 GHz (dual CPU) with 8.00 GB RAM. All of the experiments were performed on Ubuntu 16.04 with Hadoop 2.9.1 and JDK 1.8. The code was implemented in Java in the Hadoop environment.
Table 3 presents the datasets employed in this research. These datasets qualify as big data in this research for two reasons. First, part of the main memory of the available computers is occupied by the operating system and other required information, so it is not possible to load all of the data in these datasets into the main memory of an available computer; the datasets are too large and complex to be processed by traditional algorithms and machines. Second, the datasets are composed of several types and several attributes.
Table 3 Datasets
Four types of datasets have been used in the experiments, called NCDC, PPG-DaLiA, HARCAS, and YMVG. The NCDC dataset [29] contains one file per station per year of sub-hourly (5-min) data from the U.S. Climate Reference Network (USCRN). The sub-hourly data include air temperature, precipitation, global solar radiation, surface infrared temperature, relative humidity, soil moisture and temperature, wetness, and 1.5-m wind speed. Instances span 2006 to 2019, and the size of this dataset is about 26 GB. The PPG-DaLiA dataset contains data from 15 subjects wearing physiological and motion sensors, providing a PPG dataset for motion compensation and heart rate estimation in daily life activities. PPG-DaLiA is a publicly available dataset for PPG-based heart rate estimation. This multimodal dataset features physiological and motion data, recorded from both a wrist- and a chest-worn device, of 15 subjects while performing a wide range of activities under close to real-life conditions. The included ECG data provide the heart rate ground truth, while the included PPG and 3D-accelerometer data can be used for heart rate estimation with compensation for motion artifacts. The Human Activity Recognition from Continuous Ambient Sensors dataset (HARCAS) represents the ambient data collected in houses with volunteer residents. Data are collected continuously while the residents perform their normal routines. Ambient PIR motion sensors, door/temperature sensors, and light switch sensors are placed throughout the volunteer's house and relate to the specific activities of daily living we wish to capture. The dataset should be useful particularly for research on multi-view (multimodal) learning, including multi-view clustering and/or supervised learning, co-training, early/late fusion, and ensemble techniques. The YouTube Multiview Video Games (YMVG) dataset consists of feature values and class labels for about 120,000 videos (instances). Each instance is described by up to 13 feature types from three high-level feature families: textual, visual, and auditory features. There are 31 class labels, 1 through 31; the first 30 labels correspond to popular video games, and class 31 is non-specific, meaning none of the 30. Note that neither the identity of the videos nor the class labels (video-game titles) are released. Again, the dataset should be useful particularly for research on multi-view (multimodal) learning, including multi-view clustering and/or supervised learning, co-training, early/late fusion, and ensemble techniques.
The results are compared with the K-means-parallel, GRIDDBSCAN, DB-Scan, mean-shift clustering, and EM clustering methods. Table 4 summarizes the execution time for these algorithms. The proposed method executes jobs faster than the other algorithms for four reasons. First, big data are categorized and assigned to the mappers equally, without heavy calculations, and each mapper holds only a small amount of data; hence, the clustering operation on each mapper's data runs within a short execution time, after which the clusters and outliers are created and the outliers are assigned to clusters, or to one another, equally quickly. Second, the generated clusters are merged based on the distance between their centroids, so the distance function is not computed for every node of a cluster but only for the centroids; this prevents heavy computation and decreases the distance calculations in clustering. Third, the load balancing between clusters divides the workload between the reducers almost equally, so the reducers execute the jobs almost simultaneously, and it also results in a smaller number of reducers. The low number of reducers shortens the time of data transmission in the Hadoop framework, so the communication cost drops; note that the communication cost consists of the coordination between reducers, which is performed by a coordinator. Load balancing of the reducers' traffic also leads to less data transmission between reducers: if load balancing is not established, a high load may be transferred to a single reducer, and the total execution time of the parallel MapReduce structure increases. Fourth, each cluster is composed of jobs with almost similar computation; when these clusters are assigned to the reducers, the reducers execute the jobs very quickly, since some similar operations do not need to be recalculated and several similar operations can be processed together (for example, an identical key in the key-value structure of MapReduce when counting the occurrences of one word). Consequently, the number of repeated operations and the execution time are reduced. Figure 5 indicates that the proposed algorithm performs faster than the other methods when applied to the four datasets; the speed of the algorithms is shown as the percentage of improvement in the total execution time. Recall that nearby clusters in the mappers are merged in order to lower the total number of clusters.
Table 4 Comparison of execution time (seconds)
Percentage of improvement for execution time
We can compare the clustering methods with a similarity index. The Rand Index in data clustering is a measure of the similarity between two clusterings; it gives the percentage of correct decisions made by the algorithm. The Rand Index is calculated using Eq. (4) [30]:
$$Rand\;Index = \frac{TP + TN}{TP + FP + TN + FN}$$
The Rand Index is calculated for every two clusterings; we then take the average Rand Index and compare the clusterings with it. TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives. The Rand Index measures clustering accuracy and can be applied even when class labels are not used [31].
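As a sketch (plain Java, illustrative names), the pair-counting form of Eq. (4) can be computed directly from two label vectors: a pair of points counts as TP when both clusterings place it together and as TN when both keep it apart, so the index is simply the fraction of pairs on which the two clusterings agree:

```java
class RandIndexMetric {
    /** a[i], b[i]: cluster labels of point i under the two clusterings. */
    static double randIndex(int[] a, int[] b) {
        long agree = 0, total = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = i + 1; j < a.length; j++) {
                boolean togetherInA = (a[i] == a[j]);
                boolean togetherInB = (b[i] == b[j]);
                if (togetherInA == togetherInB) agree++;  // TP (both together) or TN (both apart)
                total++;
            }
        }
        return total == 0 ? 1.0 : (double) agree / total;
    }
}
```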
Table 5 shows that K-means-parallel has the minimum Rand Index, while our proposed method offers the largest. Figure 6 illustrates the Rand Index of the five algorithms when applied to the four datasets, expressed as the percentage of improvement of the Rand Index. Table 5 presents the Rand Index of the various algorithms and shows that the proposed method is more efficient than the other clustering methods at creating clusters of similar data; this efficiency results from the use of FGI for assigning outliers and from merging nearby small clusters. In the second phase of the proposed algorithm, the data points in the mappers are clustered quickly using MR-DBSCAN-KD; the clustering is quick because, thanks to the distributed processing by mappers and reducers, each mapper holds data of normal size that no longer count as big data. The clusters are then merged based on the clustering of their centroids; this clustering, and the calculation of the distances between the centroids, are fast because the number of centroids is k (k ≪ N). Meanwhile, merging the clusters reduces the number of clusters, and the merged clusters in this step are approximately similar. In the subsequent phases, the operations are performed on the clusters created here, so less calculation is needed in each phase of the proposed algorithm. Load balancing of jobs in the third phase leads to a balanced assignment of jobs to the reducers; hence, the reducers finish their jobs almost simultaneously. Our proposed method improves the average execution time compared to the other methods.
Table 5 Comparison of Rand Index
Percentage of improvement for Rand Index
Before Algorithm 5 is executed, the proposed algorithm allocates similar jobs to each cluster, which means heterogeneous clusters may be created. Some heterogeneous clusters have high density; such clusters cannot be assigned to reducers as they are, because the load imbalance in cluster density would cause some reducers to take a long time to execute their jobs, and the total execution time would subsequently increase.
Algorithm 5 distinguishes two cases. In the first case, if a cluster has a density higher than the average over the clusters, the cluster is evenly divided into multiple partitions and each partition is allocated to a reducer. In the second case, if the cluster's density is less than the average, it is allocated to a single reducer; the entire capacity of the reducers is then not used, so load balancing is not fully achieved. It is important to note that, because smaller clusters were merged in the previous steps, the majority of the clusters have a density higher than the clustering average before Algorithm 5 runs; a large number of such clusters adds only a very small amount of imbalance.
Reducers must process similar jobs from one cluster with approximately similar density. Empirical tests show that Algorithm 5 performs properly when the number of high-density clusters is very large, and the experimental results confirm that the number of high-density clusters does increase before Algorithm 5 runs, since many small clusters are merged in the previous steps. If the proposed algorithm establishes proper load balancing, the total execution time decreases, because the reducers execute their jobs almost simultaneously; note that our proposed algorithm provides approximated load balancing. Table 6 reports the load balancing and total execution time of the proposed method on the datasets. The load balancing of the reducers is defined by Eq. 5, which is based on the average deviation of the reducers' execution times. Table 6 shows that enlarging the dataset has a clear effect on execution time and little effect on the load balancing of the reducers.
Table 6 Correlation between load balancing and total execution time
$$LB = 1 - \frac{\sum_{i=1}^{r} \left| t_{i} - \bar{t} \right|}{r}$$

where \(t_{i}\) is the execution time of the i-th reducer, \(\bar{t}\) is the average execution time of the reducers, r is the number of reducers, and LB denotes the load balancing of the execution times.
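A one-method sketch of Eq. (5) in plain Java (illustrative names): LB is one minus the mean absolute deviation of the reducer times, so perfectly equal times give LB = 1:

```java
import java.util.Arrays;

class LoadBalanceMetric {
    /** Eq. (5): LB = 1 - (sum of |t_i - mean|) / r, i.e. 1 minus the mean absolute deviation. */
    static double lb(double[] reducerTimes) {
        double mean = Arrays.stream(reducerTimes).average().orElse(0.0);
        double meanAbsDev = Arrays.stream(reducerTimes)
                                  .map(t -> Math.abs(t - mean))
                                  .average().orElse(0.0);
        return 1.0 - meanAbsDev;
    }
}
```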
In this paper, we proposed a new method based on MR-DBSCAN-KD and the futuristic greedy index for processing big data. Our proposed method is composed of five phases. In the first phase, big data are partitioned into data points and the data points are stored in the Hadoop structure. In the second phase, the data points stored in Hadoop are clustered using MR-DBSCAN-KD in parallel; the outliers are then assigned to the existing clusters using the Futuristic Greedy Index, and at the end of the second phase the clusters are merged based on the distance between their centroids, so the number of clusters decreases. In the third phase, the clusters are regrouped so as to reduce the number of reducers. In the fourth phase, the clusters are assigned to the reducers, and in the fifth phase the outputs of the reducers are merged together.
Our experimental results indicate that this method reduces the execution time of jobs. A reasonable execution time is achieved because less data is processed in parallel in the mappers and reducers throughout each phase, while similar data are located in the same reducer, which lets the reducers execute the jobs faster; the decrease in the number of reducers results in a shortened execution time. Note that the creation of outliers is a drawback of MR-DBSCAN-KD; as a solution, the proposed method uses the futuristic greedy method for assigning these outliers to existing clusters. Exchanging jobs between clusters is likely to improve load balancing further and is recommended for future research. Also, utilising novel density-based algorithms instead of MR-DBSCAN-KD might decrease the execution time.
All data used in this study are publicly available and accessible in the cited sources. Dataset: including details of Dataset that used in experiment see the web site: https://archive.ics.uci.edu/ml.
FGI: Futuristic Greedy Index
FGC: Futuristic Greedy Clustering
fs: File system
HDFS: Hadoop Distributed File System
Tsai C-W, Lai C-F, Chao H-C, Vasilakos AV. Big data analytics: a survey. J Big data. 2015;2(1):21.
Sanse K, Sharma M. Clustering methods for Big data analysis. Int J Adv Res Comput Eng Technol. 2015;4(3):642–8.
Zhao W, Ma H, He Q. Parallel k-means clustering based on mapreduce. In: IEEE international conference on cloud computing. 2009. p. 674–9.
Srivastava DK, Yadav R, Agrwal G. Map reduce programming model for parallel K-mediod algorithm on hadoop cluster. In: 2017 7th international conference on communication systems and network technologies (CSNT). 2017. p. 74–8.
Dai B-R, Lin I-C. Efficient map/reduce-based dbscan algorithm with optimized data partition. In: 2012 IEEE Fifth international conference on cloud computing. 2012. p. 59–66.
He Y, Tan H, Luo W, Feng S, Fan J. MR-DBSCAN: a scalable MapReduce-based DBSCAN algorithm for heavily skewed data. Front Comput Sci. 2014;8(1):83–99.
Verma A, Cherkasova L, Campbell RH. Two sides of a coin: Optimizing the schedule of mapreduce jobs to minimize their makespan and improve cluster performance. In: 2012 IEEE 20th international symposium on modeling, analysis and simulation of computer and telecommunication systems. 2012. p. 11–8.
Ramakrishnan SR, Swart G, Urmanov A. Balancing reducer skew in MapReduce workloads using progressive sampling. In: Proceedings of the Third ACM symposium on cloud computing. 2012. p. 16.
Fan L, Gao B, Zhang F, Liu Z. OS4M: Achieving global load balance of MapReduce workload by scheduling at the operation level. arXiv preprint arXiv:1406.3901. 2014.
Xia H. Load balancing greedy algorithm for reduce on Hadoop platform. In: 2018 IEEE 3rd international conference on big data analysis (ICBDA). 2018. p. 212–6.
Xia D, Wang B, Li Y, Rong Z, Zhang Z. An efficient MapReduce-based parallel clustering algorithm for distributed traffic subarea division. Discret Dyn Nat Soc. 2015;2015.
Ke H, Li P, Guo S, Guo M. On traffic-aware partition and aggregation in mapreduce for big data applications. IEEE Trans Parallel Distrib Syst. 2015;27(3):818–28.
Reddy YD, Sajin AP. An efficient traffic-aware partition and aggregation for big data applications using map-reduce. Indian J Sci Technol. 2016;9(10):1–7.
Venkatesh G, Arunesh K. Map Reduce for big data processing based on traffic aware partition and aggregation. Cluster Comput. 2018. p. 1–7.
HajKacem MA, N'cir C-E, Essoussi N. One-pass MapReduce-based clustering method for mixed large scale data. J Intell Inf Syst. 2019;52(3):619–36.
Ilango SS, Vimal S, Kaliappan M, Subbulakshmi P. Optimization using artificial bee colony based clustering approach for big data. Cluster Comput. 2018. p. 1–9.
Fan T. Research and implementation of user clustering based on MapReduce in multimedia big data. Multimed Tools Appl. 2018;77(8):10017–31.
Jane EM, Raj E. SBKMMA: sorting based K means and median based clustering algorithm using multi machine technique for big data. Int J Comput. 2018;28(1):1–7.
Kaur A, Datta A. A novel algorithm for fast and scalable subspace clustering of high-dimensional data. J Big Data. 2015;2(1):17.
Kanimozhi K V, Venkatesan M. A novel map-reduce based augmented clustering algorithm for big text datasets. In: Data Engineering and Intelligent Computing. New York: Springer; 2018. p. 427–36.
Zerabi S, Meshoul S, Khantoul B. Parallel clustering validation based on MapReduce. In: International conference on computer science and its applications. 2018. p. 291–9.
Hosseini B, Kiani K. FWCMR: a scalable and robust fuzzy weighted clustering based on MapReduce with application to microarray gene expression. Expert Syst Appl. 2018;91:198–210.
Reddy KHK, Pandey V, Roy DS. A novel entropy-based dynamic data placement strategy for data intensive applications in Hadoop clusters. Int J Big Data Intell. 2019;6(1):20–37.
Beck G, Duong T, Lebbah M, Azzag H, Cérin C. A distributed and approximated nearest neighbors algorithm for an efficient large scale mean shift clustering. arXiv preprint arXiv:1902.03833. 2019.
Gates AJ, Ahn Y-Y. The impact of random models on clustering similarity. J Mach Learn Res. 2017;18(1):3049–76.
Heidari S, Alborzi M, Radfar R, Afsharkazemi MA, Ghatari AR. Big data clustering with varied density based on MapReduce. J Big Data. 2019;6(1):77.
Kenyon C, et al. Best-fit bin-packing with random order. In: SODA. 1996. p. 359–64.
Data set. https://archive.ics.uci.edu/ml/. Accessed 9 Feb 2018.
Data set. ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/subhourly01. Accessed 11 Feb 2019.
Sammut C, Webb GI. Encyclopedia of machine learning. New York: Springer; 2011.
Rand WM. Objective criteria for the evaluation of clustering methods. J Am Stat Assoc. 1971;66(336):846–50.
Kish International Campus, Sharif University of Technology, Tehran, Iran
Ali Bakhthemmat
Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
Mohammad Izadi
All authors read and approved the final manuscript.
Correspondence to Ali Bakhthemmat.
The authors gave their ethics approval and consent to participate.
The authors consent to the publication of this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Bakhthemmat, A., Izadi, M. Decreasing the execution time of reducers by revising clustering based on the futuristic greedy approach. J Big Data 7, 6 (2020) doi:10.1186/s40537-019-0279-z
Futuristic greedy
Decreasing the number of reducers
Limits on the flux of tau neutrinos from 1 PeV to 3 EeV with the MAGIC telescopes (1805.02750)
MAGIC Collaboration: M.L. Ahnen, S. Ansoldi, L.A. Antonelli, C. Arcaro, D. Baack, A. Babić, B. Banerjee, P. Bangale, U. Barres de Almeida, J.A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, R.Ch. Berse, A. Berti, W. Bhattacharyya, A. Biland, O. Blanch, G. Bonnoli, R. Carosi, A. Carosi, G. Ceribella, A. Chatterjee, S.M. Colak, P. Colin, E. Colombo, J.L. Contreras, J. Cortina, S. Covino, P. Cumani, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, M. Delfino, J. Delgado, F. Di Pierro, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Elsaesser, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M.V. Fonseca, L. Font, C. Fruck, D. Galindo, R.J. García López, M. Garczarczyk, M. Gaug, P. Giammaria, N. Godinović, D. Góra, D. Guberman, D. Hadasch, A. Hahn, T. Hassan, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, K. Ishio, Y. Konno, H. Kubo, J. Kushida, D. Kuveždić, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, C. Maggio, P. Majumdar, M. Makariev, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, M. Mariotti, M. Martínez, S. Masuda, D. Mazin, K. Mielke, M. Minev, J.M. Miranda, R. Mirzoyan, A. Moralejo, V. Moreno, E. Moretti, T. Nagayoshi, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, C. Nigro, K. Nilsson, D. Ninci, K. Nishijima, K. Noda, L. Nogués, S. Paiano, J. Palacio, D. Paneque, R. Paoletti, J.M. Paredes, G. Pedaletti, M. Peresano, M. Persic, P.G. Prada Moroni, E. Prandini, I. Puljak, J.R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, C. Righi, A. Rugliancich, T. Saito, K. Satalecka, T. Schweizer, J. Sitarek, I. Šnidarić, D. Sobczynska, A. Stamerra, M. Strzys, T. Surić, M. Takahashi, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, M. Teshima, N. Torres-Albà, A. Treves, S. Tsujimoto, G. Vanzo, M. Vazquez Acosta, I. Vovk, J.E. Ward, M. Will, D. Zarić
May 7, 2018 astro-ph.IM, astro-ph.HE
A search for tau neutrino induced showers with the MAGIC telescopes is presented. The MAGIC telescopes, located at an altitude of 2200 m a.s.l. on the Canary Island of La Palma, can point towards the horizon or a few degrees below across an azimuthal range of about 80 degrees. This provides a possibility to search for air showers induced by tau leptons arising from interactions of tau neutrinos in the Earth's crust or the surrounding ocean. In this paper we show how such air showers can be discriminated from the background of very inclined hadronic showers by using Monte Carlo simulations. Taking into account the orography of the site, the point source acceptance and the expected event rates have been calculated for a sample of generic neutrino fluxes from photo-hadronic interactions in AGNs. The analysis of about 30 hours of data taken towards the sea leads to a 90% C.L. point source limit for tau neutrinos in the energy range from $1.0 \times 10^{15}$ eV to $3.0 \times 10^{18}$ eV of about $E_{\nu_{\tau}}^{2}\times \phi (E_{\nu_{\tau}}) < 2.0 \times 10^{-4}$ GeV cm$^{-2}$ s$^{-1}$ for an assumed power-law neutrino spectrum with spectral index $\gamma = -2$. However, with 300 hours and in the case of an optimistic neutrino flare model, limits down to the level of $E_{\nu_{\tau}}^{2}\times \phi (E_{\nu_{\tau}}) < 8.4 \times 10^{-6}$ GeV cm$^{-2}$ s$^{-1}$ can be expected.
A Technique for Estimating the Absolute Gain of a Photomultiplier Tube (1804.10401)
M. Takahashi, Y. Inome, S. Yoshii, A. Bamba, S. Gunji, D. Hadasch, M. Hayashida, H. Katagiri, Y. Konno, H. Kubo, J. Kushida, D. Nakajima, T. Nakamori, T. Nagayoshi, K. Nishijima, S. Nozaki, D. Mazin, S. Mashuda, R. Mirzoyan, H. Ohoka, R. Orito, T. Saito, S. Sakurai, J. Takeda, M. Teshima, Y. Terada, F. Tokanai, T. Yamamoto, T. Yoshida
April 27, 2018 physics.ins-det, astro-ph.IM
Detection of low-intensity light relies on the conversion of photons to photoelectrons, which are then multiplied and detected as an electrical signal. To measure the actual intensity of the light, one must know the factor by which the photoelectrons have been multiplied. To obtain this amplification factor, we have developed a procedure for estimating precisely the signal caused by a single photoelectron. The method utilizes the fact that the photoelectrons conform to a Poisson distribution. The average signal produced by a single photoelectron can then be estimated from the number of noise events, without requiring analysis of the distribution of the signal produced by a single photoelectron. The signal produced by one or more photoelectrons can be estimated experimentally without any assumptions. This technique, and an example of the analysis of a signal from a photomultiplier tube, are described in this study.
Science with the Cherenkov Telescope Array (1709.07997)
The Cherenkov Telescope Array Consortium: B.S. Acharya, I. Agudo, I. Al Samarai, R. Alfaro, J. Alfaro, C. Alispach, R. Alves Batista, J.-P. Amans, E. Amato, G. Ambrosi, E. Antolini, L.A. Antonelli, C. Aramo, M. Araya, T. Armstrong, F. Arqueros, L. Arrabito, K. Asano, M. Ashley, M. Backes, C. Balazs, M. Balbo, O. Ballester, J. Ballet, A. Bamba, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, Y. Becherini, A. Belfiore, W. Benbow, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, K. Bernlöhr, B. Bertucci, B. Biasuzzi, C. Bigongiari, A. Biland, E. Bissaldi, J. Biteau, O. Blanch, J. Blazek, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, Z. Bosnjak, M. Böttcher, C. Braiding, J. Bregeon, A. Brill, A.M. Brown, P. Brun, G. Brunetti, T. Buanes, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, R. Canestrari, M. Capalbi, F. Capitanio, A. Caproni, P. Caraveo, V. Cárdenas, C. Carlile, R. Carosi, E. Carquín, J. Carr, S. Casanova, E. Cascone, F. Catalani, O. Catalano, D. Cauz, M. Cerruti, P. Chadwick, S. Chaty, R.C.G. Chaves, A. Chen, X. Chen, M. Chernyakova, M. Chikawa, A. Christov, J. Chudoba, M. Cieślar, V. Coco, S. Colafrancesco, P. Colin, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, J. Cortina, A. Costa, H. Costantini, G. Cotter, S. Covino, R. Crocker, J. Cuadra, O. Cuevas, P. Cumani, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, E.M. de Gouveia Dal Pino, I. de la Calle, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, C. Deil, M. Del Santo, C. Delgado, D. della Volpe, T. Di Girolamo, F. Di Pierro, L. Di Venere, C. Díaz, C. Dib, S. Diebold, A. Djannati-Ataï, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, H. Drass, D. Dravins, G. Dubus, V.V. Dwarkadas, J. Ebr, C. Eckner, K. Egberts, S. Einecke, T.R.N. Ekoume, D. Elsässer, J.-P. Ernenwein, C. Espinoza, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, C. Farnier, G. Fasola, E. Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, M. Fesquet, M. Filipovic, V. Fioretti, G. Fontaine, M. Fornasa, L. Fortson, L. Freixas Coromina, C. Fruck, Y. Fujita, Y. Fukazawa, S. Funk, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, B. Garcia, R. Garcia López, M. Garczarczyk, J. Gaskins, T. Gasparetto, M. Gaug, L. Gerard, G. Giavitto, N. Giglietto, P. Giommi, F. Giordano, E. Giro, M. Giroletti, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, G. Gómez-Vargas, M.M. González, J.M. González, D. Götz, J. Graham, P. Grandi, J. Granot, A.J. Green, T. Greenshaw, S. Griffiths, S. Gunji, D. Hadasch, S. Hara, M.J. Hardcastle, T. Hassan, K. Hayashi, M. Hayashida, M. Heller, J.C. Helo, G. Hermann, J. Hinton, B. Hnatyk, W. Hofmann, J. Holder, D. Horan, J. Hörandel, D. Horns, P. Horvath, T. Hovatta, M. Hrabovsky, D. Hrupec, T.B. Humensky, M. Hütten, M. Iarlori, T. Inada, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Iori, K. Ishio, Y. Iwamura, M. Jamrozy, P. Janecek, D. Jankowsky, P. Jean, I. Jung-Richardt, J. Jurysek, P. Kaaret, S. Karkar, H. Katagiri, U. Katz, N. Kawanaka, D. Kazanas, B. Khélifi, D.B. Kieda, S. Kimeswenger, S. Kimura, S. Kisaka, J. Knapp, J. Knödlseder, B. Koch, K. Kohri, N. Komin, K. Kosack, M. Kraus, M. Krause, F. Krauß, H. Kubo, G. Kukec Mezek, H. Kuroda, J. Kushida, N. La Palombara, G. Lamanna, R.G. 
Lang, J. Lapington, O. Le Blanc, S. Leach, J.-P. Lees, J. Lefaucheur, M.A. Leigui de Oliveira, J.-P. Lenain, R. Lico, M. Limon, E. Lindfors, T. Lohse, S. Lombardi, F. Longo, M. López, R. López-Coto, C.-C. Lu, F. Lucarelli, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, G. Maier, P. Majumdar, G. Malaguti, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, A. Marcowith, J. Marín, S. Markoff, J. Martí, P. Martin, M. Martínez, G. Martínez, N. Masetti, S. Masuda, G. Maurin, N. Maxted, D. Mazin, C. Medina, A. Melandri, S. Mereghetti, M. Meyer, I.A. Minaya, N. Mirabal, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, T. Montaruli, A. Moralejo, D. Morcuende-Parrilla, K. Mori, G. Morlino, P. Morris, A. Morselli, E. Moulin, R. Mukherjee, C. Mundell, T. Murach, H. Muraishi, K. Murase, A. Nagai, S. Nagataki, T. Nagayoshi, T. Naito, T. Nakamori, Y. Nakamura, J. Niemiec, D. Nieto, M. Nikołajuk, K. Nishijima, K. Noda, D. Nosek, B. Novosyadlyj, S. Nozaki, P. O'Brien, L. Oakes, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, R.A. Ong, M. Orienti, R. Orito, J.P. Osborne, M. Ostrowski, N. Otte, I. Oya, M. Padovani, A. Paizis, M. Palatiello, M. Palatka, R. Paoletti, J.M. Paredes, G. Pareschi, R.D. Parsons, A. Pe'er, M. Pech, G. Pedaletti, M. Perri, M. Persic, A. Petrashyk, P. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, A. Pisarski, S. Pita, M. Pohl, M. Polo, D. Pozo, E. Prandini, J. Prast, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pühlhofer, M. Punch, S. Pürckhauer, F. Queiroz, A. Quirrenbach, S. Rainò, S. Razzaque, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, T. Richtler, J. Rico, F. Rieger, M. Riquelme, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, J. Rosado, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, C. Rulten, I. Sadeh, S. Safi-Harb, T. Saito, N. Sakaki, S. Sakurai, G. Salina, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, S. Sarkar, K. Satalecka, F.G. Saturni, E.J. Schioppa, S. Schlenstedt, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, E. Sciacca, S. Scuderi, I. Seitenzahl, D. Semikoz, O. Sergijenko, M. Servillat, A. Shalchi, R.C. Shellard, L. Sidoli, H. Siejkowski, A. Sillanpää, G. Sironi, J. Sitarek, V. Sliusar, A. Slowikowska, H. Sol, A. Stamerra, S. Stanič, R. Starling, Ł. Stawarz, S. Stefanik, M. Stephan, T. Stolarczyk, G. Stratta, U. Straumann, T. Suomijarvi, A.D. Supanitsky, G. Tagliaferri, H. Tajima, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, P. Temnikov, Y. Terada, R. Terrier, T. Terzic, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, J. Tomastik, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, N. Tothill, G. Tovmassian, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, S. Tsujimoto, G. Umana, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, C. van Eldik, J. Vandenbroucke, G.S. Varner, G. Vasileiadis, V. Vassiliev, M. Vázquez Acosta, M. Vecchi, A. Vega, S. Vercellone, P. Veres, S. Vergani, V. Verzi, G.P. Vettolani, A. Viana, C. Vigorito, J. Villanueva, H. Voelk, A. Vollhardt, S. Vorobiov, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, R. Walter, J.E. Ward, D. Warren, J.J. Watson, F. Werner, M. White, R. White, A. Wierzcholska, P. Wilcox, M. Will, D.A. Williams, R. Wischnewski, M. Wood, T. Yamamoto, R. Yamazaki, S. 
Yanagita, L. Yang, T. Yoshida, S. Yoshiike, T. Yoshikoshi, M. Zacharias, G. Zaharijas, L. Zampieri, F. Zandanel, R. Zanin, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Zorn
Jan. 22, 2018 hep-ex, astro-ph.IM, astro-ph.HE
The Cherenkov Telescope Array, CTA, will be the major global observatory for very high energy gamma-ray astronomy over the next decade and beyond. The scientific potential of CTA is extremely broad: from understanding the role of relativistic cosmic particles to the search for dark matter. CTA is an explorer of the extreme universe, probing environments from the immediate neighbourhood of black holes to cosmic voids on the largest scales. Covering a huge range in photon energy from 20 GeV to 300 TeV, CTA will improve on all aspects of performance with respect to current instruments. The observatory will operate arrays on sites in both hemispheres to provide full sky coverage and will hence maximize the potential for the rarest phenomena such as very nearby supernovae, gamma-ray bursts or gravitational wave transients. With 99 telescopes on the southern site and 19 telescopes on the northern site, flexible operation will be possible, with sub-arrays available for specific tasks. CTA will have important synergies with many of the new generation of major astronomical and astroparticle observatories. Multi-wavelength and multi-messenger approaches combining CTA data with those from other instruments will lead to a deeper understanding of the broad-band non-thermal properties of target sources. The CTA Observatory will be operated as an open, proposal-driven observatory, with all data available on a public archive after a pre-defined proprietary period. Scientists from institutions worldwide have come together to form the CTA Consortium. This Consortium has prepared a proposal for a Core Programme of highly motivated observations. The programme, encompassing approximately 40% of the available observing time over the first ten years of CTA operation, is made up of individual Key Science Projects (KSPs), which are presented in this document.
Cherenkov Telescope Array Contributions to the 35th International Cosmic Ray Conference (ICRC2017) (1709.03483)
F. Acero, B.S. Acharya, V. Acín Portella, C. Adams, I. Agudo, F. Aharonian, I. Al Samarai, A. Alberdi, M. Alcubierre, R. Alfaro, J. Alfaro, C. Alispach, R. Aloisio, R. Alves Batista, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E.O. Angüner, E. Antolini, L.A. Antonelli, V. Antonuccio, P. Antoranz, C. Aramo, M. Araya, C. Arcaro, T. Armstrong, F. Arqueros, L. Arrabito, M. Arrieta, K. Asano, A. Asano, M. Ashley, P. Aubert, C. B. Singh, A. Babic, M. Backes, S. Bajtlik, C. Balazs, M. Balbo, O. Ballester, J. Ballet, L. Ballo, A. Balzer, A. Bamba, R. Bandiera, P. Barai, C. Barbier, M. Barcelo, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, C. Bauer, U. Becciani, Y. Becherini, J. Becker Tjus, W. Bednarek, A. Belfiore, W. Benbow, M. Benito, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, S. Bernhard, K. Bernlöhr, C. Bertinelli Salucci, B. Bertucci, M.-A. Besel, V. Beshley, J. Bettane, N. Bhatt, W. Bhattacharyya, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, A. Bilinsky, R. Bird, E. Bissaldi, J. Biteau, M. Bitossi, O. Blanch, P. Blasi, J. Blazek, C. Boccato, C. Bockermann, C. Boehm, M. Bohacova, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, M. Böttcher, C. Boutonnet, F. Bouyjou, L. Bowman, V. Bozhilov, C. Braiding, S. Brau-Nogué, J. Bregeon, M. Briggs, A. Brill, W. Brisken, D. Bristow, R. Britto, E. Brocato, A.M. Brown, S. Brown, K. Brügge, P. Brun, P. Brun, F. Brun, L. Brunetti, G. Brunetti, P. Bruno, M. Bryan, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, A. Caccianiga, R. Cameron, F. Canelli, R. Canestrari, M. Capalbi, M. Capasso, F. Capitanio, A. Caproni, R. Capuzzo-Dolcetta, P. Caraveo, V. Cárdenas, J. Cardenzana, M. Cardillo, C. Carlile, S. Caroff, R. Carosi, A. Carosi, E. Carquín, J. Carr, J.-M. Casandjian, S. Casanova, E. Cascone, A.J. Castro-Tirado, J. Castroviejo Mora, F. Catalani, O. Catalano, D. Cauz, C. Celestino Silva, S. Celli, M. Cerruti, E. Chabanne, P. Chadwick, N. Chakraborty, C. Champion, A. Chatterjee, S. Chaty, R. Chaves, A. Chen, X. Chen, K. Cheng, M. Chernyakova, M. Chikawa, V.R. Chitnis, A. Christov, J. Chudoba, M. Cieślar, P. Clark, V. Coco, S. Colafrancesco, P. Colin, E. Colombo, J. Colome, S. Colonges, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, R. Cornat, J. Cortina, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. Covone, P. Cristofari, S.J. Criswell, R. Crocker, J. Croston, C. Crovari, J. Cuadra, O. Cuevas, X. Cui, P. Cumani, G. Cusumano, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, P. Da Vela, Ø. Dale, V.T. Dang, L. Dangeon, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, V. De Caprio, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, I. de la Calle, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, J.R.T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, J. Decock, C. Deil, P. Deiml, M. Del Santo, E. Delagnes, G. Deleglise, M. Delfino Reznicek, C. Delgado, J. Delgado Mengual, R. Della Ceca, D. della Volpe, M. Detournay, J. Devin, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, L. Diaz, C. Díaz, C. Dib, H. Dickinson, S. Diebold, S. Digel, A. Djannati-Ataï, M. Doert, A. Domínguez, D. Dominis Prester, I. Donnarumma, D. Dorner, M. Doro, J.-L. Dournaux, T. Downes, G. Drake, S. Drappeau, H. Drass, D. 
Dravins, L. Drury, G. Dubus, K. Dundas Morå, A. Durkalec, V. Dwarkadas, J. Ebr, C. Eckner, E. Edy, K. Egberts, S. Einecke, J. Eisch, F. Eisenkolb, T.R.N. Ekoume, C. Eleftheriadis, D. Elsässer, D. Emmanoulopoulos, J.-P. Ernenwein, P. Escarate, S. Eschbach, C. Espinoza, P. Evans, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, V. Fallah Ramazani, K. Farakos, E. Farrell, G. Fasola, Y. Favre, E. Fede, R. Fedora, E. Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, O. Ferreira, M. Fesquet, E. Fiandrini, A. Fiasson, M. Filipovic, D. Fink, J.P. Finley, C. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, H. Flores, L. Foffano, C. Föhr, M.V. Fonseca, L. Font, G. Fontaine, M. Fornasa, P. Fortin, L. Fortson, N. Fouque, B. Fraga, F.J. Franco, L. Freixas Coromina, C. Fruck, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, Y. Fukui, S. Funk, A. Furniss, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, D. Galloway, S. Gallozzi, B. Garcia, A. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, F. Gargano, C. Gargano, S. Garozzo, M. Garrido-Ruiz, D. Gascon, T. Gasparetto, F. Gaté, M. Gaug, B. Gebhardt, M. Gebyehu, N. Geffroy, B. Genolini, A. Ghalumyan, A. Ghedina, G. Ghirlanda, P. Giammaria, F. Gianotti, B. Giebels, N. Giglietto, V. Gika, R. Gimenes, P. Giommi, F. Giordano, G. Giovannini, E. Giro, M. Giroletti, J. Gironnet, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, J.L. Gómez, G. Gómez-Vargas, M.M. González, J.M. González, K.S. Gothe, D. Gotz, J. Goullon, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, G. Grasseau, R. Gredig, A.J. Green, T. Greenshaw, I. Grenier, S. Griffiths, A. Grillo, M.-H. Grondin, J. Grube, V. Guarino, B. Guest, O. Gueta, S. Gunji, G. Gyuk, D. Hadasch, L. Hagge, J. Hahn, A. Hahn, H. Hakobyan, S. Hara, M.J. Hardcastle, T. Hassan, T. Haubold, A. Haupt, K. Hayashi, M. Hayashida, H. He, M. Heller, J.C. Helo, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, N. Hiroshima, K. Hirotani, B. Hnatyk, J.K. Hoang, D. Hoffmann, W. Hofmann, J. Holder, D. Horan, J. Hörandel, M. Hörbe, D. Horns, P. Horvath, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, G. Hughes, D. Hui, G. Hull, T.B. Humensky, M. Hussein, M. Hütten, M. Iarlori, Y. Ikeno, J.M. Illa, D. Impiombato, T. Inada, A. Ingallinera, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Ionica, M. Iori, A. Iriarte, K. Ishio, G.L. Israel, Y. Iwamura, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, F. Jankowsky, D. Jankowsky, P. Jansweijer, C. Jarnot, P. Jean, C.A. Johnson, M. Josselin, I. Jung-Richardt, J. Jurysek, P. Kaaret, P. Kachru, M. Kagaya, J. Kakuwa, O. Kalekin, R. Kankanyan, A. Karastergiou, M. Karczewski, S. Karkar, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, N. Kawanaka, L. Kaye, D. Kazanas, N. Kelley-Hoskins, B. Khélifi, D.B. Kieda, T. Kihm, S. Kimeswenger, S. Kimura, S. Kisaka, S. Kishida, R. Kissmann, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, A. Kong, Y. Konno, K. Kosack, G. Kowal, S. Koyama, M. Kraus, M. Krause, F. Krauß, F. Krennrich, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, S. Kumar, H. Kuroda, J. Kushida, P. Kushwaha, N. La Palombara, V. La Parola, G. La Rosa, R. Lahmann, K. Lalik, G. Lamanna, M. Landoni, D. Landriu, H. Landt, R.G. Lang, J. Lapington, P. Laporte, O. Le Blanc, T. Le Flour, P. Le Sidaner, S. Leach, A. Leckngam, S.-H. Lee, W.H. Lee, J.-P. Lees, J. Lefaucheur, M.A. 
Leigui de Oliveira, M. Lemoine-Goumard, J.-P. Lenain, G. Leto, R. Lico, M. Limon, R. Lindemann, E. Lindfors, L. Linhoff, A. Lipniacka, S. Lloyd, T. Lohse, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, T. Louge, F. Louis, M. Louys, F. Lucarelli, D. Lucchesi, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, T. Maccarone, E. Mach, G.M. Madejski, G. Maier, A. Majczyna, P. Majumdar, M. Makariev, G. Malaguti, A. Malouf, S. Maltezos, D. Malyshev, D. Malyshev, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, P. Manigot, K. Mannheim, N. Maragos, D. Marano, A. Marcowith, J. Marín, M. Mariotti, M. Marisaldi, S. Markoff, J. Martí, J.-M. Martin, P. Martin, L. Martin, M. Martínez, G. Martínez, O. Martínez, R. Marx, N. Masetti, P. Massimino, A. Mastichiadis, M. Mastropietro, S. Masuda, H. Matsumoto, N. Matthews, S. Mattiazzo, G. Maurin, N. Maxted, M. Mayer, D. Mazin, M.N. Mazziotta, L. Mc Comb, I. McHardy, C. Medina, A. Melandri, C. Melioli, D. Melkumyan, S. Mereghetti, J.-L. Meunier, T. Meures, M. Meyer, S. Micanovic, T. Michael, J. Michałowski, I. Mievre, J. Miller, I.A. Minaya, T. Mineo, F. Mirabel, J.M. Miranda, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, C. Molijn, E. Molinari, R. Moncada, T. Montaruli, I. Monteiro, D. Mooney, P. Moore, A. Moralejo, D. Morcuende-Parrilla, E. Moretti, K. Mori, G. Morlino, P. Morris, A. Morselli, F. Moscato, D. Motohashi, E. Moulin, S. Mueller, R. Mukherjee, P. Munar, C. Mundell, J. Mundet, T. Murach, H. Muraishi, K. Murase, A. Murphy, A. Nagai, N. Nagar, S. Nagataki, T. Nagayoshi, B.K. Nagesh, T. Naito, D. Nakajima, T. Nakamori, Y. Nakamura, K. Nakayama, D. Naumann, P. Nayman, D. Neise, L. Nellen, R. Nemmen, A. Neronov, N. Neyroud, T. Nguyen, T.T. Nguyen, T. Nguyen Trung, L. Nicastro, J. Nicolau-Kukliński, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K.-I. Nishikawa, G. Nishiyama, K. Noda, L. Nogues, S. Nolan, D. Nosek, M. Nöthe, B. Novosyadlyj, S. Nozaki, F. Nunio, P. O'Brien, L. Oakes, C. Ocampo, J.P. Ochoa, R. Oger, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, J.-F. Olive, R.A. Ong, M. Orienti, R. Orito, A. Orlati, J.P. Osborne, M. Ostrowski, N. Otte, Z. Ou, E. Ovcharov, I. Oya, A. Ozieblo, M. Padovani, S. Paiano, A. Paizis, J. Palacio, M. Palatiello, M. Palatka, J. Pallotta, J.-L. Panazol, D. Paneque, M. Panter, R. Paoletti, M. Paolillo, A. Papitto, A. Paravac, J.M. Paredes, G. Pareschi, R.D. Parsons, P. Paśko, S. Pavy, A. Pe'er, M. Pech, G. Pedaletti, P. Peñil Del Campo, A. Perez, M.A. Pérez-Torres, L. Perri, M. Perri, M. Persic, A. Petrashyk, S. Petrera, P.-O. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, Q. Piel, D. Pieloth, F. Pintore, C. Pio García, A. Pisarski, S. Pita, L. Pizarro, Ł. Platos, M. Pohl, V. Poireau, A. Pollo, J. Porthault, J. Poutanen, D. Pozo, E. Prandini, P. Prasit, J. Prast, K. Pressard, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pruteanu, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, S. Pürckhauer, F. Queiroz, J. Quinn, A. Quirrenbach, I. Rafighi, S. Rainò, P.J. Rajda, R. Rando, R.C. Rannot, S. Razzaque, I. Reichardt, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, T. Reposeur, B. Reville, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, M.G. Richer, T. Richtler, J. Rico, F. Rieger, M. Riquelme, P.R. Ristori, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, M. Roncadelli, J. Rosado, S. Rosen, S. Rosier Lees, J. Rousselle, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, J.E. Ruíz del Mazo, W. 
Rujopakarn, C. Rulten, F. Russo, O. Saavedra, S. Sabatini, B. Sacco, I. Sadeh, E. Sæther Hatlen, S. Safi-Harb, V. Sahakian, S. Sailer, T. Saito, N. Sakaki, S. Sakurai, D. Salek, F. Salesa Greus, G. Salina, D. Sanchez, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E.M. Santos, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, Y. Sato, F.G. Saturni, R. Savalle, M. Sawada, S. Schanne, E.J. Schioppa, S. Schlenstedt, T. Schmidt, J. Schmoll, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, E. Sciacca, S. Scuderi, M. Seglar-Arroyo, A. Segreto, I. Seitenzahl, D. Semikoz, O. Sergijenko, N. Serre, M. Servillat, K. Seweryn, K. Shah, A. Shalchi, M. Sharma, R.C. Shellard, I. Shilon, L. Sidoli, M. Sidz, H. Siejkowski, J. Silk, A. Sillanpää, D. Simone, B.B. Singh, G. Sironi, J. Sitarek, P. Sizun, V. Sliusar, A. Slowikowska, A. Smith, D. Sobczyńska, A. Sokolenko, H. Sol, G. Sottile, W. Springer, O. Stahl, A. Stamerra, S. Stanič, R. Starling, D. Staszak, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, M. Stephan, R. Sternberger, M. Sterzel, B. Stevenson, M. Stodulska, M. Stodulski, T. Stolarczyk, G. Stratta, U. Straumann, R. Stuik, M. Suchenek, T. Suomijarvi, A.D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, K. Takahashi, H. Takahashi, M. Takahashi, L. Takalo, S. Takami, J. Takata, J. Takeda, T. Tam, M. Tanaka, T. Tanaka, Y. Tanaka, S. Tanaka, C. Tanci, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, F. Temme, P. Temnikov, Y. Terada, J.C. Terrazas, R. Terrier, D. Terront, T. Terzic, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, A. Tiengo, D. Tiziani, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, J. Tomastik, A. Tonachini, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, N. Trakarnsirinont, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, M. Tsirou, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, M. Uslenghi, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, A.M. Van den Berg, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G.S. Varner, G. Vasileiadis, V. Vassiliev, J.R. Vázquez, M. Vázquez Acosta, M. Vecchi, A. Vega, P. Veitch, P. Venault, C. Venter, S. Vercellone, P. Veres, S. Vergani, V. Verzi, G.P. Vettolani, C. Veyssiere, A. Viana, J. Vicha, C. Vigorito, J. Villanueva, P. Vincent, J. Vink, F. Visconti, V. Vittorini, H. Voelk, V. Voisin, A. Vollhardt, S. Vorobiov, I. Vovk, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, P. Wagner, S.P. Wakely, T. Walstra, R. Walter, M. Ward, J.E. Ward, D. Warren, J.J. Watson, N. Webb, P. Wegner, O. Weiner, A. Weinstein, C. Weniger, F. Werner, H. Wetteskind, M. White, R. White, A. Wierzcholska, S. Wiesand, R. Wijers, P. Wilcox, A. Wilhelm, M. Wilkinson, M. Will, D.A. Williams, M. Winter, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, T. Wu, K.K. Yadav, C. Yaguna, T. Yamamoto, H. Yamamoto, N. Yamane, R. Yamazaki, S. Yanagita, L. Yang, D. Yelos, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, D. Zaborov, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanin, R. Zanmar Sanchez, D. Zaric, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Ziemann, K. Ziętara, A. Zink, J. Ziółkowski, V. Zitelli, A. 
Zoli, J. Zorn
Oct. 3, 2017 astro-ph.HE
List of contributions from the Cherenkov Telescope Array Consortium presented at the 35th International Cosmic Ray Conference, July 12-20, 2017, Busan, Korea.
Constraining Lorentz invariance violation using the Crab Pulsar emission observed up to TeV energies by MAGIC (1709.00346)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, W. Bednarek, W. Bhattacharyya, G. Bonnoli, S. M. Colak, S. Covino, A. De Angelis, M. Doert, M. Doro, M. Engelkemeier, D. Fidalgo, R. J. García López, M. Gaug, D. Hadasch, J. Herrera, H. Kubo, E. Lindfors, P. Majumdar, K. Mannheim, D. Mazin, A. Moralejo, M. Nievas Rosillo, K. Noda, R. Paoletti, L. Perri, I. Puljak, M. Ribó, S. Schroeder, D. Sobczynska, L. Takalo, M. Teshima, A. Treves, M. Will ETH Zurich, CH-8093 Zurich, Switzerland, Japanese MAGIC Consortium: ICRR, The University of Tokyo, 277-8582 Chiba, Department of Physics, Kyoto University, 606-8502 Kyoto, Tokai University, 259-1292 Kanagawa, The University of Tokushima, 770-8502 Tokushima, Japan, Università di Padova, INFN, I-35131 Padova, Italy, Croatian MAGIC Consortium: University of Rijeka, 51000 Rijeka, University of Split - FESB, 21000 Split, University of Zagreb - FER, 10000 Zagreb, University of Osijek, 31000 Osijek, Rudjer Boskovic Institute, 10000 Zagreb, Croatia, Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Salt Lake, Sector-1, Kolkata 700064, India, Max-Planck-Institut für Physik, D-80805 München, Germany, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Universidad de La Laguna, Dpto. Astrofísica, E-38206 La Laguna, Tenerife, Spain, University of Lódź, Department of Astrophysics, PL-90236 Lódź, Poland, , D-15738 Zeuthen, Germany, Humboldt University of Berlin, Institut für Physik, D-12489 Berlin Germany, University of Trieste, INFN Trieste, I-34127 Trieste, Italy, , The Barcelona Institute of Science, Technology, Campus UAB, E-08193 Bellaterra Università di Siena, INFN Pisa, I-53100 Siena, Italy, INAF - National Institute for Astrophysics, I-00136 Rome, Italy, Technische Universität Dortmund, D-44221 Dortmund, Germany, Universität Würzburg, D-97074 Würzburg, Germany, Finnish MAGIC Consortium: Tuorla Observatory, Finnish Centre of Astronomy with ESO, University of Turku, Vaisalantie 20, FI-21500 Piikkiö, Astronomy Division, University of Oulu, FIN-90014 University of Oulu, Finland, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Spain, Inst. for Nucl. Research, Nucl. Energy, Bulgarian Academy of Sciences, BG-1784 Sofia, Bulgaria, Università di Pisa, INFN Pisa, I-56126 Pisa, Italy, , E-08193 Barcelona, Spain)
Sept. 1, 2017 astro-ph.HE
Spontaneous breaking of Lorentz symmetry at energies on the order of the Planck energy or lower is predicted by many quantum gravity theories, implying non-trivial dispersion relations for the photon in vacuum. Consequently, gamma-rays of different energies, emitted simultaneously from astrophysical sources, could accumulate measurable differences in their time of flight until they reach the Earth. Such tests have been carried out in the past using fast variations of gamma-ray flux from pulsars, and more recently from active galactic nuclei and gamma-ray bursts. We present new constraints from a study of the gamma-ray emission of the Galactic Crab Pulsar, recently observed up to TeV energies by the MAGIC collaboration. A profile likelihood analysis of pulsar events reconstructed for energies above 400 GeV finds no significant variation in arrival time as their energy increases. Ninety-five percent CL limits are obtained on the effective Lorentz invariance violating energy scale at the level of $E_{\mathrm{QG}_1} > 5.5\cdot 10^{17}$ GeV ($4.5\cdot 10^{17}$ GeV) for a linear, and $E_{\mathrm{QG}_2} > 5.9\cdot 10^{10}$ GeV ($5.3\cdot 10^{10}$ GeV) for a quadratic scenario, for the subluminal and the superluminal cases, respectively. A substantial part of this study is dedicated to the calibration of the test statistic with respect to bias and coverage properties. Moreover, the limits take into account systematic uncertainties, which are found to worsen the statistical limits by about 36--42\%. Our constraints would have been considerably more competitive if the intrinsic pulse shape of the pulsar between 200 GeV and 400 GeV were understood in sufficient detail to allow the inclusion of events well below 400 GeV.
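For orientation, the sketch below evaluates the linear-order (n=1) time-of-flight delay that such analyses constrain. It illustrates only the underlying dispersion formula, not the profile-likelihood method used in the paper, and the Crab distance of ~2 kpc is an assumed round number.

```python
# dt ~ s * (E / E_QG1) * D / c, with s = +1 (subluminal) or -1 (superluminal).

KPC_IN_M = 3.086e19   # metres per kiloparsec
C = 2.998e8           # speed of light in m/s

def linear_liv_delay(e_gev, eqg1_gev, distance_kpc, s=+1):
    """Arrival-time delay in seconds of a photon of energy e_gev."""
    return s * (e_gev / eqg1_gev) * distance_kpc * KPC_IN_M / C

# Assumed Crab distance of ~2 kpc and the quoted subluminal linear limit:
print(linear_liv_delay(1000.0, 5.5e17, 2.0))  # ~4e-4 s for a 1 TeV photon
```

A delay of a few tenths of a millisecond is comparable to the timing precision of the pulsar's narrow peaks, which is what makes the Crab Pulsar a sensitive probe here.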
Performance of the MAGIC telescopes under moonlight (1704.00906)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, J. Becerra González, W. Bhattacharyya, S. Bonnefoy, A. Chatterjee, S. Covino, B. De Lotto, A. Domínguez, S. Einecke, M. Engelkemeier, M. V. Fonseca, R. J. García López, N. Godinović, D. Hadasch, J. Hose, H. Kubo, E. Lindfors, P. Majumdar, K. Mannheim, U. Menzel, V. Moreno, M. Nievas Rosillo, K. Noda, R. Paoletti, G. Pedaletti, P. G. Prada Moroni, W. Rhode, K. Satalecka, A. Sillanpää, A. Stamerra, P. Temnikov, D. F. Torres, M. Vazquez Acosta ETH Zurich, Institute for Particle Physics, Zurich, Switzerland, Università di Udine, INFN, sezione di Trieste, Italy, Udine, Italy, INAF - National Institute for Astrophysics, Roma, Italy, Dipartimento di Fisica ed Astronomia, Università di Padova, INFN sez. di Padova, Padova, Italy, Croatian MAGIC Consortium: Rudjer Boskovic Institute, University of Rijeka, University of Split - FESB, University of Zagreb-FER, University of Osijek, Split, Croatia, Saha Institute of Nuclear Physics, HBNI, Kolkata, India, Grupo de Altas Energias, Universidad Complutense, Madrid, Madrid, Spain, Instituto de Astrofisica de Canarias, La Laguna Division of Astrophysics, University of Lodz, Lodz, Poland, Deutsches Elektronen-Synchrotron Institut de Fisica d'Altes Energies, The Barcelona Institute of Science, Technology, Bellaterra Dipartimento di Fisica, Università di Siena, INFN sez. di Pisa, Siena, Italy, Institut de Ciencies de l'Espai Technische Universität Dortmund, Dortmund, Germany, Institut für Theoretische Physik und Astrophysik - Fakultät für Physik und Astronomie - Universität Würzburg, Würzburg, Germany, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Astronomy Division, University of Oulu, Finland, Piikkiö, Finland, Universitat Autònoma de Barcelona, Barcelona, Spain, Japanese MAGIC Consortium, Kyoto, Japan, Institute for Nuclear Research, Nuclear Energy, Sofia, Bulgaria, Universita di Pisa, INFN Pisa, Pisa, Italy, , Bellaterra, Spain, Centro Brasileiro de Pesquisas Físicas Humboldt University of Berlin, Institut für Physik, Berlin, Germany, University of Trieste, , Turku, Finland)
Aug. 2, 2017 astro-ph.IM
MAGIC, a system of two imaging atmospheric Cherenkov telescopes, achieves its best performance under dark conditions, i.e. in the absence of moonlight or twilight. Since operating the telescopes only during dark time would severely limit the duty cycle, observations are also performed when the Moon is present in the sky. Here we develop a dedicated Moon-adapted analysis to characterize the performance of MAGIC under moonlight. We evaluate the energy threshold, angular resolution and sensitivity of MAGIC under different background light levels, based on Crab Nebula observations and tuned Monte Carlo simulations. This study includes observations taken under non-standard hardware configurations, such as reducing the camera photomultiplier tube gain by a factor of ~1.7 (Reduced HV settings) with respect to standard settings (Nominal HV), or using UV-pass filters to strongly reduce the amount of moonlight reaching the cameras of the telescopes. The Crab Nebula spectrum is correctly reconstructed at all the studied illumination levels, which reach up to 30 times the brightness of dark conditions. The main effect of moonlight is an increase in the analysis energy threshold and in the systematic uncertainties on the flux normalization. The sensitivity degradation is constrained to be below 10%, within 15-30%, and between 60 and 80% for Nominal HV, Reduced HV and UV-pass filter observations, respectively. No worsening of the angular resolution was found. Thanks to observations during moonlight, the maximal duty cycle of MAGIC can be increased from ~18%, for dark nights only, to up to ~40% in total, with only moderate performance degradation.
Constraints on particle acceleration in SS433/W50 from MAGIC and H.E.S.S. observations (1707.03658)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, J. Becerra González, B. Biasuzzi, G. Bonnoli, P. Colin, S. Covino, B. De Lotto, A. Domínguez, S. Einecke, M. Engelkemeier, M. V. Fonseca, R. J. García López, N. Godinović, D. Hadasch, J. Herrera (9, 10), J. Hose, H. Kubo, E. Lindfors, A. López-Oramas, M. Manganaro, M. Martínez, R. Mirzoyan, P. Munar-Adrover (20, 35), V. Neustroev, K. Nilsson, S. Paiano, X. Paredes-Fortuny, M. Persic, J. R. Garcia, T. Saito, S. N. Shore, D. Sobczynska, L. Takalo, M. Teshima, G. Vanzo, M. Will, H.E.S.S. Collaboration: H. Abdalla, F. Ait Benkhali, M. Arakawa, A. Balzer, S. Bernhard, C. Boisson, F. Brun, M. Capasso, N. Chakraborty, A. Chen, S. Colafrancesco, Y. Cui, C. Deil, W. Domainko, J. Dyks, S. Eschbach, A. Fiasson, M. Füß ling, T. Garrigoux, D. Gottschall, J. Hawkes, O. Hervet, D. Horns, M. Jamrozy, M. Jingo, M.A. Kastendieck, D. Kerszberg, S. Klepser, Nu. Komin, P.P. Krüger, J. Lefaucheur, J.P. Lenain, R. López-Coto, C. Mariaud, P.J. Meintjes, M. Mohamed, T. Murach, J. Niemiec, S. Ohm, R.D. Parsons, P.O. Petrucci, D. Prokhorov, A. Quirrenbach, M. Renaud, C. Romoli, V. Sahakian, A. Santangelo, A. Schulz, M. Settimo, R. Simoni, L. Stawarz, I. Sushch, A.M. Taylor, M. Tluczykont, D.J. van der Walt, B. van Soelen, P. Vincent, T. Vuillaume, R. White, D. Wouters, M. Zacharias, A. Ziegler ETH Zurich, CH-8093 Zurich, Switzerland, INAF National Institute for Astrophysics, I-00136 Rome, Italy Universitá di Padova, INFN, I-35131 Padova, Italy, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split - FESB, University of Zagreb - FER, University of Osijek, Croatia, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Salt Lake, Sector-1, Kolkata 700064, India, Universidad Complutense, E-28040 Madrid, Spain, Inst. de Astrofísica de Canarias, E-38200 La Laguna, Tenerife, Spain, Universidad de La Laguna, Dpto. Astrofísica, E-38206 La Laguna, Tenerife, Spain, University of Lódź, PL-90236 Lodz, Poland, Deutsches Elektronen-Synchrotron Institut de Fisica d'Altes Energies, The Barcelona Institute of Science, Technology, Campus UAB, 08193 Bellaterra Universitá di Siena, INFN Pisa, I-53100 Siena, Italy, , E-08193 Barcelona, Spain, Technische Universität Dortmund, D-44221 Dortmund, Germany, Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Astronomy Division, University of Oulu, Finland, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autónoma de Barcelona, E-08193 Bellaterra, Spain, Universitat de Barcelona, ICC, IEEC-UB, E-08028 Barcelona, Spain, Japanese MAGIC Consortium, ICRR, The University of Tokyo, Department of Physics, Hakubi Center, Kyoto University, Tokai University, The University of Tokushima, Japan, Inst. for Nucl. Research, Nucl. Energy, BG-1784 Sofia, Bulgaria, ICREA, Institute for Space Sciences, E-08193, Barcelona, Spain, also at the Department of Physics of Kyoto University, Japan, , R. Dr.Xavier Sigaud, 150 - Urca, Rio de Janeiro - RJ, 22290-180, Brazil, now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, College Park, MD 20742, USA, Institut für Physik, Humboldt-Universität zu Berlin, Newtonstr. 15, D 12489 Berlin, Germany, also at Japanese MAGIC Consortium, , Turku, Finland, also at INAF-Trieste, Dept. 
of Physics & Astronomy, University of Bologna, now at Laboratoire AIM (UMR 7158 CEA/DSM, CNRS, Université Paris Diderot), Irfu / Service d'Astrophysique, CEA-Saclay, 91191 Gif-sur-Yvette Cedex, France, now at INAF/IAPS-Roma, I-00133 Roma, Italy Centre for Space, Research, North-West University, Potchefstroom 2520, South Africa, Universität Hamburg, Institut für Experimentalphysik, Luruper Chaussee 149, D 22761 Hamburg, Germany, Max-Planck-Institut für Kernphysik, P.O. Box 103980, D 69029 Heidelberg, Germany, Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, Dublin 2, Ireland, National Academy of Sciences of the Republic of Armenia, Marshall Baghramian Avenue, 24, 0019 Yerevan, Republic of Armenia, Yerevan Physics Institute, 2 Alikhanian Brothers St., 375036 Yerevan, Armenia, Institut für Physik, Humboldt-Universität zu Berlin, Newtonstr. 15, D 12489 Berlin, Germany, University of Namibia, Department of Physics, Private Bag 13301, Windhoek, Namibia, GRAPPA, Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands, Department of Physics, Electrical Engineering, Linnaeus Unicléaire et de Hautes Energies, 4 place Jussieu, F-75252, Paris Cedex 5, France, Institut für Theoretische Physik, Lehrstuhl IV: Weltraum und Astrophysik, RuhrUniversität Bochum, D 44780 Bochum, Germany, GRAPPA, Anton Pannekoek Institute for Astronomy, Institute of High-Energy Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands, Institut für Astro und Teilchenphysik, LeopoldFranzens- Universität Innsbruck, A-6020 Innsbruck, Austria, School of Physical Sciences, University of Adelaide, Adelaide 5005, Australia, LUTH, Observatoire de Paris, PSL Research University, CNRS, Université Paris Diderot, 5 Place Jules Janssen, 92190 Meudon, France, Sorbonne Universités, UPMC Université Paris 06, Université Paris Diderot, Sorbonne Paris Cité, CNRS, Laboratoire de Physique Nucléaire et de Hautes Energies, 4 place Jussieu, F-75252, Paris Cedex 5, France, Laboratoire Univers et Particules de Montpellier, Université Montpellier, CNRS/IN2P3, CC 72, Place Eugéne Bataillon, F-34095, Montpellier Cedex55, France, DSM/Irfu, CEA Saclay, F-91191 Gif-Sur-Yvette Cedex, France, Astronomical Observatory, The University of Warsaw, Al. Ujazdowskie 4, 00-478 Warsaw, Poland, Aix Marseille Université, CNRS/IN2P3, CPPM UMR 7346, 13288Marseille, France, Instytut Fizyki Ja drowej PAN, ul. Radzikowskiego 152, 31-342 Kraków, Poland, Funded by EU FP7 Marie Curie, grant agreement N. PIEF-GA-2012-332350, School of Physics, University of the Witwatersrand, 1 Jan SmutsAvenue, Braamfontein, Johannesburg, 2050 South Africa, Laboratoire d'Annecy-le-Vieux de Physique des Particules, Université Savoie Mont-Blanc, CNRS/IN2P3, F-74941 Annecy-le-Vieux, France, Landessternwarte, Universität Heidelberg, Königstuhl, D 69117 Heidelberg, Germany, Université Bordeaux, CNRS/IN2P3, Centre d' Études Nucléaires de Bordeaux Gradignan, 33175 Gradignan, France, Oskar Klein Centre, Department of Physics, Stockholm University, Albanova University Center, SE-10691 Stockholm, Sweden, Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany, Laboratoire Leprince-Ringuet, Ecole Polytechnique, CNRS/IN2P3, F-91128 Palaiseau, France, APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, 75205 Paris Cedex 13, France, Univ. 
Grenoble Alpes, IPAG, F-38000 Grenoble, France CNRS, IPAG, F-38000 Grenoble, France, Department of Physics, Astronomy, The University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom, Nicolaus Copernicus Astronomical Center, Polish Academy of Sci- ences, ul. Bartycka 18, 00-716 Warsaw, Poland, Institut für Physik und Astronomie, Universität Potsdam, Karl-Liebknecht-Strasse 24/25, D 14476 Potsdam, Germany, Friedrich-Alexander-Universität Erlangen-Nr̈nberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, D 91058 Erlangen, Germany, DESY, D-15738 Zeuthen, Germany, Obserwatorium Astronomiczne, Uniwersytet Jagiellon ski, ul. Orla 171, 30-244 Kraków, Poland, Centre for Astronomy, Faculty of Physics, Astronomy, Informatics, Nicolaus Copernicus University, Grudziadzka 5, 87-100 Torun, Poland, Department of Physics, University of the Free State, PO Box 339, Bloemfontein 9300, South Africa, GRAPPA, Institute of High-Energy Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands, Department of Physics, Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima-ku, Tokyo 171-8501, Japan, , Institute of Space, Astronautical Science, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 229-8510, Japan, Now at Santa Cruz Institute for Particle Physics, Department of Physics, University of California at Santa Cruz, Santa Cruz, CA 95064, USA, Department of Physics, Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2, Canada)
July 12, 2017 astro-ph.HE
The large jet kinetic power and the non-thermal processes occurring in the microquasar SS 433 make this source a good candidate for a very high-energy (VHE) gamma-ray emitter. Gamma-ray fluxes have been predicted for both the central binary and the interaction regions between the jets and the surrounding nebula. Non-thermal emission at lower energies has also been reported previously. We explore the capability of SS 433 to emit VHE gamma rays during periods in which the flux attenuation due to the periodic eclipses and the precession of the circumstellar disk covering the central binary system is expected to be at its minimum. The eastern and western SS433/W50 interaction regions are also examined. We aim to constrain some of the theoretical models previously developed for this system. We made use of dedicated observations from MAGIC and H.E.S.S. from 2006 to 2011, which were combined for the first time and account for a total effective observation time of 16.5 h. Since gamma-ray attenuation does not affect the jet/medium interaction regions, a larger data set amounting to 40-80 h, depending on the region, was employed for them. No evidence of VHE gamma-ray emission was found. Upper limits were computed for the combined data set. We place constraints on the particle acceleration fraction at the inner jet regions and on the physics of the jet/medium interactions. Our findings suggest that the fraction of the jet kinetic power transferred to relativistic protons must be relatively small to explain the lack of TeV and neutrino emission from the central system. At the SS433/W50 interface, the presence of magnetic fields greater than 10 $\mu$G is derived assuming a synchrotron origin for the observed X-ray emission. This also implies the presence of high-energy electrons with energies up to 50 TeV, preventing an efficient production of gamma-ray fluxes in these interaction regions.
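The link between the X-ray emission, the magnetic field, and the ~50 TeV electron energies can be illustrated with the textbook characteristic synchrotron frequency, $\nu_c \approx 4.2\times 10^{6}\,\gamma^2 B[\mathrm{G}]$ Hz. The sketch below shows only this scaling, not the authors' detailed calculation:

```python
import math

H_EV_S = 4.136e-15   # Planck constant in eV s
MEC2_EV = 5.11e5     # electron rest energy in eV

def electron_energy_tev(e_sync_kev, b_microgauss):
    """Electron energy whose characteristic synchrotron emission falls at
    e_sync_kev, using nu_c ~ 4.2e6 * gamma^2 * B[G] Hz."""
    nu = e_sync_kev * 1.0e3 / H_EV_S                      # photon frequency, Hz
    gamma = math.sqrt(nu / (4.2e6 * b_microgauss * 1e-6))
    return gamma * MEC2_EV / 1.0e12

print(electron_energy_tev(1.0, 10.0))  # ~40 TeV for 1 keV photons at 10 muG
```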
MAGIC observations of the microquasar V404 Cygni during the 2015 outburst (1707.00887)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, C. Arcaro, A. Babić, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, R. Carosi, A. Carosi, A. Chatterjee, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Cumani, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, F. Di Pierro, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, M. Gaug, P. Giammaria, N. Godinović, D. Gora, S. Griffiths, D. Guberman, D. Hadasch, A. Hahn, T. Hassan, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, K. Ishio, Y. Konno, H. Kubo, J. Kushida, D. Kuveždić, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, C. Maggio, P. Majumdar, M. Makariev, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, M. Minev, R. Mirzoyan, A. Moralejo, V. Moreno, E. Moretti, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, D. Ninci, K. Nishijima, K. Noda, L. Nogués, S. Paiano, J. Palacio, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, P. G. Prada Moroni, E. Prandini, I. Puljak, J. R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, T. Saito, K. Satalecka, S. Schroeder, T. Schweizer, A. Sillanpää, J. Sitarek, I. Šnidarić, D. Sobczynska, A. Stamerra, M. Strzys, T. Surić, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, D. F. Torres, N. Torres-Albà, A. Treves, G. Vanzo, M. Vazquez Acosta, I. Vovk, J. E. Ward, M. Will, D. Zarić, A. Loh, J. Rodriguez
July 4, 2017 astro-ph.HE
The microquasar V404 Cygni underwent a series of outbursts in 2015, June 15-31, during which its flux in hard X-rays (20-40 keV) reached about 40 times the Crab Nebula flux. Because of the exceptional interest of the flaring activity from this source, observations at several wavelengths were conducted. The MAGIC telescopes, triggered by the INTEGRAL alerts, followed up the flaring source on several nights during the period June 18-27, for more than 10 hours. One hour of observation was conducted simultaneously with a giant 22 GHz radio flare and a hint of a signal at GeV energies seen by Fermi-LAT. The MAGIC observations did not show significant emission in any of the analysed time intervals. The derived flux upper limit in the energy range 200-1250 GeV is 4.8$\times 10^{-12}$ ph cm$^{-2}$ s$^{-1}$. We estimate the gamma-ray opacity during the flaring period, which, along with our non-detection, points to inefficient acceleration in the V404 Cyg jets if the VHE emitter is located farther than $1\times 10^{10}$ cm from the compact object.
Prospects for CTA observations of the young SNR RX J1713.7-3946 (1704.04136)
The CTA Consortium: F. Acero, R. Aloisio, J. Amans, E. Amato, L.A. Antonelli, C. Aramo, T. Armstrong, F. Arqueros, K. Asano, M. Ashley, M. Backes, C. Balazs, A. Balzer, A. Bamba, M. Barkov, J.A. Barrio, W. Benbow, K. Bernlöhr, V. Beshley, C. Bigongiari, A. Biland, A. Bilinsky, E. Bissaldi, J. Biteau, O. Blanch, P. Blasi, J. Blazek, C. Boisson, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, C. Braiding, S. Brau-Nogué, J. Bregeon, A.M. Brown, V. Bugaev, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, M. Böttcher, R. Cameron, M. Capalbi, A. Caproni, P. Caraveo, R. Carosi, E. Cascone, M. Cerruti, S. Chaty, A. Chen, X. Chen, M. Chernyakova, M. Chikawa, J. Chudoba, J. Cohen-Tanugi, S. Colafrancesco, V. Conforti, J.L. Contreras, A. Costa, G. Cotter, S. Covino, G. Covone, P. Cumani, G. Cusumano, F. D'Ammando, D. D'Urso, M. Daniel, F. Dazzi, A. De Angelis, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, M. de Naurois, F. De Palma, M. Del Santo, C. Delgado, D. della Volpe, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, M. Doro, J. Dournaux, D. Dumas, V. Dwarkadas, C. Díaz, J. Ebr, K. Egberts, S. Einecke, D. Elsässer, S. Eschbach, D. Falceta-Goncalves, G. Fasola, E. Fedorova, A. Fernández-Barral, G. Ferrand, M. Fesquet, E. Fiandrini, A. Fiasson, M.D. Filipovíc, V. Fioretti, L. Font, G. Fontaine, F.J. Franco, L. Freixas Coromina, Y. Fujita, Y. Fukui, S. Funk, A. Förster, A. Gadola, R. Garcia López, M. Garczarczyk, N. Giglietto, F. Giordano, A. Giuliani, J. Glicenstein, R. Gnatyk, P. Goldoni, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, A.J. Green, S. Griffiths, S. Gunji, H. Hakobyan, S. Hara, T. Hassan, M. Hayashida, M. Heller, J.C. Helo, J. Hinton, B. Hnatyk, J. Huet, M. Huetten, T.B. Humensky, M. Hussein, J. Hörandel, Y. Ikeno, T. Inada, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, K. Ioka, M. Iori, J. Jacquemier, P. Janecek, D. Jankowsky, I. Jung, P. Kaaret, H. Katagiri, S. Kimeswenger, S. Kimura, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, Y. Konno, K. Kosack, S. Koyama, M. Kraus, H. Kubo, G. Kukec Mezek, J. Kushida, N. La Palombara, K. Lalik, G. Lamanna, H. Landt, J. Lapington, P. Laporte, S. Lee, J. Lees, J. Lefaucheur, J.-P. Lenain, G. Leto, E. Lindfors, T. Lohse, S. Lombardi, F. Longo, M. Lopez, F. Lucarelli, P.L. Luque-Escamilla, R. López-Coto, M.C. Maccarone, G. Maier, G. Malaguti, D. Mandat, G. Maneva, S. Mangano, A. Marcowith, J. Martí, M. Martínez, G. Martínez, S. Masuda, G. Maurin, N. Maxted, C. Melioli, T. Mineo, N. Mirabal, T. Mizuno, R. Moderski, M. Mohammed, T. Montaruli, A. Moralejo, K. Mori, G. Morlino, A. Morselli, E. Moulin, R. Mukherjee, C. Mundell, H. Muraishi, K. Murase, S. Nagataki, T. Nagayoshi, T. Naito, D. Nakajima, T. Nakamori, R. Nemmen, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K. Noda, L. Nogues, D. Nosek, B. Novosyadlyj, S. Nozaki, Y. Ohira, M. Ohishi, S. Ohm, A. Okumura, R.A. Ong, R. Orito, A. Orlati, M. Ostrowski, I. Oya, M. Padovani, J. Palacio, M. Palatka, J.M. Paredes, S. Pavy, A. Pe'er, M. Persic, P. Petrucci, O. Petruk, A. Pisarski, M. Pohl, A. Porcelli, E. Prandini, J. Prast, G. Principe, M. Prouza, E. Pueschel, G. Pühlhofer, A. Quirrenbach, M. Rameez, O. Reimer, M. Renaud, M. Ribó, J. Rico, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, P. Romano, G. Romeo, J. Rosado, J. Rousselle, G. Rowell, B. Rudak, I. Sadeh, S. Safi-Harb, T. Saito, N. Sakaki, D. Sanchez, P. Sangiorgi, H. Sano, M. 
Santander, S. Sarkar, M. Sawada, E.J. Schioppa, H. Schoorlemmer, P. Schovanek, F. Schussler, O. Sergijenko, M. Servillat, A. Shalchi, R.C. Shellard, H. Siejkowski, A. Sillanpää, D. Simone, V. Sliusar, H. Sol, S. Stanič, R. Starling, Ł. Stawarz, S. Stefanik, M. Stephan, T. Stolarczyk, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, M. Takahashi, J. Takeda, M. Tanaka, S. Tanaka, L.A. Tejedor, I. Telezhinsky, P. Temnikov, Y. Terada, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, F. Tokanai, D.F. Torres, E. Torresi, G. Tosti, C. Townsley, P. Travnicek, C. Trichard, M. Trifoglio, S. Tsujimoto, V. Vagelli, P. Vallania, L. Valore, W. van Driel, C. van Eldik, J. Vandenbroucke, V. Vassiliev, M. Vecchi, S. Vercellone, S. Vergani, C. Vigorito, S. Vorobiov, M. Vrastil, M.L. Vázquez Acosta, S.J. Wagner, R. Wagner, S.P. Wakely, R. Walter, J.E. Ward, J.J. Watson, A. Weinstein, M. White, R. White, A. Wierzcholska, P. Wilcox, D.A. Williams, R. Wischnewski, P. Wojcik, T. Yamamoto, H. Yamamoto, R. Yamazaki, S. Yanagita, L. Yang, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, M. Zacharias, L. Zampieri, R. Zanin, M. Zavrtanik, D. Zavrtanik, A. Zdziarski, A. Zech, H. Zechlin, V. Zhdanov, A. Ziegler, J. Zorn
April 13, 2017 astro-ph.HE
We perform simulations for future Cherenkov Telescope Array (CTA) observations of RX J1713.7-3946, a young supernova remnant (SNR) and one of the brightest sources ever discovered in very-high-energy (VHE) gamma rays. Special attention is paid to exploring possible spatial (anti-)correlations of gamma rays with emission at other wavelengths, in particular X-rays and CO/H I emission. We present a series of simulated images of RX J1713.7-3946 for CTA based on a set of observationally motivated models for the gamma-ray emission. In these models, VHE gamma rays produced by high-energy electrons are assumed to trace the non-thermal X-ray emission observed by XMM-Newton, whereas those originating from relativistic protons delineate the local gas distributions. The local atomic and molecular gas distributions are deduced by the NANTEN team from CO and H I observations. Our primary goal is to show how one can distinguish the emission mechanism(s) of the gamma rays (i.e., hadronic vs. leptonic, or a mixture of the two) through information provided by their spatial distribution, spectra, and time variation. This work is the first attempt to quantitatively evaluate the capabilities of CTA to achieve various proposed scientific goals by observing this important cosmic particle accelerator.
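The template logic described here (leptonic emission tracing the X-ray map, hadronic emission tracing the gas map) can be summarized in a few lines. The sketch below is our own illustration with toy maps, not the consortium's simulation pipeline:

```python
import numpy as np

def predicted_counts(xray_map, gas_map, hadronic_fraction, total_counts):
    """Mix a leptonic template (traced by the non-thermal X-ray map) and a
    hadronic template (traced by the CO + HI column-density map) into an
    expected gamma-ray count map."""
    leptonic = xray_map / xray_map.sum()
    hadronic = gas_map / gas_map.sum()
    model = (1.0 - hadronic_fraction) * leptonic + hadronic_fraction * hadronic
    return total_counts * model

# Toy 64x64 maps standing in for the XMM-Newton and NANTEN data:
rng = np.random.default_rng(1)
xray, gas = rng.random((64, 64)), rng.random((64, 64))
mock_image = rng.poisson(predicted_counts(xray, gas, 0.5, 1.0e4))
```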
MAGIC detection of very high energy gamma-ray emission from the low-luminosity blazar 1ES 1741+196 (1702.06795)
MAGIC Collaboration: M. L. Ahnen, L. A. Antonelli, P. Bangale, W. Bednarek, O. Blanch, T. Bretz, P. Colin, S. Covino, E. de Oña Wilhelmi, A. Domínguez, D. Eisenacher Glawion, V. Fallah Ramazani, L. Font, R. J. García López, P. Giammaria, D. Hadasch, D. Hrupec, H. Kubo, S. Lombardi, P. Majumdar, M. Manganaro, B. Marcote, J. M. Miranda, D. Nakajima, K. Nilsson, S. Paiano, R. Paoletti, M. Peresano, P. G. Prada Moroni, I. Reichardt, K. Satalecka, A. Sillanpää, A. Stamerra, H. Takami, M. Teshima, G. Vanzo, M. H. Wu, Fermi-LAT collaboration: J. Becerra González, F. Verrecchia Università di Udine, INFN Trieste, Università di Siena, INFN Pisa, Croatian MAGIC Consortium, Rudjer Boskovic Institute, University of Rijeka, University of Split, University of Zagreb, Croatia, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Universidad Complutense, Inst. de Astrofísica de Canarias, Universidad de La Laguna, Dpto. Astrofísica, Deutsches Elektronen-Synchrotron Institut de Fisica d'Altes Energies, The Barcelona Institute of Science, Technology, Campus UAB, Institute for Space Sciences Finnish MAGIC Consortium, Tuorla Observatory, University of Turku, Astronomy Division, University of Oulu, Finland, Unitat de Física de les Radiacions, Departament de Física, CERES-IEEC, Universitat Autònoma de Barcelona, Japanese MAGIC Consortium, ICRR, The University of Tokyo, Department of Physics, Hakubi Center, Kyoto University, Tokai University, The University of Tokushima, Inst. for Nucl. Research, Nucl. Energy, Università di Pisa, INFN Pisa, also at the Department of Physics of Kyoto University, Japan, now at Centro Brasileiro de Pesquisas Físicas now at NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA, Department of Physics, Department of Astronomy, University of Maryland, Humboldt University of Berlin, Institut für Physik Newtonstr. 15, also at University of Trieste, , Lausanne, Switzerland, now at Astrophysical Sciences Division, BARC, Mumbai, India, also at Japanese MAGIC Consortium, now at Finnish Centre for Astronomy with ESO also at INAF-Trieste, Dept. of Physics, Astronomy, University of Bologna, also at ISDC - Science Data Center for Astrophysics, now at IPNS, High Energy Accelerator Research Organization GRAPPA, Anton Pannekoek Institute for Astronomy, University of Amsterdam, NASA Goddard Space Flight Center)
Feb. 22, 2017 astro-ph.GA, astro-ph.HE
We present the first detection of the nearby (z=0.084) low-luminosity BL Lac object 1ES 1741+196 in the very high energy (VHE: E$>$100 GeV) band. This object lies in a triplet of interacting galaxies. Early predictions had suggested that 1ES 1741+196, along with several other high-frequency-peaked BL Lac sources, would be within the reach of MAGIC detectability. Its detection by MAGIC, later confirmed by VERITAS, helps to expand the small population of known TeV BL Lacs. The source was observed with the MAGIC telescopes between 2010 April and 2011 May, collecting 46 h of good-quality data. These observations led to the detection of the source at the 6.0 $\sigma$ confidence level, with a steady flux $\mathrm{F}(> 100 {\rm GeV}) = (6.4 \pm 1.7_{\mathrm{stat}}\pm 2.6_{\mathrm{syst}}) \cdot 10^{-12}$ ph cm$^{-2}$ s$^{-1}$ and a differential spectral photon index $\Gamma = 2.4 \pm 0.2_{\mathrm{stat}} \pm 0.2_{\mathrm{syst}}$ in the range of $\sim$80 GeV - 3 TeV. To study the broad-band spectral energy distribution (SED) simultaneous with the MAGIC observations, we use KVA, Swift/UVOT and XRT, and Fermi/LAT data. One-zone synchrotron self-Compton (SSC) modeling of the SED of 1ES 1741+196 suggests values for the SSC parameters that are quite common among known TeV BL Lacs, except for a relatively low Doppler factor and a low slope of the electron energy distribution. A thermal feature seen in the SED is well matched by the template of a giant elliptical galaxy. This appears to be the signature of thermal emission from the host galaxy, which is clearly resolved in optical observations.
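The quoted integral flux and photon index are connected by the standard power-law relation $F(>E_{\rm th}) = N_0 E_0 (\Gamma-1)^{-1} (E_{\rm th}/E_0)^{1-\Gamma}$. The snippet below just evaluates this generic relation; the normalization value is hypothetical and chosen only to exercise the formula, not taken from the paper:

```python
def integral_flux(n0, e0, gamma, e_th):
    """Integral photon flux above e_th for dN/dE = n0 * (E/e0)**(-gamma),
    valid for gamma > 1; units follow n0 (e.g. ph cm^-2 s^-1 GeV^-1)."""
    assert gamma > 1.0
    return n0 * e0 / (gamma - 1.0) * (e_th / e0) ** (1.0 - gamma)

# Hypothetical normalization at 300 GeV, photon index from the paper:
print(integral_flux(n0=1.0e-13, e0=300.0, gamma=2.4, e_th=100.0))
```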
Measurements of $\pi^{\pm}$ differential yields from the surface of the T2K replica target for incoming 31 GeV/c protons with the NA61/SHINE spectrometer at the CERN SPS (1603.06774)
NA61/SHINE Collaboration: N. Abgrall, A. Aduszkiewicz, M. Ajaz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blümer, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Ćirković, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Hervé, M. Hierholzer, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manic, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, T. Nakadaira, M. Naskręt, M. Nirkko, K. Nishikawa, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, B.A. Popov, M. Posiadała-Zezula, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Wąs, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, K. Yarritu, L. Zambelli, E.D. Zimmerman, M. Friend, V. Galymov, M. Hartz, T. Hiraki, A. Ichikawa, H. Kubo, K. Matsuoka, A. Murakami, T. Nakaya, K. Suzuki, M. Tzanov, M. Yu
Nov. 29, 2016 hep-ex, physics.ins-det
Measurements of particle emission from a replica of the T2K 90 cm-long carbon target were performed in the NA61/SHINE experiment at CERN SPS, using data collected during a high-statistics run in 2009. An efficient use of the long-target measurements for neutrino flux predictions in T2K requires dedicated reconstruction and analysis techniques. Fully corrected differential yields of $\pi^\pm$-mesons from the surface of the T2K replica target for incoming 31 GeV/c protons are presented. A possible strategy for implementing these results in the T2K neutrino beam predictions is discussed, and the propagation of their uncertainties to the final neutrino flux is presented.
Observations of Sagittarius A* during the pericenter passage of the G2 object with MAGIC (1611.07095)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, C. Arcaro, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, F. Di Pierro, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinović, A. González Muñoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogués, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, J. R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, T. Saito, K. Satalecka, S. Schroeder, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Surić, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin
Nov. 21, 2016 astro-ph.HE
Context. We present the results of a multi-year monitoring campaign of the Galactic Center (GC) with the MAGIC telescopes. These observations were primarily motivated by reports that a putative gas cloud (G2) would be passing in close proximity to the super-massive black hole (SMBH), associated with Sagittarius A*, located at the center of our Galaxy. This event was expected to give astronomers a unique chance to study the effect of in-falling matter on the broad-band emission of an SMBH. Aims. We search for potential flaring emission of very-high-energy (VHE; $\geq$100 GeV) gamma rays from the direction of the SMBH at the GC due to the passage of the G2 object. Using these data, we also study the morphology of this complex region. Methods. We observed the GC region with the MAGIC Imaging Atmospheric Cherenkov Telescopes during the period 2012-2015, collecting 67 hours of good-quality data. In addition to a search for variability in the flux and spectral shape of the GC gamma-ray source, we use a point-source subtraction technique to remove the known gamma-ray emitters located around the GC in order to reveal the TeV morphology of the extended emission inside that region. Results. No effect of the G2 object on the VHE gamma-ray emission from the GC was detected during the four-year observation campaign. We confirm previous measurements of the VHE spectrum of Sagittarius A*, and do not detect any significant variability of the emission from the source. Furthermore, the known VHE gamma-ray emitter at the location of the supernova remnant G0.9+0.1 was detected, as well as the recently discovered VHE source close to the GC radio Arc.
A search for spectral hysteresis and energy-dependent time lags from X-ray and TeV gamma-ray observations of Mrk 421 (1611.04626)
A. U. Abeysekara, S. Archambault, A. Archer, W. Benbow, R. Bird, M. Buchovecky, J. H. Buckley, V. Bugaev, J. V Cardenzana, M. Cerruti, X. Chen, L. Ciupik, M. P. Connolly, W. Cui, J. D. Eisch, A. Falcone, Q. Feng, J. P. Finley, H. Fleischhack, A. Flinders, L. Fortson, A. Furniss, S. Griffin, M. Hütten, N. Håkansson, D. Hanna, O. Hervet, J. Holder, T. B. Humensky, P. Kaaret, P. Kar, M. Kertzman, D. Kieda, M. Krause, S. Kumar, M. J. Lang, G. Maier, S. McArthur, A. McCann, K. Meagher, P. Moriarty, R. Mukherjee, D. Nieto, S. O'Brien, R. A. Ong, A. N. Otte, N. Park, V. Pelassa, M. Pohl, A. Popkow, E. Pueschel, K. Ragan, P. T. Reynolds, G. T. Richards, E. Roache, I. Sadeh, M. Santander, G. H. Sembroski, K. Shahinyan, D. Staszak, I. Telezhinsky, J. V. Tucci, J. Tyler, S. P. Wakely, A. Weinstein, A. Wilhelm, D. A. Williams, M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, C. Arcaro, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, R. Carosi, A. Carosi, A. Chatterjee, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Cumani, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, F. Di Pierro, M. Doert, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, M. Gaug, P. Giammaria, N. Godinović, D. Gora, D. Guberman, D. Hadasch, A. Hahn, T. Hassan, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogués, M. Nöthe, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, J. R. Garcia, I. Reichardt, W. Rhode, M. Ribó, J. Rico, T. Saito, K. Satalecka, S. Schroeder, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, M. Strzys, T. Surić, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, D. F. Torres, N. Torres-Albà, T. Toyama, A. Treves, G. Vanzo, M. Vazquez Acosta, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin, T. Hovatta, I. de la Calle Perez, P. S. Smith, E. Racero, M. Baloković
Blazars are variable emitters across all wavelengths over a wide range of timescales, from months down to minutes. It is therefore essential to observe blazars simultaneously at different wavelengths, especially in the X-ray and gamma-ray bands, where the broadband spectral energy distributions usually peak. In this work, we report on three "target-of-opportunity" (ToO) observations of Mrk 421, one of the brightest TeV blazars, triggered by a strong flaring event at TeV energies in 2014. These observations feature long, continuous, and simultaneous exposures with XMM-Newton (covering the X-ray and optical/ultraviolet bands) and VERITAS (covering the TeV gamma-ray band), along with contemporaneous observations from other gamma-ray facilities (MAGIC and Fermi-LAT) and a number of radio and optical facilities. Although neither rapid flares nor a significant X-ray/TeV correlation is detected, these observations reveal subtle changes in the X-ray spectrum of the source over the course of a few days. We search the simultaneous X-ray and TeV data for spectral hysteresis patterns and time delays, which could provide insight into the emission mechanisms and the source properties (e.g. the radius of the emitting region, the strength of the magnetic field, and related timescales). The observed broadband spectra are consistent with a one-zone synchrotron self-Compton model. We find that the power spectral density distribution at $\gtrsim 4\times 10^{-4}$ Hz from the X-ray data can be described by a power-law model with an index value between 1.2 and 1.8, and do not find evidence for a steepening of the power spectral index (often associated with a characteristic length scale) compared to the previously reported values at lower frequencies.
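A PSD slope like the one quoted can be estimated directly from an evenly sampled light curve. A minimal sketch follows (Python); the synthetic light curve, sampling step, and the simple log-log periodogram fit are stand-ins for the paper's full analysis, which also has to handle observational gaps and Poisson noise.

```python
import numpy as np

# Synthetic evenly sampled X-ray light curve standing in for real data.
dt = 60.0                                  # sampling step in seconds
n = 4096
rng = np.random.default_rng(0)

# Build red noise with a known power-law PSD (index 1.5) by shaping white
# noise in the Fourier domain (a simplified Timmer & Koenig construction).
freqs = np.fft.rfftfreq(n, d=dt)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-1.5 / 2.0)
spec = amp * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
lc = np.fft.irfft(spec, n=n)

# Periodogram and a log-log power-law fit above f_min ~ 4e-4 Hz.
power = np.abs(np.fft.rfft(lc)) ** 2
mask = freqs >= 4e-4
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
print(f"fitted PSD index ~ {-slope:.2f} (input was 1.50)")
```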
Very High-Energy Gamma-Ray Follow-Up Program Using Neutrino Triggers from IceCube (1610.01814)
IceCube Collaboration: M.G. Aartsen, K. Abraham, M. Ackermann, J. Adams, J.A. Aguilar, M. Ahlers, M. Ahrens, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G. Anton, M. Archinger, C. Arguelles, J. Auffenberg, S. Axani, X. Bai, S.W. Barwick, V. Baum, R. Bay, J.J. Beatty, J. Becker-Tjus, K.-H. Becker, S. BenZvi, D. Berley, E. Bernardini, A. Bernhard, D.Z. Besson, G. Binder, D. Bindig, M. Bissok, E. Blaufuss, S. Blot, C. Bohm, M. Borner, F. Bos, D. Bose, S. Boser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, S. Bron, A. Burgman, T. Carver, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G.H. Collin, J.M. Conrad, D.F. Cowen, R. Cross, M. Day, J.P.A.M. de Andre, C. De Clercq, E. del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K.D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J.C. Diaz-Velez, V. di Lorenzo, H. Dujmovic, J.P. Dumm, M. Dunkman, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, S. Euler, P.A. Evenson, S. Fahey, A.R. Fazely, J. Feintzeig, J. Felde, K. Filimonov, C. Finley, S. Flis, C.-C. Fosig, A. Franckowiak, R. Franke, E. Friedman, T. Fuchs, T.K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, L. Gladstone, T. Glauch, T. Glusenkamp, A. Goldschmidt, G. Golup, J.G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, E. Hansen, T. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G.C. Hill, K.D. Hoffman, R. Hoffmann, K. Holzapfel, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G.S. Japaridze, M. Jeong, K. Jero, B.J.P. Jones, M. Jurkovic, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J.L. Kelley, A. Kheirandish, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S.R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, L. Kopke, C. Kopper, S. Kopper, D.J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G. Kruckl, C. Kruger, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, M. Labare, J.L. Lanfranchi, M.J. Larson, F. Lauber, D. Lennarz, M. Lesiak-Bzdak, M. Leuermann, L. Lu, J. Lunemann, J. Madsen, G. Maggi, K.B.M. Mahn, S. Mancina, M. Mandelartz, R. Maruyama, K. Mase, R. Maunu, F. McNally, K. Meagher, M. Medici, M. Meier, A. Meli, T. Menne, G. Merino, T. Meures, S. Miarecki, L. Mohrmann, T. Montaruli, M. Moulai, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S.C. Nowicki, D.R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D.V. Pankova, P. Peiffer, O. Penek, J.A. Pepper, C. Perez de los Heros, D. Pieloth, E. Pinat, P.B. Price, G.T. Przybylski, M. Quinnan, C. Raab, L. Radel, M. Rameez, K. Rawlins, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, D. Rysewyk, L. Sabbatini, S.E. Sanchez-Herrera, A. Sandrock, J. Sandroos, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, S. Schoenen, S. Schoneberg, L. Schumacher, D. Seckel, S. Seunarine, D. Soldin, M. Song, G.M. Spiczak, C. Spiering, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R.G. Stokstad, A. Stossl, R. Strom, N.L. Strotjohann, G.W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tevsic, S. Tilav, P.A. Toale, M.N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, M. van Rossem, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. Weaver, M.J. Weiss, C. Wendt, S. Westerhoff, B.J. Whelan, S. Wickmann, K. Wiebe, C.H. Wiebusch, L. Wille, D.R. Williams, L. Wills, M. Wolf, T.R. Wood, E. Woolsey, K. Woschnagg, D.L. Xu, X.W. Xu, Y. Xu, J.P. Yanez, G. Yodh, S. Yoshida, M. Zoll
MAGIC Collaboration: M.L. Ahnen, S. Ansoldi, L.A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J.A. Barrio, J. Becerra Gonzalez, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J.L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Ona Wilhelmi, F. Di Pierro, M. Doert, A. Dominguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernandez-Barral, D. Fidalgo, M.V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. Garcia Lopez, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinovic, A. Gonzalez Munoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martinez, D. Mazin, U. Menzel, J.M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogues, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J.M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P.G. Prada Moroni, E. Prandini, I. Puljak, I. Reichardt, W. Rhode, M. Ribo, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, S. Schroeder, C. Schultz, T. Schweizer, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Suric, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzic, D. Tescaro, M. Teshima, J. Thaele, D.F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J.E. Ward, M. Will, M.H. Wu, R. Zanin
VERITAS Collaboration: A.U. Abeysekara, S. Archambault, A. Archer, W. Benbow, R. Bird, E. Bourbeau, M. Buchovecky, V. Bugaev, K. Byrum, J.V. Cardenzana, M. Cerruti, L. Ciupik, M.P. Connolly, W. Cui, H.J. Dickinson, J. Dumm, J.D. Eisch, M. Errando, A. Falcone, Q. Feng, J.P. Finley, H. Fleischhack, A. Flinders, L. Fortson, A. Furniss, G.H. Gillanders, S. Griffin, J. Grube, M. Hutten, N. Haakansson, O. Hervet, J. Holder, T.B. Humensky, C.A. Johnson, P. Kaaret, P. Kar, N. Kelley-Hoskins, M. Kertzman, D. Kieda, M. Krause, F. Krennrich, S. Kumar, M.J. Lang, G. Maier, S. McArthur, A. McCann, P. Moriarty, R. Mukherjee, T. Nguyen, D. Nieto, S. O'Brien, R.A. Ong, A.N. Otte, N. Park, M. Pohl, A. Popkow, E. Pueschel, J. Quinn, K. Ragan, P.T. Reynolds, G.T. Richards, E. Roache, C. Rulten, I. Sadeh, M. Santander, G.H. Sembroski, K. Shahinyan, D. Staszak, I. Telezhinsky, J.V. Tucci, J. Tyler, S.P. Wakely, A. Weinstein, P. Wilcox, A. Wilhelm, D.A. Williams, B. Zitzer
Nov. 12, 2016 hep-ex, physics.ins-det, astro-ph.IM, astro-ph.HE
We describe and report the status of a neutrino-triggered program in IceCube that generates real-time alerts for gamma-ray follow-up observations by atmospheric-Cherenkov telescopes (MAGIC and VERITAS). While IceCube is capable of monitoring the whole sky continuously, high-energy gamma-ray telescopes have restricted fields of view and in general are unlikely to be observing a potential neutrino-flaring source at the time such neutrinos are recorded. The use of neutrino-triggered alerts thus aims at increasing the availability of simultaneous multi-messenger data during potential neutrino flaring activity, which can increase the discovery potential and constrain the phenomenological interpretation of the high-energy emission of selected source classes (e.g. blazars). The requirements for a fast and stable online analysis of potential neutrino signals are presented, together with a description of its operation and first results from the program, which ran between 14 March 2012 and 31 December 2015.
Contributions of the Cherenkov Telescope Array (CTA) to the 6th International Symposium on High-Energy Gamma-Ray Astronomy (Gamma 2016) (1610.05151)
The CTA Consortium: A. Abchiche, U. Abeysekara, Ó. Abril, F. Acero, B. S. Acharya, C. Adams, G. Agnetta, F. Aharonian, A. Akhperjanian, A. Albert, M. Alcubierre, J. Alfaro, R. Alfaro, A. J. Allafort, R. Aloisio, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E. O. Angüner, E. Antolini, L. A. Antonelli, M. Antonucci, V. Antonuccio, P. Antoranz, C. Aramo, A. Aravantinos, M. Araya, C. Arcaro, B. Arezki, A. Argan, T. Armstrong, F. Arqueros, L. Arrabito, M. Arrieta, K. Asano, M. Ashley, P. Aubert, C. B. Singh, A. Babic, M. Backes, A. Bais, S. Bajtlik, C. Balazs, M. Balbo, D. Balis, C. Balkowski, O. Ballester, J. Ballet, A. Balzer, A. Bamba, R. Bandiera, A. Barber, C. Barbier, M. Barcelo, M. Barkov, A. Barnacka, U. Barres de Almeida, J. A. Barrio, S. Basso, D. Bastieri, C. Bauer, U. Becciani, Y. Becherini, J. Becker Tjus, V. Beckmann, W. Bednarek, W. Benbow, D. Benedico Ventura, J. Berdugo, D. Berge, E. Bernardini, M. G. Bernardini, S. Bernhard, K. Bernlöhr, B. Bertucci, M.-A. Besel, V. Beshley, N. Bhatt, P. Bhattacharjee, W. Bhattacharyya, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, A. Bilinsky, W. Bilnik, B. Biondo, R. Bird, T. Bird, E. Bissaldi, M. Bitossi, O. Blanch, P. Blasi, J. Blazek, C. Bockermann, C. Boehm, L. Bogacz, M. Bogdan, M. Bohacova, C. Boisson, J. Boix, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, P. Bonifacio, F. Bonnarel, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, M. Böttcher, J.-J. Bousquet, C. Boutonnet, F. Bouyjou, L. Bowman, C. Braiding, T. Brantseg, S. Brau-Nogué, J. Bregeon, M. Briggs, M. Brigida, T. Bringmann, W. Brisken, D. Bristow, R. Britto, E. Brocato, S. Bron, P. Brook, W. Brooks, A. M. Brown, K. Brügge, F. Brun, P. Brun, P. Brun, G. Brunetti, L. Brunetti, P. Bruno, T. Buanes, N. Bucciantini, G. Buchholtz, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, F. Cadoux, J. Calvo Tovar, R. Cameron, F. Canelli, R. Canestrari, M. Capalbi, M. Capasso, G. Capobianco, A. Caproni, P. Caraveo, J. Cardenzana, M. Cardillo, S. Carius, C. Carlile, A. Carosi, R. Carosi, E. Carquín, J. Carr, M. Carroll, J. Carter, P.-H. Carton, J.-M. Casandjian, S. Casanova, S. Casanova, E. Cascone, M. Casiraghi, A. Castellina, J. Castroviejo Mora, F. Catalani, O. Catalano, S. Catalanotti, D. Cauz, S. Cavazzani, P. Cerchiara, E. Chabanne, P. Chadwick, T. Chaleil, C. Champion, A. Chatterjee, S. Chaty, R. Chaves, A. Chen, X. Chen, X. Chen, K. Cheng, M. Chernyakova, L. Chiappetti, M. Chikawa, D. Chinn, V. R. Chitnis, N. Cho, A. Christov, J. Chudoba, M. Cieślar, M. A. Ciocci, R. Clay, S. Colafrancesco, P. Colin, J.-M. Colley, E. Colombo, J. Colome, S. Colonges, V. Conforti, V. Connaughton, S. Connell, J. Conrad, J. L. Contreras, P. Coppi, S. Corbel, J. Coridian, R. Cornat, P. Corona, D. Corti, J. Cortina, L. Cossio, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. Covone, G. Crimi, S. J. Criswell, R. Crocker, J. Croston, J. Cuadra, P. Cumani, G. Cusumano, P. Da Vela, Ø. Dale, F. D'Ammando, D. Dang, V. T. Dang, L. Dangeon, M. Daniel, I. Davids, I. Davids, B. Dawson, F. Dazzi, B. de Aguiar Costa, A. De Angelis, R. F. de Araujo Cardoso, V. De Caprio, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, F. De Frondat, E. M. de Gouveia Dal Pino, I. de la Calle, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, A. De Luca, J. R. T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, G. Decock, J. Decock, C. Deil, M. 
Del Santo, E. Delagnes, G. Deleglise, C. Delgado, J. Delgado, D. della Volpe, P. Deloye, M. Detournay, A. Dettlaff, J. Devin, T. Di Girolamo, C. Di Giulio, A. Di Paola, F. Di Pierro, M. A. Diaz, C. Díaz, C. Dib, J. Dick, H. Dickinson, S. Diebold, S. Digel, J. Dipold, G. Disset, A. Distefano, A. Djannati-Ataï, M. Doert, M. Dohmke, A. Domínguez, N. Dominik, J.-L. Dominique, D. Dominis Prester, A. Donat, I. Donnarumma, D. Dorner, M. Doro, J.-L. Dournaux, T. Downes, K. Doyle, G. Drake, S. Drappeau, H. Drass, D. Dravins, L. Drury, G. Dubus, L. Ducci, D. Dumas, K. Dundas Morå, D. Durand, D. D'Urso, V. Dwarkadas, J. Dyks, M. Dyrda, J. Ebr, E. Edy, K. Egberts, P. Eger, A. Egorov, S. Einecke, J. Eisch, F. Eisenkolb, C. Eleftheriadis, D. Elsaesser, D. Elsässer, D. Emmanoulopoulos, C. Engelbrecht, D. Engelhaupt, J.-P. Ernenwein, P. Escarate, S. Eschbach, C. Espinoza, P. Evans, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, V. Fallah Ramazani, D. Fantinel, K. Farakos, C. Farnier, E. Farrell, G. Fasola, Y. Favre, E. Fede, R. Fedora, E. Fedorova, S. Fegan, D. Ferenc, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, O. Ferreira, M. Fesquet, P. Fetfatzis, E. Fiandrini, A. Fiasson, A. Filipčič, M. Filipovic, D. Fink, C. Finley, J. P. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, H. Fleischhack, H. Flores, D. Florin, C. Föhr, E. Fokitis, M. V. Fonseca, L. Font, G. Fontaine, B. Fontes, M. Fornasa, M. Fornasa, A. Förster, P. Fortin, L. Fortson, N. Fouque, A. Franckowiak, A. Franckowiak, F. J. Franco, I. Freire Mota Albuquerque, L. Freixas Coromina, L. Fresnillo, C. Fruck, M. Fuessling, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, T. Fukuda, Y. Fukui, S. Funk, A. Furniss, W. Gäbele, S. Gabici, A. Gadola, D. Galindo, D. D. Gall, Y. Gallant, D. Galloway, S. Gallozzi, J. A. Galvez, S. Gao, A. Garcia, B. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, C. Gargano, F. Gargano, S. Garozzo, F. Garrecht, L. Garrido, M. Garrido-Ruiz, D. Gascon, J. Gaskins, J. Gaudemard, M. Gaug, J. Gaweda, B. Gebhardt, M. Gebyehu, N. Geffroy, B. Genolini, L. Gerard, A. Ghalumyan, A. Ghedina, P. Ghislain, P. Giammaria, E. Giannakaki, F. Gianotti, S. Giarrusso, G. Giavitto, B. Giebels, T. Gieras, N. Giglietto, V. Gika, R. Gimenes, M. Giomi, P. Giommi, F. Giordano, G. Giovannini, P. Girardot, E. Giro, M. Giroletti, J. Gironnet, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, G. Gomez, M. M. Gonzalez, A. González, D. Gora, K. S. Gothe, D. Gotz, J. Goullon, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, G. Grasseau, R. Gredig, A. J. Green, A. M. Green, T. Greenshaw, I. Grenier, S. Griffiths, A. Grillo, M.-H. Grondin, J. Grube, M. Grudzinska, J. Grygorczuk, V. Guarino, D. Guberman, S. Gunji, G. Gyuk, D. Hadasch, A. Hagedorn, L. Hagge, J. Hahn, H. Hakobyan, S. Hara, M. J. Hardcastle, T. Hassan, K. Hatanaka, T. Haubold, A. Haupt, T. Hayakawa, M. Hayashida, M. Heller, R. Heller, J. C. Helo, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, W. Hirai, K. Hirotani, B. Hnatyk, J. Hoang, D. Hoffmann, W. Hofmann, T. Holch, J. Holder, S. Hooper, D. Horan, J. Hörandel, M. Hörbe, D. Horns, P. Horvath, J. Hose, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, M. Huetten, G. Hughes, D. Hui, T. B. Humensky, M. Hussein, M. Iacovacci, A. Ibarra, Y. Ikeno, J. M. Illa, D. Impiombato, T. Inada, S. Incorvaia, L. Infante, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Iori, K. Ishio, K. 
Ishio, G. L. Israel, Y. Iwamura, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, M. Janiak, D. Jankowsky, F. Jankowsky, P. Jean, I. Jegouzo, P. Jenke, J. J. Jimenez, M. Jingo, M. Jingo, L. Jocou, T. Jogler, C. A. Johnson, M. Jones, M. Josselin, L. Journet, I. Jung, P. Kaaret, M. Kagaya, J. Kakuwa, O. Kalekin, C. Kalkuhl, H. Kamon, R. Kankanyan, A. Karastergiou, K. Kärcher, M. Karczewski, S. Karkar, P. Karn, J. Kasperek, H. Katagiri, J. Kataoka, K. Katarzyński, S. Kato, U. Katz, N. Kawanaka, L. Kaye, D. Kazanas, N. Kelley-Hoskins, J. Kersten, B. Khélifi, D. B. Kieda, T. Kihm, S. Kimeswenger, S. Kisaka, S. Kishida, R. Kissmann, S. Klepser, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, B. Koch, F. Köck, J. Kocot, K. Kohri, K. Kokkotas, K. Kokkotas, D. Kolitzus, N. Komin, I. Kominis, A. Kong, Y. Konno, K. Kosack, G. Koss, M. Kossatz, G. Kowal, S. Koyama, J. Kozioł, M. Kraus, J. Krause, M. Krause, H. Krawzcynski, F. Krennrich, A. Kretzschmann, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, M. Kuklis, H. Kuroda, J. Kushida, A. La Barbera, N. La Palombara, V. La Parola, G. La Rosa, H. Laffon, R. Lahmann, M. Lakicevic, K. Lalik, G. Lamanna, D. Landriu, H. Landt, R. G. Lang, J. Lapington, P. Laporte, J.-P. Le Fèvre, T. Le Flour, P. Le Sidaner, S.-H. Lee, W. H. Lee, J.-P. Lees, J. Lefaucheur, K. Leffhalm, H. Leich, M. A. Leigui de Oliveira, D. Lelas, A. Lemière, M. Lemoine-Goumard, J.-P. Lenain, R. Leonard, R. Leoni, L. Lessio, G. Leto, A. Leveque, B. Lieunard, M. Limon, R. Lindemann, E. Lindfors, L. Linhoff, A. Liolios, A. Lipniacka, H. Lockart, T. Lohse, E. Łokas, S. Lombardi, F. Longo, A. Lopatin, M. Lopez, D. Loreggia, T. Louge, F. Louis, M. Louys, F. Lucarelli, D. Lucchesi, H. Lüdecke, T. Luigi, P. L. Luque-Escamilla, E. Lyard, M. C. Maccarone, T. Maccarone, T. J. Maccarone, E. Mach, G. M. Madejski, A. Madonna, F. Magniette, A. Magniez, M. Mahabir, G. Maier, P. Majumdar, P. Majumdar, M. Makariev, G. Malaguti, G. Malaspina, A. K. Mallot, A. Malouf, S. Maltezos, D. Malyshev, A. Mancilla, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, P. Manigot, N. Mankushiyil, K. Mannheim, N. Maragos, D. Marano, P. Marchegiani, J. A. Marcomini, A. Marcowith, M. Mariotti, M. Marisaldi, S. Markoff, C. Martens, J. Martí, J.-M. Martin, L. Martin, P. Martin, G. Martínez, M. Martínez, O. Martínez, K. Martynyuk-Lototskyy, R. Marx, N. Masetti, P. Massimino, A. Mastichiadis, S. Mastroianni, M. Mastropietro, S. Masuda, H. Matsumoto, S. Matsuoka, N. Matthews, S. Mattiazzo, G. Maurin, N. Maxted, N. Maxted, J. Maya, M. Mayer, D. Mazin, M. N. Mazziotta, L. Mc Comb, N. McCubbin, I. McHardy, C. Medina, F. Mehrez, C. Melioli, D. Melkumyan, T. Melse, S. Mereghetti, M. Merk, P. Mertsch, J.-L. Meunier, T. Meures, M. Meyer, J. L. Meyrelles jr, A. Miccichè, T. Michael, J. Michałowski, P. Mientjes, I. Mievre, A. Mihailidis, J. Miller, T. Mineo, M. Minuti, N. Mirabal, F. Mirabel, J. M. Miranda, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, I. Mognet, M. Mohammed, R. Moharana, L. Mohrmann, E. Molinari, P. Molyneux, E. Monmarthe, G. Monnier, T. Montaruli, C. Monte, I. Monteiro, D. Mooney, P. Moore, A. Moralejo, C. Morello, E. Moretti, K. Mori, P. Morris, A. Morselli, F. Moscato, D. Motohashi, F. Mottez, Y. Moudden, E. Moulin, S. Mueller, R. Mukherjee, P. Munar, M. Munari, C. Mundell, J. Mundet, H. Muraishi, K. Murase, A. Muronga, A. Murphy, N. Nagar, S. Nagataki, T. Nagayoshi, B. K. Nagesh, T. Naito, D. Nakajima, D. Nakajima, T. Nakamori, K. Nakayama, J. Nanni, D. Naumann, P. Nayman, L. Nellen, R. 
Nemmen, A. Neronov, N. Neyroud, T. Nguyen, T. T. Nguyen, T. Nguyen Trung, L. Nicastro, J. Nicolau-Kukliński, F. Niederwanger, A. Niedźwiecki, J. Niemiec, D. Nieto, M. Nievas-Rosillo, A. Nikolaidis, M. Nikołajuk, K. Nishijima, K.-I. Nishikawa, G. Nishiyama, K. Noda, K. Noda, L. Nogues, S. Nolan, R. Northrop, D. Nosek, M. Nöthe, B. Novosyadlyj, L. Nozka, F. Nunio, L. Oakes, P. O'Brien, C. Ocampo, G. Occhipinti, J. P. Ochoa, A. OFaolain de Bhroithe, R. Oger, Y. Ohira, M. Ohishi, S. Ohm, H. Ohoka, N. Okazaki, A. Okumura, J.-F. Olive, D. Olszowski, R. A. Ong, S. Ono, M. Orienti, R. Orito, A. Orlati, J. Osborne, M. Ostrowski, D. Ottaway, N. Otte, S. Öttl, E. Ovcharov, I. Oya, A. Ozieblo, M. Padovani, I. Pagano, S. Paiano, A. Paizis, J. Palacio, M. Palatka, J. Pallotta, K. Panagiotidis, J.-L. Panazol, D. Paneque, M. Panter, M. R. Panzera, R. Paoletti, M. Paolillo, A. Papayannis, G. Papyan, A. Paravac, J. M. Paredes, G. Pareschi, N. Park, D. Parsons, P. Paśko, S. Pavy, M. Pech, A. Peck, G. Pedaletti, A. Pe'er, S. Peet, D. Pelat, A. Pepato, M. d. C. Perez, L. Perri, M. Perri, M. Persic, M. Persic, A. Petrashyk, P.-O. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Pfeiffer, G. Piano, D. Pieloth, E. Pierre, F. Pinto de Pinho, C. Pio García, Y. Piret, A. Pisarski, S. Pita, Ł. Platos, R. Platzer, S. Podkladkin, L. Pogosyan, M. Pohl, P. Poinsignon, A. Pollo, A. Porcelli, J. Porthault, W. Potter, S. Poulios, J. Poutanen, E. Prandini, E. Prandini, J. Prast, K. Pressard, G. Principe, F. Profeti, D. Prokhorov, H. Prokoph, M. Prouza, R. Pruchniewicz, G. Pruteanu, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, S. Pürckhauer, R. Pyzioł, F. Queiroz, E. J. Quel, J. Quinn, A. Quirrenbach, I. Rafighi, S. Rainò, P. J. Rajda, M. Rameez, R. Rando, R. C. Rannot, M. Rataj, T. Ravel, S. Razzaque, P. Reardon, I. Reichardt, O. Reimann, A. Reimer, O. Reimer, A. Reisenegger, M. Renaud, S. Renner, T. Reposeur, B. Reville, A. Rezaeian, W. Rhode, D. Ribeiro, R. Ribeiro Prado, M. Ribó, G. Richards, M. G. Richer, T. Richtler, J. Rico, J. Ridky, F. Rieger, M. Riquelme, P. R. Ristori, S. Rivoire, V. Rizi, E. Roache, J. Rodriguez, G. Rodriguez Fernandez, J. J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, M. Roncadelli, J. Rosado, J. Rose, S. Rosen, S. Rosier Lees, D. Ross, G. Rouaix, J. Rousselle, A. C. Rovero, G. Rowell, F. Roy, S. Royer, A. Rubini, B. Rudak, A. Rugliancich, W. Rujopakarn, C. Rulten, M. Rupiński, F. Russo, F. Russo, K. Rutkowski, O. Saavedra, S. Sabatini, B. Sacco, I. Sadeh, E. O. Saemann, S. Safi-Harb, A. Saggion, V. Sahakian, T. Saito, N. Sakaki, S. Sakurai, A. Salamon, M. Salega, D. Salek, F. Salesa Greus, J. Salgado, G. Salina, L. Salinas, A. Salini, D. Sanchez, M. Sanchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E. M. Santos, R. Santos-Lima, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, K. Satalecka, Y. Sato, R. Savalle, M. Sawada, F. Sayède, S. Schanne, T. Schanz, E. J. Schioppa, S. Schlenstedt, J. Schmid, T. Schmidt, J. Schmoll, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schubert, E.-M. Schullian, J. Schultze, A. Schulz, S. Schulz, K. Schure, F. Schussler, T. Schwab, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, U. Schwendicke, C. Schwerdt, E. Sciacca, S. Scuderi, A. Segreto, J.-H. Seiradakis, G. H. Sembroski, D. Semikoz, O. Sergijenko, N. Serre, M. Servillat, K. Seweryn, N. Shafi, A. Shalchi, M. Sharma, M. Shayduk, R. C. Shellard, T. Shibata, A. Shigenaka, I. Shilon, E. Shum, L. Sidoli, M. Sidz, J. Sieiro, H. Siejkowski, J. 
Silk, A. Sillanpää, D. Simone, H. Simpson, B. B. Singh, A. Sinha, G. Sironi, J. Sitarek, P. Sizun, V. Sliusar, V. Sliusar, A. Smith, D. Sobczyńska, H. Sol, G. Sottile, M. Sowiński, F. Spanier, G. Spengler, R. Spiga, R. Stadler, O. Stahl, A. Stamerra, S. Stanič, R. Starling, D. Staszak, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, M. Stephan, N. Stergioulas, R. Sternberger, M. Sterzel, B. Stevenson, F. Stinzing, M. Stodulska, M. Stodulski, T. Stolarczyk, G. Stratta, U. Straumann, L. Stringhetti, M. Strzys, R. Stuik, K.-H. Sulanke, T. Suomijärvi, A. D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, P. Szwarnog, A. Tacchini, K. Tachihara, G. Tagliaferri, H. Tajima, H. Takahashi, K. Takahashi, M. Takahashi, L. Takalo, S. Takami, J. Takata, J. Takeda, G. Talbot, T. Tam, M. Tanaka, S. Tanaka, T. Tanaka, Y. Tanaka, C. Tanci, S. Tanigawa, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, A. Taylor, L. A. Tejedor, I. Telezhinsky, F. Temme, P. Temnikov, C. Tenzer, Y. Terada, J. C. Terrazas, R. Terrier, D. Terront, T. Terzic, D. Tescaro, M. Teshima, M. Teshima, V. Testa, D. Tezier, J. Thayer, J. Thornhill, S. Thoudam, D. Thuermann, L. Tibaldo, A. Tiengo, M. C. Timpanaro, D. Tiziani, M. Tluczykont, C. J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, J. Tomastik, Y. Tomono, A. Tonachini, D. Tonev, K. Torii, M. Tornikoski, D. F. Torres, M. Torres, E. Torresi, G. Toso, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, T. Toyama, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, M. Trzeciak, K. Tsinganos, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, Y. Umetsu, S. S. Upadhya, M. Uslenghi, V. Vagelli, F. Vagnetti, J. Valdes-Galicia, M. Valentino, P. Vallania, L. Valore, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G. Vasileiadis, V. Vassiliev, J. R. Vázquez, M. L. Vázquez Acosta, M. Vecchi, A. Vega, I. Vegas, P. Veitch, P. Venault, L. Venema, C. Venter, S. Vercellone, S. Vergani, K. Verma, V. Verzi, G. P. Vettolani, C. Veyssiere, A. Viana, N. Viaux, J. Vicha, C. Vigorito, P. Vincent, S. Vincent, J. Vink, V. Vittorini, N. Vlahakis, L. Vlahos, H. Voelk, V. Voisin, A. Vollhardt, A. Volpicelli, H. von Brand, S. Vorobiov, I. Vovk, M. Vrastil, L. V. Vu, T. Vuillaume, R. Wagner, R. Wagner, S. J. Wagner, S. P. Wakely, T. Walstra, R. Walter, T. Walther, J. E. Ward, M. Ward, K. Warda, D. Warren, S. Wassberg, J. J. Watson, P. Wawer, R. Wawrzaszek, N. Webb, P. Wegner, O. Weiner, A. Weinstein, R. Wells, F. Werner, H. Wetteskind, M. White, R. White, M. Więcek, A. Wierzcholska, S. Wiesand, R. Wijers, P. Wilcox, N. Wild, A. Wilhelm, M. Wilkinson, M. Will, M. Will, D. A. Williams, J. T. Williams, R. Willingale, N. Wilson, M. Winde, K. Winiarski, H. Winkler, M. Winter, R. Wischnewski, E. Witt, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, E. Wu, T. Wu, K. K. Yadav, H. Yamamoto, T. Yamamoto, N. Yamane, R. Yamazaki, S. Yanagita, L. Yang, D. Yelos, A. Yoshida, M. Yoshida, T. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, V. Zabalza, D. Zaborov, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanmar Sanchez, D. Zaric, D. Zavrtanik, M. Zavrtanik, A. Zdziarski, A. Zech, H. Zechlin, A. Zhao, V. Zhdanov, A. Ziegler, J. Ziemann, K. Ziętara, A. Zink, J. Ziółkowski, V. Zitelli, A. Zoli, J. Zorn, P. Żychowski
Oct. 17, 2016 astro-ph.HE
List of contributions from the Cherenkov Telescope Array (CTA) Consortium presented at the 6th International Symposium on High-Energy Gamma-Ray Astronomy (Gamma 2016), July 11-15, 2016, in Heidelberg, Germany.
Readout technologies for directional WIMP Dark Matter detection (1610.02396)
J. B. R. Battat, I. G. Irastorza, A. Aleksandrov, M. Ali Guler, T. Asada, E. Baracchini, J. Billard, G. Bosson, O. Bourrion, J. Bouvier, A. Buonaura, K. Burdge, S. Cebrian, P. Colas, L. Consiglio, T. Dafni, N. D'Ambrosio, C. Deaconu, G. De Lellis, T. Descombes, A. Di Crescenzo, N. Di Marco, G. Druitt, R. Eggleston, E. Ferrer-Ribas, T. Fusayasu, J. Galan, G. Galati, J. A. Garcia, J. G. Garza, V. Gentile, M. Garcia-Sciveres, Y. Giomataris, N. Guerrero, O. Guillaudin, J. Harton, T. Hashimoto, M. T. Hedges, F. Iguaz, T. Ikeda, I. Jaegle, J. A. Kadyk, T. Katsuragawa, S. Komura, H. Kubo, K. Kuge, J. Lamblin, A. Lauria, E. R. Lee, P. Lewis, M. Leyton, D. Loomba, J. P. Lopez, G. Luzon, F. Mayet, H. Mirallas, K. Miuchi, T. Mizumoto, Y. Mizumura, P. Monacelli, J. Monroe, M. C. Montesi, T. Naka, K. Nakamura, H. Nishimura, A. Ochi, T. Papevangelou, J. D. Parker, N. S. Phan, F. Pupilli, J. P. Richer, Q. Riffard, G. Rosa, D. Santos, T. Sawano, H. Sekiya, I. S. Seong, D. P. Snowden-Ifft, N. J. C. Spooner, A. Sugiyama, R. Taishaku, A. Takada, A. Takeda, M. Tanaka, T. Tanimori, T. N. Thorpe, V. Tioukov, H. Tomita, A. Umemoto, S. E. Vahsen, Y. Yamaguchi, M. Yoshimoto, E. Zayas
Oct. 6, 2016 hep-ex, physics.ins-det, astro-ph.CO, astro-ph.IM
The measurement of the direction of WIMP-induced nuclear recoils is a compelling but technologically challenging strategy to provide an unambiguous signature of the detection of Galactic dark matter. Most directional detectors aim to reconstruct the dark-matter-induced nuclear recoil tracks, either in gas or solid targets. The main challenge with directional detection is the need for high spatial resolution over large volumes, which puts strong requirements on the readout technologies. In this paper we review the various detector readout technologies used by directional detectors. In particular, we summarize the challenges, advantages and drawbacks of each approach, and discuss future prospects for these technologies.
Fermi Large Area Telescope Observations of the Monoceros Loop Supernova Remnant (1608.06380)
H. Katagiri, S. Sugiyama, M. Ackermann, J. Ballet, J.M. Casandjian, Y. Hanabata, J.W. Hewitt, M. Kerr, H. Kubo, M. Lemoine-Goumard, P.S. Ray
Aug. 23, 2016 astro-ph.HE
We present an analysis of the gamma-ray measurements by the Large Area Telescope onboard the Fermi Gamma-ray Space Telescope in the region of the supernova remnant (SNR) Monoceros Loop (G205.5+0.5). The brightest gamma-ray peak is spatially correlated with the Rosette Nebula, a molecular cloud complex adjacent to the southeast edge of the SNR. After subtraction of this emission by spatial modeling, the gamma-ray emission from the SNR emerges; it is extended and fit by a Gaussian spatial template. The gamma-ray spectra are significantly better reproduced by a curved shape than by a simple power law. The luminosities between 0.2-300 GeV are $\sim 4 \times 10^{34}$ erg s$^{-1}$ for the SNR and $\sim 3 \times 10^{34}$ erg s$^{-1}$ for the Rosette Nebula. We argue that the gamma rays likely originate from the interactions of particles accelerated in the SNR. The decay of neutral pions produced in nucleon-nucleon interactions of accelerated hadrons with interstellar gas provides a reasonable explanation for the gamma-ray emission of both the Rosette Nebula and the Monoceros SNR.
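The preference for a curved shape over a simple power law is the kind of statement one can prototype with a quick model comparison. A minimal sketch follows (Python); the synthetic flux points and the chi-square fit are illustrative stand-ins for the LAT photon data and the Poisson likelihood actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for a binned gamma-ray spectrum (arbitrary flux units).
E = np.logspace(-0.5, 2.5, 12)                        # energies in GeV
rng = np.random.default_rng(42)
true = E ** (-(2.0 + 0.25 * np.log(E)))               # intrinsically curved
sigma = 0.1 * true
flux = true + rng.normal(0.0, sigma)

def power_law(E, N0, gamma):
    return N0 * E ** (-gamma)

def log_parabola(E, N0, alpha, beta):
    return N0 * E ** (-(alpha + beta * np.log(E)))

def chi2(model, pars):
    return float(np.sum(((flux - model(E, *pars)) / sigma) ** 2))

p_pl, _ = curve_fit(power_law, E, flux, p0=[1.0, 2.0], sigma=sigma, maxfev=5000)
p_lp, _ = curve_fit(log_parabola, E, flux, p0=[1.0, 2.0, 0.1], sigma=sigma, maxfev=5000)

# A large chi2 drop for one extra parameter signals significant curvature;
# the likelihood analysis quotes the equivalent TS = 2 * delta(ln L).
print("chi2, power law   :", round(chi2(power_law, p_pl), 1))
print("chi2, log-parabola:", round(chi2(log_parabola, p_lp), 1))
```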
Insights into the emission of the blazar 1ES 1011+496 through unprecedented broadband observations during 2011 and 2012 (1603.06776)
J. Aleksić, S. Ansoldi, L. A. Antonelli, P. Antoranz, C. Arcaro, A. Babic, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, E. Carmona, A. Carosi, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, G. De Caneva, B. De Lotto, E. de Oña Wilhelmi, C. Delgado Mendez, F. Di Pierro, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher, D. Elsaesser, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, N. Godinović, A. González Muñoz, S. R. Gozzini, D. Hadasch, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, W. Idec, V. Kadenius, H. Kellermann, M. L. Knoetig, K. Kodani, Y. Konno, J. Krause, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, N. Lewandowska, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, A. López-Oramas, E. Lorenz, I. Lozano, M. Makariev, K. Mallot, G. Maneva, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, P. Munar-Adrover, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, R. Orito, A. Overkemping, S. Paiano, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, R. Reinthal, W. Rhode, M. Ribó, J. Rico, J. Rodriguez Garcia, T. Saito, K. Saito, K. Satalecka, V. Scalzotto, V. Scapin, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, P. Vogler, M. Will, R. Zanin, S. Buson, F. D'Ammando, A. Lähteenmäki, T. Hovatta, Y. Y. Kovalev, M. L. Lister, W. Max-Moerbeck, C. Mundell, A. B. Pushkarev, E. Rastorgueva-Foi, A. C. S. Readhead, J. L. Richards, J. Tammi, D. A. Sanchez, M. Tornikoski, T. Savolainen, I. Steele
June 6, 2016 astro-ph.HE
1ES 1011+496 $(z=0.212)$ was discovered in very high energy (VHE, E > 100 GeV) $\gamma$-rays with MAGIC in 2007. The absence of simultaneous data at lower energies led to a rather incomplete characterization of the broadband spectral energy distribution (SED). We study the source properties and the emission mechanisms, probing whether a simple one-zone synchrotron-self-Compton (SSC) scenario is able to explain the observed broadband spectrum. We analyzed VHE to radio data from 2011 and 2012 collected by MAGIC, $Fermi$-LAT, $Swift$, KVA, OVRO, and Metsähovi, in addition to optical polarimetry data and radio maps from the Liverpool Telescope and MOJAVE. The VHE spectrum was fit with a simple power law with a photon index of $3.69\pm0.22$ and a flux above 150 GeV of $(1.46\pm0.16)\times10^{-11}$ ph cm$^{-2}$ s$^{-1}$. 1ES 1011+496 was found to be in a generally quiescent state at all observed wavelengths, showing only moderate variability from radio to X-rays. A low degree of polarization of less than 10% was measured in the optical, while some bright features polarized up to 60% were observed in the radio jet. A similar trend in the rotation of the electric vector position angle was found in the optical and radio. The radio maps indicated a superluminal motion of $1.8\pm0.4\,c$, the highest statistically significant speed measured so far in a high-frequency-peaked BL Lac. For the first time, the high-energy bump in the broadband SED of 1ES 1011+496 could be fully characterized from 0.1 GeV to 1 TeV, which permitted a more reliable interpretation within the one-zone SSC scenario. The polarimetry data suggest that at least part of the optical emission originates in some of the bright radio features, while the low polarization in the optical might be due to the contribution of parts of the radio jet with different orientations of the magnetic field to the optical emission.
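For readers wanting to reproduce the superluminal-speed bookkeeping: an angular proper motion $\mu$ translates into an apparent transverse speed via $\beta_{app} = \mu D_A (1+z)/c$. A small sketch under assumed flat-$\Lambda$CDM parameters follows (Python); the cosmology and the scanned $\mu$ values are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Apparent transverse speed beta_app = mu * D_A * (1+z) / c for a jet feature
# with angular proper motion mu. Flat LambdaCDM assumed (H0, OM below are
# illustrative, not necessarily the values adopted in the paper).
H0 = 70.0                 # km s^-1 Mpc^-1
OM = 0.3
C_KM_S = 299792.458
MPC_KM = 3.0857e19
MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)
YR_S = 3.156e7

def comoving_distance_mpc(z, n=10000):
    zs = np.linspace(0.0, z, n)
    ez = np.sqrt(OM * (1.0 + zs) ** 3 + (1.0 - OM))
    return (C_KM_S / H0) * np.trapz(1.0 / ez, zs)

def beta_app(mu_mas_per_yr, z):
    d_a_km = comoving_distance_mpc(z) / (1.0 + z) * MPC_KM  # angular-diameter distance
    mu_rad_s = mu_mas_per_yr * MAS_TO_RAD / YR_S
    return mu_rad_s * d_a_km * (1.0 + z) / C_KM_S

# e.g. which proper motions give beta_app ~ 1.8 at z = 0.212?
for mu in (0.05, 0.13, 0.20):   # mas/yr, illustrative scan
    print(f"mu = {mu:.2f} mas/yr -> beta_app = {beta_app(mu, 0.212):.2f}")
```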
Multi-Wavelength Observations of the Blazar 1ES 1011+496 in Spring 2008 (1603.07308)
The MAGIC Collaboration: M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra Gonzalez, W. Bednarek, E. Bernardini, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, E. Carmona, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, G. De Caneva, B. De Lotto, E. de Ona Wilhelmi, C. Delgado Mendez, F. Di Pierro, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Elsaesser, A. Fernandez-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. Garcia Lopez, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, D. Glawion, N. Godinovic, A. Gonzalez Munoz, D. Guberman, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, A. Lopez-Oramas, E. Lorenz, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martinez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, R. Orito, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, R. Reinthal, W. Rhode, M. Ribo, J. Rico, J. Rodriguez Garcia, S. Rugamer, T. Saito, K. Satalecka, V. Scapin, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzic, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin, F. Lucarelli, C. Pittori, S. Vercellone, A. Berdyugin, M. T. Carini, A. Lahteenmaki, M. Pasanen, A. Pease, J. Sainio, M. Tornikoski, R. Walters
March 23, 2016 astro-ph.HE
The BL Lac object 1ES 1011+496 was discovered at very high energy gamma-rays by MAGIC in spring 2007. Before that, the source had been little studied at other wavelengths, so a multi-wavelength (MWL) campaign was organized in spring 2008. Along with MAGIC, the MWL campaign included the Metsähovi radio observatory, the Bell and KVA optical telescopes, and the Swift and AGILE satellites. MAGIC observations spanned March to May 2008, for a total of 27.9 hours, of which 19.4 hours remained after quality cuts. The light curve showed no significant variability. The differential VHE spectrum could be described with a power-law function. Both results were similar to those obtained during the discovery. Swift XRT observations revealed an X-ray flare, characterized by a harder-when-brighter trend, as is typical for high synchrotron peak BL Lac objects (HBL). Strong optical variability was found during the campaign, but no conclusion on the connection between the optical and VHE gamma-ray bands could be drawn. The contemporaneous SED shows a synchrotron-dominated source, unlike what was concluded in previous work based on non-simultaneous data, and is well described by a standard one-zone synchrotron self-Compton model. We also studied the source classification. While the optical and X-ray data taken during our campaign show typical characteristics of an HBL, we suggest, based on archival data, that 1ES 1011+496 is actually a borderline case between intermediate and high synchrotron peak frequency BL Lac objects.
Search for VHE gamma-ray emission from Geminga pulsar and nebula with the MAGIC telescopes (1603.00730)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra Gonzalez, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Ona Wilhelmi, F. Di Pierro, M. Doert, A. Dominguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, V. Fallah Ramazani, A. Fernandez-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. Garcia Lopez, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinovic, A. Gonzalez Munoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martinez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogues, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, I. Reichardt, W. Rhode, M. Ribo, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Suric, L. Takalo, F. Tavecchio, P. Temnikov, T. Terzic, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin
March 5, 2016 astro-ph.HE
The Geminga pulsar, one of the brightest gamma-ray sources, is a promising candidate for emission of very-high-energy (VHE > 100 GeV) pulsed gamma rays. In addition, the detection of a large nebula has been claimed by water Cherenkov instruments. We performed deep observations of Geminga with the MAGIC telescopes, yielding 63 hours of good-quality data, and searched for emission from the pulsar and pulsar wind nebula. We found no significant signal, and derived 95% confidence level upper limits. The resulting upper limits of 5.3 x 10^{-13} TeV cm^{-2} s^{-1} for the Geminga pulsar and 3.5 x 10^{-12} TeV cm^{-2} s^{-1} for the surrounding nebula at 50 GeV are the most constraining obtained so far at VHE. To complement the VHE observations, we also analyzed 5 years of Fermi-LAT data from Geminga, finding that a sub-exponential cut-off is preferred over the exponential cut-off that has typically been used in the literature. We also find that, above 10 GeV, the gamma-ray spectra from Geminga can be described with a power law with an index softer than 5. The extrapolation of the power-law Fermi-LAT pulsed spectra to VHE falls well below the MAGIC upper limits, indicating that the detection of pulsed emission from Geminga with the current generation of Cherenkov telescopes is very difficult.
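The exponential vs. sub-exponential comparison comes down to one extra shape parameter b in the cut-off term of the spectral model. A minimal sketch of the two shapes (Python; the parameter values are illustrative placeholders, not the fitted Geminga values):

```python
import numpy as np

def cutoff_power_law(E, N0, gamma, E_c, b):
    """dN/dE = N0 * (E/E0)^-gamma * exp(-(E/E_c)^b); b = 1 is a pure
    exponential cut-off, b < 1 the sub-exponential variant preferred here."""
    E0 = 1.0  # pivot energy in GeV (illustrative)
    return N0 * (E / E0) ** (-gamma) * np.exp(-((E / E_c) ** b))

E = np.array([1.0, 10.0, 50.0, 100.0])  # GeV
# Same index and cut-off energy, different cut-off shapes (toy values).
exp_flux = cutoff_power_law(E, 1.0, 1.3, 2.5, 1.0)
sub_flux = cutoff_power_law(E, 1.0, 1.3, 2.5, 0.5)
for e, f1, f2 in zip(E, exp_flux, sub_flux):
    print(f"E = {e:6.1f} GeV  exp: {f1:.3e}  sub-exp: {f2:.3e}")
# The sub-exponential form falls off more slowly above E_c, which is why the
# choice matters when extrapolating the pulsed spectrum into the VHE range.
```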
MAGIC observations of the February 2014 flare of 1ES 1011+496 and ensuing constraint of the EBL density (1602.05239)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, E. Carmona, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, C. Delgado Mendez, F. Di Pierro, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinovic, A. González Muñoz, D. Guberman, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, A. López-Oramas, E. Lorenz, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, R. Orito, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, W. Rhode, M. Ribó, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzic, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin
Feb. 18, 2016 astro-ph.CO, astro-ph.HE
In February-March 2014, the MAGIC telescopes observed the high-frequency-peaked BL Lac 1ES 1011+496 (z=0.212) in a flaring state at very high energy (VHE, E>100GeV). The flux reached a level more than 10 times higher than in any previously recorded flaring state of the source. We describe the characteristics of the flare, presenting the light curve and the spectral parameters of the night-wise spectra and of the average spectrum of the whole period. From these data we aim to detect the imprint of the Extragalactic Background Light (EBL) in the VHE spectrum of the source, in order to constrain its intensity in the optical band. For this we implement the method developed by the H.E.S.S. collaboration, in which the intrinsic energy spectrum of the source is modeled with a simple function and the EBL-induced optical depth is calculated using a template EBL model. The likelihood of the observed spectrum is then maximized, including a normalization factor for the EBL opacity among the free parameters. Differential energy spectra were produced for all nights of the observed period. Evaluating the changes in the fit parameters, we conclude that the spectral shapes for most of the nights were compatible, regardless of the flux level, which enabled us to produce an average spectrum from which the EBL imprint could be constrained. The likelihood ratio test shows that the model with an EBL density 1.07(-0.20,+0.24)_{stat+sys}, relative to the one in the tested EBL template (Dominguez et al. 2011), is preferred at the 4.6 sigma level over the no-EBL hypothesis, under the assumption that the intrinsic source spectrum can be modeled as a log-parabola. This translates into a constraint on the EBL density in the wavelength range [0.24 um, 4.25 um], with a peak value at 1.4 um of F=12.27_{-2.29}^{+2.75} nW m^{-2} sr^{-1}, including systematics.
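The core of the method is a one-parameter likelihood scan: the observed spectrum is modeled as an intrinsic shape attenuated by alpha times the template optical depth, and alpha is profiled. A compact sketch follows (Python, with a chi-square in place of the full event-level likelihood; the synthetic spectrum and the toy tau(E) curve are stand-ins for the MAGIC data and the Dominguez et al. 2011 template).

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy optical depth standing in for an EBL template; only the rising trend
# with energy matters for this sketch.
def tau(E_tev):
    return 1.2 * E_tev ** 0.9

E = np.logspace(-1, 0.3, 10)                       # 0.1 .. 2 TeV
rng = np.random.default_rng(7)
intrinsic = 5.0 * E ** (-(2.0 + 0.4 * np.log(E)))  # log-parabola (toy)
obs_true = intrinsic * np.exp(-1.0 * tau(E))       # generated with alpha = 1
sigma = 0.08 * obs_true
obs = obs_true + rng.normal(0.0, sigma)

def chi2_at_alpha(alpha):
    """Profile out the intrinsic-spectrum parameters at fixed EBL scale."""
    def model(E, N0, a, b):
        return N0 * E ** (-(a + b * np.log(E))) * np.exp(-alpha * tau(E))
    p, _ = curve_fit(model, E, obs, p0=[5.0, 2.0, 0.4], sigma=sigma, maxfev=5000)
    return float(np.sum(((obs - model(E, *p)) / sigma) ** 2))

alphas = np.linspace(0.0, 2.0, 41)
chi2s = np.array([chi2_at_alpha(a) for a in alphas])
best = alphas[np.argmin(chi2s)]
# TS of the EBL imprint: fit quality with no EBL minus the best fit
# (~ significance^2 for one parameter of interest).
print(f"best-fit alpha = {best:.2f}, TS vs alpha = 0: {chi2s[0] - chi2s.min():.1f}")
```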
Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies (1601.06590)
M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, E. Carmona, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, C. Delgado Mendez, F. Di Pierro, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinović, A. González Muñoz, D. Guberman, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, A. López-Oramas, E. Lorenz, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, R. Orito, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, W. Rhode, M. Ribó, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, C. Schultz, T. Schweizer, S. N. Shore, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin, J. Aleksić, M. Wood, B. Anderson, E. D. Bloom, J. Cohen-Tanugi, A. Drlica-Wagner, M. N. Mazziotta, M. Sánchez-Conde, L. Strigari
We present the first joint analysis of gamma-ray data from the MAGIC Cherenkov telescopes and the Fermi Large Area Telescope (LAT) to search for gamma-ray signals from dark matter annihilation in dwarf satellite galaxies. We combine 158 hours of MAGIC observations of Segue 1 with six years of Fermi-LAT observations of 15 dwarf satellite galaxies. We obtain limits on the annihilation cross-section for dark matter particle masses between 10 GeV and 100 TeV, the widest mass range ever explored by a single gamma-ray analysis. These limits improve on previously published Fermi-LAT and MAGIC results by up to a factor of two at certain masses. Our new inclusive analysis approach is completely generic and can be used to perform a global, sensitivity-optimized dark matter search by combining data from present and future gamma-ray and neutrino detectors.
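The combination works because independent targets and instruments simply add in log-likelihood: for each dark-matter mass, the profile likelihood curves in the cross-section are summed and the one-sided 95% upper limit is read off where the summed curve rises by 2.71 above its minimum. A schematic sketch (Python; the parabolic per-target curves are toy stand-ins for the tabulated likelihood profiles a real analysis would use):

```python
import numpy as np

# Grid in the annihilation cross-section (cm^3 s^-1), log-spaced.
sv = np.logspace(-27, -22, 501)

def toy_profile(sv, sv_hat, scale):
    """Toy -2 ln(likelihood) profile for one target: parabolic in log10(sv)
    with minimum at sv_hat. Real analyses tabulate these per mass and target."""
    return ((np.log10(sv) - np.log10(sv_hat)) / scale) ** 2

# Two stand-in targets (e.g. one MAGIC-like, one LAT-like sensitivity).
joint = toy_profile(sv, 3e-25, 0.35) + toy_profile(sv, 8e-25, 0.50)

# One-sided 95% CL upper limit: -2 delta ln L = 2.71 above the joint minimum,
# taken on the branch above the best-fit value.
imin = np.argmin(joint)
above = np.where((joint - joint[imin] >= 2.71) & (sv > sv[imin]))[0]
print(f"joint 95% UL on <sigma v> ~ {sv[above[0]]:.2e} cm^3 s^-1")
```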
Deep observation of the NGC 1275 region with MAGIC: search of diffuse gamma-ray emission from cosmic rays in the Perseus cluster (1602.03099)
MAGIC Collaboration: M. L. Ahnen, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U. Barres de Almeida, J. A. Barrio, J. Becerra González, W. Bednarek, E. Bernardini, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, E. Carmona, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J. L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Oña Wilhelmi, C. Delgado Mendez, F. Di Pierro, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, A. Fernández-Barral, D. Fidalgo, M. V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. García López, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinović, A. González Muñoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. López, R. López-Coto, E. Lorenz, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martínez, D. Mazin, U. Menzel, J. M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, R. Orito, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J. M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Persic, J. Poutanen, P. G. Prada Moroni, E. Prandini, I. Puljak, W. Rhode, M. Ribó, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, C. Schultz, T. Schweizer, A. Sillanpää, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, L. Takalo, H. Takami, F. Tavecchio, P. Temnikov, T. Terzić, D. Tescaro, M. Teshima, J. Thaele, D. F. Torres, T. Toyama, A. Treves, M. Vazquez Acosta, V. Verguilov, I. Vovk, J. E. Ward, M. Will, M. H. Wu, R. Zanin, and: C. Pfrommer, A. Pinzke, F. Zandanel
Feb. 9, 2016 astro-ph.HE
Clusters of galaxies are expected to be reservoirs of cosmic rays (CRs) that should produce diffuse gamma-ray emission due to their hadronic interactions with the intra-cluster medium. The nearby Perseus cool-core cluster, identified as the most promising target to search for such an emission, has been observed with the MAGIC telescopes at very-high energies (VHE, E>100 GeV) for a total of 253 hr from 2009 to 2014. The active nuclei of NGC 1275, the central dominant galaxy of the cluster, and IC 310, lying at about 0.6$^\circ$ from the centre, have been detected as point-like VHE gamma-ray emitters during the first phase of this campaign. We report an updated measurement of the NGC 1275 spectrum, which is well described by a power law with a photon index of $3.6\pm0.2_{stat}\pm0.2_{syst}$ between 90 GeV and 1.2 TeV. We do not detect any diffuse gamma-ray emission from the cluster and set stringent constraints on its CR population. In order to bracket the uncertainties over the CR spatial and spectral distributions, we adopt different spatial templates and power-law spectral indexes $\alpha$. For $\alpha=2.2$, the CR-to-thermal pressure within the cluster virial radius is constrained to be below 1-2%, except if CRs can propagate out of the cluster core, generating a flatter radial distribution and releasing the CR-to-thermal pressure constraint to <20%. Assuming that the observed radio mini-halo of Perseus is generated by secondary electrons from CR hadronic interactions, we can derive lower limits on the central magnetic field, $B_0$, that depend on the CR distribution. For $\alpha=2.2$, $B_0\gtrsim5-8 \mu$G, which is below the 25 $\mu$G inferred from Faraday rotation measurements, whereas, for $\alpha\lesssim2.1$, the hadronic interpretation of the diffuse radio emission is in contrast with our gamma-ray flux upper limits independently of the magnetic field strength.
Fermi LAT Discovery of Extended Gamma-Ray Emissions in the Vicinity of the HB3 Supernova Remnant (1601.01407)
H. Katagiri, K. Yoshida, J. Ballet, M. H. Grondin, Y. Hanabata, J. W. Hewitt, H. Kubo, M. Lemoine-Goumard
Jan. 7, 2016 astro-ph.HE
We report the discovery of extended gamma-ray emission measured by the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope in the region of the supernova remnant (SNR) HB3 (G132.7+1.3) and the W3 HII complex adjacent to the southeast of the remnant. W3 is spatially associated with bright 12CO (J=1-0) emission. The gamma-ray emission is spatially correlated with this gas and the SNR. We discuss the possibility that gamma rays originate in interactions between particles accelerated in the SNR and interstellar gas or radiation fields. The decay of neutral pions produced in nucleon-nucleon interactions between accelerated hadrons and interstellar gas provides a reasonable explanation for the gamma-ray emission. The emission from W3 is consistent with irradiation of the CO clouds by the cosmic rays accelerated in HB3.
Measurement of the $\nu_\mu$ CCQE cross section on carbon with the ND280 detector at T2K (1411.6264)
T2K Collaboration: K. Abe, J. Adam, H. Aihara, T. Akiri, C. Andreopoulos, S. Aoki, A. Ariga, S. Assylbekov, D. Autiero, M. Barbi, G. J. Barker, G. Barr, M. Bass, M. Batkiewicz, F. Bay, V. Berardi, B. E. Berger, S. Berkman, S. Bhadra, F. d. M. Blaszczyk, A. Blondel, C. Bojechko, S. Bolognesi, S. Bordoni, S. B. Boyd, D. Brailsford, A. Bravar, C. Bronner, N. Buchanan, R. G. Calland, J. Caravaca Rodríguez, S. L. Cartwright, R. Castillo, M. G. Catanesi, A. Cervera, D. Cherdack, G. Christodoulou, A. Clifton, J. Coleman, S. J. Coleman, G. Collazuol, K. Connolly, L. Cremonesi, A. Dabrowska, I. Danko, R. Das, S. Davis, P. de Perio, G. De Rosa, T. Dealtry, S. R. Dennis, C. Densham, D. Dewhurst, F. Di Lodovico, S. Di Luise, S. Dolan, O. Drapier, T. Duboyski, K. Duffy, J. Dumarchez, S. Dytman, M. Dziewiecki, S. Emery-Schrenk, A. Ereditato, L. Escudero, T. Feusels, A. J. Finch, G. A. Fiorentini, M. Friend, Y. Fujii, Y. Fukuda, A. P. Furmanski, V. Galymov, A. Garcia, S. Giffin, C. Giganti, K. Gilje, D. Goeldi, T. Golan, M. Gonin, N. Grant, D. Gudin, D. R. Hadley, L. Haegel, A. Haesler, M. D. Haigh, P. Hamilton, D. Hansen, T. Hara, M. Hartz, T. Hasegawa, N. C. Hastings, T. Hayashino, Y. Hayato, C. Hearty, R. L. Helmer, M. Hierholzer, J. Hignight, A. Hillairet, A. Himmel, T. Hiraki, S. Hirota, J. Holeczek, S. Horikawa, K. Huang, A. K. Ichikawa, K. Ieki, M. Ieva, M. Ikeda, J. Imber, J. Insler, T. J. Irvine, T. Ishida, T. Ishii, E. Iwai, K. Iwamoto, K. Iyogi, A. Izmaylov, A. Jacob, B. Jamieson, M. Jiang, S. Johnson, J. H. Jo, P. Jonsson, C. K. Jung, M. Kabirnezhad, A. C. Kaboth, T. Kajita, H. Kakuno, J. Kameda, Y. Kanazawa, D. Karlen, I. Karpikov, T. Katori, E. Kearns, M. Khabibullin, A. Khotjantsev, D. Kielczewska, T. Kikawa, A. Kilinski, J. Kim, S. King, J. Kisiel, P. Kitching, T. Kobayashi, L. Koch, A. Kolaceke, A. Konaka, L. L. Kormos, A. Korzenev, Y. Koshio, W. Kropp, H. Kubo, Y. Kudenko, R. Kurjata, T. Kutter, J. Lagoda, I. Lamont, E. Larkin, M. Laveder, M. Lawe, M. Lazos, T. Lindner, C. Lister, R. P. Litchfield, A. Longhin, J. P. Lopez, L. Ludovici, L. Magaletti, K. Mahn, M. Malek, S. Manly, A. D. Marino, J. Marteau, J. F. Martin, P. Martins, S. Martynenko, T. Maruyama, V. Matveev, K. Mavrokoridis, E. Mazzucato, M. McCarthy, N. McCauley, K. S. McFarland, C. McGrew, A. Mefodiev, C. Metelko, M. Mezzetto, P. Mijakowski, C. A. Miller, A. Minamino, O. Mineev, A. Missert, M. Miura, S. Moriyama, Th. A. Mueller, A. Murakami, M. Murdoch, S. Murphy, J. Myslik, T. Nakadaira, M. Nakahata, K. G. Nakamura, K. Nakamura, S. Nakayama, T. Nakaya, K. Nakayoshi, C. Nantais, C. Nielsen, M. Nirkko, K. Nishikawa, Y. Nishimura, J. Nowak, H. M. O'Keeffe, R. Ohta, K. Okumura, T. Okusawa, W. Oryszczak, S. M. Oser, T. Ovsyannikova, R. A. Owen, Y. Oyama, V. Palladino, J. L. Palomino, V. Paolone, D. Payne, O. Perevozchikov, J. D. Perkin, Y. Petrov, L. Pickard, E. S. Pinzon Guerra, C. Pistillo, P. Plonski, E. Poplawska, B. Popov, M. Posiadala-Zezula, J. M. Poutissou, R. Poutissou, P. Przewlocki, B. Quilain, E. Radicioni, P. N. Ratoff, M. Ravonel, M. A. M. Rayner, A. Redij, M. Reeves, E. Reinherz-Aronis, C. Riccio, P. A. Rodrigues, P. Rojas, E. Rondio, S. Roth, A. Rubbia, D. Ruterbories, R. Sacco, K. Sakashita, F. Sánchez, F. Sato, E. Scantamburlo, K. Scholberg, S. Schoppmann, J. Schwehr, M. Scott, Y. Seiya, T. Sekiguchi, H. Sekiya, D. Sgalaberna, R. Shah, F. Shaker, D. Shaw, M. Shiozawa, S. Short, Y. Shustrov, P. Sinclair, B. Smith, M. Smy, J. T. Sobczyk, H. Sobel, M. Sorel, L. Southwell, P. Stamoulis, J. Steinmann, B. Still, Y. 
Suda, A. Suzuki, K. Suzuki, S. Y. Suzuki, Y. Suzuki, R. Tacik, M. Tada, S. Takahashi, A. Takeda, Y. Takeuchi, H. K. Tanaka, H. A. Tanaka, M. M. Tanaka, D. Terhorst, R. Terri, L. F. Thompson, A. Thorley, S. Tobayama, W. Toki, T. Tomura, Y. Totsuka, C. Touramanis, T. Tsukamoto, M. Tzanov, Y. Uchida, A. Vacheret, M. Vagins, G. Vasseur, T. Wachala, K. Wakamatsu, C. W. Walter, D. Wark, W. Warzycha, M. O. Wascko, A. Weber, R. Wendell, R. J. Wilkes, M. J. Wilking, C. Wilkinson, Z. Williamson, J. R. Wilson, R. J. Wilson, T. Wongjirad, Y. Yamada, K. Yamamoto, C. Yanagisawa, T. Yano, S. Yen, N. Yershov, M. Yokoyama, K. Yoshida, T. Yuan, M. Yu, A. Zalewska, J. Zalipska, L. Zambelli, K. Zaremba, M. Ziembicki, E. D. Zimmerman, M. Zito, J. Zmuda
Dec. 11, 2015 hep-ex, nucl-ex
The Charged-Current Quasi-Elastic (CCQE) interaction, $\nu_{l} + n \rightarrow l^{-} + p$, is the dominant CC process at $E_\nu \sim 1$ GeV and contributes to the signal in accelerator-based long-baseline neutrino oscillation experiments operating at intermediate neutrino energies. This paper reports a measurement by the T2K experiment of the $\nu_{\mu}$ CCQE cross section on a carbon target with the off-axis detector, based on the observed distribution of muon momentum ($p_\mu$) and angle with respect to the incident neutrino beam ($\theta_\mu$). The flux-integrated CCQE cross section was measured to be $(0.83 \pm 0.12) \times 10^{-38}\textrm{ cm}^{2}$, in good agreement with the NEUT MC value of ${0.88 \times 10^{-38}} \textrm{ cm}^{2}$. The energy dependence of the CCQE cross section is also reported. The axial mass, $M_A^{QE}$, of the dipole axial form factor was extracted assuming the Smith-Moniz CCQE model with a relativistic Fermi gas nuclear model. Using the absolute (shape-only) $p_{\mu}\cos\theta_\mu$ distribution, the effective $M_A^{QE}$ parameter was measured to be ${1.26^{+0.21}_{-0.18} \textrm{ GeV}/c^{2}}$ (${1.43^{+0.28}_{-0.22} \textrm{ GeV}/c^{2}}$).
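The extraction hinges on the dipole parametrization of the axial form factor, $F_A(Q^2) = g_A / (1 + Q^2/M_A^2)^2$. The sketch below shows only this form factor and how $M_A$ reshapes its $Q^2$ fall-off; the full Llewellyn Smith cross-section machinery and the Smith-Moniz nuclear model are omitted, and the example values are schematic.

```python
# Dipole axial form factor used in the M_A^QE extraction:
#   F_A(Q^2) = g_A / (1 + Q^2 / M_A^2)^2
# Only the form factor itself is shown; folding it into the full CCQE cross
# section (Llewellyn Smith + relativistic Fermi gas) is omitted here.
import numpy as np

G_A = 1.267  # axial coupling from neutron beta decay

def F_A(Q2, M_A=1.26):
    """Dipole axial form factor; Q2 in GeV^2, M_A in GeV/c^2."""
    return G_A / (1.0 + Q2 / M_A**2) ** 2

Q2 = np.linspace(0.0, 2.0, 5)
# A larger M_A makes F_A fall more slowly with Q^2, hardening the predicted
# muon kinematics; this is the handle the (p_mu, theta_mu) fit exploits.
print(F_A(Q2, M_A=1.26))
print(F_A(Q2, M_A=1.03))   # world-average-like value, for comparison
```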
Dual-drug loaded nanoneedles with targeting property for efficient cancer therapy
Xiangrui Yang†1,2,3, Shichao Wu†1,2,3,*, Wanyi Xie1, Anran Cheng1, Lichao Yang1, Zhenqing Hou2,* and Xin Jin1,*
(† equal contribution; * corresponding author)
Received: 14 June 2017
Since anticancer drugs act on cancer cells through diverse inhibitory mechanisms, the use of two or more kinds of anticancer agents may achieve superior therapeutic effects, especially against drug-resistant tumors.
In this study, we developed dual-drug [methotrexate (MTX) and 10-hydroxycamptothecine (HCPT)] loaded nanoneedles (DDNDs) with a pronounced targeting property, high drug loading and prolonged drug release. Anti-solvent precipitation of HCPT and MTX-modified PEG-b-PLGA (PEG-b-PLGA-MTX, PPMTX) leads to nucleation of nanoneedles with nanocrystalline HCPT as the core, wrapped with PPMTX as a steric stabilizer. In vitro cell uptake studies showed that the DDNDs exhibited an obvious targeting property and entered HeLa cells more easily than nanoneedles without MTX modification. Cytotoxicity tests illustrated that the DDNDs possessed a better killing ability against HeLa cells than the individual drugs or their mixture at the same dose, indicating a good synergistic effect and targeting property. The in vivo studies further confirmed these conclusions.
This approach led to a promising sustained drug delivery system for cancer diagnosis and treatment.
Keywords: 10-hydroxycamptothecine, dual-drug, nanoneedle
Due to the rapid development of drug resistance in cancer cells [1, 2], the use of a single agent often fails to achieve satisfactory therapeutic efficacy. To overcome this problem and improve anticancer efficacy, co-delivery of multifunctional agents is a promising strategy, which has received considerable research interest in cancer therapy [3–5]. It is well known that cancer cells exist at different stages of the cell cycle owing to the heterogeneity of a tumor, and different antitumor drugs have diverse inhibitory mechanisms at varying stages of the cell cycle [6, 7]. Thus, a delivery system loaded with two or more anticancer drugs would have specific activity on cells at different growth stages and act synergistically. As a result, combination therapy can bypass the drug resistance of cancer cells and significantly enhance therapeutic efficiency compared with individual drug agents [8, 9]. Nevertheless, combination therapy is largely hindered by its associated side effects, which can deteriorate the patient's health. To address this problem, tumor-specific targeting has been proposed for its positive effect on not only reducing serious side effects but also enhancing the treatment. Hence, it has become one of the most effective and promising techniques for combination therapy. Folic acid (FA) is one of the most commonly used targeting ligands, as the folate receptor has been found to be overexpressed on the surface of many types of cancer cells [10, 11]. In recent years, the anticancer drug methotrexate (MTX), whose structure is analogous to that of FA, has also been found to have a targeting action [12, 13]. Therefore, the MTX loaded in the particles serves not only as a drug but also as a potential targeting ligand [14]. Targeted therapeutic drug delivery, with MTX as the targeting ligand on the surface cooperating with another anticancer drug inside, is expected to greatly improve the therapeutic efficiency and simplify nanoparticle-based drug delivery systems at the same time.
Dual-drug loaded nanostructures can be classified into nanoparticle-based and carrier-free drug delivery systems. Nanoparticle-based drug delivery systems have received considerable research interest in the past decades [15–19], as the suboptimal pharmacokinetic properties of chemotherapy can be significantly improved with the protection of the carrier, yielding, for example, higher stability and a longer circulating half-life. Nanoparticles that have been demonstrated to deliver therapeutic drugs in combination include polymeric nanoparticles [20–24], polymer–drug conjugates [25–27], mesoporous silica nanoparticles [28], iron oxide nanoparticles [29], and so on. In carrier-based drug delivery systems, the carrier typically makes up the bulk of the nanostructure, and the drugs are loaded into the carrier-based nanostructures via physical adsorption or chemical binding [20–27]. In spite of the improved properties, low drug loading is the major shortcoming of carrier-based drug delivery systems. On the contrary, carrier-free drug delivery systems have a high drug loading, as the drugs make up the major components of the nanostructures [30]. Precisely because of this, the pharmaceutical properties of carrier-free drug delivery systems may not be as good as those of carrier-based systems. Hence, research has focused on how to combine the advantages of the two systems.
Moreover, another way to improve the efficiency is to change the shape of the nanoparticles. There is already evidence that shape plays an important role in cellular internalization and can affect the outcome of the treatment to a large degree [31–36]. In our previous studies, it was found that cancer cells prefer particles with a high aspect ratio and sharp ends: pointed-end 10-hydroxycamptothecine (HCPT) nanoneedles with an average length of 5 µm were internalized much more rapidly and efficiently by three types of cancer cells than nanorods of the same size and nanospheres with a much smaller size of 150 nm [37].
In this study, we developed methotrexate- and 10-hydroxycamptothecine-loaded nanoneedles (DDNDs) with high drug loading, targeting and imaging properties. The DDNDs possess a nanocrystalline HCPT core integrated with a PEG-b-PLGA-MTX (PPMTX) conjugate shell, the latter of which functions as the targeting agent and stabilizer at the same time. The nanoneedles with high HCPT loading show a remarkably prolonged and sustained release owing to the presence of the polymeric layer. In cytotoxicity tests, the nanoneedles showed a better killing ability against HeLa cells than the individual drugs or their mixture, which evidenced the good synergistic effect of the dual ingredients and the targeting property of the MTX ingredient. The subsequent in vivo studies further illustrate that the DDNDs combine the advantages of carrier-based and carrier-free drug delivery systems. These results highlight the great potential of multidrug-loaded, imaging-functional nanoneedles for highly efficient chemotherapy, as well as for cancer diagnostic applications.
All chemicals were analytical grade and used as received without further purification. MTX (purity > 99%) was purchased from Bio Basic Inc. HCPT (purity > 99%) was purchased from Lishizhen Pharmaceutical Co., Ltd. The monomethoxy (polyethylene glycol)-poly(lactide-co-glycolide) (PEG-b-PLGA, PEG: 10%, 2000 Da; PLGA: 20,000 Da, 85/15) was obtained from Daigang Biotechnology Co., Ltd. N-hydroxysuccinimide (NHS) and dicyclohexylcarbodiimide (DCC) were purchased from Sigma-Aldrich. Ultrapure water (18 MΩ/cm) was used throughout the work.
Animals and cell cultures
HeLa cells were obtained from the American Type Culture Collection. The complete growth medium was DMEM supplemented with 10% FBS and 1% penicillin/streptomycin. The cells were cultivated in an incubator (Thermo Scientific) at 37 °C in the presence of 5% CO2 for 24 h.
The BALB/C mice (5–6 weeks, 16–20 g) and BALB/C nude mice (5–6 weeks, 16–20 g) were purchased from Shanghai Laboratory Animal Center, Chinese Academy of Sciences. The tumor models were set up by subcutaneously injecting 1 × 10^6 HeLa cells in the selected positions of the mice.
Synthesis of the PPMTX conjugate
MTX (5 mg), PEG-b-PLGA (20 mg), DCC (4 mg), NHS (4 mg) and DMAP (2 mg) were added into 2 mL DMF and stirred at rt for 12 h to obtain the PPMTX. Then, the suspension was filtered and dialyzed against a buffer solution (pH 10.0) to remove excess MTX molecules. The remaining suspension was then centrifuged at 5000 rpm and lyophilized for 24 h to obtain the dry PPMTX powder.
Preparation of DDNDs
First, HCPT (10 mg) and PPMTX (10 mg) were dissolved in 20 mL acetone at 40 °C. Afterwards, the mixture was added dropwise into pure water (100 mL) under sonication (200 W) in an ice bath for 5 min. The suspension was then centrifuged (10,000 rpm, 5 min) and lyophilized for 24 h to obtain the DDND powder. For the preparation of NDs, PEG-b-PLGA was used in place of PPMTX.
The morphology of the DDNDs was examined by SEM (UV-70) at 10 kV. The size and zeta-potential values were determined by a Malvern Zetasizer Nano-ZS machine (Malvern Instruments, Malvern). Three parallel measurements were carried out to determine the average values. The content of MTX in PPMTX was determined by UV spectrophotometry (Beckman DU800); all samples were assayed at 305 nm. The content of HCPT in DDNDs was determined by fluorescence spectrophotometry (excitation at 382 nm, emission at 525 nm). The drug content and entrapment efficiency were calculated by Eqs. (1)–(4):
$$\begin{aligned} {\text{Drug loading content of HCPT }}({\%}) &= ({\text{weight of HCPT in DDNDs}})/({\text{weight of DDNDs}}) \\ & \quad \times 100{\%}\end{aligned}$$
$$\begin{aligned} {\text{Entrapment efficiency of HCPT}}({\%}) &= ( {\text{weight of drug in DDNDs}})/( {\text{weight of feeding drug}}) \\ & \quad \times 100{\%} \end{aligned}$$
$$\begin{aligned} {\text{Percentage of MTX in PPMTX }}({\%}) & = ( {\text{weight of MTX}})/( {\text{weight of PPMTX}}) \\ & \quad \times 100{\text{\% }}\end{aligned}$$
$$\begin{aligned} {\text{Drug loading content of MTX }}({\%}) &= ( {1 - {\text{Drug loading content of HCPT}}} ) \\ & \quad \times {\text{percentage of MTX in PPMTX}} \times 100{\%} \end{aligned}$$
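As a minimal numeric illustration of Eqs. (1)–(4), the snippet below computes the loading quantities from weighed masses; the input masses are hypothetical placeholders, not the paper's measured values.

```python
# Worked example of Eqs. (1)-(4); all masses are hypothetical placeholders.
w_hcpt_in_ddnds = 6.2      # mg of HCPT recovered in the DDNDs
w_ddnds = 10.0             # mg of DDNDs
w_hcpt_fed = 6.7           # mg of HCPT fed into the preparation
mtx_frac_in_ppmtx = 0.051  # fraction of MTX in PPMTX (Eq. 3), from the UV assay

dl_hcpt = w_hcpt_in_ddnds / w_ddnds              # Eq. (1): HCPT loading content
ee_hcpt = w_hcpt_in_ddnds / w_hcpt_fed           # Eq. (2): entrapment efficiency
dl_mtx = (1 - dl_hcpt) * mtx_frac_in_ppmtx       # Eq. (4): MTX loading content

print(f"HCPT loading {dl_hcpt:.1%}, entrapment {ee_hcpt:.1%}, MTX loading {dl_mtx:.2%}")
```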
In vitro drug release study
The in vitro drug release studies of the DDNDs were performed using the dialysis technique. The DDNDs were dispersed in PBS buffer solution (15 mL) and placed in a pre-swelled dialysis bag (MWCO = 3500 Da). The dialysis bag was then immersed in PBS (0.1 M, 150 mL, pH 7.4 or pH 5.5) and oscillated continuously in a shaker incubator (150 rpm) at 37 °C. All samples were assayed by high-performance liquid chromatography (HPLC).
Confocal imaging of cells
Confocal imaging of cells was performed using a Leica laser scanning confocal microscope with a wavelength of 405 nm as the excitation source. The fluorescent emission was collected from 500 to 600 nm. HeLa cells were incubated in six-well plates at a density of 1 × 10^6 cells per well. The cells were incubated at 37 °C and 5% CO2 for 24 h. The NDs/DDNDs/DDNDs + FA [(HCPT) = 60 µg/mL] were added to the cells for 4 h. After incubation, the cells were washed three times with PBS and fixed with 4% paraformaldehyde. Subsequently, the cells were washed a further three times with PBS before confocal imaging.
Cellular uptake measured by fluorescence measurement
HeLa cells were seeded in a 24-well plate (5 × 10^5/well), which was incubated at 37 °C for 24 h in a humidified atmosphere (5% CO2). The cells were then incubated with equivalent concentrations of DDNDs/NDs/DDNDs + FA. The drug-treated cells were incubated for 4 h at 37 °C and then washed three times with cold PBS to remove excess nanoparticles. The cells were then digested with trypsin (0.05%)/EDTA. The suspensions were centrifuged at 3000 rpm at 4 °C for 5 min. The supernatant was discarded, and the precipitate was washed with PBS to remove background fluorescence from the medium. After two cycles of centrifugation and washing, cells were resuspended in 2 mL PBS and disrupted by vigorous sonication. The HCPT taken up by the cells was released into the sonicated mixture, which was analyzed with fluorescence spectroscopy (excitation at 382 nm). Blank cells without drug nanocrystal treatment were measured to determine the cells' auto-fluorescence level as the control.
Cytotoxicity assays
The cytotoxicity of DDNDs was determined by MTT assay. Briefly, an adequate number of HeLa cells were seeded in quintuplicate in a 96-well plate and incubated for 24 h in the presence of different formulations [(HCPT) = 0.25, 0.50, 1.00, 2.00, 4.00, and 8.00 µg/mL; (MTX) = 0.008, 0.016, 0.032, 0.064, 0.128, 0.256 µg/mL]. Then, 20 µL of 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT) solution (5 mg/mL in PBS) was added to each well, and the plate was incubated at 37 °C for another 4 h. Afterwards, 150 µL of dimethylsulfoxide (DMSO) was added, and the plate was agitated in a water bath shaker at 37 °C for 30 min. The absorbance at 570 nm was measured using a Microplate Reader (model 680; Bio-Rad).
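Viability is then normalized from the raw 570 nm absorbances. The paper does not spell out its normalization, so the sketch below uses the usual untreated-control and blank-well convention; all absorbance values are invented.

```python
# Standard MTT viability normalization (assumed convention; the paper does not
# state its exact formula). All absorbance values are invented placeholders.
import numpy as np

a_blank = 0.05                                           # medium + MTT, no cells
a_control = np.array([1.20, 1.18, 1.22, 1.19, 1.21])     # untreated, quintuplicate
a_treated = np.array([0.64, 0.61, 0.66, 0.63, 0.62])     # one drug concentration

viability = (a_treated - a_blank) / (a_control.mean() - a_blank) * 100
print(f"viability = {viability.mean():.1f} +/- {viability.std(ddof=1):.1f} %")
```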
Biodistribution
For in vivo fluorescence imaging, DiR was encapsulated into the NDs and DDNDs. DiR-NDs and DiR-DDNDs [(HCPT) = 1 mg/mL] were intravenously administered into the HeLa tumor-bearing nude mice via tail veins at a HCPT-dose of 6 mg/kg. At 1 and 24 h post-injection, the mice were anesthetized and imaged with the Maestro in vivo imaging system (Cambridge Research & Instrumentation, Woburn, MA, USA). After 24 h, the mice were sacrificed, and the tumor and the major organs (liver, kidney, lung, spleen, and heart) were excised, followed by washing the surface with 0.9% NaCl for fluorescence intensity measurement.
In the preparation of DiR-DDNDs, 100 µL of DiR was added to the pure water used in the experiment. After sonication, the suspension was dialyzed against pure water for 10 h to remove excess DiR. The suspension was then lyophilized for 24 h to obtain the DiR-DDND powder. DiR-NDs were prepared via the same method. Before the biodistribution experiment, several batches of DiR-DDNDs and DiR-NDs were prepared and their HCPT drug loading was characterized. They were made up into solutions with the same concentration of HCPT, and the DiR fluorescence intensities of these solutions were characterized. Solutions whose DiR fluorescence differed by less than 5% were selected for the experiment.
Tumor inhibition in vivo
When the tumor volume of the HeLa tumor-bearing mice reached approximately 60 mm³, the mice were divided into four groups and treated with 0.9% NaCl aqueous solution, free HCPT and MTX, NDs + free MTX, or DDNDs [(HCPT) = 1 mg/mL] every 3 days at an HCPT dose of 4 mg/kg per mouse. The tumor volume and body weight were monitored every 3 days. The tumor volume was calculated by the following formula: tumor volume = 0.5 × length × width².
After 21 days, the mice were sacrificed, and the tumors were excised and weighed. The tumors were then fixed in 4% paraformaldehyde overnight at 4 °C, embedded in paraffin, sectioned (4 μm), stained with hematoxylin and eosin (H&E), and observed using a digital microscopy system.
The statistical significance of treatment outcomes was assessed using Student's t test (two-tailed); P < 0.05 was considered statistically significant in all analyses (95% confidence level).
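As an illustration of this test, the sketch below computes tumor volumes with the formula above for two hypothetical groups and runs a two-tailed Student's t-test; the length and width measurements are invented.

```python
# Two-tailed Student's t-test on tumor volumes (0.5 * length * width^2).
# The length/width measurements below are invented placeholders.
import numpy as np
from scipy import stats

def tumor_volume(length_mm, width_mm):
    return 0.5 * length_mm * width_mm**2

control = tumor_volume(np.array([12.0, 13.5, 11.8, 12.9]),
                       np.array([9.0, 9.8, 8.7, 9.4]))
treated = tumor_volume(np.array([7.1, 6.4, 7.8, 6.9]),
                       np.array([5.2, 4.9, 5.6, 5.1]))

t_stat, p_value = stats.ttest_ind(control, treated)  # two-tailed by default
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```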
First, we conjugated MTX to PEG-b-PLGA by an esterification reaction between the carboxylic end group of MTX and the hydroxyl group of PEG-b-PLGA (Fig. 1A). The structure of the conjugate (PPMTX) was confirmed by Fourier transform infrared spectroscopy (FT-IR). As shown in Fig. 1B, a new peak at 1630 cm−1 appeared in the IR spectrum of PPMTX, corresponding to the C=O stretching vibration of the new ester bond. These results indicated that MTX was successfully conjugated to the hydroxyl group of PEG-b-PLGA via an ester bond. In order to determine the percentage of MTX in the conjugate, a standard curve was set up by ultraviolet spectrophotometry. The fitted linear regression equation for the calibration curve was as follows (Additional file 1: Figure S1), and the percentage of MTX was calculated to be 5.1 ± 0.5%.
Fig. 1 A Synthetic route and characterization of PPMTX. B FT-IR spectra of (a) MTX, (b) PEG-b-PLGA and (c) PPMTX
$$y = 0.0474x - 0.00862, \qquad R^{2} = 0.9999.$$
(y: the absorbance intensity of UV–Vis; x: the MTX concentration, µg/mL; the detection limits: 1.0–15.0 µg/mL, solvent: DMF).
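A calibration line of this kind is an ordinary least-squares fit of absorbance against concentration; a minimal sketch with an invented standard series follows (the paper's raw calibration data are in Additional file 1).

```python
# Least-squares fit of a UV calibration line, A = m*c + b.
# The standard-series readings below are invented placeholders on the stated line
# plus small deterministic scatter.
import numpy as np

conc = np.array([1.0, 3.0, 5.0, 8.0, 12.0, 15.0])        # ug/mL, inside stated limits
scatter = np.array([0.0006, -0.0004, 0.0005, -0.0003, 0.0004, -0.0005])
absorb = 0.0474 * conc - 0.00862 + scatter

m, b = np.polyfit(conc, absorb, deg=1)                   # slope and intercept
pred = m * conc + b
r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)
print(f"A = {m:.4f} c + {b:.5f}, R^2 = {r2:.5f}")

# Inverting the line gives unknown concentrations from measured absorbance:
print("c(A=0.20) =", (0.20 - b) / m, "ug/mL")
```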
Preparation and characterizations of DDNDs
The DDNDs were prepared by an ultrasound-assisted emulsion crystallization method. HCPT and PPMTX were co-dissolved in acetone, forming a hybrid solution of drug and excipient. When the hybrid solution was injected into deionized water under sonication, a sudden change of solvent environment occurred, inducing the nucleation of HCPT nanocrystals and the accompanying coprecipitation of PPMTX onto the growing HCPT nanocrystals [38, 39].
Figure 2a, b shows the needle-shaped morphology of the DDNDs, with an average length of about 1 μm and a width of about 80 nm. The DLS measurement shows that the DDNDs possessed a size of 102.6 nm (Fig. 2c) and a zeta potential of −19.3 mV (Fig. 2d). Since only HCPT is fluorescent in this drug delivery system, fluorescence spectrophotometry was employed to determine the drug loading of HCPT in the DDNDs. The calibration curve was established (Additional file 1: Figure S2) and the fitted linear regression equation is given below.
Fig. 2 The SEM images (a, b), the size distribution (c), and the zeta potential (d) of the DDNDs
$$y = 394{,}123\,x + 10{,}465, \qquad R^{2} = 0.99999.$$
(y: the fluorescence intensity; x: the HCPT concentration, µg/mL; the detection limits: 1.0–15.0 µg/mL, solvent: DMF).
It follows that the drug loading content of HCPT was 62.56% and the encapsulation efficiency was 92.43%. The drug loading content of MTX was calculated to be 2.03%, according to the percentage of MTX in PPMTX.
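For readers reproducing this bookkeeping, the chain from a measured fluorescence intensity back to the HCPT loading can be sketched as follows; the intensity, dissolution volume and sample mass are invented placeholders, and only the calibration line is taken from above.

```python
# From fluorescence intensity to HCPT loading, using the stated calibration
# I = 394123*c + 10465. Intensity, dilution and sample mass are invented.
intensity = 2.5e6                        # measured fluorescence (placeholder)
c_ugml = (intensity - 10465) / 394123    # ug/mL, within the 1.0-15.0 ug/mL limits
v_ml, m_sample_ug = 10.0, 100.0          # dissolution volume and weighed DDNDs (placeholders)

loading = c_ugml * v_ml / m_sample_ug    # mass of HCPT over mass of DDNDs, Eq. (1)
print(f"c = {c_ugml:.2f} ug/mL -> HCPT loading = {loading:.1%}")
```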
The in vitro release studies of the DDNDs were performed using a dialysis technique, alongside free HCPT/MTX powders. All samples were assayed by HPLC. The release profiles are shown in Fig. 3. The profile of the free HCPT powder showed 30% drug release at the first sampling time of 1 h and nearly 100% by 8 h (Fig. 3A). The HCPT release profile of the DDNDs consists of two components: a slight burst release of about 40% in the first 8 h, followed by a distinctly prolonged release over the next 380 h. This was probably because the polymeric shell of PPMTX limited the release of the drug in the core. The release of the free MTX powder was even faster than that of the HCPT powder, taking only 4 h to reach 100% drug release. In contrast, the MTX release from the DDNDs was markedly pH-independent and prolonged, which is most probably attributable to the ester bond between MTX and PEG-b-PLGA. Although a small burst release still existed, the prolonged drug release was a large improvement over the free drug. This is comparable to formulations synthesized by other approaches [39–41], and could greatly promote the application of the DDNDs as a sustained drug delivery system.
Fig. 3 The drug release profiles of free drug and DDNDs under 37 °C and 100 rpm. A HCPT: (a) free HCPT (pH 7.4), (b) DDNDs (pH 7.4). B MTX: (a) free MTX (pH 7.4), (b) DDNDs (pH 7.4), (c) DDNDs (pH 5.5)
To evaluate the efficiency of cellular uptake by HeLa cells, the DDNDs and the HCPT-loaded nanoneedles without MTX modification (NDs, drug loading = 63.6%, dDLS = 121.7 nm) were incubated with HeLa cells for 4 h at 37 °C (the NDs were prepared from HCPT and PEG-b-PLGA via the same method as the DDNDs). As shown in Fig. 4A, E, the fluorescence emission of HCPT detected from the cells exposed to the DDNDs was much more intense than that from the cells exposed to NDs after 4 h of incubation. This illustrates that the MTX on the surface of the particles could greatly enhance cellular uptake, probably owing to its specific affinity for FA receptors. To further address the specificity of the MTX-functionalized nanoparticles for FA receptors, a competition assay was performed. HeLa cells were pretreated with an excess of free FA (0.50 mg/mL) for 30 min, and then incubated with the DDNDs for 4 h. As shown in Fig. 4I, the fluorescence emission detected from the group with excess FA molecules became much weaker than that without FA molecules.
Fig. 4 The CLSM images and the fluorescence measurements. HeLa cells incubated with DDNDs (A), NDs (E) and DDNDs + folate (I) [(HCPT) = 60 µg/mL] for 4 h at 37 °C. All images were taken under identical instrumental conditions and are presented at the same intensity scale. All scale bars are 25 μm. B–D, F–H and J–L are enlarged views of the red frames in A, E and I, respectively. M Fluorescence measurements of HeLa cells incubated with DDNDs, NDs, and NDs + FA over a 4 h incubation period at 37 °C, P < 0.05
When HCPT enters the cells, it first aggregates in the cytoplasm. This is why the HCPT signals mainly came from the cytoplasm in the ND and ND + FA groups (Fig. 4E, I). Nevertheless, there were still weak signals from the nuclei (Fig. 4F, J), which illustrates that HCPT molecules can enter the nuclei as the HCPT concentration increases. Hence, at extremely high HCPT concentrations in the cytoplasm, the nuclear HCPT concentration rises to a relatively high level, well above the detection limit of CLSM. This is why the HCPT signals from the nuclei were also very intense (Fig. 4A, B); this phenomenon can also be seen in other literature [42, 43]. Meanwhile, the difference in signal intensity between the cytoplasm and the nuclei can still be seen (Fig. 4B).
The quantification of the fluorescence in the cells also illustrated that DDNDs entered HeLa cells more efficiently than NDs, and that this uptake was inhibited by excess FA. This is because the two particles entered the cells via different routes. The NDs were taken into the cells via bulk-phase endocytosis, while the DDNDs can be internalized via receptor-mediated endocytosis as well as bulk-phase endocytosis. The MTX on the surface of the DDNDs can latch onto FA receptors in the cytomembrane of the HeLa cells and thus enter the cells more efficiently. However, when excess FA molecules were added, they bound the FA receptors owing to the enhanced affinity between the FA molecules and the FA receptors. In that case, the DDNDs could not enter the cells via receptor-mediated endocytosis, but they could still be taken up via bulk-phase endocytosis. This is why Fig. 4C shows a weak fluorescence.
To further investigate the possibility of utilizing the DDNDs for drug delivery, we tested the killing ability of the DDNDs against cancer cells. The cytotoxicity of DDNDs was evaluated using the MTT assay with HeLa cells. The NDs, PPMTX, and the mixture of NDs and PPMTX containing equivalent concentrations of HCPT and/or MTX were used as controls. The concentrations of HCPT were 0.25, 0.50, 1.00, 2.00, 4.00, and 8.00 µg/mL, and the corresponding concentrations of MTX were 0.008, 0.016, 0.032, 0.064, 0.128, and 0.256 µg/mL.
As shown in Fig. 5a, PPMTX tended to be nontoxic, mainly because the concentration of MTX was far below the effective concentration. As for the NDs, their cytotoxicity was much higher than that of PPMTX (Fig. 5b). The theoretical cytotoxicity of the mixture of NDs and PPMTX was calculated by adding the percentages of cells killed by the NDs and PPMTX separately. The experimental cytotoxicity of the mixture of NDs and PPMTX was also tested and was much higher than this theoretical value, because of the synergistic effect between the two drugs: MTX can bind dihydrofolate reductase to disrupt cellular FA metabolism and thereby kill cancer cells, while HCPT can inhibit mitosis by acting on DNA topoisomerase I. Hence, the combination of the two drugs kills the cancer cells through different routes and acts synergistically. Moreover, the cytotoxicity of the DDNDs was even much higher than that of the mixture of NDs and PPMTX. This is probably due to the targeting property of the MTX on the surface of the DDNDs, which helps the particles enter the cells and kill them. Thus the DDNDs presented a surprisingly good killing ability against the cancer cells, in accordance with the CLSM result (Fig. 4). These results confirm that MTX on the surface of the DDNDs can increase the cellular uptake of the particles, and thus their killing ability against cancer cells, by binding FA receptors, in accordance with the well-established study [44].
Fig. 5 In vitro cell viability of HeLa cells incubated with free MTX (a), NDs (b), NDs + MTX (d) and DDNDs (e) at different concentrations [(HCPT) = 0.125, 0.25, 0.5, 1.0, 2.0, and 4.0 µg/mL; (MTX) = 0.004, 0.008, 0.016, 0.032, 0.064, and 0.128 μg/mL] for 24 h. (c) The theoretical value of free MTX (a) and NDs (b). Data are presented as mean ± SD (n = 6). *P < 0.05
To evaluate the tumor-targeting ability of the DDNDs, DiR was used as a near-infrared fluorescence probe and encapsulated into NDs and DDNDs at an equivalent DiR concentration. DiR-NDs and DiR-DDNDs were injected intravenously into mice bearing tumors derived from human cervical carcinoma HeLa cells, and their in vivo biodistribution was investigated.
As depicted in Fig. 6A, while no fluorescent signal was detected at the tumor site in the DiR-ND group, an obvious fluorescent signal was visualized at the tumor site of the DiR-DDND group. Although the total fluorescence counts decreased with time, the intensity of the signal at the tumor site was enhanced from 1 to 24 h, indicating that the DDNDs were accumulating in the tumor during this time. After 24 h, the mice were sacrificed and the tumor tissues as well as the normal tissues were isolated for analysis (Fig. 6B). The fluorescence intensity in the tumor tissue of DiR-DDND-treated mice was significantly higher than in the other group. This validated that the introduction of MTX offered the nanoneedles an excellent tumor-targeting efficacy, leading to a highly efficient cancer treatment.
Fig. 6 A In vivo DiR fluorescence imaging of HeLa tumor-bearing BALB/c nude mice after intravenous injection of the DiR-NDs (a) or DiR-DDNDs (b) at 1.0 and 24 h post-injection. Circles indicate the sites of tumors. B Ex vivo fluorescence intensity of tumors and normal organs and tissues harvested from HeLa tumor-bearing BALB/c nude mice intravenously treated with the DiR-NDs (a) or DiR-DDNDs (b) at 24 h post-injection. Data are presented as mean ± SD (n = 3). *P < 0.05
To evaluate the in vivo antitumor effects, we generated HeLa tumor xenografts in Kunming mice and assessed tumor growth following the intravenous administration of 0.9% NaCl, free HCPT + MTX, NDs + free MTX, or DDNDs with the same concentrations of HCPT and MTX. Compared to the mice treated with 0.9% NaCl as the control, the growth rate of the tumors in mice receiving free HCPT + MTX or NDs + MTX decreased gradually (Fig. 7A, B), indicating significantly effective tumor growth inhibition. Of note, the DDNDs led to the most pronounced inhibition of tumor growth. At the end of the experiment, the tumors were excised and weighed. As shown in Fig. 7C, the DDNDs had a superior therapeutic efficacy compared with the other groups (P < 0.05). Additional evidence of the enhanced anticancer effect of the DDNDs is shown in the histologic images (Fig. 7D). Compared to the control group, several necrotic regions could be observed in the tumor sections of the free MTX and HCPT group. More notably, the DDND group displayed the most necrosis, indicating a more outstanding anticancer efficacy than the other groups. These results suggest that the DDNDs were significantly more effective in inducing cell death and reducing cell proliferation than the combination of the individual drugs or the NDs + MTX group. This may be owing to the synergistic effect of the two drugs and the targeting effect of the MTX on the surface of the DDNDs. For any drug delivery system, the systemic toxicity usually encountered in free-HCPT-mediated treatment should be considered to ensure safety and effectiveness. In this work, administration of the free HCPT + MTX resulted in listlessness and severe body weight loss in the mice (Fig. 7C), indicative of the undesirable side effects of chemotherapy. On the contrary, no obvious side effects were seen in the mice treated with the DDNDs. Overall, the dual-drug nanoneedles, with their superior anticancer effects and lower toxicity, could greatly improve both therapeutic efficacy and quality of life.
Fig. 7 Anticancer effects of different formulations. A Volume change of tumors in mice during the treatment. B Weights of HeLa tumors after treatment with different (nano)formulations. C Weight change of the tumor-bearing mice during the treatment. D Histological sections of the tumors of the mice after the treatment. (a) 0.9% NaCl aqueous solution, (b) free HCPT and MTX, (c) NDs + free MTX, and (d) DDNDs. All HCPT-MTX formulations used the same concentrations of HCPT and MTX in mice bearing HeLa tumors. *P < 0.05
This study prepared nanoneedles loaded with both MTX and HCPT for highly efficient combination chemotherapy, with high drug loading, targeting property and imaging capability. The in vitro drug release profile revealed that the DDNDs showed a sustained and prolonged release. The CLSM images demonstrated the more efficient cellular internalization of DDNDs compared with NDs. The MTT experiment indicated that the DDNDs showed a much higher cytotoxicity than the individual drugs, which illustrates the good synergistic effect of the dual drugs. This work opens a door to the design of new dosage forms of dual-drug-loaded nanoparticles.
Xiangrui Yang and Shichao Wu contributed equally to this work and should be considered as co-first authors
MTX: methotrexate
HCPT: 10-hydroxycamptothecine
PPMTX: PEG-b-PLGA-MTX
NDs: HCPT-loaded nanoneedles
DDNDs: MTX- and HCPT-loaded nanoneedles
XY and SW conceived and carried out the experiments, analysed the data and wrote the paper. SW, ZH and XJ designed the study, supervised the project, and analysed the data. WX performed the experiments in the revised manuscript and made a great contribution to the response to the reviewers' comments. AC and LY assisted in the synthesis and characterization of the NPs. All authors read and approved the final manuscript.
All data generated or analyzed during this study are included in this published article.
All procedures were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Animal Ethical and Welfare Committee of Xiamen University.
This work was supported by the National Natural Science Foundation of China (Nos. 21572027 and 21372267).
Additional file 1: Figure S1. Standard curve of MTX in DMF via ultraviolet spectroscopy. Figure S2. Standard curve of HCPT in DMF via fluorescence spectroscopy.
Department of Basic Medical Science, Medical College, Xiamen University, Xiamen, 361102, China
Research Center of Biomedical Engineering, College of Materials, Xiamen University, Xiamen, 361005, China
Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
Noguchi K, Katayama K, Mitsuhashi J, Sugimoto Y. Functions of the breast cancer resistance protein (BCRP/ABCG2) in chemotherapy. Adv Drug Deliv Rev. 2009;61:26–33.
Gottesman MM. Mechanisms of cancer drug resistance. Annu Rev Med. 2002;53:615–27.
DeVita VT Jr, Young RC, Canellos GP. Combination versus single agent chemotherapy: a review of the basis for selection of drug treatment of cancer. Cancer. 1975;35:98–110.
Al-Lazikani B, Banerji U, Workman P. Combinatorial drug therapy for cancer in the post-genomic era. Nat Biotechnol. 2012;30:679–91.
Bahadur RKC, Xu P. Multicompartment intracellular self-expanding nanogel for targeted delivery of drug cocktail. Adv Mater. 2012;24:6479–83.
Lehar J, Krueger AS, Avery W, Heilbut AM, Johansen LM, Price ER, Rickles RJ, Short GF III, Staunton JE, Jin X, et al. Synergistic drug combinations tend to improve therapeutically relevant selectivity. Nat Biotechnol. 2009;27:659–66.
Chabner BA, Roberts TG. Timeline—chemotherapy and the war on cancer. Nat Rev Cancer. 2005;5:65–72.
McDaid HM, Johnston PG. Synergistic interaction between paclitaxel and 8-chloro-adenosine 3′,5′-monophosphate in human ovarian carcinoma cell lines. Clin Cancer Res. 1999;5:215–20.
Calabro F, Lorusso V, Rosati G, Manzione L, Frassineti L, Sava T, Di Paula ED, Alonso S, Sternberg CN. Gemcitabine and paclitaxel every 2 weeks in patients with previously untreated urothelial carcinoma. Cancer. 2009;115:2652–9.
Anderson RGW, Kamen BA, Rothberg KG, Lacey SW. Potocytosis—sequestration and transport of small molecules by caveolae. Science. 1992;255:410–1.
Weitman SD, Lark RH, Coney LR, Fort DW, Frasca V, Zurawski VR, Kamen BA. Distribution of the folate receptor GP38 in normal and malignant-cell lines and tissues. Cancer Res. 1992;52:3396–401.
Rosenholm JM, Peuhu E, Eriksson JE, Sahlgren C, Linden M. Targeted intracellular delivery of hydrophobic agents using mesoporous hybrid silica nanoparticles as carrier systems. Nano Lett. 2009;9:3308–11.
Jia M, Li Y, Yang X, Huang Y, Wu H, Huang Y, Lin J, Li Y, Hou Z, Zhang Q. Development of both methotrexate and mitomycin C loaded PEGylated chitosan nanoparticles for targeted drug codelivery and synergistic anticancer effect. ACS Appl Mater Interfaces. 2014;6:11413–23.
Duthie SJ. Folic-acid-mediated inhibition of human colon-cancer cell growth. Nutrition. 2001;17:736–7.
Ma L, Kohli M, Smith A. Nanoparticles for combination drug therapy. ACS Nano. 2013;7:9518–25.
LoRusso PM, Canetta R, Wagner JA, Balogh EP, Nass SJ, Boerner SA, Hohneker J. Accelerating cancer therapy development: the importance of combination strategies and collaboration. Summary of an institute of medicine workshop. Clin Cancer Res. 2012;18:6101–9.
Martello LA, McDaid HM, Regl DL, Yang CPH, Meng DF, Pettus TRR, Kaufman MD, Arimoto H, Danishefsky SJ, Smith AB, Horwitz SB. Taxol and discodermolide represent a synergistic drug combination in human carcinoma cell lines. Clin Cancer Res. 2000;6:1978–87.
Ahmed F, Pakunlu RI, Brannan A, Bates F, Minko T, Discher DE. Biodegradable polymersomes loaded with both paclitaxel and doxorubicin permeate and shrink tumors, inducing apoptosis in proportion to accumulated drug. J Control Release. 2006;116:150–8.
Liao L, Liu J, Dreaden EC, Morton SW, Shopsowitz KE, Hammond PT, Johnson JA. A convergent synthetic platform for single-nanoparticle combination cancer therapy: ratiometric loading and controlled release of cisplatin, doxorubicin, and camptothecin. J Am Chem Soc. 2014;136:5896–9.
Aryal S, Hu C-MJ, Zhang L. Combinatorial drug conjugation enables nanoparticle dual-drug delivery. Small. 2010;6:1442–8.
Kolishetti N, Dhar S, Valencia PM, Lin LQ, Karnik R, Lippard SJ, Langer R, Farokhzad OC. Engineering of self-assembled nanoparticle platform for precisely controlled combination drug therapy. Proc Natl Acad Sci USA. 2010;107:17939–44.
Sengupta S, Eavarone D, Capila I, Zhao GL, Watson N, Kiziltepe T, Sasisekharan R. Temporal targeting of tumour cells and neovasculature with a nanoscale delivery system. Nature. 2005;436:568–72.
Wang Z, Ho PC. A nanocapsular combinatorial sequential drug delivery system for antiangiogenesis and anticancer activities. Biomaterials. 2010;31:7115–23.
Wang Y, Gao S, Ye W-H, Yoon HS, Yang Y-Y. Co-delivery of drugs and DNA from cationic core–shell nanoparticles self-assembled from a biodegradable copolymer. Nat Mater. 2006;5:791–6.
Li C, Wallace S. Polymer–drug conjugates: recent development in clinical oncology. Adv Drug Deliv Rev. 2008;60:886–98.
Lammers T, Subr V, Ulbrich K, Peschke P, Huber PE, Hennink WE, Storm G. Simultaneous delivery of doxorubicin and gemcitabine to tumors in vivo using prototypic polymeric drug carriers. Biomaterials. 2009;30:3466–75.
Wang YS, Chen HL, Liu YY, Wu J, Zhou P, Wang Y, Li RS, Yang XY, Zhang N. pH-sensitive pullulan-based nanoparticle carrier of methotrexate and combretastatin A4 for the combination therapy against hepatocellular carcinoma. Biomaterials. 2013;34:7181–90.
Chen AM, Zhang M, Wei D, Stueber D, Taratula O, Minko T, He H. Co-delivery of doxorubicin and Bcl-2 siRNA by mesoporous silica nanoparticles enhances the efficacy of chemotherapy in multidrug-resistant cancer cells. Small. 2009;5:2673–7.
Dilnawaz F, Singh A, Mohanty C, Sahoo SK. Dual drug loaded superparamagnetic iron oxide nanoparticles for targeted cancer therapy. Biomaterials. 2010;31:3694–706.
Zhou MJ, Zhang XJ, Yang YL, Liu Z, Tian BS, Jie JS, Zhang XH. Carrier-free functionalized multidrug nanorods for synergistic cancer therapy. Biomaterials. 2013;34:8960–7.
Champion JA, Mitragotri S. Role of target geometry in phagocytosis. Proc Natl Acad Sci USA. 2006;103:4930–4.
Hu X, Hu J, Tian J, Ge Z, Zhang G, Luo K, Liu S. Polyprodrug amphiphiles: hierarchical assemblies for shape-regulated cellular internalization, trafficking, and drug delivery. J Am Chem Soc. 2013;135:17617–29.
Geng Y, Dalhaimer P, Cai SS, Tsai R, Tewari M, Minko T, Discher DE. Shape effects of filaments versus spherical particles in flow and drug delivery. Nat Nanotechnol. 2007;2:249–55.
Champion JA, Katare YK, Mitragotri S. Particle shape: a new design parameter for micro- and nanoscale drug delivery carriers. J Control Release. 2007;121:3–9.
Petros RA, DeSimone JM. Strategies in the design of nanoparticles for therapeutic applications. Nat Rev Drug Discov. 2010;9:615–27.
Mitragotri S, Lahann J. Physical approaches to biomaterial design. Nat Mater. 2009;8:15–23.
Wu S, Yang X, Li Y, Wu H, Huang Y, Xie L, Zhang Y, Hou Z, Liu X. Preparation of HCPT-loaded nanoneedles with pointed ends for highly efficient cancer chemotherapy. Nanoscale Res Lett. 2016;11:294.
Yang X, Wu S, Wang Y, Li Y, Chang D, Luo Y, Ye S, Hou Z. Evaluation of self-assembled HCPT-loaded PEG-b-PLA nanoparticles by comparing with HCPT-loaded PLA nanoparticles. Nanoscale Res Lett. 2014;9:687.
Yang X, Wu S, Li Y, Huang Y, Lin J, Chang D, Ye S, Xie L, Jiang Y, Hou Z. Integration of an anti-tumor drug into nanocrystalline assemblies for sustained drug release. Chem Sci. 2015;6:1650–4.
Pang JM, Luan YX, Li FF, Cai XQ, Du JM, Li ZH. Ibuprofen-loaded poly(lactic-co-glycolic acid) films for controlled drug release. Int J Nanomed. 2011;6:659–65.
Li Y, Wu HJ, Yang XR, Jia MM, Li YX, Huang Y, Lin JY, Wu SC, Hou ZQ. Mitomycin C-soybean phosphatidylcholine complex-loaded self-assembled PEG-lipid-PLA hybrid nanoparticles for targeted drug delivery and dual-controlled drug release. Mol Pharm. 2014;11:2915–27.
Li W, Yang Y, Wang C, Liu Z, Zhang X, An F, Diao X, Hao X, Zhang X. Carrier-free, functionalized drug nanoparticles for targeted drug delivery. Chem Commun. 2012;48:8120–2.
Li W, Zhang X, Hao X, Jie J, Tian B, Zhang X. Shape design of high drug payload nanoparticles for more effective cancer therapy. Chem Commun. 2013;49:10989–91.
Rosenholm JM, Peuhu E, Bate-Eya LT, Eriksson JE, Sahlgren C, Linden M. Cancer-cell-specific induction of apoptosis using mesoporous silica nanoparticles as drug-delivery vectors. Small. 2010;6:1234–41.
Rutherford's Alpha Ray Scattering Experiment
I understood the result of this experiment, that the nucleus is nearly empty and things like that. But what I have in mind is this: when an alpha particle comes near the thin gold foil, why couldn't it capture electrons from the gold and convert into very stable helium gas? That could also happen, right? All the textbooks only mention the deflection of alpha particles. Why couldn't the above happen, or is there something wrong with my question?
atoms radioactivity
Preetham Krishna
Your assumption is correct. For alpha particles, the main contribution to the total stopping power can be attributed to the electronic stopping power, i.e. inelastic collisions with electrons. Only a small contribution comes from the nuclear stopping power, i.e. elastic Coulomb collisions in which recoil energy is imparted to atoms.
The stopping power of a material is defined as the average energy loss per path length that the alpha particle suffers when travelling through the material.
According to the International Commission on Radiation Units and Measurements (ICRU) Report 49 Stopping Powers and Ranges for Protons and Alpha Particles (1993), the contributions to the total stopping power for alpha particles in gold are as follows.
Typical low-energy alpha particles with $E=1\ \mathrm{MeV}$:
Electronic stopping power: $3.887\times10^2\ \mathrm{MeV\ cm^2\ g^{-1}}$
Nuclear stopping power: $8.394\times10^{-1}\ \mathrm{MeV\ cm^2\ g^{-1}}$
Typical high-energy alpha particles with $E=10\ \mathrm{MeV}$:
Since the energy that is required for excitation or ionization is only a few $\mathrm{eV}$, an alpha particle with an initial energy of a few $\mathrm{MeV}$ can liberate many electrons on its path. When the alpha particle has sufficiently slowed down due to the stopping interactions, it can finally catch two electrons to form a neutral helium atom. Hence, you can release and detect helium when you dissolve old minerals of uranium or thorium.
In the Rutherford experiment, however, the used gold foil was very thin ($<1\ \mathrm{\mu m}$; note that gold is very malleable and can be beaten thin enough to become semi-transparent). Even for low-energy alpha particles of only $E=1\ \mathrm{MeV}$, the range in gold is still $r/\rho=3.974\times10^{-3}\ \mathrm{g\ cm^{-2}}$, which can be calculated using the continuous-slowing-down approximation (CSDA), i.e. by integrating the reciprocal of the total stopping power with respect to energy. Considering a density for gold of $\rho=19.3\ \mathrm{g\ cm^{-3}}$, the range $r$ can be calculated as
$$\begin{align} r&=\frac{3.974\times10^{-3}\ \mathrm{g\ cm^{-2}}}{19.3\ \mathrm{g\ cm^{-3}}}\\[6pt] &=2.06\times10^{-4}\ \mathrm{cm}\\[6pt] &=2.06\ \mathrm{\mu m} \end{align}$$
I.e., the experiment was deliberately designed so that the alpha particles were not completely stopped and converted to neutral helium atoms within the gold foil.
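The CSDA integral mentioned above is straightforward to evaluate numerically; the sketch below integrates the reciprocal of a tabulated total stopping power on a coarse grid. The table values are rough placeholders, not the ICRU 49 data.

```python
# CSDA range: r/rho = integral over E of dE' / S(E'), with S the total mass
# stopping power. The stopping-power table is a rough placeholder, not ICRU 49.
import numpy as np

E = np.array([0.1, 0.25, 0.5, 0.75, 1.0])            # alpha energy, MeV
S = np.array([1.2e2, 2.1e2, 3.0e2, 3.5e2, 3.9e2])    # MeV cm^2 g^-1, toy values for gold

# Trapezoid rule over the grid (crude: the integral should really start at E=0).
r_mass = float(np.sum(0.5 * (1.0 / S[1:] + 1.0 / S[:-1]) * np.diff(E)))  # g cm^-2

rho_gold = 19.3                                       # g cm^-3
print(f"CSDA range ~ {r_mass / rho_gold * 1e4:.2f} um of gold")
```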
The atom, not the nucleus, is mostly empty. That's a pretty big conceptual difference.
Now, there's no reason to believe that the alpha particles haven't picked up an electron on their way to the gold foil, before they even strike the foil. The detector on the other side just detects high-energy particles, so we can't actually tell how they're charged. The deflection of the alpha particles (very slight at best) would still be roughly the same.
Zhe
Tangent Graphs and Asymptotes
The tangent function $\tan x = \frac{\sin x}{\cos x}$ is a fraction, so it is undefined wherever its denominator, the cosine, is zero. At $x = 0$, $\tan x = 0/1 = 0$; at $x = 90$ degrees, $\sin x = 1$ and $\cos x = 0$, so the ratio is undefined. Therefore the graph of $y = \tan x$ has vertical asymptotes at exactly those values of $x$ where $\cos x = 0$, the odd multiples of $\pi/2$: $x = \pi/2 + n\pi$, where $n$ is an integer. These asymptotes are vertical lines that occur regularly, each of them $\pi$ radians (180 degrees) apart, matching the period of the function. The domain of the tangent function is the set of all real numbers other than these values, and the range is the set of all real numbers. A cycle of the tangent function has two asymptotes and a zero point halfway in between, and the graph hugs the asymptotes without ever crossing them.

To graph $y = \tan x$ by hand, make a t-table of $\theta$ against $\tan\theta$ for $-\pi/2 < \theta < \pi/2$; the horizontal axis of the graph represents the angle, usually written as $\theta$, and the vertical axis the tangent of that angle. Evaluating at $-\pi/4$, $0$ and $\pi/4$ gives the three key points $(-\pi/4, -1)$, $(0, 0)$ and $(\pi/4, 1)$, together with the two vertical asymptotes $x = -\pi/2$ and $x = \pi/2$. Plot the key points, draw up and out from the x-intercept toward the asymptote on the right, go back to the x-intercept and draw down and out to the asymptote on the left, and then repeat the resulting branch between every consecutive pair of asymptotes: since the period of tangent is $\pi$, the graph repeats. Tangent has no amplitude in the sense that sine and cosine do, although with tangent graphs it is often necessary to determine a vertical stretch using a point on the graph.

The same ideas cover the general tangent function $f(x) = A\tan(Bx - C) + D$. The vertical stretch $A$ and vertical shift $D$ do not affect the location of the asymptotes or the x-intercepts. Define $Bx - C$ to be the angle $\theta$: the asymptotes sit where $Bx - C = \pi/2 + n\pi$, the period is $\pi/B$ (the horizontal shrink moves the asymptotes closer together, since they remain at odd multiples of $\pi/2$ in $\theta$), and dividing the period into quarters gives the three key points for graphing. A typical exercise, "locate the vertical asymptotes and sketch two periods of the function", comes down to exactly this computation.

The reciprocal functions inherit their asymptotes in the same way. The secant is defined by the reciprocal identity $\sec x = \frac{1}{\cos x}$, so wherever cosine is zero, secant has a vertical asymptote. The vertical asymptotes of $y = \csc x$ are at $x = n\pi$, where $n$ is an integer; between them the cosecant curve comes down to the tops of the sine arches and up from their bottoms. The cotangent is the reciprocal of the tangent, an odd function defined by $\cot x = \frac{1}{\tan x}$; since flipping a fraction (that is, taking its reciprocal) does not change its sign, the cotangent has a vertical asymptote wherever the tangent is zero, and a zero wherever the tangent has a vertical asymptote. The tangent and cotangent graphs both have range $(-\infty, \infty)$ and period $\pi$, and both are odd functions; where the graph of the tangent function increases, the graph of the cotangent function decreases, and cotangent is decreasing at each point of its domain.

Three closing remarks. First, the same denominator test works for any rational function: factor the numerator and denominator, cancel common factors, and the vertical asymptotes occur at the zeros of the remaining denominator factors (provided the numerator is not also zero there); online calculators that find horizontal, vertical, and slant asymptotes automate this. Second, a vertical tangent line is not the same thing as a vertical asymptote: the function $y=\sqrt[3]{x}$ has the vertical tangent $x=0$ even though its slope $dy/dx$ is undefined there, and an even vertical asymptote of the derivative indicates a vertical tangent line on the graph of the function, not an extreme value. Third, when plotting $y = \tan x$ with software (TikZ, for example), the values diverge near each asymptote; if the plotted domain is too large, the plotter will try to fit the diverging branches on the page, producing spurious near-vertical lines, so restrict the plotted range around each asymptote.
For any curve, an asymptote is a line such that the distance between the curve and the line approaches to zero as they approach infinity. The cosine graph crosses the x-axis on the interval. In the diagram above, drag the point A around in a . The vertical asymptotes occur at the NPV's: = 2 + n,n Z. The graph of tangent is periodic, meaning that it repeats itself indefinitely. To graph y= Atan[B(x C)] + D: 1. Summary and Main Ideas. Free Maths Tutorials and Problems. Practice. It is possible for a graph to have a vertical tangent. Since secant is the inverse of cosine the graphs are very closely related. Solve the equation cscx = 1 in the interval 2 x 5/2. Tangent graphs. When the tangent is zero, now the cotangent will have an asymptote. Determine the period =B, the phase shift C, and the vertical translation D. 2.
What is the tan graph? The six trigonometric functions are sine, cosine, tangent, cotangent, secant and cosecant. It has the same period as its reciprocal, the tangent function. Learn how to graph a tangent function. Let f be a twice-differentiable function defined on the interval -1.2 less than or equal to x less than or equal to 3.2 with . Sketching Cosine Graphs.
Polynomials. Let's graph 2Tan x = y first 1 Graphing Sine, Cosine, and Tangent Functions 14 Unit 2: Functions, Equations, & Graphs of Degree One 5 Modeling with Trigonometric Functions 14 Then sketch the graph using radians Then sketch the graph using radians. Since, tan ( x) = sin ( x) cos ( x) the tangent function is undefined when cos ( x) = 0 . The equations of the tangent's asymptotes are all of the form. Set the inner quantity of equal to zero to determine the shift of the asymptote. The parent graph has: an x-intercept at 0 a vertical asymptote at pi/2 a vertical asymptote at -pi/2 Videos . . to make sure you get your asymptotes and x-intercepts in the right places when graphing the tangent function. These functions in trignometry are the elementary functions that demonstrate the relationship between the sides and the angles of a right-angled triangle. Similarly, the tangent and sine functions each have zeros at integer multiples of because tan ( x) = 0 when sin ( x) = 0 . The cotangent function has period and vertical asymptotes at 0, , 2 ,.. PLAY.
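As a small worked example of the rule above (my own sketch, not from any of the quoted tutorials; it assumes $B > 0$), the vertical asymptotes of $y = A\tan(Bx + C) + D$ on an interval are found by solving $Bx + C = \pi/2 + n\pi$ for integer $n$; $A$ and $D$ do not move them.

import math

def tan_asymptotes(B, C, x_min, x_max):
    # Vertical asymptotes of y = A*tan(B*x + C) + D on [x_min, x_max].
    # They occur where B*x + C = pi/2 + n*pi, i.e.
    #   x = (pi/2 + n*pi - C) / B.
    n_lo = math.ceil((B * x_min + C - math.pi / 2) / math.pi)
    n_hi = math.floor((B * x_max + C - math.pi / 2) / math.pi)
    return [(math.pi / 2 + n * math.pi - C) / B for n in range(n_lo, n_hi + 1)]

# y = tan(x) on [-2*pi, 2*pi]: asymptotes at -3*pi/2, -pi/2, pi/2, 3*pi/2
print(tan_asymptotes(1.0, 0.0, -2 * math.pi, 2 * math.pi))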
| CommonCrawl
2-morphisms in structured 2-categories
Modified 11 years, 11 months ago
There are many $2$-categories, which are first specified by certain categories with extra structure; then the $1$- and $2$-morphisms are functors and natural transformations that preserve the extra structure. I want to understand the general procedure for finding the "correct" definitions of these $2$-morphisms, if there is any.
Example 1: Objects are tensor categories. Then $1$-morphisms should be tensor functors (some allow them to be lax) and $2$-morphisms are natural transformations $\eta$ which are compatible with the tensor structure. This means that $\eta(1)$ is an isomorphism and that for every pair of objects $x,y$ we have a commutative diagram which identifies $\eta_{x \otimes y}$ with $\eta_x \otimes \eta_y$.
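For reference, those diagrams can be written out as equations. The notation $\phi^F_{x,y}\colon F(x)\otimes F(y)\to F(x\otimes y)$ and $\phi^F_0\colon 1\to F(1)$ for the (lax) tensor structure of a tensor functor $F$ is my own choice here. A natural transformation $\eta\colon F\to G$ is then compatible with the tensor structure when

$$\eta_{x\otimes y}\circ\phi^F_{x,y}=\phi^G_{x,y}\circ(\eta_x\otimes\eta_y)\quad\text{for all }x,y,\qquad \eta_1\circ\phi^F_0=\phi^G_0.$$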
Example 2: Take as objects cocomplete categories. Then $1$-morphisms are cocontinuous functors and $2$-morphisms are natural transformations $\eta$ which preserve colimits. The latter means that for every colimit $\operatorname{colim}_i x_i$ the morphism $\eta(x)$ is the colimit of the morphisms $\eta(x_i)$. But wait, this is automatically true! This follows easily from the cocontinuity of the functors and the naturality of $\eta$. To what extent is this a "coincidence"?
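To spell out the "easy" argument (my phrasing of it): let $x=\operatorname{colim}_i x_i$ with colimit cocone $\iota_i\colon x_i\to x$. Naturality of $\eta$ gives

$$\eta_x\circ F(\iota_i)=G(\iota_i)\circ\eta_{x_i}\qquad\text{for all }i,$$

and the right-hand sides form a cocone on $(F(x_i))_i$ with vertex $G(x)$. Since $F$ is cocontinuous, $(F(\iota_i))_i$ is a colimit cocone, so there is a unique morphism $F(x)\to G(x)$ compatible with these cocones, and by the displayed equation $\eta_x$ is that morphism. Hence $\eta_x$ is automatically the induced comparison map between the colimits.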
So far I have never seen this definition of a "cocontinuous natural transformation", but this property is actually used very often when dealing with natural transformations in this situation. So perhaps it should be included in the definition? For example, the "correct" definition of a homomorphism $f : G \to H$ of groups includes that $f$ preserves the unit, inversion and multiplication, although everyone knows that preserving multiplication is enough; unfortunately some authors then take the "wrong" definition and recover the correct one by a lemma. I hope it's clear that I don't want to offend anyone here, and that there is no single "correct" definition, but perhaps there is one which fits best into general patterns of category theory.
Example 3: Objects are symmetric tensor categories. Then $1$-morphisms are tensor functors which preserve the symmetry (the functor $F$ maps the symmetry $x \otimes y \cong y \otimes x$ to the symmetry $F(x) \otimes F(y) \cong F(y) \otimes F(x)$; again this is a commutative diagram) and $2$-morphisms are natural transformations $\eta$ which are compatible with the tensor structure as in Example 1 and also with the symmetry. But what should this compatibility mean? Actually I have not been able to write down a diagram which connects $\eta$ with the symmetry and does not directly follow from naturality. So perhaps we cannot even formulate a compatibility condition here? Again I'm interested in how far this is "coincidence".
ct.category-theory
edited Feb 6, 2011 at 16:54
Harry Gindi
asked Feb 6, 2011 at 16:01
Martin Brandenburg
$\begingroup$ these aren't sub 2-categories $\endgroup$
– Buschi Sergio
In all of your examples, the "correct" definition of 2-morphism can be obtained from the fact that there is a 2-monad whose algebras and morphisms are the structured categories and functors you describe, so that the 2-morphisms are the 2-morphisms of algebras over that monad. In Example 2, the vacuity of the condition to be a "cocontinuous natural transformation" follows from the fact that the 2-monad is "fully property-like," which in turn follows from its being "lax-idempotent," i.e. every functor between algebras is uniquely a lax structured functor. Some references on 2-monads can be found at http://nlab.mathforge.org/nlab/show/2-monad .
Mike Shulman
A. This is really just an aspect of Mike Shulman's answer, but could be of some use in particular cases.
There's a 2-categorical limit called the power (or cotensor) of an object $B$ by the arrow-category $2$. This is an object $B^2$ with the property that morphisms from $A$ to $B^2$ are in bijection with pairs of morphisms from $A$ to $B$ with a 2-cell between them. For example, if $B$ is a category then $B^2$ is the functor category $[2,B]$. If $B$ is a monoidal category then $B^2$ is $[2,B]$ with the evident (pointwise) monoidal structure.
In each of your examples, and more generally in Mike's setting, this limit exists in the structured 2-category, and is preserved by the forgetful 2-functor into Cat. Normally you would prove this given the definition of 2-cell. But you can also turn this around. Given a structure on B, if you know how to make $B^2$ into a structured object, then you can use this to define the structured 2-cells.
In examples where the structure is given by a 2-monad, and in particular in examples which involve structure described by operations $B^n\to B$, natural transformations between these, and equations, then you can always do this in a "pointwise way". (But if you choose a strange way to make $B^2$ into a structured object you will get a strange notion of 2-cell.)
Suppose, for example, that $B$ is a monoidal category. Once you agree to make $[2,B]$ monoidal in the pointwise way, then you can define a monoidal transformation to be a monoidal functor with codomain $[2,B]$, and this will agree with the standard definition which you referred to.
In the case of a cocomplete category $B$, you don't need to choose how to make $[2,B]$ cocomplete; it just is. And then you can consider cocontinuous functors with codomain $[2,B]$; once again this will give no extra condition to be satisfied by a natural transformation between cocontinuous functors.
The case of symmetric monoidal categories can be treated in the same way.
B. Regarding the case of symmetric monoidal categories, there is a general phenomenon here. As you add structure to your objects in the form of operations $B^n\to B$ (like a tensor product) you generally introduce preservation conditions on both morphisms and 2-cells (although there are special cases, as in your Example 2, where the 2-cell part is automatic). But if you introduce structure in the form of natural transformations between the operations $B^n\to B$ (such as a symmetry), this results in new preservation conditions for the morphisms but not for the 2-cells.
C. Despite all this, there can be more than one choice for the 2-cells. The general principles described by Mike (and by me) would suggest that if our structure is categories with pullback, so that our morphisms are pullback-preserving functors, the 2-cells should be all natural transformations between these. But sometimes it's good to consider only those natural transformations for which the naturality squares are pullbacks. (These are sometimes called cartesian natural transformations.) See this paper for example.
edited Feb 10, 2011 at 8:01
Steve Lack
Let me focus in on your third example, but I'll also wave at your other two. Here's everything in full:
A symmetric strong monoidal category consists of:
0-morphisms: a category $C$.
1-morphisms: a functor $\otimes: C \times C \to C$; and a functor $1: \{\ast\} \to C$.
2-morphisms: a natural isomorphism $\alpha: \otimes \circ (\otimes,\operatorname{id}) \to \otimes \circ (\operatorname{id},\otimes)$ of functors $C^{\times 3} \to C$; natural isomorphisms $\lambda: \otimes \circ (1,\operatorname{id}) \to \operatorname{id}$ and $\rho: \otimes \circ (\operatorname{id},1) \to \operatorname{id}$ of functors $C \to C$; and a natural isomorphism $\sigma: \otimes \circ \operatorname{flip} \to \otimes$ of functors $C^{\times 2} \to C$.
3-morphisms: a pentagon, a triangle, two hexagons, and $\sigma^2 = 1$.
(4-morphisms and higher: trivially satisfied, because we've reached the highest category number.)
A strong morphisms of symmetric strong monoidal categories $(C,\otimes_C,\dots) \to (D,\otimes_D,\dots)$ consists of:
1-morphisms: a functor $f: C \to D$.
2-morphisms: a natural isomorphism $\phi: f\circ \otimes_C \to \otimes_D \circ (f,f)$ of functors $C^{\times 2} \to D$; and a natural isomorphism $\varphi: f\circ 1_C \to 1_D$.
3-morphisms: $\phi,\varphi$ should commute with $\alpha,\lambda,\rho,\sigma$
(4-morphisms, which would intertwine the data $\phi,\varphi$ with the pentagon, etc., are trivially satisfied.)
(You can, of course, replace the two words "strong" by "lax" or "oplax" by allowing some of the natural isomorphisms to be simply natural transformations, but then you have to decide which direction you want them to go.)
For example, here's the compatibility between $\sigma$ and $\phi$. Let me continue to use $\circ$ for composition of 1-morphisms, and if $f:C\to D$ is a functor and $\xi$ a morphism in $C$, I'll write $f(\xi)$ for the corresponding morphism in $D$. I'll write composition of 2-morphisms of functors (= morphisms in a category) as $\bullet$. Then the axiom is that $\sigma_D \bullet \phi = \phi \bullet f(\sigma_C)$ as natural isomorphisms $f\circ \otimes_C \to \otimes_D \circ \operatorname{flip} \circ (f,f)$ of functors $C^{\times 2} \to D$. You absolutely do need this axiom. For example, up to isomorphism there is a unique monoidal functor from super vector spaces to $\mathbb Z/2$-representations, but it is not symmetric monoidal.
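In components (writing, as a convention of mine for this remark, $\phi_{x,y}\colon f(x\otimes_C y)\to f(x)\otimes_D f(y)$ for the components of $\phi$, and $\sigma_{a,b}\colon a\otimes b\to b\otimes a$ for the symmetry), this axiom reads

$$\sigma^D_{f(x),f(y)}\circ\phi_{x,y}=\phi_{y,x}\circ f(\sigma^C_{x,y})\colon f(x\otimes_C y)\to f(y)\otimes_D f(x)\quad\text{for all }x,y,$$

which is the usual componentwise definition of a symmetric monoidal functor.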
But you're interested in the next level --- you know this much. You want:
A natural transformation of strong morphisms of symmetric monoidal categories $(f,\phi_f,\varphi_f),(g,\phi_g,\varphi_g) : (C,\dots) \to (D,\dots)$ consists of:
2-morphisms: a natural transformation $\eta: f \to g$.
3-morphisms: $\eta$ should intertwine the $(\phi,\varphi)$s.
(4-morphisms and higher are trivially satisfied)
Now for your actual question: why doesn't $\eta$ care about the symmetry $\sigma$? The answer is that you should demand a compatibility between $\eta$ and $\sigma$, but this demand is a certain 4-morphism, which automatically exists.
More generally, your "coincidences" begin to occur at codimension $-2$, because there is a unique $(-2)$-category. We're working at categorical dimension $2$ (objects are categories, which is to say 0-morphisms of a 2-category), so the $(-2)$-codimensional things are 4-morphisms. At codimension $-1$, a condition either holds or it doesn't --- it's a property. At the level of this discussion, the $(-1)$-codimensional morphisms are commutative diagrams of natural transformations. At codimension $0$ and above, it's actual data.
Something similar happens in your second example. For a functor to preserve colimits is a property. So a colimit preserving functor is actually two pieces of data: a 1-morphism (the functor) and a (hell of a lot of) 3-morphism(s). A natural transformation of this should be a 2-morphism and a (bunch of) 4-morphism(s), except that 4-morphisms always canonically exist.
Here is a very good exercise, to test your 2-category-fu.
You know what is the 2-category of strong monoidal categories. Fix a strong monoidal category $(C,\otimes,1,\alpha,\lambda,\rho)$. Write down the 2-category of strong $C$-categories, which is to say categories $X$ with a $C$-action. I'll start. A $C$-category is: (0-morphisms) a category $X$; (1-morphisms) a functor $\otimes: C \times X \to X$; (2-morphisms) ...; (3-morphisms) .... I'll let you figure out what the rest is. I'll also let you figure out what 1-morphisms of $C$-categories are and what are the 2-morphisms.
Repeat the exercise, but this time fix a symmetric structure on $C$, and figure out which are the symmetric $C$-categories. Hint: if $X$ is a symmetric $C$-category, then it shouldn't matter whether $C$ acts from the left or from the right.
Just like not every monoidal functor is symmetric, not every $C$-category is symmetric. How much extra is being symmetric? At which levels of morphisms of $C$-categories do you need to demand more data/properties/...?
Theo Johnson-Freyd
$\begingroup$ I'm not sure I agree entirely with your description of the second example. A colimit-preserving functor has a 2-morphism as well: the comparison isomorphism between the image of the colimit and the colimit of the image. It just so happens that that isomorphism is uniquely determined if it exists. $\endgroup$
– Mike Shulman
$\begingroup$ Hrm. No, I disagree. In part, I disagree because I don't think "the colimit" is as well-defined as your comment implies (although I think you'll agree with me). Namely, a colimit of a diagram is any initial cocone over the diagram. So when I say that a functor "preserves colimits", what I'm saying is that any colimit cocone you feed into it comes out as some colimit cocone. In particular, I disagree that the comparison isomorphism should need to exist. $\endgroup$
– Theo Johnson-Freyd
| CommonCrawl
History of definitions for an ellipse?
Recently I've been learning about ellipses.
It seems as though there are four (from what I've learned of so far) different ways to define ellipses, all of which seem to be connected in kind of obscure ways:
An ellipse is a stretched circle. We take the formula for a unit circle, $x^2 + y^2 = 1$, and stretch it by dividing the terms like so: $\displaystyle \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1$. To satisfy the new equation, every $x$-coordinate of the circle must get stretched (multiplied) by a factor of $a$, and every $y$-coordinate by a factor of $b$.
An ellipse is the set of all points whose sum of the distances from two points, the foci, is a constant. We can represent this with the equation $\sqrt{(x+f)^2 + y^2} + \sqrt{(x-f)^2 + y^2} = 2a$, where $a$ is the same semi-axis as in the previous equation, $f$ is the distance from a focus to the origin, and $x$ and $y$ are the variables. (A quick numerical check that this agrees with the stretched-circle definition is sketched right after this list.)
An ellipse is a slice of a cone at an angle. This means it's the intersection of a plane ($ax+by+cz-d=0$) and a cone ($x^2 + y^2 - z^2=0$), which begets the equation $Ax^2 + Bxy+Cy^2+Dx+Ey+F=0$ for the case that $B^2-4AC<0$.
An ellipse is a locus of points whose distance from the focus at every $(x,y)$ is proportional to the horizontal distance from a vertical line, the directrix, where the ratio is less than 1.
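Since definition 1 never mentions foci at all, a quick numerical sanity check (my own sketch, not part of any historical source) is reassuring: sample points on the stretched circle $x^2/a^2 + y^2/b^2 = 1$ with $a > b$, place the foci at $(\pm f, 0)$ with $f = \sqrt{a^2 - b^2}$, and verify that the sum of the focal distances is the constant $2a$ from definition 2.

import math

a, b = 3.0, 2.0                     # semi-axes of the stretched circle
f = math.sqrt(a**2 - b**2)          # focal distance (assumes a > b)

for k in range(8):                  # sample points around the ellipse
    t = 2 * math.pi * k / 8
    x, y = a * math.cos(t), b * math.sin(t)
    s = math.hypot(x + f, y) + math.hypot(x - f, y)
    assert abs(s - 2 * a) < 1e-12   # sum of focal distances is constant
print("sum of focal distances = 2a =", 2 * a)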
I'd really like to know more about the history of the ellipse.
Were all of these definitions discovered at around the same time? If not, in what order were they discovered, and by whom? Did the same people that came up with one definition somehow come up with others? And how did mathematicians see the connections between them, and realize they were looking at the same family of curves?
The connections between, for example, the "squashed circle" definition and the "constant sum of distances" definition are pretty hard to notice...who noticed that these were the same family of shapes? I mean, without being told that the foci DO exist, I'm not sure how I would be able to figure out, only from the squashed circle definition, that they indeed exist...(I asked this in another question, but in this one I'm more interested, about the history.)
definition conic-sections math-history
edited Jan 8 at 22:11
Alexander Gruber♦
asked Jan 4 at 14:43
Joshua Ronis
$\begingroup$ We do have a sister site dedicated specifically to the history of math and science. I think your question is better suited there. $\endgroup$ – Arthur Jan 4 at 14:49
$\begingroup$ Regarding "appearing at the same time": The conic section (i.e. slice of a cone) definition is from ancient Greece or earlier, which was a couple of thousand years before anybody even thought about drawing coordinate systems and writing equations for the ellipse. $\endgroup$ – Hans Lundmark Jan 4 at 15:52
$\begingroup$ sites.math.rutgers.edu/~cherlin/History/Papers1999/… $\endgroup$ – Hans Lundmark Jan 4 at 15:53
$\begingroup$ This might be of help: math.stackexchange.com/questions/2221890/… $\endgroup$ – Aretino Jan 4 at 22:02
$\begingroup$ Here is a video showing the connection between sliced cone and sum of foci: youtube.com/watch?v=pQa_tWZmlGs $\endgroup$ – Shrey Joshi Jan 8 at 23:57
My answer is far from complete, but it can be useful. I apologize for all imperfections or mistakes. Various excellent sources can be found in the French language.
In this work we learn that conic sections were likely discovered by Menaechmus (380–320 BC). The priority of Menaechmus is justified here or in this Wikipedia article. Menaechmus sufficiently advanced the theory of the conic sections that in antiquity these curves were called the Curves of Menaechmus. The cones were obtained by rotating a right triangle around one of its short sides, and a conic section was obtained as the section of the conic surface by a plane orthogonal to a generating line. The picture from the work quoted below shows a rightangle, obtusangle or acutangle conic, as they were called.
Besides Menaechmus, other Greek geometers were studying conic sections and their properties before Apollonius. Some of them were
Aristaeus the Elder, 370 – 300 BC
Euclid of Alexandria, Mid. 4th - Mid. 3rd. century BC
Archimedes of Syracuse, c. 287 – c. 212 BC
Apollonius of Perga (c. 262 – c. 190 BC) gave to the ellipse (acutangle conic), parabola (rightangle conic) and hyperbola (obtusangle conic) the names under which we now know them. He is famous for his writings on conic sections, and is often wrongly designated as their inventor. His (plagiarism?) is criticized by Eutocius, as quoted below.
The author of this text states,
On this point, it seems that our commentator [E] has been impressed by another "historian" of the theory of conics, Pappus. The presentation that he devotes to the treaty of Apollonius begins:
"Apollonius has given us eight books on the conics, having completed the four books of the Conics of Euclid, and having added four other books. Aristaeus, author of The Five Books concerning Solid Loci, still available today, following the Conics, had however, as the predecessors of Apollonius, called one of the conical sections the 'acutangle cone section', the another the 'rightangle cone section' and the other still, the 'obtusangle cone section'."
Skipping centuries, I would draw attention to Philippe de la Hire, 1640–1718, who was inspired by Apollonius. An abundant source is this thesis. De la Hire used an early projective method: every conic section can be obtained from a circle by a projection. This method was developed further in later centuries.
It is worth noting the approach of Steiner.
A nice proof that the focal definition of the ellipse (the sum of distances to two given points, the foci, is constant) agrees with the definition of the ellipse as a section of a conic surface by a plane is due to Germinal Dandelin (and Adolphe Quetelet) and is dated to 1822. The picture (below) is from Wikipedia.
The article Conic sections on Wikipedia is excellent. For the early history of conics in Europe, see the paragraph Europe and the references therein.
The links provided in comments to the present question are useful as well.
$\begingroup$ A proof that the sum (for an ellipse) or difference (for a hyperbola) of focal distances is constant can already be found in Apollonius (prop. 52 and 53, book III - prop. 73 in Heath's translation). $\endgroup$ – Aretino Jan 11 at 21:50
$\begingroup$ Thanks @Aretino, right. Edited. $\endgroup$ – user376343 Jan 11 at 22:03
$\begingroup$ Thank you @Xander Henderson for the revision. The names of conics were in antiquity obtusangle, ... , so I return the wording back. $\endgroup$ – user376343 Jan 13 at 16:26
$\begingroup$ Sorry, I thought I had accepted it a while ago. Thank you! $\endgroup$ – Joshua Ronis Jan 13 at 20:40
Effect of core electrical conductivity on core surface flow models
Masaki Matsushima ORCID: orcid.org/0000-0001-8198-343X
Earth, Planets and Space volume 72, Article number: 180 (2020)
The electrical conductivity of the Earth's core is an important physical parameter that controls the core dynamics and the thermal evolution of the Earth. In this study, the effect of core electrical conductivity on core surface flow models is investigated. Core surface flow is derived from a geomagnetic field model on the presumption that a viscous boundary layer forms at the core–mantle boundary. Inside the boundary layer, where the viscous force plays an important role in force balance, temporal variations of the magnetic field are caused by magnetic diffusion as well as motional induction. Below the boundary layer, where core flow is assumed to be in tangentially geostrophic balance or tangentially magnetostrophic balance, contributions of magnetic diffusion to temporal variation of the magnetic field are neglected. Under the constraint that the core flow is tangentially geostrophic beneath the boundary layer, the core electrical conductivity in the range from \({10}^{5} ~\mathrm{S}~{\mathrm{m}}^{-1}\) to \({10}^{7}~ \mathrm{S}~{\mathrm{m}}^{-1}\) has a less significant effect on the core flow. Under the constraint that the core flow is tangentially magnetostrophic beneath the boundary layer, the influence of electrical conductivity on the core flow models can be clearly recognized; the magnitude of the mean toroidal flow does not increase or decrease, but that of the mean poloidal flow increases with an increase in core electrical conductivity. This difference arises from the Lorentz force, which can be stronger than the Coriolis force for higher electrical conductivity, since the Lorentz force is proportional to the electrical conductivity. In other words, the Elsasser number, which represents the ratio of the Lorentz force to the Coriolis force, has an influence on the difference. The result implies that the ratio of toroidal to poloidal flow magnitudes has been changing in accordance with secular changes of rotation rate of the Earth and of core electrical conductivity due to a decrease in core temperature throughout the thermal evolution of the Earth.
The intrinsic magnetic field of the Earth is generated by dynamo action due to electromagnetic fluid motion in the outer core, the main components of which are iron and nickel. It is essential that the motional induction overcomes the magnetic diffusion through Ohmic dissipation to maintain the geomagnetic field. The magnetic diffusivity, \(\eta\), is given as \(\eta ={\left({\mu }_{0}\sigma \right)}^{-1}\), where \({\mu }_{0}\) and \(\sigma\) are the magnetic permeability of a vacuum and core electrical conductivity, respectively. Therefore, higher electrical conductivity of core fluid is preferred for easier generation of the geomagnetic field against the magnetic diffusion. Higher electrical conductivity means higher thermal conductivity, \(k\), of the metallic core, as both electrical and thermal conduction are dominated by the electron contribution. This can also be found from the Wiedemann–Franz law, \(k={L}_{o}T\sigma\), where \({L}_{o}=2.44\times {10}^{-8} ~\mathrm{W ~\Omega }~{\mathrm{K}}^{-2}\) and \(T\) are the Lorentz number and temperature, respectively. For example, with \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\) and a temperature of roughly 4000 K near the core–mantle boundary, the law gives \(k\approx 98~\mathrm{W}~{\mathrm{m}}^{-1}~{\mathrm{K}}^{-1}\). If the thermal conductivity is too high, conduction alone can carry the heat released from the core to the mantle, so that no thermal convection in the core is required (e.g., Pozzo et al. 2012). This does not necessarily mean that another type of convection, such as compositional convection, does not occur. In reality, the Earth has possessed its intrinsic magnetic field generated by the geodynamo since around 3.45 Ga (Tarduno et al. 2010).
An assessment of the convective motions generating the geomagnetic field has been advanced by numerical simulations of the geodynamo. Noticeable columnar convective structures parallel to the rotational axis of the Earth are found to explain the generation mechanism of the axial dipole magnetic field (e.g., Kageyama and Sato 1997; Olson et al. 1999). Furthermore, numerical geodynamo models have succeeded in explaining certain properties of the geomagnetic field, such as the dominance of the dipole field, and secular variations including polarity reversals (e.g., Christensen and Wicht 2015). However, the convective motions produced by numerical simulations do not necessarily show proper core dynamics, mainly because the parameters adopted in numerical simulations are far from the real ones.
Useful information on core dynamics, features of the core–mantle boundary (CMB), and core–mantle coupling, for example, can be provided by realistic fluid motion in the core of the Earth. This core fluid motion can be estimated from spatial and temporal distributions of the geomagnetic field (e.g., Holme 2015). Most core surface flow models rely on the frozen-flux approximation (Roberts and Scott 1965), in which the magnetic diffusion is neglected. However, a viscous boundary layer is present at the CMB, where the magnetic diffusion plays an important role in secular variations of the geomagnetic field (Takahashi et al. 2001). Therefore, a new approach to estimate fluid flow near the core surface has been devised by Matsushima (2015). In this approach, the magnetic diffusion is explicitly incorporated within the viscous boundary layer at the CMB, whereas it is neglected below the boundary layer. Moreover, the fluid flow below the boundary layer is presumed to be tangentially geostrophic. A core flow model inside and below the viscous boundary layer at the CMB can then be derived from a geomagnetic field model.
In this method, core electrical conductivity can play a role in estimating core surface flows. The temporal variations in the radial component of the magnetic field, \({B}_{r}\), at the CMB are caused only by magnetic diffusion because of the no-slip condition for core flows there. The second partial derivative of \({B}_{r}\) with respect to the radius is thus related to core electrical conductivity. This suggests that core electrical conductivity can be influential in inferring \({B}_{r}\) inside the core. It should be noted that core electrical conductivity can also be crucial in the estimation of core flow, because the Lorentz force in the equation of motion depends on the electrical current density proportional to electrical conductivity.
Hence, in this paper, the effects of core electrical conductivity on core surface flow models are investigated for various values of core electrical conductivity that are still controversial (e.g., Ohta et al. 2016; Konôpková et al. 2016; Xu et al. 2018). First, the method of Matsushima (2015), in which core flow is presumed to be tangentially geostrophic below a viscous boundary layer at the core surface, is recalled. The method is then developed to include not only the effect of the Coriolis force, but also that of the Lorentz force. To investigate the effect of core electrical conductivity, this value is varied as a parameter. These results are discussed and summarized.
Matsushima (2015) gave a method for estimating fluid flow near the core surface with magnetic diffusion in a viscous boundary layer. Here this theory is recalled in brief, and the method is extended to include not only the effect of the Coriolis force for a tangentially geostrophic flow, but also that of the Lorentz force for a tangentially magnetostrophic flow. The radial component, which is denoted by subscript r, of the induction equation is given as
$${\dot{B}}_{ri}=\{-\left({{\varvec{V}}}_{i}\cdot \nabla \right){B}_{ri}+\left({{\varvec{B}}}_{i}\cdot \nabla \right){V}_{ri}\}({\delta }_{i1}+{\delta }_{i2})+\frac{\eta }{{r}_{i}}{\nabla }^{2}\left({r}_{i}{B}_{ri}\right)({\delta }_{i0}+{\delta }_{i1}),$$
where \({\varvec{B}}\) is the magnetic field, \({\varvec{V}}\) the velocity of incompressible core fluid, and \({\delta }_{ij}\) the Kronecker delta. A dot denotes partial differentiation with respect to time, \(t\). The other subscript, \(i\), indicates the depth to be considered: \(i=0\) at the CMB (assumed to be a spherical surface with radius \(r={r}_{0}=\) 3480 km), \(i=1\) inside the boundary layer (\(r={r}_{1}={r}_{0}-{\xi }_{1}\)) at a depth of \({\xi }_{1}\) from the CMB, and \(i=2\) below the boundary layer (\(r={r}_{2}={r}_{0}-{\xi }_{2}\)) at a depth of \({\xi }_{2}\). For \(i=0\), where the core flow relative to a reference frame rotating with the mantle must vanish under the no-slip condition, the first and second terms of the right-hand side of Eq. (1) must also vanish, and only the third term, or magnetic diffusion term, remains; that is, temporal variations of the magnetic field at the CMB arise from the magnetic diffusion only. For \(i=1\), all three right-hand-side terms contribute to temporal variation of the geomagnetic field. For \(i=2\), the magnetic diffusion term is presupposed to be negligible, although the magnetic boundary layer would be thicker than the viscous boundary layer (e.g., Chulliat and Olsen 2010), because the contribution of the motional induction to temporal variations in the magnetic field is likely to be much larger than that of the magnetic diffusion, as in the frozen-flux approximation (e.g., Holme 2015).
Core flow is assumed to be tangentially geostrophic below the boundary layer, while the viscous force is presumed to play an important role inside the boundary layer. Therefore, core flow \({{\varvec{V}}}_{1}\) and \({{\varvec{V}}}_{2}\) should satisfy Eqs. (2a) and (2b), respectively:
$$\hat{{\varvec{r}}}\cdot \nabla \times (-2{\varvec{\Omega}}\times {{\varvec{V}}}_{1}+{\nu }_{\mathrm{edd}}{\nabla }^{2}{{\varvec{V}}}_{1})=0,$$
$$\hat{{\varvec{r}}}\cdot \nabla \times (-2{\varvec{\Omega}}\times {{\varvec{V}}}_{2})=0,$$
where \({\varvec{\Omega}}\) denotes the angular velocity vector of the mantle, \(\hat{{\varvec{r}}}\) the radial unit vector, and \({\nu }_{\mathrm{edd}}\) the eddy kinematic viscosity. The typical length scale parallel to the boundary layer is likely to be much larger than the thickness of the boundary layer. The horizontal flow, \({\varvec{V}}_{H}\), near the core surface is then expressed as in classical Ekman layer theory (e.g., Pedlosky 1987),
$$\begin{aligned}{\varvec{V}}_{H}&={\overline{\varvec{V}}}_{H}\left\{1-{\mathrm{exp}}\left(-\frac{\xi }{{\delta }_{E}}\right){\mathrm{cos}}\left(\frac{\xi }{{\delta }_{E}}\right)\right\} \\ & \quad +(\mathrm{sgn}\,\mathrm{cos}\,\theta )\,\hat{{\varvec{r}}}\times {\overline{\varvec{V}}}_{H}{\mathrm{exp}}\left(-\frac{\xi }{{\delta }_{E}}\right){\mathrm{sin}}\left(\frac{\xi }{{\delta }_{E}}\right),\end{aligned}$$
where sgn is the signum function, \({\delta }_{E}={\left({\nu }_{\mathrm{edd}}/\Omega |\mathrm{cos}~\theta |\right)}^{1/2}\) with \(\Omega =\left|{\varvec{\Omega}}\right|=7.29\times {10}^{-5} ~\mathrm{rad}~{\mathrm{s}}^{-1}\), and \(\theta\) is the colatitude in the spherical coordinates \(\left(r, \theta , \phi \right).\) The tangentially geostrophic flow, \({\overline{\varvec{V}}}_{H}\), significantly below the viscous boundary layer should satisfy Eq. (4) obtained from Eq. (2b):
$${\nabla}_{H}\cdot (\mathrm{cos}~\theta ~{\overline{\varvec{V}}}_{H})=0,$$
where \({\nabla }_{H}\) is the horizontal gradient, and \({\overline{V}}_{r}\) is significantly smaller than \(|{\overline{\varvec{V}}}_{H}|\) and can be neglected. For the case of tangentially geostrophic flow, core electrical conductivity \(\sigma\) affects the magnetic diffusion alone, entering through the second partial derivative of \({B}_{r}\) with respect to the radius at \(r={r}_{0}\).
To examine the effect of \(\sigma\) on a core flow model, core flow is next assumed to be tangentially magnetostrophic below the boundary layer, which is an Ekman–Hartmann layer in this case. Therefore, core flow \({{\varvec{V}}}_{1}\) and \({{\varvec{V}}}_{2}\) should satisfy Eqs. (5a) and (5b), respectively:
$$\hat{{\varvec{r}}}\cdot \nabla \times (-2{\varvec{\Omega}}\times {{\varvec{V}}}_{1}+{\rho }^{-1}{{\varvec{J}}}_{1}\times {{\varvec{B}}}_{1}+{\nu }_{\mathrm{edd}}{\nabla }^{2}{{\varvec{V}}}_{1})=0,$$
$$\hat{{\varvec{r}}}\cdot \nabla \times (-2{\varvec{\Omega}}\times {{\varvec{V}}}_{2}+{\rho }^{-1}{{\varvec{J}}}_{2}\times {{\varvec{B}}}_{2})=0,$$
where \(\rho\) and \({\varvec{J}}\) denote the mass density of the core fluid and the electric current density, respectively. In this study, as mentioned in the Appendix, the contribution of the electric field to the current density is ignored (e.g., Shimizu 2006), and \({J}_{r}\) is likely to be much smaller than \(|{{\varvec{J}}}_{H}|\) near the mantle, which is assumed to be an electrical insulator (Benton and Muth 1979). The horizontal component of current density is then given as
$${{\varvec{J}}}_{H}=\sigma {\left({\varvec{V}}\times {\varvec{B}}\right)}_{H}\approx \sigma {B}_{r}{{\varvec{V}}}_{H}\times \hat{{\varvec{r}}}.$$
The horizontal flow, \({{\varvec{V}}}_{H}\), near the core surface is expressed as
$$\begin{aligned}{{\varvec{V}}}_{H}&={\overline{{\varvec{V}}}}_{H}\left\{1-{\mathrm{exp}}\left(-\frac{\xi }{{\delta }_{EH}^{+}}\right)\mathrm{cos}\left(\frac{\xi }{{\delta }_{EH}^{-}}\right)\right\}\\ & \quad +(\mathrm{sgn}\,\mathrm{cos}\,\theta )\,\hat{\varvec{r}}\times {\overline{{\varvec{V}}}}_{H}\mathrm{exp}\left(-\frac{\xi }{{\delta }_{EH}^{+}}\right)\mathrm{sin}\left(\frac{\xi }{{\delta }_{EH}^{-}}\right),\end{aligned}$$
and the tangentially magnetostrophic flow, \({\overline{\varvec{V}}}_{H}\), significantly below the viscous boundary layer satisfies
$${\nabla}_{H}\cdot (2\Omega {\mathrm{cos}}\ \theta {\overline{{\varvec{V}}}}_{H}+{\rho }^{-1}\sigma {B}_{r2}^{2}{\overline{\varvec{V}}}_{H}\times {\hat{\varvec{r}}})=0.$$
Here, \({\delta }_{EH}^{+}\) and \({\delta }_{EH}^{-}\) are given by
$${\delta }_{EH}^{\pm }=\frac{{\delta }_{E}}{{\{{\left(1+{\Lambda }^{2}/4\right)}^{1/2}\pm\Lambda /2\}}^{1/2}}$$
(double sign correspondence), and
$$\Lambda =\frac{\sigma {B}_{r}^{2}}{\rho\Omega |\mathrm{cos}~\theta |}$$
is the Elsasser number. For the case of tangentially magnetostrophic flow, core electrical conductivity \(\sigma\) has an effect not only on the magnetic diffusion, but also on the magnetostrophy through the Lorentz force.
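For orientation, the following sketch (my own, not code from this study) evaluates \({\delta }_{E}\), \(\Lambda\), and \({\delta }_{EH}^{\pm }\) from Eqs. (3), (9) and (10) for the parameter values quoted later in this paper (\({\nu }_{\mathrm{edd}}=5~{\mathrm{m}}^{2}~{\mathrm{s}}^{-1}\), \(\rho =1.1\times {10}^{4}~\mathrm{kg}~{\mathrm{m}}^{-3}\), \(\Omega =7.29\times {10}^{-5}~\mathrm{rad}~{\mathrm{s}}^{-1}\), \({B}_{r}\approx 0.2~\mathrm{mT}\)), with \(|\mathrm{cos}~\theta |\) set to 1 for simplicity; for \(\sigma =1\times {10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\) it reproduces \(\Lambda \approx 0.5\) as quoted in the caption of Fig. 1.

import math

nu_edd = 5.0        # eddy kinematic viscosity [m^2/s]
Omega = 7.29e-5     # rotation rate of the mantle [rad/s]
rho = 1.1e4         # core fluid mass density [kg/m^3]
Br = 2.0e-4         # rms radial field at the CMB [T]
cos_theta = 1.0     # |cos(theta)|, set to 1 here for simplicity

delta_E = math.sqrt(nu_edd / (Omega * cos_theta))    # Ekman layer thickness
for sigma in (1e5, 1e6, 1e7):                        # conductivities [S/m]
    Lam = sigma * Br**2 / (rho * Omega * cos_theta)  # Elsasser number, Eq. (10)
    root = math.sqrt(1.0 + Lam**2 / 4.0)
    d_plus = delta_E / math.sqrt(root + Lam / 2.0)   # Eq. (9), upper sign
    d_minus = delta_E / math.sqrt(root - Lam / 2.0)  # Eq. (9), lower sign
    print(f"sigma={sigma:.0e}  Lambda={Lam:.3f}  "
          f"delta_EH+={d_plus:.0f} m  delta_EH-={d_minus:.0f} m")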
The horizontal geostrophic (or magnetostrophic) velocity can be expressed in terms of poloidal and toroidal constituents as
$${\overline{\varvec{V}}}_{H}=r{\nabla }_{H}\overline{U}+\nabla \times ({\varvec{r}}\overline{W}),$$
$$\overline{U}\left(\theta ,\phi ,t\right)=\sum_{l=1}^{L}\sum_{m=0}^{l}\left\{{\overline{U}}_{l}^{mc}\left(t\right)\mathrm{cos}~m\phi +{\overline{U}}_{l}^{ms}\left(t\right)\mathrm{sin}~m\phi \right\}{P}_{l}^{m}(\mathrm{cos}~\theta ),$$
$$\overline{W}\left(\theta ,\phi ,t\right)=\sum_{l=1}^{L}\sum_{m=0}^{l}\left\{{\overline{W}}_{l}^{mc}\left(t\right)\mathrm{cos}~m\phi +{\overline{W}}_{l}^{ms}\left(t\right)\mathrm{sin}~m\phi \right\}{P}_{l}^{m}(\mathrm{cos}~\theta ),$$
where \({P}_{l}^{m}\) is a Schmidt-normalized associated Legendre function of degree \(l\) and order \(m\), \(L\) is the truncation level, \({\varvec{r}}\) is a position vector, and \(\overline{U}\) and \(\overline{W}\) are poloidal and toroidal scalar functions, respectively.
To obtain a core surface flow model for a long duration, a geomagnetic field model, COV-OBS.x1 (Gillet et al. 2015), ranging from 1840 to 2015, is adopted. It should be noted that the period 1840–1880 in the COV-OBS.x1 model may contain a problem (Metman et al. 2018). Therefore, means and standard deviations are calculated in the range from 1880 to 2015 in this study. The magnetic field at the CMB is derived through downward continuation of a geomagnetic potential field by assuming the mantle to be an electrical insulator as
$${B}_{r0}\left(\theta , \phi , t\right)=\sum_{l=1}^{L}\left(l+1\right){\left(\frac{{r}_{e}}{{r}_{o}}\right)}^{l+2}\sum_{m=0}^{l}\left\{{g}_{l}^{m}\left(t\right)\mathrm{cos}~m\phi +{h}_{l}^{m}\left(t\right)\mathrm{sin}~m\phi \right\}{P}_{l}^{m}(\mathrm{cos}~\theta ),$$
where \({r}_{e}=\) 6371 km is the mean radius of the Earth, and \({g}_{l}^{m}\) and \({h}_{l}^{m}\) are the Gauss coefficients given by the COV-OBS.x1 model. The truncation level of spherical harmonic coefficients is set at degree L = 14.
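A minimal spectral sketch of Eq. (13) (my own; the containers g[l][m] and h[l][m] for the Gauss coefficients are hypothetical placeholders):

import numpy as np
from math import factorial
from scipy.special import lpmv

r_e, r_0 = 6371e3, 3480e3   # mean Earth radius and core radius [m]

def schmidt_plm(l, m, x):
    # Schmidt semi-normalized associated Legendre function; lpmv includes
    # the Condon-Shortley phase (-1)^m, which the geomagnetic convention
    # omits, hence the compensating (-1)**m factor
    norm = 1.0 if m == 0 else np.sqrt(2.0 * factorial(l - m) / factorial(l + m))
    return (-1.0)**m * norm * lpmv(m, l, x)

def Br_cmb(theta, phi, g, h, L=14):
    # Radial field at the CMB, Eq. (13); g[l][m], h[l][m] are Gauss
    # coefficients (e.g., from COV-OBS.x1) at the chosen epoch
    Br = 0.0
    for l in range(1, L + 1):
        fac = (l + 1) * (r_e / r_0)**(l + 2)   # downward-continuation factor
        for m in range(0, l + 1):
            Br += fac * (g[l][m] * np.cos(m * phi) + h[l][m] * np.sin(m * phi)) \
                      * schmidt_plm(l, m, np.cos(theta))
    return Br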
The radial component of the geomagnetic field shallow inside the core, \({B}_{r1}\) and \({B}_{r2}\), can be estimated using a Taylor expansion at \(r={r}_{0}\) as
$${B}_{ri}={B}_{r0}-{\xi }_{i}\frac{\partial {B}_{r0}}{\partial r}+\frac{{\xi }_{i}^{2}}{2}\frac{{\partial }^{2}{B}_{r0}}{\partial {r}^{2}} ~~\left(i=1, 2\right),$$
where the second term of the right-hand side of Eq. (14) can be obtained from \(\nabla \cdot {\varvec{B}}=0\), and the third term from Eq. (1) at \(r={r}_{0}\) (Matsushima 2015). Their time derivatives, \({\dot{B}}_{r1}\) and \({\dot{B}}_{r2}\), can be derived from \(\nabla \cdot \dot{{\varvec{B}}}=0\) and \({\ddot{B}}_{r0}=\left(\eta /{r}_{0}\right){\nabla }^{2}({r}_{0}{\dot{B}}_{r0})\). Then, Eq. (1) at \(r={r}_{1}\) and \(r={r}_{2}\) with a constraint Eq. (4) or Eq. (8) are solved in physical space, at grid points \(({\theta }_{k},{\phi }_{k})\) as
$$\left(\begin{array}{c}{{\varvec{d}}}_{1}\\ {{\varvec{d}}}_{2}\\ {\varvec{0}}\end{array}\right)=\left(\begin{array}{c}{\mathbf{A}}_{1}\\ {\mathbf{A}}_{2}\\ \alpha {\mathbf{A}}_{g}\end{array}\right)\cdot {\varvec{g}},$$
where \({{\varvec{d}}}_{1}\) and \({{\varvec{d}}}_{2}\) contain \({\dot{B}}_{r1}({\theta }_{k},{\phi }_{k})\) and \({\dot{B}}_{r2}({\theta }_{k},{\phi }_{k})\), respectively; \({\mathbf{A}}_{1}\) and \({\mathbf{A}}_{2}\) are matrices that contain \({B}_{r1}({\theta }_{k},{\phi }_{k})\) and \({B}_{r2}({\theta }_{k},{\phi }_{k})\), respectively, as well as their horizontal derivatives; \({\mathbf{A}}_{g}\) is a matrix derived from Eq. (4) or Eq. (8); \(\alpha\) is a parameter that controls the weight of tangential geostrophy or tangential magnetostrophy; and \({\varvec{g}}\) contains \({\overline{U}}_{l}^{mc}\), \({\overline{U}}_{l}^{ms}\), \({\overline{W}}_{l}^{mc}\), and \({\overline{W}}_{l}^{ms}\).
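In outline, the computations behind Eqs. (14) and (15) may be sketched as follows. This is my reading of the construction rather than code from the study: for an insulating mantle, continuity of the magnetic field across the CMB together with \(\nabla \cdot {\varvec{B}}=0\) gives \(\partial {B}_{r}/\partial r=-\left(l+2\right){B}_{r}/{r}_{0}\) for each spherical harmonic degree \(l\) at \(r={r}_{0}\), and Eq. (1) at \(r={r}_{0}\), with the solenoidal identity \(\hat{{\varvec{r}}}\cdot {\nabla }^{2}{\varvec{B}}={r}^{-1}{\nabla }^{2}\left(r{B}_{r}\right)\), then yields \({\partial }^{2}{B}_{r}/\partial {r}^{2}={\dot{B}}_{r}/\eta +\left(l+2\right)\left(l+3\right){B}_{r}/{r}_{0}^{2}\), which is where \(\sigma\) enters. The matrix and vector names below are hypothetical placeholders.

import math
import numpy as np
from scipy.linalg import lstsq

r_0 = 3480e3   # core radius [m]

def taylor_Br(c, c_dot, l, xi, sigma):
    # Coefficient of B_r at depth xi below the CMB, Eq. (14).
    # c, c_dot: a spherical-harmonic coefficient of B_r0 and its time
    # derivative at the CMB; l: harmonic degree; sigma: conductivity.
    eta = 1.0 / (4e-7 * math.pi * sigma)                 # magnetic diffusivity
    dc = -(l + 2) * c / r_0                              # from div B = 0 at r0
    d2c = c_dot / eta + (l + 2) * (l + 3) * c / r_0**2   # from Eq. (1) at r0
    return c - xi * dc + 0.5 * xi**2 * d2c

def solve_flow(A1, A2, Ag, d1, d2, alpha):
    # Least-squares solution of the stacked system, Eq. (15); 'gelsy' is
    # a Householder-QR-based LAPACK driver
    A = np.vstack([A1, A2, alpha * Ag])
    d = np.concatenate([d1, d2, np.zeros(Ag.shape[0])])
    g, *_ = lstsq(A, d, lapack_driver="gelsy")
    return g   # poloidal/toroidal flow coefficients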
The number of unknowns for poloidal and toroidal scalars expanded into spherical harmonics is \(2L\left(L+2\right)=448\). The number of grid points on spherical surfaces at \(r={r}_{1}\) and \(r={r}_{2}\) is 45 in the \(\theta\)-direction and 90 in the \(\phi\)-direction. The linear Eq. (15) is solved using a Householder method. The parameter \(\alpha\) is determined from a trade-off relationship between the tangentially geostrophic or tangentially magnetostrophic constraint and correlation of \({\dot{B}}_{r}^{\mathrm{mod}}\) due to estimated flow to \({\dot{B}}_{r}^{\mathrm{obs}}\) obtained from geomagnetic field data, or a relative misfit defined as
$${M}_{i}=\sqrt{\frac{\int {\left({\dot{B}}_{ri}^{\mathrm{mod}}-{\dot{B}}_{ri}^{\mathrm{obs}}\right)}^{2}d{S}_{i}}{\int {\left({\dot{B}}_{ri}^{\mathrm{obs}}\right)}^{2}d{S}_{i}}},$$
where \(\int d{S}_{i}\) is an integral over a spherical surface of radius \(r={r}_{i}\).
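A minimal sketch of Eq. (16) on an equiangular grid (my own; the array shapes are hypothetical), with \(\mathrm{sin}~\theta\) supplying the surface-element weighting of the integrals:

import numpy as np

def relative_misfit(Br_dot_mod, Br_dot_obs, theta):
    # Relative misfit M_i of Eq. (16); Br_dot_mod and Br_dot_obs are
    # (n_theta, n_phi) arrays on the sphere r = r_i, theta the colatitudes
    w = np.sin(theta)[:, None]              # area weights sin(theta)
    num = np.sum(w * (Br_dot_mod - Br_dot_obs)**2)
    den = np.sum(w * Br_dot_obs**2)
    return np.sqrt(num / den)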
To solve the linear equation, two physical parameters, eddy kinematic viscosity and electrical conductivity of the Earth's core, must be given. Various values of eddy kinematic viscosity have been proposed, e.g., \({\nu }_{\mathrm{edd }}\sim 3 ~{\mathrm{m}}^{2}~ {\mathrm{s}}^{-1}\) (Braginsky 1991) and \({\nu }_{\mathrm{edd}} \sim 7 ~{\mathrm{m}}^{2} ~{\mathrm{s}}^{-1}\) (Davis and Whaler 1997). Matsushima (2015) adopted their average, \({\nu }_{\mathrm{edd}} \sim 5 ~{\mathrm{m}}^{2} ~{\mathrm{s}}^{-1}\), corresponding to an Ekman number, \(E={\nu }_{\mathrm{edd}}/\Omega {r}_{0}^{2} \sim 6\times {10}^{-9}\). In this paper, the same value of eddy kinematic viscosity is used.
The thickness of the viscous boundary layer, \({\delta }_{E}\) or \({\delta }_{EH}\), contains \(\left|\mathrm{cos}~\theta \right|\) in the denominator. The value of \(\left|\mathrm{cos}~\theta \right|\) in the range between \(\theta =5\pi /12\) and \(\theta =7\pi /12\) is set at \(\left|\mathrm{cos}(5\pi /12)\right|\) to avoid a singularity of \({\delta }_{E}\) at \(\theta =\pi /2\) (Matsushima 2015). Even for another range between \(\theta =17\pi /36\) and \(\theta =19\pi /36\), correlation coefficients between the resultant core flow model and the one by Matsushima (2015) are found to be more than 0.99. This indicates that the procedure to avoid a singularity at \(\theta =\pi /2\) does not have a severe influence on the core flow modeling. Thus, \({\delta }_{E} \sim 270-540~\mathrm{m}\), \({\xi }_{1}=0.2 ~\mathrm{km}<{\delta}_{E}\), and \({\xi }_{2}=2 ~\mathrm{km}\gg {\delta}_{E}\) are adopted in this paper. As found from Eq. (9), \({\delta }_{EH}^{+} \sim 174-348~\mathrm{m}\) even for \(\Lambda =2\).
Regarding the other physical parameter, core electrical conductivity, Matsushima (2015) adopted \(\sigma =3\times {10}^{5} ~\mathrm{S} ~{\mathrm{m}}^{-1}\) (Stacey 1992). However, recent first-principles calculations and high-pressure high-temperature experiments suggest that core electrical conductivity can be higher than \(\sigma =3\times {10}^{5} ~\mathrm{S} ~{\mathrm{m}}^{-1}\). For example, Pozzo et al. (2012) obtained \(\sigma =1.11\times {10}^{6} ~\mathrm{S} ~{\mathrm{m}}^{-1}\) at the CMB from first-principle calculations. From high-pressure high-temperature experiments, Ohta et al. (2016) estimated \(\sigma \sim 1\times {10}^{6} ~\mathrm{S} ~{\mathrm{m}}^{-1}\) for liquid Fe67.5Ni10Si22.5 at the CMB. It should be noted, however, that Konôpková et al. (2016) obtained a rather low value, \(\sigma \sim 2.7\times {10}^{5} ~\mathrm{S} ~{\mathrm{m}}^{-1}\), from experiments. In this paper, core electrical conductivity, as a parameter, is investigated in the range from \(\sigma =1\times {10}^{5} ~\mathrm{S} ~{\mathrm{m}}^{-1}\) to \(\sigma =1\times {10}^{7} ~\mathrm{S} ~{\mathrm{m}}^{-1}\).
First, the effect of core electrical conductivity on tangentially geostrophic core flow below the viscous boundary layer at the CMB is investigated. Table 1 shows the mean velocity over spherical surfaces at \(r={r}_{1}\) and at \(r={r}_{2}\) for core electrical conductivity between \(\sigma =1\times {10}^{5}~\mathrm{S}~{\mathrm{m}}^{-1}\) and \(\sigma =1\times {10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\). It is found that differences among the mean velocities are 0.5% at most. Table 1 also lists correlation coefficients of \({\nabla}_{H}\cdot {\varvec{V}}_{Hi}\) and \(\hat{\varvec{r}}\cdot \nabla \times {\varvec{V}}_{Hi}\) between the model for \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\) and that for each of the other values of \(\sigma\). The former and the latter correspond to the correlation coefficients for the poloidal and the toroidal components, respectively. It is found that the correlation coefficients are at least 0.98. The core electrical conductivity is related to the diffusion term in the induction equation. That is, \({\partial }^{2}{B}_{r}/\partial {r}^{2}\) depends on \(\sigma\), and it is used to estimate \({B}_{r1}\) and \({B}_{r2}\) through the Taylor expansion. The result implies that core electrical conductivity has a limited effect on core flow models through the magnetic diffusion term under the tangentially geostrophic constraint.
Table 1 Mean velocity and correlation coefficients for the respective electrical conductivity values under the tangentially geostrophic constraint in 2010
It is worth noting, however, that the tangentially geostrophic constraint is known to be too strong, in particular, near the geographic equator; the \(\theta\)-component of core flow, \({V}_{\theta }\), at the equator must vanish under the constraint. Therefore, the geomagnetic secular variation around the equator is not well explained by such tangentially geostrophic flows (Wardinski et al. 2008). In other words, ageostrophic flows crossing the equator are necessary to explain the secular variation there. Such ageostrophic flows can be regarded as deviations from tangentially geostrophic flows, and they can be estimated by relaxing the tangentially geostrophic constraint (Pais et al. 2004; Asari and Lesur 2011). In their approach, a parameter, which corresponds to the controlling parameter, \(\alpha\), in Eq. (15) in the present study, is changed so as to relax the constraint. The ageostrophic flows thus obtained are considered to result from the Lorentz force.
Hence, secondly, the effect of core electrical conductivity on tangentially magnetostrophic flow below the boundary layer is investigated. Under the tangentially magnetostrophic constraint, the core electrical conductivity is related not only to the magnetic diffusion, but also to the Lorentz force. Figure 1a–c show fluid motions near the CMB at \(r={r}_{1}\) and \(r={r}_{2}\) for \(\sigma =1\times {10}^{5}~\mathrm{S}~{\mathrm{m}}^{-1}\), \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\), and \(\sigma =1\times {10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\), respectively, at the epoch of 2010. Core flows for \(\sigma =1\times {10}^{5}~\mathrm{S}~{\mathrm{m}}^{-1}\) are found to be similar to those for \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\), whereas those for \(\sigma =1\times {10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\) are clearly different from those for \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\). In fact, the twist of horizontal flows seen in an Ekman layer at \(r={r}_{1}\) and \(r={r}_{2}\) is similarly found in Fig. 1a and b, but horizontal flows at \(r={r}_{1}\) and \(r={r}_{2}\) in Fig. 1c, for large \(\Lambda\), are found to be much more nearly parallel. The flow velocity averaged over spherical surfaces at \(r={r}_{2}\) decreases with an increase of \(\sigma\), as listed in Table 2. It should be noted, however, that the horizontal divergence for \(\sigma =1\times {10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\) appears larger than that for \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\). This dependence of poloidal and toroidal mean-flow magnitudes on the core electrical conductivity is clearly seen in Fig. 2. The mean velocity for the toroidal component does not increase or decrease with increasing core electrical conductivity. In contrast, the mean velocity for the poloidal component increases with increasing core electrical conductivity, as found from the larger horizontal divergence for higher electrical conductivity. The larger poloidal flow for higher core electrical conductivity is likely to result from the Lorentz force in the tangentially magnetostrophic constraint, because tangentially geostrophic core flows are found not to be influenced by core electrical conductivity. The Coriolis force can become relatively unimportant for very high \(\sigma\), as found in Eq. (8).
Fluid motions near the core–mantle boundary under the tangentially magnetostrophic constraint. Upper and lower figures show fluid motions at \(r={r}_{1}\) and at \(r={r}_{2}\), respectively, for a \(\sigma =1\times {10}^{5}~\mathrm{S}~{\mathrm{m}}^{-1}\), b \(\sigma =1\times {10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\), and c \(\sigma =1\times {10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\) at the epoch of 2010. Arrows show the horizontal flows, and color contours denote upwellings and downwellings given by \({\nabla }_{H}\cdot {{\varvec{V}}}_{H}\). For \(\rho =1.1\times {10}^{4}~\mathrm{kg}~{\mathrm{m}}^{-3}\), \(\Omega =7.29\times {10}^{-5}~\mathrm{rad}~{\mathrm{s}}^{-1}\), and a root-mean-square value of the radial magnetic field at the CMB, \({B}_{r}\approx 0.2~\mathrm{mT}\), \(\Lambda =\sigma {B}_{r}^{2}/\rho\Omega \approx 0.005\) for \(\sigma ={10}^{5}~\mathrm{S}~{\mathrm{m}}^{-1}\), \(\Lambda \approx 0.05\) for \(\sigma ={10}^{6}~\mathrm{S}~{\mathrm{m}}^{-1}\), and \(\Lambda \approx 0.5\) for \(\sigma ={10}^{7}~\mathrm{S}~{\mathrm{m}}^{-1}\)
Table 2 Mean velocity and correlation coefficients for the respective electrical conductivity values under the tangentially magnetostrophic constraint in 2010
Mean toroidal and poloidal velocity with respect to the core electrical conductivity. Circles and error bars represent means and \(\pm\) standard deviations, respectively, obtained for COV-OBS. x1 ranging from 1880 to 2015, at a \(r={r}_{1}\) and b \(r={r}_{2}\)
To determine the cause, mean flow velocity is investigated under the tangentially geostrophic and tangentially magnetostrophic constraints. Equation (4) for the tangentially geostrophic flow can be given as
$$\sum_{l=1}^{L}\sum_{m=0}^{l}[l\left(l+1\right)\mathrm{cos}~\theta \left\{{\overline{U}}_{l}^{mc}\mathrm{cos}~m\phi +{\overline{U}}_{l}^{ms}\mathrm{sin}~m\phi \right\}{P}_{l}^{m}\left(\mathrm{cos}~\theta \right)+\mathrm{sin}~\theta \left\{{\overline{U}}_{l}^{mc}\mathrm{cos}~m\phi +{\overline{U}}_{l}^{ms}\mathrm{sin}~m\phi \right\}\frac{d{P}_{l}^{m}}{d\theta }+m\left\{-{\overline{W}}_{l}^{mc}\mathrm{sin}~m\phi +{\overline{W}}_{l}^{ms}\mathrm{cos}~m\phi \right\}{P}_{l}^{m}\left(\mathrm{cos}~\theta \right)]=0.$$
The solutions of Eq. (17) are non-unique and underdetermined, although the number of unknowns is reduced by the basis for the tangentially geostrophic flow determined from the selection rule of the Gaunt integral (Le Mouël et al. 1985). Equation (17) has \(m\)-dependence in the \(\phi\)-direction, as found from the selection rule, and it suffices to consider the \(\mathrm{cos}~m\phi\) terms by the orthogonality of the cosine and sine functions. Equation (17) can be reduced to
$$\left\{l\left(l+1\right)\mathrm{cos}~\theta {P}_{l}^{m}+\mathrm{sin}~\theta \frac{d{P}_{l}^{m}}{d\theta }\right\}{\overline{U}}_{l}^{mc}+\left\{(l+2)\left(l+3\right)\mathrm{cos}~\theta {P}_{l+2}^{m}+\mathrm{sin}~\theta \frac{d{P}_{l+2}^{m}}{d\theta }\right\}{\overline{U}}_{l+2}^{mc}+m{P}_{l+1}^{m}{\overline{W}}_{l+1}^{ms}=0 ~~~(l=m, m+2, m+4, \cdots )$$
As mentioned above, the problem is underdetermined. In this study, therefore, a constraint
$$\int \left\{{\left({\overline{V}}_{\theta 2}\right)}^{2}+{\left({\overline{V}}_{\phi 2}\right)}^{2}\right\}dS\to \mathrm{min},$$
or equivalently, in spectral form,

$$\sum_{j=0}^{m+2j+1\le L}\frac{(m+2j)(m+2j+1)}{2\left(m+2j\right)+1}\left\{{\left({\overline{U}}_{m+2j}^{mc}\right)}^{2}+{\left({\overline{W}}_{m+2j+1}^{ms}\right)}^{2}\right\}\to \mathrm{min }$$
is added, where \(\int dS\) denotes a surface integral over a unit sphere. Here \({\overline{U}}_{l}^{mc}=1\) is prescribed, and the other coefficients are computed as values relative to it by minimizing the following function, \({\Psi }_{g}\):
$${\Psi }_{g}={\left[{\nabla }_{H}\cdot \left(\mathrm{cos}~\theta {\overline{{\varvec{V}}}}_{H}\right)\right]}^{2}+{\alpha }_{g}\int \left\{{\left({\overline{V}}_{\theta 2}\right)}^{2}+{\left({\overline{V}}_{\phi 2}\right)}^{2}\right\}dS,$$
where \({\alpha }_{g}\) is a controlling parameter. Figure 3 shows the ratio of magnitude of mean toroidal flow to that of mean poloidal flow for \(m=1\) to \(m=6\) with respect to \({\alpha }_{g}\). The truncation level for the spherical harmonics is increased from \(L=14\) to \(L=19\), to keep the number of toroidal components and poloidal components the same. The ratio of the magnitude of mean toroidal flow to that of mean poloidal flow is found to be approximately 2. To confirm the effect of the truncation level, the ratio of the magnitude of mean toroidal flow to that of mean poloidal flow is computed for various values of \(L\), as listed in Table 3. The result suggests that the effect of \(L\) is not very significant.
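The minimization of \({\Psi }_{g}\) with one coefficient pinned to unity is a small penalized least-squares problem. A generic sketch of how such a problem can be solved (not the author's code; the matrices below are random stand-ins for the discretized constraint and the surface-integral norm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))          # stand-in for the linearized constraint rows
W = np.diag(np.linspace(1.0, 2.0, n))    # stand-in for the (diagonal) smoothness norm
alpha = 0.1                              # controlling parameter, like alpha_g

# Pin u[0] = 1 and minimize |A u|^2 + alpha * u^T W u over the remaining entries.
a0, A1 = A[:, 0], A[:, 1:]
lhs = A1.T @ A1 + alpha * W[1:, 1:]
rhs = -A1.T @ a0
u = np.concatenate([[1.0], np.linalg.solve(lhs, rhs)])
```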
Control parameter, \({\alpha }_{g}\), dependence of toroidal and poloidal mean flow ratios. Circles represent the ratio of the mean toroidal flow to the mean poloidal flow magnitudes at \(r={r}_{2}\) under the tangentially geostrophic constraint, for spherical harmonic order \(m=1\) to \(m=6\)
Table 3 Mean toroidal flow to poloidal flow ratio for different truncation levels of spherical harmonics
Next, mean flow velocity under the tangentially magnetostrophic constraint is investigated. Using the same method as for the tangentially geostrophic flow, unknowns \({\overline{U}}_{l}^{mc}\), \({\overline{U}}_{l}^{ms}\), \({\overline{W}}_{l}^{mc}\), and \({\overline{W}}_{l}^{ms}\) relative to the provided \({\overline{W}}_{1}^{0}=1\) are computed by minimizing the function, \({\Psi }_{m}\):
$${\Psi }_{m}={\left[{\nabla }_{H}\cdot \left(2\mathrm{\Omega cos}~\theta {\overline{{\varvec{V}}}}_{H}+{\rho }^{-1}\sigma {B}_{r2}^{2}{\overline{{\varvec{V}}}}_{H}\times \hat{{\varvec{r}}}\right)\right]}^{2}+{\alpha }_{m}\int \left\{{\left({\overline{V}}_{\theta 2}\right)}^{2}+{\left({\overline{V}}_{\phi 2}\right)}^{2}\right\}dS,$$
where \({\alpha }_{m}\) is a controlling parameter. Figure 4 shows the \({\alpha }_{m}\)-dependence of the ratio of the magnitude of mean toroidal flow to that of mean poloidal flow in 2010 for \(\sigma ={10}^{5} ~\mathrm{S}~{\mathrm{m}}^{-1}\), \({10}^{6} ~\mathrm{S}~{\mathrm{m}}^{-1}\), and \({10}^{7} ~\mathrm{S}~{\mathrm{m}}^{-1}\). The ratio clearly decreases with increasing \(\sigma\). It is likely that this result arises from the effect of the Lorentz force proportional to \(\sigma\). This tendency is invariable for other epochs. It should be noted that these ratios are not necessarily equal to those obtained from a core flow model, simply because core flows are derived from geomagnetic field data including secular variations under the tangentially magnetostrophic constraint.
Control parameter, \({\alpha }_{m}\), dependence of toroidal to poloidal mean flow ratios. Circles represent the ratio of the mean toroidal to that of the mean poloidal flow magnitudes at \(r={r}_{2}\) under the tangentially magnetostrophic constraint, for \(\sigma =1\times {10}^{5} ~\mathrm{S}~{\mathrm{m}}^{-1}\), \(\sigma =1\times {10}^{6} ~\mathrm{S}~{\mathrm{m}}^{-1}\), and \(\sigma =1\times {10}^{7} ~\mathrm{S}~{\mathrm{m}}^{-1}\) at the epoch of 2010
Asari and Lesur (2011) attempted to compare the tangentially geostrophic and the tangentially magnetostrophic constraints by examining the resolution matrix. They found that the tangentially geostrophic constraint mainly influences the poloidal flow. On the other hand, they pointed out that the tangentially magnetostrophic constraint rather mitigates the poloidal flow. This may be related to the present result that the mean poloidal flow magnitude increases with increasing core electrical conductivity.
Driscoll and Du (2019) derived a phase diagram of the dynamo regime determined from core electrical conductivity and temperature. To maintain dynamo action driven by thermal convection, higher heat flux at the CMB is required for higher electrical conductivity, which implies higher thermal conductivity. The core electrical conductivity is likely to have been increasing, as the core temperature has been decreasing. This suggests that a thermally driven dynamo was converted into a compositionally driven dynamo by way of a thermally and compositionally driven dynamo. A similar transition of core dynamics due to the thermal history of the Earth is discussed later.
To this point, focus has been on core electrical conductivity for the investigation of core flow estimation. However, as demonstrated in Eq. (8), the tangentially magnetostrophic constraint does not depend on core electrical conductivity alone. The relative importance of the Lorentz force to the Coriolis force can be measured by the Elsasser number, \(\Lambda\), as given by Eq. (10). The Elsasser number employed in the present study is the traditional form, appropriate for steadily imposed magnetic fields. Soderlund et al. (2012, 2015) found that the dynamic Elsasser number, \({\Lambda }_{d}\), represents the ratio of the Lorentz to Coriolis forces better than the traditional one. However, it should be noted that \({\Lambda }_{d}\) is obtained from the magnetic field strength averaged over the core. In contrast, \(\Lambda\), as used in this paper, is locally defined at respective points on a spherical surface. In this sense, the traditional form of the Elsasser number is likely to be more appropriate for use in this study.
It is known that the rotation rate of the Earth has been decreasing due to tidal friction with the Moon. The day length is currently approximately 24 h, but it could have been as little as 4–6 h immediately after the Moon formed (e.g., Goldreich 1966; Mignard 1982). It follows that \(\Omega\) was approximately four to six times larger than the present value. Thus, the denominator of \(\Lambda =\sigma {B}_{r}^{2}/\rho\Omega |\mathrm{cos}~\theta |\) is likely to have been decreasing throughout the history of the Earth. Moreover, the core temperature has been decreasing since the formation of the core, as found from the thermal evolution of the Earth. This suggests that the core electrical conductivity in the past could have been smaller than the present value, and that the numerator of \(\Lambda\) has been increasing. These circumstances indicate that the Elsasser number, \(\Lambda\), could have been smaller in the past than at present. That is, core flow in the past could have been more geostrophic than the present flow state. With time, the rotation rate of the Earth decreases, and the core electrical conductivity increases. Hence, the Elsasser number, \(\Lambda\), will keep increasing. This implies that the style of magnetic field generation by poloidal and toroidal motions in the core has been changing.
In the present study, it is implicitly assumed that convective motions occur in the entire outer core. Alternatively, the presence of a thermally or compositionally stably stratified layer at the top of the outer core is suggested by some seismic studies (e.g., Tanaka 2007; Helffrich and Kaneshima 2010), material studies (e.g., Pozzo et al. 2012; Gomi et al. 2013), and geomagnetic studies (e.g., Whaler 1980; Buffett 2014). If that is the case, the core surface flow should be purely toroidal, without upwelling or downwelling. This point was examined by Whaler (1980), whose conclusion supported stable stratification at the core surface. However, this result depends on the validity of the frozen-flux hypothesis, as Gubbins (2007) pointed out a possible effect of magnetic diffusion on geomagnetic secular variations. Asari and Lesur (2011) found that purely toroidal flow is incompatible with the tangentially magnetostrophic flow. Later, Lesur et al. (2015) concluded that purely toroidal flow at the core surface cannot explain the observed geomagnetic field, although a small poloidal flow or magnetic diffusion may compensate for the incompatible part of geomagnetic secular variations. Furthermore, Takehiro and Lister (2001) demonstrated that a stably stratified layer at the top of the core can be penetrated by columnar convection, depending on the rotation rate of the Earth and the horizontal scale of vortices. Thus, even if a stably stratified layer is present at the top of the outer core, the presence of poloidal motion near the core surface cannot be ignored. Further discussion on this point is beyond the scope of the present study, and the related problems should be addressed in the future.
In this paper, the effect of core electrical conductivity in the range between \({10}^{5} ~\mathrm{S}~{\mathrm{m}}^{-1}\) and \({10}^{7} ~\mathrm{S}~{\mathrm{m}}^{-1}\) on core surface flow models was investigated. Core electrical conductivity is related to two terms in the equation to be solved: the magnetic diffusion term in the induction equation and the Lorentz force term in the Navier–Stokes equation. Tangentially geostrophic and tangentially magnetostrophic constraints were imposed for core flow beneath the viscous boundary layer at the core–mantle boundary to derive a core surface flow model from a geomagnetic field model. Magnetic diffusivity is inversely proportional to electrical conductivity, whereas the Lorentz force term is proportional to electrical conductivity. Under the tangentially geostrophic constraint, only the magnetic diffusion term has any effect on core surface flow models. It was found that core electrical conductivity has a limited effect on core flow models. In contrast, under the tangentially magnetostrophic constraint, it was found that the mean poloidal flow increases with an increase of core electrical conductivity (Fig. 2). This result arises from the Lorentz force, as found from Figs. 3 and 4, where the ratio of the magnitude of mean toroidal flow to that of mean poloidal flow is shown with respect to control parameters under the tangentially geostrophic and tangentially magnetostrophic constraints, respectively.
Furthermore, this result suggests that the ratio of the magnitude of mean toroidal flow to that of mean poloidal flow has been changing with secular change of the Elsasser number given by the ratio of the Lorentz and Coriolis forces. The Elsasser number has been increasing throughout the evolution of the Earth, because the rotation rate of the Earth has been decreasing and the core electrical conductivity has been increasing due to the decrease in core temperature. If the ratio can be estimated from magnetic field measurements of a planet, it may provide information on the core electrical conductivity of the planet.
The results of core surface flow are available from the author ([email protected]). A geomagnetic field model, COV-OBS.x1, is available from https://www.spacecenter.dk/files/magnetic-models/COV-OBSx1/
CMB:
Core–mantle boundary
Asari S, Lesur V (2011) Radial vorticity constraint in core flow modeling. J Geophys Res 116:B11101. https://doi.org/10.1029/2011JB008267
Benton ER, Muth LA (1979) On the strength of electric currents and zonal magnetic fields at the top of the Earth's core: Methodology and preliminary estimates. Phys Earth Planet Int 20:127–133
Braginsky SI (1991) Towards a realistic theory of the geodynamo. Geophys Astrophys Fluid Dyn 60:89–134
Busse F, Dormy E, Simitev R, Soward A (2007) Dynamics of rotating fluids. In: Dormy E, Soward AM (eds) Mathematical Aspects of Natural Dynamos. CRC Press, New York
Buffett B (2014) Geomagnetic fluctuations reveal stable stratification at the top of the Earth's core. Nature 507:484–487
Christensen UR, Wicht J (2015) Numerical dynamo simulation. In: Olson P (ed) Treatise on Geophysics, 2nd edn, vol 8, Elsevier, Amsterdam
Chulliat A, Olsen N (2010) Observation of magnetic diffusion in the Earth's outer core from Magsat, Ørsted, and CHAMP data. J Geophys Res 115. https://doi.org/10.1029/2009JB006994
Davis RG, Whaler KA (1997) The 1969 geomagnetic impulse and spin-up of Earth's liquid core. Phys Earth Planet Inter 103:181–194
Driscoll PE, Du Z (2019) Geodynamo conductivity limits. Geophys Res Lett 46:7982–7989. https://doi.org/10.1029/2019GL082915
Gillet N, Barrois O, Finlay CC (2015) Stochastic forecasting of the geomagnetic field from the COV-OBS.x1 geomagnetic field model, and candidate models for IGRF-12. Earth Planets Space 67:71. https://doi.org/10.1186/s40623-015-0225-z
Goldreich P (1966) History of the lunar orbit. Rev Geophys 4:411–439
Gomi H, Ohta K, Hirose K, Labrosse S, Caracas R, Verstraete MJ, Hernlund JW (2013) The high conductivity of iron and thermal evolution of the Earth's core. Phys Earth Planet Inter 224:88–103
Gubbins D (2007) Geomagnetic constraints on stratification at the top of Earth's core. Earth Planets Space 59:661–664
Helffrich G, Kaneshima S (2010) Outer-core compositional stratification from observed core wave speed profiles. Nature 468:807–812
Holme R (2015) Large-scale flow in the core. In: Olson P (ed) Treatise on Geophysics, 2nd edn, vol 8, Elsevier, Amsterdam
Kageyama A, Sato T (1997) Generation mechanism of a dipole field by a magnetohydrodynamic dynamo. Phys Rev E 55:4617–4626
Konôpková Z, McWilliams RS, Gómez-Pérez N, Goncharov AF (2016) Direct measurement of thermal conductivity in solid iron at planetary core conditions. Nature 534:99–101
Le Mouël JL, Gire C, Madden T (1985) Motions at the core surface in geostrophic approximation. Phys Earth Planet Int 39:270–287
Lesur V, Whaler K, Wardinski I (2015) Are geomagnetic data consistent with stably stratified flow at the core−mantle boundary? Geophys J Int 201(2):929–946
Matsushima M (2015) Core surface flow modelling with geomagnetic diffusion in a boundary layer. Geophys J Int 202:1495–1504. https://doi.org/10.1093/gji/ggv233
Metman MC, Livermore PW, Mound JE (2018) The reversed and normal flux contribution to axial dipole decay for 1880–2015. Phys Earth Planet Int 276:106–117. https://doi.org/10.1016/j.pepi.2017.06.007
Mignard F (1982) Long time integration of the Moon's orbit. In: Brosche P, Sündermann J (eds) Tidal friction and the Earth's rotation II. Springer, Berlin
Ohta K, Kuwayama Y, Hirose K, Shimizu K, Ohishi Y (2016) Experimental determination of the electrical resistivity of iron at Earth's core conditions. Nature 534:95–98
Olson P, Christensen U, Glatzmaier GA (1999) Numerical modeling of the geodynamo: Mechanics of field generation and equilibration. J Geophys Res 104:10383–10404
Pais MA, Oliveira O, Nogueira F (2004) Nonuniqueness of inverted core–mantle boundary flows and deviations from tangential geostrophy. J Geophys Res 109:B08103. https://doi.org/10.1029/2004JB003012
Pedlosky J (1987) Geophysical Fluid Dynamics, 2nd edn. Springer, Berlin
Pozzo M, Davies C, Gubbins D, Alfè D (2012) Thermal and electrical conductivity of iron at Earth's core conditions. Nature 485:355–358
Roberts PH, Scott S (1965) On analysis of the secular variations, 1: A hydromagnetic constraint: theory. J Geomag Geoelectr 17:137–151
Shimizu H (2006) On the use of boundary layer compatibility conditions for geodynamo modeling. E221-P001, Japan Geoscience Union Meeting 2006
Soderlund KM, King EM, Aurnou JM (2012) The influence of magnetic fields in planetary dynamo models. Earth Planet Sci Lett 333–334:9–20. https://doi.org/10.1016/j.epsl.2012.03.038
Soderlund KM, Sheyko A, King EM, Aurnou JM (2015) The competition between Lorentz and Coriolis forces in planetary dynamos. Prog Earth Planet Sci 2:24. https://doi.org/10.1186/s40645-015-0054-5
Stacey FD (1992) Physics of the Earth, 3rd edn. Brookfield Press, Brisbane, Australia, p 513
Takahashi F, Katayama JS, Matsushima M, Honkura Y (2001) Effects of boundary layers on magnetic field behavior in an MHD dynamo model. Phys Earth Planet Inter 128:149–161
Takehiro S, Lister JR (2001) Penetration of columnar convection into an outer stably stratified layer in rapidly rotating spherical fluid shells. Earth Planet Sci Lett 187(3–4):357–366
Tanaka S (2007) Possibility of a low P-wave velocity layer in the outermost core from global SmKS waveforms. Earth Planet Sci Lett 259(3–4):486–499
Tarduno JA, Cottrell RD, Watkeys MK, Hofmann A, Doubrovine PV, Mamajek EE, Liu D, Sibeck DG, Neukirch LP, Usui T (2010) Geodynamo, solar wind, and magnetopause 3.4 to 3.45 billion years ago. Science 327:1238–1240
Wardinski I, Holme R, Asari S, Mandea M (2008) The 2003 geomagnetic jerk and its relation to the core surface flows. Earth Planet Sci Lett 267:468–481
Whaler K (1980) Does the whole of the Earth's core convect? Nature 287(5782):528–530
Xu J, Zhang P, Haule K, Minar J, Wimmer S, Ebert H, Cohen RE (2018) Thermal conductivity and electrical resistivity of solid iron at Earth's core conditions from first principles. Phys Rev Lett 121:096601. https://doi.org/10.1103/PhysRevLett.121.096601
The author is very grateful to the two anonymous reviewers for their useful and constructive comments. The author thanks Editage (https://www.editage.com) for English language editing of the original manuscript.
This study was supported by JSPS KAKENHI Grant Numbers JP16H01116 and 15H05832.
Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro, Tokyo, 152-8551, Japan
MM carried out everything related to this manuscript. The author read and approved the final manuscript.
Correspondence to Masaki Matsushima.
The author declares that he has no competing interests.
Appendix: Expression of horizontal flow in an Ekman–Hartmann boundary layer
The Elsasser number in this study is defined as \(\Lambda =\sigma {B}_{r}^{2}/\rho\Omega\) (without \(\theta\)-dependence). According to Soderlund et al. (2012), this is a traditional form denoted by \({\Lambda }_{i}\), being the ratio of the Lorentz force \({\rho }^{-1}{\varvec{J}}\times {\varvec{B}}\) to the Coriolis force \({\varvec{\Omega}}\times {\varvec{V}}\) with \({\varvec{J}}=\sigma \left({\varvec{E}}+{\varvec{V}}\times {\varvec{B}}\right)\approx \sigma {\varvec{V}}\times {\varvec{B}}\) as
$${\Lambda }_{i}=\frac{JB}{\rho\Omega V}=\frac{\sigma V{B}^{2}}{\rho\Omega V}=\frac{\sigma {B}^{2}}{\rho\Omega }.$$
Alternatively, Soderlund et al. (2012) defined the dynamic Elsasser number, \({\Lambda }_{d}\) with \({\varvec{J}}={\mu }_{0}^{-1}\nabla \times {\varvec{B}}\) as
$${\Lambda }_{d}=\frac{JB}{\rho\Omega V}=\frac{{B}^{2}}{\rho {\mu}_{0}\Omega V{\ell}_{B}},$$
where the electric current density is estimated as \(J\sim B/{\mu }_{0}{\ell}_{B}\). Soderlund et al. (2012) pointed out that \({\Lambda }_{d}\) represents the ratio of the Lorentz force to the Coriolis force better than the traditional one. The assumption \({\varvec{E}}={\varvec{0}}\) made above means that the magnetic field is not strongly time-variant. This corresponds to a magnetoconvective system, in which a magnetic field is imposed.
Temporal variation of the radial magnetic field, \({\dot{B}}_{r}\), is generated by the interaction between \({{\varvec{V}}}_{H}\) and imposed \({B}_{r}\). Therefore, \({\Lambda }_{i}\) corresponding to \({\varvec{J}}\approx \sigma {\varvec{V}}\times {\varvec{B}}\) is appropriate. Then, the horizontal component of current density, which is much larger than the radial component, \({J}_{r}\ll |{{\varvec{J}}}_{H}|\), is given as
$${{\varvec{J}}}_{H}\approx \sigma {\left({\varvec{V}}\times {\varvec{B}}\right)}_{H}\approx \sigma {B}_{r}{{\varvec{V}}}_{H}\times \hat{{\varvec{r}}}.$$
Next, using Eq. (24), I derive the expression of horizontal flow in an Ekman–Hartmann boundary layer (e.g., Busse et al. 2007). The equation of motion including the effects of the Earth's rotation and magnetic field, neglecting the inertia term, can be given as
$$-\frac{1}{\rho }\nabla p-2{\varvec{\Omega}}\times {\varvec{V}}+\frac{A}{\rho }\hat{{\varvec{r}}}+\frac{1}{\rho }{\varvec{J}}\times {\varvec{B}}+\nu {\nabla }^{2}{\varvec{V}}={\varvec{0}}.$$
The horizontal components of Eq. (25) can be written as
$$-\frac{1}{\rho }\frac{1}{r}\frac{\partial p}{\partial \theta }+2\mathrm{\Omega cos}~\theta {V}_{\phi }+\frac{1}{\rho }{J}_{\phi }{B}_{r}+\nu \frac{{\partial }^{2}{V}_{\theta }}{\partial {r}^{2}}\approx 0,$$
$$-\frac{1}{\rho }\frac{1}{r\mathrm{sin}~\theta }\frac{\partial p}{\partial \phi }-2\mathrm{\Omega cos}~\theta {V}_{\theta }-\frac{1}{\rho }{J}_{\theta }{B}_{r}+\nu \frac{{\partial }^{2}{V}_{\phi }}{\partial {r}^{2}}\approx 0,$$
where \({\nabla }^{2}\approx {\partial }^{2}/\partial {r}^{2}\) is presumed. Using Eq. (24), one can obtain the magnetostrophic part of Eqs. (26a) and (26b) as
$$-\frac{1}{\rho }\frac{1}{r}\frac{\partial p}{\partial \theta }+2\mathrm{\Omega cos}~\theta {\overline{V}}_{\phi}-\frac{\sigma }{\rho }{B}_{r}^{2}{\overline{V}}_{\theta }=0,$$
$$-\frac{1}{\rho }\frac{1}{r\mathrm{sin}~\theta }\frac{\partial p}{\partial \phi }-2\mathrm{\Omega cos}~\theta {\overline{V}}_{\theta }-\frac{\sigma }{\rho }{B}_{r}^{2}{\overline{V}}_{\phi }=0.$$
Writing the velocity as the sum of the magnetostrophic mainstream flow and a boundary-layer correction, with the mainstream part varying only weakly with radius,

$${V}_{\theta}={\overline{V}}_{\theta }+{v}_{\theta }, ~{V}_{\phi }={\overline{V}}_{\phi }+{v}_{\phi }, ~\frac{{\partial }^{2}{\overline{V}}_{\theta }}{\partial {r}^{2}}=\frac{{\partial }^{2}{\overline{V}}_{\phi }}{\partial {r}^{2}}=0,$$
one can obtain
$$2\mathrm{\Omega cos}~\theta {v}_{\phi }-\frac{\sigma }{\rho}{B}_{r}^{2}{v}_{\theta }+\nu \frac{{\partial }^{2}{v}_{\theta }}{\partial {r}^{2}}=0,$$
$$-2\mathrm{\Omega cos}~\theta {v}_{\theta }-\frac{\sigma }{\rho }{B}_{r}^{2}{v}_{\phi }+\nu \frac{{\partial }^{2}{v}_{\phi }}{\partial {r}^{2}}=0.$$
Solving these equations, one obtains a solution in terms of linear combination as
$${v}_{\theta }={c}_{1}{e}^{\zeta \gamma {e}^{i\beta }}+{c}_{2}{e}^{-\zeta \gamma {e}^{i\beta }}+{c}_{3}{e}^{\zeta \gamma {e}^{-i\beta }}+{c}_{4}{e}^{-\zeta \gamma {e}^{-i\beta }},$$
$${v}_{\phi}=\left(\mathrm{sgn~cos}~\theta \right)\left(-i{c}_{1}{e}^{\zeta \gamma {e}^{i\beta }}-i{c}_{2}{e}^{-\zeta \gamma {e}^{i\beta }}+i{c}_{3}{e}^{\zeta \gamma {e}^{-i\beta }}+i{c}_{4}{e}^{-\zeta \gamma {e}^{-i\beta }}\right),$$
where \(\zeta =\sqrt{2}\xi /{\delta }_{E}\), \(\xi ={r}_{0}-r\), \(\gamma ={\left(1+{\Lambda }^{2}/4\right)}^{1/4}\), \(\mathrm{cos}~2\beta =\Lambda /2{\gamma }^{2}\), \(\mathrm{sin}~2\beta =1/{\gamma }^{2}\), and \({c}_{1}\), \({c}_{2}\), \({c}_{3}\), \({c}_{4}\) are constants. \({v}_{\theta }\) and \({v}_{\phi }\) must remain finite for \(\zeta \to \infty\), so that \({c}_{1}={c}_{3}=0\), and the total velocity \({V}_{\theta }\) and \({V}_{\phi }\) must vanish at \(\zeta =0\), so that \({v}_{\theta }=-{\overline{V}}_{\theta}\) and \({v}_{\phi }=-{\overline{V}}_{\phi}\) there. Then, one can obtain the solution as
$${V}_{\theta }={\overline{V}}_{\theta }\left\{1-{e}^{-\zeta \gamma \mathrm{cos}\beta }\mathrm{cos}\left(\zeta \gamma \mathrm{sin}\beta \right)\right\}-\left(\mathrm{sgn~cos}~\theta \right){\overline{V}}_{\phi }{e}^{-\zeta \gamma \mathrm{cos}\beta }\mathrm{sin}\left(\zeta \gamma \mathrm{sin}\beta \right),$$
$${V}_{\phi }={\overline{V}}_{\phi }\left\{1-{e}^{-\zeta \gamma \mathrm{cos}\beta }\mathrm{cos}\left(\zeta \gamma \mathrm{sin}\beta \right)\right\}+\left(\mathrm{sgn~cos}~\theta \right){\overline{V}}_{\theta }{e}^{-\zeta \gamma \mathrm{cos}\beta }\mathrm{sin}\left(\zeta \gamma \mathrm{sin}\beta \right).$$
Using \(\mathrm{cos}\beta =\sqrt{{\gamma }^{2}+\Lambda /2}/\sqrt{2}\gamma\) and \(\mathrm{sin}\beta =\sqrt{{\gamma }^{2}-\Lambda /2}/\sqrt{2}\gamma\), one can obtain
$$\zeta \gamma \mathrm{cos}~\beta =\frac{\xi }{{\delta }_{EH}^{+}} ~~\mathrm{with}~~{\delta }_{EH}^{+}=\frac{{\delta }_{E}}{{\left\{{\left(1+{\Lambda }^{2}/4\right)}^{1/2}+\Lambda /2\right\}}^{1/2}},$$
$$\zeta \gamma \mathrm{sin}~\beta =\frac{\xi }{{\delta }_{EH}^{-}} ~~\mathrm{with}~~{\delta }_{EH}^{-}=\frac{{\delta }_{E}}{{\left\{{\left(1+{\Lambda }^{2}/4\right)}^{1/2}-\Lambda /2\right\}}^{1/2}}.$$
In vector form, the horizontal flow in the boundary layer can then be written as

$${{\varvec{V}}}_{H}={\overline{{\varvec{V}}}}_{H}\left\{1-\mathrm{exp}\left(-\frac{\xi }{{\delta }_{EH}^{+}}\right)\mathrm{cos}\left(\frac{\xi }{{\delta }_{EH}^{-}}\right)\right\}+\left(\mathrm{sgn~cos}~\theta \right)\hat{{\varvec{r}}}\times {\overline{{\varvec{V}}}}_{H}\, \mathrm{exp}\left(-\frac{\xi }{{\delta }_{EH}^{+}}\right)\mathrm{sin}\left(\frac{\xi }{{\delta }_{EH}^{-}}\right).$$
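For reference, the boundary-layer profile defined by the last three equations is easy to evaluate numerically. A sketch in Python, with \({\delta }_{E}\) taken as the length unit (an assumption made here for plotting only):

```python
import numpy as np

Lam = 0.5                         # Elsasser number, e.g., the sigma = 1e7 S/m case
delta_E = 1.0                     # Ekman depth, used as the length unit here
g = np.sqrt(1.0 + Lam**2 / 4.0)   # (1 + Lambda^2/4)^(1/2)
d_plus = delta_E / np.sqrt(g + Lam / 2.0)
d_minus = delta_E / np.sqrt(g - Lam / 2.0)

xi = np.linspace(0.0, 5.0, 200)             # depth below the boundary
damp = np.exp(-xi / d_plus)
along = 1.0 - damp * np.cos(xi / d_minus)   # component along V_bar_H
across = damp * np.sin(xi / d_minus)        # component along r_hat x V_bar_H
```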
Matsushima, M. Effect of core electrical conductivity on core surface flow models. Earth Planets Space 72, 180 (2020). https://doi.org/10.1186/s40623-020-01269-0
Core electrical conductivity
Core surface flow
Tangentially geostrophic flow
Tangentially magnetostrophic flow
Elsasser number
Geomagnetism
Neural Machine Translation by Jointly Learning to Align and Translate
(Bahdanau et al., 2014) orally at ICLR 2015
I'm starting a new thing where I write about a paper every day, inspired by The Morning Paper. Let me know what you think.
This paper was the first to show that an end-to-end neural system for machine translation (MT) could compete with the status quo. When neural models started devouring MT, the dominant model was encoder–decoder. This reduces a sentence to a fixed-size vector (basically smart hashing), then rehydrates that vector using the decoder into a target-language sentence. The main contribution of this paper, an "attention mechanism", lets the system focus on little portions of the source sentence as needed, bringing alignment to the neural world. Information can be distributed, rather than squeezed into monolithic chunks.
In general, the approach to MT looks for the sentence $\mathbf{e}$ that maximizes $p(\mathbf{e} \mid \mathbf{f})$. (Think of E and F representing English and French; the goal is to translate into English.) Popular neural approaches from Cho et al. and Sutskever et al. used recurrent neural networks for their encoder and decoder. The RNN combines the current word with information it has learnt about past words to produce a vector for each input token. Sutskever et al. (2014) took the last of these outputs as their fixed-size representation, the context. The danger here is that information from early in the sentence can be heavily diluted.
The decoder is less interesting conceptually. It defines the probability of each word in terms of the context and all previously generated words. The decoder's job is then to find the target sentence that maximizes this overall probability.
The encoder–decoder framework's fixed-size representations make long sentences challenging. Bahdanau et al. reintroduce the "alignment" idea from non-neural MT—a mapping between positions in the source and target sentences. It shows, e.g., which words in the blue house give rise to the word maison in la maison bleue. In simpler words, which parts translated into which parts?
The attention mechanism is an alteration of both the decoder and the encoder.
This time, the encoder change is less exciting: they use a bidirectional RNN now, which is two RNNs where one starts from the end of the sentence. Its outputs now contain information from before and after the word in question.
The decoder is now no longer conditioned on just the single, sentence-level context. A weighted sum of the encoder outputs is used instead. The weights are a softmax of the alignment scores between the given decoder position and the encoder output vectors. The scores come out of a simple neural network.
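A toy sketch of that computation (NumPy, with made-up shapes; in the real model the matrices $W_a$, $U_a$ and vector $v_a$ are learned end to end, and this is my paraphrase of the scoring network, not the paper's code):

```python
import numpy as np

def additive_attention(s_prev, H, Wa, Ua, va):
    """Bahdanau-style scores e_j = va^T tanh(Wa s_prev + Ua h_j)."""
    scores = np.tanh(s_prev @ Wa + H @ Ua) @ va   # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over source positions
    context = weights @ H                         # weighted sum of annotations
    return context, weights

rng = np.random.default_rng(0)
T, d = 5, 4                      # source length, hidden size (toy numbers)
H = rng.standard_normal((T, d))  # bidirectional encoder outputs ("annotations")
s = rng.standard_normal(d)       # previous decoder state
Wa, Ua = rng.standard_normal((d, d)), rng.standard_normal((d, d))
va = rng.standard_normal(d)
context, w = additive_attention(s, H, Wa, Ua, va)
```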
Note that unlike in traditional machine translation, the alignment is not considered to be a latent variable. Instead, the alignment model directly computes a soft alignment, which allows the gradient of the cost function to be backpropagated through. This gradient can be used to train the alignment model as well as the whole translation model jointly.
With this new approach the information can be spread throughout the sequence of annotations, which can be selectively retrieved by the decoder accordingly.
Another benefit of their approach is that it's a soft alignment, rather than a hard one. Not only does this make it differentiable (and learnable by a NN), but also it helps for agreement.
Consider the source phrase [the man] which was translated into [l' homme]. Any hard alignment will map [the] to [l'] and [man] to [homme]. This is not helpful for translation, as one must consider the word following [the] to determine whether it should be translated into [le], [la], [les] or [l']. Our soft-alignment solves this issue naturally by letting the model look at both [the] and [man], and in this example, we see that the model was able to correctly translate [the] into [l'].
Quantitatively, their attention model outperforms the normal encoder-decoder framework. I'm doubtful of one claim, though—that the encoder-decoder model choked on long sentences. Scores kept going up, by about the same percentage as the attention model did. They're just lower to begin with.
The Bahdanau et al. model also nears the performance of a big-deal phrase-based model that supplemented the training data with a monolingual corpus. This is big for showing the viability of end-to-end neural MT.
Written on February 1, 2018
How to approximate gaussian kernel for image blur
From wiki, a $3 \times3$ gaussian kernel is approximated as: $$\frac{1}{16}\begin{bmatrix}1&2&1\\2&4&2\\1&2&1 \end{bmatrix}.$$ Applying this kernel to an image is equivalent to applying a one-dimensional kernel in the x-direction and then again in the y-direction, so the one-dimensional kernels are $$\frac{1}{4}\begin{bmatrix}1&2&1\end{bmatrix}\qquad \text{for the } 3\times3 \text{ kernel,}$$ $$\frac{1}{16}\begin{bmatrix}1&4&6&4&1\end{bmatrix}\qquad \text{for the } 5\times5 \text{ kernel.}$$ My question is how to derive this approximation?
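For context, the separability claim is easy to verify numerically (a quick check in Python with NumPy, using the kernel values above):

```python
import numpy as np

k1 = np.array([1, 2, 1]) / 4.0   # 1-D kernel
k2 = np.outer(k1, k1)            # separable 2-D kernel
print(k2 * 16)                   # [[1 2 1], [2 4 2], [1 2 1]], i.e. the 3x3 kernel / 16
```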
image-processing gaussian blur gaussian-kernel
Finley
The continuous Gaussian, whatever its dimension (1D, 2D), is a very important function in signal and image processing. As most data is discrete, and filtering can be costly, it has been, and still is, the subject of many optimization and quantification/quantization schemes. In 1D, the three most direct constructions for a finite-length filter are illustrated below:
In solid blue, the continuous Gaussian. For an $L=7$-length, I choose a Gaussian with parameter $s=\sqrt{L/2}$, in consistence with the first approximation $(P)$ given by the binomial formula or the Pascal triangle, as detailed by @Olli Niemitalo.
Each filter coefficient (red crosses) is of the form $$h_P[k] = \frac{\binom{L}{k}}{2^L}\,.$$ Their sum is naturally equal to unity, and the coefficients are dyadic rationals, of the form $a/2^b$. They are practical for integer operations on integer-valued pixels. As you can see, they are slightly above the Gaussian around the center, and below it on the tails. When $L\to \infty$, the shape tends to a bell curve; see From Pascal's Triangle to the Bell-shaped Curve.
A second approximation (green circles), often called the truncated Gaussian, consists in sampling ($S$) $x\mapsto G(x)=\frac{1}{s\sqrt{\pi}}e^{-\frac{x^2}{s^2}}$ at integer values of $x$: $[-3,-2,\ldots,0,\ldots,3]$. The samples need to be normalized to unit sum:
$$h_S[k] = \frac{G(k-(L-1)/2)}{\sum_{l=1}^L G(l-(L-1)/2)}\,.$$
The third approximation ($A$, blue circles and bars) is area-based: once again after normalization, the coefficients are proportional to the area under the Gaussian on the interval $[k-1/2,k+1/2]$:
$$h_A[k] \propto \int_{k-1/2}^{k+1/2} G(x)dx\,.$$
The exact value depends on the quadrature formulae.
In our settings (with the chosen $s$), the coefficients are, respectively from left to right (Pascal, Sampling, Area):
$$ \begin{Bmatrix} 0.0156 & 0.0232 & 0.0255\\ 0.0938 & 0.0968 & 0.0998\\ 0.2344 & 0.2282 & 0.2262\\ 0.3125 & 0.3036 & 0.2970\\ 0.2344 & 0.2282 & 0.2262\\ 0.0938 & 0.0968 & 0.0998\\ 0.0156 & 0.0232 & 0.0255\\ \end{Bmatrix} $$
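The three columns of this table can be reproduced with a few lines of Python; a sketch (constant prefactors such as the $\frac{1}{2}$ of the $\operatorname{erf}$ rule and $\frac{1}{s\sqrt{\pi}}$ cancel in the normalization, so they are omitted):

```python
import numpy as np
from math import comb, erf, sqrt

L = 7
s = sqrt(L / 2)                     # Gaussian parameter chosen above
k = np.arange(L) - (L - 1) / 2

h_P = np.array([comb(L - 1, i) for i in range(L)]) / 2.0**(L - 1)  # Pascal
G = np.exp(-k**2 / s**2)
h_S = G / G.sum()                                                  # sampled
h_A = np.array([erf((x + 0.5) / s) - erf((x - 0.5) / s) for x in k])
h_A /= h_A.sum()                                                   # area
print(np.column_stack([h_P, h_S, h_A]).round(4))
```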
Only the first column admits dyadic/integer operations. To work with integer arithmetic, one can round to the nearest integer. As $2^7=128$, this quantization choice (another approximation) leads to:
$$\frac{1}{128} \begin{Bmatrix} 2 & 3 & 3\\ 12 & 12 & 13\\ 30 & 29 & 29\\ 40 & 39 & 38\\ 30 & 29 & 29\\ 12 & 12 & 13\\ 2 & 3 & 3\\ \end{Bmatrix} $$
With $2^6=64$, you get:
$$\frac{1}{64} \begin{Bmatrix} 1 & 1 & 2\\ 6 & 6 & 6\\ 15 & 15 & 14\\ 20 & 19 & 19\\ 15 & 15 & 14\\ 6 & 6 & 6\\ 1 & 1 & 2\\ \end{Bmatrix} $$
In the case you mention in comments, the formula for $\sigma$ is a bit different: $\sigma=(3L+7)/20$, equal to $1.4$ for $L=7$. I don't know its origin yet. As said above, those are the main principles. In the Gaussian Kernel Calculator demo, with those values, you get:
$$\begin{Bmatrix} 0.031251\\ 0.106235 \\0.221252\\ 0.282524 \\0.221252\\ 0.106235\\ 0.031251 \end{Bmatrix} $$ which is quite close (up to the integral quadrature precision) to your: $$\begin{Bmatrix} 0.03125\\0.109375\\0.21875\\0.28125\\0.21875\\0.109375\\0.03125 \end{Bmatrix} $$
as the former, multiplied by 64, gives:
$$\begin{Bmatrix} 2.0001 \\ 6.7990 \\ 14.1601 \\ 18.0815 \\ 14.1601 \\ 6.7990 \\ 2.0001 \end{Bmatrix} $$
The ($A$) approximation is possibly more accurate with respect to the integral version of the convolution, with a continuous Gaussian and a discrete or piecewise constant signal.
In 2D, one can tensorize two 1D filters, in which case the 2D filter has rank one. Or one extends the above method to 2D. The 2D coefficients can be obtained by two-dimensional integral estimations:
$$h_A[m,n] \propto \iint_{[m-1/2,m+1/2]\times [n-1/2,n+1/2]} G(x,y)dxdy\,.$$
See for instance Gaussian smoothing, esp. its Figure 3, the discrete approximation to the Gaussian function with $\sigma=1.0$.
Laurent Duval
Some possible explanations for the coefficients:
Binomial coefficients
The 1-d kernels are probability mass functions of binomial distributions with probability parameter $p=1/2$ to make them symmetrical. Binomial distributions can be approximated by Gaussian distributions, so it should be true that Gaussian distributions can also be approximated by binomial distributions. You can obtain binomial distributions with $p = 1/2$ by convolving the length 2 kernel:
$$\frac{1}{2}\begin{bmatrix}1&1\end{bmatrix}$$
with itself multiple times. In Octave, without the normalization factors for clarity:
>> f = [1 1];
>> g = f            # 1/2
g =
   1   1
>> g = conv(g, f)   # 1/4
g =
   1   2   1
>> g = conv(g, f)   # 1/8
g =
   1   3   3   1
>> g = conv(g, f)   # 1/16
g =
   1   4   6   4   1
>> g = conv(g, f)   # 1/32
g =
   1   5   10   10   5   1
>> g = conv(g, f)   # 1/64
g =
   1   6   15   20   15   6   1
For large kernels the first and last binomial coefficients become very small and have hardly any effect on the result.
In comments to this answer, you also mention the length 7 kernel: $$\begin{bmatrix}0.03125&0.109375&0.21875&0.28125&0.21875&0.109375&0.03125\end{bmatrix}\\ = \frac{1}{64}\begin{bmatrix}2&7&14&18&14&7&2\end{bmatrix}.$$
This can be found in the source code of OpenCV function cv::getGaussianKernel where the small kernels up to that kernel are hard-coded:
static const float small_gaussian_tab[][SMALL_GAUSSIAN_SIZE] =
{
    {1.f},
    {0.25f, 0.5f, 0.25f},
    {0.0625f, 0.25f, 0.375f, 0.25f, 0.0625f},
    {0.03125f, 0.109375f, 0.21875f, 0.28125f, 0.21875f, 0.109375f, 0.03125f}
};
These also match the kernels cited in the question. The function is also documented, explaining the general calculation but not the rounding in the small kernels. For sizes $3$, $5$, $7$, the rounding appears to be to the nearest multiple of $\frac{1}{4}$, $\frac{1}{16}$, $\frac{1}{64}$, respectively, applied after the general calculation. Based on the documentation, the general calculation goes as:
$$\begin{array}{ll}N&\quad\text{filter length}\\ \sigma = 0.3\times((N - 1)\times0.5 - 1) + 0.8&\quad\text{standard deviation}\\ \exp(- (i - (N - 1)/2)^2/(2\sigma^2))&\quad\text{unnormalized coefficients, with }0\le i<N\end{array}$$
They do not document how that was derived. Seems ad hoc / empirical. After the above calculation, the coefficients are normalized so that their sum is 1.
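Reproducing the rounding numerically (a quick Python check following the documented formula, not OpenCV's actual source):

```python
import numpy as np

N = 7
sigma = 0.3 * ((N - 1) * 0.5 - 1) + 0.8          # = 1.4 for N = 7
i = np.arange(N)
h = np.exp(-(i - (N - 1) / 2) ** 2 / (2 * sigma ** 2))
h /= h.sum()                                     # normalize to unit sum
print(np.round(h * 64))   # -> [ 2.  7. 14. 18. 14.  7.  2.], the hard-coded kernel
```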
Olli Niemitalo
$\begingroup$ Hi, your derivation corresponds to the $3*3$ and $5*5$ cases. But what I see is $0.03125, 0.109375, 0.21875, 0.28125, 0.21875, 0.109375, 0.03125$ (normalized to $[0,1]$) in the $7*7$ case, which doesn't correspond to $1, 6, 15, 20, 15, 6, 1$. Can you help out? $\endgroup$ – Finley Dec 25 '18 at 2:56
$\begingroup$ Where are those coefficients from? $\endgroup$ – Olli Niemitalo Dec 25 '18 at 9:15
$\begingroup$ From OpenCV too, ;). It is exactly the function cv::getGaussianKernel $\endgroup$ – Finley Dec 26 '18 at 1:20
Space-time kernel based numerical method for generalized Black-Scholes equation
Marjan Uddin and Hazrat Ali
Department of Basic Sciences, University of Engineering and Technology Peshawar, Pakistan
* Corresponding author: Marjan Uddin
Received February 2019 Revised June 2019 Published December 2019
Fund Project: This work is supported by HEC Pakistan
In approximating time-dependent partial differential equations, the dominant error typically arises from the approximation of the time derivative rather than the spatial derivatives. In the present work, the time and the spatial derivatives are both approximated using space-time radial kernels. The proposed numerical scheme avoids time-stepping procedures and produces sparse differentiation matrices. The stability and accuracy of the proposed numerical scheme are tested for the generalized Black-Scholes equation.
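To make the idea concrete, here is a minimal sketch (mine, not the authors') of global space-time multiquadric collocation for a toy diffusion problem; the shape parameter and grid are arbitrary choices, and this dense global variant is shown only for brevity, whereas the paper's local scheme is what yields sparse matrices:

```python
import numpy as np

# Solve u_t = nu * u_xx on (x, t) in (0,1) x (0,1], with the exact solution
# u = sin(pi x) exp(-nu pi^2 t) supplying initial and boundary data.
nu, c = 0.1, 0.5                                 # diffusivity, shape parameter (assumed)
x = np.linspace(0, 1, 15)
t = np.linspace(0, 1, 15)
X, T = np.meshgrid(x, t)
p = np.column_stack([X.ravel(), T.ravel()])      # space-time collocation nodes

dx = p[:, None, 0] - p[None, :, 0]
dt = p[:, None, 1] - p[None, :, 1]
r2 = dx**2 + dt**2
phi = np.sqrt(r2 + c * c)                        # multiquadric kernel
phi_t = dt / phi                                 # d(phi)/dt
phi_xx = 1.0 / phi - dx**2 / phi**3              # d2(phi)/dx2

exact = np.sin(np.pi * p[:, 0]) * np.exp(-nu * np.pi**2 * p[:, 1])
interior = (p[:, 0] > 0) & (p[:, 0] < 1) & (p[:, 1] > 0)

# PDE rows at interior nodes, identity rows at initial/boundary nodes
A = np.where(interior[:, None], phi_t - nu * phi_xx, phi)
b = np.where(interior, 0.0, exact)
u = phi @ np.linalg.solve(A, b)                  # approximate solution at the nodes
print(np.abs(u - exact).max())                   # small residual if all went well
```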
Keywords: Space-time numerical scheme, Meshless method, Radial kernels, Black-Scholes equation.
Citation: Marjan Uddin, Hazrat Ali. Space-time kernel based numerical method for generalized Black-Scholes equation. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020221
V. R. Ambati and O. Bokhove, Space-time discontinuous Galerkin finite element method for shallow water flows, J. Comput. Appl. Math., 204 (2007), 452-462. doi: 10.1016/j.cam.2006.01.047. Google Scholar
P. Wilmott, J. Dewynne and S. Howison, Option Pricing: Mathematical Models and Computation, Oxford Financial Press, Oxford, 1993. Google Scholar
C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, Spectral Methods, Evolution to Complex Geometries and Applications to Fluid Dynamics, Scientific Computation, Springer, Berlin, 2007. Google Scholar
C. Chen, A. Karageorghis and Y. Smyrlis, The Method of Fundamental Solutions: A Meshless Method, Dynamic Publishers Atlanta, 2008. Google Scholar
A. Cohen, Numerical Analysis of Wavelet Methods, Studies in Mathematics and its Applications, 32. North-Holland Publishing Co., Amsterdam, 2003. Google Scholar
J. C. Cox, S. A. Ross and M. Rubinstein, Option pricing: A simplified approach, J. Financ. Econ., 7 (1979), 229-263. doi: 10.1016/0304-405X(79)90015-1. Google Scholar
G. E. Fasshauer, Meshfree Approximation Methods with MATLAB, With 1 CD-ROM (Windows, Macintosh and UNIX), Interdisciplinary Mathematical Sciences, 6. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2007. doi: 10.1142/6437. Google Scholar
G. E. Fasshauer and M. McCourt, Kernel-Based Approximation Methods Using Matlab, World Scientific pub. Co, 2015. Google Scholar
G. E. Fasshauer and J. G. Zhang, On choosing optimal shape parameters for RBF approximation, Numerical Algorithms, 45 (2007), 345-368. doi: 10.1007/s11075-007-9072-8. Google Scholar
R. Geske and K. Shastri, Valuation by approximation: A comparison of alternative option valuation techniques, Journal of Financial and Quantitative Analysis, 20 (1985), 45-71. doi: 10.2307/2330677. Google Scholar
M. Hamaidi, A. Naji and A. Charafi, Space-time localized radial basis function collocation method for solving parabolic and hyperbolic equations, Eng. Anal. Bound. Elem., 67 (2016), 152-163. doi: 10.1016/j.enganabound.2016.03.009. Google Scholar
J. Hull and A. White, The use of the control variate technique in option pricing, Journal of Financial and Quantitative analysis, 23 (1988), 237-251. doi: 10.2307/2331065. Google Scholar
M. K. Kadalbajoo, L. P. Tripathi and A. Kumar, A cubic B-spline collocation method for a numerical solution of the generalized Black-Scholes equation, Math. Comput. Modelling, 55 (2012), 1483-1505. doi: 10.1016/j.mcm.2011.10.040. Google Scholar
C. M. Klaij, J. J. W. van der Vegt and H. van der Ven, Space-time discontinuous Galerkin method for the compressible Navier–Stokes equations, J. Comput. Phys., 217 (2006), 589-611. doi: 10.1016/j.jcp.2006.01.018. Google Scholar
M. Li, W. Chen and C. S. Chen, The localized RBFs collocation methods for solving high dimensional PDEs, Eng. Anal. Bound. Elem., 37 (2013), 1300-1304. doi: 10.1016/j.enganabound.2013.06.001. Google Scholar
Z. Li and X. Z. Mao, Global multiquadric collocation method for groundwater contaminant source identification, Environmental modelling & software, 26 (2011), 1611-1621. doi: 10.1016/j.envsoft.2011.07.010. Google Scholar
Z. Li and X. Z. Mao, Global space-time multiquadric method for inverse heat conduction problem, Internat. J. Numer. Methods Engrg., 85 (2011), 355-379. doi: 10.1002/nme.2975. Google Scholar
F. Moukalled, L. Mangani and M. Darwish, The Finite Volume Method in Computational Fluid Dynamics, Fluid Mechanics and its Applications, 113. Springer, Cham, 2016. doi: 10.1007/978-3-319-16874-6. Google Scholar
H. Netuzhylov, A Space-Time Meshfree Collocation Method for Coupled Problems on Irregularly-Shaped Domains, Ph.D thesis, Zugl., Braunschweig, Techn. Univ., Diss., 2008. Google Scholar
R. Mohammadi, Quintic B-spline collocation approach for solving generalized Black–Scholes equation governing option pricing, Comput. Math. Appl., 69 (2015), 777-797. doi: 10.1016/j.camwa.2015.02.018. Google Scholar
S. A. Sauter and C. Schwab, Boundary Element Methods, Springer Series in Computational Mathematics, 39. Springer-Verlag, Berlin, 2011. doi: 10.1007/978-3-540-68093-2. Google Scholar
J. J. Sudirham, J. J. W. van der Vegt and R. M. J. van Damme, Space-time discontinuous Galerkin method for advection-diffusion problems on time-dependent domains, Appl. Numer. Math., 56 (2006), 1491-1518. doi: 10.1016/j.apnum.2005.11.003. Google Scholar
T. E. Tezduyar, S. Sathe, R. Keedy and K. Stein, Space-time finite element techniques for computation of fluid-structure interactions, Comput. Methods Appl. Mech. Engrg., 195 (2006), 2002-2027. doi: 10.1016/j.cma.2004.09.014. Google Scholar
C. Turchetti, M. Conti, P. Crippa and S. Orcioni, On the approximation of stochastic processes by approximate identity neural networks, IEEE Transactions on Neural Networks, 9 (1998), 1069-1085. doi: 10.1109/72.728353. Google Scholar
C. Turchetti, P. Crippa, M. Pirani and G. Biagetti, Representation of nonlinear random transformations by non-Gaussian stochastic neural networks, IEEE transactions on neural networks, 19 (2008), 1033-1060. doi: 10.1109/TNN.2007.2000055. Google Scholar
M. Uddin, H. Ali and A. Ali, Kernel-based local meshless method for solving multi-dimensional wave equations in irregular domain, CMES-Computer Modeling In Engineering & Sciences, 107 (2015), 463-479. Google Scholar
M. Uddin and H. Ali, The space time kernel based numerical method for Burgers equations, Mathematics, 6 (2018), 212-222. doi: 10.3390/math6100212. Google Scholar
M. Uddin, K. Kamran, M. Usman and A. Ali, On the Laplace-transformed-based local meshless method for fractional-order diffusion equation, Int. J. Comput. Methods Eng. Sci. Mech., 19 (2018), 221-225. doi: 10.1080/15502287.2018.1472150. Google Scholar
B. M. Vaganan and E. E. Priya, Generalized Cole-Hopf transformations for generalized Burgers equations, Pramana, 85 (2015), 861-867. doi: 10.1007/s12043-015-1107-4. Google Scholar
D. L. Young, C. C. Tsai, K. Murugesana, C. M. Fan and C. W. Chen, Time-dependent fundamental solutions for homogeneous diffusion problems, Engineering Analysis with Boundary Elements, 28 (2004), 1463-1473. doi: 10.1016/j.enganabound.2004.07.003. Google Scholar
H. Zhang, F. Liu, I. Turner and Q. Yang, Numerical solution of the time fractional Black-Scholes model governing European options, Comput. Math. Appl., 71 (2016), 1772-1783. doi: 10.1016/j.camwa.2016.02.007. Google Scholar
Figure 1. A typical centers arrangements in global space-time domain as well as in a local sub-domain, and sparsity of descretized operator of problem 1, where $ m = 100 $, $ n = 10 $
Figure 2. The exact solution versus the approximate solution in space-time domain corresponding to problem 1, when $ m = 1600 $ and $ n = 10 $ in domain $ (x,t)\in (-2,2)\times(0,1) $
Figure 3. The numerical solution and error in space-time domain, corresponding to problem 2 when $ m = 400 $ and $ n = 10 $ in the domain $ (x,t)\in (0,1)\times(0,1) $
Figure 4. Double barrier option prices obtained by space-time local kernel method
Figure 5. Call option prices obtained by space-time local kernel method
Figure 6. Put option prices obtained by space-time local kernel method
Table 1. Space-time (ST) solution of problem 1 for different total collocation points $ m $ and stencil size $ n $, and time integration (TI) solution in domain $ (x,t)\in (-2,2)\times(0,1) $
$ m $    $ n $    $ L_{\infty} $    ST method (C.time)    TI method (C.time)
100      10       1.23E-02          0.6783                3.2891
400      10       9.49E-02          0.7123                6.2821
1600     10       1.08E-04          0.9129                10.2370
400               1.45E-03          10.2356               9.2371
1600              2.45E-03          11.2916               13.9820
100      20       2.22E-02          9.1835                10.10491
400      20       2.89E-03          11.2349               12.7146
Table 2. Observed maximum absolute error for example 1 in Reza [20] and Kadabajoo [13] for different $ \theta $ in domain $ (x,t)\in (-2,2)\times(0,1) $
$ M = N $                          10         20         40         80         160
[20] for $ \theta=1 $              7.24E-02   3.12E-02   1.39E-02   6.08E-02   2.71E-04
[20] for $ \theta=\frac{1}{2} $    1.12E-03   2.08E-02   3.91E-05   7.19E-06   1.31E-06
The data stored in a database is generally about a single topic, for example: patients' files in a hospital, the contents of an address book, or a catalog of movies in a video store. A relational database is a collection of tables, where each row of the table is a record and each column is a field.
The results show that total lateral forces on the piles are influenced by the shadow effect as well as the superstructure mass attached to the pile cap. ... computed results with experimental data ...
A student conducted an experiment with a toy cart by varying the force applied toward the right and observing the result. The force applied to the left was constant at 50 N. The table contains the data collected from the experiment.
And similarly, rows 4 and 5 show that a halving of the mass results in a doubling of the acceleration (if force is held constant). Acceleration is inversely proportional to mass. The analysis of the table data illustrates that an equation such as F net = m*a can be a guide to thinking about how a variation in one quantity might affect another ...
Determine the axial force N, shear force V, and bending moment M acting at a cross section; substitute numerical data. Problem 4.5-16: A beam ABC with an overhang at one end supports a uniform load of intensity 12 kN/m and a concentrated load of magnitude 2.4 kN. Problem 4.5-20: The beam ABCD shown in the figure has overhangs that extend in both directions for a distance of 4.2 m...
determine how forces, masses, and accelerations are interrelated. For Newton's Third Law, the primary objective was to determine how the directions of accelerating pairs are related to each other. Data and Calculations, Part I: Newton's 2nd Law (accelerometer, force sensor). Figure 1: Experimental set-up for testing Newton's 2nd Law.
The normal force, sometimes called the loading force, arises from the elastic properties of the bodies. Here μk is the coefficient of kinetic friction and N is the magnitude of the normal force. Both μs and μk are dimensionless constants, each being the ratio of the magnitudes of two forces.
investigate machine kinematics and resulting dynamic forces. The position, velocity, acceleration and shaking forces generated by a slider-crank mechanism during operation can be determined analytically. Certain factors are often neglected from analytical calculations, causing results to differ from experimental data.
Feb 09, 2017 · These analyses are based on a subset of forces that were able to supply detailed data of sufficient quality and are published as experimental statistics in advance of all forces being able to do so. They present data on violent and sexual offences recorded by the police in the year ending March 2016, broken down by age and sex of the victim.
Note: this data is experimental and subject to further quality assurance tests. The PHDA dataset was used in order to calculate the ASMRs by vaccination status shown in Table 3. One of the main strengths of the linked PHDA is that it combines a rich set of demographic and socio-economic...
The magnitude and direction of each component for the sample data are shown in the table below the diagram. The data in the table above show that the forces nearly balance. An analysis of the horizontal components shows that the leftward component of A nearly balances the rightward component of B.
If the three forces sum to zero, the sum of the first and second forces is a force with the same magnitude as the third force but with the opposite direction (F1 + F2 = −F3). Test this quantitatively by calculating the magnitude of the sum of the two forces at 270° and 180° using the Pythagorean theorem.
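With hypothetical magnitudes for the two perpendicular forces (no values are given above), the check is a one-liner:

```python
import math

F1, F2 = 3.0, 4.0        # hypothetical magnitudes at 270 and 180 degrees
R = math.hypot(F1, F2)   # perpendicular forces add by the Pythagorean theorem
print(R)                 # 5.0 -- should equal the magnitude of the third force
```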
The table shows experimental data of the magnitude of four forces exerted on a 2 kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force exerted on the object? 6 N and 10 N. A blue sphere and a red sphere of the same diameter are released from the top of a ramp; the red sphere takes a longer time to reach the bottom of the ramp. The spheres are then rolled off a table ...
Azure devops scrum process
Contemptor dreadnought assembly instructions
Triforce chan emergency page
Notice that the magnitude of the force is a maximum when and is identically zero when . Figure 1 shows two charged particles entering a uniform magnetic field . The velocity vector of each particle is given as , indicating that both velocity vectors are perpendicular to the direction of the magnetic field.
An engineer is collecting data on four different satellites orbiting Earth. The engineer records the satellites' distances from Earth in kilometers (km) and their forces due to gravity in Newtons (N). Estimate the values for the two missing quantities. Enter your estimates into the blank boxes in the table.
• In the figure, forces F g and F N and are the only two forces on the block and they are both vertical. Thus, for the block we can write Newton's 2nd law for a positive-upward y axis, 𝑭 𝒚 = ma y as: 5.7: Some particular forces Fig. 5-7 (a) A block resting on a table experiences a normal force perpendicular to the tabletop. (b) The
3. experimental work - field and laboratory…..27 3.1 overview Now, a constitutive model comprising MR-stress relation is chosen, describing the resilient property of the material. The model employed in this study can be seen in equation 2.2.
The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers.
The directions of the arrows show the directions of the forces, and the lengths of the arrows represent the strengths of the forces. ... This force continues to act during the time shown in the table. Which row of the table could be a correct representation of the object's speed between 0 seconds and 6 seconds? ... . Scientists share the ...
How much is a yorkie puppy worth
How to read survey stakes
Blood in baby poop pictures
of the four-particle system, least negative first. ——— The three vector forces to sum are all of the same magnitude, but their directions are most nearly parallel in case c. So: (a) c> b> a. The contributions to the potential are each equal, so: (b) a= b= c. Q4. In Fig. 13-23, two particles, of masses m and 2m, are fixed in place on an ...
These forces balance each other out at the actual prediction of the data instance. SHAP is based on magnitude of feature attributions. The feature importance plot is useful, but contains You can cluster your data with the help of Shapley values. The goal of clustering is to find groups of similar instances.
Download Experiment 3C Equilibrium of Concurrent Forces Survey. yes no Was this document useful for you? Thank you for your participation!
Jun 25, 2014 · Table 5 shows the surface tension results of the three analyzed liquids, which have been attained from experimental data presented in table 4. Table 5. Burette B. Results of surface tension at 25 °C.
A book sits on the table: What forces act on it? ... are used to show the relative magnitude and direction of all forces acting on an object. This diagram shows four forces acting upon an object. There aren't always four forces. Problem 1 A book is at rest on a table top. Diagram the forces acting on the book. In this diagram, there are ...
2 days ago · Table 6 lists the various experimental data and geometry taken from literature to validate the numerical results. The interfacial interactions of various bubble sizes, when divided into several velocity groups, have been found to be different, i.e. the interaction in the bubbly regime is very different from that in the churn-turbulent regime.
Cocalico school board election 2021
Nov 03, 2021 · Abstract The aim of the present study was to estimate the volume CT dose index (CTDIvol), dose length product (DLP) and effective dose (ED) to patients from five multi-detector computed tomography angiography (MDCTA) procedures: brain, carotid, coronary, entire aorta and lower limb from four medical institutions in Tanzania; to compare these doses to those reported in the literature, and to ...
Brooklyn College 4 Part II: Rectangular resolution and equilibrium of coplanar forces On another piece of plain white paper, redraw Figure 4, but this time to scale, letting 1.0 cm correspond to the force due to 10 grams. See Figure 6. 1. Draw a line from the end of F 1
interval. This force is the magnitude of the kinetic frictional force. 5. Repeat Steps 2-4 two more measurements. Record the values in the data table. 6. Average the results to determine the reliability of your measurements. 7. Add masses totaling 250 g to the block. Repeat Steps 2 - 6, recording values in the data table. 8.
Determine the axial force N, shear force V, and bending moment M acting at a cross section Substitute numerical data Problem 4.5-16 A beam ABC with an overhang at one end supports a uniform load of intensity 12 kN/m and a concentrated load of magnitude 2.4 kN Problem 4.5-20 The beam ABCD shown in the figure has overhangs that extend in both directions for a distance of 4.2 m...The table shows experimental data of the magnitude of four forces exerted on an object. Describe the horizontal and vertical motion of this object? Force Gravity Friction Horizontal Applied Normal Magnitude (N) 20 N 20 NType of Force Magnitude Force due to gravity Force of friction Horizontal applied force Normal force The table shows experimental data of the magnitude of four forces exerted on a object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers.
Transcribed image text: Type of Force Force due to gravity Force of friction Horizontal applied force Normal force Magnitude (N) 10 N 2N 8N 10 N The table shows experimental data of the magnitude of four forces exerted on a 2 kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object?
The table shows experimental data of the magnitude of four forces
• Data from 31 countries over the period 2014 to 2019 show that about 1 in 5 people reported having experienced discrimination on at least one of the Readers are encouraged to visit the websites of the contributing organizations, where they can find additional information on the impact of COVID-19...
Mina regular font free download
The table shows the vertical position as a function of time for an object that is dropped from a height of 5 m. A student must determine the acceleration of the object. Which of the following procedures could the student use to make the determination? Justify your selections. Select two answers. Time (s) Vertical Position (m) 0.0 5.0 0.3 4.7 0 ...
Best thanksgiving buffets in nycA book sits on the table: What forces act on it? ... are used to show the relative magnitude and direction of all forces acting on an object. This diagram shows four forces acting upon an object. There aren't always four forces. Problem 1 A book is at rest on a table top. Diagram the forces acting on the book. In this diagram, there are ...
Hryfine fitness trackervalues in the data table 2. Data Analysis (Give attention to use correct units and significant figures.) 1- Use equation (3) and calculate the period of oscillation for each trial and record the results in the data table 1. 2- Use equation (2) and Calculate gravitational acceleration (g) for each trial and record the values in the data table 1.
How to use totems in islands roblox 2021Blueberry girl tik tok
Gather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4.Inputting the values above yields a magnitude of 48.693 N. Finally, the angle of the force is calculated using the formula tan^-1 ( Fy/Fx) = 28.92 degrees. Example #2: This next example will involve only 3 different forces and angles, but some of the forces will have angles greater than 90 degrees. Those are as follows: 20 N @ 110 degrees3. experimental work - field and laboratory…..27 3.1 overview Now, a constitutive model comprising MR-stress relation is chosen, describing the resilient property of the material. The model employed in this study can be seen in equation 2.2.Leaked results of an NYPD poll showed 56 percent of respondents would've chosen a different career if they'd known then what they know now. And gun violence in the borough has significantly increased, with 28 percent more shootings reported this year as compared to last, according to police data.
Drama china romantis 2020 sub indo
Blu view 2 frp bypass without pc
Roblox grand piece online wiki
A student conducted an experiment with a toy cart by varying the force applied toward the right and observing the result. The force applied to the left was constant at 50 N. The table contains the data collected from the experiment.Experiment 4 Vector Addition The Force Table PDF. Force Table Lab Abi Riddle s Physics Lab Objectives The purpose of this experiment is to show that the magnitude and direction of the June 16th, 2018 - The Force Table Is An Apparatus That Allows The Experimental Determination Of The.
To prepare effective tables and figures in a scientific paper, authors must first know when and how to use them. Article provides tips on preparing effective tables and figures. At the manuscript screening stage, these display items offer reviewers and journal editors a quick overview of the study findings...
table shows experimental of magnitude of 4 forces exerted on 2 kg object as it slides across horizontal surface. which of the following can show magnitude of bet forces. 6N and 10 N blue sphere and red sphere same diameter released from top of the ramp. red sphere takes a longer time to reach bottom of the ramp. spheres then rolled off table ...100% (3 ratings) Transcribed image text: Type of Force Force due to gravity Force of friction Horizontal applied force Normal force Magnitude (N) 10 N 2N 8N 10 N The table shows experimental data of the magnitude of four forces exerted on a 2 kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Sketch the vectors and show the vector sum. Include a coordinate system. ... is that the forces on the force table are closely related to the masses you attach to the string. In fact, if you use a mass m, the magnitude of the force is ... Setup the force table with a force F that has a magnitude of 2.5 N at an angle.
Terryza mini pc not turning on
Nov 03, 2021 · Abstract The aim of the present study was to estimate the volume CT dose index (CTDIvol), dose length product (DLP) and effective dose (ED) to patients from five multi-detector computed tomography angiography (MDCTA) procedures: brain, carotid, coronary, entire aorta and lower limb from four medical institutions in Tanzania; to compare these doses to those reported in the literature, and to ... Leaked results of an NYPD poll showed 56 percent of respondents would've chosen a different career if they'd known then what they know now. And gun violence in the borough has significantly increased, with 28 percent more shootings reported this year as compared to last, according to police data.
Telecom dataset kaggle
For example, most of my friends have never thought about the UN Convention on the Rights of the Child. a date printed on something you buy that shows that it may be less safe to eat or less effective after this date. Small crimes against the planet.
Vampire diaries fanfiction stefan hides injury
Set up the table at shown. Place 50 grams in pans 1 and 2. Make both angles b b equal to 5 degrees. Experimentally determine the mass needed to hang from pan 3 to put the system into equilibrium. Repeat the measurement, changing the angles b b each time, in 5° increments, until you reach 80°.
The magnitude and direction of each component for the sample data are shown in the table below the diagram. The data in the table above show that the forces nearly balance. An analysis of the horizontal components shows that the leftward component of A nearly balances the rightward component of B.Sign-magnitude notation is the simplest and one of the most common methods of representing positive and negative numbers either side of zero, (0). Thus negative numbers are obtained When dealing with binary arithmetic operations, it is more convenient to use the complement of the negative number.The figure shows four cylinders of various diameters filled to different heights with the same fluid. ... it is allowed to return to a room temperature of 20°C. The experimental data is shown in the table. Calculate the number of molecules and moles of gas in the cylinder. ... All four particles receive the same magnitude of force but not all ...
Mansfield accident today
3 Force Table and Equilibrium Condition for Forces In this experiment, you will use the force table shown on the right. The force table is round table with a center pin and angle divisions marked around its outside edge. A ring is placed around the center pin that has strings attached to it. Those strings are run over pulleys at different locations around the table and various masses are hung ...In this video David explains how to find the magnitude of the electric field created by a point charge and solves a few examples problems to find the okay so we know that electric charges create electric fields and we know the definition of the electric field is the amount of force per charge what charge...
How much is a kennel license in massachusetts
Play this game to review 1D Motion. A student pushes a 12 N book to the right with a force of 10 N. The book experiences a frictional force of 3 N. The free-body force diagram below represents the forces acting on the book. What is the magnitude of the net force acting on the book?The magnitude of the slope of the line is the magnitude of the acceleration the masses experienced. (The slope will be positive or negative depending on the direction of rotation of the pulley.) Record, in the Constant Net Force Table, the experimental acceleration (aex) for Run 6. 10.
2.1.4 Classification of forces: External forces, constraint forces and internal forces. When analyzing forces in a structure or machine, it is conventional to classify forces as external forces; constraint forces or internal forces. External forces arise from interaction between the system of interest and its surroundings. Examples of external forces include gravitational forces; lift or drag ...raw data. Then you might do some calculations on the raw data, and plot the results. In that case, your results section should show (1) table of original data, (2) graph of original data, (3) calculations (equations and table of calculated results), and (4) graphs of calculated results.interval. This force is the magnitude of the kinetic frictional force. 5. Repeat Steps 2-4 two more measurements. Record the values in the data table. 6. Average the results to determine the reliability of your measurements. 7. Add masses totaling 250 g to the block. Repeat Steps 2 - 6, recording values in the data table. 8.Distribution of statistical data shows how frequent the values in a data set occurs. In the graph above, the percentages represent the amount of values that fall within each section. The highlighted percentages basically show how much of the data falls close to middle of the graph.The following equations both demonstrate equilibrating forces, one by setting the vector sum equal to zero and another by showing the resultant force to be equal in magnitude, but opposite in direction of vectors. OBJECTIVE. The purpose of this lab is to use vector addition by graphical and component methods in order to show equilibrating forces.Nov 03, 2021 · Abstract The aim of the present study was to estimate the volume CT dose index (CTDIvol), dose length product (DLP) and effective dose (ED) to patients from five multi-detector computed tomography angiography (MDCTA) procedures: brain, carotid, coronary, entire aorta and lower limb from four medical institutions in Tanzania; to compare these doses to those reported in the literature, and to ...
Nov 05, 2020 · The Hall Potential in a Silver Ribbon. Figure \(\PageIndex{2}\) shows a silver ribbon whose cross section is 1.0 cm by 0.20 cm. The ribbon carries a current of 100 A from left to right, and it lies in a uniform magnetic field of magnitude 1.5 T. (a)€€€€ Figure 1 shows an aircraft flying at a constant velocity and at a constant height above the ground. Figure 1 Complete the free body diagram in Figure 2 to show the other two forces acting on the aircraft. Figure 2 (2) 1 (b)€€€€ A small aircraft accelerated down a runway at 4.0 m/s2In this case r and v are in the plane of the gure, the torque cross product must be oriented perpendicular to the plane. A counterweight of mass m = 4.40 kg is attached to a light cord that is wound around a pulley as shown in the gure below.Rotherham council housing association phone numberClayton homes tiny homes pricesExperimental measurements show that particulate layers may experi- ence electrical breakdown at average electric field strengths across the layers of approximately 5-15 kV/cm.11'12 For temperatures and pressures encountered in precipitators, it takes an electric field strength of approximately 15-30 kV/cm to cause electrical breakdown of the ...
Gather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4. The results show that total lateral forces on the piles are influenced by the shadow effect as well as the superstructure mass attached to the pile cap. ... computed results with experimental data ... Gather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4. We calibrated our model to experimental data that consists of measurements of mitochondrial A sensitivity analysis of the respiration rates showed that only seven parameters can be identified The parameters of the calcium uniporter flux are given in Table 9. 2.1.6. External proton leakage.4 150 129.9 75.0 80 27.4 75.2 110 0 110.0 100 96.6 25.9 force mag x comp y comp F F F R 199.12 14.32 R 199.6N • Calculate the magnitude and direction. 199.1N 14.3N tan 4.1 • Determine the components of the resultant by adding the corresponding force components. Rx 199.1Ry 14.3 Four forces act on bolt A as shown. Determine theData Structures. To begin with, your interview preparations Enhance your Data Structures concepts with the Python DS Course. By defining an explicit function which computes the magnitude of a given vector based on the below mathematical formula
We present a forwarding table that allows fast IP routing lookups in software. Pessimistic calculations based on experimental data show that Pentium Pro and Alpha 21164 processors can do at least two million full IP routing lookups per second. No traffic locality is assumed.
A unit of measurement is a definite magnitude of a physical quantity, defined and adopted by convention and/or by law, that is used as a standard or measurement of the same physical quantity. Moreover, tables of thermodynamic data, especially the older ones, use calories instead of joules.3. experimental work - field and laboratory…..27 3.1 overview Now, a constitutive model comprising MR-stress relation is chosen, describing the resilient property of the material. The model employed in this study can be seen in equation 2.2.
The rising of the shield hero volume 23 illustrations
How to connect samsung tv to spectrum cable
Hotel toiletries business
San ramon permit centerThe table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers.)
Your choice between Primary data collection and secondary data collection depends on the nature, scope, and area of your research as well as its aims and objectives. Walking you through them, here are a few reasons; Integrity of the Research.1998 lexus ls400 upper control armof the four-particle system, least negative first. ——— The three vector forces to sum are all of the same magnitude, but their directions are most nearly parallel in case c. So: (a) c> b> a. The contributions to the potential are each equal, so: (b) a= b= c. Q4. In Fig. 13-23, two particles, of masses m and 2m, are fixed in place on an ...The hypothetical data in Figure 11.5 show the combined effect of pH and temperature on the Since most mathematical models have four parameters, the minimum number of experimental data As the procedure of fitting the model to the experimental data is based on the minimization of the residuals...Label all forces with their agents, and indicate the direction of the acceleration and of the net force. The formulas are concise and can be used to predict new data. Solved: Chapter 4, Problem 4/067 Determine The Magnitude O ... Chapter 5 Supplemental Problems Forces In Two Dimensions ...
Illustrator eyedropper between documents
Gather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4.
Swift performance cdnTable 2 species the various conversion factors between mks, cgs, and fps units. Note that, rather confusingly (unless you are an engineer in the US If one of the quantities in your calculation turns out to the the small difference between two much larger num-bers, then you may need to keep more than...
forces. Note that this has the same magnitude as R~ . PART 2: Force Table 9. Use the level to level the force table. 10. Set three pulleys on the force table in the magni-tude and direction of A~ , B~ , and C~ . Note: the mass hanger has its own mass. Let 1.00 N = 100 g on the force table. 11. Add a fourth vector to equalize the forces. This ..., the experiment, frictional forces CANNOT be neglected. The student uses experimental data to create two graphs. Figure 1 is a graph of kinetic energy of the object as a function of time. Figure 2 is a graph of the object-Earth system's gravitational potential energy as a function of time. HowBesides viewing table names and table definitions in SQL Developer, you can view the data stored in the table, and the SQL statement used to To do so, from the Tools menu, select SQL Worksheet. A detailed description of the SELECT statement is in Oracle Database SQL Language Reference.One has magnitude 7 lb and points in the direction of the positive $x$ -axis, so it is represented by the vector $7 \mathrm{i}$. The So if we have an object located at the origin in a three dimensional accordance system, it's an equilibrium by four forces and we know the forces F one through four.100% (3 ratings) Transcribed image text: Type of Force Force due to gravity Force of friction Horizontal applied force Normal force Magnitude (N) 10 N 2N 8N 10 N The table shows experimental data of the magnitude of four forces exerted on a 2 kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Determine the magnitude of force F so that the resultant of the three forces is as small as possible. It is clearly impossible to make the resultant force zero so they are in Could anyone tell me the direction I should be heading in? (incidentally, the book's answer for the magnitude of F is 2.03KN).The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers. A- 6N. B- 10N. C- 28N. D- 30N
Nike sb undefeated dunk
Arnica and hypericum for root canal
Married at first sight season 3 castJun 21, 2016 · The photograph above shows the setup of the apparatus. Note: The masses must be able to move at least 50 cm before hitting the ground, to collect enough data to analyze. Also, when one mass touches the ground, the other mass should hang at least 10 cm below the pulley, so that the lighter mass will not launch up into the pulley and damage the ...
Play this game to review 1D Motion. A student pushes a 12 N book to the right with a force of 10 N. The book experiences a frictional force of 3 N. The free-body force diagram below represents the forces acting on the book. What is the magnitude of the net force acting on the book?These individual forces are shown in Figure 2a.2 below. (Note that the vertical forces on m 2 have been omitted as they will not be needed in this particular analysis.) The net force on m 1 is downward and equals m 1g - T. The net force on m 2 is to the right and equals T - f. Since the accelerations of m 1 and m 2 must have the same magnitude ...3. Describe how intermolecular forces influence the relative vapor pressure of a pure substance. 4. Understand the use of graphical methods to extract thermodynamic information from experimental pressure and temperature data. 5. Utilize Dalton's Law of Partial Pressures, and the Ideal Gas Law, to relate experimental data to properties of 3 Force Table and Equilibrium Condition for Forces In this experiment, you will use the force table shown on the right. The force table is round table with a center pin and angle divisions marked around its outside edge. A ring is placed around the center pin that has strings attached to it. Those strings are run over pulleys at different locations around the table and various masses are hung ...Transcribed image text: Type of Force Force due to gravity Force of friction Horizontal applied force Normal force Magnitude (N) 10 N 2N 8N 10 N The table shows experimental data of the magnitude of four forces exerted on a 2 kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object?the sum of the external forces. 7.2 Worked Examples 7.2.1 Linear Momentum 1. A 3.00kg particle has a velocity of (3.0i−4.0j) m s. Find its x and y components of momentum and the magnitude of its total momentum. Using the definition of momentum and the given values of m and v we have: p = mv = (3.00kg)(3.0i−4.0j) m s = (9.0i−12.j) kg·m s
Mchenry border terriers
Zotac firestorm oc scanner
Realsense depth to point cloud
2.1.4 Classification of forces: External forces, constraint forces and internal forces. When analyzing forces in a structure or machine, it is conventional to classify forces as external forces; constraint forces or internal forces. External forces arise from interaction between the system of interest and its surroundings. Examples of external forces include gravitational forces; lift or drag ...Other properties do not; the diameter of a planet, for example, although quoted in tables of data, is a mean value. The same is true for the thickness of a piece of paper or the diameter of a wire. These measurements will vary somewhat at different places. It is important to realize what sort of data you are dealing with. Sampling. Brooklyn College 4 Part II: Rectangular resolution and equilibrium of coplanar forces On another piece of plain white paper, redraw Figure 4, but this time to scale, letting 1.0 cm correspond to the force due to 10 grams. See Figure 6. 1. Draw a line from the end of F 1The results show that total lateral forces on the piles are influenced by the shadow effect as well as the superstructure mass attached to the pile cap. ... computed results with experimental data ... Jun 21, 2016 · The photograph above shows the setup of the apparatus. Note: The masses must be able to move at least 50 cm before hitting the ground, to collect enough data to analyze. Also, when one mass touches the ground, the other mass should hang at least 10 cm below the pulley, so that the lighter mass will not launch up into the pulley and damage the ...
Pet friendly houses for rent in san francisco
The magnitude of the work depends on the mass of the object, the strength of the gravitational pull on it, and the height through which it is raised. The First Law of Thermodynamics evolved from the experimental demonstration that heat and mechanical work are interchangeable forms of energy.Determine the axial force N, shear force V, and bending moment M acting at a cross section Substitute numerical data Problem 4.5-16 A beam ABC with an overhang at one end supports a uniform load of intensity 12 kN/m and a concentrated load of magnitude 2.4 kN Problem 4.5-20 The beam ABCD shown in the figure has overhangs that extend in both directions for a distance of 4.2 m...The results show that total lateral forces on the piles are influenced by the shadow effect as well as the superstructure mass attached to the pile cap. ... computed results with experimental data ... , , Legit online dispensaries ship all 50 states paypalThe table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers.Tsunami coastal hazard is modeled along the US East Coast (USEC), at a coarse regional (450 m) resolution, from coseismic sources located in the Açores Convergence Zone (ACZ) and the Puerto Rico Trench (PRT)/Caribbean Arc areas. While earlier work only considered probable maximum tsunamis, here we parameterize and simulate 18 coseismic sources, with magnitude M8-9 and return periods $$\\sim ...
4 link suspension for s10
:In this case r and v are in the plane of the gure, the torque cross product must be oriented perpendicular to the plane. A counterweight of mass m = 4.40 kg is attached to a light cord that is wound around a pulley as shown in the gure below.
:The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers. Brooklyn College 4 Part II: Rectangular resolution and equilibrium of coplanar forces On another piece of plain white paper, redraw Figure 4, but this time to scale, letting 1.0 cm correspond to the force due to 10 grams. See Figure 6. 1. Draw a line from the end of F 1the experiment, frictional forces CANNOT be neglected. The student uses experimental data to create two graphs. Figure 1 is a graph of kinetic energy of the object as a function of time. Figure 2 is a graph of the object-Earth system's gravitational potential energy as a function of time. How
Embryo transfer equipmentJun 21, 2016 · The photograph above shows the setup of the apparatus. Note: The masses must be able to move at least 50 cm before hitting the ground, to collect enough data to analyze. Also, when one mass touches the ground, the other mass should hang at least 10 cm below the pulley, so that the lighter mass will not launch up into the pulley and damage the ... , , Why did people support manifest destinyThe experimental data are rather controversial, and there is no general agreement about ... Reference to tables. As Table 3 shows, there is a significant difference between the two groups. The most striking result to emerge from the data is that ... Interestingly, this correlation is related to ...Q1. The figure below shows overhead views of four situations in which forces act on a block that lies on a frictionless floor. If the force magnitudes are chosen properly, in which situation it is possible that the block is (a) stationary and (b) moving with constant velocity? a y≠0 a=0 a y≠0 a=0 F net F net Q5. In which situations does theMom and son date ideas.
Cheap hotels in sumter sc
A quantity growing by four orders of magnitude implies it has grown by a factor of 10,000 or 104. 128 bits (16 bytes) - size of addresses in IPv6, the successor protocol of IPv4 - minimum cipher strength of the Rijndael and AES encryption standards, and of the widely used MD5 cryptographic...
Rewrite sentences exercises pet pdfGather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4.An engineer is collecting data on four different satellites orbiting Earth. The engineer records the satellites' distances from Earth in kilometers (km) and their forces due to gravity in Newtons (N). Estimate the values for the two missing quantities. Enter your estimates into the blank boxes in the table.
Baltimore tv tower locationsthe experiment, frictional forces CANNOT be neglected. The student uses experimental data to create two graphs. Figure 1 is a graph of kinetic energy of the object as a function of time. Figure 2 is a graph of the object-Earth system's gravitational potential energy as a function of time. HowMay 01, 2021 · Figure 1b shows a small peak that is 1 s (20 data points) wide with the peak height representing k = 2, a common usage in LOD calculations. Note that the peak does not appear symmetrical, as the random noise still comprises much of the signal. This is typical of actual experimental data for analyses performed on samples at or near the LOD. Recent experiments have shown that it is possible to generate a plasma in a magnetic nozzle that is separated by a significant distance from the rf antenna that supplied the power to maintain the plasma. A lot of the physics is yet to be discovered/explained and the experiment will shed considerable light...
Dressed kd hardwood sizesTo prepare effective tables and figures in a scientific paper, authors must first know when and how to use them. Article provides tips on preparing effective tables and figures. At the manuscript screening stage, these display items offer reviewers and journal editors a quick overview of the study findings...values in the data table 2. Data Analysis (Give attention to use correct units and significant figures.) 1- Use equation (3) and calculate the period of oscillation for each trial and record the results in the data table 1. 2- Use equation (2) and Calculate gravitational acceleration (g) for each trial and record the values in the data table 1. 2 days ago · Table 6 lists the various experimental data and geometry taken from literature to validate the numerical results. The interfacial interactions of various bubble sizes, when divided into several velocity groups, have been found to be different, i.e. the interaction in the bubbly regime is very different from that in the churn-turbulent regime. • In the figure, forces F g and F N and are the only two forces on the block and they are both vertical. Thus, for the block we can write Newton's 2nd law for a positive-upward y axis, 𝑭 𝒚 = ma y as: 5.7: Some particular forces Fig. 5-7 (a) A block resting on a table experiences a normal force perpendicular to the tabletop. (b) TheThe data stored in a database is generally about a single topic. For example: Patients' files in a hospital The contents of an address book A catalog of movies in a video store. A relational database is a collection of tables, where each row of the table is a record and each column is a field.
Can dogs eat cooked potatoesmass is a property - a quantity with magnitude ; force is a vector - a quantity with magnitude and direction; The acceleration of gravity can be observed by measuring the change of velocity related to change of time for a free falling object: a g = dv / dt (2) where. dv = change in velocity (m/s, ft/s) dt = change in time (s) 2.1.4 Classification of forces: External forces, constraint forces and internal forces. When analyzing forces in a structure or machine, it is conventional to classify forces as external forces; constraint forces or internal forces. External forces arise from interaction between the system of interest and its surroundings. Examples of external forces include gravitational forces; lift or drag ...The magnitude of displacement is the distance covered by the body. # ŞŦλ¥ Şλ₣E. New questions in Physics. A charge Q is to be divided on to small objcts what shold be the value of the charges on the objects so that force between the object will we maximum. …The directions of the arrows show the directions of the forces, and the lengths of the arrows represent the strengths of the forces. ... This force continues to act during the time shown in the table. Which row of the table could be a correct representation of the object's speed between 0 seconds and 6 seconds? ... . Scientists share the ...Oct 19, 2020 · The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it - Brainly.com. Answer:6N and 10NExplanation:The forces acting along the horizontal axis arw the horizontal force applied and the frictional force. Frictional force is a force …. pa0296594pa0296594. 10/19/2020. Experimental Data Experimental Data 40 0 0.2 0.4 0.6 0.8 1.0 Distance Time 5 cm 15 cm 25 cm 35 cm 0.2 s 0.4 s 0.6 s 0.8 s Distan c e (c m) 30 20 10 0 Time (s) CALIFORNIA STANDARDS TEST GRADE Released Test Questions Science 8 1 The graph below shows the movement of an object at several points in time. Object Movement . 55 50 45 40 35 30 25 20 15 ...
The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers.table shows the grouping of data from lowest to highest. It shows that the frequency of that number an the numbers before. Frequency table results for Falls: Count = 40. FallsFrequencyRelative FrequencyCumulative Relative Frequency 0 8 0.2 0. 1 17 0.425 0. 2 4 0.1 0. 3 5 0.125 0. 4 3 0.075 0.4 4 4 2 4 4 FIGURE 13.4 A faceted shallow thermal etch pit on the (001) surface of spinel. 2 and 4 refer to the height of the steps (0.2 nm and (A) 0.4 nm). ation refers to the average change in the spacing of the must form if a ridge forms. Determine the axial force N, shear force V, and bending moment M acting at a cross section Substitute numerical data Problem 4.5-16 A beam ABC with an overhang at one end supports a uniform load of intensity 12 kN/m and a concentrated load of magnitude 2.4 kN Problem 4.5-20 The beam ABCD shown in the figure has overhangs that extend in both directions for a distance of 4.2 m...Experimental measurements show that particulate layers may experi- ence electrical breakdown at average electric field strengths across the layers of approximately 5-15 kV/cm.11'12 For temperatures and pressures encountered in precipitators, it takes an electric field strength of approximately 15-30 kV/cm to cause electrical breakdown of the ... The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers.A quantity growing by four orders of magnitude implies it has grown by a factor of 10000 or 104. Orders of magnitude (entropy) — The following list shows different orders of magnitude of To help compare different orders of magnitude, the following list describes various mass levels between...The data stored in a database is generally about a single topic. For example: Patients' files in a hospital The contents of an address book A catalog of movies in a video store. A relational database is a collection of tables, where each row of the table is a record and each column is a field.The results show that total lateral forces on the piles are influenced by the shadow effect as well as the superstructure mass attached to the pile cap. ... computed results with experimental data ...
Can you get reactivated with amazon flex
The finite element model correlates well with experimental data obtained from the four-point bending tests. The figure below shows experimental and numerical results for a laminate with a Moment Index of 15 and core thicknesses of 6.35 mm and 12.7 mm, respectively.Turning to the variation in coverage, Table 2 shows the fraction of eligible people who actually received a pension in each of the 13 regions of Namibia in Alternatively, we can use the expected pension variable; estimates have the same magnitude and the opposite sign. 13In theory, it should have been...
1963 rambler ambassador 990 for sale near florida
Tips to find magnitude of 2 forces when given the magnitude of their resultant. How can I fix this table so that the mass in the first row stays on top of the first three columns and mass 2 over the other three. Data Science. Arduino. Bitcoin.
How to find gas meter capacity
The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers. A- 6N. B- 10N. C- 28N. D- 30N
determine how forces, masses, and accelerations are interrelated. For Newton's Third Law, the primary objective was to determine how the direction of accelerating pairs are related to each other. Data and Calculations Part I: Newton's 2nd Law Accelerometer Force Sensor Figure 1: Experimental Set up for testing Newton's 2nd Law.Lab 1 - Force Table Introduction All measurable quantities can be classified as either a scalar or a vector. A scalar has only magnitude while a vector has both magnitude and direction. Examples of scalar quantities are the number of students in a class, the mass of an object, or the speed of an object, to name a few.
Ultrasound technician schools near me
Tsunami coastal hazard is modeled along the US East Coast (USEC), at a coarse regional (450 m) resolution, from coseismic sources located in the Açores Convergence Zone (ACZ) and the Puerto Rico Trench (PRT)/Caribbean Arc areas. While earlier work only considered probable maximum tsunamis, here we parameterize and simulate 18 coseismic sources, with magnitude M8-9 and return periods $$\\sim ...
Music player for google drive
2007 north river seahawk for sale
The normal force sometimes called the loading force arises from the elastic properties of the bodies. Where μk is the coefficient of static friction and N is the magnitude of the normal force. Both μs and μk are dimensionless constants, each being the ratio of the magnitudes of two forces.Gather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4.
Sand epoxy primer before filler
Nras properties for rent
2.1.4 Classification of forces: External forces, constraint forces and internal forces. When analyzing forces in a structure or machine, it is conventional to classify forces as external forces; constraint forces or internal forces. External forces arise from interaction between the system of interest and its surroundings. Examples of external forces include gravitational forces; lift or drag ...
Japan tsunami 2020 effects
Simplex door lock manual
4. Billie Budten and Mia Neezhirt are having an intense argument at the lunch table. They are adding two force vectors together to determine the resultant force. The magnitude of the two forces are 3 N and 4 N. Billie is arguing that the sum of the two forces is 7 N. Mia argues that the two forces add together to equal 5 N. Who is right? Explain.We calibrated our model to experimental data that consists of measurements of mitochondrial A sensitivity analysis of the respiration rates showed that only seven parameters can be identified The parameters of the calcium uniporter flux are given in Table 9. 2.1.6. External proton leakage.
Steam games crashing on startup reddit
Is my partner good for me quiz
Meritor pinion seal cross reference
Cumulus evpn configuration
Mina noodle font free download
Gather data: Keeping the mass at 1.0 kg and the velocity at 10.0 m/s, record the magnitude of centripetal acceleration for each given radius value. Include units. Radius: 2.0 m 4.0 m 6.0 m 8.0 m 10.0 m Acceleration: Radius factor: Acceleration factor: 4. Sketch the vectors and show the vector sum. Include a coordinate system. ... is that the forces on the force table are closely related to the masses you attach to the string. In fact, if you use a mass m, the magnitude of the force is ... Setup the force table with a force F that has a magnitude of 2.5 N at an angle
Orion ez finder deluxe
Shadow vpn free fire hack
Barn door tracker 3d print
Lutefisk dinners 2021 minnesota
Oracle ex employee portal
2) Add the vectors by the polygon method to find each resultant. Record the magnitude and direction of the resultant (that you measure by the ruler-protractor set) in the Table 2 shown below. These are your measured values. 3) Solve for the same resultant that you found in Step 2, but this time by using the analytical method (by calculation and ...Lab 1 - Force Table Introduction All measurable quantities can be classified as either a scalar or a vector. A scalar has only magnitude while a vector has both magnitude and direction. Examples of scalar quantities are the number of students in a class, the mass of an object, or the speed of an object, to name a few.
Capital one software engineer leetcode
The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface A B 6-10 A blue sphere and red sphere with the same diameter
Pastebin mega nz chat
How to lower launch angle with driver
However, experiments show that Highway Network performs no better than ResNet, which is kind of The overall archiecture is shown in the below table: DenseNet architectures for ImageNet. To investigate the relationship between path length and the magnitude of the gradients flowing through it.• Data from 31 countries over the period 2014 to 2019 show that about 1 in 5 people reported having experienced discrimination on at least one of the Readers are encouraged to visit the websites of the contributing organizations, where they can find additional information on the impact of COVID-19...
Craigslist used motorcycles for sale near me
How to bypass samsung a10s google account
Toyota prius actuator
2018 jeep renegade seat belt chime disable
Free pos software download
Torque and moment of inertia answer key
Screenwriting agents near me
School of prophets and seers pdf
Laser training cartridge review
Kenworth air assist clutch
Streamlit select slider
Elf headband lord of the rings
(c) LO 3.A.1.3, SP 5.1; LO 4.C.1.1, SP 2.2; LO 5.A.2.1, SP 6.4; LO 5.B.3.3, SP 1.4, 2.2 2 points Describe how the experimental data could be analyzed to confirm or disconfirm the hypothesis that the spring constant of the spring inside the launcher has the same value for different compression distances.The magnitude of the work depends on the mass of the object, the strength of the gravitational pull on it, and the height through which it is raised. The First Law of Thermodynamics evolved from the experimental demonstration that heat and mechanical work are interchangeable forms of energy.
Plotly express color map
Download Experiment 3C Equilibrium of Concurrent Forces Survey. yes no Was this document useful for you? Thank you for your participation! The table shows experimental data of the magnitude of four forces exerted on a 2kg object as it slides across a horizontal surface. Which of the following could represent the magnitude of the net force that is exerted on the object? Select two answers. A- 6N. B- 10N. C- 28N. D- 30N
Solving systems of equations by elimination steps pdf
A unit of measurement is a definite magnitude of a physical quantity, defined and adopted by convention and/or by law, that is used as a standard or measurement of the same physical quantity. Moreover, tables of thermodynamic data, especially the older ones, use calories instead of joules.
Amerigroup vision providers
Pricing template ppt free download
One has magnitude 7 lb and points in the direction of the positive $x$ -axis, so it is represented by the vector $7 \mathrm{i}$. The So if we have an object located at the origin in a three dimensional accordance system, it's an equilibrium by four forces and we know the forces F one through four.forces. Note that this has the same magnitude as R~ . PART 2: Force Table 9. Use the level to level the force table. 10. Set three pulleys on the force table in the magni-tude and direction of A~ , B~ , and C~ . Note: the mass hanger has its own mass. Let 1.00 N = 100 g on the force table. 11. Add a fourth vector to equalize the forces. This ...
Cypress sso recipe
Keep2share payment options
The hypothetical data in Figure 11.5 show the combined effect of pH and temperature on the Since most mathematical models have four parameters, the minimum number of experimental data As the procedure of fitting the model to the experimental data is based on the minimization of the residuals...forces. Note that this has the same magnitude as R~ . PART 2: Force Table 9. Use the level to level the force table. 10. Set three pulleys on the force table in the magni-tude and direction of A~ , B~ , and C~ . Note: the mass hanger has its own mass. Let 1.00 N = 100 g on the force table. 11. Add a fourth vector to equalize the forces. This ...The magnitude of the slope of the line is the magnitude of the acceleration the masses experienced. (The slope will be positive or negative depending on the direction of rotation of the pulley.) Record, in the Constant Net Force Table, the experimental acceleration (aex) for Run 6. 10.
Amazon flex how to change delivery area | CommonCrawl |
Population Health Metrics
Estimating the current mean age of mothers at the birth of their first child from household surveys
John Bongaarts & Ann K. Blanc

Population Health Metrics volume 13, Article number: 25 (2015)
Estimates of the period mean age at first birth are readily available for countries with accurate vital statistics (i.e., in much of the developed world). In contrast, in most developing countries vital statistics are lacking or incomplete and estimates of the period mean age at first birth are therefore often unavailable. The Demographic and Health Surveys (DHS) program provides a large set of demographic and health statistics for many developing countries, but not the mean age at childbearing or the mean age at first birth.
We propose two different methods for the estimation of the period mean age at first birth from information collected in DHS surveys. The first method is the same as the one used in populations with accurate vital statistics and is based on a weighted average of single year of age first birth rates. The second is the singulate mean age at first birth.
A comparison of the two estimates obtained from the latest surveys in 62 countries shows excellent agreement in countries in which there is no evidence of a rise in childlessness. But, as expected on theoretical grounds, there is less agreement in populations that have experienced an increase in the proportion childless.
Based on these results, we recommend the first method. The measure is relatively straightforward to calculate and, since it refers to recent births, is presumably more accurately reported than indicators based on events that occurred in the more distant past. This measure makes it possible for the first time to assess recent trends in the onset of childbearing in developing countries with multiple DHS surveys and to compare recent period estimates of the mean age at first birth among countries.
Becoming a parent for the first time is one of life's most important and influential events. It signals the onset of the responsibility for ensuring the well-being and success of one's offspring and of the next generation. For women, the age at which they have a first birth can have implications for schooling, labor force participation, and overall family size [1]. Early childbearing is also associated with elevated risks to the health of the mother and her child [2]. As a consequence, there is a large literature on the individual, social, and cultural determinants and consequences of this event and its timing in the life cycle [3–5]. In addition, a renewed interest in the wellbeing of adolescent girls has led to investments in programs intended to delay childbearing and increase access to family planning [6, 7]. Thus, the age at which women have a first birth is an important indicator of the success of these efforts. Finally, delayed childbearing slows population growth by increasing the length of time between generations and decreasing population momentum [8].
Estimates of both cohort and period mean ages at first birth are available for countries with reliable vital statistics. For example, EUROSTAT [9] and the Human Fertility Database [10] provide historical estimates for many countries in Europe and other high-income countries for single years from the 1980s to around 2010 and for a substantial number of birth cohorts. In contrast, in most developing countries vital statistics are lacking or incomplete and estimates of period and cohort mean ages at first birth are therefore often unavailable. The Demographic and Health Surveys (DHS) program – under which nationally representative household surveys are conducted in developing countries – provides many valuable statistics on demographic and health processes, but does not report on the period age at childbearing or age at first birth (mean or median). Instead, the standard reports provide the cohort median age at first birth as calculated from a birth history reported retrospectively by women of reproductive age. In principle, the DHS could also report the mean age at childbearing for cohorts of women but such means would be biased downward because of incomplete childbearing experience of all but the oldest women.
For many analytic purposes estimates of period measures are of greatest interest because, in contrast to cohort medians, they allow assessments of recent trends in the timing of the onset of childbearing for specific reference periods. The objective of this research note is to propose two different methods for the estimation of mean age at first birth from information collected in DHS and similar surveys. Both measures are unaffected by changes in the population age structure, thus allowing undistorted comparisons of the timing of the onset of childbearing between populations and over time within populations. Estimates are calculated for the most recent surveys in 62 countries.
The equation for estimating the period mean age at first birth used widely in countries with vital statistics [10] is
$$ M(t)=\frac{\sum_{a=0}^{a_{\max}}\left(a+0.5\right)b(a,t)}{\sum_{a=0}^{a_{\max}} b(a,t)} $$
where

M(t) = average age at first birth at time t

b(a,t) = the age-specific birth rate for birth order one at (single) age a and time t

a_max = the highest age at which first births are observed
This period mean age at first birth is defined as the mean age at which women would bear their first child if they went through the reproductive years having the first birth rates observed in a particular period.
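To make equation (1) concrete, here is a minimal sketch of the computation in Python; the function name and the input vector of single-year first birth rates are hypothetical stand-ins, not part of the authors' published code.

```python
def period_mean_age_first_birth(rates, a_min=10):
    """Equation (1): the rate-weighted mean of (a + 0.5).

    rates : sequence of single-year first birth rates, where rates[i]
            is b(a, t) at age a = a_min + i (hypothetical input, e.g.
            tabulated from vital statistics or a survey).
    """
    numerator = sum((a_min + i + 0.5) * b for i, b in enumerate(rates))
    denominator = sum(rates)
    return numerator / denominator
```

Because the weights are the first birth rates themselves, the result depends only on the schedule of rates, not on the population age structure.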
Numbers of births recorded in vital statistics are typically large and birth rates are available by single age and single year. As a result, M(t) can be estimated on an annual basis.
In contrast, in applications of this equation to DHS surveys, samples of births in a single year are relatively small. To obtain more robust estimates of the mean age at first birth for a survey, we calculate b(a,t) by single year of age for a period of three years before each survey. In addition, we exclude surveys with sample sizes of currently married women below 3000 to minimize sampling errors.
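A sketch of the tabulation this implies is below; building the weighted event and exposure tallies from the birth-history records (in DHS files, via century-month codes) is the laborious part and is assumed to have been done already, so both inputs here are hypothetical.

```python
def single_year_first_birth_rates(first_births, exposure):
    """Single-year first birth rates b(a, t) for the three years
    before a survey.

    first_births : dict, age -> weighted count of first births
                   observed at that age within the window
    exposure     : dict, age -> weighted woman-years lived at that
                   age within the window
    """
    # A rate is defined only where women were actually exposed.
    return {a: n / exposure[a]
            for a, n in sorted(first_births.items())
            if exposure.get(a, 0) > 0}
```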
An alternative approach to estimating the period mean age at first birth is to rely on a method that is widely used to estimate the mean age at first marriage, called the "singulate mean age at marriage" [11, 12]. The application of this approach to estimate the mean age at first birth was first mentioned by Casterline and Trussell [13] and subsequently implemented by Afzal and Kiani [14] and Booth [15]. The equation is as follows
$$ M^{*}(t)=\frac{\sum_{a=0}^{a_{\max}}p\left(a,t\right)-a_{\max}\,p\left(a_{\max},t\right)}{1-p\left(a_{\max},t\right)} \qquad (2) $$
M*(t) = average age at first birth at time t
p(a,t) = the proportion of women who have not yet given birth at age a and time t
p_max = p(a_max,t), the proportion of women who have never had a birth by age a_max
This mean age at first birth is defined as the mean age at which women would bear their first child if they went through the reproductive years experiencing the age-specific proportions childless observed at time t.
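A matching sketch of equation (2), under the same caveat that the inputs are toy values: p maps single ages to proportions still childless, and the check uses a degenerate schedule in which every woman has her first birth at exactly age 20, so the mean must come out to 20.

```python
# Equation (2): the "singulate" mean age at first birth.
def singulate_mean_age_first_birth(p, a_max):
    """p[a] is the proportion of women still childless at age a and time t."""
    p_max = p[a_max]
    total = sum(p[a] for a in range(a_max + 1))  # sum_{a=0}^{a_max} p(a,t)
    return (total - a_max * p_max) / (1 - p_max)

# Sanity check: universal first birth at exact age 20 gives a mean of 20.
p = {a: (1.0 if a < 20 else 0.0) for a in range(41)}
print(singulate_mean_age_first_birth(p, a_max=40))  # 20.0
```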
In Additional file 1 we demonstrate that the two means are equal (i.e., M(t) = M*(t)) under the condition that the shape of the function p(a,t) by age is invariant with respect to time. This implies that p(a,t) can shift to higher or lower ages over time (with corresponding changes in first birth rates and in the mean age) but with no change in shape and with constant p_max.
Estimates of M(t) and M*(t) were obtained with equations (1) and (2) for the most recent DHS surveys in 62 developing countries for which data files are available for public use (and with sample sizes of married women above 3000) (see Note 1). The number of respondents in each survey varies but typically is between 5000 and 10,000 women of reproductive age. For many countries several surveys are available, so time series of M(t) and M*(t) can also be calculated. Further details about the surveys are available on the DHS website [16].
Estimates of b(a,t) are obtained from birth histories with a simple variant of the standard DHS method for calculating age-specific birth rates for the three years before the survey [17]. To estimate b(a,t), two changes are made in this method: (1) birth rates are calculated by single year of age rather than by five-year age intervals, and (2) the numerators of the birth rates exclude births of order two and higher. Estimates of p(a,t) are also calculated with a variant of the standard DHS method, estimating the proportion nulliparous by single year of age rather than by five-year age intervals.
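The tallying logic can be sketched as follows; the record fields (birth_year, first_birth_year) are illustrative stand-ins for the DHS birth-history variables, and the whole-year age arithmetic is a simplification of the month-based DHS procedure.

```python
from collections import defaultdict

def first_birth_rates(records, survey_year, window=3):
    """b(a,t) over the `window` calendar years before the survey.

    Simplified sketch: whole-year ages and one dict per woman; the real
    DHS calculation works in century-month codes.
    """
    births = defaultdict(int)      # first births, by single age
    exposure = defaultdict(float)  # woman-years of exposure, by single age
    for woman in records:
        for year in range(survey_year - window, survey_year):
            age = year - woman["birth_year"]
            if 10 <= age <= 49:
                exposure[age] += 1.0
                if woman.get("first_birth_year") == year:
                    births[age] += 1  # orders two and higher are excluded
    return {a: births[a] / exposure[a] for a in sorted(exposure)}

women = [{"birth_year": 1985, "first_birth_year": 2007},
         {"birth_year": 1990}]  # the second woman is still childless
print(first_birth_rates(women, survey_year=2009))  # rate 1.0 at age 22, 0 elsewhere
```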
Finally, it should be noted that values of p(a,t) are subject to substantial sampling errors at ages above 40, because the proportions childless at these ages are usually less than five percent and the number of respondents is smaller than at younger ages. To minimize the effects of these errors on estimates of the mean age at first birth, the value of a_max is set at 40 years and p_max is estimated as the average of single-age values of p(a,t) between ages 35 and 45.
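The a_max and p_max conventions translate directly into code; the proportions below are again toy values, chosen so that childlessness levels off at 3 %.

```python
A_MAX = 40  # cap on a_max described in the text

def estimate_p_max(p):
    """Average p(a,t) over ages 35-45 to damp sampling noise."""
    ages = range(35, 46)
    return sum(p[a] for a in ages) / len(ages)

# Toy proportions childless, levelling off at 3 % (illustrative only)
p = {a: (1.0 if a < 18 else max(0.03, 1.0 - 0.12 * (a - 18))) for a in range(50)}
print(round(estimate_p_max(p), 3))  # 0.03
```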
Figure 1 plots the estimates of M(t) on the horizontal axis and the value of M*(t) on the vertical axis. Each marker represents the most recent survey in each of the 62 countries. The results are presented in two clusters: the solid markers represent surveys in which p_max is less than 5 % and the open circles represent surveys with p_max > 5 %. This distinction is made to separate observations in which the conditions are met for M(t) to be equal to M*(t) from observations in which they are not. As noted in Additional file 1, a key condition for the equality of M(t) and M*(t) is that p_max is constant. Unfortunately, it is not easy to determine the rate of change in p_max, because some countries have only one survey and, even in countries with multiple surveys, the rate of change in p_max is erratic due to small sample sizes. Instead, we assume that countries with p_max less than 5 % have seen little change in p_max over time, thus approximating the condition that p_max is constant. In surveys where p_max is higher than 5 % there has likely been change over time, because early in the fertility transition p_max is typically a very small number.
Fig. 1 Period mean age at first birth (M* vs M)
It is therefore expected that M(t) is closer to M*(t) for surveys in the cluster with p_max < 5 %. As is evident from Fig. 1, this is indeed the case. For these surveys the average values of M(t) and M*(t) are 21.0 and 21.2, respectively, a difference of only 0.2 years (which is not statistically significant). However, the agreement is not perfect and the solid markers are spread around the diagonal in Fig. 1 with a standard error of 0.32 years (see Note 2). The second cluster of countries with open circles includes several surveys in which M*(t) is substantially higher than M(t). This finding is likely attributable to an upward bias in M*(t) when values of p_max are rising (the rare cases in this cluster with M(t) higher than M*(t) are probably attributable to measurement or reporting errors). Our working assumption therefore is that M(t) is an unbiased estimator of the mean age at first birth even in surveys in the second cluster. In addition, all except one of the countries in the second cluster have a mean age at first birth of 22 or higher. This result is not unexpected, as there tends to be a positive correlation between age at first birth and the proportion of women who remain childless.
A full analysis of levels and trends in all 62 countries is beyond the scope of this methodological study, but a few findings can be noted. Estimates of M(t) vary widely among countries from a low of 19.1 in Niger (2006) to a high of 24.7 in the Maldives (2009). The unweighted averages of M(t) for countries in each of four regions are presented in Table 1. The low value for sub-Saharan Africa is unsurprising since this continent has not progressed as far through the fertility transition as the other regions. North Africa/West Asia and South Asia have the highest averages and Latin America has intermediate values.
Table 1 Average and standard deviation of country estimates of M(t) by region
Figure 2 presents trends in M(t) for selected countries in the developing and developed world. Estimates for Egypt, Nigeria, India, Kenya, and Bangladesh show very modest increases from the 1990s to near 2010. The mean ages at first birth for Japan, the Czech Republic, the UK, and the US are mostly substantially higher and have been rising at a more rapid pace than in the five developing countries included in the figure.
Fig. 2 Period mean age at first birth for selected developing and developed countries
As noted, DHS published reports provide estimates of the retrospectively reported cohort median age at first birth. These medians are estimated from birth histories obtained from respondents of reproductive age. The age at first birth is calculated by subtracting the woman's date of birth from the date of birth of her first child. Medians for the cohorts aged 25–29 at the time of the survey and above are available for nearly all DHS surveys because the medians are reached before age 25 (i.e., at least half of women have had a birth before age 25). For a small number of surveys medians are available for the cohorts aged 20–24 when the median is below age 20.
These cohort medians have the advantage of being available for all DHS surveys but there are also drawbacks: 1) the median refers to past experience of cohorts and is therefore not as current as is preferable for many analytic purposes; 2) the retrospective reporting of the date of the first birth may suffer from recall errors that are likely to increase as the time since the event rises; and 3) the cohort median as calculated by DHS is not independent of the quantum of first births and can change over time even if the mean is constant. The first two of these disadvantages also apply to cohort mean ages at first birth, a measure we do not discuss because it is very rarely used as it can only be estimated accurately for women who have completed their childbearing.
To illustrate, Fig. 3 presents the estimates of the medians obtained from women aged 25–29, 30–34, 35–39 and 40–44 from six surveys in Kenya. Time series of medians are plotted as the thin lines, with one line for each of the age groups of women. Each data point is plotted in the year in which a given cohort reaches its median age. For example, if women aged 30 to 34 reported a median age at first birth of 20 years in a survey conducted during 2010, then this data point is plotted at 1998.0. This assumes that women aged 30 to 34 are on average 32.5 years old; with a median age at first birth of 20 years, their first birth occurred 12.5 years before the survey (i.e., age at survey – median age at first birth = 32.5 – 20 = 12.5). The reference date to which the median age at first birth applies is therefore 12.5 years before the survey date (i.e., reference date of survey – time before the survey to which the median age at first birth refers = 2010.5 – 12.5 = 1998.0). This approach allows the comparison of cohort medians reported in different surveys and of cohort and period means [18, 19]. With fully accurate reporting of the timing of first births the lines of medians plotted in Fig. 3 would exactly overlap (assuming no selectivity of migration and mortality). For example, women aged 35–39 should report a median that is the same as the median reported by women aged 25–29 in a survey conducted ten years earlier. The fact that the lines do not match indicates misreporting. In particular, it seems that the older cohorts have moved the reported time of the first birth closer to the survey date, so that their reported medians are higher for most years than the medians reported by younger cohorts for the same years. This pattern is consistent with earlier analyses of data quality undertaken by Blanc and Rutenberg [18] and Gage [20].
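The plotting convention reduces to one line of arithmetic, sketched here with the numbers from the Kenya example in the text.

```python
def median_reference_date(survey_date, cohort_midpoint_age, median_age):
    """Year in which a cohort reached its median age at first birth."""
    years_since_first_birth = cohort_midpoint_age - median_age
    return survey_date - years_since_first_birth

# Worked example from the text: survey in mid-2010, women aged 30-34
# (midpoint age 32.5), reported median of 20 years.
print(median_reference_date(2010.5, 32.5, 20.0))  # 1998.0
```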
Fig. 3 Period mean and cohort median age at first birth, based on five surveys in Kenya
Figure 3 also plots the time series of the period mean age at first birth, M(t), as a solid line based on five surveys between 1989 and 2008/9. (The points in this line are plotted 1.5 years before the survey date to account for the fact that the mean is based on births in a three-year period before the survey.) The period mean shows a rise between the 1989 and 1998 surveys but remains flat from 1998 to 2008/9.
The period means and cohort medians are not directly comparable because they are different metrics of different distributions, but by plotting the data in comparable years (as discussed above) some tentative conclusions can be reached. In particular, the medians reported by women aged 25–29 are lower than the means. This pattern is as expected because the distribution of first births is skewed to higher ages. Comparisons of period means and cohort medians in other countries yield broadly similar results (data not shown).
It should be emphasized that survey data and any measures derived from them are subject to various reporting and non-reporting errors, including omission of births, displacement of births in time, and variations in sample selection and implementation [18, 21–24]. In particular, misreporting of the date of recent births has implications for assessing levels and trends in fertility. As shown by Schoumaker [24], in a number of countries with DHS surveys such errors are non-trivial and lead to underestimation of total fertility rates (TFR). Given that the calculation of M(t) is based on recent births, the known biases in the reporting of distant first births by older women are likely to be minimized. Interestingly, our estimate of M(t) remains unaffected if errors are proportionally the same at all ages. The reason is that the age-specific birth rates b(a,t) appear in both the numerator and the denominator of equation (1). An error of, say, 10 % in all b(a,t) values would lead to an error of 10 % in the TFR, but there would be no error in M(t). In reality, errors in b(a,t) are likely to vary somewhat by age, and that would lead to a bias in M(t). Furthermore, errors in birth histories would not affect M*(t), unless women misreport their childlessness status at the time of the survey.
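The cancellation argument is easy to verify numerically. The rates are the same toy values used earlier, and equation (1) is restated inline so the check is self-contained.

```python
def m(b):  # equation (1), restated inline
    return sum((a + 0.5) * r for a, r in enumerate(b)) / sum(b)

rates = [0.0] * 18 + [0.05, 0.10, 0.12, 0.10, 0.05]
inflated = [1.10 * r for r in rates]  # a uniform 10 % reporting error

print(round(m(rates), 2))     # 20.5
print(round(m(inflated), 2))  # 20.5 -- the common factor cancels out of M(t)
```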
In addition to reporting errors in the birth history, the mean age at first birth estimates could be biased by women's misreports of their own date of birth, especially if the misreporting is linked to fertility. If, for example, a woman who has begun childbearing early overstates her age due to negative social norms around early childbearing, or if an interviewer estimates her age based on her childbearing status (in places where knowledge of birth dates is uncommon), then the mean age at first birth would be overestimated. The completeness and accuracy of birth date reporting, of both women and their children, is likely to have improved over time, a factor that should be kept in mind when assessing trends.
The timing of the onset of parenthood is a key indicator used in studies of the determinants and consequences of early childbearing as well as an indicator of the success of various programmatic interventions. Annual estimates of the period mean age at first birth from vital statistics are widely available in most developed countries. In contrast, vital statistics of high quality are lacking in the large majority of developing countries and sample surveys such as the DHS are the primary source of demographic and health indicators. The published indicators from these data include the retrospectively reported median but not the period mean age at first birth. Both medians and means are dependent on the quality of reporting in the birth history as well as reporting of their own birth dates by women.
We assessed two methods to estimate the period mean age at first birth. The first method is the same as the one used in populations with accurate vital statistics, and the second is the singulate mean age at first birth. A comparison of the two estimates obtained from 62 DHS surveys shows excellent agreement in countries in which there is no evidence of an increase in childlessness. But, as expected on theoretical grounds, there is less agreement in populations that have experienced a rise in the proportion childless. We therefore prefer the first method. The measure is readily calculated as a straightforward variant of the standard procedure used by DHS to estimate period fertility rates and its reference period (the three years prior to the survey) is the same as the published total fertility rates. In addition, it refers to recent births and is, therefore, presumably more accurately reported than indicators based on events that occurred in the distant past. Since this new measure makes it possible for the first time to assess recent trends in the onset of childbearing in developing countries with multiple DHS surveys and to compare recent period estimates of the mean age at first birth among countries, we suggest that it be considered for inclusion in published DHS reports.
Note 1: DHS surveys do not provide estimates of birth rates or proportions ever having a birth for women under age 15. However, in the average survey 2.3 % of 15-year-olds have ever given birth, and very small proportions of all births therefore occur below age 15. These are estimated as follows: the proportion ever having a birth at age 14 is assumed to be one third of the proportion at age 15; the proportion ever having a birth at age 13 is assumed to be one third of the proportion at age 14; and so on. Age-specific birth rates under age 15 are calculated directly from these proportions.
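The one-third recursion in Note 1 can be written out explicitly; the 2.3 % starting value is the survey average cited in the note.

```python
def proportions_under_15(p15=0.023, down_to=10):
    """Extrapolate proportions ever having given birth below age 15."""
    props = {15: p15}
    for age in range(14, down_to - 1, -1):
        props[age] = props[age + 1] / 3.0  # one third of the age above
    return props

print(proportions_under_15())
# roughly {15: 0.023, 14: 0.0077, 13: 0.0026, 12: 0.00085, 11: 0.00028, 10: 0.00009}
```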
Note 2: Furthermore, the two means are not exactly comparable because the first method estimates the mean for the three years before the survey and the second method estimates the mean at the time of the survey. As a result, the timing of the means is about 18 months apart. This implies that when childbearing is being postponed, the first mean is slightly lower than the second. For example, if the mean is rising at a rate of 1 year per decade (i.e., 0.1 per year), then the two means will differ by about 0.15 years.
DHS: Demographic and Health Surveys
TFR: Total fertility rate
1. National Research Council and Institute of Medicine. Growing up Global: The Changing Transitions to Adulthood in Developing Countries. Panel on Transitions to Adulthood in Developing Countries. Lloyd CB, editor. Committee on Population and Board on Children, Youth, and Families, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press; 2005.
2. Blanc AK, Winfrey W, Ross J. New findings for maternal mortality age patterns: aggregated results for 38 countries. PLoS One. 2013;8(4):e59864.
3. United Nations. Adolescent Fertility since the International Conference on Population and Development (ICPD) in Cairo. New York: United Nations Population Division; 2013.
4. Dixon-Mueller R. How young is "too young"? Comparative perspectives on adolescent sexual, marital, and reproductive transitions. Stud Fam Plan. 2008;39(4):247–62.
5. Gupta N, Mahy M. Adolescent childbearing in sub-Saharan Africa: Can increased schooling alone raise ages at first birth? Demogr Res. 2003;8(4):93–106.
6. DFID. A new strategic vision for girls and women: stopping poverty before it starts. London: UK Department for International Development; 2011.
7. Ki-Moon B. Global Strategy for Women's and Children's Health. New York: United Nations; 2010.
8. Bongaarts J. Population policy options in the developing world. Science. 1994;263(5148):771–6.
9. EUROSTAT. http://appsso.eurostat.ec.europa.eu/nui/submitViewTableAction.do;jsessionid6=rYMZ9wi2xbfmambPuCVcQOk3W6yPPH26neBsnUxmIVgQUUW3K9wQ!1673912419. Accessed 15 March 2015.
10. Human Fertility Database. Max Planck Institute for Demographic Research (Germany) and Vienna Institute of Demography (Austria). 2014. http://www.humanfertility.org. Accessed 12 January 2015.
11. Hajnal J. Age at marriage and proportions marrying. Popul Stud. 1953;7(2):111–36.
12. United Nations. World Marriage Data 2012. United Nations, Department of Economic and Social Affairs, Population Division; 2013. http://www.un.org/en/development/desa/population/publications/dataset/marriage/wmd2012/MainFrame.html. Accessed 12 January 2015.
13. Casterline JB, Trussell J. Age at first birth. WFS Comparative Studies no. 15. Voorburg, Netherlands: ISI; 1980.
14. Afzal M, Kiani MF. Mean ages at parities: an indirect estimation. Pak Dev Rev. 1995;34(4 Pt. II):545–61.
15. Booth H. Trends in mean age at first birth and first birth intervals in the Pacific Islands. Genus. 2001;LVII(3–4):165–90.
16. ICF International. The Demographic and Health Surveys. http://dhsprogram.com/. Accessed 12 January 2015.
17. Rutstein S, Rojas G. Guide to DHS Statistics. Demographic and Health Surveys. Calverton, MD: ORC Macro; 2006.
18. Blanc A, Rutenberg N. Assessment of the quality of data on age at first sexual intercourse, age at first marriage and age at first birth in the Demographic and Health Surveys. In: An Assessment of DHS-I Data Quality. DHS Methodological Reports no. 1. Columbia, MD: Institute for Resource Development/Macro Systems; 1990. p. 41–79.
19. Feeney G. The population census as a time machine. Demography – Statistics – Information Technology Letter, no. 4, 15 January 2014. http://demographer.com/dsitl/04-population-census-as-time-machine. Accessed 27 February 2015.
20. Gage A. An assessment of the quality of data on age at first union, first birth, and first sexual intercourse for Phase II of the Demographic and Health Surveys Program. Occasional Papers no. 4. Calverton, MD: Macro International Inc; 1995.
21. Arnold F. Assessment of the quality of birth history data in the Demographic and Health Surveys. In: An Assessment of DHS-I Data Quality. DHS Methodological Reports no. 1. Columbia, MD: Institute for Resource Development/Macro Systems; 1990. p. 83–111.
22. Hertrich V, Lardoux S. Estimating age at first union in Africa. Are census and survey data comparable? Population-E. 2014;69(3):357–89.
23. Pullum T. An assessment of the age and date reporting in the DHS surveys, 1985–2003. DHS Methodological Reports no. 5. Calverton, MD: Macro International Inc; 2006.
24. Schoumaker B. Quality and Consistency of DHS Fertility Estimates, 1990 to 2012. DHS Methodological Reports no. 12. Rockville, MD: ICF International; 2014.
25. Bongaarts J, Feeney G. Estimating mean lifetime. Proc Natl Acad Sci. 2003;100(23):13127–33.
This research was supported by a grant from the William and Flora Hewlett Foundation to the Population Council. The authors are grateful to Katharine McCarthy for data analysis assistance.
Population Council, 1 Dag Hammarskjold Plaza, New York, NY, 10017, USA
John Bongaarts & Ann K. Blanc
Correspondence to John Bongaarts.
JB carried out the statistical analysis and drafted the manuscript. AB helped to interpret the results and draft the manuscript. Both authors read, reviewed and approved the final manuscript.
Additional file 1: Estimating the mean age at first birth. Includes two equations that provide alternative estimates of the age-standardized mean age at first birth [25]. (DOCX 17 kb)
Bongaarts, J., Blanc, A.K. Estimating the current mean age of mothers at the birth of their first child from household surveys. Popul Health Metrics 13, 25 (2015). https://doi.org/10.1186/s12963-015-0058-9
Keywords: Vital statistics · Total fertility rate · Fertility transition · Birth history · Recent birth
Can the US military seize a country which has the ability to kill anyone based on the victim's name and face?
Setup: Earth, current day. The US president decides to attack the fictional country of Lalalistan, because terrorism/drugs/whatever. This attack has big support in NATO countries, while Russia and China officially object to the attack but remain neutral. (Example: the Libya attack.)
So, while it is officially a NATO action, it is mainly driven by the US Army.
The twist: The dictator of Lalalistan sealed a deal with the devil and obtained a Death Note and Name Vision.
=== For those who do not know how the Death Note works: ===
If you know the anime series Death Note, you can skip this section, because the dictator of Lalalistan obtained the exact Death Note used in this anime series. For those who do not know how the Death Note works:
The Death Note is a magical artifact in the form of an ordinary paper notebook.
The human whose name is written in this note shall die.
This note will not take effect unless the writer has the person's face in their mind when writing his/her name. Therefore, people sharing the same name will not be affected.
If the cause of death is written within the next 40 seconds of writing the person's name, it will happen.
If the cause of death is implausible, or not specified, the person will simply die of a heart attack.
After writing the cause of death, details of the death should be written in the next 6 minutes and 40 seconds. (source)
The Name Vision works simply: If you look at a person (even through video recording), you can see their real name and therefore write it into the Death Note correctly.
=== End of Death Note mechanics explanation ===
Let's give the country of Lalalistan shape: the Libya example could work well. Lalalistan is about as big as Libya and has the military force Libya had a year before the NATO attack. However, it differs from Libya in two details:
First, the people of Lalalistan support their dictator and think that the NATO attack is an act of aggression; and second, I doubt that the fine people of Libya ever dealt with a devil.
The question: Could the US military, with the joint help of other NATO countries, successfully attack this country without using nuclear weapons?
Also, bear in mind that the Death Note is the only one of its kind in this world and that the artifact works even if you tear the pages into small pieces.
To add more details: if only names are written into the notebook, a clever user could fit in 250,000 names. The dictator is not that clever, so let's give him a capacity of 100,000 names before the Death Note is fully used.
magic warfare
Pavel JanicekPavel Janicek
$\begingroup$ I just wanted to mention that choosing the way the person dies has a bigger impact than you might think. For example, the dictator could just mass-execute famous people left and right, but this would make him look like Satan himself. If he uses his power to its full potential he could look like a victim and trick the people in the USA/NATO into fighting for him. To give you an example: Barack Obama gets killed by the Death Note, but he doesn't get killed with a stroke; he gets eaten while hunting lions. The next elected president dies while having sex with a hooker, etc. After some time people would at $\endgroup$ – Johannes Geidel Apr 20 '16 at 7:03
$\begingroup$ A hundred thousand people later: YES. YES. YES. Title edit: Would the US military still have the wherewithal to seize countries after its current leaders and key personnel (and the next 95k of them) were assassinated? None of this Death Note stuff is needed and it would still not be a WB question. VTCed. "What's the defense against Death Notes?" would be much more succinct (and still off-topic?). $\endgroup$ – Mazura Apr 21 '16 at 0:17
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – HDE 226868♦ Apr 21 '16 at 22:35
In order for the dictator to remain in power he will have to keep his powers secret (this is vital), while strategically eliminating foes in the international political arena.
The US has decided to attack. This is bad, however it is NATO that is backing the US's access to your country. The US is bound to have political opposition in NATO who would be able to vote against the invasion, you just need to nudge them into action.
If you simply start executing politicians left, right, and center you'll paint a target a mile wide on your back. You want to create an image of being a victim, not as a dangerous maniac with magical assassination abilities.
Any US invasion is going to face both international and homegrown opposition. You could start by assassinating a few key US politicians, and loosening the President's grip on the opposition in Congress/the Capitol.
Remember that you can't just kill the President and end the war. The US military is a machine that cannot be stopped simply by cutting off its head. They must be ordered off the attack.
Your army must be ready to hold the US off for as long as possible. During this time you must contact the leaders of China and Russia. Kill one of their powerful enemies and tell them that you have "agents" in place who can solve their problems, in exchange for a little political help.
In the meantime, destabilize the governments backing American foreign policy. Cause the "accidental", and very public, deaths of the leaders of Germany, France, etc. In order to make their deaths appear accidental you could try killing them while they are all attending an event together, and dictate that they die in a fire, etc.
With so many NATO states in turmoil the US is going to find the political landscape shifting beneath their feet. Now would be a good time to assassinate the president, maybe by having him commit suicide in a very public way. Can the Death Note also make the victim say some words before they die? "This invasion was a mistake, I am so sorry" would be a good little script before he blows his own brains out in front of a room full of journalists, for example.
You can come out of this smelling like roses, all while redefining international politics, and changing the political landscape to suit your needs.
As I mentioned, your military forces are going to have to oppose the US army/navy/air force for a little while. This time might be as short as a day, or as long as a few weeks. You can use your assassination abilities to seriously screw with the American war effort.
For example, it is relatively common knowledge which military leaders are in charge of the invasion, or which admirals are running the show off of which carrier.
Heck, this information is even provided in the news sometimes. It's also quite easy to find pictures of these officers on naval websites, on books they have published or been the subject of, etc.
So here's an idea:
Kill the admiral commanding the attack fleet by having him detonate a ship-board nuclear device. (or some other powerful bomb in one of the ship's ammo bunkers)
Now that is going to put a crimp in the US's invasion plan, and buy you some time to bring down the political alliance against you. In response to the tragic accident that the US fleet has suffered you can then initiate talks for a temporary cease fire in order to allow the American fleet to care for their wounded.
If you are seen as a reasonable and compassionate person you will receive a lot more support from the international community. You will also have complete and utter deniability as far as how the attack took place, as your country clearly doesn't have the capabilities to attack the US Navy on that scale.
Revealing your abilities to anyone will cause the world to come crashing down on you. You will become the most hunted man to ever have lived. Keep it a secret.
AndreiROM
$\begingroup$ Note to self: never, ever, anger AndreiROM. $\endgroup$ – Matthieu M. Apr 19 '16 at 17:36
$\begingroup$ @MatthieuM. - Good man. When I am Master of the World I will allow you to live. For a while. $\endgroup$ – AndreiROM Apr 19 '16 at 17:43
$\begingroup$ @MatthieuM. HAHAHA! Seriously though, this was such a strategic and cold blooded answer! $\endgroup$ – Revetahw Apr 19 '16 at 17:56
$\begingroup$ This wasn't really explained in the question, but it's worth noting that the Death Note can only cause people to die in ways that are physically plausible. If you write a cause of death that isn't physically plausible (such as "explodes with the force of a small nuclear device") the target simply dies of a heart attack. On the other hand, suicide is always an option, and you can use the cause of death to manipulate the target's actions prior to their suicide, so having the admiral sabotage something on the ship before dying isn't out of the question. $\endgroup$ – Ajedi32 Apr 19 '16 at 19:42
$\begingroup$ Really good answer, but death by detonating a nuclear warhead would kill other people, so the admiral would simply die by cardiac arrest. You can still screw with the motor of the ship, or have him sabotage whatever is in the ship, but he can't kill someone else. $\endgroup$ – DrakaSAN Apr 21 '16 at 10:08
The thing is, the United States also has that ability.
It's called an "MQ-1 Predator Drone" and it is an unmanned aerial vehicle which flies at a high altitude and fires hellfire rockets at a ground target. The United States frequently uses this technology to assassinate people it suspects of being involved in terrorism. Threatening to magically kill a large number of US citizens by death note definitely fulfills the definition of a terrorist threat, so the precedent is there.
All they need to know is the current location of the president and they can assassinate him with a drone strike.
Philipp
$\begingroup$ Not quite. The US can destroy any house it knows about. Perhaps the bad guys are inside, perhaps just some wedding party. Lalalistan can kill any person it knows about, without collateral damage. $\endgroup$ – o.m. Apr 19 '16 at 15:29
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – Serban Tanasa Apr 20 '16 at 21:00
$\begingroup$ I don't see anything special about that drone; there are a gazillion other ways to kill someone if you know his current location. That's the hard part, which you don't explain. $\endgroup$ – Oriol Apr 20 '16 at 22:53
$\begingroup$ I agree with @Oriol; things like ICBMs can nail a target anywhere on Earth (farther than drone flight range). But first you must know where the target is, and you'd better have a good explanation for what you're doing. $\endgroup$ – Tomáš Zato Apr 21 '16 at 10:33
$\begingroup$ Predator/ICBM solutions will have a very powerful political backlash in international politics... An ICBM strike will probably trigger combined Chinese/Russian ICBM strikes on the US for just 1 ICBM; we may not be in a cold war, but a single ICBM strike can easily turn the situation to the worst. The Predator drone isn't stealthy; it can be shot down if there are visual observation posts, which many countries have, even 3rd-world ones. $\endgroup$ – mico villena Apr 22 '16 at 4:55
No. You can't attack them. The Dictator would simply start killing world leaders or famous people until the attackers went away. It's the ultimate MAD.
However there would then be an intensive program to work out just where the Death Note is stored and steal it or hit it with a nuke. A single deterrent is not effective long term as it can be neutralized and then you have no backup.
Tim B♦
$\begingroup$ It should be mentioned you can also tear out pages of the Death Note, and they have the same properties of the original. Those could serve as a backup. $\endgroup$ – mattsven Apr 19 '16 at 17:12
$\begingroup$ "ultimate" MAD? I'm not sure about that...it's hard to write down 7 billion names while thinking of each person's face. On the other hand nukes could extinguish all those people quite easily. I think nukes win the "which one is more ultimate" competition. $\endgroup$ – Jimbo Jonny Apr 19 '16 at 20:36
$\begingroup$ @JimboJonny Not at all. Because the leaders of hostile countries are known. If you order an attack then you personally are going to die. Not some "statistics" in terms of X% troops and Y% collateral damage. You as the minster/president/prime minister/whatever who voted for the war are going to die. That's pretty mutually assured from their perspective. $\endgroup$ – Tim B♦ Apr 19 '16 at 21:00
$\begingroup$ So... assassination of a few leaders is more "destruction" (that's what the D stands for) than global annihilation of the entire race? Being able to kill with surgical precision without collateral damage is pretty much the opposite of what MAD was. Your statements about what would happen to the leaders are true...but your terminology for it is completely off base. That is not MAD. MAD is where 2 nations going to war means both of them will end up completely razed without a man, woman, child, or blade of grass left to tell the story of woe. $\endgroup$ – Jimbo Jonny Apr 19 '16 at 22:26
$\begingroup$ From the point of view of the leaders destruction is mutually assured. It's mutually assured destruction not mutually assured mass destruction... $\endgroup$ – Tim B♦ Apr 19 '16 at 22:49
There are two lessons here learned from Iraq. First, finding the leader of a country isn't easy. Second, even after you take out a country's leadership, you are still a long, LONG way away from seizing/securing the country.
You actually have to eliminate everyone willing to take up arms against you. On the other hand, all you have to do to defend your country is have people still fighting when the other side gives up or is unable to continue fighting.
So, at the absolute guarantee of going on every watch list on the planet, here's what you write:
Barack Obama - Nuclear blast @ the White House.
Joe Biden - Nuclear blast @ the Pentagon.
Robert J. Bentley - Governor of Alabama - Nuclear blast @ the Alabama capitol building.
Bill Walker - Governor of Alaska - Nuclear blast @ the Alaskan capitol building.
And you continue for the rest of the states. By waiting longer than the needed 6 seconds after each kill, you can ensure that each death is its own explosion. With a mushroom cloud at the capitol building in each state, the country will not be able to continue its war. Even if they did have the capacity to continue the fight, they would be far more interested in getting home and fixing things there than in the drugs or terrorism.
If you don't think that will make the army go home (maybe they'll think you had some sort of part to play in the nuking and want revenge), have the US ambassadors to Russia and China assassinate those countries' leaders instead of dropping nukes. They'll be too busy fighting off superpowers to mess with Lalalistan.
An agglomeration of these two methods could be really evil. "Barack Obama - Russian nuclear blast @ the White house" All the advantages of both scenarios.
If you have a problem with the collateral damage, just aim for the leadership of the government and military. It probably wouldn't be as effective, but your power advantage over the US attacking you is so vast, I don't know if it would matter.
$\begingroup$ Donald Trump - 1000kg anti-matter detonation, Earth. $\endgroup$ – Serban Tanasa Apr 19 '16 at 21:35
$\begingroup$ @SerbanTanasa :0 We want to defend Lalaland, not crack the planet open like an egg. That would create a crater the size of Kansas and a wall of intense heat the size of the continental US. But I like your thinking :) $\endgroup$ – Shane Apr 19 '16 at 22:01
$\begingroup$ Good idea, though the causes of death should be elaborated more or tweaked somehow, because of a rule the OP didn't mention: the death should be physically possible for the victim to arrange - i.e. he must somehow be able to initiate/command/provoke a nuclear blast - or it automatically reverts to "heart attack". $\endgroup$ – Oleg V. Volkov Apr 20 '16 at 11:49
$\begingroup$ Please note that if the Death Note nation starts using nuclear death notes, then the retaliation will be nuclear; if the use of nuclear weapons is off the table, it must be off the table for both sides. $\endgroup$ – Joshua Apr 21 '16 at 1:16
Yes, taking over this country is entirely possible. How?
I'm fairly sure every single intelligence agency and military force in the world would be interested in getting control of the Death Note, or at least would want this president to no longer be in control of it. Quite apart from that, the larger criminal organizations and many warlords would also try to get control.
If it was discovered who exactly had these powers and how they worked the person would make it to the top of the priority-to-kill-list of the CIA easily and instantly.
Taking him out should be a fairly easy task, as he has no special magical powers of any kind protecting him. Depending on how clever the dictator and his people are, he might die within a week from a drone strike, or maybe in a car bombing after a couple of years... But I'm certain that his fate is sealed the moment his powers become publicly known.
Obviously these operations would be covert as to not give the dictator any target to retaliate against should some of them fail.
fgysin
$\begingroup$ I highly doubt he is going to go and announce that he made a deal with the devil and has this superpower. Saddam Hussein was at the top of the CIA assassination list for nearly 30 years before they started a full-scale war to kill him. If it takes even as long as a week to get the drone or car bomb, that's enough time for the dictator to depopulate the US. That's not even getting into the fact that you need to take out more than one person to take over a country. Most of the US's fighting happened after Saddam was captured. $\endgroup$ – Shane Apr 19 '16 at 21:04
You've just developed a new form of international stalemate:
Mutually Assured Assassination
If you attack our country we'll assassinate your entire leadership. All of whose names and faces are publicly known.
While the country could be attacked, you're going to lose a lot of people who consider themselves to be important. Presidents, Defence Secretaries, Queens, Prime Ministers etc. If you feel the almost guaranteed loss of these people is worth the risk, then yes, sure you could invade.
However should any of these people die unexpectedly in mysterious circumstances the leader of Lalalistan will be prime suspect and probably wouldn't survive the week.
Can the deathnote mechanics be abused? Is it possible to write that someone dies in bed at a grand old age after a happy retirement surrounded by great-grandchildren?
Separatrix
$\begingroup$ To answer your question: Kinda. The Death Note anime had several deaths specified like "14th November 2016, commits suicide". So this could be doable $\endgroup$ – Pavel Janicek Apr 19 '16 at 13:54
$\begingroup$ @PavelJanicek: How about hacking in all kinds of things into the "death details" description? E.g. get rich by executing someone by having the Death Note drop 10 tons of gold on them. Or get some mass destruction by specifying a 100m meteor impact as the death cause... $\endgroup$ – fgysin Apr 19 '16 at 15:06
$\begingroup$ @fgysin That is a great idea. Though, the problem is, the cause of death needs to be something reasonable. (Blame the author of Death Note.) $\endgroup$ – Stefnotch Apr 19 '16 at 16:04
$\begingroup$ @fgysin, it's simply a matter of being willing to kill someone for every bit of magic you do. There's no reason not to make Trump fly to Lalalistan, sign all his assets over to the state then drop dead, but that last bit is key, you always have to kill someone. $\endgroup$ – Separatrix Apr 19 '16 at 16:20
$\begingroup$ The rule is essentially that a death that is too improbable just doesn't happen. The experiment done in the show was a series of deaths that violated some factor of the environment. The 10 tons of gold is probably void unless creatively written, the Trump solution is completely valid provided you can be sure he's not already doing something that voids it. $\endgroup$ – Kaithar Apr 19 '16 at 16:43
The Death Note has a weakness. It cannot be used to control anyone except the person who is to die. Any attempt to do so results in the method used switching to [induced] heart attack. If the target cannot be killed by heart attack due to not possessing a heart (this already exists), it does not die.
Kind of expensive, but if we can keep our leaders alive we can press the attack by any means necessary. The Death Note is not very powerful against an APC operated by an unknown individual.
The United States probably wouldn't be able to effectively fight this guy. He could relatively easily assassinate every high-ranking government and military official and have >= ~20,000 names to spare. As a matter of fact, as soon as the first US troops enter his petty kingdom, I expect him to be busily copying names from the order of presidential succession and googling videos of them speaking/having his minions bring him physical copies of said videos.
That is why you make the dictator think you will do what he says. The chances are high that he will demand the US (NATO) withdraw from Lalalistan immediately. He will probably execute a high-ranking US official/general (probably the president) to prove his power. Have the US make a big show of pulling out - load gear onto ships / mass forced-marches out of the country. Once Evil Dictator Dude sees this, he will believe he has won. He will let his guard down some.
Then have the US send a SEAL team to assassinate him. They succeed, his country collapses into a million warring factions, and the US has control of the Death Note. Now Mr. FormerVicePresidentCurrentPresident can go back to pretending to furiously hunt for the terrorists/drug dealers/crooks while taking bribes from Big Tobacco to not make tobacco illegal.
JDSweetBeat
You forgot to mention a critical feature of the Death Note as shown in the anime: you can control the actions of the victim. Lalalistan can use this to get confidential information. If they work very aggressively with the Death Note they can obtain the names of every high-ranking officer within hours, causing their command structure to collapse, or simply change the orders. The US and their allies would even lose the ability to launch their own nukes. So no, the US has no chance against a country with a Death Note.
Enan84
$\begingroup$ Hi Enan84, and welcome to Worldbuilding and Stack Exchange. I am a little uncertain as to whether this qualifies as an answer according to our standards, but I edited it slightly to highlight what seems to be your proposed answer. Note that we reserve the answer space strictly for answers to the question as asked. For more on this, see How do I write a good answer? in our site's help center. You can edit your answer to clarify and expand on it, should you wish to do so. $\endgroup$ – a CVn♦ Apr 19 '16 at 19:50
$\begingroup$ Hi, thank you for your feedback. Can you specify what worries you? PS: I edited my answer to make it clear that I am referring to the Death Note as shown in the anime, which the thread starter stated to refer to. $\endgroup$ – Enan84 Apr 19 '16 at 19:57
$\begingroup$ There isn't necessarily anything wrong with your answer. Normally, we use comments to suggest improvements to or to request clarification about a post. However, you do not yet have the necessary reputation to comment on other peoples' posts. (Don't worry; a few people finding your answers helpful in answering questions asked will quickly unlock many privileges.) A part of this answer feels like it is mentioning something the OP forgot, which would be appropriate for a comment, but you do provide a valid-looking answer. $\endgroup$ – a CVn♦ Apr 19 '16 at 20:06
$\begingroup$ You're welcome. I hope you will stay around and become a regular contributor. $\endgroup$ – a CVn♦ Apr 19 '16 at 20:57
This is classic MAD, so we need to think about disarming strikes.
Short answer: NATO has got good odds.
A disarming strike is a powerful surprise attack that destroys the enemy's ability to counter-attack effectively. This is normally nuking all the enemy's nukes.
In this case it is either destroying the note (breaking it into such small pieces that you can't fit a single name in it) or killing enough aviators to disable the NATO air force.
How fast can the book kill? The average person writes around 20 words per minute. Assuming you have to write the first and last name, that is 10 kills per minute, or 1 kill every six seconds. The US has over 3600 combat aircraft; let's assume the rest of NATO has the same amount, for a total of 7200 planes, and assume 2 pilots per plane. Assuming the writer knows all the pilots' names, he will need $\frac{7200 \cdot 2 \cdot 6}{60 \cdot 60}~\text{h} = 24~\text{h}$ to wipe out the air force. Assuming he does not run out of space on the paper.
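A quick sanity check of that arithmetic (the plane and pilot counts are the assumptions stated above):

```python
pilots = 7200 * 2        # assumed planes, 2 pilots each
seconds_per_kill = 6     # 20 words per minute, 2 words per name
hours = pilots * seconds_per_kill / 3600
print(hours)             # 24.0
```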
On the other hand, the air force only needs 1 bomb in the right place, and it has easily 7200 tries. So if the location of the note is ever discovered, they can pounce and win.
I like their odds. BTW, it's hard to see a plane incoming, and you only get a few seconds of warning, so I don't think the notebook writer will get many kills.
What if we divide the note? The question states the notebook still works if sheets are torn out. So a wise user will divide the note among hundreds of users to make it hard to destroy and so prevent a "perfect disarming strike". In the event of war, this also increases the note's kill rate by a factor of a hundred. This makes it possible to still drop many enemies and to make the invasion very costly.
sdrawkcabdear
$\begingroup$ He doesn't need to wipe out the air force. - At least not initially, it's sufficient to wipe out the command structure of the air force. $\endgroup$ – Taemyr Apr 20 '16 at 11:13
Deny them knowledge
Off the top of your head, how many politicians and military leaders can you name of a major country that is not yours?
If your first step is to disconnect them from all communications networks and isolate them from other media, then Obama and a bunch of other top public personae are vulnerable, but all the people and officers actually planning and implementing an attack are effectively anonymous in the absence of intelligence leaks. They could continue the invasion even (or especially so) after assassination of the first line of political leaders, while not being vulnerable themselves.
Furthermore, normal combat would not be affected - Name Vision applies when seeing a particular person, but much modern warfare is conducted beyond visual range or in vehicles. And that assumes that the dictator is willing to be on the front lines - which would end everything after a sniper shot from cover, an artillery shot or a bomb from some aircraft.
In purely military terms, a person with such a Death Note would be similar to an extremely good sharpshooter but not much more, and would also have most of the same limitations - they must notice the enemy first in order to do damage, and are vulnerable to vehicles, all kinds of indirect fire and close-range combat/ambushes.
Peteris
The deathnote could be easily defeated.
Just write a computer program to generate names consisting of a few million characters each and make the names official. Now one name will physically not fit in the notebook, rendering the notebook useless.
If the notebook magically scales to the size of an average name, etc., it would still take thousands of years to write a single name.
Basically, human manual input is the weak point.
tweirick
As mentioned already, use the Death Note's ability to command what a person does in the minutes leading up to his death not only to kill, but to give enemy leaders a bad image in the process.
Make the president go ballistic, grab a security guard's automatic pistol and open fire on a peaceful anti-Lalalistan-war demonstration, and be killed in the process. If we are calling back to anime already, think of Code Geass' Yuffienator case.
Make top generals announce their heinous plans on TV to bomb "those Lalalistan Untermenschen" with a new ultra-toxic chemical bio-bomb to "preserve only lives that matter - our soldiers", then personally press the big red button in the bombing facility, only for it to blow up spectacularly and kill the generals in the process (it is not necessary for them to actually develop such a weapon - only to announce it and blow themselves up in some imposing-looking lab, so that should be plausible by DN rules).
Plan about 10 performances like that, and the general chaos of forces left without command, plus the backlash from other nations, should be enough to keep the US busy with its own affairs instead of invading Lalalistan.
Oleg V. Volkov
No, the USA does not stand a chance.
The Death Note can easily kill people whose names you don't know.
Even keeping to plausible scenarios, we can use the 7 minutes and 20 seconds of description of how someone died to write a single name and cause dozens, thousands, millions or billions of deaths as required.
For 20 years, the US nuclear launch code was 00000000.
http://www.dailymail.co.uk/news/article-2515598/Launch-code-US-nuclear-weapons-easy-00000000.html
Even after they were changed to something more secure, Jimmy Carter left them in his suit pocket when it went to be dry cleaned.
2 of the 3 required officers on a Soviet nuclear submarine voted to launch their nuclear missiles when an American destroyer launched 'practice rounds' at them. If the 3rd had agreed, the missiles would have launched. With the USA and Russia both at war in Syria (and not exactly on the same side), we have seen some incidents recently that could have escalated.
As recently as 1995, Russian early warning systems detected a nuclear launch (actually a Norwegian research rocket).
While a nuclear war is far from probable, it is certainly plausible.
Pick a single person living in a mid sized US city, and write the note that he died "In the Russian nuclear retaliatory strike".
Originally this post involved a person spontaneously travelling back in time and exploding with force sufficient to destroy the continent of the USA. This was before the Question specified that implausible deaths will just result in heart attack.
$\begingroup$ Implausible death scenarios are automatically reverted to "heart attack" per original DN rules. $\endgroup$ – Oleg V. Volkov Apr 21 '16 at 12:32
$\begingroup$ Was not aware of that. The DN still grants 7 minutes 20 seconds, not quite of omnipotence, but still of power unmatched by nearly any mortal in fiction. Edited the answer to make the scenario plausible, yet the basic premise remains. $\endgroup$ – Scott Apr 22 '16 at 0:31
$\begingroup$ There's also "no collateral deaths" rule. DN will kill exactly one specified person. If scenario doesn't fits that - it too automatically reverts to "heart attack". $\endgroup$ – Oleg V. Volkov Apr 22 '16 at 10:19
Unless the word implausible in the description of the Death Note means what it means when I use the word, the US has no chance at all. As the Death Note and Name Vision are described, the dictator of Lalalistan has infinite wishes, limited only by the requirement that he must envision a specific person for each wish, who then dies.
So the dictator simply envisions a random public figure from America, and writes "X sends out a mental blast which kills every other American, and then dies from the stress."
If the term implausible is meant to exclude that, then the specific meaning of implausible in this context needs to be expressly spelled out.
$\begingroup$ 1) no collateral deaths can come from the book, AFAIK. 2) Implausible means "not likely to happen" in this context... so, I think psychic powers are off. $\endgroup$ – Patrice Apr 22 '16 at 14:36
Grishin, Aleksandr Vladimirovich
http://www.mathnet.ru/eng/person9080
1. A. V. Grishin, "Asymptotic Behavior in Lie Nilpotent Relatively Free Algebras and Extended Grassmann Algebras", Mat. Zametki, 107:6 (2020), 848–854 ; Math. Notes, 107:6 (2020), 903–908
Results for 'Tony R. Merriman' (1000+ found)
The Oxytocin Receptor Gene Variant Rs53576 Is Not Related to Emotional Traits or States in Young Adults. Tamlin S. Conner, Karma G. McFarlane, Maria Choukri, Benjamin C. Riordan, Jayde A. M. Flett, Amanda J. Phipps-Green, Ruth K. Topless, Marilyn E. Merriman & Tony R. Merriman - 2018 - Frontiers in Psychology 9.
Book Review: Evaluating Research and Development, by I. R. Weschler, Paula Brown. [REVIEW] L. A. R. - 1954 - Philosophy of Science 21 (1):76.
Book Review: Creative Aspects of Natural Law, by R. A. Fisher. [REVIEW] L. A. R. - 1952 - Philosophy of Science 19 (4):350.
History of Biology in Philosophy of Biology
Psychopathology. By J. S. Nicole, M.R.C.P. & S. (London: Bailliere Tindall & Cox. 1930. Pp. xii + 203. Price 10s. 6d.). G. G. R. - 1931 - Philosophy 6 (22):271.
Psychopathology in Philosophy of Cognitive Science
Blake's Edition of Xenophon's Hellenica I. II., and Other Selections: The Hellenica of Xenophon, Books I. and II., Together with Selections from Lysias c. Eratosthenes and from Aristotle's Constitution of Athens, Edited with Notes by R. W. Blake, A.M. Boston, 1894. [REVIEW] C. S. R. - 1895 - The Classical Review 9 (04):231.
Classical Greek Philosophy, Misc in Ancient Greek and Roman Philosophy
Jeremiah 21–36 (Anchor Bible 21b) and Jeremiah 37–52 (Anchor Bible 21c). By Jack R. Lundbom. B. R. - 2008 - Heythrop Journal 49 (1):168–169.
Bref de Sa Sainteté Pie X au R. P. Montagne, directeur de la Revue thomiste [Brief of His Holiness Pius X to Fr. Montagne, Director of the Revue Thomiste]. M. C. R. - 1909 - Revue Thomiste 17 (1/6):1.
Carton, R. - La Synthèse Doctrinale de Roger Bacon [The Doctrinal Synthesis of Roger Bacon]. [REVIEW] S. R. S. R. - 1926 - Mind 35:102.
Roger Bacon in Medieval and Renaissance Philosophy
Gironella, J. R., S.J., "Curso de Cuestiones filosóficas previas al estudio de la Teología" [Course of Philosophical Questions Preliminary to the Study of Theology]. G. R. G. R. - 1964 - Rivista di Filosofia Neo-Scolastica 56:261.
Dodd A. and Jensen R. The Core Model. Annals of Mathematical Logic, Vol. 20, pp. 43–75. Dodd Tony and Jensen Ronald. The Covering Lemma for K. Annals of Mathematical Logic, Vol. 22, pp. 1–30. Dodd A. J. and Jensen R. B. The Covering Lemma for L[U]. Annals of Mathematical Logic, pp. 127–135. Donder D., Jensen R. B. and Koppelberg B. J. Some Applications of the Core Model. Set Theory and Model Theory, Proceedings of an Informal Symposium Held at Bonn, June 1–3, 1979, edited by Jensen R. B. and Prestel A., Lecture Notes in Mathematics, Vol. 872, Springer-Verlag, Berlin, Heidelberg, and New York, 1981, pp. 55–97. Dodd A. The Core Model. London Mathematical Society Lecture Note Series, No. 61. Cambridge University Press, Cambridge etc. 1982, xxxviii + 229 pp. [REVIEW] William Mitchell - 1984 - Journal of Symbolic Logic 49 (2):660–662.
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
R. B. Dobson, ed., The Church, Politics and Patronage in the Fifteenth Century. Gloucester, Eng.: Alan Sutton; New York: St. Martin's Press, 1984. Pp. 245. $25. Tony Pollard, ed., Property and Politics: Essays in Later Medieval English History. Gloucester, Eng.: Alan Sutton; New York: St. Martin's Press, 1984. Pp. 204; table, 2 maps. $25. [REVIEW] F. L. Cheyette - 1986 - Speculum 61 (2):497.
Tony Becher & Paul R. Trowler, Academic Tribes and Territories: Intellectual Enquiry and the Culture of Disciplines. [REVIEW] K. B. Wray - 2003 - International Studies in the Philosophy of Science 17 (3):317–320.
Sociology of Science in General Philosophy of Science
Does Gratitude to R for Φ-ing Imply Gratitude That R Φ-ed? Tony Manela - forthcoming - Philosophical Studies:1–18.
Many find it plausible that for a given beneficiary, Y, benefactor, R, and action, ϕ, Y's being grateful to R for ϕ-ing implies Y's being grateful that R ϕ-ed. According to some philosophers who hold this view, all instances of gratitude to, or "prepositional gratitude," are also instances of gratitude that, or "propositional gratitude." These philosophers believe there is a single unified concept of gratitude, a phenomenon that is essentially gratitude that, and whose manifestations sometimes have additional features that make them instances of gratitude to as well. In this article, I show that view to be mistaken. I base my argument on two hypothetical cases, in each of which a beneficiary, Y, is grateful to a benefactor, R, for ϕ-ing, but not grateful that R ϕ-ed. Generalizing from those cases and other cases of gratitude, I argue that prepositional gratitude is the proper response to benevolence-motivated action and propositional gratitude consists in a beneficiary's judging a state of affairs to be valuable for himself and welcoming that state of affairs. Because not every instance of a benefactor's acting benevolently toward a beneficiary is something that beneficiary finds valuable for himself and welcomes, it is possible to be grateful to a benefactor for ϕ-ing but not grateful that she ϕ-ed. Prepositional gratitude and propositional gratitude can each occur without the other and are thus two distinct phenomena. I conclude by explaining the importance of accurately understanding the relationship between prepositional gratitude and propositional gratitude.
Gratitude in Normative Ethics
Moral Character in Normative Ethics
Moral Education in Normative Ethics
Moral Judgment in Meta-Ethics
Moral Reasoning and Motivation in Meta-Ethics
The Limits of Emergence: Reply to Tony Lawson. John R. Searle - 2016 - Journal for the Theory of Social Behaviour 46 (4):400–412.
Review of Tony Wall and David Perrin: Slavoj Žižek: A Žižekian Gaze at Education. [REVIEW] Brian R. Gilbert - 2016 - Studies in Philosophy and Education 35 (6):641–647.
Philosophy of Education in Philosophy of Social Science
The Great Revolution in the Earth Sciences in the Mid-Twentieth Century. Henry R. Frankel. The Continental Drift Controversy. 4 volumes. Volume 1: Wegener and the Early Debate. xxii + 604 pp., illus., bibl., index. Volume 2: Paleomagnetism and Confirmation of Drift. xviii + 525 pp., illus., bibl., index. Volume 3: Introduction of Seafloor Spreading. xvi + 476 pp., illus., bibl., index. Volume 4: Evolution Into Plate Tectonics. xix + 675 pp., illus., bibl., index. Cambridge: Cambridge University Press, 2012. $400. [REVIEW] Tony Hallam - 2014 - Isis 105 (2):410–412.
Review of Tony Wall and David Perrin: Slavoj Žižek: A Žižekian Gaze at Education. [REVIEW] Brian R. Gilbert - forthcoming - Studies in Philosophy and Education:1–7.
Tony Jones. Splitting the Second: The Story of Atomic Time. x + 199 pp., illus., figs., tables, app., index. Bristol/Philadelphia: Institute of Physics Publishing, 2000. $19.99, £14.99. [REVIEW] Ian R. Bartky - 2002 - Isis 93 (1):157–158.
Philosophy and Politics. Tony Tyley, Janet Hoenig, Bryan Magee, R. M. Dworkin & British Broadcasting Corporation, Inc. - 1997 - Films for the Humanities & Sciences, distributed by BBC Worldwide Americas.
Political Theory in Social and Political Philosophy
The Man Who Could Have Been King: A Storyteller's Guide for Character Education. Tony R. Sanchez - 2006 - Journal of Social Studies Research 30 (2).
The Long-Term Sustenance of Sustainability Practices in MNCs: A Dynamic Capabilities Perspective of the Role of R&D and Internationalization. [REVIEW] Subrata Chakrabarty & Liang Wang - 2012 - Journal of Business Ethics 110 (2):205–217.
What allows MNCs to maintain their sustainability practices over the long term? This is an important but under-examined question. To address this question, we investigate both the development and sustenance of sustainability practices. We use the dynamic capabilities perspective, rooted in resource-based view literature, as the theoretical basis. We argue that MNCs that simultaneously pursue both higher R&D intensity and higher internationalization are more capable of developing and maintaining sustainability practices. We test our hypotheses using longitudinal panel data from 1989 to 2009. Results suggest that MNCs that have a combination of both high R&D intensity and high internationalization are (i) likely to develop more sustainability practices and (ii) likely to maintain more of those practices over the long term. As a corollary, MNCs that have a combination of both low R&D and low internationalization usually (i) end up developing little or no sustainability practices and (ii) find it difficult to sustain whatever little sustainability practices they might have developed.
Business Ethics in Applied Ethics
R. A. Fisher, Lancelot Hogben, and the Origin(s) of Genotype-Environment Interaction. James Tabery - 2008 - Journal of the History of Biology 41 (4):717–761.
This essay examines the origin of genotype-environment interaction, or G×E. "Origin" and not "the origin" because the thesis is that there were actually two distinct concepts of G×E at this beginning: a biometric concept, or $$G \times E_B$$, and a developmental concept, or $$G \times E_D$$. R. A. Fisher, one of the founders of population genetics and the creator of the statistical analysis of variance, introduced the biometric concept as he attempted to resolve one of the main problems in the biometric tradition of biology - partitioning the relative contributions of nature and nurture responsible for variation in a population. Lancelot Hogben, an experimental embryologist and also a statistician, introduced the developmental concept as he attempted to resolve one of the main problems in the developmental tradition of biology - determining the role that developmental relationships between genotype and environment played in the generation of variation. To argue for this thesis, I outline Fisher and Hogben's separate routes to their respective concepts of G×E; then these separate interpretations of G×E are drawn on to explicate a debate between Fisher and Hogben over the importance of G×E, the first installment of a persistent controversy. Finally, Fisher's $$G \times E_B$$ and Hogben's $$G \times E_D$$ are traced beyond their own work into mid-20th century population and developmental genetics, and then into the infamous IQ Controversy of the 1970s.
Genes in Philosophy of Biology
Genotypes and Phenotypes in Philosophy of Biology
Ought-Implies-Can: Erasmus, Luther and R.M. Hare. Charles R. Pigden - 1990 - Sophia 29 (1):2–30.
1. There is an antinomy in Hare's thought between Ought-Implies-Can and No-Indicatives-from-Imperatives. It cannot be resolved by drawing a distinction between implication and entailment. 2. Luther resolved this antinomy in the 16th century, but to understand his solution, we need to understand his problem. He thought the necessity of Divine foreknowledge removed contingency from human acts, thus making it impossible for sinners to do otherwise than sin. 3. Erasmus objected (on behalf of Free Will) that this violates Ought-Implies-Can, which he supported with Hare-style ordinary language arguments. 4. Luther a) pointed out the antinomy and b) resolved it by undermining the prescriptivist arguments for Ought-Implies-Can. 5. We can reinforce Luther's argument with an example due to David Lewis. 6. Whatever its merits as a moral principle, Ought-Implies-Can is not a logical truth and should not be included in deontic logics. Most deontic logics, and maybe the discipline itself, should therefore be abandoned. 7. Could it be that Ought-Conversationally-Implies-Can? Yes - in some contexts. But a) even if these contexts are central to the evolution of Ought, the implication is not built into the semantics of the word; b) nor is the parallel implication built into the semantics of orders; and c) in some cases Ought conversationally implies Can, only because Ought-Implies-Can is a background moral belief. d) Points a) and b) suggest a criticism of prescriptivism - that Oughts do not entail imperatives but that the relation is one of conversational implicature. 8. If Ought-Implies-Can is treated as a moral principle, Erasmus' argument for Free Will can be revived (given his Christian assumptions). But it does not 'prove' Pelagianism as Luther supposed. A semi-Pelagian alternative is available.
Desiderius Erasmus in Medieval and Renaissance Philosophy
Moral Prescriptivism in Meta-Ethics
Ought Implies Can in Meta-Ethics
On Levi R. Bryant's "Dim Media": The Age of Disruption, Homeless Media, and Problematizing Latour's Apolitical "Flat Ontology". Ekin Erkan - 2019 - MediaCommons:1–20.
Where Deleuze and Guattari introduce us to "lines of flight," they transcode the mechanics of "flight" through "capture," fomenting what is understood by many as Deleuze's philosophical thesis: forging an analog relationship between univocity and difference via multiplicitous immanence. This is, of course, posed against Freud's molar unities in their second chapter, "One or Several Wolves." Thus, if Heidegger is the philosopher par excellence of queries between hermeneutics and immanence (the philosopher of anti-immanence), it is Deleuze who asks "what is the relationship between immanence and multiplicity?" While, for radical immanence qua Laruelle and post-Laruelleans, Deleuze is confounded with multiplicity (caught in an eternal Promethean ἰσονομία (isonomia) where there is no victor), for Deleuze multiplicity and difference are two components that can fit together adeptly. Guided by the principle of univocity, Deleuze describes a world of pure multiplicity: all multiplicities are equally immanent within nature. As Whitehead spoke of "occasions," Deleuze speaks of specific gatherings (of heterogeneous multiplicities), dubbed "assemblages," that occasion themselves as blips – blips of singularity on an otherwise smooth plane. Following philosopher Levi R. Bryant's account of "dim media," I situate political lines of flight qua psychogeographies vis-à-vis urban topology.
Deleuze and Guattari: Rhizome in Continental Philosophy
Film Media, Misc in Aesthetics
Building an ACT-R Reader for Eye-Tracking Corpus Data. Jakub Dotlačil - 2018 - Topics in Cognitive Science 10 (1):144–160.
Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data, eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works, the model achieves the fit by estimating free parameters of ACT-R using Bayesian estimation and Markov-Chain Monte Carlo techniques, rather than by relying on the mix of manual selection + default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models.
A Modal Restriction of R-Mingle with the Variable-Sharing Property. Gemma Robles, José M. Méndez & Francisco Salto - 2010 - Logic and Logical Philosophy 19 (4):341–351.
A restriction of R-Mingle with the variable-sharing property and the Ackermann properties is defined. From an intuitive semantical point of view, this restriction is an alternative to Anderson and Belnap's logic of entailment E.
Nonclassical Logics in Logic and Philosophy of Logic
"Describing Our Whole Experience": The Statistical Philosophies of W. F. R. Weldon and Karl Pearson. Charles H. Pence - 2011 - Studies in History and Philosophy of Biological and Biomedical Sciences 42 (4):475–485.
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton's footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution.
19th Century British Philosophy, Misc in 19th Century Philosophy
Explanation in Biology in Philosophy of Biology
Dr. B.R. Ambedkar: A Modern Indian Philosopher. Desh Raj Sirswal - 2018 - Milestone Education Review 1 (09):19–31.
Dr. B.R. Ambedkar is one of the figures who advocated changing the social order of the age-old tradition of suppression and humiliation. He was an intellectual, scholar, and statesman and contributed greatly to nation building. He led a number of movements to emancipate the downtrodden masses and to secure human rights for millions of the depressed classes. He has left an indelible imprint through his immense contribution to framing the modern Constitution of free India. He stands as a symbol of the struggle for achieving social justice. The social evils of Indian society led to this great personality being neglected even in the intellectual sphere. The so-called intellectuals of India have not honestly discussed his contribution to the Indian intellectual heritage; rather, what they have discussed smells of their biases toward a Dalit literate and underestimates his great personality. This paper will attempt to discuss important facts about his life and give a short description of the literature written by Dr. B.R. Ambedkar. This is followed by a discussion of his philosophy in five major sections: feminism and women's empowerment, philosophy of education, ideas on social justice and equality, philosophy of politics and economics, and philosophy of religion.
Indian Ethics in Asian Philosophy
Indian Political Philosophy in Asian Philosophy
Modern Indian Philosophy in Asian Philosophy
R and Relevance Principle Revisited. Eunsuk Yang - 2013 - Journal of Philosophical Logic 42 (5):767–782.
This paper first shows that some versions of the logic R of Relevance do not satisfy the relevance principle introduced by Anderson and Belnap, which is generally accepted as the principle for relevance. After considering several possible (but defective) improvements of the relevance principle, this paper presents a new relevance principle for (three versions of) R, and explains why this principle is better than the original and others.
Relevance Logic in Logic and Philosophy of Logic
Pure Hypocrisy. Tony Lynch & A. R. J. Fisher - 2012 - Philosophy in the Contemporary World 19 (1):32–43.
We argue that two main accounts of hypocrisy - the deception-based and the moral-non-seriousness-based account - fail to capture a specific kind of hypocrite who is morally serious and sincere "all the way down." The kind of hypocrisy exemplified by this hypocrite is irreducible to deception, self-deception or a lack of moral seriousness. We call this elusive and peculiar kind of hypocrisy pure hypocrisy. We articulate the characteristics of pure hypocrisy and describe the moral psychology of two kinds of pure hypocrites.
Moral States and Processes in Normative Ethics
Cognitive Appraisals and Emotional Experience: Further Evidence. A. S. R. Manstead, Philip E. Tetlock & Tony Manstead - 1989 - Cognition and Emotion 3 (3):225–239.
Emotion and Consciousness in Psychology in Philosophy of Cognitive Science
Identifying the Gaps in Ethical Perceptions Between Managers and Salespersons: A Multidimensional Approach. [REVIEW] Tony L. Henthorne, Donald P. Robin & R. Eric Reidenbach - 1992 - Journal of Business Ethics 11 (11):849–856.
This research examines, in a general manner, the degree and character of perceptual congruity between salespeople and managers on ethical issues. Salespeople and managers from a diversity of organizations were presented with three scenarios having varying degrees of ethical content and were asked to evaluate the action of the individual in each scenario. Findings indicate that, in every instance, the participating managers tended (1) to be more critical of the action displayed in the scenarios, (2) to view the action as violating a sense of contract or promise, and (3) to view the action as less culturally acceptable than did the salespeople.
On the Filter of Computably Enumerable Supersets of an R-Maximal Set. Steffen Lempp, André Nies & D. Reed Solomon - 2001 - Archive for Mathematical Logic 40 (6):415–423.
We study the filter ℒ*(A) of computably enumerable supersets (modulo finite sets) of an r-maximal set A and show that, for some such set A, the property of being cofinite in ℒ*(A) is still $$\Sigma^0_3$$-complete. This implies that for this A, there is no uniformly computably enumerable "tower" of sets exhausting exactly the coinfinite sets in ℒ*(A).
Defending PCL-R. Luca Malatesti & John McMillan - 2010 - In Luca Malatesti & John McMillan (eds.), Responsibility and Psychopathy: Interfacing Law, Psychiatry and Philosophy. Oxford University Press.
In this chapter we argue that Robert Hare's psychopathy checklist revised (PCL-R) offers a construct of psychopathy that is valid enough for philosophical investigations of the moral and legal responsibility of psychopathic offenders.
Mental Illness in Philosophy of Cognitive Science
Psychopathology and Responsibility in Meta-Ethics
The Validity of Psychopathy in Philosophy of Cognitive Science
The Role of Religious and Spiritual Values in Shaping Humanity (A Study of Dr. B.R. Ambedkar's Religious Philosophy). Desh Raj Sirswal - 2016 - Milestone Education Review 7 (01):6–18.
Values are an important part of human existence, society, and human relations. All social, economic, political, and religious problems are in one sense a reflection of this special abstraction of human knowledge. We are living in a globalized village and thinking much about values rather than the practice of them. If we define religion and spirituality, we can say that religion is a set of beliefs and rituals that claim to get a person in a right relationship with God, and spirituality is a focus on spiritual things and the spiritual world instead of physical/earthly things. If we think rationally, we can find that the major evils related to religion existing in present society are due to a lack of proper understanding of religion and spirituality. If we really know our own religions and the values associated with them, we can create a beautiful world, full of love and respect for each and every human being. The proper knowledge and practice of any religion's values can make an integrated man. In the book The Buddha and His Dhamma, Dr. Ambedkar elucidated the significance and importance of Dhamma in human life. The Dhamma maintained purity of life, which meant abstaining from lustful, evil practices. The Dhamma is a perfection of life and giving up craving. Dhamma's righteousness means the right relation of man to man in all spheres of life. The basic idea underlying religion is to create an atmosphere for the spiritual development of the individual. He said that knowing the proper ways and means is more important than knowing the ideal. The major objective of this paper is to study the religious philosophy of Dr. B.R. Ambedkar and to study how he established that religious and spiritual values enable religious people in particular and humanity at large to solve contemporary problems.
Ethics and Society, Misc in Value Theory, Miscellaneous
Heterodox/Nastika Philosophy, Misc in Asian Philosophy
Mahayana Buddhist Philosophy in Asian Philosophy
Parfit's Fission Dilemma: Why Relation R Doesn't Matter. Henry Pollock - 2018 - Theoria 84 (4):284–294.
In his work on personal identity, Derek Parfit makes two revolutionary claims: firstly, that personal identity is not what matters in survival; and secondly, that what does matter is relation R. In this article I demonstrate his position here to be inconsistent, with the former claim being defensible only in case the latter is false. Parfit intends his famous fission argument to establish the unimportance of identity – a conclusion disputed by, among others, Mark Johnston. My approach is to critically assess their debate, focusing on Johnston's reductio of Parfit's position. I contend that although Parfit's own response fails, there are other ways to save the fission argument. The unimportance of identity then comes at a cost, however, because the reductio can only be avoided by accepting either that nothing matters in survival, or else that facts about particles and forces do. Either way, relation R cannot be what matters.
Does Trust Matter for R&D Cooperation? A Game Theoretic Examination. Marie-Laure Cabon-Dhersin & Shyama V. Ramani - 2004 - Theory and Decision 57 (2):143–180.
The game theoretical approach to R&D cooperation does not investigate the role of trust in the initiation and success of R&D cooperation: it either assumes that firms are non-opportunists or that the R&D cooperation is supported by an incentive mechanism that eliminates opportunism. In contrast, the present paper focuses on these issues by introducing incomplete information and two types of firms: opportunist and non-opportunist. Defining trust as the belief of each firm that its potential collaborator will respect the contract, it identifies the trust conditions under which firms initiate R&D alliances and contribute to their success. The higher the spillovers, the higher the level of trust required to initiate R&D cooperation for non-opportunists, while the inverse holds for opportunists.
A Critique of R.D. Alexander's Views on Group Selection. David Sloan Wilson - 1999 - Biology and Philosophy 14 (3):431–449.
Group selection is increasingly being viewed as an important force in human evolution. This paper examines the views of R.D. Alexander, one of the most influential thinkers about human behavior from an evolutionary perspective, on the subject of group selection. Alexander's general conception of evolution is based on the gene-centered approach of G.C. Williams, but he has also emphasized a potential role for group selection in the evolution of individual genomes and in human evolution. Alexander's views are internally inconsistent and underestimate the importance of group selection. Specific themes that Alexander has developed in his account of human evolution are important but are best understood within the framework of multilevel selection theory. From this perspective, Alexander's views on moral systems are not the radical departure from conventional views that he claims, but remain radical in another way more compatible with conventional views.
Evolution of Phenomena in Philosophy of Biology
Group Selection in Philosophy of Biology
The Ethics of Espionage. Tony Pfaff & Jeffrey R. Tiel - 2004 - Journal of Military Ethics 3 (1):1–15.
Professional soldiers and academics have spent considerable effort trying to conclude when it is permissible to set aside the usual moral prohibition against killing in order to achieve the goals set before them. What has received much less attention, however, is when it is appropriate to set aside other moral considerations such as the prohibition against deception, theft and blackmail. This makes some sense, since if it is moral to kill someone, whether or not it is appropriate to deceive him seems to be trivial in comparison. But members of the intelligence community, both military and non-military, must determine for times of peace as well as war when it is appropriate to set aside the usual prohibitions in order to achieve national objectives. The purpose of this article is to provide a framework for military and non-military intelligence professionals for answering and discussing these questions. By applying insights from Kantian and Lockean ethics, the authors seek to describe an ethic of the intelligence profession that permits a combination of ethical restraint and intelligence effectiveness.
Military Ethics in Applied Ethics
Metafysica als een historische discipline: De actualiteit van R.G. Collingwoods "hervormde metafysica" [Metaphysics as a Historical Discipline: The Relevance of R.G. Collingwood's "Reformed Metaphysics"]. Guido Vanheeswijck - 1992 - Tijdschrift Voor Filosofie 54 (1):42–69.
Both in An Autobiography and in An Essay on Metaphysics, R.G. Collingwood defines the study of metaphysics as primarily at any time an attempt to discover the absolute presuppositions of thinking and secondarily as an attempt to discover the corresponding absolute presuppositions of other peoples and other times, and to follow the historical process by which one set of presuppositions has turned into another. In addition, he states that the distinction between what is true and what is false does not apply to them. The objection often raised against this definition is that it has nothing to do with metaphysics in the traditional sense and that it only refers to a history of ideas. In this article I try to show the link between Collingwood's apparently idiosyncratic definition of metaphysics and the traditional one. I, therefore, have to sketch the background against which Collingwood's concept of metaphysics and the peculiar terminology he makes use of must be interpreted. This reconstruction of the original background is necessary in order to make clear what Collingwood means by his project of a "reformed metaphysics" as a historical inquiry into the absolute presuppositions of human thinking about reality.
R. G. Collingwood in 20th Century Philosophy
R-Spondin1 - Discovery of the Long-Missing, Mammalian Female-Determining Gene? Dagmar Wilhelm - 2007 - Bioessays 29 (4):314–318.
Until recently, sex determination in mammals has often been described as a male determination process, with male differentiation being the active and dominant pathway, and only in its absence is the passive female pathway followed. This picture has been challenged recently with the discovery that the gene encoding R-spondin1 is mutated in human patients with female-to-male sex reversal [1]. These findings might place R-spondin1 in the exceptional position of being the female-determining gene in mammals. In this review, possible roles of R-spondin1 during sex determination as well as questions arising from this study will be discussed.
Biological Sciences in Natural Sciences
R&D Cooperation in Emerging Industries, Asymmetric Innovative Capabilities and Rationale for Technology Parks. Vivekananda Mukherjee & Shyama V. Ramani - 2011 - Theory and Decision 71 (3):373–394.
Starting from the premise that firms are distinct in terms of their capacity to create innovations, this article explores the rationale for R&D cooperation and the choice between alliances that involve information sharing, cost sharing or both. Defining innovative capability as the probability of creating an innovation, it examines firm strategy in a duopoly market, where firms have to decide whether or not to cooperate to acquire a fixed cost R&D infrastructure that would endow each firm with a firm-specific innovative capability. Furthermore, since emerging industries are often characterized by high technological uncertainty and diverse firm focus that makes the exploitation of spillovers difficult, this article focuses on a zero spillover context. It demonstrates that asymmetry has an impact on alliance choice and social welfare, as a function of ex-post market competition and fixed costs of R&D. With significant asymmetry no alliance may be formed, while with similar firms the cost sharing alliance is dominant. Finally, it ascertains the settings under which the equilibrium outcome is distinct from that maximizing social welfare, thereby highlighting some conditions under which public investment in a technology park can be justified.
Nanotechnology in Applied Ethics
Skopos Theory and Legal Translation: A Case Study of Examples From the Criminal Law of the P.R.C. Yanping Liu - 2015 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 28 (1):125–133.
Legal translation (LT) has become a principal means to unfold Chinese laws to the world in the global era, and the study of it has proved to be of practical significance. Since proper theoretical guidance is key to the quality of LT, this paper focuses on the Skopos theory and the strategies applied in the practice of LT. A case study of LT examples from the Criminal Law of the P.R.C. has been made while briefly reviewing the Skopos theory and its principles. Starting with a short discussion of LT, this paper probes into the applicability of the three principles of Skopos theory, including the Skopos rule, the coherence rule, and the fidelity rule, to legal texts, especially to the translation of the Criminal Law of the P.R.C. Based on the study, strategies for LT are proposed, with the hope that they can be useful for reference in other legal texts.
Criminal Law in Philosophy of Law
Ethics of Spying: A Reader for the Intelligence Professional, Vol. I. Joel H. Rosenthal, J. E. Drexel Godfrey, R. V. Jones, Arthur S. Hulnick, David W. Mattausch, Kent Pekel, Tony Pfaff, John P. Langan, John B. Chomeau, Anne C. Rudolph, Fritz Allhoff, Michael Skerker, Robert M. Gates, Andrew Wilkie, James Ernest Roscoe & Lincoln P. Bloomfield Jr (eds.) - 2006 - Lanham, MD: Scarecrow Press.
This is the first book to offer the best essays, articles, and speeches on ethics and intelligence that demonstrate the complex moral dilemmas in intelligence collection, analysis, and operations. Some are recently declassified and never before published, and all are written by authors whose backgrounds are as varied as their insights, including Robert M. Gates, former Director of the Central Intelligence Agency; John P. Langan, the Joseph Cardinal Bernardin Professor of Catholic Social Thought at the Kennedy Institute of Ethics, Georgetown University; and Loch K. Johnson, Regents Professor of Political Science at the University of Georgia and recipient of the Owens Award for contributions to the understanding of U.S. intelligence activities. Creating the foundation for the study of ethics and intelligence by filling in the gap between warfare and philosophy, this is a valuable collection of literature for building an ethical code that is not dependent on any specific agency, department, or country.
Ethics of Artificial Intelligence, Misc in Philosophy of Cognitive Science
Differences in Exercise Intensity Seems to Influence the Affective Responses in Self-Selected and Imposed Exercise: A Meta-Analysis. Bruno R. R. Oliveira, Andréa C. Deslandes & Tony M. Santos - 2015 - Frontiers in Psychology 6.
Is R.S. Peters' Way of Mentioning Women in His Texts Detrimental to Philosophy of Education? Some Considerations and Questions. Helen E. Lees - 2012 - Ethics and Education 7 (3):291–302.
Is R.S. Peters' way of mentioning women in his texts detrimental to philosophy of education? Some considerations and questions. Ethics and Education: Vol. 7, Creating spaces, pp. 291–302. doi: 10.1080/17449642.2013.767002.
Feminist Philosophy of Education in Philosophy of Gender, Race, and Sexuality
Women in Philosophy in Philosophy of Gender, Race, and Sexuality
Kinetic Models of (M-R)-Systems. J. A. Prideaux - 2011 - Axiomathes 21 (3):373–392.
Kinetic models using enzyme kinetics are developed for the three ways that Louie proved that Rosen's minimal (M-R)-system can be closed to efficient cause, i.e., how the "replication" component can itself be entailed from within the system. The kinetic models are developed using the techniques of network thermodynamics. As a demonstration, each model is simulated with a SPICE circuit simulator using arbitrarily chosen rate constants. The models are built from SPICE sub-circuits representing the key terms in the chemical rate equations. The models include the addition of an ad hoc semi-permeable membrane so the system can achieve steady state fluxes and also to illustrate the need for all the efficient cause agents to be continually replaced. Comments are made about exactly what is being simulated.
Morris R. Cohen and the Scientific Ideal. David A. Hollinger - 1975 - MIT Press.
This is Hollinger's book on the life and work of the American philosopher of science Morris R. Cohen.
General Philosophy of Science, Misc in General Philosophy of Science
Some New Lattice Constructions in High R. E. Degrees. Heinrich Rolletschek - 1995 - Mathematical Logic Quarterly 41 (3):395–430.
A well-known theorem by Martin asserts that the degrees of maximal sets are precisely the high recursively enumerable degrees, and the same is true with 'maximal' replaced by 'dense simple', 'r-maximal', 'strongly hypersimple' or 'finitely strongly hypersimple'. Many other constructions can also be carried out in any given high r. e. degree, for instance r-maximal or hyperhypersimple sets without maximal supersets. In this paper questions of this type are considered systematically. Ultimately it is shown that every conjunction of simplicity- and non-extensibility properties can be accomplished, unless it is ruled out by well-known, elementary results. Moreover, each construction can be carried out in any given high r. e. degree, as might be expected. For instance, every high r. e. degree contains a dense simple, strongly hypersimple set A which is contained neither in a hyperhypersimple nor in an r-maximal set. The paper also contains some auxiliary results, for instance: every r. e. set B can be transformed into an r. e. set A such that A has no dense simple superset, the transformation preserves simplicity- or non-extensibility properties as far as this is consistent, and $$A \equiv_T B$$ if B is high, and $$A \geq_T B$$ otherwise. Several proofs involve refinements of known constructions; relationships to earlier results are discussed in detail.
Areas of Mathematics in Philosophy of Mathematics
Houghton Mifflin Harcourt | Seventh Grade
Math in Focus: Singapore Math - Seventh Grade
In Chapter Test 1, Section B, Item 9 states, "At 6 p.m., the temperature was 2.5°F. By midnight, it had dropped by 6.8°F. By 6 a.m. the next day, it had risen by 3.4°F. What was the final temperature in °F?" (7.NS.3)
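For reference, a worked solution consistent with the item as quoted: $$2.5 - 6.8 + 3.4 = -0.9$$, so the final temperature is -0.9°F.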
In Chapter Test 4, Section C, Item 11 states, "The price of a computer was marked up by 50% and then marked down by 50%. James said that there was no change in the price of the computer in the end. Explain why James' reasoning is incorrect. Calculate the percent change in the price of the computer. Explain how you found your answer." (7.RP.3)
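A worked check of why James' reasoning is incorrect, using a hypothetical original price $$P$$ (the variable is ours, not the assessment's): the final price is $$P \times 1.5 \times 0.5 = 0.75P$$, a net 25% decrease, because the 50% markdown applies to the larger marked-up price.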
In Cumulative Review 1, Section B, Item 11 states, "Evaluate -20 + 45 ÷ (-3) × (-2)." (7.EE.1)
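For reference, applying the order of operations (division and multiplication from left to right, then addition): $$-20 + 45 \div (-3) \times (-2) = -20 + (-15) \times (-2) = -20 + 30 = 10$$.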
In Chapter 6 Test, Section C, Item 12 states, "This question has two parts. Part A: Construct triangle ABC where AB = 4cm, BC = 5.3cm, and AC = 6.6cm. State the type of triangle you constructed. Show your drawing and answer in the space below. Part B: Triangle ABC is enlarged to produce another triangle XYZ by a scale factor of 1.4. What is the area of triangle XYZ? Write your answer and your work or explanation in the space below." (7.G.1)
In the End-of-Year Benchmark Assessment, Section B, Item 34 states, "This question has two parts. A population consists of the heights in centimeters of 100 students. A random sample of 10 heights is collected. 170, 175, 180, 176, 175, 174, 173, 178, 176, 177 Part A: Calculate the sample mean height of the students. Estimate the population mean height. Write your answers in the space below. Part B: Draw a plot for the heights and the mean height in the space below." (7.SP.2)
Materials provide opportunities for students to engage in grade-level problems during Engage, Learn, Think, Try, Activity, and Independent Practice portions of the lesson. Engage activities present inquiry tasks that encourage mathematical connections. Learn activities are teacher-facilitated inquiry problems that explore new concepts. Think activities provide problems that stimulate critical thinking and creative solutions. Try activities are guided practice opportunities to reinforce new learning. Activity problems reinforce learning concepts while students work with a partner or small group. Independent Practice problems help students consolidate their learning and provide teachers information to form small group differentiation learning groups.
In Section 1.6, Order of Operations with Integers, students engage with solving real-world and mathematical problems involving the four operations with rational numbers. In the Engage activity on page 75, students write algebraic expressions for real-world situations. The activity states, "A game show awards 30 points for each correct answer and deducts 50 points for each incorrect answer. A contestant answers 2 questions incorrectly and 3 questions correctly. How do you write an expression to find the contestant's final score? Discuss." In the Learn activity, Problem 3, page 74, students apply the order of operations with integers. The problem states, "Evaluate -5 + (8 - 12) (-4)." In the Try activity, Problem 4, page 75, students practice applying the order of operations with integers. The problem states, "48 ÷ (-8 + 6) + 2 ⋅ 28." In Independent Practice, Problem 17, students practice translating real-world situations into expressions and solving them. The problem states, "Sarah took three turns in a video game. She scored -120 points during her first turn, 320 points during her second turn, and -80 points during her third turn. What was her average score for the three turns?" Students engage with extensive work and full intent of 7.NS.3 (Solve real-world and mathematical problems involving the four operations with rational numbers).
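For reference, worked solutions to two of the items quoted above: the Try problem evaluates as $$48 \div (-8 + 6) + 2 \cdot 28 = -24 + 56 = 32$$, and Sarah's average score is $$(-120 + 320 - 80) \div 3 = 120 \div 3 = 40$$ points per turn.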
In Section 2.6, Writing Algebraic Expressions, students translate verbal descriptions into algebraic expressions with one or more variables involving the distributive property. In the Engage activity on page 171, students work in pairs to use bar models to model a situation with one variable. The activity states, "A wooden plank is x feet long. Draw a bar model and write an expression to represent the total length of two such planks. Now, use the bar model to represent the length of $$\frac{1}{3}$$ of the total length of the two planks. How do you write an expression to show it? Explain your reasoning." In the Learn activity, Problem 1, page 176, students translate verbal descriptions into algebraic expressions with more than one variable. The problem states, "Some situations may require you to use more than one variable. Adam has m coins. Rachel has $$\frac{1}{2}r$$ coins. Assuming Adam has more coins than Rachel, how many more does Adam have?" In the Try activity, page 177, students practice translating verbal descriptions into algebraic expressions with more than one variable. The activity states, "The price of a bag is p dollars and the price of a pair of shoes is 5q dollars. Ms. Scott bought both items and paid a sales tax of 20%. Write an algebraic expression for the amount of sales tax she paid." In Independent Practice, Problem 11, students practice translating more complex verbal problems into algebraic expressions. The problem states, "The length of $$\frac{2}{3}$$ of a rope is (4u - 5) inches. Express the total length of the rope in terms of u." These activities provide extensive grade-level work with 7.EE.2 (Understand that rewriting an expression in different forms in a problem context can shed light on the problem and how the quantities in it are related).
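For reference, expressions consistent with the items as quoted: the two items cost $$(p + 5q)$$ dollars, so the 20% sales tax is $$0.2(p + 5q) = 0.2p + q$$ dollars; and if $$\frac{2}{3}$$ of the rope measures $$(4u - 5)$$ inches, the total length is $$\frac{3}{2}(4u - 5) = (6u - 7.5)$$ inches.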
In Section 4.5, Percent Increase and Decrease, students find a quantity given a percent increase or decrease and find percent increases and decreases. In the Engage activity on page 335, students work in pairs to represent percent changes by drawing bar models. The activity states, "A: A cake cost $20. Its price increased by $5. Express this increase as a percent of the original price. What can you say about the percent increase of the price of the cake? B: Using your answer in (a), what do you think is the percent decrease in price if the cake originally cost $25 and had its price reduced by $5?" In the Learn activity on page 335, students find a quantity given a percent increase or decrease. The activity states, "The price of a pair of running shoes increased by 15% since last year. If the price of the running shoes cost $60 last year, how much does it cost now?" Method 1 shows the increase (15% of the original price) added to the original price, and Method 2 shows the original price multiplied by the combined percent (115%). In the Try activity, Problem 2, page 340, students practice finding percent increase or decrease. The problem states, "Alex deposited $1,200 into a savings account. At the end of the first year, the amount of money in the account increased to $1,260. What was the percent interest?" In Independent Practice, Problem 4, students practice finding the new quantity given the original quantity and percent increase or decrease. The problem states, "The price of a pound of grapes was $3.20 last year. This year, the price of grapes fell by 15% due to a better harvest. Find the price of a pound of grapes this year." Students engage with extensive work and full intent of 7.RP.3 (Use proportional relationships to solve multistep ratio and percent problems).
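Worked answers consistent with these items: the running shoes now cost $$60 \times 1.15 = 69$$ dollars; the percent interest is $$\frac{1,260 - 1,200}{1,200} \times 100\% = 5\%$$; and a pound of grapes this year costs $$3.20 \times (1 - 0.15) = 2.72$$ dollars.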
In Section 7.2, Area of a Circle, students use formulas to find the area of circles, semicircles, and quadrants. In the Engage activity on page 143, students find different ways to approximate the area of a circle. The activity states, "On a piece of paper, draw a circle with a diameter of 4 centimeters and a square of sides 4 centimeters. a. How can you find the area of the circle? Discuss the steps you would take to find the area of the circle. b. What is the area of the square? What can you observe about the area of the circle and the area of the square? Discuss." In the Learn activity on page 143, students find the area of a circle. The activity states, "1. The figure on the right shows a circle of radius r in a square. Find the area of the square in terms of r. 2. Draw a square in the circle as shown. Then, find the area of the square in terms of r. 3. Estimate the area of the circle using the areas found in 1 and 2. The area of the circle is less than ____ square units but is more than ____ square units. The area of the circle is about ____ square units." In the Try activity, Problem 1, page 146, students practice finding the area of a circle. The problem states, "Find the area of a circle that has a radius of 18 centimeters." In Independent Practice, Problem 9, students practice finding the area of a circle when given the radius or diameter and using $$\frac{22}{7}$$ for $$\pi$$. The problem states, "A circular pendant has a diameter of 7 centimeters. Find its area. Use $$\frac{22}{7}$$ as an approximation for $$\pi$$." Students engage with extensive work and full intent of 7.G.4 (Know the formulas for the area and circumference of a circle and use them to solve problems).
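For reference, applying $$A = \pi r^2$$ to the items quoted above: for the radius of 18 centimeters, $$A = 324\pi \approx 1,017.9$$ square centimeters; for the pendant, the radius is $$7 \div 2 = 3.5$$ centimeters, so $$A \approx \frac{22}{7} \times 3.5^2 = 38.5$$ square centimeters.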
In Section 8.2, Making Inferences About Populations, students make inferences about a population using statistics from a sample, estimate a population mean, and make comparative inferences about two populations using two sets of sample statistics. In the Engage activity on page 221, students discover the connection between sample data and population data, based on experimental results, and make an inference about the population. The activity states, "Susan has a bag of marbles. She chooses a marble and then replaces it. Of her ten trials, she picks a red marble 8 times. What is reasonable to conclude about the bag of marbles? Explain." In the Learn activity, Problem 1b, page 226, students make comparative inferences about two populations. The problem states, "The weights of the players on two football teams are summarized in the box plots (Team A and B box plots shown). Express the difference in median weight in terms of the interquartile range." Students are shown how to divide the difference in median weight by the interquartile range. In the Try activity, Problem 2, page 225, students practice using an inference to estimate a population mean. The problem states, "A random sample of ages {15, 5, 8, 7, 18, 6, 15, 17, 6, 15} of 10 children was collected from a population of 100 children. a. Calculate the sample mean age of the children and use it to estimate the population mean age. b. Calculate the MAD of the sample. c. Calculate the MAD to mean ratio. d. Draw a dot plot for the ages and the mean age. e. Using the MAD to mean ratio and the dot plot, describe informally how varied the population ages are." In Independent Practice, Problem 2, students infer about populations from a sample, given the mean and the MAD. The problem states, "You interviewed a random sample of 25 marathon runners and compiled the following statistics. Mean time to complete the race = 220 minutes; MAD = 50 minutes. What can you infer about the time to complete the race among the population of runners represented by your sample?" Students engage with extensive work and full intent of 7.SP.2 (Use data from a random sample to draw inferences about a population with an unknown characteristic of interest).
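The sample statistics in the Try activity can be checked directly. A minimal sketch in Python (the ages are taken verbatim from the item; the variable names are ours):

ages = [15, 5, 8, 7, 18, 6, 15, 17, 6, 15]
mean = sum(ages) / len(ages)                        # sample mean: 11.2
mad = sum(abs(a - mean) for a in ages) / len(ages)  # mean absolute deviation: 4.8
print(mean, mad, round(mad / mean, 2))              # 11.2 4.8 0.43

A student would therefore estimate the population mean age as about 11.2 years, with a MAD-to-mean ratio of roughly 0.43, indicating fairly spread-out ages.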
There are 9 chapters, of which 5.5 address major work of the grade, or supporting work connected to major work of the grade, approximately 61%.
In Section 5.1, Complementary, Supplementary, and Adjacent Angles, Try, Problem 2, page 12, students find angle measures involving adjacent angles, which connects the supporting work of 7.G.5 (Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure) with the major work of 7.EE.4 (Use variables to represent quantities in a real-world or mathematical problem, and construct simple equations and inequalities to solve problems by reasoning about the quantities). Students solve, "In the diagram below, ∠AOC, ∠COD and ∠DOB are adjacent angles on a straight line, $$\overline{AB}$$." A diagram shows ∠AOC measures 126°, ∠COD measures x°, and ∠DOB measures 2x°, and students find the value of x.
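For reference, the angles on the straight line $$\overline{AB}$$ sum to 180°, so $$126 + x + 2x = 180$$, giving $$3x = 54$$ and $$x = 18$$.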
In Section 6.2, Scale Drawings and Lengths, Try, Problem 1, page 97, students calculate lengths and distances from scale drawings which connects the supporting work of 7.G.1 (Solve problems involving scale drawings of geometric figures, including computing actual lengths and areas from a scale drawing and reproducing a scale drawing at a different scale) with the major work of 7.RP.A (Analyze proportional relationships and use them to solve real-world and mathematical problems). Students solve, "The scale of a map is 1 inch: 15 miles. If the distance on the map between Matthew's home and his school is 0.6 inch, find the actual distance in miles."
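For reference: at a scale of 1 inch : 15 miles, the actual distance is $$0.6 \times 15 = 9$$ miles.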
In Section 7.1 Radius, Diameter, and Circumference of a Circle, Independent Practice, Problem 18, page 141, students write an equation to find the distance around one quadrant of a circle which connects the supporting work of 7.G.4 (Know the formulas for the area and circumference of a circle and use them to solve problems) with the major work of 7.EE.4 (Use variables to represent quantities in a real-world or mathematical problem, and construct simple equations and inequalities to solve problems by reasoning about the quantities). Students solve, "Find the distance around each quadrant. Use $$\frac{22}{7}$$ as an approximation for $$\pi$$." A quadrant of a circle with a radius of 3.5 in. is shown.
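A worked solution (ours, not from the materials): the distance around a quadrant is a quarter of the circumference plus two radii, so with $$r = 3.5$$, the distance is $$\frac{1}{4}(2 ⋅ \frac{22}{7} ⋅ 3.5) + 2(3.5) = 5.5 + 7 = 12.5$$ inches.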
In Section 8.4, Finding Probability of Events, Independent Practice, Problem 10, page 259, students calculate the probability of events which connects the supporting work of 7.SP.6 (Approximate the probability of a chance event) with the major work of 7.RP.3 (Use proportional relationships to solve multistep ratio and percent problems). Students solve, "At a middle school, 39% of the students jog and 35% of the students do aerobic exercise. Of the students who do aerobic exercise, 1 out of 5 students also jogs. a. What percent of the students do both activities? b. Draw a Venn diagram to represent the information. c. What fraction of the students only jog? d. What is the probability of randomly selecting a student who does neither activity? Give your answer as a decimal."
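For reference, a worked solution (reviewer's computation, not quoted from the materials): students who do both activities make up $$\frac{1}{5} ⋅ 35\% = 7\%$$, so $$39\% - 7\% = 32\% = \frac{8}{25}$$ only jog, $$35\% - 7\% = 28\%$$ only do aerobic exercise, and $$P(\text{neither}) = 1 - (0.32 + 0.07 + 0.28) = 0.33$$.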
In Section 9.3, Independent Events, Try, Problem 1, page 344, students use the multiplication rule and the addition rule of probability to solve problems involving independent events which connects the supporting work of 7.SP.8 (Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation) with the major work of 7.NS.3 (Solve real-world and mathematical problems involving the four operations with rational numbers). Students solve, "A game is played with a fair coin and a six-sided number die. To win the game, you need to randomly obtain heads on a fair coin and 3 on a fair number die. a. Complete the tree diagram. b. Find the probability of winning the game in one try."
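The winning probability follows from the multiplication rule for independent events (our computation, not from the materials): $$P(\text{heads and } 3) = \frac{1}{2} ⋅ \frac{1}{6} = \frac{1}{12}$$.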
In Section 6.3, Scale Drawings and Areas, Independent Practice, Problem 1 connects the supporting work of 7.G.A (Draw, construct, and describe geometrical figures and describe the relationships between them) to the supporting work of 7.G.B (Solve real-life and mathematical problems involving angle measure, area, surface area, and volume). Students solve, "On a map, 1 inch represents an actual distance of 2.5 miles. The actual area of the lake is 12 square miles. Find the area of the lake on the map."
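A worked solution (ours, not from the materials): because area scales with the square of the linear scale factor, 1 square inch on the map represents $$2.5^2 = 6.25$$ square miles, so the lake's area on the map is $$12 ÷ 6.25 = 1.92$$ square inches.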
In Section 7.5, Volume of Prism, Independent Practice, Problem 12 connects the supporting work of 7.G.B (Solve real-life and mathematical problems involving angle measure, area, surface area, and volume) to the supporting work of 7.G.A (Draw, construct, and describe geometrical figures and describe the relationships between them). Students solve, "The volume of a triangular prism is 700 cubic centimeters. Two of its dimensions are given in the diagram. Find the height of the triangular base." The triangular prism diagram shows a base of 10 cm and a width of 14 cm.
In Section 8.2, Making Inferences About Populations, Independent Practice, Problem 2 connects the supporting work of 7.SP.A (Use random sampling to draw inferences about a population) to the supporting work of 7.SP.B (Draw informal comparative inferences about two populations). Students solve, "You interviewed a random sample of 25 marathon runners and compiled the following statistics. Mean time to complete the race=220 minutes. MAD=50 minutes. What can you infer about the time to complete the race among the population of runners represented by your sample?"
In Section 1.8, Operations with Decimals, Independent Practice, Problem 19 connects the major work of 7.NS.A (Apply and extend previous understanding of operations with fractions to add, subtract, multiply, and divide rational numbers) to the major work of 7.EE.B (Solve real-life and mathematical problems using numerical and algebraic expressions and equations). Students solve, "Evaluate each expression. 11.3 - 5.1 + 3.1 0.2 - 1.1."
In Section 3.3 Real-World Problems: Algebraic Equations, Independent Practice, Problem 7 connects the major work of 7.EE.B (Solve real-life and mathematical problems using numerical and algebraic expressions and equations) to the major work of 7.EE.A (Use properties of operations to generate equivalent expressions). Students solve, "Kevin wrote a riddle. A positive number is 5 less than another positive number. 6 times the lesser number minus 3 times the greater number is 3. Find the two positive numbers."
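For reference (our computation, not from the materials): letting the lesser number be $$x$$, the greater is $$x + 5$$, and $$6x - 3(x + 5) = 3$$ gives $$3x = 18$$, so $$x = 6$$ and the numbers are 6 and 11.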
In Section 4.3, Real-World Problems: Direct Proportion, Independent Practice, Problem 1 connects the major work of 7.RP.A (Analyze proportional relationships and use them to solve real-world and mathematical problems) to the major work of 7.NS.A (Apply and extend previous understanding of operations with fractions to add, subtract, multiply, and divide rational numbers). Students solve, "m varies directly as n, and m = 14 when n = 7. a. Write an equation that relates m and n. b. Find m when n = 16. c. Find n when m = 30."
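A worked solution (ours, not from the materials): since $$m = 14$$ when $$n = 7$$, the constant of proportionality is 2, so $$m = 2n$$; then $$m = 2(16) = 32$$ and $$n = 30 ÷ 2 = 15$$.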
In Chapter 2, Learning Continuum, What have students learned? states, "In Course 1 Chapter 7, students have learned: Using letters to represent numbers. (6.EE.2, 6.EE.2a, 6.EE.2b, 6.EE.6) Evaluating algebraic expressions. (6.EE.2, 6.EE.2c) Simplifying algebraic expressions. (6.EE.3, 6.EE.4), Expanding and factoring algebraic expressions. (6.EE.2, 6.EE.3, 6.EE.4)"
In Chapter 4, Learning Continuum, What have students learned? states, "In Course 1 Chapters 4, 5, 6, and 9, students have learned: Comparing two quantities. (6.RP.1, 6.RP.3d), Equivalent ratios. (6.RP.3a), Rates and unit rates. (6.RP.2, 6.RP.3), Real-world problems: speed and average speed. (6.RP.3, 6.RP.3b), Real-world problems: percent. (6.RP.3c), Points on the coordinate plane. (6.NS.6, 6.NS.6b, 6.NS.6c, 6.NS.8, 6.G.3)"
In Chapter 5, Chapter Overview, Math Background, states, "In Grade 4, students learned to classify angles as acute, obtuse, right, or straight. They also developed knowledge of parallel and perpendicular lines. In Grade 5, students learned to classify triangles according to the lengths of their sides and their angle measures."
In Chapter 6, Recall Prior Knowledge, states, "In previous grades, students learned to identify a ray that extends in one direction and to use geometric notation to write rays and angles. They also learned to measure angles in degrees using protractors."
In Chapter 8, Recall Prior Knowledge, states, "In Course 1, students learned to identify measures of variation. They divided a data set into quartiles and identified the interquartile range. Students drew and interpreted box-and-whisker plots. They interpreted data and decided what was 'typical' or most likely."
In Chapter 3, Learning Continuum, What will students learn next? states, "In Course 3 Chapter 4 students will learn: Solving linear equations with one variable. (8.EE.7b), Identifying the number of solutions to a linear equation. (8.EE.7a), Solving for a variable in a two-variable equation. (8.EE.7b), Solving linear inequalities with one variable."
In Chapter 4, Learning Continuum, What will students learn next? states, "In Course 3 Chapter 5, students will learn: Finding and interpreting slopes of lines. (8.EE.6), Understanding slope-intercept form. (8.EE.6), Writing linear equations. (8.EE.6), Real-world problems: linear equations. (8.EE.5)"
In Chapter 6, Learning Continuum, What will students learn next? states, "In Course 3 Chapters 9 and 10, students will learn: Dilations. (8.G.3), Understanding and applying congruent figures. Understanding and applying similar figures. (8.G.5)"
In Chapter 8, Statistics and Probability, Learning Continuum, What will students learn next? states, "In Course 2 Chapter 9, students will learn: Probability of compound events. (7.SP.8), Independent events. (7.SP.8), Dependent events. (7.SP.8), In Course 3 Chapter 12, students will learn: Two-way tables. (8.SP.4)"
In Chapter 9, Learning Continuum, What will students learn next? states, "In Course 3 Chapter 12, students will learn: Two-way tables. (8.SP.4) In High School, students will learn: Conditional probability and the rules of probability. (S-CP), Using probability to make decisions (S-MD)."
There are 9 instructional chapters divided into sections, comprising 120 instructional days.
There is one day for each chapter's instructional beginning, consisting of the Chapter Opener and Recall Prior Knowledge, for a total of 9 additional days.
There is one day for each chapter's closure, consisting of the Chapter Wrap-Up, Chapter Review, Performance Task, and Project work, for a total of 9 additional days.
There is one day for each chapter's Assessment, for a total of 9 additional days.
The online Common Core Pathway and Pacing Course 2 states the instructional materials can be completed in 146 days, with one instructional day added to Section 2.1 compared to the printed Teacher's Edition. For the purpose of this review, the Chapter Planning Guide provided by the publisher in the Teacher's Edition was used.
In Section 1.3, Adding Integers, Learn, Problem 1, page 38, students use counters to model addition. The problem states, "Suppose the temperature was -8°F at 7 a.m. Five hours later, the temperature has risen 10°F. Find the new temperature." Students develop conceptual understanding of 7.NS.1 (Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram).
In Section 2.4, Expanding Algebraic Expressions, Engage, page 153, students use algebra tiles to expand expressions. The materials state, "Use (green tile shown) to represent +x and (orange tile shown) to represent +1. Show how you expand 2(2x + 6) and $$\frac{1}{2}$$(2x + 6). Share your method." Students develop conceptual understanding of 7.EE.1 (Apply properties of operations as strategies to add, subtract, factor, and expand linear expressions with rational coefficients).
In Section 5.4, Interior and Exterior Angles, Activity, page 47, students work in pairs to discover the property of interior angles of a triangle. The materials state, "1) Draw and cut out a triangle. Label the three interior angles of the triangle as 1, 2, and 3. 2) Cut out the three angles. Then, arrange them on a straight line. What do you notice about the sum of the measures of the three interior angles?" Students develop conceptual understanding of 7.G.5 (Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure).
In Section 8.1, Random Sampling Methods, Learn, Problem 3c, page 215, students use a random number table to determine a random sample. The problem states, "Suppose you want to pick a random sample of 30 people from a town that has 500 residents. You can first assign a unique 3-digit number from 001 to 500 to each resident. Then, use a random number table like the one shown below to select the members of the sample." Students develop conceptual understanding of 7.SP.1 (Understand that statistics can be used to gain information about a population by examining a sample of the population).
In Section 8.4, Finding the Probability of Events, Engage, page 245, students use pictures of cards to determine the probability of an event occurring. The materials state, "Suppose you randomly pick two of the cards below. What is the probability of picking two cards with the same letter? Explain your answer." Four pink cards labeled A-D and four purple cards labeled A-D are shown. Students develop conceptual understanding of 7.SP.5 (Understand that the probability of a chance event is a number between 0 and 1 that expresses the likelihood of the event occurring).
Students have opportunities to demonstrate conceptual understanding through Try activities, which are guided practice opportunities to reinforce new learning. The Independent Practice provides limited opportunities for students to continue the development of conceptual understanding. Examples include:
In Section 5.4, Interior and Exterior Angles, Independent Practice, Problem 2, page 55, students use the angle sum of a triangle to find unknown angle measures. The problem states, "The diagrams may not be drawn to scale. Find the value of y. In the diagrams for 4 to 6, AC is a straight line." Students are shown a triangle with interior angles labeled 18° and 26°. Students independently practice conceptual understanding of 7.G.5 (Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure).
In Section 8.1, Random Sampling Methods, Independent Practice, Problem 7, page 220, students describe how to implement each of the three sampling methods for a given situation. The problem states, "2,000 runners participated in a marathon. You want to randomly choose 60 of the runners to find out how long it took each one to run the race. Describe how you would select the 60 runners if you use a a) random sampling method. b) systematic sampling method. c) stratified sampling method." Students independently practice conceptual understanding of 7.SP.1 (Understand that statistics can be used to gain information about a population by examining a sample of the population, generalizations about a population from a sample are valid only if the sample is representative of that population).
In Section 1.2, Writing Rational Numbers as Decimals, Learn, Problem 1, page 23, students write rational numbers as repeating decimals using long division. The problem states, "Since $$\frac{1}{3}$$ means 1 divided by 3, you can write $$\frac{1}{3}$$ as a decimal using long division. When you divide 1 by 3, the division process will not terminate with a remainder of 0. The digit 3 keeps repeating infinitely. A decimal, such as 0.333…, is called a repeating decimal. For the repeating decimal 0.333…, the digit 3 repeats itself. You can write 0.333… as $$0.\bar{3}$$, with a bar above the repeating digit 3. So, 0.333… = $$0.\bar{3}$$." Students develop procedural skill and fluency of 7.NS.2d (Convert a rational number to a decimal using long division; know that the decimal form of a rational number terminates in 0s or eventually repeats).
In Section 3.3, Real-World Problems: Algebraic Equations, Learn, Problem 4, page 235, students translate descriptions into algebraic equations. The problem states, "Ivan has 12 more comic books than Hana. If they have 28 comic books altogether, how many comic books does Ivan have? Let the number of comic books that Hana has be x. Then, the number of comic books that Ivan has is x + 12. Because they have 28 books altogether, x + (x + 12) = 28, so 2x + 12 = 28 and x = 8. Number of books that Ivan has: x + 12 = 8 + 12 = 20. Ivan has 20 comic books." Students develop procedural skill and fluency of 7.EE.B.4a (Solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently).
In Section 4.1, Identifying Direct Proportion, Learn, Problem 3, page 291, students identify a constant of proportionality from a verbal description. The problem states, "Ana is buying some baseball caps. Each cap costs $8. The amount Ana pays for the caps is directly proportional to the number of caps she buys. Write an equation that represents the direct proportion." Students develop procedural skill and fluency of 7.RP.2b (Identify the constant of proportionality (unit rate) in tables, graphs, equations, diagrams, and verbal descriptions of proportional relationships).
In Section 5.1, Complementary, Supplementary, and Adjacent Angles, Learn, Problem 2, page 10, students calculate an unknown angle when one of two angles is known in a right angle. The problem states, "In the diagram, ∠PQS and ∠SQR are adjacent angles. They share a common vertex, Q, and a common side, QS. QP is perpendicular to QR. Find the measure of ∠SQR." Students are shown a right angle and given the measurement of 41° for ∠PQS. Students develop procedural skill and fluency of 7.G.5 (Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure).
In Section 9.2, Probability of Compound Events, Learn, Problem 2, page 328, students find probability using tree diagrams. The problem states, "Suppose that it is equally likely to rain or not rain on any given day. Draw a tree diagram and use it to find the probability that it rains exactly once on two consecutive days. P(rain exactly once on two consecutive days) = $$\frac{2}{4}$$ = $$\frac{1}{2}$$." Students develop procedural skill and fluency of 7.SP.8 (Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation).
In Section 3.3, Real-World Problems: Algebraic Equations, Independent Practice, Problem 1, page 239, students independently solve a perimeter problem algebraically. The problem states, "Two sections of a garden are shaped like identical isosceles triangles. The base of each triangle is 50 feet, and the other two sides are each x feet long. If the combined perimeter of both gardens is 242 feet, find the value of x." Students independently practice procedural skill and fluency of 7.EE.B.4a (Solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers. Solve equations of these forms fluently).
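For reference (our computation, not from the materials): each triangle has perimeter $$50 + 2x$$, so $$2(50 + 2x) = 242$$ gives $$4x = 142$$ and $$x = 35.5$$.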
In Section 4.1, Identifying Direct Proportion, Independent Practice, Problem 18, page 296, students independently interpret verbal descriptions to write the direct proportion equation. The problem states, "Owen hikes 3 miles in 45 minutes. Given that the distance is directly proportional to the duration he walks, find the constant of proportionality and write an equation to represent the direct proportion." Students independently practice procedural skill and fluency of 7.RP.2b (Identify the constant of proportionality [unit rate] in tables, graphs, equations, diagrams, and verbal descriptions of proportional relationships).
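A worked solution (ours, not from the materials): the constant of proportionality is $$k = \frac{3}{45} = \frac{1}{15}$$ mile per minute, so $$d = \frac{1}{15}t$$.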
In Section 5.1, Complementary, Supplementary, and Adjacent Angles, Independent Practice, Problem 34, students independently apply concepts of complementary and supplementary angles to solve. The problem states, "The diagrams may not be drawn to scale. The ratio a:b = 2:3. Find the values of a and b. The measure of ∠PQR = 90$$\degree$$." Students independently practice procedural skill and fluency of 7.G.5 (Use facts about supplementary, complementary, vertical, and adjacent angles in a multi-step problem to write and solve simple equations for an unknown angle in a figure).
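For reference (reviewer's computation, not quoted from the materials): since $$a + b = 90$$ and $$a:b = 2:3$$, $$a = \frac{2}{5} ⋅ 90 = 36$$ and $$b = \frac{3}{5} ⋅ 90 = 54$$.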
In Section 9.2, Probability of Compound Events, Independent Practice, Problem 2, page 333, students independently draw a tree diagram and calculate the probability of a compound event. The problem states, "A letter is randomly chosen from the word FOOD, followed by randomly choosing a letter from the word DOT. Use a tree diagram to find the probability that both letters chosen are the same." Students independently practice procedural skill and fluency of 7.SP.8 (Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation).
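The probability works out as follows (reviewer's computation, not from the materials): only O and D appear in both words, so $$P(\text{same letter}) = \frac{2}{4} ⋅ \frac{1}{3} + \frac{1}{4} ⋅ \frac{1}{3} = \frac{3}{12} = \frac{1}{4}$$.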
In Section 1.7, Operations with Fractions and Mixed Numbers, Learn, Problem 1, page 91, students add, subtract, multiply, and divide rational numbers in a real-world situation. The problem states, "Mr. Turner has a partial roll of wire $$18\frac{1}{4}$$ feet long. He needs $$25\frac{1}{2}$$ feet of wire for a remodeling project. How much wire is he short?" Students engage in routine application of 7.NS.3 (Solve real-world and mathematical problems involving the four operations with rational numbers).
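The expected subtraction (our computation, not from the materials): $$25\frac{1}{2} - 18\frac{1}{4} = 7\frac{1}{4}$$, so Mr. Turner is short $$7\frac{1}{4}$$ feet of wire.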
In Section 3.5, Real-World Problems: Algebraic Inequalities, Try, Problem 3, page 262, students solve real-world problems involving algebraic inequalities. The problem states, "Ms. Cooper pays $200 in advance on her account at a health club. Each time she visits the club, $8 is deducted from the account. If she needs to maintain a minimum amount of $50 in the account, how many visits can Ms. Cooper make before she needs to top up the account again?" Students engage in routine application of 7.EE.4b (Solve word problems leading to inequalities of the form px + q > r or px + q < r, where p, q, and r are specific rational numbers).
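A worked solution (ours, not from the materials): letting $$v$$ be the number of visits, $$200 - 8v \geq 50$$ gives $$v \leq 18.75$$, so Ms. Cooper can make at most 18 visits.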
In Section 4.3, Real-World Problems: Direct Proportion, Independent Practice, Problem 10, page 315, students write a proportion to find an unknown quantity given one of the quantities. The problem states, "It costs $180 to rent a car for 3 days. Find the cost of renting a car for 1 week." Students independently engage in routine application of 7.RP.2 (Recognize and represent proportional relationships between quantities).
In Section 7.6, Real-World Problems: Surface Area and Volume, Independent Practice, Problem 3, page 188, students find the volume and surface area of a composite solid made up of a triangular prism and a rectangular prism. The problem states, "Mr. Turner builds a shed to store his tools. The shed has a roof that is in the shape of a triangular prism. a. Find the amount of space the shed occupies. b. Find the surface area of the shed, including its floor." A diagram with dimensions is provided. Students independently engage in routine application of 7.G.6 (Solve real-world and mathematical problems involving area, volume and surface area of two-and three- dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms).
In Section 1.5, Multiplying and Dividing Integers, Engage, page 63, students represent a multiplication situation using counters. The materials state, "Show 2 × 4 using (picture of yellow and orange counter shown). How do you show 2 × (-4)? Create a real-world problem to model each situation. Share your real-world problems." Students engage in non-routine application of 7.NS.2 (Apply and extend previous understandings of multiplication and division and of fractions to multiply and divide rational numbers).
In Section 1.7, Operations with Fractions and Mixed Numbers, Engage, page 91, students apply operations with rational numbers in a real-world context. The materials state, "A clock's battery is running low. Every 6 hours, the clock slows down by $$\frac{1}{2}$$ hour. How do you find out how much time the clock slows down by in 1 hour? Share your method." Students engage in non-routine application of 7.NS.3 (Solve real-world and mathematical problems involving the four operations with rational numbers).
In Section 2.7, Real-World Problems: Algebraic Reasoning, Engage, page 192, students work in pairs to solve a percent problem involving algebraic expressions. The materials state, "Ms. Evans bought a total of 60 pens and pencils. There was an equal number of pens and pencils. She gave x percent of the pens and y percent of the pencils to her students. Use algebraic reasoning to write an algebraic expression for the number of pens and pencils that she gave her students. Share your reasoning." Students engage in non-routine application of 7.EE.3 (Solve multi-step real-life and mathematical problems posed with positive and negative rational numbers in any form).
In Chapter 8, Performance Task, Problem 2, page 293, students find the probability of winning if they draw two number cards and the difference is 3 or more. The problem states, "In a game, you and your friend are asked to each select a card from a deck of ten cards with the numbers 1 to 10. Your friend selects a card from the deck. Then, you select a card from the ones remaining in the deck. You do not know your friend's number. You win if the difference between your number and your friend's number is at least 3. a) For which of your friend's numbers do you have the greatest chance of winning? b) For which of your friend's numbers do you have the least chance of winning? c) What is the probability that you will win?" Students independently engage in non-routine application of 7.SP.8 (Find probabilities of compound events using organized lists, tables, tree diagrams, and simulation).
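For reference, a worked count (ours, not from the materials): for a friend's number $$f$$, the winning cards among the 9 remaining number 7 when $$f = 1$$ or $$10$$, 6 when $$f = 2$$ or $$9$$, and 5 when $$f = 3$$ through $$8$$, so the chance of winning is greatest at 1 or 10, least at 3 through 8, and overall $$P(\text{win}) = \frac{56}{90} = \frac{28}{45}$$.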
In Section 1.7, Operations with Fractions and Mixed Numbers, Learn, Problem 2, page 86, students multiply rational numbers. The problem states, "Evaluate $$-\frac{3}{7} ⋅ \frac{8}{15}$$." Students engage in procedural skill and fluency of 7.NS.2c (Apply properties of operations as strategies to multiply and divide rational numbers).
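For reference (our computation, not from the materials): $$-\frac{3}{7} ⋅ \frac{8}{15} = -\frac{24}{105} = -\frac{8}{35}$$.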
In Section 3.5, Real-World Problems: Algebraic Inequalities, Independent Practice, Problem 10, page 266, students write and solve inequalities. The problem states, "A cab company charges $0.80 per mile plus $2 for tolls. Rachel has at most $16 to spend on her cab fare. Write and solve an inequality for the maximum distance she can travel. Can she afford to take a cab from her home to an airport that is 25 miles away?" Students engage in application of 7.EE.4b (Solve word problems leading to inequalities of the form px + q > r or px + q < r, where p, q, and r are specific rational numbers. Graph the solution set of the inequality and interpret it in the context of the problem).
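A worked solution (ours, not from the materials): letting $$m$$ be the distance in miles, $$0.80m + 2 \leq 16$$ gives $$m \leq 17.5$$, so Rachel cannot afford the 25-mile cab ride.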
In Section 8.3, Defining Outcomes, Events and Sample Space, Try, Problem 1, page 237, students practice identifying outcomes, sample spaces, and events. The problem states, "Jake spun the spinner on the right and recorded the numbers where the spinner lands. a) List all the possible outcomes. b) State the number of outcomes in the sample space." Students engage in conceptual understanding of 7.SP.7a (Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events).
In Section 1.4, Subtracting Integers, Try, Problem 3, page 53, students use the additive inverse property to subtract integers. The problem states, "A fishing boat drags its net 35 feet below the ocean's surface. Then, it lowers the net by an additional 12 feet. Find the fishing net's new position relative to the ocean's surface." Students develop procedural skill and fluency and apply the mathematics of 7.NS.1c (Understand subtraction of rational numbers as adding the additive inverse, p - q = p + (-q)).
In Section 2.1, Adding Algebraic Terms, Learn, Problem 1, page 132, students simplify algebraic expressions with decimal or fractional coefficients by adding like terms. The problem states, "Simplify the expression 0.9p + 0.7p. Represent the term 0.9p with nine 0.1p sections and the term 0.7p with seven 0.1p sections. From the bar model, 0.9p + 0.7p = 1.6p. The sum is the total number of colored sections in the bar model." Students develop conceptual understanding and build procedural skill and fluency of 7.EE.1 (Apply properties of operations as strategies to add, subtract, factor, and expand linear expressions with rational coefficients).
In Section 7.3, Real-World Problems: Circles, Try, Problem 1, page 156, students use the formula for area of a circle to solve real-world problems. The problem states, "Alex recycled some old fabric to make a rug. He cut out a quadrant and two semicircles to make the rug. Find the area of the rug. Use $$\frac{22}{7}$$ as an approximation for $$\pi$$." A diagram of the rug with dimensions is provided. Students develop procedural skill and fluency and apply the mathematics of 7.G.4 (Know the formulas for the area and circumference of a circle and use them to solve problems).
In Chapter 3, Algebraic Equations and Inequalities, Put On Your Thinking Cap! Problem 1, page 268, students solve real-world problems using algebraic equations. The problem states, "Jamar is five times as old as Kylie. Larissa is five times as old as Jamar. Mitchell is twice as old as Larissa. The sum of their ages is the age of Nora. Nora just turned 81. How old is Jamar?" Teacher guidance states, "You may want to guide students on applying the various heuristics using the problem-solving heuristics poster. Refer students to the corresponding teacher resources for prompts and worked out solutions." Teacher guidance gives a generic reference to have students use the heuristics poster which is repeated throughout the materials.
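For reference (our computation, not from the materials): letting Kylie's age be $$x$$, the four ages are $$x$$, $$5x$$, $$25x$$, and $$50x$$, so $$81x = 81$$, $$x = 1$$, and Jamar is 5 years old.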
In Chapter 4, Proportion and Percent of Change, Put On Your Thinking Cap! Problem 1, page 358, students solve real-world problems using proportional relationships. The problem states, "Ms. Davis plans to drive from Town P to Town Q, a distance of 350 miles. She hopes to use only 12 gallons of gasoline. After traveling 150 miles, she checks her gauge and estimates that she has used 5 gallons of gasoline. At this rate, will Ms. Davis arrive at Town Q before stopping for gasoline? Justify your answer." Teacher guidance states, "You may want to guide students on applying the various heuristics using the problem-solving heuristics poster. Refer students to the corresponding teacher resources for prompts and worked out solutions." Teacher guidance gives a generic reference to have students use the heuristics poster which is repeated throughout the materials.
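One way to justify the answer (a reviewer sketch, not from the materials): the estimated rate is $$150 ÷ 5 = 30$$ miles per gallon, and the remaining 200 miles require $$200 ÷ 30 ≈ 6.7$$ gallons, which is within the $$12 - 5 = 7$$ gallons remaining, so Ms. Davis can reach Town Q without stopping.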
In Chapter 7, Put on Your Thinking Cap!, Problem 2, page 190, students solve a real-world problem using their understanding of circles and quadrants to find the total area of the shaded parts of a figure. The problem states, "A cushion cover design is created from a circle of radius 7 inches and 4 quadrants. Find the total area of the shaded parts of the design. Use $$\frac{22}{7}$$ as an approximation for $$\pi$$." Teacher guidance states, "Go through the problem using the four-step problem-solving model. Students may need some help getting started after they have understood the problem. Suggest to students that they start by studying the figure and determining what shapes are involved. They can then make a plan as to which formula to use and in what order." Teacher guidance gives a generic reference to have students use the four-step problem-solving method which is repeated throughout the materials.
In Chapter 9, Probability of Compound Events, Put on Your Thinking Cap! Problem 1, page 364, students solve real-world problems using their understanding of dependent events to find the probability of an event, without replacement. The problem states, "If there are 12 green and 6 red apples, find the probability of randomly choosing three apples of the same color in a row, without replacement. Show your work." Teacher guidance states, "You may want to guide students on applying the various heuristics using the problem-solving heuristics poster. Refer students to the corresponding teacher resources for prompts and worked out solutions." Teacher guidance gives a generic reference to have students use the heuristics poster which is repeated throughout the materials.
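For reference, a worked solution (ours, not from the materials): $$P = \frac{12}{18} ⋅ \frac{11}{17} ⋅ \frac{10}{16} + \frac{6}{18} ⋅ \frac{5}{17} ⋅ \frac{4}{16} = \frac{1440}{4896} = \frac{5}{17}$$.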
Section 2.7, Real-World Problems: Algebraic Reasoning, is noted as addressing MP1 on pages 187-196 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to make sense of problems and persevere in solving them are provided.
Section 6.4, Real-World Problems: Percent Increase and Decrease, is noted as addressing MP1 on pages 345-356 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to make sense of problems and persevere in solving them are provided.
In Section 2.2, Subtracting Algebraic Terms, Independent Practice, Problem 21, page 144, students write algebraic expressions for finding the area of rectangles, and then simplify the expression with rational coefficients by subtracting like terms. The problem states, "Luke simplified the algebraic expression $$\frac{3}{2}x - \frac{1}{3}x$$ as shown below: $$\frac{3}{2}x$$ - $$\frac{1}{3}x$$ = $$\frac{18}{12}x$$ - $$\frac{4}{12}x$$ = $$\frac{14}{12}x$$ Is Luke's simplification correct? Why or why not?" Teacher guidance states, "Assesses students' ability to simplify an algebraic expression with unlike fractional coefficients. They are required to recognize that the given solution is correct but is not in simplest form." The materials misidentify MP2, as students do not consider units involved in a problem, attend to the meaning of quantities, nor understand the relationships between problem scenarios and mathematical representations.
In Section 6.1, Constructing Triangles, Independent Practice, Problem 10, page 90, students recognize that many triangles can be created given the sum of measures of three angles. The problem states, "Suppose you are given three angle measures with a sum of 180. Can you construct a triangle given this information? Can you construct other different triangles? Explain." Teacher guidance states, "Assesses students' ability to recognize that many triangles can be created given the sum of measures of three angles. Changing the side lengths of this kind of triangle will make many similar triangles." Teacher guidance gives a generic reference to assess students' abilities which is repeated throughout the materials.
In Section 8.6, Developing Probability Models, Activity, Problem 6, page 283, students compare the theoretical and experimental probability of randomly selecting a number from 0 to 9. The problem states, "Compare each of the experimental probability models you made with the theoretical probability model at the beginning of this activity. What effect does increasing the number of selected digits have on the experimental probabilities? Which experimental probability model resembles the theoretical probability model more closely? Explain." Teacher guidance states, "In 6, students compare the two experiments with the theoretical model and graph. Pose the following question to students and prompt them for their reasoning. Does the second experiment more closely resemble the theoretical probability? Why? You may want to conclude the activity by having students share their responses with the class." The materials misidentify MP2 in this problem, as students do not consider units involved in a problem, attend to the meaning of quantities, nor understand the relationships between problem scenarios and mathematical representations.
In Section 9.1, Compound Events, Independent Practice, Problem 13c, page 326, students recognize outcomes and realize that two simple events can be switched and the number of outcomes still remains the same. The problem states, "For a game, Jesse first rolls a fair four-sided number die labeled 1 to 4. The result recorded is the number facing down. Then, he randomly draws a ball from a box containing two different colored balls. If Jesse first draws a colored ball and then rolls the four-sided number die, will the number of possible outcomes be the same? Explain your reasoning." Teacher guidance states, "Assess students' ability to draw a tree diagram, recognize the outcomes, and realize that the two simple events can be switched and the number of outcomes still remains the same." Teacher guidance gives a generic reference to assess students' abilities which is repeated throughout the materials.
Section 1.8, Operations with Decimals, is noted as addressing MP2 on pages 97-110 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to reason abstractly and quantitatively are provided.
Chapter 6, Geometric Construction, Performance Task, is noted as addressing MP2 on pages 119-120 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to reason abstractly and quantitatively are provided.
In Section 1.1, Representing Rational Numbers on the Number Line, Activity, Problem 3, page 15, students construct viable arguments when locating rational numbers on a number line. "What is another way to locate the rational numbers on the number line? Explain your answer." Teacher guidance states, "Display the task and give students time to work on it in pairs. Invite students to share their completed number line. Prompt them to share explanations and emphasize: how to convert fractions to decimals and vice versa. How to determine additional segments needed to accurately place the rational numbers. The correct placement of positive and negative numbers. Reasonable approximations, such as 3.6 is a little more than 3.5, so 3.6 is a little more than halfway between 3 and 4. Identify positions of opposites, for instance, -3.6 needs to be the same distance from 0 as 3.6 is from 0."
In Section 4.3, Real-World Problems: Direct Proportion, Independent Practice, Problem 20, page 316, students construct viable arguments. The problem states, "Laila wants to buy some blackberries. Three stores sell blackberries at different prices. Which store has the best deal? Explain." Three bowls of blackberries with different prices per pound are shown. Teacher guidance states, "assesses students' ability to compare unit rates to determine the best deal. They may need to be reminded that there are 16 ounces in a pound." While this teacher guidance does not intentionally develop MP3, students do have the opportunity to construct a viable argument.
Students have the opportunity to critique the reasoning of others in connection to grade-level content leading to the intentional development of MP3, identified as mathematical habits in the materials. However, the teacher guidance is often repetitive, not specific, and distracts from the intentional development of MP3. Examples include:
In Section 3.1, Identifying Equivalent Equations, Independent Practice, Problem 12, page 220, students critique the reasoning of others when finding equivalent equations. The problem states, "Chris was asked to write an equation equivalent to $$\frac{2}{3}$$x = 3 - x. He wrote the following: $$\frac{2}{3}$$x = 3 - x, $$\frac{2}{3}$$x ⋅ 3 = 3 ⋅ 3 - x, 2x = 9 - x. Chris concluded that $$\frac{2}{3}$$x = 3 - x and 2x = 9 - x are equivalent equations. Do you agree with his conclusion? Give a reason for your answer." Teacher guidance states, "assesses students' ability to identify a mistake in the expansion on the right side of the equation. This is a good opportunity to point out the importance of using parentheses when distributing." While this teacher guidance does not intentionally develop MP3, students do have the opportunity to critique the reasoning of others.
In Section 4.5, Percent Increase and Decrease, Math Sharing, page 342, students critique the arguments of others when solving direct proportion problems involving percent. The materials state, "Caleb has 40 magnets. Zara has 50 magnets. Zara says that she has 25% more magnets than Caleb, hence, Caleb has 25% fewer magnets than her. Do you agree with Zara? Discuss." Teacher guidance states, "Pose the problem to students. Have students express the number of Zara's magnets as a percent of the number of Caleb's magnets, $$\frac{50}{40}$$ ⋅ 100% = 125%. Lead them to see that Zara has 25% more magnets than Caleb. Prompt them to explain why they have to use 40 as a base when calculating the percent. Then, have students express the number of Caleb's magnets as a percent of the number of Zara's magnets, $$\frac{40}{50}$$ ⋅ 100% = 80%. Lead them to see that Caleb has 20% fewer magnets than Zara. Prompt them to explain why they have to use 50 as a base when calculating the percent in this instance. Reiterate the importance of using the correct base when calculating a percent of one quantity over another."
Math Journal Activities provide opportunities for students to engage in the intentional development of MP3. However, the teacher guidance is often repetitive, not specific, and distracts from the intentional development of MP3. Examples include:
In Chapter 2, Algebraic Expressions, Math Journal, page 197, students construct viable arguments and critique the reasoning of others when simplifying expressions. The materials state, "Brielle expanded and simplified the expression 6(x+3) - 2(x+1) + 5 as follows: 6(x + 3) − 2(x + 1) + 5 = 6x + 3 − 2x + 1 + 5 = 6x − 2x + 3 + 1 + 5 = 4x + 9. Explain to Brielle her mistakes and show the correct solution." Teacher guidance states, "Review with students the various strategies learned in this chapter. Encourage students to work independently. Error analysis is a useful activity because it leads to fewer mistakes in the future." While this teacher guidance does not intentionally develop MP3, students do have the opportunity to construct a viable argument and critique the reasoning of others.
In Chapter 4, Proportion and Percent of Change, Math Journal, page 357, students construct viable arguments and critique the reasoning of others. The materials state, "Ameila has a box of 150 beads that are either red, purple, or yellow. 20% of the beads are red and $$\frac{2}{5}$$ of the remaining beads are yellow. Ameila worked out the number of yellow beads as follows: 100% - 20% = 80%, $$\frac{2}{5}$$ ⋅ 150 = 60, There were 60 yellow beads. Explain to Ameila her mistake and show her the correct solution." Teacher guidance states, "Review with students the various strategies learned in this chapter. Encourage students to write and explain their steps clearly to avoid such mistakes." While this teacher guidance does not intentionally develop MP3, students do have the opportunity to construct a viable argument and critique the reasoning of others.
Section 5.4, Interior and Exterior Angles, is noted as addressing MP3 on pages 47-58 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no opportunities for students to construct arguments or critique the reasoning of others are provided in the lesson.
Section 7.1, Radius, Diameter, and Circumference of a Circle, is noted as addressing MP3 on pages 127-142 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no opportunities for students to construct arguments or critique the reasoning of others are provided in the lesson.
In Chapter 2, Algebraic Expressions, Put On Your Thinking Cap, Problem 2, page 199, students convert degrees Fahrenheit to degrees Celsius. The problem states, "Steven and his father are from Singapore, where the temperature is measured in degrees Celsius. While visiting downtown Los Angeles, Steven saw a temperature sign that read 72°F. He asked his father what the equivalent temperature was in °C. His father could not recall the Fahrenheit-to-Celsius conversion formula. C = $$\frac{5}{9}$$(F - 32) However, he remembered that water freezes at 0$$\degree$$C or 32$$\degree$$F and boils at 100$$\degree$$C or 212$$\degree$$F. Using these two pieces of information, would you be able to help Steven figure out the above conversion formula? Explain." Teacher guidance states, "Requires students to solve a problem using algebraic reasoning. Go through the problem using the four-step problem-solving model. Students may need some help getting started after they have understood the problem. If students seem stuck, tell them that a diagram with the given information might help. Encourage them to think of the two scales as a double number line, in which 0$$\degree$$C equals 32$$\degree$$F and 100$$\degree$$C equals 212$$\degree$$F. So a change of 180 degrees in Fahrenheit corresponds to 100 degrees in Celsius. 1 unit in Celsius = $$\frac{180}{100}$$ or $$\frac{9}{5}$$ units in Fahrenheit, and 1 unit in Fahrenheit equals $$\frac{100}{180}$$ or $$\frac{5}{9}$$ units in Celsius. Remind students that scales start at different places. 0$$\degree$$C equals 32$$\degree$$F. If they continue to have trouble, help them think through the expression F = $$\frac{9}{5}$$C + 32 or C = $$\frac{5}{9}$$(F - 32)." Students are told what type of model to use (double number line), and teacher guidance gives a generic reference to have students use the four-step problem-solving method which is repeated throughout the materials.
In Chapter 3, Algebraic Equations and Inequalities, Put On Your Thinking Cap!, Problem 2, page 269, states, "Sara can buy 40 pens with a sum of money. She can buy 5 more pens if each pen costs $0.05 less. a. How much does each pen cost? b. If Sara wants to buy at least 10 more pens with the same amount of money, how much can each pen cost at most?" Teacher guidance states, "requires students to solve a real-world problem using algebraic reasoning. The challenge is in writing an equation and inequality to represent the situation. Go through the problem using the four-step problem-solving model. Have students work in pairs or small groups. Encourage them to discuss and share strategies." Teacher guidance gives a generic reference to have students use the four-step problem-solving method which is repeated throughout the materials.
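A worked solution (ours, not from the materials): letting $$p$$ be the original price per pen, $$40p = 45(p - 0.05)$$ gives $$5p = 2.25$$ and $$p = \$0.45$$; the sum of money is $$40 ⋅ 0.45 = \$18$$, and a price $$c$$ allowing at least 50 pens satisfies $$50c \leq 18$$, so each pen can cost at most $$\$0.36$$.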
In Chapter 9, Probability of Compound Events, Put On Your Thinking Cap!, Problem 3, page 365, students find the probability of dependent events. The problem states, "Diego plans to visit Australia for a vacation, either alone or with a friend. Whether he goes alone or with a friend is equally likely. If he travels with a friend, there is a 40% chance of him joining a guided tour. If he travels alone, there is an 80% chance of him joining a guided tour. a) What is the probability of Diego traveling with a companion and not joining a guided tour? b) What is the probability of Diego joining a guided tour?" Teacher guidance states, "requires students to recognize that the first event is whether Diego goes alone or with someone, and the second event is dependent on the occurrence of the first event. Students need to use the concept of complementary events to find the probability. Go through the problem using the four-step problem-solving model. Students may need some help getting started after they have understood the problem. Suggest to students that they start by drawing a tree diagram." Teacher guidance gives a generic reference to have students use the four-step problem-solving method which is repeated throughout the materials. Additionally, students are encouraged to use a tree diagram.
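For reference (our computation, not from the materials): $$P(\text{friend and no tour}) = \frac{1}{2} ⋅ 0.6 = 0.3$$ and $$P(\text{tour}) = \frac{1}{2} ⋅ 0.4 + \frac{1}{2} ⋅ 0.8 = 0.6$$.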
Section 1.3, Adding Integers, is noted as addressing MP4 on pages 35-40 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to model mathematics are provided.
Section 7.3, Real-World Problems: Circles, is noted as addressing MP4 on pages 151-158 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to model mathematics are provided.
In Chapter 6, Geometric Construction, Put On Your Thinking Cap!, Problem 1, page 112, students find the area of an enlarged triangle. The problem states, "Construct triangle ABC, where AB = AC, BC = 8cm and m∠ABC= 37$$\degree$$. Triangle ABC is enlarged to produce triangle DEF by a scale factor of 2.5. Find the area of triangle DEF." Teacher guidance states, "You may want to guide students on applying the various heuristics using the problem-solving heuristics poster. Refer students to the corresponding teacher resources for prompts and worked out solutions. Requires students to find the area of an enlarged triangle. What are we required to do? What strategies can we use to find the area of triangle DEF? Alert students that the height is measured 4 centimeters. Encourage students to work with a partner to determine the area of the constructed triangle and then multiply it by the scale factor squared." Teacher guidance gives a generic reference to have students use the heuristics poster which is repeated throughout the materials.
In Section 9.3, Probability of Compound Events, Independent Practice Problem 11, page 351, students solve probability problems. The problem states, "Hunter tosses a fair six-sided die twice. What is the probability of tossing an even number on the first toss and a prime number on the second toss?" Teacher guidance states, "Assesses students' ability to find the probability of a compound event that consists of tossing an even number followed by a prime number when a six-sided number die is tossed twice." Teacher guidance gives a generic reference to assess students' abilities which is repeated throughout the materials.
Chapter 6, Recall Prior Knowledge, is noted as addressing MP5 on pages 76-77 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to use tools strategically are provided.
Section 7.1, Radius, Diameter, and Circumference of a Circle, is noted as addressing MP5 on pages 131-133 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to use tools strategically are provided.
In Section 6.2, Scale Drawings and Lengths, For Language Development, TE page 92, teacher guidance includes, "Make sure that students understand the meaning of 'scale'. You may want to share some examples of scale drawings. Show maps and architectural drawings, pointing out the scales, and explaining that the region or building is 'scaled down' in the scale drawing."
In Section 9.2, Probability of Compound Events, For Language Development, TE page 328, teacher guidance includes, "Be sure students understand the words and the concept of 'favorable outcomes'. Favorable does not mean 'good' or 'bad'. It refers to the elements of which we are trying to find the probability."
In Chapter 9, Probability of Compound Events, Math Journal, Problem 1, page 363, students have the opportunity to use the specialized language of mathematics to explain outcomes. The problem states, "Use an example to explain the difference between possible outcomes and different outcomes." Teacher guidance states, "Requires students to explain the difference between possible outcomes and different outcomes. Encourage them to give examples such as tossing a coin twice, which has four outcomes. Are all the outcomes different? If order does not matter, then the outcome (H, T) is identical to the outcome (T, H). If we draw one piece of fruit from a bag of apples and oranges, we have two mutually exclusive outcomes (apple and orange) and the two outcomes are different. Review with students the various strategies to explain the difference between the probability terms. Encourage students to work independently. You may want to pose the following question to students who are struggling with using precise mathematical language. What strategy would you use to explain the difference between probability terms?"
In Section 1.4, Subtracting Integers, Independent Practice, Problem 21, page 60, students have the opportunity to attend to the specialized language of mathematics as they subtract integers. The problem states, "Ms. Davis has only $420 in her bank account. Describe how to find the amount in her account after she writes a check for $590." Teacher guidance states, "Assess students' ability to explain the steps involved in subtracting a negative integer from a positive one." Neither teacher guidance nor student directions prompt students to use specific mathematical terms to explain their thinking or communicate their ideas.
In Section 3.2, Solving Algebraic Equations, Independent Practice, Problem 13, page 229, students have the opportunity to attend to precision by looking for an error another student made. The problem states, "Tara was asked to solve the equation -4p + 5 = 7. Her solution is shown. (An index card is shown with Tara's steps.) Tara concluded that p = $$\frac{1}{2}$$ is the solution of the equation -4p + 5 = 7. Describe and correct the error that Tara made." Teacher guidance states, "assesses students' ability to identify a mistake that involves dividing by a negative coefficient." Neither teacher guidance nor student directions prompt students to attend to precision. Furthermore, this problem lends itself to MP3.
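For reference, the correct solution (ours, not from the materials): $$-4p + 5 = 7$$ gives $$-4p = 2$$, so $$p = -\frac{1}{2}$$; Tara's sign error comes from dividing by the negative coefficient.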
In Section 4.2, Representing Direct Proportion Graphically, Independent Practice, Problem 6, students have the opportunity to use the specialized language of mathematics to explain a direct proportion. The problem states, "Explain how you can determine whether a line represents a direct proportion." Teacher guidance states, "Assesses students' ability to explain what determines a direct proportion graph." Neither teacher guidance nor student directions prompt students to use specific mathematical terms to explain their thinking or communicate their ideas.
There are some instances when the materials attend to the specialized language of mathematics; however, these lessons were not identified as aligned to MP6. For example:
In Section 2.1, Adding Algebraic Terms, For Language Development, TE page 130, states, "Be sure students understand the meaning of like terms. Explain that like terms are terms that have the same variable part. Constant terms are also like terms. Give examples of like terms such as 2, $$\frac{1}{4}$$, 0.3, and -5; a, 3a, $$\frac{1}{2}$$ a, 2.4a, and -7a. List a variety of ten terms (both like terms and unlike terms) on the board and invite volunteers to identify like terms."
Section 2.6, Writing Algebraic Expressions, is noted as addressing MP6 on pages 171-186 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no opportunities for students to attend to precision and attend to the specialized language of mathematics are identified in the lesson.
Section 5.1, Complementary, Supplementary, and Adjacent Angles, is noted as addressing MP6 on pages 5-18 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no opportunities for students to attend to precision and attend to the specialized language of mathematics are identified in the lesson.
In Section 1.4, Subtracting Integers, Learn, Problem 5, page 49, students use counters to subtract integers. The problem states, "Based on your results in 1 to 4, explain how you can subtract integers." In Question 1, students used counters to evaluate 5 - (+2) compared with 5 + (-2). In Question 4, students used counters to evaluate 7 - (-3) and 7 + 3. Teacher guidance states, "Conclude the activity by asking students to generalize what they have observed about subtraction. You may want to consider each case: a positive minus a positive, a negative minus a positive, a positive minus a negative, a negative minus a negative. Guide students to conclude the task in ENGAGE." Materials scaffold MP7 in this problem, which prevents the opportunity to identify structure. Students are provided the model of using counters for each problem and guided through its development. Therefore, students do not independently look for and make use of structure to generalize their understanding of subtraction.
In Chapter 1, Rational Numbers, Put On Your Thinking Cap!, Problem 1, page 112, students write an equivalent expression using the distributive property. The problem states, "The 4 key on your calculator is not working. Show how you can use the calculator to find 321 × 64." Teacher guidance states, "Requires students to solve a problem that involves writing an equivalent expression, and evaluating it using the distributive property. Step 1. Understand the problem: What information can we gather from the problem? What are we asked to find? Step 2: Think of a plan: What can we do to help us solve the problem? What number do we have to change in the expression, and how? What is equivalent to 64? Step 3. Carry out the plan: So, what is an equivalent expression of 321 ⋅ 64? Are all these expressions equivalent? Why? Which expression will be easier to evaluate? Why? Invite volunteers to share their equivalent expressions, and ask students to evaluate each of them. Ensure that students are able to apply the distributive property correctly. For example, 321 ⋅ (65 − 1) = 321 ⋅ 65 − 321 ⋅ 1. If necessary, suggest that students write the subtraction within the parentheses as adding the opposite to help them keep track of the signs. Prompt students to see that the equivalent expressions all have the same result of 20,544. Step 4. Check the answer." Teacher guidance gives a generic reference to have students use the four-step problem-solving method which is repeated throughout the materials and does not require students to look for or use structure in solving.
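The arithmetic the guidance walks through (our computation, not from the materials): $$321 × 64 = 321(65 - 1) = 20865 - 321 = 20544$$.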
Section 8.5, Approximating Probability and Relative Frequency, is noted as addressing MP7 on pages 261-276 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to make use of structure are provided in the lesson.
Students have minimal opportunities to look for and express regularity in repeated reasoning in connection to grade-level content, identified as mathematical habits in the materials. There is no student guidance; teacher guidance is often repetitive and not specific; and activities are scaffolded in ways that prevent intentional development of the full intent of MP8. Examples include:
In Section 3.4, Solving Algebraic Inequalities, Learn, Problem 1, page 248, students analyze a table of values involving dividing inequalities with positive and negative integers. The problem states, "Fill in the table. Use the symbols > or <. a) What happens to the direction of the inequality symbol when you divide by a positive number? Based on your observation, write a rule for dividing both sides of an inequality by a positive number. b) What happens to the direction of the inequality symbol when you divide by a negative number? Based on your observation, write a rule for dividing both sides of an inequality by a negative number." Teacher guidance states, "Summarize that the inequality symbol remains the same when we multiply or divide by a positive number, and the direction of the symbol is reversed when we multiply or divide by a negative number. Help students to visualize why this occurs. Ask them to consider x > y. In other words, when we multiply by a negative number, we flip the number line, moving to the left." Students are provided problems with answers (in the table) and are only responsible for comparing the answers. They do not independently look for and express regularity in repeated reasoning to generalize an understanding of inequalities.
In Section 6.3, Scale Drawings and Areas, Learn, Problem 5, page 104, students explore the relationship between scale factor and corresponding area. The problem states, "Compare the side lengths and the areas for the various scale factors. What pattern do you observe? What relationship between scale factor and area can you deduce?" Teacher guidance states, "In 5, encourage students to look for patterns, in particular the relationship between the scale factor and the area. What is the relationship? Emphasize that this property applies to the areas of other two dimensional figures as well. You may want to conclude the activity by discussing the activity in terms of inductive reasoning, as described in the Best Practice below." Materials scaffold MP8 in this problem, which prevents the opportunity to look for and express regularity in repeated reasoning. Students are provided models to find areas of scaled images and a partially completed table designed to organize results. Students look for and express regularity in repeated reasoning to understand the relationship between scale factors and corresponding areas only with teacher assistance and a heavily scaffolded problem.
Section 7.4, Area of Composite Figures, is noted as addressing MP8 on page 164 in the Standards for Mathematical Practice Chart, Chapter Planning Guide, and Section Objectives. However, no identified opportunities for students to look for and express regularity in repeated reasoning are provided in the lesson.
Kinetic energy is the energy of motion
Kinetic energy is the easiest form of energy to think about. If something is moving, it has kinetic energy. If it's moving faster, it has more kinetic energy. And if two things are moving at the same velocity but one has more mass, it also has more kinetic energy.
Kinetic energy (E or Ek or KE) is proportional to mass (m) and proportional to the square of the velocity (v) of an object. The constant of proportionality turns out to be ½ in this case:
$$KE = \frac{1}{2} mv^2$$
The units of KE are Joules (J), 1 J = 1 Kg·m²·s⁻². Kinetic energy is not a vector. In squaring the velocity vector, v, we lose its directional part, so KE is a scalar quantity – no direction.
In this section, we'll discuss the kinetic energy of linear motion, or motion in a plane, which can be curved. We'll save the kinetic energy of rotation or circular motion for another section.
Examples of kinetic energy
Curved or 2-D motion
Rotation of a body about an axis
KE of electrons and atoms – quantum-mechanical KE
We'll develop the relationship between KE and momentum (p), and we'll look at the conservation of kinetic energy in mechanical processes ... here we go.
An object has kinetic energy (KE) if it is moving. KE is the energy of motion.
An astute student of mine once asked, "If kinetic energy is $\frac{1}{2} mv^2$, what happened to the other half?"
Derivation of the formula
We arrive at the formula for kinetic energy by considering the amount of work that a moving object can do. Think of something like a hammer driving a nail. The hammer has kinetic energy because it has mass and it is moving. It strikes the nail with some initial velocity, vi, and eventually comes to rest (vf = 0). During the motion, work is done on the nail. That work is:
$$w = F \cdot d$$
Newton's 2nd law tells us that force is mass times acceleration, so we make the substitution F = ma:
$$w = (ma) \cdot d$$
Now let's turn to the definition of acceleration. In this case, the initial velocity is finite, and the final velocity is zero,
$$a = \frac{v_f - v_i}{t}$$
So we can reduce that definition to one involving only the initial velocity of the hammer. Here I've just made the acceleration positive to make things simpler:
$$a = \frac{v}{t}$$
Now the average velocity is distance divided by time:
$$\bar{v} = \frac{d}{t}$$
The numerical average of two velocities (one of which is zero) gives us
$$\frac{v}{2} = \frac{d}{t}$$
Solving for time gives us this expression
$$t = \frac{2d}{v}$$
which we can plug into the acceleration formula like this
$$a = \frac{v}{\frac{2d}{v}}$$
Division is multiplication by the reciprocal of the denominator, so that gives us a new acceleration formula:
$$a = \frac{v^2}{2d}$$
which we can plug into our original work expression, $w = (ma) \cdot d$:
$$w = m \left( \frac{v^2}{2d} \right) \cdot d$$
Cancelling the distances (d/d = 1) gives us the total work done:
$$w = \frac{1}{2} mv^2$$
Now that work must equal the amount of kinetic energy (KE) the hammer loses, so that gives us the familiar formula for KE:
$$\bf KE = \frac{1}{2} mv^2$$
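Since momentum is p = mv, a quick substitution (standard algebra, written out here because the KE–momentum relationship was promised above) gives an equivalent form:

$$KE = \frac{1}{2} mv^2 = \frac{(mv)^2}{2m} = \frac{p^2}{2m}$$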
Units of kinetic energy
The units of kinetic energy (all kinds of energy, actually) can be determined by looking at the formula:
$$KE = \frac{1}{2} mv^2 \; \longleftarrow \frac{Kg\cdot m^2}{s^2}$$
This collection of units is given the name Joules (symbol J), after James Prescott Joule (1818-1889), an English mathematician & physicist who was an important figure in early thermodynamics.
$$1 \, \frac{Kg\cdot m^2}{s^2} = 1 \; Joule \;(J)$$
The Joule is related to the Newton (N), the unit of force. Recall that a Newton is
$$1 \; N = 1 \; \frac{Kg\cdot m}{s^2}$$
So a Joule is a Newton-meter (Newtons of force multiplied by meters of distance over which that force is exerted):
1 Joule = 1 Newton-meter
$$1 \; J = 1 \; N\cdot m$$
The Joule and the Newton, along with their sub-units (the basic units inside them), are two important units that you should commit to memory.
KE is quadratic in velocity and linear in mass
Kinetic energy depends quadratically (to the second power) on velocity, and linearly on the mass.
You can think of it this way: Say an object like a car is moving toward you. What factors will increase the likelihood of injury, and by how much?
If we double the mass, the KE will double. That's shown in the graph below. The black curve is KE vs. velocity for a 1 Kg object. If we double the mass to 2 Kg (magenta curve), the energy is doubled. If we further double the mass to 4 Kg, the green curve is obtained. Values of KE for a velocity of 6 m/s are shown. You can see that doubling from 1 Kg to 2 Kg doubles the KE from 18 J to 36 J. Further doubling of the mass from 2 Kg to 4 Kg again doubles the KE from 36 J to 72 J. This is a linear relationship between KE and mass.
Now look at the graph in a different way. Consider only the 2 Kg curve. Notice that it's not linear; as we increase the velocity, the KE increases more rapidly. In fact, a doubling of velocity leads to a four-fold increase in KE, because the velocity is squared in the equation.
Consider these cases, in which v is some number (which we just call v), v is twice that value, and v is three times that value.
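Writing the three cases out explicitly (a restatement of the calculation, not a new result):

$$\frac{1}{2} m(2v)^2 = 4 \cdot \frac{1}{2} mv^2 \qquad \frac{1}{2} m(3v)^2 = 9 \cdot \frac{1}{2} mv^2$$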
The resulting kinetic energies rise by factors of 4 and 9, respectively: the squares of the factors by which the velocity increased.
This matters in practical situations like being hit by a car: if velocity is doubled, the kinetic energy rises by a factor of four. It also translates to braking distance: the braking distance of a car at 60 mi./h is roughly four times its braking distance at 30 mi./h.
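To see why (a one-line argument implied, but not written out, above): the brakes must do work $W = Fd$ to remove all of the car's kinetic energy, so with a roughly constant braking force F,

$$d = \frac{mv^2}{2F}$$

and doubling v quadruples d.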
Algebraic rearrangements of the KE formula
You should practice all of the algebraic rearrangements of the kinetic energy formula so that you can use it to solve for KE, mass or velocity if you have the other two bits of information. Here are the results:
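Solving $KE = \frac{1}{2} mv^2$ for each variable gives:

$$m = \frac{2 \, KE}{v^2} \qquad v = \sqrt{\frac{2 \, KE}{m}}$$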
Energy is conserved
Energy can be converted from one form to another
Energy exists in many forms in nature. We measure energy by the ability to do work – to exert a force over a distance.
Kinetic energy can be transformed into potential energy – the energy of position. For example, when a roller coaster rolls down a hill, its kinetic energy increases due to the force of gravity working on it. As the car travels back up the next hill, it slows down and therefore loses KE. That energy is not lost, though.
In climbing the hill the KE of the car is used to do work against the gravitational force. It is converted into gravitational potential energy, the energy of position. On the downhill trip, that PE is converted back into KE.
Other forms of energy are also similarly conserved. For example, heat flows from hotter objects to cooler ones, but the total amount of heat (which is actually the kinetic energy of the small motions of atoms and molecules) remains the same.
Chemical energy is the energy "stored" in chemical bonds. When, in a reaction, the difference between the bond energies of the reactants and products (products minus reactants) is negative, we say that the reaction is exothermic, and it gives off heat. Sometimes reactions require heat energy from the surroundings in order to proceed; these are endothermic reactions.
In the process depicted below, the chemical energy stored in a liquid mixture is converted to kinetic energy as the liquid converts to a gas and pushes outward, moving a piston in the container upward.
To hit the ball farther, a baseball player may either increase the bat weight or increase the bat speed (or both, of course, but there's a trade-off). Which produces a bigger increase in the kinetic energy at the end of a swinging bat:
Increasing the bat weight from 32 oz. to 34 oz. (1 oz = 28.3495 g)
Increasing the bat head speed from 95 mi/h to 100 mi/h (1 mi/h = 0.44704 m/s)
First some unit conversions:
$$ \begin{align} 95 \frac{mi}{h} \left( \frac{0.44704 \, m/s}{1 \, mi/h} \right) &= 42.47 \frac{m}{s} \; \; and \\ 100 \frac{mi}{h} &= 44.70 \frac{m}{s} \\ \\ 32 \, oz \left( \frac{28.3495 \, g}{1 \, oz} \right) &= 0.907 \, Kg \; \; and \\ 34 \, oz &= 0.964 \, Kg \end{align}$$
Light bat, low speed:
$$KE = \frac{1}{2} 0.907 \,Kg \cdot \left( 42.47 \frac{m}{s} \right)^2 = 818 \, J$$
Heavy bat, low speed:
$$KE = \frac{1}{2} 0.964 \,Kg \cdot \left( 42.47 \frac{m}{s} \right)^2 = 869 \, J$$
Light bat, high speed:
$$KE = \frac{1}{2} 0.907 \,Kg \cdot \left( 44.70 \frac{m}{s} \right)^2 = 906 \, J$$
A 6.25% increase in bat weight (818 J to 869 J) doesn't make as much difference as a roughly 5% increase in speed (818 J to 906 J).
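Here is a short script that reproduces these numbers (an illustrative sketch, not part of the original problem; the constant and function names are ours, and the conversion factors are the ones quoted above):

```python
# Compare the KE gain from a heavier bat vs. a faster swing.
MPH_TO_MS = 0.44704    # m/s per mi/h
OZ_TO_KG = 0.0283495   # kg per oz

def kinetic_energy(mass_kg, speed_ms):
    """KE = 1/2 m v^2, in joules."""
    return 0.5 * mass_kg * speed_ms ** 2

light, heavy = 32 * OZ_TO_KG, 34 * OZ_TO_KG    # 0.907 kg, 0.964 kg
slow, fast = 95 * MPH_TO_MS, 100 * MPH_TO_MS   # 42.47 m/s, 44.70 m/s

print(f"light bat, low speed:  {kinetic_energy(light, slow):.0f} J")   # ~818 J
print(f"heavy bat, low speed:  {kinetic_energy(heavy, slow):.0f} J")   # ~869 J
print(f"light bat, high speed: {kinetic_energy(light, fast):.0f} J")   # ~906 J
```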
Compare the kinetic energy of a car weighing 3500 lbs., the weight of an average passenger car, traveling at 20 mi/h (8.94 m/s), with that of a 150 lb. cyclist riding a 20 lb. bike at the same speed. (1 Kg weighs 2.2046 lbs. on Earth)
$$3500 \, lbs \left( \frac{1 \, Kg}{2.2046 \, lbs} \right) = 1588 \, Kg$$
$$170 \, lbs \left( \frac{1 \, Kg}{2.2046 \, lbs} \right) = 77.1 \, Kg$$
By the way, why do I say "on Earth" when giving the unit conversion?
The kinetic energies:
$$ \begin{align} KE_{car} &= \frac{1}{2} mv^2 = \frac{1}{2}(1588 \, Kg)\left(8.94 \frac{m}{s}\right)^2 \\ &= 63.5 \, KJ \\ \\ KE_{bike} &= \frac{1}{2} mv^2 = \frac{1}{2}(77.1 \, Kg)\left(8.94 \frac{m}{s}\right)^2 \\ &= 3.1 \, KJ \end{align}$$
The bike has less than 5% of the KE of the car, not to mention that the cyclist isn't surrounded by a cage of metal, plastic and airbags. Be careful out there!
A photon of light has no mass (m = 0). Calculate the momentum of a photon moving at the speed of light, $2.99792458 \times 10^8 \; m \cdot s^{-1}$.
Because a photon has no mass and $KE = \frac{1}{2} mv^2$, the classical formula gives a photon zero KE. However, a photon actually does have momentum (and energy), but that's a surprising result from quantum mechanics, a later subject. Nothing to worry about right now, but interesting, right?
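For the curious, the standard quantum-mechanical result (added here for completeness; it is not derived on this page) is

$$p = \frac{E}{c} = \frac{h}{\lambda}$$

where E is the photon's energy, h is Planck's constant, and λ is the photon's wavelength.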
Interpolation with the roots of orthogonal polynomials & Spectral expansion
I'm a bit confused about the relationships between these two approximation methods mentioned in the title.
Does this kind of interpolation also belong to the field of spectral methods?
Are the Lagrange interpolants we get from using the roots of orthogonal polynomials also orthogonal?
It's easy to mix these two methods up; could someone please clarify their differences?
Let me use the Chebyshev polynomials as an example:
(1) Using the Chebyshev polynomials as basis functions, then $f(x)$ is approximated as \begin{equation} f(x)\simeq\sum\limits_{n=0}^Na_nT_n (x) \end{equation}
(2) Interpolating at the $(N+1)$ roots, $x_0, x_1, ..., x_k,...,x_N$, of the Chebyshev polynomial $T_{N+1}(x)$, then the interpolation of $f(x)$ is
\begin{equation} f(x)\simeq P_N(x)=\sum\limits_{k=0}^N f(x_k)L_k(x) \end{equation} where $L_k(x)$ is the interpolant function at $x=x_k$.
interpolation approximation spectral-method special-functions
I hope I understood the question correctly. They try to compute exactly the same thing, so they really are equivalent. I'll use Chebyshev polynomials because they are easy to analyze.
Given a function $f(x)$ on $[-1,1]$, the spectral interpolant is the truncation of $$ \begin{aligned} f(x) &= \sum_{n\geq0} \bar a_n T_n(x), \\ \bar a_n &= \frac{1+[n>0]}{\pi}\int_{-1}^1 f(x)T_n(x)\frac{dx}{\sqrt{1-x^2}} \\&= \frac{1+[n>0]}{\pi} \int_{0}^\pi f(\cos\theta)\cos(n\theta)\,\mathrm{d}\theta. \end{aligned} $$ The $N$-th degree Lagrange interpolant, using the roots of $T_{N+1}(x)$ is given by $$ a_n = \frac{1+[n>0]}{N+1}\sum_{k=0}^N f(x_k) T_n(x_k), \qquad x_k = \cos\frac{\pi (k+\frac12)}{N+1}. $$ This uses the fact that $\sum_{k=0}^N T_m(x_k)T_n(x_k)$ is zero when $m\neq n$, $m,n\leq N$.
The formula for $a_n$ is nothing but the discrete cosine transform (type-II) applied to the function values at $x_k$, due to $T_n(x_k) = \cos \pi n(k+\frac12)/(N+1)$.
These are not, strictly speaking, the same, though.
The formula for $a_n$ is a trapezoidal rule approximation to the Fourier cosine integral in the formula for the exact coefficients $\bar a_n$. The trapezoidal rule is known to be exponentially accurate for smooth periodic functions (Trefethen-Weideman 2014), which $f(\cos\theta)$ is. Since for a spectral interpolant you would still have to evaluate the integral somehow, the Lagrange interpolant with the roots as nodes is just a way of evaluating that integral.
I tried to compute the exact difference between $a_n$ and $\bar a_n$, by expanding the sum for $a_n$ using the full series for $f$, and using the identity $$ \sum_{k=0}^{N} \cos(j\theta_k)\cos(n\theta_k) = \frac{N+1}{2}\big( [N+1\setminus j-n](-1)^{(j-n)N/(N+1)} + [N+1\setminus j+n](-1)^{(j+n)N/(N+1)}\big), $$ and for a complex-differentiable function $g(\theta)=f(\cos\theta)$ that is holomorphic in the region of the complex plane $|\Im \theta|<\alpha$, heuristically the error appears to be something on the order $$ a_n - \bar a_n \sim |\bar a_{N+1-n}| \lesssim e^{-\alpha(N+1-n)}. $$ So for smooth functions this error decays very quickly and becomes negligible, so the Lagrange and the spectral interpolants can be considered identical.
Edit. What is the relationship between $\sum a_nT_n(x)$ and $\sum f(x_k) \ell_k(x)$?
Let $a_n$ be defined as above, let $f_1(x) = \sum_{n=0}^{N} a_n T_n(x)$, and let $f_2(x) = \sum_{k=0}^N f(x_k) \ell_k(x)$, where $\ell_k(x) = \prod_{j\neq k} (x-x_j)/(x_k-x_j)$.
Both $f_1(x)$ and $f_2(x)$ are polynomials in $x$ of degree $N$, by construction.
Using the above definition of $a_n$, together with $$T_n(x_k) = \cos(n\theta_k), \qquad \theta_k = \pi(k+\tfrac12)/(N+1) $$ we can check that $f_1(x_k) = f(x_k)$, using the identity $$ \sum_{n=0}^{N} \frac{1+[n>0]}{N+1} \cos(n\theta_j)\cos(n\theta_k) = [j=k]. $$
Therefore $f_1(x)$ and $f_2(x)$ are (non-identically-zero) polynomials of degree $N$ that pass through the same $N+1$ points, and therefore are the same polynomial.
Using the form of the interpolating polynomial in terms of $a_n$ makes the relationship with the Chebyshev series of the function $f(x)$ clearer than the Lagrange interpolation form.
Kirill
$\begingroup$ Hi, @Kirill, Thank you so much for this detailed answer, but I am not quite clear about the second equation of $a_n$ you wrote and I can't see its connection with the Lagrange interpolating function. Also I have added some edits to my question, could you please take a look at it and explain a bit? $\endgroup$ – user123 Mar 19 '16 at 14:57
$\begingroup$ @David The Lagrange interpolant is a degree-$N$ polynomial, and so is $\sum_{n=0}^{N} a_n T_n(x)$ using the formula I wrote down in terms of $\sum_k f(x_k) T_n(x_k)$. They both perfectly match the function at $N+1$ points $x_k$, so (as degree-$N$ polynomials) they must be identical. See also Berrut-Trefethen (people.maths.ox.ac.uk/trefethen/barycentric.pdf) Again, I think what you're asking about is two ways of computing the same thing. $\endgroup$ – Kirill Mar 19 '16 at 15:26
$\begingroup$ @David Also, I think you skipped a step: the "true" Chebyshev series would be computed through $\int_{-1}^1 f(x)T_n(x)(1-x^2)^{-1/2}\,\mathrm{d}x$, which is a full integral that needs to be evaluated somehow, which is why I thought your question was about the difference between $a_n$ and $\bar a_n$ (you don't make this distinction in your new edits, both using $a_n$). As I see it, a spectral method would be formulated in terms of the integrals, and then later approximated. I may have misunderstood you. $\endgroup$ – Kirill Mar 19 '16 at 15:28
$\begingroup$ I'm sorry, but I still couldn't figure out where your $a_n$ comes from and what its relationship is with $L_n(x)$, not $T_n(x)$. I hope you still have patience on my dullness. $\endgroup$ – user123 Mar 19 '16 at 16:48
$\begingroup$ @David It is a standard formula for computing Chebyshev series coefficients (e.g., equation 3.55 in siam.org/books/ot99/OT99SampleChapter.pdf and the discussion around it; also people.maths.ox.ac.uk/trefethen/ATAP/ATAPfirst6chapters.pdf). The polynomial it computes interpolates $f$ at the $N+1$ roots of $T_{N+1}$, so it must match the Lagrange interpolant. The relationship with $L_n$ is that it is the same polynomial in $x$, but written in two different ways. I used it because it's much easier to analyze the formula in that form than in the Lagrange form. $\endgroup$ – Kirill Mar 19 '16 at 17:23
Thanks for Kirill's detailed answer, which clarifies all the confusion in my head. According to Kirill's answer and the materials he provided, now I want to generalize it a bit to common cases.
Let us suppose $\{F_n(x)\}$ is a set of orthogonal polynomials on $[-1,1]$, i.e.,
\begin{equation} \int_{-1}^1 F_m(x)F_n(x)w(x)dx=g_m\delta_{mn}, \end{equation} and $f(x)$ is a continuous function we want to approximate in [-1,1].
(1) Using $\{F_n(x)\}$ as basis functions, we get \begin{equation} f(x)=\sum_{n=0}^\infty a_nF_n(x), \end{equation} which we approximate with a truncated version, \begin{equation} f(x)\approx\sum_{n=0}^N a_nF_n(x), \end{equation} where $a_n$ can be computed from \begin{equation} a_n=\frac{1}{g_n}\int_{-1}^1 f(x)F_n(x)w(x)dx. \end{equation}
(2) We use the $(N+1)$ roots of $F_{N+1}(x)$ to interpolate it: \begin{equation} f(x)\simeq P_N(x)=\sum_{n=0}^N f(x_n)L_n(x). \end{equation} Here, since $P_N(x)$ is an $N$-th degree polynomial, it can also be expressed using $F_n(x), n=0,1,...,N$, because $\{F_n(x)\}_{n=0}^N$ is a basis for the polynomial space $P_l, l\leq N$, so we get: \begin{equation} f(x)\simeq P_N(x)=\sum_{n=0}^N c_n F_n(x). \end{equation} Also, $P_N(x_k)=f(x_k)$ at the $(N+1)$ interpolation points, which means \begin{equation} f(x_k)=\sum_{n=0}^N c_n F_n(x_k). \end{equation} Multiply this equation by $w_k F_m(x_k)$ on both sides and sum over the $x_k$: \begin{equation} \sum \limits_{k=0}^N f(x_k)F_m(x_k) w_k=\sum \limits_{k=0}^N w_k F_m(x_k) \sum_{n=0}^N c_n F_n(x_k)=\sum_{n=0}^N c_n \sum \limits_{k=0}^N F_m(x_k) F_n(x_k) w_k. \end{equation} Since $(m+n)\leq 2N$, $F_m(x)F_n(x)$ is a polynomial of degree less than or equal to $2N$. By the exactness of Gauss quadrature (with $N+1$ nodes it is exact up to degree $2N+1$), the following equation holds exactly: \begin{equation} \int_{-1}^1 F_m(x)F_n(x)w(x)dx=\sum \limits_{k=0}^N F_m(x_k) F_n(x_k) w_k=g_m\delta_{mn}. \end{equation} Therefore, \begin{equation} \sum \limits_{k=0}^N f(x_k)F_m(x_k) w_k=\sum_{n=0}^N c_n g_m\delta_{mn}=c_m g_m, \end{equation} thus, \begin{equation} c_m=\frac{1}{g_m}\sum \limits_{k=0}^N f(x_k)F_m(x_k) w_k, \end{equation} which, as Kirill has stated in his answer, is just a quadrature approximation (here, Gauss quadrature) of the integral for $a_m$ listed above.
In conclusion, the two forms to approximate $f(x)$ are almost the same. Again, thanks to Kirill's answer.
Circles
Areas and sectors of circles
How to find the area of a sector?
Find the areas of the circles with the given information below.
Radius is 6 cm
Diameter is 18 mm
i) Area of the circle
ii) The area of the shaded sector
Linda wants to get pizzas for her house party. She has invited 10 friends over. A 12-inch pizza is enough for 4 people. An 8-inch pizza is $7, while a 12-inch pizza is $12. How many 8-inch pizzas and 12-inch pizzas does she need to order, if:
she wants to spend the least amount of money?
she wants to have 4 pizzas and spend as little as possible?
she wants to eat as much outer crust as possible?
How to find the area of a circle
To begin this lesson on finding the area of a sector, we'll first have to start with the area of a circle. A circle with a radius r has an area A found through:
$$A = \pi r^2$$
Do you know how to then find the circumference? The circumference C can be found with the formula:
$$C = 2 \pi r$$
As you can see, in both cases, for both the area and the circumference, we're dealing with the whole circle. Let's take a look at if we were just dealing with a section of the circle.
We can divide a circle into sectors. You can imagine a sector as a radius that extends from the circle's center, runs along the circumference for a bit, then turns back down another radius to meet at the center again. You'll get a sector of a circle that looks like a slice of pie. It has an angle at the center of the slice (or the tip of the slice, if you prefer) that is called the "central" angle.
An important term we're going to have to learn is arc length. The arc length of a sector is the portion of circumference that the sector takes up from the whole circle. It is the portion of the circumference subtended by the central angle.
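In terms of the central angle θ measured in degrees (writing out the formula this definition implies):

$$arc \; length = \frac{\theta}{360} \times 2 \pi r$$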
Keeping in mind the circle's total area, the circumference, and the arc length, we can now learn how to find the area of a section of a circle.
How to find the area of a sector
Through an example question, we'll demonstrate how the relationship between the area of a section and total area can help you find the section's area. There is an area of a sector formula that can help you out which we'll share below, so don't worry about it for now.
Calculate the area of sectors of circles given arc length and radius
(i) Area of the circle
(ii) The area of the shaded sector
(i) To find the area of a circle, use the formula $A = \pi r^2$
So the area is $\pi (20)^2 = 1256.6 \; m^2$
We simply need to substitute in the radius of 20m to find the answer.
$$\frac{area \; of \; section}{total \; area} = \frac{arc \; length}{circumference}$$
$$\frac{area \; of \; section}{1256.6} = \frac{279.25}{2 \pi (20)}$$
$$area \; of \; section = \frac{279.25 \, (1256.6)}{2 \pi (20)}$$
$$area \; of \; section = 2792 \; m^2$$
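As a quick check (using the identity the proportion reduces to after substituting $A = \pi r^2$ and $C = 2 \pi r$):

$$area \; of \; section = \frac{1}{2} \, r \, (arc \; length) = \frac{1}{2}(20)(279.25) \approx 2792 \; m^2$$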
What exactly are we doing here? In this question, we're finally trying to find the area of a sector. Since the sector is a portion of the total circle, there's a relationship between them. Likewise, there's a relationship between the section of the circumference that's a part of the sector (the arc length), versus the whole circumference. Their ratios are the same! Or rather, their proportions are the same.
That's why we use the proportion formula: $$\frac{area \; of \; section}{total \; area} = \frac{arc \; length}{circumference}$$
This helps us use the proportions to work out the unknown. In this case, the unknown is the area of the sector. We're given the arc length (279.25 m), we've got the circumference from the formula $C = 2 \pi r$, and lastly, we've calculated the total area in part (i) of the question. The rest of the question is just a matter of substituting these numbers into the formula and then isolating the area of the sector (the unknown) on one side to find the answer.
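If you'd like to automate the computation, here's a minimal sketch (the function and variable names are ours, not from the lesson):

```python
import math

def sector_area(radius, arc_length):
    """Area of a circular sector via the proportion method:
    sector / total area = arc length / circumference."""
    total_area = math.pi * radius ** 2
    circumference = 2 * math.pi * radius
    return total_area * (arc_length / circumference)  # simplifies to radius * arc_length / 2

print(sector_area(20, 279.25))  # ~2792.5, matching the worked example above
```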
Try exploring how a sector's area changes as its central angle changes, for example by recomputing the area for several different angles. It's a worthwhile exercise to solidify your understanding of the relationships explored in this lesson on circles.
Types of seaweeds
types of seaweeds pdf Seaweed naturally contains many vitamins, folic acid, MSG, niacin, and many minerals. Consider making a collage for framing, or make gift cards. The protocol yields about 2. Green seaweed have different types of longevity. The potential for using mangroves and/or seaweeds as filters for wastes from intensive shrimp pond farming is also discussed. , 2002). Major polysaccharides found in marine algae include fucoidan and laminarans found in brown algae, carrageenan in present red algae and ulvan in green algae [ 4 ]. We've listed some of the more commonly eaten ones below. But, this is far less than Japan uses for certain seaweeds, particularly those used for industrial processing of specialty seaweed products. Yimin Qin, in Bioactive Seaweeds for Food Applications, 2018. In this activity, students will learn the parts of algae, compare them to land plants, and rotate through exploration stations to investigate seaweed close up with their different Bacterial role in green and red seaweeds development: (a) promoting Ulva zoospores settlement on bacterial EPS; (b) reverting normal morphogenesis in axenic culture of Ulva upon putative morphology-inducing bacterial strains; (c) reverting wild-type cell structure of Ulva in the presence of appropriate bacteria; and (d) regeneration of new buds May 15, 2018 · Good examples of algae include seaweed, giant kelp, and pond scum. Brown and green seaweeds are generally eaten for food, while the browns and reds are used in the production of the hydrocolloids: agar, carrageenan and alginate used as industrial thickeners. 6/5 from 841 votes. Hijiki belongs to the family of seaweed types called sargassum and this was the seaweed used by Chinese physicians to treat goitre as long ago as the first century AD. In its simplest form, it consists of the management of naturally found batches. Maine Seaweed Council ; Rockweed Ecology, Industry, and Management fact sheet (PDF, 4. The red algae are commonly sold in dried sheets and used to roll sushi. Types of carrageenan. Description Black seaweed is an annual—it grows and dies back each year. However, they possess a blade that is leaflike, a stipe that is stemlike, and a holdfast that resembles a root. Sea plants are simple marine plants that grow in the shallow waters at the edge of the world's oceans. Part I presents the geographic distribution of seaweeds and seagrasses around the world, environmental factors, floral history, and relevant paleoceanographic considerations, covered geographically. why not call it sea vegetables? Weed sounds so unappetizing, so unwanted. A study recently published in Advances in Botanical Research by Dr. 31 μM trolox/g and CUPRAC ranged from A total of 2,257 broiler chickens were involved in this study. Its ability to absorb carbon dioxide, 1. 1 day ago · Seaweeds are marine organisms with increased contents of bioactive compounds, which are described as potential anti-HPV and anti-cervical cancer agents. Apr 30, 2018 · DIY seaweed fertilizer teas are made by soaking dried seaweed in a pail or barrel of water with a partially closed lid. While there are many different types of algae found floating in the ocean all around world, the Sargasso Sea is unique in that it harbors species of sargassum that are 'holopelagi' - this means that the algae not only freely floats around the ocean, but it reproduces vegetatively on the high seas. 
Experts believe seaweed can take over plastic soon, as products made from it not only help save the environment, but also cut costs as well. Dietary fiber can constitute up to 75% of the dry components of seaweeds. TABLE 68 US: SEAWEED CULTIVATION MARKET SIZE, BY TYPE, 2018-2025 (USD MILLION) TABLE 69 US: SEAWEED CULTIVATION MARKET SIZE, BY TYPE, 2018-2025 (KT) 12. nitidum, and Por. SHellfiSH and SeaWeed may not be taken from private beaches without the owner's or lessee's permission. What is the stem-like structure called? 5. How to start a seaweed production business. Seaweeds display a variety of different reproductive and life cycles and the description above is only a general example of one type, called alternation of generations. 1. However, you could ingest too much iodine if you eat large amounts over long periods. , 2010) is one good example of this "naturality" trend in the food industries, which may open up into new innovations. Today there are many types of seaweeds consumed in Hawaii. But. There is an abundance of information on the nutrition of seaweeds in books listed in the reference section. Different types of seaweed prefer different conditions. The different types of seaweed available for consumption can be first divided into broad categories which are further available in various subtypes. There are species such as the Codium tomentosum that are perennial, thus they live for many years; and there are also seasonal species, that with the proper conditions of light and nutrients, grow quickly, even forming the famous Green tides. New Sea Education Association findings support ongoing efforts to better understand the ecological implications of these events and how they should be Oct 09, 2019 · The Sargasso Sea is a vast patch of ocean named for a genus of free-floating seaweed called Sargassum. Nov 26, 2020 · Increasing ranges of edible seaweed are available commercially, and this new book explores the different types as well as a fantastic collection of creative recipes to cook with them. pdf), Text File (. The greatest variety of red seaweeds is found in subtropical and tropical waters, while brown seaweeds are more common in cooler, temperate waters. It differs slightly in color, flavor, and nutrient profile from the type University of Hawaiʻi at Mānoa | Make Mānoa yours Feb 25, 2020 · Seaweed is a general nomenclature used for a number of species of algae and marine plants that breed in varied water bodies like rivers and oceans. Depending upon local conditions, bands of seaweed within these zones may be narrow or broad. It has been noted that seaweed extracts exert a stimulatory effect on B lymphocytes and macrophages, which may be used clinically for the modulation of immune responses. The categories under which the seaweeds are classified are: (A) Sea Grasses. c) Press samples of the seaweeds. While people once gathered seaweed by hand from open boats, today vacuum powered mechanical harvesters are frequently Seaweed Seaweed, or macroalgae, refers to several species of macroscopic, multicellular, marine algae. Agar - gelatinous substance obtained from red seaweed and used in biological culture media and as a thickener in foods 2. Enjoy these types of kelp uncooked, but stick to moderate portions of 20 grams or less per day. The presentation is interactive and uses Q &A to involve the students. They are also the varieties that have had the most scientific research conducted on them. 
15 Annual quantities for other seaweeds, such as dulse, carrageen moss, and wracks, are estimated to be between 50 and 250 tonnes. Usually they grow more seaweed products, while a large proportion is also used in the production of over 85,000 tonnes of viscous polysaccharides for various food and industrial applications [4]. Seaweeds are generally Black Seaweed, Nori, Laver Porphyra is in the red seaweed group. The first is by being grazed directly, which is discussed in more detail in Section 5. These inflate with the oxygen of photosynthesis and float in rock pools like curled intestines. They grow in a wide range of sizes from minuscule to gigantic. com). Stipe 5. Techniques of Seaweed Culture 4. Common types of seaweed include nori, kombu, kelp, dulce and Irish moss. of seaweed meal, which is sold for US $ 5 million. b) Identify native seaweeds, and look for possible alien (introduced seaweeds). Social license to operate could be useful to the seaweed cultivation industry as it expands – Smaller-sale seaweed cultivation organisations are already practicing activities that are associated with gaining social license to operate Clearly define seaweed industry terms –to alleviate confusion in understanding between Seaweed has been collected in northern New England for agricultural purposes since the first settlers arrived over three hundred years ago. Black seaweed begins to grow in early spring. While the content varies greatly, a serving of seaweed may contain more than 4,500 micrograms of iodine. the seaweeds, or live in the seaweeds, as for example, sponges. The XRD pattern of the carbonized marine algae is The rise in awareness about the medical advantages of commercial seaweed on daily basis is the essential development factor for the global commercial seaweed market. are seaweed farmers and the rest are seaweed processors and traders. These are capable of synthesizing the complex organic substances from the simple inorganic compounds present in sea water. Early usage was by coastal dwellers, who collected storm-cast seaweed, usually large brown seaweeds, and dug it into local soils. 0%. Apr 05, 2019 · Health Benefits of wakame seaweed. CHICAGO, Oct. The protocol was applied to 10 seaweed species. Several species of Porphyra are found along the west coast and approximately 30 species exist worldwide. Seaweeds are photosynthetic macroalgae, the majority of which live in the sea, and are usually divided into green, red and brown algae. Request for Question Clarification by hummer-ga on 04 Jan 2005 17:11 PST Hi again, I only posted the one link - my first sentence was just a suggestion to view the pdf for more details. Two examples of different types of seaweed: a) Caulerpa; b) Halimeda (Click-thru on images for greater detail. seaweed farms in the North Sea linked to a land-based chain for logistics, processing and sales to the food an feed industry. Semi-refined carrageenan, also called PNG (Philippine Natural Grade) or PES (Processed Eucheuma Seaweed). The team cut sections of seaweed and placed them in grips each connected to an arm in a metal frame. Thus, the utilization of seaweed-based biopolymers for edible films and lectins in a perspective of food and health applications is studied in this current article. In laboratory studies, polysaccharide alginate, naturally found in some types of seaweed, has been shown to have antioxidant, anti-inflammatory and anti-immunogenic properties . 2. 
Also, high volume of seaweed consumption into hydrocolloid production coupled with upsurge demand from European countries further propel the growth of the seaweed market. In 2012, 40% of global seaweed production was eaten weeds contain both types: for example, the polysaccharides agar, alginate, and carra-geenan are soluble, while cellulose and xylan are not. A functional salt FRAP antioxidant activity of T. Seaweed has been used by populations around the world for centuries, as food, cosmetics and medicine. Seaweed, a type of algae, is more than just a primary producer: they also convert carbon dioxide into oxygen and are the foundation for many habitats in the ocean. Enter an email address to share your moodboard with a friend, colleague or client. It's an abundant resource that washes up on beaches throughout the world. , Gracilaria sp. Species from all three groups are consumed as food in Total fresh seaweed production was 623,286 tons for all types of seaweed. This has only more recently been approved for food Sep 10, 2020 · Learn more about Sargassum horneri with this Sargassum horneri information sheet (PDF) photo credit: Jessie Altstatt. In general, however, eating this marine algae is a simple way to boost a person's intake of vitamins and minerals The Benefits of Liquid Seaweed Fertilizer | Dengarden Seaweeds are large algae (macroalgae) that grow in a saltwater or marine environment. After collection, tropical seaweeds were washed, air-dried at room temperature (3–5 days; 27–30 °C) to reduce Dec 18, 2020 · Some types of seaweed, like kelp, look like sheets. ac. Fig. Other types of kelp, such as arame and wakame, contain far less iodine than kombu. The research study is an outcome of ext The report also bifurcates global Commercial Seaweeds market based on product type in red seaweed, brown seaweed and green seaweed. Intertidal and estua rine species are the mo st tolerant, especial ly The types of seaweed growing near the high-water mark, where plants are often exposed to air, differ from those growing at lower levels, where there is little or no exposure. It is important to know the particularities of vegetable protein, as one of the types of proteins , to take full advantage. Res. Seaweed is edible algae that grows in the sea. Most of the seaweeds are medium-sized and are available in multiple colors like red, brown, and green. The inclusion dose ranged from 2 to 30 g/kg, while the intervention duration ranged from 21 to 42 days. 2011 is thought to be the first time that an inundation of such a great scale occurred that beaches from Mar 26, 2017 · Seaweed Cultivation Policy Statement. It is concluded that such techniques, based on ecological engineering, seems promising for mitigating environmental impacts from intensive mariculture; however, continued research on this type of solution is required. It is recognizable Additionally, the Japanese and Chinese cultures have used seaweeds to treat goiter and other glandular problems since 300 BC. • Seaweed is allowed to dry in a designated storage area (good options include empty parking lots or fields). Seaweeds Jun 29, 2017 · Seaweed, which can grow rapidly and efficiently, provides plant-based proteins and shows promise as a source of biofuel to replace fossil fuels. The only problem for a lot of people is that seaweed has a distinctive texture and flavor. Total annual use by the global seaweed industry is about 8 million tonnes of wet seaweed. Seaweed Culture Techniques - Free download as Powerpoint Presentation (. 
Acropora nasuta minimizes this damage by chemically cuing symbiotic goby fishes ( Gobiodon histrio or Paragobiodon echinocephalus ) to remove the toxic seaweed Chlorodesmis fastigiata . There are three main types of seaweed: green, brown and red. JWST079-07 JWST079-Kim September 2, 2011 11:50 Printer Name: Yet to Come 176 CHEMICAL COMPOSITION OF SEAWEEDS Types of Seaweed. How Are Seaweeds Grouped? Most seaweeds are divided into three groups acmrding to their Chapter 14. Global Commercial Seaweeds market report provides geographic analysis covering regions such as North America, Europe, Asia Pacific, and Rest of World. They grow best under cold waters at medium depth. Veined blade Mastocarpus papillatus: Turkish washcloth Mazzaella cornucopiae: Iridescent horn-of-plenty Mazzaella splendens: Splendid iridescent seaweed Microcladia borealis: Coarse sea lace Seaweeds are used in many maritime countries as a source of food, for industrial applications and as a fertiliser. Seaweeds are classified as Rhodophyta (red algae), Phaeophyta (brown algae) or Chlorophyta (green algae) depending on their nutrient and chemical composition. Arthritis inflammation is attributed to pro-inflammatory cytokines [5] (signal carrying proteins [8] ). [66] Jul 17, 2020 · Seaweed aquaculture uses no (or insignificant) freshwater resources, and could be located in coastal desert regions. 1, 59-63, 2004 The two chief types of plants occurring in the marine environment are the algae and sea grasses. Organisms: seaweeds, algae, surf grass, sea slugs, interdial fishes. Wakame is a low-claorie and low-fat seafood. Red and brown seaweeds arerich in carotenes (provitamin A) and vitamin C, and their amounts may range from 20 to 170 ppm and 500 to 3000 ppm, respectively. Malachite Green Dye Removal Using the Seaweed Enteromorpha 655 XRD study Further, the x-ray diffraction studies of the carbon prepared from the marine algae Enteromorpha were carried out using Rotoflux x-ray diffractometer 20KW / 20A, Model 10. Long ago, gardeners who lived near the ocean learned that seaweed was good for their plants. We all use seaweed products in our day-to-day life in some way or other. The wave energy and substrate of an area determine which seaweeds will grow Jun 29, 2017 · Seaweed, which can grow rapidly and efficiently, provides plant-based proteins and shows promise as a source of biofuel to replace fossil fuels. , Monostroma (Mon. Traditionally, all classes of seaweeds are known as human foods especially in Asian countries; for instance, red algae are known as Nori and brown algae are called Konbu and Wakame in Japan. Private tideland owners and lessees, and members of their immediate family (grandparents, parents, spouse, siblings, children, and grandchildren) are exempt from personal use daily limits when taking ClaMS, Jul 08, 2018 · Seaweeds farming guide. Here is the step-by-step Instructions for drawing seaweed. 4. It falls into three broad groups based on pigmentation; brown, red and green seaweed. · This type of aquaculture is well suited for small-scale operations, by "grassroots" people running a seaweed business at a household level · All of the seaweeds on the above list occur naturally in this region, except kappaphycus which is an introduced species. Giant kelp is a type of green-brown seaweed that is common in shallow waters in the Pacific Ocean, specifically along North, Central and South America's Western coast. Seaweed farming or kelp farming is the practice of cultivating and harvesting seaweed. 
Giant kelp is almost the fastest growing organism on the planet. Fertilizer uses of seaweed date back at least to the nineteenth century. pptx), PDF File (. The global seaweed extracts market has been segmented based on type, application, and region. This type of permit is issued by each regional office and is evaluated on a case by case basis. Then the subalgebras of seaweed type, or just "seaweeds", have been defined by Panyushev (2001) for arbitrary reductive Lie algebras. the Irish seaweed sector is predicted to increase significantly by 2020. tomaintaintheir nutritionalbenefits. There are over 20 types of edible seaweed and even more are being discovered. In general, three major types of carrageenans can be distinguished: Alcohol processed refined carrageenans. Zanzibar seaweed is in demand, but because cultivation takes place in deep water and is highly specialised, women farmers are unable to farm the higher Organisms: barnacles, mussels, seaweeds-Lower intertidal (infra-littoral fringe zone): Submerged most of the time. seaweeds showed growth enhancement. They are large plants and form vast kelp 'forests' in the subtidal zone, so providing important habitats for many marine animals and protecting shores by reducing wave action. Infuse the seaweed for several weeks then strain. 3 For instance, sugar kelp is an edible type of seaweed that is part of the brown algae family. Moving the top arm pulled the seaweed: the sample's change in length relative to its starting length told them how much strain it was under. 'seaweeds' or sea plants, as they should more correctly be called. Intensively cultivated seaweeds would absorb a significant amount of carbon from ocean water, in addition to excess nutrients released through traditional land-based agricultural systems . Exactly how it works is difficult to pin down, but scientists have found in seaweeds a veritable soup of plant-growth stimulants, vitamins, chelating agents, trace minerals, enzymes, and amino acids, all of which influence the growth of plants in different ways. A staple in Asian diets since ancient times, seaweeds are among the healthiest foods on the planet, packed with vitamins, minerals, and antioxidants. All edible marine seaweed belongs to one of three groups of multicellular algae: green algae, brown algae and red algae. The most recent figures are given by AlgaeBase. There are a variety of seaweed types, which are generally categorised into three main groups; red, green and brown, based on colour (McHugh, 2003). Plants or seaweed have a variety of species. , By type, the global commercial seaweeds market has been classified as red seaweeds, brown seaweeds May 15, 2018 · Good examples of algae include seaweed, giant kelp, and pond scum. You can make them in different green shades to create some variety, or one shade to create some consistency. Autonomous seaweed farms like those by New York-based startup GreenWave, are ready to help interested parties have more access to seaweed-based materials. It's known as carrageen, from the Irish word for "little rock," and looks somewhat like a baby tree, with blades forking off from a small stalk to form fingers. Seaweed can be collected from the wild but is now increasingly cultivated. Eating miyeok-guk that contains a cup of seaweed enables one to absorb around 22% of the recommended daily vitamin K requirement for women and 29% of the recommended daily vitamin K requirement for men. 
Seaweed Seaweed, or macroalgae, refers to several species of macroscopic, multicellular, marine algae. This chapter addresses the worth of seaweed as a sustainable feed ingredient in diets of farm and aquatic animals. When the alginate hits the solution, it instantly polymerizes a skin that surrounds the liquid, forming what appears to be small egg-like caviar Seaweeds are categorized into three types according to their pigment content: red seaweeds (Rhodophyceae), green seaweeds (Chlorophyceae), and brown seaweeds (Phaeophyceae). Seaweed-derived food hydrocolloids have been known for over a long period of time, and the processes of their extraction from the respective types of seaweeds have also been evolving for a long time. Dec 12, 2007 · Abstract. Laminaria (kombu), and Undaria are some of the examples of brown seaweeds. 18, 19 In a randomized, placebo-controlled, double-blind study (n = 70) in elderly Japanese people living in a nursing home, 300 mg/day of mekabu fucoidan (sulfated polysaccharide seaweed extract) given 4 weeks before Seaweeds have been used for many years as a valuable source of organic matter for various soil types and many different fruit and vegetable crops in the coastal regions of world 11. Locally it is called black seaweed. seaweeds was PES medium whereas f/2 and VS media were found not effective for Eucheuma cultures. The strained remains of the seaweed can be mixed into compost bins or gardens. The Philippines is one of the top producers of seaweeds in the world, and aquatic plants next to indonesia (FAO 2007). Mar 13, 2018 · Future of Seaweed. Though Irish moss is a naturally deep purplish red, the seaweed turns white when it washes ashore. In comparison, blue-green algae, which causes harmful algal blooms, is a cyanobacteria and not a marine algae. ) nitidum, and Porphyra (Por. In the future, food enriched in seaweed extracts or containing purified algal polysaccharides may be used to increase the functionality of the food on the market. Together with microscopic algae called phytoplankton, sea plants contribute to the food chain in the sea, provide homes for Jun 06, 2017 · A seaweed of the upper shore, gutweed is able to tolerate very low salinities. In general, however, eating this marine algae is a simple way to boost a person's intake of vitamins and minerals The Benefits of Liquid Seaweed Fertilizer | Dengarden Jun 29, 2018 · • No disposal—seaweed remains in place or is moved off to one side of the beach or to a non-recreational use area. Most seaweed is collected in the spring or summer. In many oriental countries such as Japan, China, Korea, and others, seaweeds are diet gene to spores of several commercially important seaweed genera, taking advantage of the life cycle(Qin, Jiang, Yu, Li, & Sun, 2010). It can be dried for easier transport, or incorporated wet into compost. Edible Films Mar 15, 2016 · It is, in fact, seaweed, and it has a multitude of culinary uses. pilulifera, different Nov 22, 2016 · Seaweeds and seagrasses contribute to secondary production in the ecosystem in two ways. The west coast of Scotland has suitable inlets and sea lochs for seaweed cultivation, with many already used for aquaculture production. complex type N-glycans bearing 1-2 xylosyl and 1-3 fucosyl residues present in either algae or seagrass. The seaweed is a mass or growth of marine plants. , Seaweed extracts market has shown an exceptional penetration in developed economies in North America. Bocas ARTS PI, Dr. 
Phase disambiguation using spatio-temporally modulated illumination in depth sensing
Takahiro Kushida, Kenichiro Tanaka, Takahito Aoto, Takuya Funatomi & Yasuhiro Mukaigawa
IPSJ Transactions on Computer Vision and Applications volume 12, Article number: 1 (2020)
Phase ambiguity is a major problem in depth measurement by either time-of-flight or phase shifting. Resolving the ambiguity using a low-frequency pattern sacrifices the depth resolution, and using multiple frequencies requires a larger number of observations. In this paper, we propose a phase disambiguation method that combines temporal and spatial modulation so that a high depth resolution is preserved while the number of observations is kept small. A key observation is that the phase ambiguities of the temporal and spatial domains appear differently with respect to the depth. Using this difference, the phase can be disambiguated over a wider range of interest. We develop a prototype and show the effectiveness of our method through real-world experiments.
Depth measurement is widely used in applications such as augmented reality, factory automation, robotics, and autonomous driving. In the computer vision field, there are two well-known techniques for measuring scene depth using active illumination. One is the time-of-flight (ToF) camera, which uses temporally modulated illumination to measure the travel time of light; the other is phase shifting, which uses spatially modulated illumination to find correspondences between the projector and the camera for triangulation.
A common problem is how to resolve the periodic ambiguity of the phase, because either measurement gives a phase that is defined only between 0 and 2π. A typical solution is to use multiple frequencies to resolve the phase ambiguity. However, the phase ambiguity still remains at the frequency of the greatest common divisor, so several measurements are required to cover a wide range of interest. Another possible approach is to use a single low frequency, which sacrifices the depth resolution. The aim of this study is to resolve the phase ambiguity with fewer observations while guaranteeing both a wide range of interest and a fine depth resolution.
A key observation of this paper is that the phase ambiguities of the ToF measurement and the phase shifting appear differently in the depth domain. Since the temporal phase is proportional to the depth, the depth candidates derived from it appear at equal intervals along the depth axis. On the other hand, the spatial phase is defined in the disparity domain; hence, its depth candidates appear at gradually increasing intervals. Based on this difference, the phase ambiguity can be resolved by combining temporal and spatial modulation. Because a candidate depth that satisfies both measured phases seldom appears more than once, the number of frequencies can be reduced to one for each domain. In this paper, we discuss the ordinary ToF and phase shifting in the same framework and show that precise depth can be measured over a wide range by combining temporal and spatial modulation. We also derive the resolution and the range of interest theoretically, analyze the recoverability, and build a prototype to show the effectiveness of our method via real-world experiments.
This paper extends its preliminary version [1] with the following differences: we (1) reveal the depth resolution and the range of interest of the proposed method, (2) develop an efficient implementation, and (3) confirm by simulation that depths unrecoverable due to ambiguity seldom exist.
The rest of the paper is organized as follows. Related work is discussed in Section 2, a brief review of the ordinary time-of-flight and phase shifting algorithms is provided in Section 3, a spatio-temporal modulation technique is proposed in Section 4, the resolution and range of interest of our method are analyzed in Section 5, experiments with a prototype system are shown in Section 6, and we conclude with some discussions in Section 7.
Related work

Active depth measurements have been widely studied in the computer vision field. Earlier work used a projector-camera system to encode the projector's pixel index into multiple projection images based on the Gray code [2]. The phase shifting approach [3] recovers subpixel correspondences by detecting the phase of a sinusoid. Gupta and Nayar [4] unwrapped the phase from slightly different frequencies so that the method became robust to indirect light transport with a small budget of projection patterns. Mirdehghan et al. [5] proposed an optimal code for the structured light technique. The time-of-flight method is another way to measure depth. It emits amplitude-modulated light and detects the delayed signal that corresponds to the scene depth [6]. Because the range of interest and the depth resolution are in a tradeoff, a better resolution is obtained by limiting the range of interest [7]. We combine these techniques to realize both a better resolution and a wider range of interest.
Another problem regarding the ToF is multi-path interference due to indirect light transport. Recovering the correct depth of multi-path scenes has been broadly studied using a parametric model [8, 9], K-sparsity [10, 11], frequency analysis [12], and data-driven approaches [13–15]. Because the scene depth can be recovered from the first-returning photon, the depth can also be obtained by recovering light-in-flight imaging [16–21]. Multi-path interference is further mitigated by combining a ToF camera and a projector. Naik et al. [22] combined a ToF camera and a projector-camera system to mitigate multi-path using direct-global separation [23]. Similar ideas are implemented with ToF projectors that can modulate both spatially and temporally [24, 25]. In both cases, direct-global separation is utilized to mitigate multi-path interference. We also use a similar system, but for phase disambiguation rather than only for mitigating multi-path.
To obtain a fine resolution, Gupta et al. [26] propose optimal codes for the ToF modulation. Gutierrez-Barragan et al. [27] propose an optimization approach for designing practical coding functions under hardware constraints. Kadambi et al. [28] use the polarization cue to recover smooth surfaces. Our method addresses a more fundamental layer; hence, these techniques can be incorporated with our method to boost the resolution. Interferometry can also attain micrometer resolution for a small object in a carefully controlled environment [29]. Li et al. [30] recover micro-resolution ToF using the superheterodyne technique. Maeda et al. [31] apply the heterodyne technique to polarization imaging to obtain an accurate depth.
Phase unwrapping is a subproblem in depth measurement. The phase has to be unwrapped with either the phase shifting or the ToF; otherwise, the estimated depth has a 2π ambiguity. The number of observations can be reduced by sacrificing the spatial resolution: the projector's coordinates can be obtained from a single image using a color code [32], a wave grid pattern [33], or a light-field ToF [34]. Our method falls into this class but neither sacrifices the spatial resolution nor requires many patterns. It leverages the asymmetric relation between spatial and temporal wrapping to resolve the ambiguity of the phase.
Depth measurement techniques using modulated illumination
Before explaining our method, we briefly review the ToF and phase shifting methods, casting both as phase measurements using temporally or spatially modulated light, respectively.
Temporal modulation (time-of-flight)
The ToF camera emits temporally modulated light as shown in Fig. 1a. It measures the amplitude decay and the phase delay of the modulated light; the phase delay corresponds to the time it takes for the light to make a round trip.
Modulation variations. a ToF modulates the light temporally. b Phase shifting modulates the light spatially. c Our method combines temporal and spatial modulations at the same time to mitigate the phase ambiguity problem while preserving the depth resolution
The ToF camera measures the correlation between the signals emitted and those received. For each frequency, the phase delay is calculated from the correlations with NT reference signals, which are temporally shifted. For the k-th signal, the correlation ik(x) at the camera pixel x is represented as
$$\begin{array}{*{20}l} {i}_{k}({x}) &= g\left({t} + \frac{2\pi k}{N_{T}}\right) * s({x}, {t}) \end{array} $$
$$\begin{array}{*{20}l} &= \frac{{A}({x})}{2} \cos{\left({{\phi}_{T}}({x}) + \frac{2\pi k}{N_{T}}\right)} + {O}({x}), \end{array} $$
where \(g\left ({t} + \frac {2\pi k}{N_{T}}\right)\) is the reference signal with the shifted phase 2πk/NT, s is the returned signal, the ∗ operator represents the correlation, A is the amplitude decay, ϕT is the phase delay, and O is the ambient light. In the case of NT=4, the phase ϕT and the amplitude A of the returned signal can be recovered by a direct conversion method from multiple observations while changing the phase \(\frac {2\pi k}{N_{T}}\) as
$$\begin{array}{*{20}l} {{\phi}_{T}}({x}) &= \arctan{\left(\frac{{i}_{3}({x}) - {i}_{1}({x})}{{i}_{0}({x}) - {i}_{2}({x})} \right)}, \end{array} $$
$$\begin{array}{*{20}l} {A}({x}) &= \sqrt{({i}_{3}({x}) - {i}_{1}({x}))^{2} + \left({i}_{0}({x}) - {i}_{2}({x}) \right)^{2}}. \end{array} $$
The depth d is obtained as
$$\begin{array}{*{20}l} {d}({x}) = \frac{c}{2 {\omega_{T}}}{{\phi}_{T}}({x}), \end{array} $$
where ωT is the modulation frequency and c is the speed of light.
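For concreteness, the four-bucket decoding of Eqs. (3)–(5) can be written in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function name and the assumption that ωT denotes the angular modulation frequency in rad/s are ours.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def decode_temporal(i0, i1, i2, i3, omega_t):
    """Four-bucket ToF decoding (Eqs. (3)-(5)).

    i0..i3  : correlation images at reference shifts 0, pi/2, pi, 3*pi/2
    omega_t : temporal angular modulation frequency [rad/s] (assumed)
    """
    phi_t = np.arctan2(i3 - i1, i0 - i2) % (2 * np.pi)  # Eq. (3)
    amp = np.sqrt((i3 - i1) ** 2 + (i0 - i2) ** 2)      # Eq. (4)
    depth = C / (2 * omega_t) * phi_t                   # Eq. (5), wrapped depth
    return phi_t, amp, depth
```

Here np.arctan2 resolves the quadrant that a plain arctangent of the ratio would lose, and the modulo keeps the phase in [0, 2π).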
Spatial modulation (phase shifting)
The phase shifting spatially modulates the projection pattern. Finding the correspondences between the projector and camera pixels is the main part of the spatial phase shifting. The idea is to project the sinusoidal pattern as shown in Fig. 1b and measure the phase of the sinusoid for each pixel, which corresponds to the projector's pixel coordinates.
The observed intensity of the camera Il(x) for the l-th shift is represented as
$$\begin{array}{*{20}l} {I}_{l}({x}) = {A}({x})\cos{\left({{\phi}_{S}}({x}) - \frac{2\pi l}{N_{S}} \right)} + {O}({x}), \end{array} $$
where ϕS is the spatial phase of the projection pattern due to disparity. There are three unknown parameters: the offset O(x), the amplitude A(x), and the phase ϕS(x); therefore, they can be recovered from NS≥3 observations while changing the phase of the pattern. In the case of NS=4, the spatial phase ϕS and the amplitude A can be recovered in the same way as in the ToF as
$$\begin{array}{*{20}l} {{\phi}_{S}}({x}) &= \arctan{\left(\frac{{I}_{3}({x}) - {I}_{1}({x})}{{I}_{0}({x}) - {I}_{2}({x})}\right)}, \end{array} $$

$$\begin{array}{*{20}l} {A}({x}) &= \sqrt{\left({I}_{3}({x}) - {I}_{1}({x})\right)^{2} + \left({I}_{0}({x}) - {I}_{2}({x})\right)^{2}}. \end{array} $$
From the estimated disparity, the scene depth can be recovered using the triangulation theory. For example, when the parallel stereo is assumed, the depth is inversely proportional to the disparity as
$$\begin{array}{*{20}l} {d}({x}) = \frac{{b}{f}}{{x} - \frac{{{\phi}_{S}}({x})}{\omega_{S}}} \end{array} $$
where \({x} - \frac{{{\phi}_{S}}({x})}{{\omega_{S}}}\) is the disparity, ωS is the spatial angular frequency of the projection pattern, f is the focal length, and b is the baseline of the pro-cam system. Here, x represents the horizontal pixel position.
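The spatial decoding mirrors the temporal one, followed by the triangulation of Eq. (9). Again a minimal sketch under the parallel-stereo assumption above; x must be expressed in units consistent with the projector coordinate used for ωS, a calibration detail glossed over here.

```python
import numpy as np

def decode_spatial(I0, I1, I2, I3, x, omega_s, b, f):
    """Four-step phase shifting (Eq. (7)) and triangulation (Eq. (9)).

    x       : horizontal camera pixel coordinates
    omega_s : spatial angular frequency of the projected sinusoid
    b, f    : baseline and focal length of the pro-cam pair
    """
    phi_s = np.arctan2(I3 - I1, I0 - I2) % (2 * np.pi)  # Eq. (7)
    disparity = x - phi_s / omega_s                     # wrapped disparity
    return phi_s, b * f / disparity                     # Eq. (9)
```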
Phase ambiguity and depth resolution
A common problem in both temporal and spatial methods is the 2π ambiguity, where the phase is wrapped when the depth exceeds the maximum depth of interest. A naive approach is to use a low frequency to avoid the phase ambiguity. However, a tradeoff exists between the range of interest and the depth resolution. While the phase ambiguity does not appear at a lower frequency, the depth resolution becomes low, as shown in Fig. 2a. With a higher frequency, the depth resolution improves while the phase ambiguity becomes significant, and the depth cannot be uniquely recovered over a wide range of interest, as shown in Fig. 2b.
Tradeoff among the depth resolution, the range of interest, and the number of measurements. The dashed blue line represents the low frequency phase, and the solid line represents the high frequency. Horizontal red bands represent the resolution of the measured phase. Intersections of the blue lines and the horizontal red bands (depicted as red circles) are the candidate depths, and the corresponding depth resolution is illustrated as vertical red bands. a, b While the resolution in phase is the same, the corresponding depth resolution varies depending on the frequency. With a higher frequency, a better depth resolution is obtained; however, there is depth ambiguity. c Using multiple frequencies, the range of interest can be extended to the frequency of the greatest common divisor, and the depth resolution is determined by the highest frequency. d The bottom table summarizes the tradeoff
The phase ambiguity is usually relaxed by using multiple frequencies in either the temporal or the spatial domain. However, multiple captures are required, which sacrifices real-time capability, as shown in Fig. 2c. We propose a hybrid disambiguation approach that takes advantage of the different natures of temporal and spatial modulation.
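As a concrete (illustrative) instance of this tradeoff, take a temporal modulation of ωT = 2π × 60 MHz, the value of our later prototype. The phase of Eq. (5) then wraps after

$$ d_{\text{wrap}} = \frac{{c}}{2 {\omega_{T}}} \cdot 2\pi = \frac{3\times 10^{8}\ \text{m/s}}{2 \times 60\times 10^{6}\ \text{Hz}} = 2.5\ \text{m}, $$

so doubling the frequency halves the unambiguous range to 1.25 m while also halving the depth uncertainty for the same phase noise.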
Proposed method
We propose a hybrid method of temporal and spatial modulation as shown in Fig. 1c. The phase ambiguity can be resolved by using both temporal and spatial phases instead of using multiple frequencies in either domain.
Spatio-temporal phase disambiguation
Our key idea is that the depth candidates arising from the ambiguity of the temporal and spatial phases are different. In the case of the temporal phase, the intervals of the depth candidates are constant along the depth because the depth is proportional to the phase, as shown in Eq. (5). On the other hand, the spatial phase is defined in the disparity domain. Because the depth is inversely proportional to the disparity (as shown in Eq. (9)), the intervals of the depth candidates increase along with the depth. Figure 3 shows the phase observations along with the scene depth. Multiple depth candidates correspond to a single phase: the candidates appear at the same interval for the temporal phase, while the intervals for the spatial phase increase. This difference is the key feature of our method for resolving the phase ambiguity.
Phase observations with the depth. While depth candidates of the temporal phase appear at the same intervals, those of the spatial pattern appear at increasing intervals. This difference is the cue to disambiguate the depth candidate. The unique depth candidate that satisfies both the temporal phase and the spatial phase can be obtained
A depth that satisfies both the temporal and spatial phases seldom appears more than once. The unwrapped phase is not restricted by the greatest common divisor, and the pair of temporal and spatial phases is unique over a wider range of interest. The candidate depths can be obtained from the following equations, respectively, as
$$\begin{array}{*{20}l} {d}_{T} &= \frac{{c}}{2 {\omega_{T}}}(2\pi n_{T} + {{\phi}_{T}}) \end{array} $$
$$\begin{array}{*{20}l} {d}_{S} &= \frac{{b} {f}}{{x} - \frac{2\pi n_{S} + {{\phi}_{S}}}{{\omega_{S}}} }. \end{array} $$
More than one integer pair (nT,nS) that satisfies dT=dS seldom exists. Therefore, the phase ambiguity problem can be resolved using the phases of the two different domains.
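A brute-force sketch of this disambiguation for a single pixel is shown below: enumerate the candidates of Eqs. (10) and (11) and keep the pair that agrees. The search bound d_bound, the wrap count, and the tolerance tol are illustrative assumptions, not values from the paper, and all quantities are assumed to be in consistent units.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def disambiguate(phi_t, phi_s, x, omega_t, omega_s, b, f,
                 d_bound=2.0, tol=1e-3):
    """Find the depth that satisfies both wrapped phases (Eqs. (10)-(11))."""
    # Temporal candidates: equally spaced along the depth axis.
    n_t = np.arange(int(d_bound * omega_t / (np.pi * C)) + 1)
    d_t = C / (2 * omega_t) * (2 * np.pi * n_t + phi_t)
    # Spatial candidates: intervals grow with depth (disparity domain).
    n_s = np.arange(200)  # illustrative upper bound on the wrap count
    d_s = b * f / (x - (2 * np.pi * n_s + phi_s) / omega_s)
    d_s = d_s[(d_s > 0) & (d_s < d_bound)]
    if d_s.size == 0:
        return None
    # The (almost always unique) agreeing pair is the answer.
    diff = np.abs(d_t[:, None] - d_s[None, :])
    i, j = np.unravel_index(np.argmin(diff), diff.shape)
    return d_t[i] if diff[i, j] < tol else None
```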
Phase recovery and depth estimation
Defining I0 as the irradiance, the emitted signal from the projector with the k-th temporal shift and the l-th spatial shift I(p,t,k,l) can be expressed as
$$ \begin{aligned} I({p}, t, k, l) &= I_{0} \left(\frac{1}{2}\cos \left(\omega_{T} t + \frac{2\pi k}{N_{T}}\right) + \frac{1}{2} \right)\\&\quad \left(\frac{1}{2}\cos\left({\omega_{S}}{p} - \frac{2\pi l}{N_{S}}\right) + \frac{1}{2}\right), \end{aligned} $$
where t is time and p is the projector's pixel. The returned signal r(x,t,k,l) at the camera pixel x is represented as
$$ {{}\begin{aligned} r(x, t, k, l)& =I_{0} \kappa(x) \left(\frac{1}{2}\cos \left(\omega_{T} t - \phi_{T}(x) - \frac{2\pi k}{N_{T}}\right) + \frac{1}{2}\right)\\& \left(\frac{1}{2}\cos\left(\phi_{S}(x) - \frac{2\pi l}{N_{S}}\right) + \frac{1}{2}\right) \\ &+ o(x), \end{aligned}} $$
where κ is the reflectance of the target object, o(x) is the ambient light, ϕT(x) is the phase delay corresponding to the round trip time, and ϕS(x) is the phase corresponding to the disparity (x−p). The intensity is the correlation with the reference signal \(g_{{\omega_{T}}}(t)\) [35] as
$$\begin{array}{*{20}l} i({x}, k, l) &= \int_{0}^{T} r({x}, t, k, l)g_{{\omega_{T}}}(t) dt \\ &\approx\,{A}({x}) \left(\frac{1}{2} \cos\left({{\phi}_{T}}({x}) + \frac{2\pi k}{N_{T}}\right) + \frac{1}{2}\right)\\&\quad \left(\frac{1}{2} \cos\left({{\phi}_{S}}({x}) - \frac{2\pi l}{N_{S}}\right) + \frac{1}{2}\right) \\ &\quad + {O}({x}), \end{array} $$
where T is the exposure time. The temporal phase ϕT and spatial phase ϕS are obtained from 8 observations with NT=4 and NS=4 as
$$ \left\{{\begin{aligned} {{\phi}_{T}}({x}) &= \arctan{\frac{{i}({x}, 3, 0) - {i}({x}, 1, 0)}{{i}({x}, 0, 0) - {i}({x}, 2, 0)}} \\ {{\phi}_{S}}(x) &= \arctan{\frac{{i}(x, 0, 3)-{i}(x, 0, 1) }{{i}(x, 0, 0)-{i}(x, 0, 2)}}. \end{aligned}}\right. $$
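In code, Eq. (15) means that only eight images are needed: four temporal shifts with the spatial shift fixed at l = 0 and four spatial shifts with k = 0. A sketch, assuming the captures are stacked in an array i[k, l] of shape (4, 4, H, W) with the unused slots left empty:

```python
import numpy as np

def recover_phases(i):
    """Recover both phases from the eight correlation images (Eq. (15)).

    i[k, l] : correlation image for temporal shift 2*pi*k/4 and spatial
    shift 2*pi*l/4; only the slots (k, 0) and (0, l) are captured.
    """
    phi_t = np.arctan2(i[3, 0] - i[1, 0], i[0, 0] - i[2, 0]) % (2 * np.pi)
    phi_s = np.arctan2(i[0, 3] - i[0, 1], i[0, 0] - i[0, 2]) % (2 * np.pi)
    return phi_t, phi_s
```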
Now, we have two phases: the temporal phase ϕT and the spatial phase ϕS. Depth estimation from the two phases is similar to the unwrapping problem in both the multi-frequency phase shifting and the ToF, and it can be solved by searching a lookup table [4]. The observed phases should respectively be equal to the phases computed from the same depth; the computed phases \({\Tilde{\phi_{T}}}(d)\) and \({\Tilde{\phi_{S}}}(d, {x})\) are obtained as
$$\begin{array}{*{20}l} {\Tilde{\phi_{T}}}(d) &= \frac{2 {\omega_{T}} d}{c} \bmod{2 \pi} \end{array} $$
$$\begin{array}{*{20}l} {\Tilde{\phi_{S}}}(d, {x}) &= {\omega_{S}} \left(x - \frac{{b} {f}}{d} \right) \bmod{2\pi}. \end{array} $$
A lookup table is built for each horizontal pixel position x of the camera because the spatial phase depends on the pixel position. The table \(\mathcal{T}_{{x}}\) at the horizontal position x consists of the vectors \(\Phi _{D_{i}, {x}} = [{\Tilde {\phi _{T}}}(D_{i}), {\Tilde {\phi _{S}}}(D_{i}, {x})]\) for the candidate depths Di as
$$\begin{array}{*{20}l} \mathcal{T}_{{x}}(D_{i}) = \Phi_{{D_{i}, {x}}} = \left[{\Tilde{\phi_{T}}}(D_{i}), {\Tilde{\phi_{S}}}(D_{i}, {x})\right]. \end{array} $$
For each pixel, the depth can be estimated by searching the lookup table as
$$\begin{array}{*{20}l} \hat{d}({x}) = \arg\min_{d} \left\lVert{\mathcal{T}_{{x}}({d}) - \left[{{\phi}_{T}}({x}), {{\phi}_{S}}({x})\right]}\right\rVert^{2}_{2}. \end{array} $$
Efficient implementation In practice, building the lookup table for each horizontal pixel position is not necessary. Although the spatial phase and the corresponding depth depend on the camera pixel position, the disparity does not. Therefore, the depth of all camera pixels can be obtained with a single lookup table built from pairs of the temporal phase and the disparity, once the measured phase is converted to a disparity. The disparity is obtained from the measured spatial phase ϕS and the pixel position x as
$$\begin{array}{*{20}l} {\delta}({x}, {{\phi}_{S}}({x})) &= {x} - \frac{{{\phi}_{S}}({x})}{{\omega_{S}}} \end{array} $$
$$\begin{array}{*{20}l} &= \frac{bf}{\tilde{d}}, \end{array} $$
where δ represents the disparity and \(\tilde {d}\) is the wrapped depth. The table \(\mathcal {T'}\) consists of the vector \(\Phi _{D_{i}}' = [{\Tilde {\phi _{T}}}(D_{i}), {\Tilde {\delta }}(D_{i})]\) of the candidate depth Di as
$$\begin{array}{*{20}l} {\Tilde{\delta}}(D_{i}) &= \frac{{b} {f}}{D_{i}} \bmod{\frac{2\pi}{{\omega_{S}}}} \end{array} $$
$$\begin{array}{*{20}l} \mathcal{T'}(D_{i}) &= \Phi_{D_{i}}'=\left [{\Tilde{\phi_{T}}}(D_{i}), {\Tilde{\delta}}(D_{i})\right], \end{array} $$
where \({\Tilde{\delta}}\) is the disparity computed from the candidate depths. For each pixel, the depth can be estimated by searching the lookup table as
$$\begin{array}{*{20}l} \hat{d}({x}) = \arg\min_{d} \left\lVert{\mathcal{T'}({d}) - [{{\phi}_{T}}({x}), {\delta}({x}, {{\phi}_{S}})]}\right\rVert^{2}_{2}. \end{array} $$
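A sketch of this single-table variant follows (Eqs. (20)–(24)). The depth grid spacing is an assumption. Two practical details left implicit by the equations are hedged in the comments: the measured disparity is wrapped to the table's modulus before the comparison, and since the temporal phase is in radians while the disparity is in pixels, the two residuals should in practice be weighted to comparable units before summing.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def build_lut(omega_t, omega_s, b, f, d_grid):
    """Table T'(D_i) = [phi_t~(D_i), delta~(D_i)] (Eqs. (22)-(23))."""
    phi_t = (2 * omega_t * d_grid / C) % (2 * np.pi)
    delta = (b * f / d_grid) % (2 * np.pi / omega_s)
    return np.stack([phi_t, delta], axis=1)

def lookup_depth(lut, d_grid, phi_t, delta_meas):
    """Nearest-neighbour search of Eq. (24) for one pixel.
    In practice the two terms should be scaled to comparable units."""
    err = (lut[:, 0] - phi_t) ** 2 + (lut[:, 1] - delta_meas) ** 2
    return d_grid[np.argmin(err)]

# Usage sketch: one table serves every pixel.
# d_grid = np.linspace(0.1, 2.0, 4096)                        # assumed grid
# lut = build_lut(omega_t, omega_s, b, f, d_grid)
# delta_meas = (x - phi_s / omega_s) % (2 * np.pi / omega_s)  # Eqs. (20)-(21)
# depth = lookup_depth(lut, d_grid, phi_t, delta_meas)
```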
Analysis of the proposed method
Depth resolution The resolution of the proposed method is better than that of the ToF in the near range and better than that of the phase shifting in the far range.
The resolution of ordinary ToF and phase shifting is respectively represented as [6, 25]
$$\begin{array}{*{20}l} {\Delta d}_{T} &= \frac{c\pi}{{\omega_{T}}}\frac{\sqrt{B}}{2\sqrt{8}A}, \end{array} $$
$$\begin{array}{*{20}l} {\Delta d}_{S} &= \frac{2\pi{d}^{2}}{{b}{f}{\omega_{S}}}\frac{\sqrt{B}}{2\sqrt{8}A}, \end{array} $$
where A and B are the numbers of photo-electrons that the sensor can accumulate for the amplitude and the DC component, respectively. We suppose that A and B are parameters of the hardware and are independent of the scene. However, the returned light is influenced by light falloff in reality; including this effect for a more accurate analysis is left for future work.
Figure 4 shows the depth resolution of the ToF and the phase shifting along with the depth according to Eqs. (25) and (26). The resolution of the ToF is constant at any depth, while the resolution of the phase shifting is proportional to the square of the depth. As shown in Fig. 4, the proposed method achieves a resolution close to the better of the two.
Depth resolution along with the depth. According to Eqs. (25) and (26), the resolution of ToF is constant (blue) and the resolution of phase shifting is proportional to the square of the depth (orange). The depth dcross is the depth where the lines of the resolution of ToF and the resolution of the phase shifting cross. The proposed method can achieve a resolution close to the phase shifting in the near range before dcross and close to the ToF in the far range after dcross (green)
The depth dcross is defined as the depth where the resolution of the ToF equals that of the phase shifting. In the range nearer than dcross, the resolution of our method is better than that of the ToF and close to that of the phase shifting. In the range farther than dcross, the resolution of our method is better than that of the phase shifting and close to that of the ToF. The depth dcross is given as
$$\begin{array}{*{20}l} {\Delta d}_{S} &= {\Delta d}_{T} \end{array} $$
$$\begin{array}{*{20}l} {d_{\text{cross}}} &= \sqrt{\frac{{c} {b} {f} {\omega_{S}}}{2 {\omega_{T}}}}. \end{array} $$
When we want to improve the resolution over pure ToF, the maximum range of the system should be set shorter than dcross.
Range of interest The range of interest (ROI) of the proposed method is determined by the relative relation between the temporal and the spatial frequencies.
Nearest range When the spatial frequency is too high compared with the temporal frequency, the phase ambiguity cannot be resolved because multiple candidate depths exist within the resolution of the ToF, as shown in Fig. 5a. The spatial frequency varies with the depth because the projection is perspective: the shorter the distance, the higher the spatial frequency. This property gives the nearest ROI of the proposed method. The nearest ROI dmin is where the wrapping distance of the spatial phase equals the resolution of the ToF at the given temporal and spatial frequencies as
$$\begin{array}{*{20}l} {d}_{S}|_{n_{S}=n_{S}^{\prime}} \ - {d}_{S}|_{n_{S}=n_{S}^{\prime} - 1} = \frac{{\Delta d}_{T}}{2}, \end{array} $$
Upper and lower bound of the ROI. Orange lines represent the candidate depths of the spatial modulation; blue lines represent the candidate depths of the temporal modulation. The width of each line shows the resolution. a If the depth is nearer than dmin, several candidate depths from the spatial modulation (orange lines) exist within the resolution of the temporal modulation (blue band). b On the other hand, if the depth is farther than dmax, several candidate depths from the temporal modulation (blue lines) exist within the spatial resolution (orange band)
where \({d}_{S}|_{n_{S}=n_{S}'}\) is the unwrapped depth and \({d}_{S}|_{n_{S}=n_{S}'-1}\) is the neighboring depth candidate from Eq. (11). Substituting Eq. (17) and transforming the expression, the minimum depth of the range of interest dmin can be obtained as
$$\begin{array}{*{20}l} {d_{\text{min}}} = \frac{{\Delta d}_{T}}{4} +{\frac{1}{2}\sqrt{\frac{{\Delta d}^{2}_{T}}{4} + \frac{{\omega_{S}} {b} {f} {\Delta d}_{T}}{\pi}}}. \end{array} $$
Farthest range When the spatial frequency is too low compared with the temporal frequency, the phase ambiguity cannot be resolved because multiple candidate depths exist within the resolution of the spatial phase shifting, as shown in Fig. 5b. Because the resolution of the spatial phase shifting degrades in proportion to the square of the depth, the farthest ROI dmax is determined. The farthest ROI dmax is where the wrapping distance of the temporal phase equals the resolution of the phase shifting as
$$\begin{array}{*{20}l} {d}_{T}|_{n_{T}=n_{T}'} \ - {d}_{T}|_{n_{T}=n_{T}'-1} = \frac{\Delta d_{S}}{2}, \end{array} $$
where \({d}_{T}|_{n_{T}=n_{T}'}\) is the unwrapped depth and \({d}_{T}|_{n_{T}=n_{T}'-1}\) is the neighboring depth candidate from Eq. (10). Substituting Eq. (16) and Eq. (26) and transforming the expression, the farthest ROI dmax can be obtained as
$$\begin{array}{*{20}l} {d_{\text{max}}} = \sqrt{\frac{{\omega_{S}} {b} {f} {c}^{2} \pi}{{\omega_{T}}^{2} {\Delta d}_{T}}}. \end{array} $$
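To get a feel for these bounds, the expressions for ΔdT (Eq. (25)), dcross (Eq. (28)), dmin, and dmax can be evaluated numerically. Every parameter value below is an illustrative assumption (the baseline and focal length echo the prototype of Section 6, but the photo-electron budget A, B and the metric spatial frequency are not calibrated values):

```python
import numpy as np

C = 3e8                           # speed of light [m/s]
omega_t = 2 * np.pi * 60e6        # temporal angular frequency [rad/s]
b, f = 0.07, 0.035                # baseline [m], focal length [m] (assumed)
omega_s = 2 * np.pi / 0.6e-3      # spatial angular frequency [rad/m] (assumed)
A, B = 1500.0, 3000.0             # photo-electron budget (assumed)

dres_t = C * np.pi / omega_t * np.sqrt(B) / (2 * np.sqrt(8) * A)  # Eq. (25)
d_cross = np.sqrt(C * b * f * omega_s / (2 * omega_t))            # Eq. (28)
d_min = dres_t / 4 + 0.5 * np.sqrt(dres_t ** 2 / 4
                                   + omega_s * b * f * dres_t / np.pi)
d_max = np.sqrt(omega_s * b * f * C ** 2 * np.pi
                / (omega_t ** 2 * dres_t))
print(f"resolution {dres_t * 1e3:.1f} mm, cross {d_cross:.2f} m, "
      f"ROI [{d_min:.3f}, {d_max:.1f}] m")
```

With these assumed numbers, the temporal resolution is on the order of a centimeter and dmin is a few tens of centimeters; the actual values depend entirely on the hardware budget.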
Unrecoverable point A few unrecoverable depths exist in the proposed method. Figure 6 shows the pairs of temporal and spatial phases corresponding to each depth. The vertical axis is the temporal phase, and the horizontal axis is the spatial phase. The color of the curves represents the depth. The intersections of the curves are unrecoverable depths because different depths share the same phase pair. This is a limitation of the method; however, these points generally appear sparsely in the image, and hence the depth can be estimated by looking at the neighboring pixels.
The transition of temporal and spatial phases with respect to the depth. The vertical axis represents the temporal phase and the horizontal axis represents the spatial phase. The color represents the depth. The intersections of the curves have the same phase pair at the different depths. These depths cannot be recovered uniquely
We confirm via simulation that the unrecoverable points seldom exist. We evaluated the percentage of unrecoverable pixels in an image using an indoor dataset [36]. Temporal and spatial phases were rendered separately, and the depth image was estimated by our method from these phase images. The temporal frequency was set to 50 MHz, and the spatial frequency to 1/0.6 mm−1. One hundred scenes were selected randomly from the dataset.
The results are shown in Fig. 7. The depths of some pixels cannot be recovered due to multiple candidates, but the average ratio of unrecovered pixels per image is less than 5%. These points exist sparsely in the image; hence, it is possible to select the correct candidate by looking at the surrounding pixels.
Some results of the simulation. Black pixels cannot be recovered due to depth ambiguity. Unrecoverable pixels seldom exist in an image
Brightness of the pattern One may think that the temporal phase cannot be obtained where the spatial pattern is completely black. Because a spatial sinusoidal pattern is projected, all pixels have a chance to receive photons unless the pattern intensity is extremely low. A possible solution is to add a constant value to the spatial pattern so that no pixel remains always black. In this case, the observation Eq. (14) is rewritten as
$$\begin{array}{*{20}l} {i}({x}, k, l) =&{A}({x}) \left(\frac{1}{2} \cos\left({{\phi}_{T}}({x}) + \frac{2\pi k}{N_{T}} \right) + \frac{1}{2} \right) \\&\quad \left(A_{S} \cos\left({{\phi}_{S}}({x}) - \frac{2\pi l}{N_{S}} \right) + O_{S} \right) \\&+ {O}({x}), \end{array} $$
where AS and OS (0<OS−AS and OS+AS≤1) are the amplitude and the offset of the spatial modulation, respectively. Analogous to Eq. (14), both phases can be obtained by the same equations as Eq. (15) in the NT,NS=4 case, so it is not necessary to increase the number of observations.
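A sketch of generating this offset pattern, i.e., Eq. (12) with the spatial term replaced as above, is shown below. The specific A_S and O_S values are illustrative and merely need to satisfy 0 < O_S − A_S and O_S + A_S ≤ 1:

```python
import numpy as np

def emitted_pattern(p, t, k, l, omega_t, omega_s,
                    n_t=4, n_s=4, a_s=0.4, o_s=0.5):
    """Spatio-temporal pattern: a temporal sinusoid times an offset spatial
    sinusoid, so no projector pixel p is ever completely dark."""
    temporal = 0.5 * np.cos(omega_t * t + 2 * np.pi * k / n_t) + 0.5
    spatial = a_s * np.cos(omega_s * p - 2 * np.pi * l / n_s) + o_s
    return temporal * spatial
```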
Experiments

We demonstrated the effectiveness of our proposed method with real-world experiments.
Hardware prototype We developed a hardware prototype that can illuminate a scene with a spatio-temporal modulated pattern. Our prototype was built onto a ToF camera (Texas Instruments OPT8241-CDK-EVM). The light source was replaced with a laser diode and a DMD system that can project the spatial pattern. The light source was an 830-nm laser diode (Hamamatsu Photonics L9277-42), and its emission was synchronized with the ToF sensor. The light emitted by the diode was collimated and expanded through lenses, and then reflected onto a DMD device (Texas Instruments DLP6500) that had 1920×1080 pixels. Finally, the spatio-temporal pattern was projected onto the scene through a projection lens, as shown in Fig. 8.
Hardware prototype. The light source unit consists of a laser diode and a DMD device. The emission of the laser diode is temporally modulated by the sync signal from the ToF camera and then spatially modulated by the DMD. The ToF camera and the projection lens of the projector are placed side by side
First, the measurement system was calibrated in a standard way for pro-cam systems using a reference board [37]. The phase of the ToF at each pixel was then calibrated to share the same coordinates as the pro-cam system. For the phase calibration, a white planar board was captured while its position was moved. For each measurement of the board, a pair of the raw phase and the ground-truth depth was obtained because the depth of the board was measured by the ordinary phase shifting. The parameters to recover the depth from the phase were then calibrated by line fitting.
Result First, we measured a white planar board and placed it at approximately 350 mm from the camera and slightly slanted it, as shown in Fig. 9a. The temporal frequency was 60 MHz, and the period of the spatial pattern was 60 pixels on the projection image. The baseline between the camera and the projector was approximately 70 mm, and the focal length of the projection lens was 35 mm.
Results with a white planar board. Ordinary ToF, phase shifting (single high frequency), and our method are compared. a The object was placed at a slight slant. b The estimated depth images. Because the depth cannot be identified in the phase shifting, the depth image cannot be visualized. c The cross-section of the red line is shown. While the ordinary ToF is noisy and phase shifting has many candidates, our method recovers a smooth and unique depth candidate
The depths were obtained by an ordinary ToF with a single low frequency, phase shifting with single high frequency, and our method for the comparison. Figure 9b shows the estimated depth images. Both the ToF and our method recover the global depth. The depth image with phase shifting cannot be visualized because it has multiple depth candidates. The cross-section of the red line is shown in Fig. 9c. While the depth measured by the ordinary ToF is noisy and there are many depth candidates due to phase ambiguity in the phase shifting, our method recovers a smooth surface while resolving the phase ambiguity. The region near the edge is not correctly disambiguated because the resolution of the temporal measurement exceeds the interval of the phase shifting. The ToF resolution near the edge is lower than what we expected because the illumination is very low near the edge. However, decreasing the spatial frequency might have mitigated it.
Finally, we measured a plaster bust and placed it approximately 400 mm from the camera, as shown in Fig. 10a. The estimated depth images are shown in Fig. 10b. The cross-section of the depth is shown in Fig. 10c. Our method recovers a unique and smooth depth.
Results with a plaster bust. a The scene. b The depth maps. Black pixels represent the occlusion. c The cross-section of the red lines drawn on (b). Our method recovers a unique and smooth surface
We developed a depth sensing method that uses spatio-temporally modulated illumination. We showed that the phase ambiguities of the temporal and spatial modulations are different, so it is possible to effectively resolve the ambiguities while reducing the observations and preserving the depth resolution.
Our proposed method inherits not only the strengths of the time-of-flight camera and of active stereo with a projector–camera system but also their weaknesses. While the proposed method can achieve better resolution and a wider range of interest, it may suffer from occlusion, which sacrifices part of the ToF camera's potential. In practice, however, current ToF cameras are not co-axial setups either and do not suffer much from occlusion. If the spatio-temporal projector is configured in a micro-baseline setup similar to a ToF camera, the system likewise does not suffer much from occlusion.
In this paper, the depth of the ToF measurement is defined as the distance between the camera and the target; on the other hand, the depth of the projector–camera phase-shifting system is defined as the distance between the center of the baseline and the target. In practice, this difference should be corrected in the implementation, although it does not affect our key idea. Indeed, this model mismatch is absorbed by the calibration step that builds a look-up table.
Our hardware prototype has some limitations. Because the DMD produces the sinusoidal pattern by switching the mirrors on and off, it can introduce artifacts into the ToF measurement. We ignored this effect, but it should be addressed by controlling the DMD appropriately or by using a solid-state spatial light modulator. The quality of the spatio-temporally modulated illumination of our prototype is not very high. The temporal phase contains a systematic distortion, and the spatial resolution of the projector is currently limited to 64 pixels on the DMD, corresponding to 4 pixels on the camera, because the pattern is blurred. This might be due to the collimation and alignment accuracy of the optics or to diffraction on the DMD. The light source cannot emit a spatial pattern equal to or smaller than the camera pixel's size, resulting in diminished phase shifting. In future implementations, we will develop a better light source unit to improve the temporal phase measurements and generate higher spatial resolutions.
Derivation of Eq. (30)
We restate Eq. (29) for the derivation of Eq. (30) as
$$\begin{array}{*{20}l} {d}_{S}|_{n_{S}=n_{S}'} \ - {d}_{S}|_{n_{S}=n_{S}' - 1} = \frac{{\Delta d}_{T}}{2}. \end{array} $$
${d}_{S}|_{n_{S}=n_{S}'-1}$ is the neighboring depth candidate, given as
$$\begin{array}{*{20}l} {d}_{S}|_{n_{S}=n_{S}^{\prime}-1} = \frac{{b} {f}}{{x} - \frac{2\pi (n_{S}^{\prime}-1) + {{\phi}_{S}}}{{\omega_{S}}}}. \end{array} $$
(A.1)
The unwrapped depth ${d}_{S}|_{n_{S}=n_{S}'}$ that satisfies Eq. (29) is the minimum depth of the range of interest ${d_{\text{min}}}$, as
$$\begin{array}{*{20}l} {d}_{S}|_{n_{S}=n_{S}^{\prime}} = \frac{{b} {f}}{{x} - \frac{2\pi n_{S}^{\prime} + {{\phi}_{S}}}{{\omega_{S}}}} = {d_{\text{min}}}. \end{array} $$
(A.2)
Substituting Eqs. (A.1) and (A.2) into Eq. (29),
$$\begin{array}{*{20}l} {d_{\text{min}}} - \frac{{b} {f}}{{x} - \frac{2\pi (n_{S}' - 1) + {{\phi}_{S}}}{{\omega_{S}}}} = \frac{{\Delta d}_{T}}{2}, \notag \\ {d_{\text{min}}} - \frac{{b} {f}}{{x} - \frac{2\pi n_{S}' + {{\phi}_{S}}}{{\omega_{S}}} + \frac{2\pi}{{\omega_{S}}}} = \frac{{\Delta d}_{T}}{2}. \end{array} $$
Substituting Eq. (A.2) into the denominator part,
$$\begin{array}{*{20}l} {d_{\text{min}}} - \frac{{b} {f}}{\frac{{b} {f}}{{d_{\text{min}}}} + \frac{2\pi}{{\omega_{S}}}} = \frac{{\Delta d}_{T}}{2}. \end{array} $$
Multiplying both sides of the equation by \(\frac {{b}{f}}{{d_{\text {min}}}} + \frac {2\pi }{{\omega _{S}}}\) and rearranging the equation,
$$\begin{array}{*{20}l} {d_{\text{min}}} \left(\frac{{b} {f}}{{d_{\text{min}}}} + \frac{2\pi}{{\omega_{S}}} \right) - {b} {f} = \frac{{\Delta d}_{T}}{2} \left(\frac{{b} {f}}{{d_{\text{min}}}} + \frac{2\pi}{{\omega_{S}}} \right) \end{array} $$
$$\begin{array}{*{20}l} {d_{\text{min}}}^{2} - \frac{{\Delta d}_{T}}{2} {d_{\text{min}}} - \frac{{\omega_{S}}}{2 \pi} \frac{{\Delta d}_{T}}{2} {b} {f} = 0. \end{array} $$
Solving the quadratic equation for dmin, we obtain
$$\begin{array}{*{20}l} {d_{\text{min}}} &= \frac{{\Delta d}_{T}}{4} +{\frac{1}{2}\sqrt{\frac{{\Delta d}^{2}_{T}}{4} + \frac{{\omega_{S}} {b} {f} {\Delta d}_{T}}{\pi}}}, \end{array} $$
where the other solution is always negative and thus out of the valid range ${d_{\text{min}}} > 0$.
Derivation of Eq. (32)
Substituting Eqs. (10) and (26) into Eq. (31),
$$\begin{array}{*{20}l} \frac{{c}}{2 {\omega_{T}}}(2\pi n_{T} + {{\phi}_{T}}) - \frac{{c}}{2 {\omega_{T}}}(2\pi (n_{T} - 1) + {{\phi}_{T}}) &=\frac{1}{2} \frac{2\pi{d_{\text{max}}}^{2}}{{b}{f}{\omega_{S}}}\frac{\sqrt{B}}{2\sqrt{8}A}, \\ \frac{{c} \pi}{{\omega_{T}}} &= \frac{\pi{d_{\text{max}}}^{2}}{{b}{f}{\omega_{S}}}\frac{\sqrt{B}}{2\sqrt{8}A}. \end{array} $$
Rearranging the equation,
$$\begin{array}{*{20}l} {d_{\text{max}}}^{2} &= \frac{{\omega_{S}} {b} {f} {c}}{{\omega_{T}}}\frac{2\sqrt{8}A}{\sqrt{B}}. \end{array} $$
Substituting Eq. (25) into Eq. (A.9) to cancel A and B,
$$\begin{array}{*{20}l} {d_{\text{max}}}^{2} &= \frac{{\omega_{S}} {b} {f} {c}^{2}\pi}{{\omega_{T}}^{2}{\Delta d}_{T}}. \end{array} $$
(A.10)
Therefore,
$$\begin{array}{*{20}l} {d_{\text{max}}} &= \sqrt{\frac{{\omega_{S}} {b} {f} {c}^{2} \pi}{{\omega_{T}}^{2} {\Delta d}_{T}}}, \end{array} $$
because ${d_{\text{max}}} > 0$.
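A numerical sanity check of the two closed forms above (Python; all parameter values are made up and merely illustrative, since Eq. (25) relating A and B to $\Delta d_T$ is not restated here):

```python
import numpy as np

# Made-up parameters in consistent units (the absolute scale is illustrative only).
delta_dT = 40.0                # temporal depth resolution (Delta d_T)
omega_S = 2 * np.pi / 60.0     # spatial pattern angular frequency
b, f = 70.0, 35.0              # baseline and focal length
c = 3.0e11                     # speed of light [mm/s]
omega_T = 2 * np.pi * 60e6     # temporal modulation angular frequency [rad/s]

# d_min: the positive root of  d^2 - (dT/2) d - (omega_S/2pi)(dT/2) b f = 0.
coeffs = [1.0, -delta_dT / 2.0, -(omega_S / (2 * np.pi)) * (delta_dT / 2.0) * b * f]
d_min_root = float(np.max(np.roots(coeffs).real))   # the other root is negative
d_min_closed = (delta_dT / 4.0
                + 0.5 * np.sqrt(delta_dT**2 / 4.0 + omega_S * b * f * delta_dT / np.pi))
assert np.isclose(d_min_root, d_min_closed)

# d_max from the final expression above.
d_max = np.sqrt(omega_S * b * f * c**2 * np.pi / (omega_T**2 * delta_dT))
print(d_min_closed, d_max)
```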
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Please see Appendix for the derivation.
ToF: Time-of-flight
DMD: Digital micromirror device
Kushida T, Tanaka K, Takahito A, Funatomi T, Mukaigawa Y (2019) Spatio-temporal phase disambiguation in depth sensing In: Proc. ICCP. https://doi.org/10.1109/iccphot.2019.8747338.
Inokuchi S, Sato K, Matsuda F (1984) Range imaging system for 3-D object recognition In: Proc. International Conference on Pattern Recognition, 806–808.. IEEE Computer Society Press.
Salvi J, Fernandez S, Pribanic T, Llado X (2010) A state of the art in structured light patterns for surface profilometry. Pattern Recog 43. https://doi.org/10.1016/j.patcog.2010.03.004.
Gupta M, Nayer S (2012) Micro phase shifting In: Proc. CVPR, 813–820.. IEEE. https://doi.org/10.1109/CVPR.2012.6247753.
Mirdehghan P, Chen W, Kutulakos KN (2018) Optimal structured light à la carte In: Proc. CVPR. https://doi.org/10.1109/cvpr.2018.00654.
Lange R, Seitz P (2001) Solid-state time-of-flight range camera. IEEE J Quantum Electron 37(3):390–397.
Yasutomi K, Usui T, Han S-m, Takasawa T, Kagawa K, Kawahito S (2016) A submillimeter range resolution time-of-flight. IEEE Trans Electron Devices 63(1):182–188.
Heide F, Xiao L, Kolb A, Hullin MB, Heidrich W (2014) Imaging in scattering media using correlation image sensors and sparse convolutional coding,. Opt Express 22(21):26338–50.
Kirmani A, Benedetti A, Chou PA (2013) Spumic: simultaneous phase unwrapping and multipath interference cancellation in time-of-flight cameras using spectral methods In: IEEE International Conference on Multimedia and Expo (ICME), 1–6. https://doi.org/10.1109/icme.2013.6607553.
Freedman D, Krupka E, Smolin Y, Leichter I, Schmidt M (2014) SRA: Fast Removal of General Multipath for ToF Sensors In: Proc. ECCV, 1–15. https://doi.org/10.1007/978-3-319-10590-1_16.
Qiao H, Lin J, Liu Y, Hullin MB, Dai Q (2015) Resolving transient time profile in ToF imaging via log-sum sparse regularization. Opt Lett 40(6):918–21.
Kadambi A, Schiel J, Raskar R (2016) Macroscopic interferometry: rethinking depth estimation with frequency-domain time-of-flight In: Proc. CVPR, 893–902. https://doi.org/10.1109/cvpr.2016.103.
Marco J, Hernandez Q, Muñoz A, Dong Y, Jarabo A, Kim MH, Tong X, Gutierrez D (2017) DeepTof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging. ACM Trans Graph 36(6):219–121912. https://doi.org/10.1145/3130800.3130884.
Tanaka K, Mukaigawa Y, Funatomi T, Kubo H, Matsushita Y, Yagi Y (2018) Material classification from time-of-flight distortions. IEEE TPAMI. https://doi.org/10.1109/tpami.2018.2869885.
Su S, Heide F, Wetzstein G, Heidrich W (2018) Deep end-to-end time-of-flight imaging In: Proc. CVPR. https://doi.org/10.1109/cvpr.2018.00668.
Velten A, Willwacher T, Gupta O, Veeraraghavan A, Bawendi MG, Raskar R (2012) Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nat Commun 3(745). https://doi.org/10.1038/ncomms1747.
Heide F, Hullin MB, Gregson J, Heidrich W (2013) Low-budget transient imaging using photonic mixer devices. ACM ToG 32(4):1.
Kitano K, Okamoto T, Tanaka K, Aoto T, Kubo H, Funatomi T, Mukaigawa Y (2017) Recovering temporal PSF using ToF camera with delayed light emission. IPSJ Trans Comput Vis Appl 9(15). https://doi.org/10.1186/s41074-017-0026-3.
Kadambi A, Whyte R, Bhandari A, Streeter L, Barsi C, Dorrington A, Raskar R (2013) Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles. ACM ToG 32(6):1–10.
O'Toole M, Heide F, Xiao L, Hullin MB, Heidrich W, Kutulakos KN (2014) Temporal frequency probing for 5D transient analysis of global light transport. ACM ToG 33(4):1–11.
O'Toole M, Heide F, Lindell D, Zang K, Diamond S, Wetzstein G (2017) Reconstructing transient images from single-photon sensors In: Proc. CVPR. https://doi.org/10.1109/cvpr.2017.246.
Naik N, Kadambi A, Rhemann C, Izadi S, Raskar R, Bing Kang S (2015) A light transport model for mitigating multipath interference in time-of-flight sensors In: Proc. CVPR, 73–81. https://doi.org/10.1109/cvpr.2015.7298602.
Nayar SK, Krishnan G, Grossberg MD, Raskar R (2006) Fast separation of direct and global components of a scene using high frequency illumination. ACM ToG 25(3):935–944.
Whyte R, Streeter L, Cree MJ, Dorrington AA (2015) Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods. Opt Eng 54(11):113109. https://doi.org/10.1117/1.OE.54.11.113109.
Agresti G, Zanuttigh P (2018) Combination of spatially-modulated ToF and structured light for MPI-free depth estimation In: ECCV Workshop on 3D Reconstruction in the Wild.. IEEE. https://doi.org/10.1007/978-3-030-11009-3_21.
Gupta M, Velten A, Nayar SK, Breitbach E (2018) What are optimal coding functions for time-of-flight imaging?. ACM ToG 37(2):13–11318. https://doi.org/10.1145/3152155.
Gutierrez-Barragan F, Reza S, Velten A, Gupta M (2019) Practical coding function design for time-of-flight imaging In: Proc. CVPR. https://doi.org/10.1109/cvpr.2019.00166.
Kadambi A, Taamazyan V, Shi B, Raskar R (2015) Polarized 3D: high-quality depth sensing with polarization cues In: Proc. ICCV, 3370–3378. https://doi.org/10.1109/iccv.2015.385.
Gkioulekas I, Levin A, Durand F, Zickler T (2015) Micron-scale light transport decomposition using interferometry. ACM ToG 34(4):37–13714.
Li F, Willomitzer F, Rangarajan P, Gupta M, Velten A, Cossairt O (2018) Sh-tof: micro resolution time-of-flight imaging with superheterodyne interferometry In: Proc. ICCP. https://doi.org/10.1109/iccphot.2018.8368473.
Maeda T, Kadambi A, Schechner YY, Raskar R (2018) Dynamic heterodyne interferometry In: Proc. ICCP.. IEEE. https://doi.org/10.1109/ICCPHOT.2018.8368471.
Sagawa R, Kawasaki H, Furukawa R, Kiyota S (2011) Dense one-shot 3D reconstruction by detecting continuous regions with parallel line projection In: Proc. ICCV. https://doi.org/10.1109/iccv.2011.6126460.
Sagawa R, Sakashita K, Kasuya N, Kawasaki H, Furukawa R, Yagi Y (2012) Grid-based active stereo with single-colored wave pattern for dense one-shot 3D scan In: 3DIMPVT, 363–370. https://doi.org/10.1109/3DIMPVT.2012.41.
Jayasuriya S, Pediredla A, Sivaramakrishnan S, Molnar A, Veeraraghavan A (2015) Depth fields: extending light field techniques to time-of-flight imaging In: 2015 International Conference on 3D Vision, 1–9. https://doi.org/10.1109/3DV.2015.8.
Heide F, Heidrich W, Hullin M, Wetzstein G (2015) Doppler time-of-flight imaging. ACM ToG 34(4):36–13611.
McCormac J, Handa A, Leutenegger S, Davison AJ (2017) SceneNet RGB-D: can 5M synthetic images beat generic ImageNet pre-training on indoor segmentation? In: Proc. ICCV. https://doi.org/10.1109/iccv.2017.292.
Zhang Z (2000) A flexible new technique for camera calibration. TPAMI 22:1330–1334. https://doi.org/10.1109/34.888718.
We thank all the people who gave us various insightful and constructive comments.
This work is partly supported by JST CREST JPMJCR1764 and JSPS Kaken grant JP18H03265 and JP18K19822.
Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, 630-0192, Japan
Takahiro Kushida, Kenichiro Tanaka, Takuya Funatomi & Yasuhiro Mukaigawa
University of Tsukuba, 1-1-1 Tennodai, Tsukuba, 305-8577, Japan
Takahito Aoto
Takahiro Kushida
Kenichiro Tanaka
Takuya Funatomi
Yasuhiro Mukaigawa
TK contributed to the concept, conducted experiments, and wrote the manuscript; KT and TA contributed to the concept and optical design and edited the manuscript; and TF and YM supervised the project and improved the representation. The authors reviewed and approved the final manuscript.
Correspondence to Takahiro Kushida.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Kushida, T., Tanaka, K., Aoto, T. et al. Phase disambiguation using spatio-temporally modulated illumination in depth sensing. IPSJ T Comput Vis Appl 12, 1 (2020). https://doi.org/10.1186/s41074-020-00063-x
Time-of-flight camera
Phase shifting
Computational photography | CommonCrawl |
Fourier transform of a special form of the error function
I would like to ask how to compute the Fourier transform $F(k)$ of the following function:
$\exp\left[-\frac{\eta^2}{8\sigma (0,t)}\right]\text{erf}\left(\frac{\eta}{\sqrt{8\sigma(0,t)}}\right)$
It can be seen that the derivative of $\text{erf}\left(\frac{\eta}{\sqrt{8\sigma(0,t)}}\right)$ is $\exp\left[-\frac{\eta^2}{8\sigma (0,t)}\right]$ times a constant. The problem may therefore reduce to computing the Fourier transform of $\frac{d F(\eta)^2}{d\eta}$, where $F(\eta)=\text{erf}\left(\frac{\eta}{\sqrt{8\sigma(0,t)}}\right)$. Could someone provide a solution to this calculation? If it cannot be found explicitly, I would also like to know how to express its Fourier transform $F(k)$ as a series expansion.
fourier-transform
will_cheuk
$\begingroup$ What is $\sigma(0,t)$ supposed to mean? is $\eta$ the variable you're supposed to be taking the FT with respect to? $\endgroup$ – Batman Sep 9 '17 at 5:37
$\begingroup$ We do the Fourier transformation on $\eta$ only. You can treat any function of $t$ as a constant. It may not be easy to get an explicit function; an expansion in terms of the Fourier variable up to order 3 is fine. $\endgroup$ – will_cheuk Sep 9 '17 at 13:15
Let $$ f(\eta)=\exp\left[-\frac{\eta^2}{8\sigma (0,t)}\right]\text{erf}\left(\frac{\eta}{\sqrt{8\sigma(0,t)}}\right)=\mathrm e^{-a^2\eta^2}\text{erf}\left(a\eta\right) $$ where $a=\frac{1}{\sqrt{8\sigma(0,t)}}$ and $\mathrm{erf}(x)=\frac{2}{\sqrt\pi}\int_0^x\mathrm e^{-u^2}\mathrm d u$.
We have $$ \mathcal F\{f\}(\omega)=F(\omega)=\int_{-\infty}^\infty f(\eta)\,\mathrm e^{-i\omega\eta}\mathrm d\eta=\int_{-\infty}^\infty \mathrm e^{-a^2\eta^2}\text{erf}\left(a\eta\right)\,\mathrm e^{-i\omega\eta}\mathrm d\eta $$ Putting $\xi=a\eta$ and completing the square we have $$ \begin{align} F(\omega)&=\frac{1}{a}\int_{-\infty}^\infty \text{erf}\left(\xi\right)\,\mathrm e^{-\xi^2}\mathrm e^{-i\frac{\omega}{a}\xi}\,\mathrm d\xi\\ &=\frac{1}{a}\mathrm e^{-\frac{\omega^2}{4a^2}}\int_{-\infty}^\infty \text{erf}\left(\xi\right)\,\mathrm e^{-(\xi+i\frac{\omega}{2a})^2}\,\mathrm d\xi\\ &=\frac{1}{a}\mathrm e^{-\frac{\omega^2}{4a^2}}\left[-\sqrt\pi\,\mathrm{erf}\left(i\frac{\omega}{2\sqrt 2\,a}\right)\right]\\ &=-\sqrt{8\pi\sigma}\,\mathrm e^{-2\omega^2\sigma}\,\mathrm{erf}\left(i\omega\sqrt \sigma\right) \end{align} $$ using, with $\alpha=1$ and $\beta=i\frac{\omega}{2a}$, $$ \int_{-\infty}^\infty \text{erf}\left(x\right)\,\mathrm e^{-(\alpha x+\beta)^2}\,\mathrm dx=-\frac{\sqrt\pi}{\alpha}\,\mathrm{erf}\left(\frac{\beta}{\sqrt{\alpha^2+1}}\right),\qquad\Re\{\alpha^2\}>-1 \tag{$\star$} $$
Using the imaginary error function $\mathrm{erfi}(z)=-i\,\mathrm{erf}(iz)$ we have
$$ F(\omega)=-i\sqrt{8\pi\sigma}\,\mathrm e^{-2\omega^2\sigma}\,\mathrm{erfi}\left(\omega\sqrt \sigma\right) $$
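A quick numerical check of this closed form (Python/SciPy; the value of $\sigma = \sigma(0,t)$ is chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfi

sigma = 0.3                        # arbitrary stand-in for sigma(0, t)
a = 1.0 / np.sqrt(8.0 * sigma)

def f(eta):
    return np.exp(-(a * eta) ** 2) * erf(a * eta)

for omega in (0.5, 1.0, 2.0):
    # f is real and odd, so F(omega) = -2i * int_0^oo f(eta) sin(omega eta) d eta;
    # the Gaussian factor makes the tail beyond eta = 50 negligible.
    im, _ = quad(lambda eta: f(eta) * np.sin(omega * eta), 0.0, 50.0)
    numeric = -2j * im
    closed = (-1j * np.sqrt(8 * np.pi * sigma) * np.exp(-2 * sigma * omega**2)
              * erfi(omega * np.sqrt(sigma)))
    print(omega, numeric, closed)   # the two columns agree
```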
Proof of $(\star)$
Let $$F(\beta)=\int_{-\infty}^\infty \mathrm e^{-(\alpha x+\beta)^2}\text{erf}(x)\mathrm dx$$ and differentiating, then integrating by parts (note $-2(\alpha x+\beta)\,\mathrm e^{-(\alpha x+\beta)^2}=\frac1\alpha\frac{\mathrm d}{\mathrm dx}\mathrm e^{-(\alpha x+\beta)^2}$), $$ \begin{align} F'(\beta)&=-2\int_{-\infty}^\infty (\alpha x+\beta)\mathrm e^{-(\alpha x+\beta)^2}\text{erf}(x)\mathrm dx \\&=\frac1\alpha\left[\mathrm e^{-(\alpha x+\beta)^2}\text{erf}(x) \right]_{-\infty}^\infty-\frac{2}{\alpha\sqrt{\pi}}\int_{-\infty}^\infty \mathrm e^{-(\alpha x+\beta)^2} \mathrm e^{-x^2}\mathrm dx\\ &=0-\frac{2}{\alpha\sqrt{\pi}}\sqrt{\frac{\pi}{\alpha^2+1}}\mathrm e^{-\frac{\beta^2}{\alpha^2+1}} \\ F'(\beta)&=-\frac{2}{\alpha\sqrt{\alpha^2+1}}\mathrm e^{-\frac{\beta^2}{\alpha^2+1}} \end{align} $$ and integrating this first-order ODE we have $$ F(\beta)=F(0) -\frac{2}{\alpha\sqrt{\alpha^2+1}}\int_0^\beta\mathrm e^{-\frac{\xi^2}{\alpha^2+1}}\mathrm d\xi=F(0)-\frac{\sqrt\pi}{\alpha}\text{erf}\left(\frac{\beta}{\sqrt{\alpha^2+1}}\right) $$ and observing that $F(0)=\int_{-\infty}^\infty \mathrm e^{-\alpha^2x^2}\text{erf}(x)\mathrm dx=0$ (the integrand is odd) we finally have $$ \int_{-\infty}^\infty \mathrm e^{-(\alpha x+\beta)^2}\text{erf}(x)\mathrm dx=-\frac{\sqrt\pi}{\alpha}\text{erf}\left(\frac{\beta}{\sqrt{\alpha^2+1}}\right) $$
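A corresponding spot-check of $(\star)$, including the $\frac1\alpha$ factor, over a few real $(\alpha,\beta)$ pairs:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

for alpha, beta in [(1.0, 0.7), (2.0, -0.3), (0.5, 1.2)]:
    lhs, _ = quad(lambda x: np.exp(-(alpha * x + beta) ** 2) * erf(x),
                  -np.inf, np.inf)
    rhs = -(np.sqrt(np.pi) / alpha) * erf(beta / np.sqrt(alpha**2 + 1))
    assert np.isclose(lhs, rhs), (alpha, beta, lhs, rhs)
print("(star) verified numerically")
```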
alexjo
$\begingroup$ Thanks. But now I would like to know how to show that conditional equality. I tried breaking it into half intervals and integrating, which should involve exponential functions only! $\endgroup$ – will_cheuk Sep 13 '17 at 4:22
$\begingroup$ I'll add the details for the integral. $\endgroup$ – alexjo Sep 13 '17 at 10:36
$\begingroup$ But you didn't like the answer...did you? $\endgroup$ – alexjo Sep 19 '17 at 10:15
$\begingroup$ That's fine, I like it. But I think there should be a constant $\frac{1}{\alpha}$ missing in the final answer $\endgroup$ – will_cheuk Sep 22 '17 at 4:50
Regional disparities and influencing factors of high quality medical resources distribution in China
Lei Yuan1,2,
Jing Cao3,
Dong Wang1,
Dan Yu1,2,
Ge Liu1 &
Zhaoxin Qian1,2
International Journal for Equity in Health volume 22, Article number: 8 (2023)
With the gradual increase of residents' income and the continuous improvement of medical security system, people's demand for pursuing higher quality and better medical and health services has been released. However, so far little research has been published on China's high quality medical resources (HQMR). This study aims to understand the spatiotemporal variation trend of HQMR from 2006 to 2020, analyze regional disparity of HQMR in 2020, and further explore the main factors influencing the distribution of HQMR in China.
The study selected Class III level A hospitals (the highest level medical institutions in China) to represent HQMR. Descriptive statistical methods were used to address the changes in the distribution of HQMR from 2006 to 2020. Lorentz curve, Gini coefficient (G), Theil index (T) and High-quality health resource density index (HHRDI) were used to calculate the degree of inequity. The geographical detector method was used to reveal the key factors influencing the distribution of HQMR.
The total amount of HQMR in China had increased year by year, from 647 Class III level A hospitals in 2006 to 1580 in 2020. In 2020, G for HQMR by population was 0.166, while by geographic area was 0.614. T was consistent with the results for G, and intra-regional contribution rates were higher than inter-regional contribution rates. HHRDI showed that Beijing, Shanghai, and Tianjin had the highest allocated amounts of HQMR. The results of the geographical detector showed that total health costs, government health expenditure, size of resident populations, GDP, number of medical colleges had a significant impact on the spatial distribution of HQMR and the q values were 0.813, 0.781, 0.719, 0.661, 0.492 respectively. There was an interaction between the influencing factors.
China's total HQMR is growing rapidly but is relatively inadequate. The distribution of HQMR by population is better than by geography, and the distribution by geography is less equitable. Population size and geographical area both need to be taken into account when formulating policies, rather than simply increasing the number of HQMR.
Health, as a basic human demand, is the basis for achieving comprehensive human development [1]. Medical resources are an important part of public services, and the equitable allocation of medical resources not only affects the health level of residents, but is also closely related to the healthy and sustainable development of human society [2]. The World Health Organization points out that equity in health services means that members of society should have demand-oriented access to health services, rather than depending on factors such as ethnicity, social status, income level, and religious beliefs [3]. However, inequitable allocation of health resources is currently a global problem, especially in developing countries [4]. In the 2030 Agenda for Sustainable Development, the United Nations has clearly identified "ensuring universal access to health and health care services and achieving universal health coverage" as the main goal.
Since the founding of the People's Republic of China, China's medical and health care has made great progress. However, the imbalance of medical services between urban and rural areas and among different regions is still very prominent [3, 5,6,7]. In order to solve the problem of difficulties and high expenses in medical care, China launched a new health care system in 2009, with the goal of "basic medical services for all" and the concept of "providing the basic medical and health system to all people as a public product" [8, 9]. Strengthening the primary care system is a priority of new medical reform in China. The capacity of service delivery at primary care institutions has seen a significant improvement, and the infrastructure of community health centers in cities and township health centers in rural areas has been greatly optimized. Almost every rural town has at least one primary health care facility [10].
With the gradual increase of residents' income and the continuous improvement of the medical insurance system, primary health care services can no longer fully meet the needs of Chinese people, and people's demand for pursuing higher quality and better medical and health services has been released [11]. Due to the uneven distribution of HQMR, people must move across regions to access higher-quality medical and health services. This further exacerbates the social problems of difficulty and high cost of getting medical treatment [12]. Thus, the Chinese government released the "Health China 2030 Plan" in 2016, which proposed to "basically achieve a balanced allocation of high-quality medical and health resources" [13]. In March 2021, China's "Outline of the 14th Five-Year Plan (2021–2025) for National Economic and Social Development and the Long-Range Objectives Through the Year 2035" proposed to accelerate the expansion of HQMR and a balanced regional layout of such resources among different regions in China [14].
HQMR refers to high-quality resources in the overall medical service system, characterized by advanced medical techniques, good medical facilities and services, and standardized management. Under current Chinese health care policy, hospitals are divided into three classes, and each class is divided into levels A, B, and C [15]. Class III level A hospitals are the highest-level medical institutions in China's current medical service system, with better medical service and management, medical quality and safety, technical level, and efficiency. Therefore, this study selects Class III level A hospitals (high-level hospitals) as the study subjects to represent HQMR.
At present, research on medical fairness theory has made great progress and formed a series of theories such as the utilitarian ethical doctrine, egalitarian distribution theory, radical liberal theory, and communitarian theory [16]. A lot of research has been conducted in the fields of medical services and health, accessibility of medical facilities, inequity in medical services, and distribution of medical resources and their influencing factors [17, 18]. An Iranian study assessed the geographical distribution of hospitals and the inequality of hospital beds against the socioeconomic status of residents of five metropolitan cities [19]. A study in Mongolia compared urban and rural areas using the Mann–Whitney U test and further investigated the distribution equality of physicians, nurses, and hospital beds throughout Mongolia using the Gini coefficient [20]. Huimin Yu et al. analyzed the equity of physician locations in 31 provincial administrative regions in China from the perspective of distribution by both population and service area [21]. Baoguo Shi et al. explored the allocation patterns of elite hospitals in China and their influencing factors using a linear regression model [11]. Yue Zhang et al. analyzed the equity and efficiency of primary health care resource allocation in mainland China by the Lorenz curve, Gini coefficient, Theil index and health resource density index [22]. In addition, there are also studies that analyzed differences and inequalities in the regional distribution of health resources in selected Chinese provinces [23,24,25,26].
In general, most studies have focused on the regional differences and equity of overall or basic medical resources; fewer studies address the regional differences and influencing factors of HQMR, and none examine the effects of interactions between different influencing factors on the distribution of HQMR. Therefore, this study assesses the equity of the current allocation of HQMR in China by analyzing the regional differences and influencing factors of HQMR, so as to provide a reference for HQMR allocation in China and other regions.
Data on the provincial high-level hospitals for the years 2006 to 2020 and factors affecting the distribution of HQMR were obtained either from the China Health Statistical Yearbook published by the National Health Commission or the China Statistics Yearbook published by the National Bureau of Statistics.
The layout of HQMR is influenced by diverse and complex factors, including demographic factors, medical insurance, education level, economic development, and medical expenditures. The indicators characterizing demographic factors include the size of the resident population, population density, and the proportion of urban population in the total population. The indicators of medical insurance include medical insurance density and medical insurance depth. The indicators of education level include the proportion of the population with a college degree or above in the total population and the number of medical colleges. The indicators characterizing economic development include GDP and disposable income. The indicators reflecting medical expenditures include residents' health care expenditure, total health costs, and government health expenditure. The details are shown in Table 1.
Table 1 Explanatory variables in the study
China's land area is approximately 9.6 million square kilometers and is divided into 34 provincial administrative regions. This study selected the 31 provinces in mainland China, excluding the Hong Kong and Macao special administrative regions and Taiwan province due to inconsistent statistical calibers and data collection. According to economic factors and geographical locations, the 31 provinces are divided into 6 administrative regions as follows. North: Beijing, Tianjin, Hebei, Shanxi, Inner Mongolia. Northeast: Heilongjiang, Jilin, Liaoning. East: Shanghai, Jiangsu, Zhejiang, Anhui, Fujian, Jiangxi, Shandong. Central South: Henan, Hubei, Hunan, Guangdong, Guangxi, Hainan. Southwest: Chongqing, Sichuan, Guizhou, Yunnan, Tibet. Northwest: Shaanxi, Gansu, Qinghai, Ningxia, Xinjiang. The geographical locations of the 6 administrative regions are shown in Fig. 1.
The distribution of 6 administrative regions in China
Descriptive statistical methods were used to address changes in the distribution of HQMR from 2006 to 2020. The ArcGIS 10.5 software was used to draw the distribution map of China's HQMR in 2006, 2011, 2016, and 2020. The allocation inequity of high-level hospitals in 2020 based on both population and geographical distribution was calculated by the Lorentz curve, Gini coefficient, Theil index and High-quality health resource density index. The geographical detector method was used to analyze the key factors influencing the distribution of HQMR.
The Lorenz curve
The Lorenz curve is a common tool to evaluate the equity of health and medical resources allocation in the field of public health [27]. The bending degree of the Lorenz curve can reflect the inequality of resource allocation. A 45° line indicates absolute equity. If the Lorenz Curve is closer to the absolute equity line, the allocation of health resources is more equitable [28]. In this study, the number of high-level hospitals per capita (area) was in ascending order, with the X-axis representing the cumulative percentage of the population or area, and the Y-axis representing the cumulative percentage of the high-level hospitals.
The Gini coefficient (G)
G, which is derived from the Lorenz curve, is usually used to assess the equity of income and resource allocation [29, 30]. G ranges from 0 to 1. "0" means the evenest distribution of medical and health resources while "1" means the most concentrated and inequitable. 0 < G < 0.2 indicates that the distribution of medical resources is of absolute equity; 0.2 ≤ G < 0.3, relative equity; 0.3 ≤ G < 0.4, proper equity; 0.4 ≤ G < 0.5, relative inequity; 0.5 ≤ G < 1, severe inequity. The formula of the G is as follows.
$$G=1-\sum\limits_{i=1}^{n}\left({X}_{i+1}-{X}_{i}\right)\left({Y}_{i+1}+{Y}_{i}\right)$$
In the formula, \({X}_{i}\) illustrates the cumulative percentage of the population and area in the ith district., and \({Y}_{i}\) illustrates the cumulative percentage of high-level hospitals in the ith district. n is the number of 31 provinces.
Theil Index (T)
T can be used to analyze the source of inequity. The advantage of T is that it measures the contribution of both intra- and inter-regional differences to overall inequality. T ranges from 0 to 1. Generally, the smaller the T value, the more balanced the resource distribution. T calculated the formula as follows:
$$T=\sum\limits_{i=1}^{n}{P}_{i}\times log\left(\frac{\overline{\mathrm{R}} }{{\overline{R} }_{i}}\right)$$
In the formula, \({P}_{i}\) is the proportion of province i's population (or area) in the national total; \(\overline{R}\) is the national average number of high-level hospitals per capita (or per unit area); and \({\overline{R} }_{i}\) is the corresponding value for province i. n is the number of provinces (31).
G and T were calculated based on population and area in this study.
T can be divided into \({T}_{inter}\) and \({T}_{intra}\), and the calculation of \({T}_{inter}\) and \({T}_{intra}\) is as follows [31]:
$$T={T}_{inter}{ + T}_{intra}$$
$${T}_{inter}=\sum\limits_{j=1}^{m}{P}_{j}\times log\left(\frac{{P}_{j}}{{Y}_{j}}\right)$$
$${T}_{intra}=\sum\limits_{j=1}^{m}{P}_{j}\times {T}_{j}$$
\({P}_{j}\): proportion of the six groups' (North, Northeast, East, Central South, Southwest and Northwest regions) population (area) accounting for the overall population of China.
\({Y}_{j}\): proportion of high-level hospitals owned by the six groups (North, Northeast, East, Central South, Southwest and Northwest regions) accounting for the total number of high-level hospitals nationwide.
\({T}_{j}\): T of the six groups (North, Northeast, East, Central South, Southwest and Northwest regions).
The contribution rates of the intra- and inter-regional components can be calculated as the ratios \({T}_{intra}\)/T and \({T}_{inter}\)/T, respectively [32].
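The decomposition can be implemented directly from these formulas. A minimal sketch (Python; the group labels and toy values are hypothetical), with a check that T = T_inter + T_intra:

```python
import numpy as np

def theil(resources, weights):
    """T = sum_i P_i log(P_i / Y_i) with weight shares P_i and resource shares Y_i
    (equivalent to sum_i P_i log(Rbar / Rbar_i))."""
    P = weights / weights.sum()
    Y = resources / resources.sum()
    return float(np.sum(P * np.log(P / Y)))

def theil_decomposition(resources, weights, groups):
    """Split T into its inter-group and intra-group components."""
    P_all = weights / weights.sum()
    Y_all = resources / resources.sum()
    T_inter = T_intra = 0.0
    for g in np.unique(groups):
        m = groups == g
        P_j, Y_j = P_all[m].sum(), Y_all[m].sum()
        T_inter += P_j * np.log(P_j / Y_j)                 # between-group term
        T_intra += P_j * theil(resources[m], weights[m])   # P_j * T_j
    return T_inter, T_intra

# Toy check that T = T_inter + T_intra.
res = np.array([122.0, 108.0, 105.0, 51.0, 9.0, 6.0])
pop = np.array([126.0, 101.0, 84.0, 45.0, 3.6, 7.2])
grp = np.array(["East", "East", "Southwest", "Central South", "Southwest", "Northwest"])
T = theil(res, pop)
T_inter, T_intra = theil_decomposition(res, pop, grp)
assert np.isclose(T, T_inter + T_intra)
print(T, T_inter / T, T_intra / T)   # overall T and the two contribution rates
```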
High-quality Health Resource Density Index (HHRDI)
The health resource density index comprehensively considers the influencing factors of population and geographical area, and can better reflect the comprehensive level of the distribution of health resources by population and geographical area [22]. Therefore, this study refers to the calculation principle of the health resource density index to establish a high-quality health resource density index (HHRDI). The calculation formula is:
$$\mathrm{HHRDI}=\frac{{HHR}_{i}}{\sqrt{{A}_{i}\times {P}_{i}}}$$
In the formula, \(HH{R}_{i}\): HQMR quantity of the ith region. \({A}_{i}\): geographical area of the ith region. \({P}_{i}\): population of the ith region.
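A minimal sketch of the HHRDI calculation (Python; the units — area in 10,000 km² and population in millions — are inferred from the reported values, and the Beijing-like numbers are approximate):

```python
import numpy as np

def hhrdi(hospitals, area, population):
    # HHRDI_i = HHR_i / sqrt(A_i * P_i), a geometric compromise between
    # per-area and per-capita density.
    return hospitals / np.sqrt(area * population)

# Beijing-like toy numbers: ~55 high-level hospitals, 1.64 * 10^4 km^2,
# 21.9 million residents.
print(round(hhrdi(55.0, 1.64, 21.9), 2))   # ~9.18, close to the reported 9.17
```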
Geographical Detector
The geographical detector is a set of statistical methods that detect spatial heterogeneity and reveal the driving forces behind it [33]. The geographically weighted regression is a linear model, while the geographical detector is a nonlinear model. The advantage of the geographical detector is that it can quantify the interaction force between two independent variables and two dependent variables without considering multicollinearity [34]. The geographical detector is widely used to explore the formation mechanism of the spatial distribution of geographic objects, including risk detection, factor detection, ecological detection, and interactive detection [35].
Factor detection and interactive detection methods were used in this study. The factor detection mainly measures the influence of each factor on the HQMR; the interactive detection mainly analyzes the influence of the interaction between the factors on the distribution of HQMR, that is, the combined effect of the two factors—whether it will increase or decrease the influence on HQMR. The factor detection is calculated as follows:
$$q=1-\frac{\sum_{m=1}^{L}{{\sigma }_{m}^{2}N}_{m}}{\mathrm{N}{\sigma }^{2}}$$
In the formula, the value range of q is [0, 1] and the larger the q value, the stronger the explanatory power of the independent variable X for the dependent variable Y; m = 1, …, L: the strata of the factor X and the variable Y; Nm and N: the number of units in stratum m and in the whole area, respectively; \({\sigma }_{m}^{2}\) and \({\sigma }^{2}\): the variance of the Y values in stratum m and in the whole area, respectively.
Interaction detection is used to detect whether the interaction of two influences enhances, weakens, or is independent in explaining the spatial variation on the dependent variable. It is discriminated by comparing the magnitude of q(X1), q(X2) and q(X1 ∩ X2). The specific classification is shown in Table 2.
Table 2 The types of factor interaction expression
Because the geographical detector operates on categorical variables, the data for the potential influencing factors were first discretized using the natural breaks classification function of ArcGIS and divided into five categories in descending order of value. The Excel-based geographical detector software was used to perform the analyses. p < 0.05 was considered statistically significant.
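A minimal sketch of the factor detector and the interaction detector (Python; the q statistic follows the formula above, the interaction is computed on the overlay of two stratifications, and the toy data are hypothetical):

```python
import numpy as np

def q_statistic(y, strata):
    """Factor detector: q = 1 - sum_m N_m var_m / (N var), where the strata come
    from one discretized explanatory variable (e.g., five natural-breaks classes)."""
    y = np.asarray(y, dtype=float)
    strata = np.asarray(strata)
    within = sum((strata == s).sum() * y[strata == s].var()
                 for s in np.unique(strata))
    return 1.0 - within / (y.size * y.var())

def q_interaction(y, s1, s2):
    """Interaction detector: q of the overlay (intersection) of two stratifications."""
    combined = np.array([f"{a}|{b}" for a, b in zip(s1, s2)])
    return q_statistic(y, combined)

# Toy usage with hypothetical class labels for two factors.
y = np.array([122.0, 108.0, 105.0, 51.0, 30.0, 9.0, 6.0])   # hospitals per province
gdp_class = np.array([5, 5, 4, 3, 2, 1, 1])
pop_class = np.array([5, 4, 5, 3, 2, 1, 2])
print(q_statistic(y, gdp_class), q_interaction(y, gdp_class, pop_class))
```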
Spatiotemporal variation of HQMR
As shown in Table 3, the total amount of HQMR in China had been increasing rapidly in recent years. The number of high-level hospitals increased from 647 in 2006 to 1,580 in 2020; the growth rate (GR) was 144.2%, and the average annual increase was 62.2 hospitals. The growth rates of the Southwest, Northwest, and East were higher than the national level, with GRs from 2006 to 2020 of 314.29%, 206.67%, and 181.82%, respectively.
Table 3 Quantity of HQMR in China from 2006 to 2020
Based on the provincial high-level hospitals data in 2006, 2011, 2016, and 2020, the Natural Breaks method was used to divide the 31 provincial high-level hospitals into five categories, namely, Low Level Areas (1–16), Relative Low Level Areas (17–41), Medium Level Areas (42–56), Relative High Level Areas (57–82) and High-Level Areas (83–122). As shown in Fig. 2, from 2006 to 2020, the supply level of HQMR in China increased rapidly; the number of Low Level Areas decreased from 14 provinces to three provinces; the number of Relative Low Level areas decreased from 16 provinces to eight provinces; the number of Medium Level areas increased from two provinces to ten provinces; six provinces developed into Relative High Level Areas, and three provinces including Guangdong, Sichuan, and Shandong entered High Level Areas.
Spatial distribution of HQMR changes in China. A distribution of high-level hospitals in China in 2006; B distribution of high-level hospitals in China in 2011; C distribution of high-level hospitals in China in 2016; D distribution of high-level hospitals in China in 2020
HQMR distribution in 2020 in China
In 2020, there were 1,580 high-level hospitals in China, with an average of 51 in each province. From a regional perspective, the number of high-level hospitals in East (434) and Central South (386) regions was higher than that in other regions, followed by Southwest (232), North (222), Northeast (168), and Northwest (138). From the perspective of provinces, Guangdong, Shandong, and Sichuan have more than 100 high-level hospitals, 122, 108, and 105 respectively. The number of high-level hospitals of 15 provinces is lower than the national average, and Tibet and Ningxia have the least, 9 and 6 respectively. See Table 4 for details.
Table 4 Distribution of HQMR in China in 2020
The national average number of high-level hospitals per million population in 2020 was 1.12. From a regional perspective, the number of high-level hospitals per million population of Northeast was the highest (1.71), followed by Northwest (1.33), North (1.31), and Southwest (1.13), while East and Central South regions had 1.02 and 0.94 respectively, lower than the national average. In terms of provinces, Beijing has the most high-level hospitals (2.51), whereas Henan province has the least (0.68). The number of high-level hospitals per million population of 13 provinces including Henan, Hebei, Guizhou, and Ningxia was lower than the national average.
The national average number of high-level hospitals per 10,000 km2 in 2020 was 1.65. From a regional perspective, the number of high-level hospitals per 10,000 km2 of East was the highest (5.41), followed by Central South (3.80) and Northeast (2.16), while North (1.42), Southwest (0.99), and Northwest (0.45) were lower than the national average. From the perspective of provinces, the top five provinces in terms of the number of high-level hospitals per 10,000 km2 were Shanghai, Beijing, Tianjin, Jiangsu, and Shandong, while nine provinces including Inner Mongolia, Xinjiang, Qinghai, and Tibet were lower than the national average.
Equity of HQMR distribution in China in 2020
Equity based on Lorenz curve and G
Figure 3 shows that the Lorenz curve by population distribution was close to the absolute equity line, and the calculated G was 0.166, indicating that HQMR was in an absolutely fair state by population in 2020. The Lorenz curve by geographical area was far from the absolute equity line, and the calculated G was 0.614, indicating that the distribution of HQMR by geographical area in 2020 was severely inequal.
Lorenz curve of the distribution of HQMR. A the Lorenz curve by population distribution; B The Lorenz curve by geographical distribution
To see whether the inequality of the HQMR distribution has improved, we proceeded with a longitudinal analysis of changes in inequality over time. Figure 4 shows the change in the G of China's HQMR from 2006 to 2020. The G by population dropped from 0.290 in 2006 to 0.166 in 2020, which means that the distribution of HQMR by population had improved from a state of relative equity to absolute equity. The G by geographic area decreased from 0.694 in 2006 to 0.614 in 2020, showing an overall decreasing trend and suggesting a certain degree of improvement in HQMR equity. However, the G were all greater than 0.6, indicating that the distribution of HQMR by geographic area was still in a state of severe inequity.
Inequality trends based on the Gini coefficient in China from 2006 to 2020
Figure 5 shows the Lorenz curves and G of the six regions by geographical area. Lorenz curves of Northeast, East, and South Central were relatively close to the absolute equity line, with Gini coefficients of 0.202, 0.251, and 0.224 respectively, which was a relatively equitable state. The Lorenz curves of North, Southwest, and Northwest were far from the absolute equity line, with Gini coefficients of 0.655, 0.546, and 0.428 respectively, which was an inequitable state. It suggests that the geographical inequity of HQMR distribution in China in 2020 may mainly come from North, Southwest, and Northwest.
Lorenz curves and Gini coefficients of HQMR by geographic area in different regions in 2020. A Lorenz curves and Gini coefficients of HQMR by geographic area in North in 2020; B Lorenz curves and Gini coefficients of HQMR by geographic area in Northeast in 2020; C Lorenz curves and Gini coefficients of HQMR by geographic area in East in 2020; D Lorenz curves and Gini coefficients of HQMR by geographic area in Central South in 2020; E Lorenz curves and Gini coefficients of HQMR by geographic area in Southwest in 2020; F Lorenz curves and Gini coefficients of HQMR by geographic area in Northwest in 2020
Equity based on T
The T by regional population of HQMR was 0.022, and the contribution rates of inter- and intra-region were 32% and 68%, respectively. The T by regional area of HQMR was 0.315, and the contribution rates of inter- and intra-region were 48% and 52%, respectively. The T of HQMR showed the same trend as the G and Lorenz curve, and the inequality was mainly attributed to intra-regional differences.
HHRDI
As shown in Table 4, the national HHRDI in 2020 was 1.36. HHRDI of 23 provinces was higher than the national average. Among them, Beijing, Shanghai, Tianjin were the top three, with 9.17, 8.06, 7.62, respectively. HHRDI of 8 western provinces including Yunnan, Guizhou, Gansu, Ningxia, Inner Mongolia, Xinjiang, Qinghai and Tibet provinces were lower than the national average, with 1.26, 1.19, 0.89, 0.87, 0.77, 0.72, 0.49, 0.43 respectively.
Analysis of influencing factors
The explanatory power (q-value) of each factor on the spatial heterogeneity of HQMR in China in 2020 and the significant P-values are shown in Table 5. Based on the q value, the factor detector reveals the extent to which a factor explains the spatial distribution of HQMR. The q values are sorted in the following order: x11(0.813) > x12(0.781) > x1(0.719) > x8(0.661) > x7(0.492) > x2(0.393) > x9(0.228) > x5(0.181) > x10(0.179) > x3(0.158) > x6(0.055) > x4(0.02). Five factors passed the significance test at the 5% level, total health costs (x11) and government health expenditure(x12), size of resident populations(x1), GDP(x8), number of medical colleges(x7), respectively, indicating that these five factors made significant contribution to the spatial distribution of HQMR, and their explanatory power reached 81.3%,78.1%, 71.9%, 66.1%, 49.2%, respectively.
Table 5 Results of the factor detection
The interaction detection results between factors are shown in Table 6. All interaction types were "Enhance, nonlinear", indicating that there was an interaction between the influencing factors and that the explanatory power of any two interacting factors was greater than that of a single factor. This means that the distribution of HQMR in China is not driven by any single factor but results from a combination of factors. The interaction between medical insurance density (x4) and total health costs (x11) had the largest impact, at 0.980. In addition, a phenomenon worth noting was that although population density (x2), medical insurance density (x4), medical insurance depth (x5), and the proportion of the population with a college degree or above (x6) alone had no significant influence on the distribution of HQMR, they had a large effect through their interactions with other factors. In particular, the interactive influences of the factor pairs x6 ∩ x11, x1 ∩ x4, x2 ∩ x11, and x5 ∩ x11 were greater than 0.90, which indicates that these factors cannot be neglected in the development and allocation of HQMR.
Table 6 Results of the interactive detection
Overall, the geographic distribution of HQMR in China in 2020 was the result of a combination of factors, of which five factors, including total health costs, government health expenditure, size of resident populations, GDP, and number of medical colleges had a direct and significant impact on the distribution.
This study describes the changes in China's HQMR from 2006 to 2020 and explores regional disparities of HQMR and its determinants in 2020, and summarizes four major findings.
First, the total amount of HQMR in China has increased rapidly in recent years, but the demand for quality resources is also increasing. The results of this study show that between 2006 and 2020, the number of high-level hospitals increased from 647 to 1580, with a growth rate of 144.2%; the number of high-level hospitals per million population increased from 0.49 to 1.12; and the average number of high-level hospitals per 10,000 km2 increased from 0.68 to 1.65. The growth rate of total HQMR is higher than the population growth rate, which means that residents' access to quality medical services is increasing [36]. However, as residents' economic status and living standards continue to improve and the medical insurance system continues to develop, people's demand for higher-quality and better healthcare services is also growing rapidly. In addition, Chinese residents lack trust in the standard of care in primary health care institutions and are willing to pay more and spend more time seeking care in high-level hospitals [37]. According to data from the 2020 China Health Statistics Yearbook, from 2015 to 2019 the bed utilization rate of tertiary general hospitals affiliated with the National Health Commission increased from 102.1% to 106.3%, and the bed utilization rate of tertiary general hospitals at the provincial level was about 100%, while over the same period the bed utilization rates of secondary and primary hospitals decreased from 84.1% to 81.6% and from 58.8% to 54.7%, respectively. The efficiency of medical resource utilization indirectly reflects patients' demand for HQMR.
Second, we found that there are obvious regional differences in the distribution of HQMR in China. Regionally, East and Central South regions have more high-level hospitals, with a number of 434 and 386 respectively, much higher than the other four regions. Other similar studies have also found significant regional differences in China's health care resources, with significantly higher-quality physicians and health care resources in the east than in the west [3, 38, 39]. The three provinces with the highest HHRDI were Beijing, Shanghai, and Tianjin, which were significantly higher than the national average. Due to the richness of HQMR in Beijing and Shanghai, a large number of patients from other parts of China have long been attracted to "cross-regional access to medical care ", with 37.21% and 40.12% of inpatients in tertiary hospitals in Beijing and Shanghai, respectively, coming from outside the region in 2019 [40]. According to the China National Medical Service and Quality Safety Report, the top 5 provinces with the highest outflow ratio in 2019 were Tibet, Anhui, Inner Mongolia, Hebei and Gansu, which are mainly in the central and western regions, with outflow ratios of 27.87%, 18.38%, 17.02%, 14.09% and 11.91%, respectively. The top 5 provinces for patient inflow were Shanghai, Beijing, Jiangsu, Zhejiang and Guangdong, and the inflow provinces were basically concentrated in the eastern regions with developed HQMR. As a result, the problem of difficulty and high cost of getting medical treatment is further exacerbated by the large number of patients chasing HQMR and health services across regions [12, 41].
Third, the inequality of HQMR is mainly reflected in the geographic distribution rather than population. The Lorenz curve and G results of this study show that the allocation of HQMR by population distribution is in an equitable state, and the disparity in the allocation of HQMR by geographic area is huge. The Chinese government has issued numerous documents to optimize the allocation of health resources. When HQMR is limited, these documents recommend prioritizing the allocation of HQMR based on population [31]. Thus, the equity of resource allocation based on population size is significantly better than that based on geographic area. Similar results have been found in other studies [42,43,44]. Equity in health care services includes not only population equity but also geographic equity [38]. The geographic accessibility of health care services is closely related to population health outcomes [45]. Therefore, it is reasonable to recommend that both demographic and geographic factors should be taken into account when the government makes health planning [46]. At present, health resources especially HQMS should be more allocated to economically underdeveloped and remote regions to gradually improve geographic equity and promote balanced regional development.
Finally, the results of the analysis of influencing factors proved that size of resident populations, number of medical colleges, GDP, total health costs and government health expenditure are the key factors influencing the allocation of HQMR in China. Previous studies also found that the resident population has a positive effect on the allocation of healthcare resources, probably because the larger the size of the resident population, the greater the demand for healthcare services will be, which is one of the factors considered by the government in allocating HQMR [47]. The number of medical colleges is also an important factor in the distribution of HQMR in China. This is mainly because in China, medical colleges usually have one or several affiliated hospitals, which serve as internship sites for medical schools to train clinicians and to absorb outstanding graduates to stay in the hospitals. Therefore, these affiliated hospitals are generally the best hospitals in the region, which in turn creates a concentration of HQMR [11]. GDP is an important indicator of a country's or region's economic status and development level. It is generally believed that the economic development of a region can provide strong support for medical health expenditure. Governments in better-off regions are able to afford to invest in HQMR, while poorer areas cannot. In addition, Liu W et al. also found that cities with better economic conditions are easy to attract HQMR [3]. This may be because regions with better economic conditions have more growth opportunities and higher incomes, which will attract more health professionals to employment. The total health cost is the total monetary amount of health resources of a country or region raised from the whole society to carry out the health service activities in a certain period, which can reflect the degree of emphasis on health care under certain economic conditions and cost burden levels [48]. Due to the high level of medical services provided by high-level hospitals, the price of HQMR services is higher than that of basic medical services, and the cost of medical expenses is also higher. Governments can influence health outcomes in a country or region in many ways, such as by developing health programs and increasing health investments [49]. Huanhuan Jia et al. found that the Chinese government plays a leading role in health and has a crucial influence on health development [50]. In China, government-established public medical institutions dominate the healthcare system, so government health expenditure play a key role in expanding HQMR.
In order to enable the majority of people to enjoy quality medical and health services close to their homes and reduce the phenomenon of cross-regional access to medical care, there are a few suggestions. On the one hand, China needs to continue to expand the total amount of HQMR to solve the problem of "inadequate" development. Measures include continually enhancing the sources of financial investment in the medical and health fields and encouraging social capital to support and participate in medical care, and guiding social forces to improve medical facilities and equipment [40]. On the other hand, it is more important and urgent to accelerate the balanced distribution of HQMR in China. First, China's government can promote the horizontal flow of HQMR, and guide the layout of HQMR to areas with weak medical service capacity and huge public demand for medical services. For example, the central government ought to select high-level medical institutions from areas rich in HQMR, such as Beijing and Shanghai, and encourage these hospitals to build regional medical centers in areas with high patient outflow and relatively weak medical resources [51]. Furthermore, the local government can introduce experienced and excellent medical personnel to work in economically underdeveloped and remote regions by giving generous subsidies and improving their social status. Second, it is necessary to promote the vertical flow of HQMR. Through various forms of medical consortia, the government could cultivate a number of medical groups with obvious brand advantages and provide high-level services across regions, and promote the grouping and branding of HQMR [13]. Third, the full use of "Internet + Medical", artificial intelligence, big data, telemedicine, and other advanced technologies expands the service scope of advantageous medical resources [52]. Internet hospitals offer convenient outpatient delivery regardless of the patients' distance from the hospital [53]. In certain high-income countries such as the United Kingdom, the United States, Japan, the use of the Internet for video consultation or health advice for patients has helped alleviate the shortage of health resources to some extent [54, 55]. Some research shows that artificial intelligence technologies, such as virtual AI and telemedicine, are expected to help China overcome current limitations in the allocation of healthcare resources and alleviate the pressures associated with access to high-quality medical care [56].
This study has several limitations. Firstly, the distribution of HQMR is measured by the number of high-level hospitals, without considering the numbers of beds and medical personnel in those hospitals. In addition, the analysis of influencing factors used only social and economic data for 2020, so trends in how these factors affect HQMR cannot be fully demonstrated. Finally, because of limited data availability, we only discussed the distribution of HQMR at the provincial level; distribution at the prefecture and county levels needs to be explored in future studies.
China's total HQMR is growing rapidly but remains relatively inadequate. The distribution of HQMR by population is better than its distribution by geography, which is less equitable. Both population size and geographical area need to be taken into account when formulating policies. To improve all citizens' access to high-quality medical services, we recommend accelerating the expansion and balanced layout of HQMR and promoting coordinated regional development, rather than simply increasing the total number of HQMR.
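The population-versus-geography contrast in this conclusion can be made concrete with the two equity measures used throughout the paper, the Gini coefficient and the Theil index, computed once against population shares and once against area shares. The following Python sketch is illustrative only: the helper functions and all numbers are ours, not the authors' data.

import numpy as np

def gini(resource, weight):
    """Gini from a Lorenz curve of resources ranked by resource-per-weight."""
    r, w = np.asarray(resource, dtype=float), np.asarray(weight, dtype=float)
    order = np.argsort(r / w)                 # rank units by density
    r, w = r[order], w[order]
    cum_w = np.concatenate(([0.0], np.cumsum(w) / w.sum()))
    cum_r = np.concatenate(([0.0], np.cumsum(r) / r.sum()))
    # Gini = 1 - 2 * (area under the Lorenz curve), trapezoid rule
    return 1.0 - np.sum((cum_w[1:] - cum_w[:-1]) * (cum_r[1:] + cum_r[:-1]))

def theil(resource, weight):
    """Theil index: sum of resource shares times log(resource share / weight share)."""
    r, w = np.asarray(resource, dtype=float), np.asarray(weight, dtype=float)
    p, q = r / r.sum(), w / w.sum()
    return float(np.sum(p * np.log(p / q)))

hospitals = np.array([30, 25, 14, 11, 9, 8, 3, 2, 2, 1])         # per province
population = np.array([80, 60, 50, 45, 40, 35, 30, 25, 20, 10])  # millions
area = np.array([10, 18, 20, 17, 25, 48, 66, 120, 160, 170])     # 1000 km^2
print("Gini by population:", round(gini(hospitals, population), 3))
print("Gini by geography: ", round(gini(hospitals, area), 3))
print("Theil by population:", round(theil(hospitals, population), 3))
print("Theil by geography: ", round(theil(hospitals, area), 3))

With numbers like these, where hospitals track population far more closely than land area, the geographic Gini and Theil values come out much larger than the demographic ones — the same qualitative pattern the paper reports for China.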
The survey data collected and analyzed during the current study are available from the corresponding author on reasonable request.
CNY: Chinese Yuan
HHRDI: High-quality health resource density index
HQMR: High quality medical resources
G: The Gini coefficient
T: The Theil index
GDP: Gross domestic product
GRs: The growth rates
Grad FP. The preamble of the constitution of the world health organization. Bull World Health Organ. 2002;80(12):981–4.
Rohde J, Cousens S, Chopra M, et al. 30 years after Alma-Ata: Has primary health care worked in countries? Lancet. 2008;372(9642):950–61.
Liu W, Liu Y, Twum P, Li S. National equity of health resource allocation in China: Data from 2009 to 2013. Int J Equity Health. 2016;15:68.
Ismail M. Regional disparities in the distribution of Sudan's health resources. East Mediterr Health J. 2020;26(9):1105–14.
Yang L, Wang H, Xue L. What about the health workforce distribution in rural China? An assessment based on eight-year data. Rural Remote Health. 2019;19(3):4978.
Song S, Yuan B, Zhang L, et al. Increased inequalities in health resource and access to health care in rural china. Int J Environ Res Public Health. 2018;16(1):49.
Chen J, Lin Z, Li LA, et al. Ten years of China's new healthcare reform: A longitudinal study on changes in health resources. BMC Public Health. 2021;21(1):2272.
Cao X, Bai G, Cao C, et al. Comparing Regional Distribution Equity among Doctors in China before and after the 2009 Medical Reform Policy: A Data Analysis from 2002 to 2017. Int J Environ Res Public Health. 2020;17(5):1520.
Tao W, Zeng Z, Dang H, et al. Towards universal health coverage: Lessons from 10 years of healthcare reform in China. BMJ Glob Health. 2020;5(3):e2086.
Li L, Fu H. China's health care system reform: Progress and prospects. Int J Health Plann Manage. 2017;32(3):240–53.
Shi B, Fu Y, Bai X, et al. Spatial pattern and spatial heterogeneity of Chinese elite hospitals: A country-level analysis. Front Public Health. 2021;9:710810.
Chen Y, Yin Z, Xie Q. Suggestions to ameliorate the inequity in urban/rural allocation of healthcare resources in China. Int J Equity Health. 2014;13:34.
"Healthy China 2030" Planning Outline: General Office of the State Council, PRC; 2016 [Available from: http://www.gov.cn/xinwen/2016-10/25/content_5124174.htm.
"the Outline of the 14 Five-Year Plan (2021–2025) for National Economic and Social Development and the Long-Range Objectives Through the Year 2035: PRC; 2021. [Available from: http://www.gov.cn/xinwen/2021-03/13/content_5592681.htm?pc.
Wang J, Ma JJ, Liu J, Zeng DD, Song C, Cao Z. Prevalence and risk factors of comorbidities among hypertensive patients in China. Int J Med Sci. 2017;14(3):201–12.
Walker RL, Siegel AW. Morality and the limits of societal values in health care allocation. Health Econ. 2002;11(3):265–73.
Glasziou P, Straus S, Brownlee S, et al. Evidence for underuse of effective medical services around the world. Lancet. 2017;390(10090):169–77.
Evans NG, Sekkarie MA. Allocating scarce medical resources during armed conflict: Ethical issues. Disaster Mil Med. 2017;3:5.
Chavehpour Y, Rashidian A, Woldemichael A, Takian A. Inequality in geographical distribution of hospitals and hospital beds in densely populated metropolitan cities of Iran. BMC Health Serv Res. 2019;19(1):614.
Erdenee O, Paramita SA, Yamazaki C, Koyama H. Distribution of health care resources in Mongolia using the Gini coefficient. Hum Resour Health. 2017;15(1):56.
Yu H, Yu S, He D, Lu Y. Equity analysis of Chinese physician allocation based on Gini coefficient and Theil index. BMC Health Serv Res. 2021;21(1):455.
Zhang Y, Wang Q, Jiang T, Wang J. Equity and efficiency of primary health care resource allocation in mainland China. Int J Equity Health. 2018;17(1):140.
Dong E, Xu J, Sun X, Xu T, Zhang L, Wang T. Differences in regional distribution and inequality in health-resource allocation on institutions, beds, and workforce: A longitudinal study in China. Arch Public Health. 2021;79(1):78.
Li Q, Wei J, Jiang F, et al. Equity and efficiency of health care resource allocation in Jiangsu Province, China. Int J Equity Health. 2020;19(1):211.
Ding L, Zhang N, Mao Y. Addressing the maldistribution of health resources in Sichuan Province, China: A county-level analysis. PLoS One. 2021;16(4):e250526.
Huang M, Luo D, Wang Z, et al. Equity and efficiency of maternal and child health resources allocation in Hunan Province, China. BMC Health Serv Res. 2020;20(1):300.
Pu L. Fairness of the distribution of public medical and health resources. Front Public Health. 2021;9:768728.
Yu Q, Yin W, Huang D, et al. Trend and equity of general practitioners' allocation in China based on the data from 2012–2017. Hum Resour Health. 2021;19(1):20.
Jin J, Wang J, Ma X, Wang Y, Li R. Equality of medical health resource allocation in China based on the Gini coefficient method. Iran J Public Health. 2015;44(4):445–57.
Zhang X, Zhao L, Cui Z, Wang Y. Study on Equity and Efficiency of Health Resources and Services Based on Key Indicators in China. PLoS One. 2015;10(12):e144809.
Fang P, Dong S, Xiao J, Liu C, Feng X, Wang Y. Regional inequality in health and its determinants: Evidence from China. Health Policy. 2010;94(1):14–25.
Wu Y, Hu K, Han Y, Sheng Q, Fang Y. Spatial characteristics of life expectancy and geographical detection of its influencing factors in China. Int J Environ Res Public Health. 2020;17(3):906.
Li W, Zhang P, Zhao K, Zhao S. The geographical distribution and influencing factors of COVID-19 in China. Trop Med Infect Dis. 2022;7(3).
Yue H, Hu T. Geographical detector-based spatial modeling of the COVID-19 mortality rate in the continental United States. Int J Environ Res Public Health. 2021;18(13):6823.
Tao W, Zeng Z, Dang H, et al. Towards universal health coverage: Lessons from 10 years of healthcare reform in China. BMJ Glob Health. 2020;5(3): e2086.
Zhang T, Xu Y, Ren J, Sun L, Liu C. Inequality in the distribution of health resources and health services in China: Hospitals versus primary care institutions. Int J Equity Health. 2017;16(1):42.
Wu J. Measuring inequalities in the demographical and geographical distribution of physicians in China: Generalist versus specialist. Int J Health Plann Manage. 2018;33(4):860–79.
Sun J, Luo H. Evaluation on equality and efficiency of health resources allocation and health services utilization in China. Int J Equity Health. 2017;16(1):127.
Fu L, Xu K, Liu F, Liang L, Wang Z. Regional disparity and patients mobility: Benefits and spillover effects of the spatial network structure of the health services in China. Int J Environ Res Public Health. 2021;18(3):1096.
Yang G, Ma Y, Xue Y, Meng X. Does the development of a high-speed railway improve the equalization of medical and health services? Evidence from China. Int J Environ Res Public Health. 2019;16(9):1609.
Yan K, Jiang Y, Qiu J, et al. The equity of China's emergency medical services from 2010–2014. Int J Equity Health. 2017;16(1):10.
Wang Y, Li Y, Qin S, et al. The disequilibrium in the distribution of the primary health workforce among eight economic regions and between rural and urban areas in China. Int J Equity Health. 2020;19(1):28.
Pan J, Shallcross D. Geographic distribution of hospital beds throughout China: A county-level econometric analysis. Int J Equity Health. 2016;15(1):179.
Song P, Ren Z, Chang X, Liu X, An L. Inequality of paediatric workforce distribution in China. Int J Environ Res Public Health. 2016;13(7):703.
Lostao L, Blane D, Gimeno D, Netuveli G, Regidor E. Socioeconomic patterns in use of private and public health services in Spain and Britain: Implications for equity in health care. Health Place. 2014;25:19–25.
Zheng A, Fang Q, Zhu Y, Jiang C, Jin F, Wang X. An application of ARIMA model for predicting total health expenditure in China from 1978–2022. J Glob Health. 2020;10(1):10803.
Liang LL, Tussing AD. The cyclicality of government health expenditure and its effects on population health. Health Policy. 2019;123(1):96–103.
Jia H, Jiang H, Yu J, Zhang J, Cao P, Yu X. Total health expenditure and its driving factors in China: A gray theory analysis. Healthcare (Basel). 2021;9(2):207.
National Development and Reform Commission of China. Regional medical center construction pilot work program, Government document; 2019. [Available at: http://www.gov.cn/xinwen/2019-11/10/content_5450633.htm].
Kong X, Ai B, Kong Y, et al. Artificial intelligence: A key to relieve China's insufficient and unequally-distributed medical resources. Am J Transl Res. 2019;11(5):2632–40.
Xie X, Zhou W, Lin L, et al. Internet hospitals in China: Cross-sectional survey. J Med Internet Res. 2017;19(7):e239.
Iacobucci G. Online GP service prescribed drugs without safety checks, says CQC. BMJ. 2017;357:j3194.
Iacobucci G. GP at Hand: Commissioning group asks NHS England for extra £18m to cope with demand. BMJ. 2018;361:k2080.
Li R, Yang Y, Wu S, et al. Using artificial intelligence to improve medical services in China. Ann Transl Med. 2020;8(11):711.
This study drew its data from the China Health Statistical Yearbooks and China Statistical Yearbooks.
Xiangya Hospital, Central South University, Changsha, Hunan, China
Lei Yuan, Dong Wang, Dan Yu, Ge Liu & Zhaoxin Qian
National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha, Hunan, China
Lei Yuan, Dan Yu & Zhaoxin Qian
Department of Cardiovascular Medicine, Third Xiangya Hospital, Central South University, Changsha, Hunan, China
Jing Cao
Lei Yuan
Dan Yu
Ge Liu
Zhaoxin Qian
LY and ZXQ designed the study. LY analyzed the data and wrote the manuscript. ZXQ and JC assisted the analysis and revised the manuscript. DW and DY participated in data interpretation. GL assisted with literature search. All authors have read and approved the final manuscript.
Correspondence to Zhaoxin Qian.
The data used in this article are publicly available.
Yuan, L., Cao, J., Wang, D. et al. Regional disparities and influencing factors of high quality medical resources distribution in China. Int J Equity Health 22, 8 (2023). https://doi.org/10.1186/s12939-023-01825-6
OCL
Volume 18, Number 5, September–October 2011
Lipids and Brain II. Proceedings of the Journées Chevreul 2011 (Part Two)
PUFA and Neurodevelopment
https://doi.org/10.1051/ocl.2011.0391
OCL 2011; 18(5): 259–262
Different dietary omega-3 sources during pregnancy and DHA in the developing rat brain
Caroline E. Childs1,2*, Alison L. Fear1, Samuel P. Hoile1 and Philip C. Calder1
1 Institute of Human Nutrition and Developmental Origins of Health and Disease Division, School of Medicine, University of Southampton, Southampton SO16 6YD, United Kingdom
2 Department of Food and Nutritional Sciences, The University of Reading, Whiteknights PO Box 226, Reading, Berkshire RG6 6AP, UK
* [email protected]
The essential n-3 fatty acid α-linolenic acid (ALA) can be converted into eicosapentaenoic acid (EPA), docosapentaenoic acid (DPA) and docosahexaenoic acid (DHA) under the action of desaturase and elongase enzymes. Human studies have demonstrated that females convert a higher proportion of ALA into EPA and DHA than males. We have demonstrated that when fed upon an ALA rich diet, female rats have a significantly higher EPA content of plasma and liver lipids than males. When fetal tissues were collected, it was observed that pups from dams fed the ALA rich diet had a comparable brain DHA status to those from dams fed on a salmon-oil based diet, indicating that conversion of ALA to DHA during pregnancy was efficient, and that DHA accumulated in a tissue-specific manner. Similar efficacy of dietary ALA in women during pregnancy would mean that plant n-3 fatty acids would be useful alternatives to preformed EPA and DHA.
Key words: omega-3 / sex / pregnancy
© John Libbey Eurotext 2011
ALA: α-linolenic acid
DHA: docosahexaenoic acid
DPA: docosapentaenoic acid
EPA: eicosapentaenoic acid
HF: high-fat
LA: linoleic acid
LC: long-chain
LF: low-fat
PC: phosphatidylcholine
PE: phosphatidylethanolamine
PUFA: polyunsaturated fatty acid
Synthesis of long chain n-3 PUFA
Dietary sources of the essential fatty acid α-linolenic acid (ALA; 18:3n-3) include green leaves, some seeds, nuts and cooking oils. The principal dietary source of the long-chain (LC) n-3 polyunsaturated fatty acids (PUFA) eicosapentaenoic acid (EPA; 20:5n-3), docosapentaenoic acid (DPA; 22:5n-3) and docosahexaenoic acid (DHA; 22:6n-3) is oily fish, yet it is estimated that only 27% of UK adults habitually eat oily fish (Scientific Advisory Committee on Nutrition, 2004).
In addition to consumption in the diet, LC n-3 PUFA can be endogenously synthesised via a series of elongase, desaturase and β-oxidation steps from their essential fatty acid precursor ALA (Leonard et al., 2004) (figure 1). This same series of desaturase and elongase enzymes is also involved in the metabolism of the n-6 PUFA linoleic acid (LA) into its longer-chain, more unsaturated derivatives (e.g. arachidonic acid). In Western diets, consumption of LA is about 10 times that of ALA (Burdge and Calder, 2006), suggesting that synthesis of n-6 PUFA will predominate.
Figure 1. Biosynthesis of LC n-3 PUFA from α-linolenic acid.
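Since the figure itself is not reproduced in this text version, the pathway it depicts — the desaturation/elongation route reviewed by Leonard et al. (2004), commonly called the Sprecher pathway — can be summarised schematically as:

ALA (18:3n-3) → Δ6-desaturase → 18:4n-3 → elongase → 20:4n-3 → Δ5-desaturase → EPA (20:5n-3) → elongase → DPA (22:5n-3) → elongase → 24:5n-3 → Δ6-desaturase → 24:6n-3 → peroxisomal β-oxidation → DHA (22:6n-3)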
Sex and plasma and tissue n-3 fatty acid composition
Studies have identified sex differences in circulating plasma concentrations of LC n-3 PUFA. While these studies vary in their sample size, degree of dietary control exerted and the range of blood lipids analysed, all have found that women have significantly higher circulating DHA concentrations compared to men and that this is independent of dietary intake (Nikkari et al., 1995; Giltay et al., 2004; Bakewell et al., 2006; Crowe et al., 2008). Rat studies have also identified that the proportion of DHA is higher in liver and plasma phospholipids in females than males (Burdge et al., 2008; Extier et al., 2010; Childs et al., 2010a).
Data from studies using stable isotope-labelled ALA demonstrate that there are sex differences in the ability to synthesize LC n-3 PUFA from ALA. Young women converted a greater proportion of ALA into EPA and DHA compared to men (Burdge et al., 2002a; Burdge and Wootton, 2002b). It has been hypothesised that sex differences are established in order to ensure an adequate supply of LC n-3 PUFA to the developing fetus during pregnancy (Bakewell, 2006). If this is the case, then it is possible that LC n-3 PUFA synthesis may be further upregulated during pregnancy.
Specific maternal dietary fatty acids, particularly n-3 PUFA, have been demonstrated to be essential for successful fetal development and later tissue function in both humans and animals. Transfer of DHA to the developing fetus in human pregnancy predominantly occurs in the last 10 weeks of pregnancy, with the majority of this DHA accumulated within fetal adipose tissue (Haggarty, 2004). The observation that DHA is found in high concentrations in the retina and accumulates in the fetal brain during late pregnancy and in early neonatal life has led to the suggestion that an adequate dietary supply of this fatty acid is required for optimal brain and visual development (Farquharson et al., 1995). Animal studies where n-3 fatty acid deficient diets have been provided demonstrate that dietary n-3 fatty acids are essential for normal cognitive and visual function, as reviewed in detail elsewhere (Lauritzen et al., 2001).
Human studies have investigated the role of LC n-3 PUFA, particularly DHA, when provided in milk formula to both preterm and healthy term infants. Meta-analyses indicate that the addition of DHA to pre-term infant formula is beneficial for optimal visual development in early life (Sangiovanni et al., 2000; Uauy et al., 2003). Whether these effects persist beyond early life (i.e. after 4 months of age) has not yet been established. In term infants, formula containing DHA was found to improve markers of cognitive function (Cheatham et al., 2006). However, the clinical relevance of the reported statistically significant differences and the validity of the neurodevelopmental tests utilised in these studies have been questioned (Koo, 2003).
Human studies have demonstrated that there are significant effects of pregnancy upon blood lipid fatty acid composition, though the effects observed have been mixed. For example, while some studies have identified a reduction in plasma phospholipid DHA status during pregnancy (Wijendran et al., 1999; Hornstra, 2000), others have reported increased DHA content of plasma phospholipids (Postle et al., 1995; Burdge et al., 2006) or red blood cells (Stewart et al., 2007). These differences between studies can most likely be attributed to variations in the type of sample analysed, whether results were expressed as a percentage or in absolute concentrations, and the possible confounding effect of maternal diet and adipose tissue composition. Studies in rats comparing virgin animals with those at the end of pregnancy have shown that the fatty acid composition of phospholipids from plasma and liver is significantly altered in response to pregnancy with higher DHA and lower arachidonic acid contents (Smith and Walsh, 1975; Cunnane, 1989; Chen et al., 1992; Burdge et al., 1994). In rats, higher DHA in liver and plasma phosphatidylcholine (PC) has been attributed to changes in PC synthesis (Burdge et al., 1994). However, it is unclear whether the increased availability of DHA is a result of mobilisation of DHA from adipose tissue, increased dietary intake or greater synthesis by desaturation and elongation of precursors.
Use of an ALA rich diet in rat models
We have identified that there are significant diet × sex interactions in rat tissue n-3 fatty acid composition (Childs et al., 2010a). Female rats fed an ALA rich diet had a higher proportion of EPA in plasma and liver PC compared to males (figure 2), with data suggesting that these differences may be mediated by higher expression of Δ6 desaturase (Δ6D) mRNA and greater Δ6D activity in females than males (Childs et al., 2010a). We also identified that providing an ALA rich diet during pregnancy resulted in equivalent EPA status in fetal immune tissues (figure 3A) and equivalent DHA status in the fetal brain to that achieved in the offspring of dams fed a high-fat salmon-oil diet (figure 3B) (Childs et al., 2010b). This indicates a significant role of maternal and/or fetal LC n-3 PUFA synthesis in determining fetal LC n-3 PUFA status in a tissue specific manner. The effect of maternal diet during pregnancy upon fetal brain DHA content persists until weaning (figure 4).
Figure 2. EPA content of plasma phosphatidylcholine in male and female rats fed an ALA-rich diet for 20 days. Values are mean ± SD, n=6 per group. *Significantly different from males (p<0.05). Data are taken from [10].
Figure 3. EPA and DHA contents of thymus and brain phosphatidylethanolamine from 20-day gestation pups from rat dams fed different experimental diets for 20 days of pregnancy. Values are mean ± SD, n=6 per group. Means without a common letter differ, P<0.05. LF, low fat diet; HF, high fat diet; ND, not detected (<0.1%). Data are taken from [29].
Figure 4. DHA content of brain phosphatidylethanolamine of pups from rat dams fed different high fat (HF) experimental diets for 20 days of pregnancy. Day 20 indicates day 20 of gestation; week 3, 6, 9 and 12 indicate weeks post-birth. Values are mean, n=6 per group. *HF Salmon significantly different from the other two groups, P<0.05. Data are not previously published.
We have found that the percentage content of n-3 fatty acids among rats receiving standard laboratory chow ad libitum, and the response of rats to ALA supplementation regimes, compare favourably with available data from human studies. If dietary ALA during pregnancy significantly influences fetal brain and immune tissue LC n-3 PUFA content in humans, this would have significant implications for strategies aimed at improving infant cognitive function or promoting infant immune development and reducing the risk of immune dysfunction (e.g. atopic sensitisation). To date, studies in pregnancy examining these infant outcomes have largely provided marine sources of n-3 fatty acids. The availability of plant-oil sources of n-3 fatty acids would greatly benefit vegetarian and vegan women and would have an environmental benefit by reducing demand upon marine resources.
Further rat studies will be necessary to determine the threshold of ALA supplementation required to maintain equivalent brain DHA and immune tissue EPA to that achieved with a fish-oil rich diet. Whether these changes to tissue fatty acid composition result in any differences in offspring visual, cognitive or immune function is also yet to be determined. It would be of interest to conduct human studies to investigate whether there are similar sex differences in the response to dietary ALA. If the effects observed in our rat model of dietary ALA during pregnancy were replicated in human studies, this approach could be used to investigate whether there are benefits to offspring health, including women who are unwilling or unable to consume marine-based interventions (i.e. fish or fish oils).
Bakewell L, Burdge GC, Calder PC. Polyunsaturated fatty acid concentrations in young men and women consuming their habitual diets. Br J Nutr 2006; 96: 93–99.
Burdge GC, Hunt AN, Postle AD. Mechanisms of hepatic phosphatidylcholine synthesis in adult rat: effects of pregnancy. Biochem J 1994; 303: 941–947.
Burdge GC, Jones AE, Wootton SA. Eicosapentaenoic and docosapentaenoic acids are the principal products of alpha-linolenic acid metabolism in young men. Br J Nutr 2002a; 88: 355–363.
Burdge GC, Wootton SA. Conversion of alpha-linolenic acid to eicosapentaenoic, docosapentaenoic and docosahexaenoic acids in young women. Br J Nutr 2002b; 88: 411–420.
Burdge GC, Calder PC. Dietary alpha-linolenic acid and health related outcomes: a metabolic perspective. Nutr Res Rev 2006; 19: 26–52.
Burdge GC, Sherman RC, Ali Z, Wootton SA, Jackson AA. Docosahexaenoic acid is selectively enriched in plasma phospholipids during pregnancy in Trinidadian women – results of a pilot study. Reprod Nutr Dev 2006; 46: 63–67.
Burdge GC, Slater-Jefferies JL, Grant RA, et al. Sex, but not maternal protein or folic acid intake, determines the fatty acid composition of hepatic phospholipids, but not of triacylglycerol, in adult rats. Prostaglandins Leukot Essent Fatty Acids 2008; 78: 73–79.
Cheatham CL, Colombo J, Carlson SE. N-3 fatty acids and cognitive and visual acuity development: methodologic and conceptual considerations. Am J Clin Nutr 2006; 83: 1458S–1466S.
Chen ZY, Yang JL, Menard CR, Cunnane SC. Linoleoyl-enriched triacylglycerol species increase in maternal liver during late pregnancy in the rat. Lipids 1992; 27: 21–24.
Childs CE, Romeu-Nadal M, Burdge GC, Calder PC. The polyunsaturated fatty acid composition of hepatic and plasma lipids differ by both sex and dietary fat intake in rats. J Nutr 2010a; 140: 245–250.
Childs CE, Romijn T, Enke U, Hoile S, Calder PC. Maternal diet during pregnancy has tissue-specific effects upon fetal fatty acid composition and alters fetal immune parameters. Prostaglandins Leukot Essent Fatty Acids 2010b; 83: 179–184.
Crowe FL, Skeaff CM, Green TJ. Serum n-3 long-chain PUFA differ by sex and age in a population-based survey of New Zealand adolescents and adults. Br J Nutr 2008; 99: 168–174.
Cunnane SC. Changes in essential fatty acid composition during pregnancy: maternal liver, placenta and fetus. Nutrition 1989; 5: 253–255.
Extier A, Langelier B, Perruchot MH, et al. Gender affects liver desaturase expression in a rat model of n-3 fatty acid repletion. J Nutr Biochem 2010; 21: 180–187.
Farquharson J, Jamieson EC, Logan RW, Patrick WJ, Howatson AG, Cockburn F. Age- and dietary-related distributions of hepatic arachidonic and docosahexaenoic acid in early infancy. Pediatr Res 1995; 38: 361–365.
Giltay EJ, Gooren LJ, Toorians AW, Katan MB, Zock PL. Docosahexaenoic acid concentrations are higher in women than in men because of estrogenic effects. Am J Clin Nutr 2004; 80: 1167–1174.
Haggarty P. Effect of placental function on fatty acid requirements during pregnancy. Eur J Clin Nutr 2004; 58: 1559–1570.
Hornstra G. Essential fatty acids in mothers and their neonates. Am J Clin Nutr 2000; 71: 1262S–1269S.
Koo WW. Efficacy and safety of docosahexaenoic acid and arachidonic acid addition to infant formulas: can one buy better vision and intelligence? J Am Coll Nutr 2003; 22: 101–107.
Lauritzen L, Hansen HS, Jorgensen MH, Michaelsen KF. The essentiality of long chain n-3 fatty acids in relation to development and function of the brain and retina. Prog Lipid Res 2001; 40: 1–94.
Leonard AE, Pereira SL, Sprecher H, Huang YS. Elongation of long-chain fatty acids. Prog Lipid Res 2004; 43: 36–54.
Nikkari T, Luukkainen P, Pietinen P, Puska P. Fatty acid composition of serum lipid fractions in relation to gender and quality of dietary fat. Ann Med 1995; 27: 491–498.
Postle AD, Al MDM, Burdge GC, Hornstra G. The composition of individual molecular species of plasma phosphatidylcholine in human pregnancy. Early Hum Dev 1995; 43: 47–58.
Sangiovanni JP, Parra-Cabrera S, Colditz GA, Berkey CS, Dwyer JT. Meta-analysis of dietary essential fatty acids and long-chain polyunsaturated fatty acids as they relate to visual resolution acuity in healthy preterm infants. Pediatrics 2000; 105: 1292–1298.
Scientific Advisory Committee on Nutrition (SACN). Advice on fish consumption: benefits & risks. London: TSO, 2004.
Smith RW, Walsh A. Composition of liver lipids of the rat during pregnancy and lactation. Lipids 1975; 10: 643–645.
Stewart F, Rodie VA, Ramsay JE, Greer IA, Freeman DJ, Meyer BJ. Longitudinal assessment of erythrocyte fatty acid composition throughout pregnancy and post partum. Lipids 2007; 42: 335–344.
Uauy R, Hoffman DR, Mena P, Llanos A, Birch EE. Term infant studies of DHA and ARA supplementation on neurodevelopment: results of randomized controlled trials. J Pediatr 2003; 143: S17–S25.
Wijendran V, Bendel RB, Couch SC, et al. Maternal plasma phospholipid polyunsaturated fatty acids in pregnancy with and without gestational diabetes mellitus: relations with maternal factors. Am J Clin Nutr 1999; 70: 53–61.
To cite this article: Childs CE, Fear AL, Hoile SP, Calder PC. Different dietary omega-3 sources during pregnancy and DHA in the developing rat brain. OCL 2011;18(5):259–262. doi: 10.1051/ocl.2011.0391
January 2014, 13(1): 419-433. doi: 10.3934/cpaa.2014.13.419
Continuous dependence in hyperbolic problems with Wentzell boundary conditions
Giuseppe Maria Coclite 1, , Angelo Favini 2, , Gisèle Ruiz Goldstein 3, , Jerome A. Goldstein 4, and Silvia Romanelli 1,
Department of Mathematics, University of Bari, Via E. Orabona 4, I--70125 Bari, Italy, Italy
Dipartimento di Matematica, Università degli Studi di Bologna, Piazza di Porta S. Donato, 5, 40126 Bologna
The University of Memphis, Department of Mathematical Sciences, Memphis, TN 38152, United States
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, United States
Received January 2013 Revised May 2013 Published August 2013
Let $\Omega$ be a smooth bounded domain in $R^N$ and let \begin{eqnarray} Lu=\sum_{j,k=1}^N \partial_{x_j}\left(a_{jk}(x)\partial_{x_k} u\right), \end{eqnarray} in $\Omega$ and \begin{eqnarray} Lu+\beta(x)\sum\limits_{j,k=1}^N a_{jk}(x)\partial_{x_j} u n_k+\gamma (x)u-q\beta(x)\sum_{j,k=1}^{N-1}\partial_{\tau_k}\left(b_{jk}(x)\partial_{\tau_j}u\right)=0, \end{eqnarray} on $\partial\Omega$ define a generalized Laplacian on $\Omega$ with a Wentzell boundary condition involving a generalized Laplace-Beltrami operator on the boundary. Under some smoothness and positivity conditions on the coefficients, this defines a nonpositive selfadjoint operator, $-S^2$, on a suitable Hilbert space. If we have a sequence of such operators $S_0,S_1,S_2,...$ with corresponding coefficients \begin{eqnarray} \Phi_n=(a_{jk}^{(n)},b_{jk}^{(n)}, \beta_n,\gamma_n,q_n) \end{eqnarray} satisfying $\Phi_n\to\Phi_0$ uniformly as $n\to\infty$, then $u_n(t)\to u_0(t)$ where $u_n$ satisfies \begin{eqnarray} i\frac{du_n}{dt}=S_n^m u_n, \end{eqnarray} or \begin{eqnarray} \frac{d^2u_n}{dt^2}+S_n^{2m} u_n=0, \end{eqnarray} or \begin{eqnarray} \frac{d^2u_n}{dt^2}+F(S_n)\frac{du_n}{dt}+S_n^{2m} u_n=0, \end{eqnarray} for $m=1,2,$ initial conditions independent of $n$, and for certain nonnegative functions $F$. This includes Schrödinger equations, damped and undamped wave equations, and telegraph equations.
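A standard mechanism behind continuous-dependence statements of this type — noted here for orientation, without claiming it is the precise argument of the paper — is the Trotter–Kato approximation theorem: if the operators $A_n$ ($n\ge 0$) generate $C_0$-semigroups satisfying a uniform bound $\|e^{tA_n}\|\le Me^{\omega t}$, and the resolvents converge strongly, $(\lambda-A_n)^{-1}x\to(\lambda-A_0)^{-1}x$ for some $\lambda>\omega$ and every $x$, then $e^{tA_n}x\to e^{tA_0}x$ uniformly for $t$ in compact intervals. Uniform convergence of the coefficient data $\Phi_n\to\Phi_0$ is precisely the kind of hypothesis that produces such resolvent convergence, whether for $A_n=-iS_n^m$ in the Schrödinger case or for the first-order reduction of the damped wave and telegraph equations.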
Keywords: Wentzell boundary conditions, higher order boundary operators, continuous dependence, wave equation, semigroup approximation.
Mathematics Subject Classification: Primary: 35B30, 47D06; Secondary: 35L0.
Citation: Giuseppe Maria Coclite, Angelo Favini, Gisèle Ruiz Goldstein, Jerome A. Goldstein, Silvia Romanelli. Continuous dependence in hyperbolic problems with Wentzell boundary conditions. Communications on Pure & Applied Analysis, 2014, 13 (1) : 419-433. doi: 10.3934/cpaa.2014.13.419
S. Agmon, A. Douglis and L. Nirenberg, Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions. I, Comm. Pure Appl. Math., 12 (1959), 623. doi: 10.1002/cpa.3160120405.
S. Agmon and L. Nirenberg, Properties of solutions of ordinary differential equations in Banach space, Comm. Pure Appl. Math., 16 (1963), 121. doi: 10.1002/cpa.3160160204.
G. M. Coclite, A. Favini, C. G. Gal, G. R. Goldstein, J. A. Goldstein, E. Obrecht and S. Romanelli, The role of Wentzell boundary conditions in linear and nonlinear analysis, (2009), 279.
G. M. Coclite, A. Favini, G. R. Goldstein, J. A. Goldstein and S. Romanelli, Continuous dependence on the boundary conditions for the Wentzell Laplacian, Semigroup Forum, 77 (2008), 101. doi: 10.1007/s00233-008-9068-2.
K.-J. Engel and R. Nagel, "One-Parameter Semigroups for Linear Evolution Equations," Graduate Texts in Mathematics, (2000).
A. Favini, G. R. Goldstein, J. A. Goldstein and S. Romanelli, The heat equation with generalized Wentzell boundary conditions, J. Evol. Equ., 2 (2002), 1. doi: 10.1007/s00028-002-8077-y.
A. Favini, G. R. Goldstein, J. A. Goldstein, E. Obrecht and S. Romanelli, Elliptic operators with general Wentzell boundary conditions, analytic semigroups and the angle concavity theorem, Math. Nachr., 283 (2010), 504. doi: 10.1002/mana.200910086.
J. A. Goldstein, "Semigroups of Linear Operators and Applications," Oxford University Press, (1985).
J. A. Goldstein, Time dependent hyperbolic equations, J. Functional Analysis, 4 (1969), 31. doi: 10.1016/0022-1236(69)90020-2.
J. A. Goldstein and G. Reyes, Asymptotic equipartition of operator-weighted energies in damped wave equations, Asymptotic Analysis.
T. Kato, "Perturbation Theory for Linear Operators," Die Grundlehren der mathematischen Wissenschaften, (1966).
P. D. Lax, "Functional Analysis," Pure and Applied Mathematics (New York), Wiley-Interscience [John Wiley & Sons], (2002).
J.-L. Lions and E. Magenes, "Non-Homogeneous Boundary Value Problems and Applications. Vol. I," Die Grundlehren der mathematischen Wissenschaften, (1972).
H. Triebel, "Theory of Function Spaces," Monographs in Mathematics, (1983). doi: 10.1007/978-3-0346-0416-1.
Search results for: P. Millet
Items from 1 to 20 out of 511 results
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $\sqrt{s}=13\,\text{TeV}$
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, et al.
The European Physical Journal C > 2019 > 79 > 7 > 1-27
A search is presented for a heavy pseudoscalar boson $\text{A}$ decaying to a Z boson and a Higgs boson with mass of 125 GeV. In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, et al.
Journal of High Energy Physics > 2019 > 2019 > 6 > 1-34
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at $\sqrt{s}$ = 13 TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at $\sqrt{s}$ = 13 TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
Leg- vs arm-cycling repeated sprints with blood flow restriction and systemic hypoxia
Sarah J. Willis, Fabio Borrani, Grégoire P. Millet
European Journal of Applied Physiology > 2019 > 119 > 8 > 1819-1828
Purpose The aim was to compare changes in peripheral and cerebral oxygenation, as well as metabolic and performance responses during conditions of blood flow restriction (BFR, bilateral vascular occlusion at 0% vs. 45% of resting pulse elimination pressure) and systemic hypoxia (~ 400 m, FIO2 20.9% vs. ~ 3800 m normobaric hypoxia, FIO2 13.1 ± 0.1%) during repeated sprint tests to exhaustion (RST)...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at $\sqrt{s}$ = 13 TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, b-jets, and missing transverse momentum in proton–proton collisions at 13 TeV
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13 TeV. The data correspond to an integrated luminosity of 35.9 fb−1 and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Combined measurements of Higgs boson couplings in proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$
Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $\sqrt{s}=13\,\text{TeV}$, corresponding to an integrated luminosity of 35.9 fb−1. The combination is based...
Combinations of single-top-quark production cross-section measurements and $|f_{\mathrm{LV}}V_{tb}|$ determinations at $\sqrt{s}$ = 7 and 8 TeV with the ATLAS and CMS experiments
The ATLAS collaboration, M. Aaboud, G. Aad, B. Abbott, et al.
Abstract This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using data from LHC proton-proton collisions at $\sqrt{s}$ = 7 and 8 TeV corresponding to integrated luminosities of 1.17 to 5.1 fb−1 at $\sqrt{s}$ = 7 TeV and 12.2 to 20.3 fb−1 at $\sqrt{s}$ = 8 TeV. These combinations...
Measurement of inclusive very forward jet cross sections in proton-lead collisions at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV
Abstract Measurements of differential cross sections for inclusive very forward jet production in proton-lead collisions as a function of jet energy are presented. The data were collected with the CMS experiment at the LHC in the laboratory pseudorapidity range −6.6 < η < −5.2. Asymmetric beam energies of 4 TeV for protons and 1.58 TeV per nucleon for Pb nuclei were used, corresponding to a...
Measurement of the energy density as a function of pseudorapidity in proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$
A measurement of the energy density in proton–proton collisions at a centre-of-mass energy of $\sqrt{s}=13\,\text{TeV}$ is presented. The data have been recorded with the CMS experiment at the LHC during low luminosity operations in 2015. The energy density is studied as a function of pseudorapidity in the ranges $-6.6<\eta<-5.2$ and $3.15<|\eta...
Measurement of the $\mathrm{t\overline{t}}$ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at $\sqrt{s}=13\,\text{TeV}$
A measurement of the top quark–antiquark pair production cross section $\sigma_{\mathrm{t\overline{t}}}$ in proton–proton collisions at a centre-of-mass energy of 13 TeV is presented. The data correspond to an integrated luminosity of 35.9 fb−1, recorded by the CMS experiment at the CERN LHC in 2016. Dilepton events ($\mathrm...
Search for vector-like quarks in events with two oppositely charged leptons and jets in proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$
A search for the pair production of heavy vector-like partners $\mathrm{T}$ and $\mathrm{B}$ of the top and bottom quarks has been performed by the CMS experiment at the CERN LHC using proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$. The data sample was collected in 2016 and corresponds to an integrated luminosity of 35.9 fb−1. Final states...
Neuromuscular evaluation of arm-cycling repeated sprints under hypoxia and/or blood flow restriction
Arthur Peyrard, Sarah J. Willis, Nicolas Place, Grégoire P. Millet, et al.
Purpose This study aimed to determine the effects of hypoxia and/or blood flow restriction (BFR) on an arm-cycling repeated sprint ability test (aRSA) and its impact on elbow flexor neuromuscular function. Methods Fourteen volunteers performed an aRSA (10 s sprint/20 s recovery) to exhaustion in four randomized conditions: normoxia (NOR), normoxia plus BFR (NBFR), hypoxia (FiO2 = 0.13, HYP) and...
Measurements of the pp → WZ inclusive and differential production cross sections and constraints on charged anomalous triple gauge couplings at $\sqrt{s}$ = 13 TeV
Abstract The WZ production cross section is measured in proton-proton collisions at a centre-of-mass energy $\sqrt{s}$ = 13 TeV using data collected with the CMS detector, corresponding to an integrated luminosity of 35.9 fb−1. The inclusive cross section is measured to be $\sigma_{\mathrm{tot}}(\mathrm{pp}\to\mathrm{WZ}) = 48.09\;^{+1.00}_{-0.96}\,\mathrm{(stat)}\;^{+0.44}_{-0.37}\,\mathrm{(theo)}\;^{+2.39}_{-2.17}\,\mathrm{(syst)} \pm 1.39\,\mathrm{(lum)}$ pb, resulting in...
Search for nonresonant Higgs boson pair production in the $\mathrm{b\overline{b}b\overline{b}}$ final state at $\sqrt{s}$ = 13 TeV
Abstract Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a $\mathrm{b\overline{b}}$ pair, are presented. This search uses data from proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1, collected by the CMS detector at the LHC. No signal is observed, and...
Search for contact interactions and large extra dimensions in the dilepton mass spectra from proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A search for nonresonant excesses in the invariant mass spectra of electron and muon pairs is presented. The analysis is based on data from proton-proton collisions at a center-of-mass energy of 13 TeV recorded by the CMS experiment in 2016, corresponding to a total integrated luminosity of 36 fb−1. No significant deviation from the standard model is observed. Limits are set at 95% confidence...
Measurement of the top quark mass in the all-jets final state at $\sqrt{s}=13\,\text{TeV}$ and combination with the lepton+jets channel
A top quark mass measurement is performed using 35.9 fb−1 of LHC proton–proton collision data collected with the CMS detector at $\sqrt{s}=13\,\text{TeV}$. The measurement uses the $\mathrm{t\overline{t}}$ all-jets final state. A kinematic fit is performed to reconstruct the decay of the $\mathrm{t\overline{t}}$ system...
Search for resonant production of second-generation sleptons with same-sign dimuon events in proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$
A search is presented for resonant production of second-generation sleptons ($\widetilde{\mu}_{\mathrm{L}}$, $\widetilde{\nu}_{\mu}$) via the R-parity-violating coupling $\lambda^{\prime}_{211}$ to quarks, in events with two same-sign muons and at least two jets in the final state. The smuon (muon sneutrino) is expected to decay into a muon and a neutralino (chargino),...
Search for resonant $\mathrm{t\overline{t}}$ production in proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A search for a heavy resonance decaying into a top quark and antiquark ($\mathrm{t\overline{t}}$) pair is performed using proton-proton collisions at $\sqrt{s}=13$ TeV. The search uses the data set collected with the CMS detector in 2016, which corresponds to an integrated luminosity of 35.9 fb−1. The analysis considers three exclusive...
Search for excited leptons in ℓℓγ final states in proton-proton collisions at $\sqrt{s}=13$ TeV
Abstract A search is presented for excited electrons and muons in ℓℓγ final states at the LHC. The search is based on a data sample corresponding to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV, collected with the CMS detector in 2016. This is the first search for excited leptons at $\sqrt{s}$ = 13 TeV. The observation is consistent...
Local base change via Tate cohomology
Author: Niccolò Ronchetti
Journal: Represent. Theory 20 (2016), 263-294
MSC (2010): Primary 11F70, 11S37, 22E50
DOI: https://doi.org/10.1090/ert/486
Published electronically: September 27, 2016
Abstract: We propose a new way to realize cyclic base change (a special case of Langlands functoriality) for prime degree extensions of characteristic zero local fields. Let $F / E$ be a prime degree $l$ extension of local fields of residue characteristic $p \neq l$. Let $\pi$ be an irreducible cuspidal $l$-adic representation of $\mathrm {GL}_n(E)$ and let $\rho$ be an irreducible cuspidal $l$-adic representation of $\mathrm {GL}_n(F)$ which is Galois-invariant. Under some minor technical conditions on $\pi$ and $\rho$ (for instance, we assume that both are level zero) we prove that the $\bmod l$-reductions $r_l(\pi )$ and $r_l(\rho )$ are in base change if and only if the Tate cohomology of $\rho$ with respect to the Galois action is isomorphic, as a modular representation of $\mathrm {GL}_n(E)$, to the Frobenius twist of $r_l(\pi )$. This proves a special case of a conjecture of Treumann and Venkatesh as they investigate the relationship between linkage and Langlands functoriality.
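For orientation, the Tate cohomology appearing in this statement is the usual one for the cyclic group $\mathrm{Gal}(F/E)\cong \mathbb{Z}/l$: if $\sigma$ is a generator acting on (an integral model of) $\rho$ and $N=1+\sigma+\cdots+\sigma^{l-1}$ is the norm operator, then $T^0(\rho)=\ker(1-\sigma)/\operatorname{im}(N)$ and $T^1(\rho)=\ker(N)/\operatorname{im}(1-\sigma)$. Both are vector spaces over a field of characteristic $l$ on which $\mathrm{GL}_n(E)$, the $\sigma$-fixed group, acts, since its action commutes with $\sigma$; the Frobenius twist of $r_l(\pi)$ is the representation obtained by applying the $l$-power Frobenius to matrix coefficients.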
James Arthur, The principle of functoriality, Bull. Amer. Math. Soc. (N.S.) 40 (2003), no. 1, 39–53. Mathematical challenges of the 21st century (Los Angeles, CA, 2000). MR 1943132, DOI https://doi.org/10.1090/S0273-0979-02-00963-1
I. N. Bernstein and K. E. Rummelhart, Draft of: Representations of $p$-adic groups, lectures at Harvard University, 1992.
I. N. Bernstein and A. V. Zelevinsky, Induced representations of reductive ${\mathfrak p}$-adic groups. I, Ann. Sci. École Norm. Sup. (4) 10 (1977), no. 4, 441–472. MR 579172
I. N. Bernšteĭn and A. V. Zelevinskiĭ, Representations of the group $GL(n,F),$ where $F$ is a local non-Archimedean field, Uspehi Mat. Nauk 31 (1976), no. 3(189), 5–70 (Russian). MR 0425030
Richard E. Borcherds, Modular moonshine. III, Duke Math. J. 93 (1998), no. 1, 129–154. MR 1620091, DOI https://doi.org/10.1215/S0012-7094-98-09305-X
Daniel Bump, Automorphic forms and representations, Cambridge Studies in Advanced Mathematics, vol. 55, Cambridge University Press, Cambridge, 1997. MR 1431508
Colin J. Bushnell and Guy Henniart, Modular local Langlands correspondence for ${\rm GL}_n$, Int. Math. Res. Not. IMRN 15 (2014), 4124–4145. MR 3244922, DOI https://doi.org/10.1093/imrn/rnt063
Colin J. Bushnell and Guy Henniart, The essentially tame local Langlands correspondence. I, J. Amer. Math. Soc. 18 (2005), no. 3, 685–710. MR 2138141, DOI https://doi.org/10.1090/S0894-0347-05-00487-X
Colin J. Bushnell and Guy Henniart, The essentially tame local Langlands correspondence, III: the general case, Proc. Lond. Math. Soc. (3) 101 (2010), no. 2, 497–553. MR 2679700, DOI https://doi.org/10.1112/plms/pdp053
Roger W. Carter, Finite groups of Lie type, Pure and Applied Mathematics (New York), John Wiley & Sons, Inc., New York, 1985. Conjugacy classes and complex characters; A Wiley-Interscience Publication. MR 794307
Charles W. Curtis and Irving Reiner, Methods of representation theory. Vol. I, John Wiley & Sons, Inc., New York, 1981. With applications to finite groups and orders; Pure and Applied Mathematics; A Wiley-Interscience Publication. MR 632548
I. B. Fesenko and S. V. Vostokov, Local fields and their extensions, 2nd ed., Translations of Mathematical Monographs, vol. 121, American Mathematical Society, Providence, RI, 2002. With a foreword by I. R. Shafarevich. MR 1915966
Stephen Gelbart, An elementary introduction to the Langlands program, Bull. Amer. Math. Soc. (N.S.) 10 (1984), no. 2, 177–219. MR 733692, DOI https://doi.org/10.1090/S0273-0979-1984-15237-6
S. I. Gel′fand, Representations of the full linear group over a finite field, Mat. Sb. (N.S.) 83 (125) (1970), 15–41. MR 0272916
J. A. Green, The characters of the finite general linear groups, Trans. Amer. Math. Soc. 80 (1955), 402–447. MR 72878, DOI https://doi.org/10.1090/S0002-9947-1955-0072878-2
I. G. Macdonald, Symmetric functions and Hall polynomials, The Clarendon Press, Oxford University Press, New York, 1979. Oxford Mathematical Monographs. MR 553598
Gopal Prasad, Galois-fixed points in the Bruhat-Tits building of a reductive group, Bull. Soc. Math. France 129 (2001), no. 2, 169–174 (English, with English and French summaries). MR 1871292, DOI https://doi.org/10.24033/bsmf.2391
David Renard, Représentations des groupes réductifs $p$-adiques, Cours Spécialisés [Specialized Courses], vol. 17, Société Mathématique de France, Paris, 2010 (French). MR 2567785
Jean-Pierre Serre, Linear representations of finite groups, Springer-Verlag, New York-Heidelberg, 1977. Translated from the second French edition by Leonard L. Scott; Graduate Texts in Mathematics, Vol. 42. MR 0450380
Jean-Pierre Serre, Local fields, Graduate Texts in Mathematics, vol. 67, Springer-Verlag, New York-Berlin, 1979. Translated from the French by Marvin Jay Greenberg. MR 554237
Takuro Shintani, Two remarks on irreducible characters of finite general linear groups, J. Math. Soc. Japan 28 (1976), no. 2, 396–414. MR 414730, DOI https://doi.org/10.2969/jmsj/02820396
T. A. Springer, Cusp forms for finite groups, Seminar on Algebraic Groups and Related Finite Groups (The Institute for Advanced Study, Princeton, N.J., 1968/69) Lecture Notes in Mathematics, Vol. 131, Springer, Berlin, 1970, pp. 97–120. MR 0263942
T. A. Springer, Characters of special groups, Seminar on Algebraic Groups and Related Finite Groups (The Institute for Advanced Study, Princeton, N.J., 1968/69) Lecture Notes in Mathematics, Vol. 131, Springer, Berlin, 1970, pp. 121–166. MR 0263943
T. A. Springer and R. Steinberg, Conjugacy classes, Seminar on Algebraic Groups and Related Finite Groups (The Institute for Advanced Study, Princeton, N.J., 1968/69) Lecture Notes in Mathematics, Vol. 131, Springer, Berlin, 1970, pp. 167–266. MR 0268192
J. Tate, Number theoretic background, Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977) Proc. Sympos. Pure Math., XXXIII, Amer. Math. Soc., Providence, R.I., 1979, pp. 3–26. MR 546607
D. Treumann and A. Venkatesh, Functoriality, Smith theory and the Brauer homomorphism, http://arxiv.org/pdf/1407.2346.pdf.
Marie-France Vignéras, Correspondance de Langlands semi-simple pour ${\rm GL}(n,F)$ modulo ${\scr l}\not = p$, Invent. Math. 144 (2001), no. 1, 177–223 (French). MR 1821157, DOI https://doi.org/10.1007/s002220100134
Marie-France Vignéras, Représentations $l$-modulaires d'un groupe réductif $p$-adique avec $l\ne p$, Progress in Mathematics, vol. 137, Birkhäuser Boston, Inc., Boston, MA, 1996 (French, with English summary). MR 1395151
David A. Vogan Jr., The local Langlands conjecture, Representation theory of groups and algebras, Contemp. Math., vol. 145, Amer. Math. Soc., Providence, RI, 1993, pp. 305–379. MR 1216197, DOI https://doi.org/10.1090/conm/145/1216197
Niccolò Ronchetti
Affiliation: Department of Mathematics, Stanford University, Stanford, California 94305
Email: [email protected]
Received by editor(s): July 2, 2015
Received by editor(s) in revised form: April 21, 2016, and July 18, 2016
Wound healing potentials of herbal ointment containing Calendula officinalis Linn. on the alteration of immunological markers and biochemical parameters in excision wounded animals
Shobana Gunasekaran (ORCID: orcid.org/0000-0002-4793-6178)1,
Agnel Arul John Nayagam2 &
Rameshkannan Natarajan3
The present study was designed to investigate the in vivo wound healing activity of herbal ointment prepared from Calendula officinalis Linn. on excision wounded rats.
The excision wound model was employed to assess wound healing activity in albino rats. Healthy albino rats (150–200 g) of either sex were used. Animals were divided into five groups of six animals each. Group I served as the normal control, Group II served as the excision-wounded control without treatment, Groups III and IV comprised excision-wounded rats treated with herbal ointment at two different doses (10% and 20%) applied topically for 14 days, and Group V comprised excision-wounded animals treated with the reference ointment soframycin. Healing potential was evaluated by the rate of wound contraction, immunological markers such as IL-6 (Interleukin 6), TNF-alpha (Tumor necrosis factor-α), PDGF (Platelet Derived Growth Factor) and EGF (Epidermal Growth Factor), lipid peroxide (LPO), superoxide dismutase (SOD), and biochemical parameters such as hydroxyproline, hexosamine, and tissue protein.
Topical application of the herbal ointment increased the levels of the growth factors PDGF and EGF, as well as hydroxyproline, hexosamine, tissue protein, SOD, and the rate of wound contraction, and it normalized the levels of lipid peroxide, IL-6 and TNF-alpha relative to untreated excision-wounded animals.
From the above results, it was concluded that the topical application of herbal ointment exhibited significant wound healing activity in excision wounded rats as evidenced by increased wound contraction and collagen synthesis.
Wounds are clinical entities which are common in day-to-day life. A wound may be defined as a break in the continuity of living tissue due to an injury. Wounds cause discomfort and are prone to infection and other troublesome complications. Conditions such as immunocompromised states, ischaemia, malnutrition, ageing, local infection and local tissue damage can delay wound healing. Wound healing is an intricate process by which the skin repairs itself after injury. It is divided into three phases: inflammatory, proliferative and remodelling. The inflammatory phase is characterized by increased blood flow, increased capillary permeability and increased migration of leucocytes into the affected area. The proliferative phase is characterised by granulation, contraction and epithelisation. The remodelling phase determines the strength and appearance of the healed area.
A wide range of therapies for promoting wound healing have been suggested, including antimicrobial agents, cyanoacrylate adhesives, corticosteroids, phototherapy with low-power lasers, and other anti-inflammatory, immunosuppressive and immunomodulatory agents. They commonly share the problem of causing severe side effects, including allergic reactions and scar formation. Some of these drugs may even delay healing through inhibition of collagen synthesis and epithelisation. Hence, the need of the hour is to develop new drugs without such side effects.
In this scenario, herbal medicines still hold a unique place owing to their relative freedom from side effects. A large number of herbal products are used by tribal and folklore traditions in India for the treatment of cuts, wounds and burns. The chemical entities derived from plants need to be identified and formulated for the treatment and management of wounds, and a number of plant drugs are being investigated at present in this direction. Many plant drugs have been used in the management and treatment of wounds over the years, and plants and their extracts have immense potential in this regard.
Calendula officinalis Linn., or pot marigold, is a common plant belonging to the Asteraceae family, native to southern Europe. The species has been reported to contain a variety of phytochemicals, including carbohydrates, phenolic compounds, lipids, steroids, tocopherols, terpenoids, quinones and carotenoids [11, 26], with different health benefits [19, 22]. The major active constituents of the plant include triterpendiol esters, saponins, and flavonoids including rutin and hyperoside. This herb is used medicinally in the form of infusions, tinctures, liquid extracts, creams or ointments, and skin care products derived from this plant are available across the globe. The present study was therefore aimed at evaluating the wound healing potential of an aqueous extract of Calendula officinalis Linn. on excision-wounded animals.
The antibodies and chemicals were obtained from Sigma Aldrich pvt Ltd., India.
Collection and authentication of plant material
Flowers of Calendula officinalis Linn. were collected in and around Trichy. The plant was identified and authenticated by the Rapinat Herbarium, St. Joseph's College, Trichy, and a voucher specimen (Voucher number: BISH0000619230) was deposited with the herbarium.
Methods of extraction
Flowers of Calendula officinalis Linn. were shade-dried and powdered coarsely using an electric blender. 200 g of the plant powder was mixed with six parts of water, boiled until the volume was reduced to one third, and filtered. The filtrate was then evaporated to dryness. A paste form of the extract was obtained and stored in a refrigerator at 4 °C for ointment preparation.
Preparation of herbal ointment
The wound healing ointment was prepared by mixing the aqueous extract of Calendula officinalis Linn. at concentrations of 10% and 20% (w/w) into a white wax base [2, 5].
Care of rats
Healthy adult Wistar strain albino rats of either sex, weighing 150–200 g, were used as experimental models. Animals were kept in ventilated cages and fed with standard rat chow pellets obtained from Sai Durga Food and Feeds, Bangalore, India, and water ad libitum. All the studies were conducted according to the ethical guidelines of CPCSEA after obtaining necessary clearance from the committee (Approval No: 790/03/ac/CPCSEA).
Grouping and dosing of animals
The animals were divided into five groups as given below, each containing six animals.
GROUP I served as the normal control.
GROUP II served as excision-wounded animals without treatment.
GROUPS III and IV served as excision-wounded animals treated with herbal ointment (HO) at 10% and 20%, applied topically for 14 days.
GROUP V served as excision-wounded animals treated with the standard drug SOFRAMYCIN OINTMENT (SO), applied topically for 14 days.
Creation of wound
An excision wound was created on the dorsal side of the rats. The dorsal side was shaved with a razor blade, and an excision wound 2 cm in length and 0.2 cm in depth was created using surgical scissors. Haemostasis was achieved by blotting the wound with a cotton swab soaked in normal saline. All the rats were given regular dressing changes and kept under observation [21].
Measurement of wound contraction
The excision wound was traced planimetrically to follow the progressive changes in wound area, excluding the day of wounding. The size of the wound was traced onto transparent paper every day throughout the study period. The tracing was then transferred to graph paper, from which the wound surface area was evaluated. The percentage of wound contraction was calculated by the following formula [29]:
$$ \%\text{ wound contraction}=\frac{\text{initial wound size}-\text{specific day wound size}}{\text{initial wound size}}\times 100 $$
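To make the calculation concrete, the short sketch below applies the formula to hypothetical planimetric measurements (the areas are invented for illustration and are not the study's data):

```python
def percent_wound_contraction(initial_area: float, current_area: float) -> float:
    """Percent contraction of the wound relative to its initial area."""
    return (initial_area - current_area) / initial_area * 100.0

# Hypothetical wound areas (cm2) for one animal over the study period.
initial = 4.0
areas_by_day = {4: 3.1, 8: 1.9, 12: 0.8, 14: 0.3}
for day, area in areas_by_day.items():
    print(f"day {day}: {percent_wound_contraction(initial, area):.1f}% contraction")
```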
All the results were expressed as mean ± SEM. The data were statistically analyzed by one-way analysis of variance (ANOVA), and P values < 0.05 were considered significant.
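A minimal sketch of this kind of analysis, assuming SciPy is available; the group values below are hypothetical wound-contraction percentages, not the study's measurements:

```python
from scipy import stats

# Hypothetical day-14 wound contraction (%) for six animals per group.
wound_control = [52.1, 49.8, 55.3, 50.6, 53.2, 51.0]
ho_10_percent = [68.4, 71.2, 66.9, 70.5, 69.1, 67.8]
ho_20_percent = [78.9, 81.3, 79.6, 82.0, 80.2, 77.5]

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(wound_control, ho_10_percent, ho_20_percent)
print(f"F = {f_stat:.2f}, P = {p_value:.4g}")
if p_value < 0.05:
    print("Group means differ significantly (P < 0.05)")
```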
Parameters studied
After the experimental period, the animals were sacrificed by cervical dislocation, and blood and tissue samples were collected for analysing biochemical parameters such as IL-6 and TNF-alpha [9], PDGF and EGF [14], hydroxyproline [35], hexosamine [33], tissue protein [15], lipid peroxide [23], and superoxide dismutase [20].
Results and discussion
Wound contraction, the shrinkage of the wound area, depends on the reparative ability of the tissue, the type and extent of the damage, and the general state of health of the tissue [25]. It is the process of mobilizing healthy skin surrounding the wound to cover the denuded area and involves complex, superbly orchestrated interactions of cells, extracellular matrix and cytokines. This centripetal movement of the wound margin is believed to be due to the activity of myofibroblasts [8]. In the present study, wounds in herbal ointment treated animals were found to contract much faster. The increased rate of wound contraction in herbal ointment (HO) treated animals might be due to increased proliferation and transformation of fibroblasts into myofibroblasts. The effect of the herbal ointment on wound contraction may also be due to the presence of flavonoids and saponins, which are responsible for the release of cytokines, increased synthesis of collagen and angiogenesis [1, 22] (Table 1, Fig. 1a, b and c).
Table 1 Effect of herbal ointment on wound contraction in excision wounded animals
a. Photograph of wound contraction in excision wound. b. Photograph of wound contraction in excision wound (HO treated). c. Photograph of wound contraction in excision wound (RO treated)
The effects of the herbal ointment on IL-6, PDGF, EGF and TNF-α in excision-wounded rats are presented in Table 2. The levels of IL-6 and TNF-alpha were found to be higher, and the levels of PDGF (Platelet Derived Growth Factor) and EGF (Epidermal Growth Factor) lower, in excision-wounded rats when compared to normal rats. On treatment with the herbal ointment, the levels of IL-6 and TNF-alpha were significantly reduced and the PDGF and EGF levels increased in a dose-dependent manner.
Table 2 Effect of herbal ointment on immunological markers in excision wounded animals
The inflammatory phase is the first and essential stage of the wound healing process. However, prolonged inflammation causes enhanced release of cytokines such as IL-1β, IL-6 and TNF-α, severe healing disturbances, and increased fibrosis and scarring [28]. The increased and prolonged action of neutrophils and pro-inflammatory cytokines is associated with tissue damage through the production and induction of proteolytic enzymes and arachidonic acid metabolites, thereby delaying initiation of the repair phase [6]. In particular, overexpression of TNF-α and IL-6 leads to destructive effects in wound healing [30] and in different pathological conditions of the skin [24]. Inhibition of these mediators may regulate the progress of cutaneous wound healing and thus represents a good therapeutic target [12]. In the present study, the levels of IL-6 and TNF-alpha were significantly increased in the excision wound control, signifying disturbances in wound healing as well as enhancement of fibrosis and scarring in the wounded area. The levels of IL-6 (Interleukin 6) and TNF-alpha (Tumor necrosis factor-α) were lower on treatment with the herbal ointment, indicating progress of wound healing and decreased fibrosis and scarring.
In the untreated animals, on the other hand, tissue injury causes platelet aggregation and activation, which results in the release of pro-inflammatory cytokines. These cytokines activate macrophages, and the activated macrophages release higher concentrations of IL-6 and TNF-alpha. The elevated levels of IL-6 and TNF-alpha produce oxidative stress at the wound site. This oxidative stress arrests the migration and proliferation of fibroblasts and keratinocytes at the wound site, delaying wound healing. On treatment with the herbal ointment containing Calendula officinalis Linn., macrophage activation was inhibited by preventing the release of pro-inflammatory cytokines, thereby reducing the oxidative stress at the wound site; this results in the rapid migration and proliferation of keratinocytes and fibroblasts, which is responsible for wound healing (Fig. 2a).
a. Effect of herbal ointment containing Calendula officinalis Linn. on Interleukin 6 (IL-6). b. Mechanism of Epidermal Growth Factor (EGF) in wound healing
Epidermal growth factor (EGF) is primarily produced by platelets and is present in high concentrations during the earliest stages of wound healing [31]. The EGF receptor (EGFR) is a tyrosine kinase receptor that may be activated by several other ligands, including TGF-α (Transforming growth factor α) and heparin-binding EGF. The activated EGFR phosphorylates itself and other substrates on tyrosine residues, setting off a pleiotropic signaling cascade that may result in enhanced cell motility, protein secretion, differentiation or dedifferentiation, and mitogenesis or apoptosis. In wound healing, EGFR signaling regulates cell adhesion, expression of matrix-degrading proteinases, and cell locomotion. EGF increases the rate of epithelialization of wounds and reduces scarring by preventing excessive wound contraction. EGF is an important growth factor that plays a major role in wound healing through the stimulation, proliferation and migration of keratinocytes, endothelial cells and fibroblasts, which facilitates the regeneration of injured tissue. In the present investigation, the level of EGF was decreased in the excision wound control, indicating defects in fibroblast proliferation. The level of EGF was significantly increased on treatment with the herbal ointment, indicating migration and proliferation of keratinocytes and fibroblasts, lower scarring, and a shorter epithelialization period, all of which play a crucial role in healing.
The mechanism of EGF in wound healing is as follows: upon epithelial injury, ATP is released and acts as an early signal for cellular responses; it also initiates intracellular Ca2+ signalling, resulting in the activation of ADAM proteases. ADAM activates EGFR, which produces regulatory responses through PI3K, ERK and other signalling molecules, resulting in fibroblast proliferation. Ultimately, the migration and proliferation of keratinocytes and fibroblasts play a major role in wound healing (Fig. 2b).
PDGF is stored in platelets and released in abundance from degranulating platelets during the clotting cascade at the time of wounding [17]. A variety of human cell types important in wound healing secrete PDGF, including placental cells, macrophages, monocytes, fibroblasts, vascular smooth muscle cells and endothelial cells. PDGF is a potent mitogen for fibroblasts, glial cells and smooth muscle cells and is a chemoattractant for neutrophils and macrophages. These activities assist in angiogenesis, fibroblast hyperplasia and collagen deposition, re-epithelialization and granulation tissue formation at the wound bed [4]. The release of PDGF (Platelet Derived Growth Factor) is essential for wound repair [7], as it increases collagenase production in fibroblasts and facilitates their migration through, and remodeling of, the wound matrix [3, 10, 34]. In the present study, the level of PDGF was decreased in the excision-wounded control, indicating impaired fibroblast proliferation in the wounded tissue. The level was higher on treatment with the herbal ointment, signifying re-epithelialization, granulation tissue formation and increased collagen synthesis in the wounded area.
The data depicted in Table 3 show the levels of hydroxyproline, hexosamine and tissue protein in the granulation tissue of the excision-wounded animals. The levels of hydroxyproline, hexosamine and tissue protein were significantly reduced in excision-wounded rats when compared to normal rats. Upon treatment with the herbal ointment, these levels were increased.
Table 3 Effect of herbal ointment on hydroxyproline, hexosamine and tissue protein in excision wounded animals
Collagen is a major protein of the extracellular matrix and is the component that ultimately contributes to wound strength. Collagen not only confers strength and integrity to the tissue matrix but also plays an important role in homeostasis and epithelialization in the later stages of wound healing [13]. Hydroxyproline is an uncommon amino acid present in the collagen fibers of granulation tissue. The present study revealed increased hydroxyproline content after topical treatment with the herbal ointment, which is a reflection of increased cellular proliferation and therefore increased collagen synthesis.
Hexosamine and hexuronic acid are matrix molecules which act as the ground substratum for the synthesis of new extracellular matrix. Glycosaminoglycans are known to stabilize collagen fibres through electrostatic and ionic interactions and possibly control their ultimate alignment and characteristic size. Their ability to bind and alter protein–protein interactions has identified them as important determinants of cellular responsiveness in development, homeostasis and disease [32]. In the present study, hexosamine concentrations were significantly increased in the herbal ointment treated groups when compared with the excision wound control, indicating stabilization of collagen fibres [27]. Hence the enhanced hydroxyproline and hexosamine synthesis in wound tissue strengthens the injured tissue and induces healing.
Protein is essential for the inflammatory process during wound healing and for the development of granulation tissue. The low protein content in excision-wounded controls signifies delayed wound healing through a prolonged inflammatory phase and inhibition of the fibroplasia and remodeling phases. The concomitant increase in total protein content in the animals treated with herbal ointment signifies active synthesis and deposition of matrix proteins in the granulation tissue, which enhances the wound healing process [16].
The levels of lipid peroxide are depicted in Table 4. A significant elevation in the level of lipid peroxide was noted in the wound tissue; upon treatment with the herbal ointment, the LPO level was normalised. Lipid peroxidation is the oxidative deterioration of polyunsaturated fatty acids, which leads to cellular injury and also generates peroxide radicals. The cytokine cascade activated after a wound injury stimulates phagocytic cells, resulting in the formation of oxygen free radicals and lipid peroxidation. The elevation in LPO seen in excision-wounded animals indicates impaired free radical scavenging capacity in the wounded tissues. The decreased level of lipid peroxide in the herbal ointment treated groups indicates the anti-lipid-peroxidative effect of the herbal ointment containing Calendula officinalis Linn.
Table 4 Effect of Herbal ointment on lipid peroxide (LPO) and superoxide dismutase (SOD) in excision wounded animals
The levels of SOD are shown in Table 4. The level of SOD was found to be lower in the excision-wounded control than in the normal control. Treatment with the herbal ointment raised the level of SOD in excision-wounded animals. The superoxide radical anion is the major ROS generated during the respiratory burst of inflammatory cells and can be detoxified by SOD. The low level of SOD in untreated animals points to increased tissue damage and inhibition of the healing process in the control group. The increased superoxide dismutase (SOD) levels in the herbal ointment treated groups indicate that the tissue damage was being repaired; this scavenging activity appears to be a reflex mechanism to guard against extracellular oxygen-derived free radicals. Thus, the enhanced wound healing may be due to the free radical scavenging action of the plant as well as the enhanced antioxidant enzyme level in the granulation tissue [18].
In conclusion, on the basis of the results obtained in the present investigation, the herbal ointment containing Calendula officinalis Linn. has significant wound healing activity due to improved collagen synthesis, increased wound contraction, and alteration of interleukin 6, Epidermal Growth Factor (EGF), Platelet Derived Growth Factor (PDGF) and Tumor Necrosis Factor-alpha (TNF-α). In further investigations, compounds isolated from Calendula officinalis Linn. will be evaluated in different wound models.
Cm2: Centimeter square
EGF: Epidermal Growth Factor
HO: Herbal Ointment
IL-6: Interleukin 6
LPO: Lipid peroxide
MDA: Malondialdehyde
nM: Nanomoles
PDGF: Platelet Derived Growth Factor
Pg: Picogram
RO: Reference Ointment
SOD: Superoxide dismutase
TNF-alpha: Tumor necrosis factor-α
Agnel Arul John N, Shobana G, Keerthana K. Wound healing efficacy of herbal ointment containing Oldenlandia herbacea Roxb. on excision wounded animals. Int. Res. J. Pharm. 2018;9(8):95–9.
Ansel H, Popovich N. Preparation of topical dosage forms introduction to pharmaceutical dosage forms. 4th ed. Philadelphia, PA, USA: Lea & Febiger; 1985.
Barrientos S, Brem H, Stojadinovic O, Tomic-Canic M. Clinical application of growth factors and cytokines in wound healing. Wound Repair Regen. 2014;22(5):569–78.
Bennett NT, Schultz GS. Growth factors and wound healing: part II. Role in normal and chronic wound healing. Am J Surg. 1993;166:74–81.
British Pharmacopoeia (BP). Department of health and social security Scottish home and health department. Office of the British Pharmacopoeia Commission, UK, vol. 2, 713, 1988.
Ebaid H, Ahmed OM, Mahmoud AM, Ahmed RR. Limiting prolonged inflammation during proliferation and remodeling phases of wound healing in streptozotocin-induced diabetic rats supplemented with camel undenatured whey protein. BMC Immunol. 2013;14:31.
Falanga V. Chronic wounds: Pathophysiologic and experimental considerations. J Invest Der-Matol. 1993;100:721–5.
Gabbiani G, Hirschel BJ, Ryan GB. Granulation tissue as a contractile organ. J Exp Med. 1972;135:719.
Grellner W, Georg T, Wilske J. Quantitative analysis of proinflammatory cytokines (IL-1beta, IL-6, TNF-alpha) in human skin wounds. Forensic Sci Int. 2000;113(1–3):251–64.
Herndon DN, Hayward PG, Rutan RL, Barow RE. Growth hormones and factors in surgical patients. Adv Surg. 1992;25:65–97.
Kishimoto S, Maoka T, Sumitomo K, Ohmiya A. Analysis of carotenoid composition in petals of Calendula (Calendula officinalis L.). Biosci Biotechnol Biochem. 2005;69:2122–8.
Kolomytkin OV, Marino AA, Waddell DD, Mathis JM, Wolf RE, Sadasivan KK, et al. IL-1beta-induced production of metalloproteinases by synovial cells depends on gap junction conductance. Am J Physiol Cell Physiol. 2002;228(6):1254–60.
Landsman A, Taft D, Riemer K. The role of collagen bioscaffolds, foamed collagen, and living skin equivalents in wound healing. Clin Podiatr Med Surg. 2009;26:525–33.
Loot. Fibroblasts derived from chronic diabetic ulcers differ in their response to stimulation with EGF, IGF-I, bFGF and PDGF-AB compared to controls. Eur. J Cell Biol. 2002;81:153.
Lowry OH, Rose Brough NJ, Farr AL, Randall RJ. Protein measurement with Folin phenol reagent. J Biol Chem. 1951;193:265–75.
MacKay D, Meller AL. Nutritional support for wound healing. Altern Med Rev. 2003;8(4):359–77.
Martin P, Hopkinson-Wooley J, McCluskey J. Growth factors and cutaneous wound repair. Prog Growth Factor Res. 1992;4:25–44.
Meenakshi S, Ragavan G, Nath V, Ajay Kumar SR, Shanta M. Antimicrobial, wound healing and antioxidant activity of Plagiochasma appendiculatum. J Ethanopharmacol. 2006;1:67–72.
Miliauskas G, Venskutonis PR, Van Beek TA. Screening of radical scavenging activity of some medicinal and aromatic plant extracts. Food Chem. 2004;85:231–7.
Misra HP, Fridovich I. The role of super oxide anion in the auto oxidation of epinephrine and a simple assay for SOD. J Biol Chem. 1972;247:3170–5.
Mortone JP, Malone MH. Evaluation of vulnerary activity by and open wound procedure in rats. Arch Int Pharm Ther. 1972;196(6):117–36.
Muley BP, Khadabadi SS, Banarase NB. Phytochemical constituents and pharmacological activities of Calendula officinalis L. (Asteraceae): a review trop J. Pharm Res. 2009;8:455–65.
Ohkawa H, Ohishi N, Yagi K. Assay of lipid peroxides in animal tissues for Thiobarbituric acid reaction. Anal Biochem. 1979;95:351–8.
Paquet P, Pierard GE. Interleukin-6 and the skin. Int Arch Allergy Immunol. 1996;109(4):308–17.
Priya K, Arumugam G, Rathinam B, Wells A, Babu M. Celosia argentea Linn. Leaf extract improves wound healing in a rat burn wound model. Wound Repair Regen. 2004;12:618–25.
Re TA, Mooney D, Antignac E, Dufour E, Bark I, Srinivasan V, Nohynek G. Application of the threshold of toxicological concern approach for the safety evaluation of Calendula flower (Calendula officinalis L.) petals and extracts used in cosmetic and personal care products. Food Chem Toxicol. 2009;47:1246–54.
Ricard-Blum S, Ruggiero F. The collagen superfamily: from the extracellular matrix to the cell membrane. Pathol Biol. 2005;53:430–42.
Röhl J, Zaharia A, Rudolph M, Murray AZ. The role of inflammation in cutaneous repair. Wound Pract Res. 2015;23(1):8–15.
Sadaf F, Saleem R, Ahamed M, Ahamed SI, Navaid-ul-Zafar. Healing potential of cream containing extract of sphaeranthus indicus on dermal wounds in Guinea pigs. J. Ethanopharmacol. 2006;107:161–3.
Singer AJ, Clark RAF. Mechanisms of disease: cutaneous wound healing. New Engl J Med. 1999;341(10):738–46.
Steed DL. The role of growth factors in wound healing. Surg Clin North Am. 1997;77:575–86.
Trowbridge JM, Gallo RL. Dermatan sulfate: new functions from an old glycosaminoglycan. Glycobiology. 2002;12(9):117–25.
Wagner WD. A more sensitive assay discriminating galactosamine and glucosamine in mixtures. Anal Biochem. 1979;94:394–6.
Werner S, Grose R. Regulation of wound healing by growth factors and cytokines. Physiol Rev. 2003;83:835–70.
Woessner JF Jr. Catabolism of collagen and non-collagen protein in rat uterus during post-partum involution. Biochem J. 1962;83:304–14.
The authors are thankful to the Management of Srimad Andavan Arts and Science College (Autonomous) for providing research facilities, and to Dr. G. Jothi, Dean of Life Sciences, and the Head and the teaching and non-teaching faculty of the Department of Biochemistry, Srimad Andavan Arts and Science College (Autonomous), Tiruchirappalli, Tamil Nadu, India, for guidance in completing this research work.
The study was self-financed.
PG and Research Department of Biochemistry, Srimad Andavan Arts and Science College (Autonomous), Tiruchirappalli, Tamil Nadu, India
Shobana Gunasekaran
Sri Ranga Ramanuja Centre for Advanced Research in Sciences, Srimad Andavan Arts and Science college (Autonomous) Affiliated to Bharathidasan University, Tiruchirappalli, Tamil Nadu, 620 005, India
Agnel Arul John Nayagam
PG & Research Department of Biochemistry, Srimad Andavan Arts and Science college (Autonomous) Affiliated to Bharathidasan University, Tiruchirappalli, Tamil Nadu, 620 005, India
Rameshkannan Natarajan
SG collected the plant material, prepared the plan of work, and carried out the research work. RN discussed the plan of work with the IAEC members and obtained approval to carry out this research work. AAJN guided the researcher and helped with writing the manuscript. The authors read and approved the final manuscript.
Correspondence to Shobana Gunasekaran.
The study protocol was approved in accordance with the ethical guidelines of CPCSEA after obtaining the necessary clearance from the committee (Approval No: 790/03/ac/CPCSEA).
The authors consented to the publication of this work in Clinical Phytoscience.
The authors declare that there is no conflict of interests regarding the publication of this paper.
Gunasekaran, S., Nayagam, A.A.J. & Natarajan, R. Wound healing potentials of herbal ointment containing Calendula officinalis Linn. on the alteration of immunological markers and biochemical parameters in excision wounded animals. Clin Phytosci 6, 77 (2020). https://doi.org/10.1186/s40816-020-00215-7
PDGF
TNF-alpha
LPO
EGF etc. | CommonCrawl |
Histone H3 and TORC1 prevent organelle dysfunction and cell death by promoting nuclear retention of HMGB proteins
Hongfeng Chen1,
Jason J. Workman1,
Brian D. Strahl2 &
R. Nicholas Laribee (ORCID: orcid.org/0000-0003-4519-4470)1
How cells respond and adapt to environmental changes, such as nutrient flux, remains poorly understood. Evolutionarily conserved nutrient signaling cascades can regulate chromatin to contribute to genome regulation and cell adaptation, yet how they do so is only now beginning to be elucidated. In this study, we provide evidence in yeast that the conserved nutrient regulated target of rapamycin complex 1 (TORC1) pathway, and the histone H3N-terminus at lysine 37 (H3K37), function collaboratively to restrict specific chromatin-binding high mobility group box (HMGB) proteins to the nucleus to maintain cellular homeostasis and viability.
Reducing TORC1 activity in an H3K37 mutant causes cytoplasmic localization of the HMGB Nhp6a, organelle dysfunction, and both non-traditional apoptosis and necrosis. Surprisingly, under nutrient-rich conditions the H3K37 mutation increases basal TORC1 signaling. This effect is prevented by individual deletion of the genes encoding HMGBs whose cytoplasmic localization increases when TORC1 activity is repressed. This increased TORC1 signaling also can be replicated in cells by overexpressing the same HMGBs, thus demonstrating a direct and unexpected role for HMGBs in modulating TORC1 activity. The physiological consequence of impaired HMGB nuclear localization is an increased dependence on TORC1 signaling to maintain viability, an effect that ultimately reduces the chronological longevity of H3K37 mutant cells under limiting nutrient conditions.
TORC1 and histone H3 collaborate to retain HMGBs within the nucleus to maintain cell homeostasis and promote longevity. As TORC1, HMGBs, and H3 are evolutionarily conserved, our study suggests that functional interactions between the TORC1 pathway and histone H3 in metazoans may play a similar role in the maintenance of homeostasis and aging regulation.
In response to changing environmental conditions, such as nutrient fluxes or the presence of stress, eukaryotic cells adapt by modulating chromatin structure and their gene expression programs [1]. Chromatin alterations are largely controlled by DNA methylation, histone post-translational modifications, ATP-dependent nucleosome remodeling, histone chaperones, and histone variants [2, 3]. Oftentimes, the changes made to chromatin and the subsequent regulation of key transcriptional programs are the endpoint for such environmentally responsive chromatin pathways. In some cases, however, chromatin changes function not as an endpoint but instead propagate this information to regulate additional nuclear and/or cytoplasmic processes [1]. Thus, signaling to and from chromatin impacts a wide range of biological activities. While the mechanisms controlling chromatin structure continue to be elucidated, how environmental information is transferred to the chromatin and transcription regulatory apparatus, and its impact on chromatin's signaling functions, remains poorly understood.
The target of rapamycin (TOR) pathway is a highly conserved signaling cascade essential for cell growth, proliferation, and suppression of stress responses [4, 5]. TOR consists of two distinct subpathways composed of TOR complex 1 (TORC1) and TOR complex 2 (TORC2). Environmental nutrients, growth factors/mitogens, and energy specifically activate TORC1, which then stimulates numerous downstream transcriptional and translational processes regulating anabolism [5]. Simultaneously, TORC1 suppresses catabolic stress responses such as autophagy [4, 5]. Budding yeast TORC1 consists of either the Tor1 or Tor2 kinase and the Kog1, Lst8, and Tco89 subunits. While TORC1 activity is essential, yeast lacking the Tor1 or Tco89 subunits is viable. However, these mutants exhibit hypersensitivity to agents that suppress TORC1, including the specific inhibitor rapamycin, nutrient starvation, and other environmental stresses [4].
Yeast TORC1 is activated predominantly by nitrogen, in particular amino acids, which is registered by the EGO complex. EGO consists of the Ego1-3 subunits, as well as the Rag GTPases Gtr1 and Gtr2. EGO resides in the vacuole membrane where it senses luminal amino acid accumulation and then activates vacuole-localized TORC1 in response [6]. The V-ATPase complex, which is the resident proton pump in the vacuole membrane, also interacts with EGO to stimulate TORC1 as well [7]. Active TORC1 then either phosphorylates downstream effectors to mediate its biological functions, or it activates some processes directly, including transcription by RNA polymerase I and III (Pol I and Pol III) [8, 9]. TORC1 downstream effector pathways include direct phosphorylation of the Sch9 kinase to promote ribosomal biogenesis, as well as activation of the Ypk3 kinase to phosphorylate ribosomal protein S6 [10, 11]. TORC1 also phosphorylates the regulatory factor Tap42 which binds to PP2A- and PP2A-like phosphatases. Tap42 sequesters these enzymes onto the vacuole surface and restricts their access to substrates, many of which regulate nutrient stress responses [4, 12, 13]. Therefore, TORC1 relays upstream environmental information to the downstream biochemical machinery controlling cell growth and proliferation.
Emerging studies also implicate TORC1 in chromatin regulation, suggesting TORC1 may mediate environment–epigenome interactions. For example, the yeast Esa1 histone acetyltransferase is recruited in a TORC1-regulated fashion to ribosomal protein (RP) genes to promote histone acetylation and gene transcription [14]. TORC1 also regulates histone H3 lysine 56 acetylation (H3K56ac), and this process is important for TORC1-dependent transcription of the ribosomal DNA (rDNA) loci by Pol I and ribosomal RNA co-transcriptional processing [15]. Further studies of yeast rDNA regulation have determined that rDNA copy number increases in a TORC1-dependent manner which is a process that is actively opposed by the sirtuins Sir2, Hst3, and Hst4 [16]. Thus, the eukaryotic genome adaptively alters gene copy number in response to environmental stimuli through a mechanism involving TORC1-dependent epigenetic regulation.
Recently, we sought to identify histone H3 or H4 residues that exhibited genetic interactions with TORC1 and as such might function in TORC1-regulated epigenetic mechanisms. Using a rapamycin-based chemical genomics screen, we determined that a mutation of H3 lysine 37 (H3K37) was synthetically lethal when combined with decreased TORC1 signaling [17]. Because H3K37 regulates high mobility group box (HMGB) association to chromatin, we examined the fate of HMGB proteins in the H3K37 mutant. Our initial studies found that an H3K37 mutant disrupted chromatin binding of the yeast HMGB protein Nhp10, causing a significant fraction of Nhp10 to localize to the cytosol when TORC1 signaling was reduced. Increased Nhp10 cytosolic accumulation correlated with massive cell death in the H3K37 mutant when TORC1 was inhibited [17]. However, whether decreased chromatin binding by Nhp10 or other HMGBs was the proximal cause of cell death under these conditions was not defined.
Besides histones, HMG proteins are the next most abundant protein component of chromatin [18]. The HMGB family is evolutionarily conserved from yeast to humans, and they function in all genome-regulatory processes by binding to minor groove DNA to create altered DNA structures [18]. While some HMGB domain-containing transcription factors bind DNA in a sequence-specific fashion, non-transcription factor HMGBs bind in a sequence-independent, but chromatin context-dependent manner. HMGBs also have additional regulatory roles independent of DNA binding. For instance, necrotic mammalian cells release the prototypical HMGB factor, HMGB1, into the extracellular milieu. This extracellular HMGB1 stimulates inflammatory processes by binding to Toll-like and RAGE receptors on innate immune cells [18]. Additionally, cytosolic HMGB1 has important roles in regulating mitochondrial function and cellular metabolism [19, 20]. Intriguingly, in vitro binding assays using HMGB1 and nucleosomal DNA demonstrate that HMGB1 selectively makes contacts with the H3 tail at multiple positions, including histone H3 lysine 36 (H3K36) and H3K37 [21, 22]. Although performed in vitro, these studies reinforce the possibility that the functional genetic interactions identified between TORC1 and H3K37 involve HMGB chromatin binding.
The mechanisms underlying the connections between HMGB chromatin binding and TORC1 have remained unclear. In this study, we provide further evidence that TORC1 and H3K37 function synergistically to retain specific HMGBs in the nucleus. Impairment of both TORC1 signaling and H3K37 causes these HMGBs to accumulate in the cytoplasm. Once cytoplasmic, these HMGBs induce both an atypical mitochondrial-dependent apoptosis and necrosis caused by vacuole dysregulation and impaired pH homeostasis. Surprisingly, we find that either reduced HMGB chromatin binding or HMGB dysregulation increases TORC1 signaling which severely curtails the chronological aging process. These results demonstrate that TORC1 signaling and H3K37 act to maintain cellular homeostasis and promote longevity by restricting HMGB localization to the nucleus.
Histone H3K37 disruption differentially affects the nuclear localization of specific HMGBs
Our rapamycin-based genetic screens determined that H3K37A sensitized cells to reduced TORC1 activity [17]. This effect is specific to H3K37A, as mutation of flanking residues, including the adjacent methylated H3K36, has no effect (Fig. 1a). H3K37A rapamycin sensitivity is independent of H3K37 post-translational modification since both H3K37R and H3K37Q restore growth on rapamycin plates, albeit H3K37Q mutants do have a modest growth advantage under these conditions (Fig. 1b). To confirm these results are not due solely to the histone H3/H4 library background, we utilized a well-characterized histone shuffle strain and shuffled into it vectors expressing H3 wild type (H3WT), H3K37A, or H3K37R as the sole source of histone H3 [23]. These cells, along with the H3WT and H3K37A derived from the library, were spotted to control or 10 nM rapamycin plates. The shuffle strain expressing H3WT is considerably more rapamycin sensitive than the H3WT from the library; however, in the shuffle background the H3K37A is still sensitive to TORC1 inhibition, while the H3K37R restores growth (Fig. 1c). These results suggest that the TORC1 phenotype caused by H3K37A is not due solely to loss of electrostatic charge or post-translational modification (since both H3K37R and H3K37Q rescue), but is instead due to impairment of a protein–protein contact at H3K37 caused by incorporation of the small hydrophobic alanine at this position. Because both arginine and glutamine have the potential to form electrostatic or polar interactions, their incorporation at H3K37 likely restores this disrupted function.
H3K37 is required for cell death suppression in TORC1-inhibited cells independent of histone modification status. a Spotting assay with H3WT and the indicated histone H3 mutants on control plates or plates containing 20 nM rapamycin. b As in a except a mutant that restores electrostatic charge (H3K37R) or mimics constitutive acetylation (H3K37Q) was analyzed. c H3WT and H3K37A mutants from a were spotted in parallel with a histone shuffle strain containing as the sole source of histone H3 either H3WT, H3K37A, or H3K37R. d As in a, except H3WT, H3K37A, and an H3 mutant lacking the first 32N-terminal amino acids (H3Δ1-32) were analyzed. e H3WT and H3K37A expressing control vector or vector expressing the rapamycin-resistant TOR1-1 allele were spotted to plasmid-selective media (SC-Leu) or media containing 15 nM rapamycin. f H3WT and H3K37A cells were mock treated or 20 nM rapamycin treated for the indicated times and then stained with YO-PRO-1 to detect apoptosis (Yo+ PI−) and propidium iodide to detect necrosis (Yo+ PI+). Cells were then analyzed by flow cytometry
The histone H3N-terminus has several sites of post-translational modifications that contribute to a diverse array of chromatin functions, including gene transcription [24]. To address the interaction of the H3N-terminus with the TORC1 pathway, we compared the rapamycin sensitivity of H3WT, H3K37A, and an H3N-terminal truncation mutant (H3Δ1-32) lacking the majority of post-translationally modified N-terminal residues. H3Δ1-32-expressing cells exhibit increased sensitivity to TORC1 inhibition relative to H3WT; however, they still retain the ability to grow under these conditions unlike the complete growth impairment detected with H3K37A (Fig. 1d). The growth inhibition caused by rapamycin in H3K37A is due solely to TORC1 suppression as cells expressing a rapamycin-resistant TOR1-1 vector grow comparable to TOR1-1-expressing H3WT (Fig. 1e) [25]. Therefore, while post-translationally modifiable positions on the H3N-terminus likely contribute to mediating some sensitivity to TORC1 inhibition, H3K37 provides an absolutely essential, highly specific function in this regard.
We previously demonstrated that extended TORC1 inhibition in H3K37A caused cell death by necrosis [17]. To gauge how quickly cell death occurs in H3K37A, and to determine whether it is solely necrotic or whether it may encompass both necrosis and apoptosis at early stages, we mock- or 20 nM rapamycin-treated H3WT and H3K37 for increasing lengths of time. Cells were then stained with YO-PRO-1 (which stains early apoptotic cells) and propidium iodide (PI, which stains necrotic cells) and analyzed by flow cytometry [26]. No cell death occurred in mock- or rapamycin-treated H3WT or mock-treated H3K37A over the course of the experiment (Fig. 1f). After TORC1 inhibition in H3K37A, negligible apoptosis (Yo+ PI−) and necrosis (Yo+ PI+) occurred at 60 min, while both were detected by 90 min (Fig. 1f). These results indicate that decreased TORC1 signaling in H3K37A causes cytotoxicity through both apoptosis and necrosis and that this does not begin significantly until after 60-min post-TORC1 inhibition.
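As an illustration of the two-color classification used here (Yo+ PI− scored as apoptotic, Yo+ PI+ as necrotic), the sketch below applies simple intensity gates to simulated events; the thresholds, distributions, and counts are all hypothetical and are not derived from the study's cytometry data:

```python
import numpy as np

def classify_events(yo_pro, pi, yo_gate=1000.0, pi_gate=1000.0):
    """Count viable (Yo- PI-), apoptotic (Yo+ PI-), and necrotic (Yo+ PI+) events."""
    yo_pos = yo_pro > yo_gate
    pi_pos = pi > pi_gate
    return {
        "viable": int(np.sum(~yo_pos & ~pi_pos)),
        "apoptotic": int(np.sum(yo_pos & ~pi_pos)),
        "necrotic": int(np.sum(yo_pos & pi_pos)),
    }

# Simulated fluorescence intensities for 10,000 events.
rng = np.random.default_rng(0)
yo = rng.lognormal(mean=6.0, sigma=1.2, size=10_000)
pi = rng.lognormal(mean=5.5, sigma=1.2, size=10_000)
print(classify_events(yo, pi))
```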
H3K37A impairs Nhp10 chromatin binding, while TORC1 inhibition exacerbates this effect to cause Nhp10 cytoplasmic accumulation [17]. Since HMGB deregulation is cytotoxic [17], we speculated that TORC1 inhibition results in H3K37A cytotoxicity by causing HMGB cytoplasmic accumulation. To test this, we generated a series of reporter strains expressing genomically integrated EGFP reporter tags at the loci encoding the HMGBs ABF2, HMO1, IXR1, and NHP6A in H3WT and H3K37A cells. Live cell confocal microscopy of mock treated or cells treated for 1 h with 20 nM rapamycin revealed highly specific effects on HMGB cellular localization. Nhp6a was exclusively localized to the nucleus in H3WT independent of TORC1, while it was mostly nuclear in H3K37A mock-treated cells. However, in H3K37A TORC1 inhibition caused a fraction of Nhp6a to become cytosolic (Fig. 2a, b). These data were in stark contrast to those found for Ixr1. In H3WT cells, Ixr1 remained nuclear in both mock- and rapamycin-treated cells. However, in mock-treated H3K37A, Ixr1 accumulated in the cytoplasm which was reversed when TORC1 signaling was diminished (Fig. 2c, d). The HMGB Abf2 is used to demarcate mitochondria as it localizes exclusively to this organelle [27]. As expected, Abf2 localization remained in the cytoplasm in either H3WT or H3K37A irrespective of TORC1 activity, thus demonstrating the nuclear-specific effects of the histone mutation (Additional File 1: Figure S1a). Interestingly, the mitochondria in the TORC1-inhibited H3K37A cells appear to be more elongated relative to the rapamycin-treated H3WT cells, suggesting the possibility that increased mitochondrial stress may be occurring in these cells (Additional File 1: Figure S1a). Such an interpretation would be consistent with the increase in apoptotic cell death detected in TORC1-inhibited H3K37A cells (Fig. 1f) since apoptosis is a mitochondrial-dependent process [28]. Additionally, and consistent with our previous study, the nuclear localization of the TORC1 transcriptional effector HMGB, Hmo1, was unaffected under both active and reduced TORC1 signaling conditions in both H3WT and H3K37A (Additional File 1: Figure S1b) [17]. Therefore, H3K37A impairs the nuclear localization of select HMGB factors under both normal and reduced TORC1 signaling conditions.
Histone H3 and TORC1 differentially regulate Nhp6a and Ixr1 cellular localization. Confocal microscopy and brightfield images of H3WT and H3K37A expressing either Nhp6a-EGFP (a, b) or Ixr1-EGFP (c, d). Cells were mock or 20 nM rapamycin treated for 1 h. The outline of individual cells is demarcated by the line trace. The nucleus is indicated by Hoechst (blue) staining. Scale bar indicates 5 μm for all live cell confocal images
Because TORC1 inhibition in H3K37A increased Nhp6a-EGFP cytoplasmic localization, which correlated with induction of cell death, we analyzed this HMGB further. Nhp6a-EGFP strains, along with cells expressing Nhp6a-EGFP in an H3K37R background, were cultured to log phase and either mock treated or treated with 20 nM rapamycin for 1 h, before analysis by confocal microscopy. As expected, Nhp6a localized exclusively to the nucleus in H3WT regardless of TORC1 activity, while rapamycin treatment reduced (by ~15 %) the nuclear Nhp6a pool in H3K37A (Fig. 3a, b). The H3K37R, which restores growth under impaired TORC1 signaling conditions (Fig. 1b, c), completely restored Nhp6a nuclear localization (Fig. 3a, b). To unequivocally confirm these effects on Nhp6a localization were due solely to TORC1 inhibition, we transformed H3WT and H3K37A Nhp6a-EGFP-expressing cells with control or rapamycin-resistant TOR1-1 expression vector. These cells were cultured in selective, nutrient-defined media buffered to pH 6.5 and then mock or 20 nM rapamycin treated for 2 h before confocal microscopy analysis. Consistent with our previous results, Nhp6a nuclear localization was unaffected in H3WT expressing control or TOR1-1 expression vector under either condition (Additional File 1: Figure S2a-b). Rapamycin-treated H3K37A cells with control vector exhibited increased Nhp6a movement to the nuclear periphery and into the cytoplasm, while the expression of the rapamycin-resistant TOR1-1 allele completely restored Nhp6a nuclear localization (Additional File 1: Figure S2a-b). This effect correlates with the restoration of cell viability as well (Fig. 1e). Nhp6a and its paralog Nhp6b are redundant components of the FACT histone chaperone [29, 30]. To determine whether rapamycin-induced Nhp6a cytoplasmic localization involved FACT, we integrated an EGFP tag at the SPT16 genomic locus (a core FACT subunit) in H3WT and H3K37A and repeated these experiments. Spt16 remained nuclear in both H3WT and H3K37A, irrespective of TORC1 activity (Fig. 3c). Therefore, H3K37A causes a subpopulation of Nhp6a not affiliated with FACT to localize to the cytoplasm when TORC1 activity is limiting, suggesting Nhp6a cytoplasmic accumulation may be connected to cell death induction.
TORC1 and histone H3 regulate Nhp6a nuclear localization independently of the FACT histone chaperone complex. a Mock or 1-h 20 nM rapamycin-treated H3WT, H3K37A, and H3K37R cells expressing Nhp6a-EGFP were analyzed by confocal microscopy. Cell outlines are indicated by the line trace. b Quantification of Nhp6a nuclear localization from three independent experiments with the average and standard deviation (SD) plotted. One-way ANOVA was performed across all categories which is indicated by the dashed line, while the solid black line indicates the pairwise comparison analyzed by Student's t test. *P < 0.05; **P < 0.01. c. As in a, except cells expressing Spt16-EGFP were analyzed. Brightfield images for a and c are in Additional File 1: Figure S3
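As a sketch of how a nuclear-localization fraction of this kind can be computed from per-pixel intensities, assuming cell and nuclear masks are already available (e.g., from brightfield and Hoechst segmentation); the image and masks below are simulated placeholders, not the study's micrographs:

```python
import numpy as np

def nuclear_fraction(gfp, cell_mask, nuc_mask):
    """Fraction of total cellular GFP signal that falls inside the nucleus."""
    total = gfp[cell_mask].sum()
    nuclear = gfp[nuc_mask & cell_mask].sum()
    return float(nuclear / total) if total > 0 else 0.0

# Simulated 64x64 GFP image with toy rectangular cell and nuclear masks.
rng = np.random.default_rng(1)
gfp = rng.random((64, 64))
cell = np.zeros((64, 64), dtype=bool)
cell[8:56, 8:56] = True
nuc = np.zeros((64, 64), dtype=bool)
nuc[24:40, 24:40] = True
print(f"nuclear fraction: {nuclear_fraction(gfp, cell, nuc):.2f}")
```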
TORC1 inhibition in H3K37A induces cell death through both mitochondrial and vacuole dysfunction
If cytoplasmic localization of any single HMGB induces cell death in H3K37A TORC1-inhibited cells, then loss of this HMGB may rescue growth of these cells. To test this possibility, we deleted five of the seven HMGB encoding genes in both H3WT and H3K37A. We were unable to obtain an ABF2 gene deletion for unknown reasons, and the NHP6B gene, which encodes the Nhp6a paralog, overlaps with an uncharacterized gene, so it was not tested. Of the five HMGB deletions examined, only ixr1∆ weakly rescued H3K37A growth on rapamycin plates (Fig. 4a). Intriguingly, ixr1∆ also resulted in more robust growth of H3WT when TORC1 was inhibited (Fig. 4a), suggesting its loss provided a generalized growth advantage. These results were surprising since Ixr1 completely localizes to the nucleus in H3K37A TORC1-inhibited cells (Fig. 2c, d). To further characterize the mechanism involved, we cultured H3WT, H3K37A, and their ixr1∆ derivatives to log phase, mock treated or treated with 20 nM rapamycin for 5.5 h, and then stained cells to quantify the population of both apoptotic and necrotic cells. We also engineered mitochondria-deficient (ρ°) derivatives to determine whether the cell death required mitochondria since this organelle regulates apoptosis [28]. TORC1 inhibition had no effect on H3WT or its derivatives, while it induced both apoptosis and necrosis in H3K37A (Fig. 4b). Individual loss of either functional mitochondria or ixr1∆ completely abolished apoptosis in TORC1-inhibited H3K37A cells, while having negligible effects on necrosis (Fig. 4b). This necrosis was not due to parallel roles for mitochondria and Ixr1 in cell death regulation since H3K37A ixr1∆ ρ° cells exhibited comparable amounts of necrosis relative to individual H3K37A ρ° or H3K37A ixr1∆ cells (Fig. 4b).
Histone H3 and TORC1 synergistically suppress both apoptosis and necrosis. a Spotting assay with H3WT, H3K37A, and the indicated HMG gene deletions. b Flow cytometry analysis of the indicated strains cultured to log phase and then mock treated or treated with 20 nM rapamycin for 5.5 h before staining with YO-PRO-1 and PI. Data are the average and SD of three independent experiments. c As in b except staining was performed only with YO-PRO-1 to solely detect apoptotic cells. d cDNA samples from H3WT and H3K37A mock or 20 nM rapamycin treated for 1 h were analyzed for CIT2 expression. Data are the average and SD of five independent experiments. e As in b except cells were stained with DHE. The average and SD of three independent experiments are presented. f Spotting assay on selective media (SC-Leu) with H3WT and H3K37A carrying either a control vector or an SOD1 high copy expression vector (OE). g Spotting assay with H3WT, H3K37A, and their derivatives lacking Ixr1 (ixr1∆), functional mitochondria (ρ°), or both. h As in a except genes encoding the indicated apoptotic effectors were deleted either individually or in combination. For all statistical analyses, one-way ANOVA was performed across all categories, which is indicated by the dashed line, while the solid black lines indicate the specific pairwise comparisons which were analyzed by Student's t test. *P < 0.05; **P < 0.01; ***P < 0.005
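The figure legends describe the same two-step statistical workflow throughout: a one-way ANOVA across all categories, followed by targeted pairwise Student's t tests. The following is a minimal sketch of that workflow in Python; the group names and measurements are hypothetical placeholders, not data from this study.

```python
# Two-step testing scheme from the figure legends: one-way ANOVA across
# all categories, then a specific pairwise comparison by Student's t test.
# All values below are hypothetical placeholders.
from scipy import stats

groups = {
    "H3WT_mock":   [5.1, 4.8, 5.3],
    "H3WT_rap":    [5.0, 5.2, 4.9],
    "H3K37A_mock": [5.2, 5.0, 5.1],
    "H3K37A_rap":  [12.4, 11.8, 13.1],
}

# One-way ANOVA across all categories (the dashed line in the figures)
f_stat, anova_p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {anova_p:.4g}")

# Pairwise comparison (a solid line), e.g., mock vs. rapamycin in H3K37A
t_stat, t_p = stats.ttest_ind(groups["H3K37A_mock"], groups["H3K37A_rap"])
print(f"t test: t = {t_stat:.2f}, P = {t_p:.4g}")
```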
Both Ixr1 and the HMGB Rox1 repress genes encoding mitochondrial components [31]. To test how specific the ixr1∆-dependent apoptosis suppression was, we repeated the mock and 5.5-h rapamycin treatment and compared apoptosis levels in H3WT, H3K37A, and either the ixr1∆ or rox1∆ in these backgrounds. TORC1 inhibition induced apoptosis in H3K37A, which was suppressed by ixr1∆, whereas rox1∆ had no effect (Fig. 4c). These results suggest that apoptosis suppression in ixr1∆ is likely not due to altered transcription of Ixr1 and Rox1 co-regulated genes. Therefore, both mitochondria and Ixr1 regulate apoptosis in TORC1-limited cells, while neither affects the concomitant necrosis that occurs.
Reduced TORC1 activity increases mitochondrial function in part because the TORC1 subunit, Lst8, functions in retrograde signaling [32]. Because H3K37A rapamycin-induced apoptosis requires mitochondria, we investigated whether TORC1 inhibition in H3K37A altered retrograde signaling. Log phase cultures of H3WT and H3K37A were either mock or 20 nM rapamycin treated for one hour before analyzing expression of the retrograde-inducible CIT2 gene [33]. Consistent with the increased mitochondrial function that occurs when TORC1 is inhibited, CIT2 was upregulated approximately tenfold in rapamycin-treated H3WT cells compared to control (Fig. 4d). While H3K37A had no effect on CIT2 expression in mock-treated cells, CIT2 was induced to significantly higher levels in rapamycin-treated H3K37A relative to the comparable H3WT (Fig. 4d). Since retrograde activation occurs as a consequence of mitochondrial dysfunction, we repeated this experiment and stained cells with dihydroethidium (DHE) to measure reactive oxygen species (ROS) as an indicator of mitochondrial stress. We analyzed the derivative ixr1∆ and rox1∆ mutants as well. Mock-treated H3WT and H3K37A exhibited similar ROS levels, while TORC1 inhibition in H3WT did not alter ROS; however, rapamycin treatment did induce significantly higher ROS in H3K37A (Fig. 4e). Intriguingly, ixr1∆, but not rox1∆, completely suppressed this increased ROS which also correlated with ixr1∆-dependent suppression of apoptosis (Fig. 4b, e). To test whether this ROS caused cell death, we transformed H3WT and H3K37A cells with control vector or a multicopy vector overexpressing superoxide dismutase SOD1, which detoxifies ROS [34]. Surprisingly, SOD1 overexpression did not rescue H3K37A growth when TORC1 was inhibited (Fig. 4f). While these data demonstrate a role for both mitochondrial dysfunction and Ixr1 in regulating TORC1-inhibited H3K37A apoptosis, the results suggest ROS suppression alone is insufficient to restore cell growth.
An ixr1∆, or loss of functional mitochondria, abolished rapamycin-induced H3K37A apoptosis, yet the ixr1∆ ρ° did not substantially reduce necrosis (Fig. 4b). To determine whether the double mutant enhanced the weak growth rescue that the ixr1∆ provided H3K37A cells, we spotted these cells from Fig. 4b to control or 20 nM rapamycin plates. Consistent with our previous results (Fig. 4a), H3WT ixr1∆ cells grew more robustly on rapamycin-containing media (evident by larger colony size), whereas H3WT ρ° cells grew more poorly (Fig. 4g). Surprisingly, while the H3K37A ixr1∆ weakly restored growth, neither the H3K37A ρ° nor the H3K37A ixr1∆ ρ° mutant was capable of growth even though both suppressed apoptosis (Fig. 4b, c, g). These data suggest that mitochondria, while responsible for the TORC1-inhibited H3K37A apoptosis, also provide a required positive function necessary for cell growth under these conditions. To further characterize the apoptosis mechanisms involved, we deleted the genes encoding mitochondrial-regulated apoptosis effectors, including metacaspase (YCA1), endonuclease G (NUC1), and apoptosis-inducing factor (AIF1), either individually or in combination [28]. Intriguingly, none of these mutants rescued H3K37A under TORC1-suppressive conditions (Fig. 4h). These data demonstrate that TORC1 inhibition-induced H3K37A apoptosis requires Ixr1 and mitochondria, but this apoptosis occurs independently of traditional apoptotic effectors and is genetically separable from the concurrent necrosis that occurs.
Loss of Ixr1 results in the transcriptional induction of many genes involved in cell metabolism and stress responses [35]. We considered the possibility that ixr1∆ weakly rescued TORC1-inhibited H3K37A cells indirectly due to transcriptional upregulation of normally Ixr1-repressed genes. We tested this by transforming H3WT and H3K37A cells with a control vector and, for H3K37A, individual galactose-inducible vectors expressing a subset of Ixr1-repressed genes linked to metabolism (MET10, PBI2, GNA1, and GID1) or stress responses (STF2, PAI3, TIR1, and TIR3) [35]. We also included a galactose-inducible YAP1 expression vector, since Yap1 induces stress response genes [36]. Of the genes examined, only TIR1 overexpression weakly rescued H3K37A growth on rapamycin plates (Fig. 5a). TIR1 encodes a cell wall mannoprotein induced under acid stress conditions [37], thus suggesting TORC1 suppression in H3K37A may induce an acid stress response to cause cell death. We tested this directly by culturing H3WT and H3K37A cells to mid-log phase and either mock or 20 nM rapamycin treating them for 30 min. Cells were then stained with the vacuole-specific dye FM4-64 and the pH indicator dye 5(6)-CFDA before performing confocal microscopy [38, 39]. Under physiological pH, 5(6)-CFDA does not significantly fluoresce, but as intracellular pH decreases it becomes fluorescent. Mock- and rapamycin-treated H3WT, as well as mock-treated H3K37A, exhibited no detectable 5(6)-CFDA fluorescence. However, within 30 min of TORC1 inhibition, significant 5(6)-CFDA fluorescence began to accumulate in H3K37A vacuoles, indicating decreased intravacuolar pH (denoted by white arrows, Fig. 5b). This signal accumulated over time only in H3K37A, demonstrating that TORC1 inhibition in H3K37A induces rapid vacuole dysfunction and altered pH homeostasis (Fig. 5c). To determine whether this intracellular acidification contributes to cell death, H3WT, H3K37A, and their ρ° derivatives were spotted to control and rapamycin plates which were either non-buffered or buffered to pH 6.0 with MES. Rapamycin alone prevented H3K37A growth; however, pH buffering weakly rescued H3K37A growth (Fig. 5d). Surprisingly, the H3K37A ρ° mutant grew more poorly under these conditions than did the H3K37A mutant (Fig. 5d). Our data suggest that while mitochondria may regulate apoptosis in H3K37A cells, they also likely provide a positive metabolic function required for cell growth when intracellular pH decreases. These results also demonstrate that the necrosis induced by TORC1 inhibition in H3K37A cells is caused by impaired vacuole homeostasis and pH dysregulation. The ability of ixr1∆ to weakly rescue H3K37A is therefore most likely indirectly caused by the increased expression of acid stress response genes such as TIR1 [35].
TORC1 and histone H3 prevent cell death through vacuole dysfunction and altered pH homeostasis. a H3WT and H3K37A transformed with control vector or the indicated galactose-inducible expression vectors were spotted to the indicated plates. b H3WT and H3K37A were cultured to log phase and then mock or 20 nM rapamycin treated for 30 min before staining with FM4-64 (red, stains vacuole membrane) and 5(6)-CFDA (green upon acidification). Arrows indicate cells where significant fluorescence is detectable. c Experiment was performed as in b, and cells were sampled, stained, and analyzed by flow cytometry at the indicated times post-rapamycin treatment. Data are the average and SD of 3 independent experiments. d H3WT, H3K37A, and ρ° derivatives in these backgrounds were spotted to control synthetic complete (SC) plates or 20 nM rapamycin SC plates that were either non-buffered or buffered to pH 6.0 with MES
Impaired HMGB nuclear localization dysregulates TORC1 signaling and limits chronological longevity
The profound sensitivity of H3K37A cells to TORC1 inhibition suggests that they are overly reliant on TORC1 activity to maintain viability. To explore this concept in greater mechanistic detail, we assessed the strength of TORC1 signaling in H3WT and H3K37A cells cultured in nutrient-rich media to mid-log phase before mock treating or treating with 20 nM rapamycin for 30 min. Cell extracts were prepared, and TORC1 activity was monitored by assessing phosphorylation of ribosomal protein S6 (phosphoS6) [11]. Surprisingly, the mock-treated H3K37A mutant exhibited substantially increased TORC1 activity relative to the comparable mock-treated H3WT, while rapamycin treatment suppressed TORC1 to approximately the same extent in both H3WT and H3K37A (Fig. 6a). To determine whether the elevated TORC1 signaling in H3K37A was due to HMGB dysregulation, we analyzed TORC1 activity in H3WT and H3K37A, as well as their nhp6a∆ or hmo1∆ derivatives. We also analyzed nhp10∆ since its chromatin binding and nuclear localization are impaired in H3K37A similar to that detected for Nhp6a [17]. In the H3WT background, the nhp6a∆, nhp10∆, and hmo1∆ mutations decreased the overall level of TORC1 signaling relative to H3WT, suggesting these HMGBs affect basal TORC1 activity in cells with wild-type chromatin (Fig. 6b). Importantly, nhp6a∆ and nhp10∆, but not hmo1∆, significantly reduced TORC1 signaling in H3K37A to H3WT levels (Fig. 6b). This observation is consistent with the cytoplasmic accumulation of both Nhp6a (Figs. 2a, 3a, b) and Nhp10 [17], but not Hmo1 (Additional File 1: Figure S1b), in TORC1-inhibited H3K37A cells. To test whether HMGB dysregulation directly increases TORC1 activity, we transformed H3WT cells with control or galactose-inducible HA-tagged NHP6A or HMO1 expression vectors, cultured them to log phase in raffinose media, and induced them with galactose for 20 min before preparing cell extracts and analyzing phosphoS6. Hmo1 expression caused a minor increase in phosphoS6, while Nhp6a expression caused a much greater one (Fig. 6c). These results demonstrate that HMGB dysregulation, due to impaired chromatin association or deregulated HMGB expression, stimulates TORC1 signaling.
HMG chromatin binding and TORC1 function to promote cell viability and longevity. a IB analysis for phosphoS6 levels from H3WT and H3K37A cells cultured to log phase and then either mock or 20 nM rapamycin treated for 30 min. b As in a except the indicated HMG gene deletion mutants were included. c H3WT cells were transformed with control vector or the indicated galactose-regulated HMG expression vectors and cultured to log phase in raffinose media before induction with 2 % galactose for 20 min. Samples were then processed for phosphoS6 IB analysis. d–f As in c except cells were cultured solely in raffinose media to log phase and then stained with 5(6)-CFDA and analyzed by flow cytometry. e represents the average and SD of the peak 5(6)-CFDA fluorescence of the entire gated population, while f is the average cell number and SD of the fraction inside the bracket in d. Data are quantification of five independent experiments. g Chronological aging assay of H3WT and H3K37A performed in non-buffered and buffered (buff) SC media. Each strain was cultured in triplicate, and the data represent the average and SD of each time point. For all statistical analyses, one-way ANOVA was performed across all categories (indicated by the dashed line), while the solid black lines indicate specific pairwise comparisons that were analyzed by Student's t test. *P < 0.05; **P < 0.01; ***P < 0.005
We next examined whether HMGB dysregulation in H3WT mimicked the same defect in vacuolar pH homeostasis that occurred in H3K37A upon TORC1 limitation. We initially attempted a galactose induction utilizing our HMGB expression vectors coupled with 5(6)-CFDA staining and flow cytometry analysis. However, galactose treatment caused cell acidification even in vector control cells, which confounded data interpretation (data not shown). To bypass this issue, we repeated the experiment in cells cultured solely to log phase in raffinose media since we have found these vectors exhibit low-level, leaky HMGB expression. Log phase cells were stained with 5(6)-CFDA and analyzed by flow cytometry (Fig. 6d). Expression of either Nhp6a or Hmo1 increased overall mean fluorescence intensity of the population (Fig. 6e), as well as the total percentage of cells falling within the acidified category (Fig. 6f, cell population quantified is indicated by the horizontal bracket in Fig. 6d). These data demonstrate that HMGB dysregulation in cells with wild-type chromatin replicates the increased vacuolar acidification detected in TORC1-inhibited H3K37A cells.
Reduced TORC1 activity extends both chronological and replicative longevity in all organisms examined [40]. Because H3K37A increases TORC1 signaling, we specifically determined whether this histone mutant altered chronological longevity. Yeast chronological aging assays involve culturing cells in minimal (SC) media for 3 days to exhaust nutrients. At this point, the experiment is initiated (Day 0) and quantification of cell survival by analyzing colony-forming units (CFUs) begins [41]. Our initial attempts following this approach resulted in too few viable H3K37A cells at Day 0, so we modified the protocol and cultured cells for only 1 day before initiating the experiment. As media acidification can confuse interpretation of yeast aging studies [42], we performed the experiments in media either non-buffered or buffered to pH 6.0. Under non-buffered conditions, both H3WT and H3K37A exhibited decreased viability by Day 3, although H3K37A viability was reduced significantly more than H3WT (Fig. 6g). Importantly, performing the experiment in buffered media prevented loss of H3WT viability over the course of the experiment, thus demonstrating that decreased H3WT viability under non-buffered conditions is due solely to acid stress and is not a true longevity defect (Fig. 6g). However, even under buffered conditions H3K37A viability was substantially reduced (by ~50 %), thus demonstrating a true reduction in chronological longevity independent of acid stress (Fig. 6g). Collectively, these data demonstrate that the histone H3 N-terminus anchors HMGBs in the nucleus to maintain normal TORC1 regulation and promote cell viability. Impairing this process deregulates TORC1 signaling and causes cytotoxicity under conditions that mimic nutrient limitation (rapamycin treatment) or reflect a nutrient-depleted state (chronological aging).
While it is clear that responses to environmental change include epigenetic mechanisms that depend on dynamic chromatin regulation, how this information is faithfully transmitted to chromatin is not well understood. In this report, we provide critical evidence that the nutrient-regulated TORC1 pathway, and the histone H3 N-terminal tail at H3K37, function collaboratively to retain specific chromatin-associated HMGB factors within the nucleus to maintain cell viability. These data provide further support for the concept that signaling through TORC1 is a key mechanism by which the environment communicates with chromatin to affect the epigenome. They also demonstrate an essential role for chromatin in restricting HMGBs to the nucleus that is required for maintaining viability in TORC1-suppressive environmental conditions, including nutrient stress or during chronological aging. Our results support the idea that under conditions which severely repress TORC1, HMGB chromatin dissociation and cytoplasmic localization may act as a cell death-initiating event. Conceptually, this idea is distinct from the current paradigm developed from studies in mammalian cells where release of HMGB1 from chromatin occurs after necrosis is initiated. This proposed function for chromatin also would be consistent with the growing recognition that chromatin modulation is not solely an endpoint for upstream signaling pathways. Instead, dynamic chromatin changes can integrate inputs from upstream signaling pathways and then propagate this information to control additional regulatory pathways to mediate biological effects. Therefore, TORC1 signaling and downstream control of HMGB chromatin association may function as a bidirectional signaling relay to mediate control of cell viability under nutrient-regulated conditions.
Our data further demonstrate that H3K37A selectively affects HMGB nuclear localization, thus illustrating that not all HMGBs are governed by the same chromatin-interacting mechanisms. For example, we demonstrate that Ixr1 steady-state nuclear localization is perturbed by H3K37A when TORC1 is not inhibited, yet under these same conditions, Nhp6a is only minimally affected, while Hmo1 is not affected at all. However, this situation is reversed when TORC1 is inhibited, such that Ixr1 becomes exclusively nuclear, while Nhp6a accumulates in the cytoplasm. These results, coupled with our previous observation that H3K37A decreases Nhp10 chromatin binding, suggest that in vivo, H3K37 stabilizes specific HMGB interactions with chromatin. Such a scenario is consistent with the highly selective role for the histone H3 N-terminal tail in contacting HMGB1 in in vitro nucleosomal binding assays [21, 22]. This study further strengthens the concept that chromatin can dictate HMGB association in vivo, an idea which had been suggested previously from genome-wide and gene-specific HMGB analyses [17, 43]. Because HMG proteins constitute the largest fraction of non-histone chromatin-associated proteins, further defining the mechanisms governing their chromatin binding will be essential for understanding their role in genome regulation.
Additional mechanisms likely function in parallel with H3K37 to promote HMGB chromatin association and nuclear localization, including redundant interactions with the H3 tail, specific chromatin structures regulated by the local histone post-translational modification environment, or even modification of the HMGBs directly. This latter point is especially relevant since HMGB1 is extensively modified by a variety of post-translational modifications including phosphorylation, acetylation, and oxidation [44, 45]. A distinct possibility is that TORC1 regulates HMGB modifications to facilitate their chromatin binding such that combining H3K37A with decreased TORC1 signaling synergistically impairs HMGB chromatin association and nuclear retention. H3K37 disruption, while selectively impacting HMGB nuclear localization, clearly has differential effects on these HMGBs. The mechanisms driving these differences are not immediately obvious but could be related to the distinct functions of each HMGB. For example, Ixr1 has a potential role as a stress response regulator since it functions both as a transcriptional activator and as a repressor of many target genes involved in this process [35]. Since TORC1 inhibition activates the environmental stress response, TORC1 repression may restore Ixr1 nuclear localization in H3K37A by overriding the inhibitory effect H3K37A has on Ixr1 chromatin binding [46]. While H3K37A minimally affects Nhp6a nuclear localization during normal growth, suppressing TORC1 causes a fraction of Nhp6a to localize to the cytoplasm. How the majority of Nhp6a remains in the nucleus under these conditions is unclear. However, Nhp6a is an abundant HMGB so a distinct possibility could be that the cytoplasmic Nhp6a pool in TORC1-inhibited H3K37A cells may derive from a normally "hyperdynamic" fraction of genome-bound, FACT-independent Nhp6a. In environmental conditions that repress TORC1, chromatin may anchor this hyperdynamic HMGB population in the nucleus to prevent its cytoplasmic localization and induction of cell death.
We believe that disrupted HMGB chromatin association, and their consequent cytoplasmic localization, likely initiates cell death in TORC1-inhibited H3K37A cells. We base this conclusion on the following observations. The H3K37R mutation both restores Nhp6a nuclear localization and completely rescues growth when TORC1 is inhibited. Furthermore, significant cell death is only detectable after Nhp6a localizes to the cytoplasm, implying that HMGB movement to the cytoplasm occurs before cell death initiation. Supporting this concept, we and others have demonstrated that increased HMGB expression causes cytotoxicity, although the underlying mechanisms were not defined [17, 47, 48]. We provide a more detailed understanding of this process by demonstrating that intravacuolar pH dysregulation caused by TORC1 inhibition in H3K37A can be replicated in cells with wild-type chromatin solely through HMGB deregulation. Furthermore, buffering pH partially restores H3K37A growth under TORC1-suppressive conditions. These results demonstrate that cytoplasmic HMGBs impair vacuole homeostasis through unknown mechanisms to cause necrosis. The apoptosis we observed in TORC1-inhibited H3K37A cells is more difficult to interpret. Apoptosis clearly depends on functional mitochondria, although deletion of traditional apoptosis effectors is incapable of restoring growth to these cells. Although this could be due to the concurrent necrosis that occurs, H3K37A ρ° cells fail to grow under buffered conditions when TORC1 is inhibited. Therefore, suppressing both apoptosis and pH-induced necrosis simultaneously is not sufficient to promote growth of H3K37A cells. We interpret these data to suggest that while mitochondrial dysfunction is responsible for the apoptosis in TORC1-inhibited H3K37A, mitochondria provide additional metabolic requirements necessary for cell growth when TORC1 is impaired. Such a role for mitochondria would be consistent with the upregulation of mitochondrial function when TORC1 is inhibited [49, 50].
Finally, the TORC1 dysregulation caused by H3K37A is suppressed by nhp6a∆ or nhp10∆, but not by hmo1∆. Nhp6a and Nhp10 are the two HMGBs whose cytoplasmic localization specifically increases when TORC1 activity is reduced, suggesting their movement to the cytoplasm contributes to TORC1 deregulation. Although the majority of Nhp6a localizes to the nucleus in H3K37A before TORC1 suppression, it is possible that a small fraction exists in the cytoplasm which cannot be reliably detected. Furthermore, while we have not specifically addressed whether ixr1∆ suppresses the deregulated TORC1 signaling in H3K37A, this HMGB likely does contribute since a significant fraction of it is cytoplasmic during normal H3K37A growth. An intriguing observation is that while Hmo1 localization is not affected by H3K37A, and hmo1∆ fails to reduce H3K37A TORC1 dysregulation, this HMGB is a key effector of the TORC1-regulated transcriptome [51]. Therefore, not all HMGBs linked to TORC1-regulated transcription are impacted by H3K37A. The observation that multiple HMGBs are affected by H3K37A and contribute to TORC1 deregulation does provide an explanation for why no single HMGB gene deletion restores viability to TORC1-inhibited H3K37A cells. The cytoplasmic accumulation of multiple HMGBs simultaneously is likely what induces cytotoxicity upon TORC1 suppression. How cytoplasmic HMGBs dysregulate TORC1 signaling, and how this causes cell death when TORC1 activity is reduced, is not yet understood. Regardless of the mechanisms involved, our data outline a role for TORC1 and the H3 N-terminal tail in HMGB nuclear anchoring. This pathway is essential for cell survival when TORC1 activity is limiting, and it plays an important role in chronological aging. In metazoans, mechanisms that alter the H3 N-terminus over time in post-mitotic cells may release nuclear HMGBs into the cytoplasm to limit their chronological longevity.
This study reveals a crucial role for H3K37 in the maintenance of cell homeostasis and viability under conditions of reduced TORC1 signaling. Specifically, we demonstrate that H3K37 disruption differentially affects the nuclear localization of a subset of HMGB proteins. Most importantly, we show that impaired TORC1 signaling in an H3K37 mutant increases the cytoplasmic localization of the model HMGB, Nhp6a, and that this correlates with impaired organelle homeostasis and the induction of both apoptosis and necrosis. Intriguingly, while the apoptosis requires mitochondria, it does not depend on traditional apoptotic effectors. The concurrent necrosis is connected to impaired vacuole homeostasis and pH dysregulation. Unexpectedly, our results directly show that H3K37 disruption increases basal TORC1 signaling, an effect which is suppressed by deletion of the genes encoding those HMGBs whose cytoplasmic localization increases in an H3K37 mutant. This increased TORC1 signaling can be replicated in cells with normal chromatin by overexpressing these same HMGBs, thus demonstrating a direct role for HMGBs in TORC1 deregulation. The physiological consequence of TORC1 deregulation is to severely reduce the chronological longevity of the H3K37 mutant, a result consistent with TORC1 as an essential aging regulator. HMG proteins are highly conserved throughout evolution, and they constitute the most abundant protein component of chromatin outside of histones. Our results suggest the evolutionarily conserved H3 N-terminal tail likely anchors HMGBs in all eukaryotes as a mechanism of retaining them within the nucleus to maintain homeostasis, prevent TORC1 deregulation, and promote cellular longevity.
Yeast plasmids, strains, and culture conditions
Yeast strains and plasmids utilized are listed in Additional File 2: Table S1 and Table S2, respectively. Except for the histone H3 shuffle strain used in Fig. 1c, all other yeast histone mutants in this study were derived from the published histone H3/H4 library [52] which was purchased from Open Biosystems (GE Dharmacon). To generate ura3∆ derivatives of the H3WT and H3K37A, cells were streaked to 5-FOA-containing plates and resistant clones, which had lost the ability to grow in the absence of uracil, were isolated. Yeast strain engineering, including gene deletion or epitope tagging, was performed as described [53]. For experiments in nutrient-rich media, cells were cultured in 1 % yeast extract/2 % peptone/2 % dextrose (YPD) with media components purchased from Research Products International. Experiments in minimal media were performed by culturing cells in yeast synthetic complete (SC) media (0.17 % yeast nitrogen base/0.1 % glutamic acid/2 % glucose/0.2 % dropout mix) or SC media lacking the appropriate nutrient. Yeast SC culture media reagents were purchased from US Biologicals. To isolate H3WT and H3K37A ρ° mutants, cells were cultured in SC media containing 25 μg/mL ethidium bromide and then individual colonies were isolated. All ρ° mutants were confirmed to have lost functional mitochondria by their inability to grow on a non-fermentable (glycerol) carbon source. For the confocal microscopy analysis experiments utilizing the TOR1-1 expression vector, cells were cultured to mid-log phase in SC-leucine media that was buffered to pH 6.5 before treating with 20 nM rapamycin for two hours. All cells were cultured at 30 °C with shaking. For spotting assays, equal cell numbers from overnight cultures were pelleted, washed, and then fivefold serially diluted. Cells were then spotted to the appropriate plates and incubated at 30 °C for four to six days before photographing. The H3K37A and H3K37R histone plasmids were generated via standard site-directed mutagenesis with plasmid pWZ414-F12 as a template [23]. To generate the high copy vector overexpressing SOD1, 300 base pairs upstream of the translational start site and 100 bp downstream of the translational stop of the SOD1 genomic locus from yeast strain BY4741 were cloned as an XhoI/BamHI fragment into vector pRS426. Galactose-regulated plasmids were purchased from Open Biosystems.
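As an illustration of the fivefold serial dilutions used in the spotting assays above, here is a small sketch; the starting cell density is a hypothetical value, not one specified in the methods.

```python
# Fivefold serial dilution series for a spotting assay. The starting
# density (1e7 cells/mL) is an assumed, illustrative value.
start = 1e7  # cells/mL
series = [start / 5 ** i for i in range(6)]
print(["%.1e" % c for c in series])
# ['1.0e+07', '2.0e+06', '4.0e+05', '8.0e+04', '1.6e+04', '3.2e+03']
```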
Antibodies and cell stains
The following antibodies were utilized: α-RPS6 (Abcam), α-phosphoS6 (Cell Signaling), α-HA and α-GFP (Santa Cruz), and α-G6PDH (Sigma). DNA staining of live cells was performed with Hoechst stain (Life Technologies). YO-PRO-1 and PI stains were purchased from Life Technologies.
Live cell confocal microscopy
For the GFP localization experiments, cells were grown to log phase, pelleted, and washed twice with sterile water. Pellets were resuspended in 100 μL sterile water and stained with Hoechst 33342 (2 μg/mL) for 20–30 min prior to mounting onto polylysine-coated slides and confocal analysis. The vacuole pH experiments were conducted similarly to the GFP localization experiments, except FM4-64 (8 μM) and CFDA (5 μM) were added to cells cultured in YPD.
Image analysis in Zen 2 blue
Zen Lite version 2.0.0 software was utilized to perform quantification of Nhp6a nuclear/cytoplasmic localization. The Spline Contour tool was used to trace borders around the cell periphery and nucleus of individual cells. Borders were closed, and the fluorescence intensity within each enclosed area was calculated, along with the mean intensity value for each channel. The nuclear area was multiplied by the mean intensity value for the green channel to give the total nuclear fluorescence intensity (TNFI).
$$\text{TNFI} = \text{nuclear area}\;(\text{nm}^{2}) \times \text{nuclear mean intensity value}$$
This was repeated using the outer cell border values which provided a measure of the total cellular fluorescence intensity (TCFI).
$$\text{TCFI} = \text{cellular area}\;(\text{nm}^{2}) \times \text{cellular mean intensity value}$$
The following calculation was then performed to get the percentage of total nuclear protein:
$$\%\,\text{nuclear} = \frac{\text{TNFI}}{\text{TCFI}} \times 100$$
Random fields of cells were chosen for quantification, with approximately 20–40 cells quantified per condition, per biological replicate (3 replicates). Only cells with clear nuclear DNA staining and detectable EGFP signal were quantified.
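For concreteness, the quantification above reduces to a few lines of arithmetic. The sketch below implements the TNFI/TCFI calculation; the traced areas and mean intensities are hypothetical stand-ins for values exported from Zen.

```python
# Percent-nuclear calculation from traced contours, as described above.
# Input numbers are hypothetical placeholders for Zen-exported values.
def percent_nuclear(nuc_area, nuc_mean, cell_area, cell_mean):
    """Percent of total cellular EGFP signal residing in the nucleus."""
    tnfi = nuc_area * nuc_mean    # total nuclear fluorescence intensity
    tcfi = cell_area * cell_mean  # total cellular fluorescence intensity
    return tnfi / tcfi * 100

# One traced cell: areas in nm^2, mean intensities in arbitrary units
print(percent_nuclear(nuc_area=2.0e6, nuc_mean=950.0,
                      cell_area=1.2e7, cell_mean=210.0))  # ~75.4 % nuclear
```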
For the YO-PRO-1 and PI staining, cells were grown to log phase, pelleted, and washed twice with sterile PBS. YO-PRO-1 (10 μM) and PI (50 μg/mL) were added followed by a 20- to 30-min incubation. Samples were then processed on a BD LSRII flow cytometer, and data were analyzed using FlowJo V10. DHE (30 μM) and 5(6)-carboxyfluorescein diacetate (CFDA, 100 μM) staining were performed identically to that described above.
RT-qPCR and immunoblot analysis
Total RNA was extracted, and randomly primed cDNA was synthesized using 1 μg of DNase I-digested RNA and the ImProm-II Reverse Transcription System from Promega. Gene-specific qPCR and normalization to the SPT15 housekeeping gene were performed as previously described [54]. Whole-cell extracts and immunoblotting were prepared and performed as outlined previously [54]. To quantify immunoblot results, films were scanned and analyzed with ImageJ software.
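The qPCR normalization details are deferred to reference [54]; assuming the standard 2^(−ΔΔCt) relative quantification against SPT15, the calculation would look like the following sketch (all Ct values are hypothetical).

```python
# Relative expression by the 2^-ddCt method, normalizing the target gene
# to SPT15. This assumes standard relative quantification; the exact
# pipeline follows reference [54]. Ct values below are hypothetical.
def rel_expression(ct_target, ct_spt15, ct_target_ref, ct_spt15_ref):
    d_ct = ct_target - ct_spt15            # normalize to SPT15
    d_ct_ref = ct_target_ref - ct_spt15_ref
    return 2 ** -(d_ct - d_ct_ref)         # fold change vs. reference

# Example: CIT2 in rapamycin-treated vs. mock-treated cells
print(rel_expression(ct_target=21.5, ct_spt15=18.0,
                     ct_target_ref=24.8, ct_spt15_ref=18.0))  # ~9.8-fold
```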
Chronological aging assay
Single H3WT or H3K37A colonies were picked from freshly streaked YPD plates and cultured overnight in 5 mL SC. The stationary phase cultures were then diluted into fresh SC media to an OD600 of 0.1 and returned to the incubator. This marked the beginning of the experiment, denoted as "day 0." Because H3K37A viability declined so quickly, dilution and spotting were conducted every day for 6 days. For the buffered media, the 50-mL SC media used to age the cells was adjusted to pH 6.0 using citrate phosphate buffer (64.2 mM Na2HPO4 and 17.9 mM citric acid).
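As a practical aside, the buffer composition above translates directly into reagent masses. A back-of-the-envelope sketch for a 50-mL batch, assuming anhydrous reagents (hydrated salts would need the molar masses adjusted):

```python
# Reagent masses for 50 mL of pH 6.0 citrate phosphate buffer
# (64.2 mM Na2HPO4, 17.9 mM citric acid), assuming anhydrous reagents.
MW_NA2HPO4 = 141.96  # g/mol
MW_CITRIC = 192.12   # g/mol

vol_l = 0.050  # 50 mL
print(f"Na2HPO4:     {0.0642 * vol_l * MW_NA2HPO4 * 1000:.0f} mg")  # ~456 mg
print(f"citric acid: {0.0179 * vol_l * MW_CITRIC * 1000:.0f} mg")   # ~172 mg
```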
TORC1: target of rapamycin complex 1
HMGB: high mobility group box
Badeaux AI, Shi Y. Emerging roles for chromatin as a signal integration and storage platform. Nat Rev Mol Cell Biol. 2013;14(4):211–24.
Venkatesh S, Workman JL. Histone exchange, chromatin structure and the regulation of transcription. Nat Rev Mol Cell Biol. 2015;16(3):178–89.
Rothbart SB, Strahl BD. Interpreting the language of histone and DNA modifications. Biochim Biophys Acta. 2014;1839(8):627–43.
Loewith R, Hall MN. Target of rapamycin (TOR) in nutrient signaling and growth control. Genetics. 2011;189(4):1177–201.
Laplante M, Sabatini DM. mTOR signaling in growth control and disease. Cell. 2012;149(2):274–93.
Binda M, et al. The Vam6 GEF controls TORC1 by activating the EGO complex. Mol Cell. 2009;35(5):563–73.
Dechant R, et al. Cytosolic pH regulates cell growth through distinct GTPases, Arf1 and Gtr1, to promote Ras/PKA and TORC1 activity. Mol Cell. 2014;55(3):409–21.
Li H, et al. Nutrient regulates Tor1 nuclear localization and association with rDNA promoter. Nature. 2006;442(7106):1058–61.
Wei Y, Tsang CK, Zheng XF. Mechanisms of regulation of RNA polymerase III-dependent transcription by TORC1. EMBO J. 2009;28(15):2220–30.
Urban J, et al. Sch9 is a major target of TORC1 in Saccharomyces cerevisiae. Mol Cell. 2007;26(5):663–74.
Gonzalez A, et al. TORC1 promotes phosphorylation of ribosomal protein S6 via the AGC kinase Ypk3 in Saccharomyces cerevisiae. PLoS ONE. 2015;10(3):e0120250.
Jiang Y, Broach JR. Tor proteins and protein phosphatase 2A reciprocally regulate Tap42 in controlling cell growth in yeast. EMBO J. 1999;18(10):2782–92.
Di Como CJ, Arndt KT. Nutrients, via the Tor proteins, stimulate the association of Tap42 with type 2A phosphatases. Genes Dev. 1996;10(15):1904–16.
Rohde JR, Cardenas ME. The tor pathway regulates gene expression by linking nutrient sensing to histone acetylation. Mol Cell Biol. 2003;23(2):629–35.
Chen H, et al. The histone H3 lysine 56 acetylation pathway is regulated by target of rapamycin (TOR) signaling and functions directly in ribosomal RNA biogenesis. Nucleic Acids Res. 2012;40(14):6534–46.
Jack CV, et al. Regulation of ribosomal DNA amplification by the TOR pathway. Proc Natl Acad Sci U S A. 2015;112(31):9674–9.
Chen H, et al. Target of rapamycin signaling regulates high mobility group protein association to chromatin, which functions to suppress necrotic cell death. Epigenetics Chromatin. 2013;6(1):29.
Das C, Tyler JK, Churchill ME. The histone shuffle: histone chaperones in an energetic dance. Trends Biochem Sci. 2010;35(9):476–89.
Gdynia G, et al. The HMGB1 protein induces a metabolic type of tumour cell death by blocking aerobic respiration. Nat Commun. 2016;7:10764.
Kang R, et al. The HMGB1/RAGE inflammatory pathway promotes pancreatic tumor growth by regulating mitochondrial bioenergetics. Oncogene. 2014;33(5):567–77.
Watson M, et al. Characterization of the interaction between HMGB1 and H3-a possible means of positioning HMGB1 in chromatin. Nucleic Acids Res. 2014;42(2):848–59.
Kawase T, et al. Distinct domains in HMGB1 are involved in specific intramolecular and nucleosomal interactions. Biochemistry. 2008;47(52):13991–6.
Zhang W, et al. Essential and redundant functions of histone acetylation revealed by mutation of target lysines and loss of the Gcn5p acetyltransferase. EMBO J. 1998;17(11):3155–67.
Wozniak GG, Strahl BD. Hitting the 'mark': interpreting lysine methylation in the context of active transcription. Biochim Biophys Acta. 2014;1839(12):1353–61.
Chan TF, et al. A chemical genomics approach toward understanding the global functions of the target of rapamycin protein (TOR). Proc Natl Acad Sci U S A. 2000;97(24):13227–32.
Rodriguez-Lombardero S, et al. Proteomic analyses reveal that Sky1 modulates apoptosis and mitophagy in Saccharomyces cerevisiae cells exposed to cisplatin. Int J Mol Sci. 2014;15(7):12573–90.
Diffley JF, Stillman B. DNA binding properties of an HMG1-related protein from yeast mitochondria. J Biol Chem. 1992;267(5):3368–74.
Carmona-Gutierrez D, et al. Apoptosis in yeast: triggers, pathways, subroutines. Cell Death Differ. 2010;17(5):763–73.
Formosa T, et al. Spt16-Pob3 and the HMG protein Nhp6 combine to form the nucleosome-binding factor SPN. EMBO J. 2001;20(13):3506–17.
Brewster NK, Johnston GC, Singer RA. A bipartite yeast SSRP1 analog comprised of Pob3 and Nhp6 proteins modulates transcription. Mol Cell Biol. 2001;21(10):3491–502.
Lambert JR, Bilanchone VW, Cumsky MG. The ORD1 gene encodes a transcription factor involved in oxygen regulation and is identical to IXR1, a gene that confers cisplatin sensitivity to Saccharomyces cerevisiae. Proc Natl Acad Sci U S A. 1994;91(15):7345–9.
Liu Z, et al. RTG-dependent mitochondria to nucleus signaling is negatively regulated by the seven WD-repeat protein Lst8p. EMBO J. 2001;20(24):7209–19.
Liu Z, Butow RA. Mitochondrial retrograde signaling. Annu Rev Genet. 2006;40:159–85.
Leitch JM, Yick PJ, Culotta VC. The right to choose: multiple pathways for activating copper, zinc superoxide dismutase. J Biol Chem. 2009;284(37):24679–83.
Vizoso-Vazquez A, et al. Ixr1p and the control of the Saccharomyces cerevisiae hypoxic response. Appl Microbiol Biotechnol. 2012;94(1):173–84.
Temple MD, Perrone GG, Dawes IW. Complex cellular responses to reactive oxygen species. Trends Cell Biol. 2005;15(6):319–26.
Bourdineaud JP. At acidic pH, the diminished hypoxic expression of the SRP1/TIR1 yeast gene depends on the GPA2-cAMP and HOG pathways. Res Microbiol. 2000;151(1):43–52.
Preston RA, Murphy RF, Jones EW. Assay of vacuolar pH in yeast and identification of acidification-defective mutants. Proc Natl Acad Sci U S A. 1989;86(18):7027–31.
Vida TA, Emr SD. A new vital stain for visualizing vacuolar membrane dynamics and endocytosis in yeast. J Cell Biol. 1995;128(5):779–92.
Lopez-Otin C, et al. The hallmarks of aging. Cell. 2013;153(6):1194–217.
Hu J, et al. Assessing chronological aging in Saccharomyces cerevisiae. Methods Mol Biol. 2013;965:463–72.
Burtner CR, et al. A molecular mechanism of chronological aging in yeast. Cell Cycle. 2009;8(8):1256–70.
Dowell NL, et al. Chromatin-dependent binding of the S. cerevisiae HMGB protein Nhp6A affects nucleosome dynamics and transcription. Genes Dev. 2010;24(18):2031–42.
Hoppe G, et al. Molecular basis for the redox control of nuclear transport of the structural chromatin protein Hmgb1. Exp Cell Res. 2006;312(18):3526–38.
Zhang Q, Wang Y. HMG modifications and nuclear function. Biochim Biophys Acta. 2010;1799(1–2):28–36.
Mayordomo I, Estruch F, Sanz P. Convergence of the target of rapamycin and the Snf1 protein kinase pathways in the regulation of the subcellular localization of Msn2, a transcriptional activator of STRE (Stress Response Element)-regulated genes. J Biol Chem. 2002;277(38):35650–6.
Yoshikawa K, et al. Comprehensive phenotypic analysis of single-gene deletion and overexpression strains of Saccharomyces cerevisiae. Yeast. 2011;28(5):349–61.
Espinet C, et al. An efficient method to isolate yeast genes causing overexpression-mediated growth arrest. Yeast. 1995;11(1):25–32.
Bonawitz ND, et al. Reduced TOR signaling extends chronological life span via increased respiration and upregulation of mitochondrial gene expression. Cell Metab. 2007;5(4):265–77.
Giannattasio S, et al. Retrograde response to mitochondrial dysfunction is separable from TOR1/2 regulation of retrograde gene expression. J Biol Chem. 2005;280(52):42528–35.
Berger AB, et al. Hmo1 is required for TOR-dependent regulation of ribosomal protein gene transcription. Mol Cell Biol. 2007;27(22):8015–26.
Dai J, et al. Probing nucleosome function: a highly versatile library of synthetic histone H3 and H4 mutants. Cell. 2008;134(6):1066–78.
Janke C, et al. A versatile toolbox for PCR-based tagging of yeast genes: new fluorescent proteins, more markers and promoter substitution cassettes. Yeast. 2004;21(11):947–62.
Laribee RN, et al. Ccr4-not regulates RNA polymerase I transcription and couples nutrient signaling to the control of ribosomal RNA biogenesis. PLoS Genet. 2015;11(3):e1005113.
HC, JJW, and RNL designed and performed the experiments; HC, JJW, and RNL analyzed and interpreted the data; BDS provided unique reagents; RNL wrote and edited the manuscript with input from JJW and BDS. All authors read and approved the final manuscript.
We would like to gratefully acknowledge Dr. Steven Zheng for the TOR1-1 expression vector.
All supporting information is provided in Additional File 1 which contains three supplemental figures.
Research in the Laribee laboratory is supported by NIH Grant 1R01GM107040-01 awarded to R.N.L. Work in the Strahl laboratory is supported by NIH Grant R01GM110058 to B.D.S.
Department of Pathology and Laboratory Medicine, UT Center for Cancer Research, University of Tennessee Health Science Center, Memphis, TN, USA
Hongfeng Chen, Jason J. Workman & R. Nicholas Laribee
Department of Biochemistry and Biophysics, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
Brian D. Strahl
Correspondence to R. Nicholas Laribee.
Additional file 1. Figure S1. TORC1 inhibition does not alter Abf2 or Hmo1 cellular localization. H3WT and H3K37A cells expressing Abf2-EGFP (A) or Hmo1-EGFP (B) were mock or 20 nM rapamycin treated for one hour before performing confocal microscopy analysis. The nucleus is indicated by Hoechst (blue) staining and cell outlines are indicated by the line trace. Scale bar indicates 5 μm. Figure S2. Rapamycin-resistant TOR1-1 expression restores nuclear Nhp6a localization in H3K37A. Nhp6a-EGFP expressing H3WT (A) or H3K37A (B) cells carrying control vector or the TOR1-1 expression vector were mock or 20 nM rapamycin treated for two hours before confocal microscopy analysis. The outline of individual cells is indicated by the line trace. Position of the nucleus is indicated by blue Hoechst staining. Arrows indicate Nhp6a at the nuclear periphery and cytoplasmic Nhp6a-EGFP signal. Scale bar indicates 5 μm. Figure S3. Brightfield images for Figure 3. A Brightfield images for Nhp6a-EGFP results presented in Figure 3A. B Brightfield images for the Spt16-EGFP data represented in Figure 3C. Scale bar indicates 5 μm.
Additional file 2. Table S1. Yeast strains. Table S2. Yeast plasmids.
Chen, H., Workman, J.J., Strahl, B.D. et al. Histone H3 and TORC1 prevent organelle dysfunction and cell death by promoting nuclear retention of HMGB proteins. Epigenetics & Chromatin 9, 34 (2016). https://doi.org/10.1186/s13072-016-0083-3
High mobility group B
Target of rapamycin | CommonCrawl |
Past noon, I began to feel better, but since I would be driving to errands around 4 PM, I decided to not risk it and take an hour-long nap, which went well, as did the driving. The evening was normal enough that I forgot I had stayed up the previous night, and indeed, I didn't much feel like going to bed until past midnight. I then slept well, the Zeo giving me a 108 ZQ (not an all-time record, but still unusual).
Imagine a pill you can take to speed up your thought processes, boost your memory, and make you more productive. If it sounds like the ultimate life hack, you're not alone. There are pills that promise that out there, but whether they work is complicated. Here are the most popular cognitive enhancers available, and what science actually says about them.
Another well-known smart drug classed as a cholinergic is Sulbutiamine, a synthetic derivative of thiamine which crosses the blood-brain barrier and has been shown to improve memory while reducing psycho-behavioral inhibition. While Sulbutiamine has been shown to exhibit cholinergic regulation within the hippocampus, the reasons for the drug's discernible effects on the brain remain unclear. This smart drug, available over the counter as a nutritional supplement, has a long history of use, and appears to have no serious side effects at therapeutic levels.
Bacopa Monnieri is probably one of the safest and most effective memory- and mood-enhancing nootropics available today, with the fewest side effects. In some people, prolonged use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement that includes Bacopa Monnieri as one of its main ingredients.
CDP-Choline is also known as Citicoline or Cytidine Diphosphocholine. It has been enhanced to allow improved crossing of the blood-brain barrier. Your body converts it to Choline and Cytidine, and the latter is then converted to Uridine (which crosses the blood-brain barrier). CDP-Choline is found in meats (liver), eggs (yolk), fish, and vegetables (broccoli, Brussels sprouts).
Related to the famous -racetams but reportedly better (and much less bulky), Noopept is one of the many obscure Russian nootropics. (Further reading: Google Scholar, Examine.com, Reddit, Longecity, Bluelight.ru.) Its advantages seem to be that it's far more compact than piracetam and doesn't taste awful so it's easier to store and consume; doesn't have the cloud hanging over it that piracetam does due to the FDA letters, so it's easy to purchase through normal channels; is cheap on a per-dose basis; and it has fans claiming it is better than piracetam.
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, and that suggests >38 hours of work, and $38 \times 7.25 = 275.5$. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so $\frac{365.25}{120} \times 9 \times 5 = 137$.
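A quick sanity check of the arithmetic above, as a Python sketch (the $7.25 figure is the implied hourly labor rate; pill counts and prices are as stated in the text):

```python
# Check the back-of-the-envelope cost estimates quoted above.
hours = 38
print(hours * 7.25)  # 275.5 -> labor cost in dollars at $7.25/hour

# ~$9 per 120 potassium iodide pills, consumed daily over 5 years
print(round(365.25 / 120 * 9 * 5, 2))  # 136.97, i.e. the ~$137 estimate
```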
There are seven primary classes used to categorize smart drugs: Racetams, Stimulants, Adaptogens, Cholinergics, Serotonergics, Dopaminergics, and Metabolic Function Smart Drugs. Despite considerable overlap and no clear border in the brain and body's responses to these substances, each class manifests its effects through a different chemical pathway within the body.
Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try.
10:30 AM; no major effect that I notice throughout the day - it's neither good nor bad. This smells like placebo (and part of my mind is going how unlikely is it to get placebo 3 times in a row!, which is just the Gambler's fallacy talking inasmuch as this is sampling with replacement). I give it 60% placebo; I check the next day right before taking, and it is. Man!
Take at 11 AM; distractions ensue and the Christmas tree-cutting also takes up much of the day. By 7 PM, I am exhausted and in a bad mood. While I don't expect day-time modafinil to buoy me up, I do expect it to at least buffer me against being tired, and so I conclude placebo this time, and with more confidence than yesterday (65%). I check before bed, and it was placebo.
Nootropics, also known as 'brain boosters,' 'brain supplements' or 'cognitive enhancers,' are made up of a variety of artificial and natural compounds. These compounds help in enhancing the cognitive activities of the brain by regulating or altering the production of neurochemicals and neurotransmitters. They improve blood flow, stimulate neurogenesis (the process by which neurons are produced in the body by neural stem cells), enhance nerve growth rate, modify synapses, and improve cell membrane fluidity. Thus, positive changes are created within your body, which helps you to function optimally irrespective of your current lifestyle and individual needs.
But perhaps the biggest difference between Modafinil and other nootropics like Piracetam, according to Patel, is that Modafinil studies show more efficacy in young, healthy people, not just the elderly or those with cognitive deficits. That's why it's great for (and often prescribed to) military members who are on an intense tour, or for those who can't get enough sleep for physiological reasons. One study, by researchers at Imperial College London, and published in Annals of Surgery, even showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions.
This is one of the few times we've actually seen a nootropic supplement take complete leverage of the nootropic industry with the name Smart Pill. To be honest, we don't know why other companies haven't followed suit yet – it's an amazing name. Simple, and to the point. Coming from supplement maker Only Natural, Smart Pill makes some pretty bold claims regarding its pills being completely natural, whilst maintaining good quality. This is their niche – or Only Natural's niche, for that matter. They create supplements, in this case Smart Pill, with the…
That doesn't necessarily mean all smart drugs – now and in the future – will be harmless, however. The brain is complicated. In trying to upgrade it, you risk upsetting its intricate balance. "It's not just about more, it's about having to be exquisitely and exactly right. And that's very hard to do," says Arnsten. "What's good for one system may be bad for another system," adds Trevor Robbins, Professor of Cognitive Neuroscience at the University of Cambridge. "It's clear from the experimental literature that you can affect memory with pharmacological agents, but the problem is keeping them safe."
A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes."
"Who doesn't want to maximize their cognitive ability? Who doesn't want to maximize their muscle mass?" asks Murali Doraiswamy, who has led several trials of cognitive enhancers at Duke University Health System and has been an adviser to pharmaceutical and supplement manufacturers as well as the Food and Drug Administration. He attributes the demand to an increasingly knowledge-based society that values mental quickness and agility above all else.
But though it's relatively new on the scene with ambitious young professionals, creatine has a long history with bodybuilders, who have been taking it for decades to improve their muscle #gains. In the US, sports supplements are a multibillion-dollar industry – and the majority contain creatine. According to a survey conducted by Ipsos Public Affairs last year, 22% of adults said they had taken a sports supplement in the last year. If creatine was going to have a major impact in the workplace, surely we would have seen some signs of this already.
I do recommend a few things, like modafinil or melatonin, to many adults, albeit with misgivings about any attempt to generalize like that. (It's also often a good idea to get powders, see the appendix.) Some of those people are helped; some have told me that they tried and the suggestion did little or nothing. I view nootropics as akin to a biological lottery; one good discovery pays for all. I forge on in the hopes of further striking gold in my particular biology. Your mileage will vary. All you have to do, all you can do is to just try it. Most of my experiences were in my 20s as a right-handed 5'11 white male weighing 190-220lbs, fitness varying over time from not-so-fit to fairly fit. In rough order of personal effectiveness weighted by costs+side-effects, I rank them as follows:
Most epidemiological research on nonmedical stimulant use has been focused on issues relevant to traditional problems of drug abuse and addiction, and so, stimulant use for cognitive enhancement is not generally distinguished from use for other purposes, such as staying awake or getting high. As Boyd and McCabe (2008) pointed out, the large national surveys of nonmedical prescription drug use have so far failed to distinguish the ways and reasons that people use the drugs, and this is certainly true where prescription stimulants are concerned. The largest survey to investigate prescription stimulant use in a nationally representative sample of Americans, the National Survey on Drug Use and Health (NSDUH), phrases the question about nonmedical use as follows: "Have you ever, even once, used any of these stimulants when they were not prescribed for you or that you took only for the experience or feeling they caused?" (Snodgrass & LeBaron 2007). This phrasing does not strictly exclude use for cognitive enhancement, but it emphasizes the noncognitive effects of the drugs. In 2008, the NSDUH found a prevalence of 8.5% for lifetime nonmedical stimulant use by Americans over the age of 12 years and a prevalence of 12.3% for Americans between 21 and 25 (Substance Abuse and Mental Health Services Administration, 2009).
A 100mg dose of caffeine (half of a No-Doz or one cup of strong coffee) with 200mg of L-theanine is what the nootropics subreddit recommends in their beginner's FAQ, and many nootropic sellers, like Peak Nootropics, suggest the same. In my own experiments, I used a pre-packaged combination from Nootrobox called Go Cubes. They're essentially chewable coffee cubes (not as gross as it sounds) filled with that same beginner dose of caffeine, L-theanine, as well as a few B vitamins thrown into the mix. After eating an entire box of them (12 separate servings—not all at once), I can say eating them made me feel more alert and energetic, but less jittery than my usual three cups of coffee every day. I noticed enough of a difference in the past two weeks that I'll be looking into getting some L-theanine supplements to take with my daily coffee.
It looks like the overall picture is that nicotine is absorbed well in the intestines and the colon, but not so well in the stomach; this might be the explanation for the lack of effect, except on the other hand, the specific estimates I see are that 10-20% of the nicotine will be bioavailable in the stomach (as compared to 50%+ for mouth or lungs)… so any of my doses of >5ml should have overcome the poorer bioavailability! But on the gripping hand, these papers are mentioning something about the liver metabolizing nicotine when absorbed through the stomach, so…
12:18 PM. (There are/were just 2 Adderall left now.) I manage to spend almost the entire afternoon single-mindedly concentrating on transcribing two parts of a 1996 Toshio Okada interview (it was very long, and the formatting more challenging than expected), which is strong evidence for Adderall, although I did feel fairly hungry while doing it. I don't go to bed until midnight and sleep very poorly - despite taking triple my usual melatonin! Inasmuch as I'm already fairly sure that Adderall damages my sleep, this makes me even more confident (>80%). When I grumpily crawl out of bed and check: it's Adderall. (One Adderall left.)
Legal issues aside, this wouldn't be very difficult to achieve. Many companies already have in-house doctors who give regular health check-ups — including drug tests — which could be employed to control and regulate usage. Organizations could integrate these drugs into already existing wellness programs, alongside healthy eating, exercise, and good sleep.
A rough translation for the word "nootropic" comes from the Greek for "to bend or shape the mind." And already, there are dozens of over-the-counter (OTC) products—many of which are sold widely online or in stores—that claim to boost creativity, memory, decision-making or other high-level brain functions. Some of the most popular supplements are a mixture of food-derived vitamins, lipids, phytochemicals and antioxidants that studies have linked to healthy brain function. One popular pick on Amazon, for example, is an encapsulated cocktail of omega-3s, B vitamins and plant-derived compounds that its maker claims can improve memory, concentration and focus.
There are certain risks associated with smart pills that might restrain their use. A smart pill usually leaves the body within two weeks. Sometimes, the pill might get lodged in the digestive tract rather than exiting the body via normal bowel movements. The risk might be higher in people with a tumor, Crohn's disease, or prior surgery in that area that has led to narrowing of the digestive tract. A CT scan is usually performed in high-risk patients to assess the narrowing of the tract. However, the pill might still become lodged even if the CT scan is negative, which might lead to bowel obstruction; it can then be removed either by surgery or traditional endoscopy. Smart pills might lead to skin irritation, which results in mild redness and needs to be treated topically. They may also lead to capsule aspiration, in which the capsule goes down the wrong pipe and enters the airway instead of the esophagus. This might result in choking and death if immediate bronchoscopic extraction is not performed. Patients with comorbidities related to brain injury or chronic obstructive pulmonary disease may be at a higher risk. So, the health risks associated with the use of smart pills are hindering the smart pills technology market. Other factors, such as increasing cost with technological advancement and ethical constraints, are also hindering the market.
One of the most common strategies to beat this is cycling. Users who cycle their nootropics take them for a predetermined period (usually around five days) before taking a two-day break from using them. Once the two days are up, they resume the cycle. By taking a break, nootropic users reduce their tolerance for nootropics and lessen the risk of regression and tolerance symptoms.
The search to find more effective drugs to increase mental ability and intelligence capacity with neither toxicity nor serious side effects continues. But there are limitations. Although the ingredients may be separately known to have cognition-enhancing effects, randomized controlled trials of the combined effects of cognitive enhancement compounds are sparse.
Before taking any supplement or chemical, people want to know if there will be long-term effects or consequences. When Dr. Corneliu Giurgea first coined the term "nootropics" in 1972, he also outlined the characteristics that define them. Besides the ability to benefit memory and support the cognitive processes, Dr. Giurgea believed that nootropics should be safe and non-toxic.
From the standpoint of absorption, the drinking of tobacco juice and the interaction of the infusion or concoction with the small intestine is a highly effective method of gastrointestinal nicotine administration. The epithelial area of the intestines is incomparably larger than the mucosa of the upper tract including the stomach, and the small intestine represents the area with the greatest capacity for absorption (Levine 1983:81-83). As practiced by most of the sixty-four tribes documented here, intoxicated states are achieved by drinking tobacco juice through the mouth and/or nose…The large intestine, although functionally little equipped for absorption, nevertheless absorbs nicotine that may have passed through the small intestine.
There is no shortage of nootropics available for purchase online that can be shipped to you nearly anywhere in the world. Yet, many of these supplements and drugs have very little studies, particularly human studies, confirming their results. While this lack of research may not scare away more adventurous neurohackers, many people would prefer to […]
And in his followup work, An opportunity cost model of subjective effort and task performance (discussion). Kurzban seems to have successfully refuted the blood-glucose theory, with few dissenters from commenting researchers. The more recent opinion seems to be that the sugar interventions serve more as a reward-signal indicating more effort is a good idea, not refueling the engine of the brain (which would seem to fit well with research on procrastination).
Ashwagandha has been shown to improve cognition and motivation by means of reducing anxiety [46]. It has also been shown to significantly reduce stress and anxiety: as measured by cortisol levels, anxiety symptoms were reduced by around 30% relative to placebo in a double-blind, placebo-controlled trial [47]. And it may have neuroprotective effects and improve sleep, but these claims are still being researched.
Finally, two tasks measuring subjects' ability to control their responses to monetary rewards were used by de Wit et al. (2002) to assess the effects of d-AMP. When subjects were offered the choice between waiting 10 s between button presses for high-probability rewards, which would ultimately result in more money, and pressing a button immediately for lower probability rewards, d-AMP did not affect performance. However, when subjects were offered choices between smaller rewards delivered immediately and larger rewards to be delivered at later times, the normal preference for immediate rewards was weakened by d-AMP. That is, subjects were more able to resist the impulse to choose the immediate reward in favor of the larger reward.
Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn't do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits.
The concept of neuroenhancement and the use of substances to improve cognitive functioning in healthy individuals is certainly not a new one. In fact, one of the first cognitive enhancement drugs, Piracetam, was developed over fifty years ago by psychologist and chemist C.C. Giurgea. Although he did not know the exact mechanism, Giurgea believed the drug boosted brain power and so began his exploration into "smart pills", or nootropics, a term he coined from the Greek nous, meaning "mind," and trepein, meaning "to bend."
One of the most popular legal stimulants in the world, nicotine is often conflated with the harmful effects of tobacco; considered on its own, it has performance & possibly health benefits. Nicotine is widely available at moderate prices as long-acting nicotine patches, gums, lozenges, and suspended in water for vaping. While intended for smoking cessation, there is no reason one cannot use a nicotine patch or nicotine gum for its stimulant effects.
Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine).
Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin production in the body is promoted by exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.
On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine. So far I'm pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do.
One fairly powerful nootropic substance that, appropriately, has fallen out of favor is nicotine. It's the chemical that gives tobacco products their stimulating kick. It isn't what makes them so deadly, but it does make smoking very addictive. When Europeans learned about tobacco's use from indigenous tribes they encountered in the Americas in the 15th and 16th centuries, they got hooked on its mood-altering effects right away and even believed it could cure joint pain, epilepsy, and the plague. Recently, researchers have been testing the effects of nicotine that's been removed from tobacco, and they believe that it might help treat neurological disorders including Parkinson's disease and schizophrenia; it may also improve attention and focus. But, please, don't start smoking or vaping.
Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years' worth, or ~$10 a year, for a NPV cost of $205 ($\frac{10}{\ln 1.05}$), versus a 20% chance of a $2000 gain, i.e. an expected benefit of $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine.
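The arithmetic above is easy to check; a minimal sketch (the $10/year figure, the 5% discount rate, and the 20%/$2000 gamble are the numbers given in the text):

```python
import math

annual_cost = 10.0     # ~$10/year spent on iodine
discount_rate = 0.05   # 5% annual discount rate used in the text

# NPV of a perpetual $10/year expense, continuously discounted:
npv_cost = annual_cost / math.log(1 + discount_rate)

# Expected benefit: a 20% chance of a $2000 gain.
expected_benefit = 0.20 * 2000

print(f"NPV cost: ${npv_cost:.0f}")                 # -> NPV cost: $205
print(f"Expected benefit: ${expected_benefit:.0f}") # -> Expected benefit: $400
```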
If you could take a pill that would help you study and get better grades, would you? Off-label use of "smart drugs" – pharmaceuticals meant to treat disorders like ADHD, narcolepsy, and Alzheimer's – are becoming increasingly popular among college students hoping to get ahead, by helping them to stay focused and alert for longer periods of time. But is this cheating? Should their use as cognitive enhancers be approved by the FDA, the medical community, and society at large? Do the benefits outweigh the risks?
Qualia Mind, meanwhile, combines more than two dozen ingredients that may support brain and nervous system function – and even empathy, the company claims – including vitamins B, C and D, artichoke stem and leaf extract, taurine and a concentrated caffeine powder. A 2014 review of research on vitamin C, for one, suggests it may help protect against cognitive decline, while most of the research on artichoke extract seems to point to its benefits to other organs like the liver and heart. A small company-led pilot study on the product found users experienced improvements in reasoning, memory, verbal ability and concentration five days after beginning Qualia Mind.
Nootropics (/noʊ.əˈtrɒpɪks/ noh-ə-TROP-iks) (colloquial: smart drugs and cognitive enhancers) are drugs, supplements, and other substances that may improve cognitive function, particularly executive functions, memory, creativity, or motivation, in healthy individuals.[1] While many substances are purported to improve cognition, research is at a preliminary stage as of 2018, and the effects of the majority of these agents are not fully determined.
Minimum Cost Path Graph
Let S be the set of vertices whose minimum distance from the source vertex has been found. The following table lists the Port Cost value for different bandwidths. The shortest path computed in the reduced cost graph is the same as the shortest path in the original graph. Steps for finding a minimum-cost spanning tree using _____ Algorithm: Add edges in order of cheapest cost so that no circuits form. The next lowest cost. I am looking to run MST on some vectors with geometries and cost. In this graph, cost of an edge (i, j) is represented by c(i, j). Your program will either return a sequence of nodes for a minimum-cost path or indicate that no solution exists. Minimum weight perfect matching problem: Given a cost $c_{ij}$ for all $(i,j) \in E$, find a perfect matching of minimum cost, where the cost of a matching M is given by $c(M) = \sum_{(i,j) \in M} c_{ij}$. A minimum-cost spanning tree is one which has the smallest possible total weight (where weight represents cost or distance). This problem is concerned with finding the cheapest path between vertices a and b in a graph G = (V,E). Dijkstra's algorithm is a graph search algorithm that can solve the single-source shortest path problem for a graph with non-negative edge path costs, outputting a shortest path tree: the cost-minimized path from the seed point to the goal point. Dynamic Graph Clustering Using Minimum-Cut Trees. 1 Introduction. Graph clustering has become a central tool for the analysis of networks in general, with applications ranging from the field of social sciences to biology and to the growing field of complex systems. Given a weighted graph, find the maximum cost path from a given source to a destination that is greater than a given integer x; for example, if your answer is 1, write 1 without decimal points. Initially, this quantity is infinity (i.e. the total intuitionistic fuzzy cost for traveling through the shortest path). Hint: both shortest path and min-cost flow determine the minimum of a sum. We will use Prim's algorithm to find the minimum spanning tree. Given a path state of type AbstractPathState, return a vector (indexed by vertex) of the paths between the source vertex used to compute the path state and a single destination vertex, a list of destination vertices, or the entire graph. NOTE: This algorithm really does always give us the minimum-cost spanning tree.
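Several fragments above reference Dijkstra's algorithm; as a concrete anchor, here is a minimal sketch (the example graph and its node names are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge costs.

    graph: dict mapping vertex -> list of (neighbor, cost) pairs.
    Returns (dist, pred): minimum costs and a predecessor map from
    which the shortest path tree can be reconstructed.
    """
    dist = {source: 0}
    pred = {source: None}
    pq = [(0, source)]                      # min-heap of (cost so far, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, pred

# Hypothetical example graph:
g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)], "d": []}
dist, pred = dijkstra(g, "a")
print(dist["d"])  # 4, via a -> b -> c -> d
```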
A number of algorithms have been proposed to enumerate all spanning trees of an undirected graph. Figure 1: Example of a shortest path problem and its mapping to the minimum cost ow model 1. path scheduling with all activity durations assumed to be at minimum cost. A) $2100 B) $2400 C) $2900 D) $6200. Find a minimum cost spanning tree on the graph below using Kruskal's algorithm. The cost is determined depending upon the criteria to be optimized. The following will run the k-maximum spanning tree algorithm and write back results: MATCH (n:Place{id:"D"}) CALL algo. The Prim's algorithm operates on two disjoint sets of edges in the graph. This paper is organized as follows: In Section 2, we provide background ma- terial on network flow algorithms and present some preliminary results. We can, therefore define function for any V,. Long-run average cost (LRAC) curve is a graph that plots average cost of a firm in the long-run when all inputs can be changed. Removes the connection between the specified origin node and the specified destination node Keep in mind that this only removes the connection in one direction, for undirected graphs, the function must be called again with the destination node as the origin. Given a vertex s in graph G, find the shortest path from s to every other vertex in G A C B E 10 3 15 5 2 11 D 20 Closely related problem is to find the shortest (minimum cost) path between two nodes of a graph. The well-known basic problem concerns finding the shortest paths in graphs given any set of journeys and a weighted, connected graph. We analyze the problem of finding a minimum cost path between two given vertices such that the vector sum of all edges in the path equals a given target vector m. 2) Areas less than the minimum core habitat percentage times the area of the foraging radius are eliminated 3) A cost surface is created from the habitat quality raster, cells of high quality have a low cost and vise versa 4) The remaining patches are grown outwards across the cost surface to a distance equal to the foraging radius. of Gsuch that the undirected version of T is a tree and T contains a directed path from rto any other vertex in V. hk Ruifeng Liu Chinese University of Hong Kong rfl[email protected] The minimum cost spanning tree (MST) Spanning tree: is a free tree that connects all the vertices in V • cost of a spanning tree = sum of the costs of the edges in the tree Minimum spanning tree property: • G = ( V, E): a connected graph with a cost function defined on the edges; U ⊆V. Part 3: Remember that we are suppose to find the point (x,y) on the graph of the parabola, y = x 2 + 1, that minimizes d. Optimal Discounted Cost in Weighted Graphs Ashutosh Trivedi = Cost(ˇ): | Now consider the path ˇ we write DCost(v) as minimum discounted cost of all in nite. A connected graph with no circuits, called trees, are also discussed in this chapter. The idea is to start with an empty graph and try to add edges one at a time, always making sure that what is built remainsacyclic. In this way, each distant node influences the cen-ter node through a path connecting the two with minimum cost, providing a robust estimation and intact exploration of the graph structure. As A* traverses the graph, it follows a path of the lowest known cost, Keeping a sorted priority queue of alternate path segments along the way. Here is my Graph class that implements a graph and has nice a method to generate its spanning tree using Kruskal's algorithm. And here comes the definition of an AI agent. 
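The Prim fragment above (growing the tree from two disjoint vertex sets) can be made concrete with a short sketch; the adjacency lists and weights here are illustrative, with each undirected edge listed in both directions:

```python
import heapq

def prim_mst(graph, start):
    """Minimum spanning tree of a connected undirected graph.

    graph: dict mapping vertex -> list of (neighbor, cost) pairs.
    Returns (total_cost, tree_edges).
    """
    in_tree = {start}
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    total, tree = 0, []
    while edges and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(edges)
        if v in in_tree:
            continue                  # both endpoints already in the tree
        in_tree.add(v)
        total += w
        tree.append((u, v, w))
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(edges, (wx, v, x))
    return total, tree

g = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 2)], "c": [("a", 4), ("b", 2)]}
print(prim_mst(g, "a"))  # (3, [('a', 'b', 1), ('b', 'c', 2)])
```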
graph find a minimum cost to find the shortest path between two points. Total cost of a path to reach (m, n) is sum of all the costs on that path (including both source and destination). Specifically distance[v] stores the minimum distance so far from the source vertex s to some other vertex v. min_cost_flow (G[, demand, capacity, weight]) Return a minimum cost flow satisfying all demands in digraph G. In essence, the planner develops a list of activities on the critical path ranked with their cost slopes. The weight of a shortest path tree. The path is (0, 0) –> (0, 1) –> (1, 2) –> (2, 2). {Each node has a value b(v). For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i. satisfaction) problems with costs. [Tree, pred] = graphminspantree(G) finds an acyclic subset of edges that connects all the nodes in the undirected graph G and for which the total weight is minimized. Eulerization of a graph is the process of finding an Euler circuit for that graph. It should return and integer that represents the minimum weight to connect all nodes in the graph provided. Initially, this quantity is infinity (i. the distance of the right path (between robot 3's vertex and 2's goal) be x 1. A simple graph with (a) a face-spanning subgraph of cost 11 and (b) another face-spanning subgraph of cost 13. To find the path, the image is first modeled as a graph. Such a route is easily obtained by a breadth first search method. It adds one more node in each iteration to the minimum cost spanning tree. Steps for finding a minimum-cost spanning tree using _____ Algorithm: Add edges in order of cheapest cost so that no circuits form. Describe and analyze an e cient algorithm for nding a minimum-cost monotone path in such a graph, G. An edge is the line segment connecting two nodes and has the same length in either direction. Prim's Algorithm. Proof: Consider any path from sto some node t. Tarjan Princeton and HP Labs Abstract Consider a bipartite graph G= (X;Y;E) with real-valued weights on its edges, and suppose that Gis balanced, with jXj= jYj. minimum cost on the section from s to t, which makes the max-flow also min-cost. We then will see how the basic approach of this algorithm can be used to solve other problems including finding maximum bottleneck paths and the minimum spanning tree (MST) problem. In the maze defined, True are the open ways. Both search methods can be used to obtain a spanning tree of the graph, though if I recall correctly, BFS can also be used in a weighted graph to generate a minimum cost spanning tree. • Route signals along minimum cost path • If congestion/overuse – assign higher cost to congested resources • Makes problem a shortest path search • Allows us to adapt costs/search to problem • Repeat until done Penn ESE 535 Spring 2015 -- DeHon 26 Key Idea • Congested paths/resources become expensive. Let G = (V; E) be an (undirected) graph with. Use Kruskal's algorithm for minimum-cost spanning trees on the graph below. Suppose you are given a connected graph G, with edge costs that are all distinct. Finding minimum cost to visit all vertices in a graph and returning back. path scheduling with all activity durations assumed to be at minimum cost. Before increasing the edge weights, shortest path from vertex 1 to 4 was through 2 and 3 but after increasing Figure 1: Counterexample for Shortest Path Tree the edge weights shortest path to 4 is from vertex 1. the total intuitionistic fuzzy cost for traveling through the shortest path. 
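The "total cost of a path to reach (m, n)" fragment is the classic grid dynamic program: the minimum cost of a cell is its own cost plus the cheapest admissible predecessor. A sketch, assuming the three moves of the usual statement (right, down, and diagonally down-right); the grid below reproduces the (0,0) –> (0,1) –> (1,2) –> (2,2) example quoted above:

```python
def min_cost_path(cost, m, n):
    """Minimum total cost from (0, 0) to (m, n) in a cost grid,
    moving right, down, or diagonally down-right."""
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = cost[0][0]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == j == 0:
                continue
            best_prev = min(
                dp[i - 1][j] if i > 0 else INF,                # from above
                dp[i][j - 1] if j > 0 else INF,                # from the left
                dp[i - 1][j - 1] if i > 0 and j > 0 else INF,  # diagonal
            )
            dp[i][j] = cost[i][j] + best_prev
    return dp[m][n]

grid = [[1, 2, 3],
        [4, 8, 2],
        [1, 5, 3]]
print(min_cost_path(grid, 2, 2))  # 8: (0,0) -> (0,1) -> (1,2) -> (2,2)
```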
) In this context, given an input graph G, one seeks a homomorphism f of G to H with minimum cost, i. Each cell of the matrix represents a cost to traverse through that cell. A minimum spanning tree of an undirected graph can be easily obtained using classical algorithms by Prim or Kruskal. Now, let's ask: what's the shortest path cost to, say, Ben and Jerry's? * To move to weighted graphs, we appeal to the mighty power of Dijkstra. Using this answer, by finding the minimum cost closed walk (or just it's cost) of an arbitrary 4-regular planar graph, with weights 1, we can decide whether it has a Hamiltonian Path, but this problem is NP-complete. As I stand now I'm using DFS, and it's pretty slow (high number of nodes and maximum length too). For positive edge weight graphs. NOTE: This algorithm really does always give us the minimum-cost spanning tree. The cost is determined depending upon the criteria to be optimized. This is the single-source minimum-cost paths problem. (eds) Graph Based Representations in Pattern Recognition. ° Among all the spanning trees of a weighted and connected graph, the one (possibly more) with the least total weight is called a minimum spanning tree (MST). shortest path finding as a complete graph G (V, E). The multistage graph problem is finding the path with minimum cost from source s to sink t. A spanning tree of a graph is a tree that has all the vertices of the graph connected by some edges. Dijkstra solves the shortest path problem (from a specified node), while Kruskal and Prim finds a minimum-cost spanning tree. Now any positive value (>0) may exist on each edge. Suppose we have a weighted graph G = (V, E, c), where V is the set of vertices, E is the set of arcs, and. along path p; and (2) path p has the minimum cost (toll fee) among all the paths satisfying the condition (1). There can be many spanning trees. A minimum directed spanning tree (MDST) rooted at ris a directed spanning tree rooted at rof minimum cost. Next, the planner can examine activities on the critical path and reduce the scheduled duration of activities which have the lowest resulting increase in costs. replacing the edge weights with ((LCM of all edges)/(weight of the edge)) makes the longest edge as smallest and smallest edge as longest. Minimum Spanning Tree Problem We are given a undirected graph (V,E) with the node set V and the edge set E. A number of problems from graph theory are called Minimum spanning tree. A minimum directed spanning tree (MDST) rooted at ris a directed spanning tree rooted at rof minimum cost. // Indexed in order of stages E is a set of edges. Assuming that you don't expect the paths to be more than 1000 steps long, you can choose p = 1/1000. There also can be many minimum spanning trees. Minimum Cost flow problem is a way of minimizing the cost required to deliver maximum amount of flow possible in the network. As mentioned there, grid problem reduces to smaller sub-problems once choice at the cell is made, but here move will be in reverse direction. ing tree, or a shortest path. Each unit of flow on each arc corresponds to onepathgoingthroughthatnode. A circuit that uses every edge exactly once is an Euler circuit. The cost w(T) of a directed spanning tree Tis the sum of the costs of its edges, i. The program must be possible to open these files (check the format). However, when we add a 2 to this graph, we penalize longer paths, so the shortest path from a to d is a !b !d. 
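For the recurring Kruskal references in this section, a minimal union-find sketch (the edge list is invented for illustration); sorting the edges implements "add edges in order of cheapest cost", and the find/union test is what prevents circuits:

```python
def kruskal_mst(n, edges):
    """edges: list of (cost, u, v) with vertices 0..n-1.
    Returns (total_cost, chosen_edges)."""
    parent = list(range(n))

    def find(x):                      # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding this edge forms no circuit
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen

print(kruskal_mst(4, [(1, 0, 1), (3, 0, 2), (2, 1, 2), (4, 2, 3)]))
# (7, [(0, 1, 1), (1, 2, 2), (2, 3, 4)])
```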
Version 05/03/2011 Minimum Spanning Trees & Shortest Path—Graph Theory ©2013 North Carolina State University Chapter 8 – Page 1 Section 8. The shortest path problem can be defined for graphs whether undirected, directed, or mixed. Successive Shortest Path Algorithm for the Minimum Cost Flow Problem in Dynamic Graphs MathildeVernet1,MaciejDrozdowski2,YoannPigné1,EricSanlaville1 1Normandie Univ, UNIHAVRE, UNIROUEN, INSA Rouen, LITIS, 76600 Le Havre, France. Graph Magics - an ultimate software for graph theory, having many very useful things, among which a strong graph generator and more than 15 different algorithms that one may apply to graphs (ex. An Extended Path Following Algorithm for Graph-Matching Problem Zhi-Yong Liu, Hong Qiao,Senior Member, IEEE, and Lei Xu,Fellow, IEEE Abstract—The path following algorithm was proposed recently to approximately solve the matching problems on undirected graph models and exhibited a state-of-the-art performance on matching accuracy. I fear you will have to work a little and make a program that check every possible path until you find the one with minimum cost. Metanet is a toolbox of Scilab for graphs and networks computations. On a graph, transportation problems can be used to express challenging tasks involving matching supply to demand with minimal shipment expense; in discrete language, these become minimum-cost network ow problems. This contradicts the maximality of M. A path that uses every edge exactly once is an Euler path. Connected graph: a path exists between every pair of find lowest cost set of roads to repair so that all cities are connected This is a minimum spanning tree. The path to reach (m, n) must be through one of the 3 cells: (m-1, n-1) or (m-1, n) or (m, n-1). applying IG-ÿskal's algorithm for finding a minimum-cost spanning tree for a graph. It adds one more node in each iteration to the minimum cost spanning tree. Abstract: We design and analyse approximation algorithms for the minimum-cost connected T-join problem: given an undirected graph G = (V;E) with nonnegative costs on the edges, and a subset of nodes T, find (if it exists) a spanning connected subgraph H of minimum cost such that every node in T has odd degree and every node not in T has even degree; H may have multiple copies of any edge of G. Minimum weight perfect matching problem: Given a cost c ij for all (i,j) ∈ E, find a perfect matching of minimum cost where the cost of a matchinPg M is given by c(M) = (i,j)∈M c ij. Shortest Distance Problems in Graphs Using History-Dependent Transition Costs with Application to Kinodynamic Path Planning Raghvendra V. Fig 1: This graph shows the shortest path from node "a" or "1" to node "b" or "5" using Dijkstras Algorithm. This paper involved in illustrating the best way to travel between two points and in doing so, the shortest path algorithm was created. If there is more than one minimum cost path from v to w, will Dijkstra's algorithm always find the path with the fewest edges? If not, explain in a few sentences how to modify Dijkstra's algorithm so that if there is more than one minimum path from v to w, a path with the fewest edges is chosen. @ inding an Euler circuit on a graph. Our results are based on a new approach to speed-up augmenting path based matching algorithms, which we describe next. Minimum Spanning Tree Problem We are given a undirected graph (V,E) with the node set V and the edge set E. Dijkstra's algorithm Like BFS for weighted graphs. 
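The "Successive Shortest Path Algorithm for the Minimum Cost Flow Problem" cited above addresses the standard formulation, which is worth stating once. With $f(e)$ the flow on edge $e$, $\mathrm{cap}(e)$ and $\mathrm{cost}(e)$ as in the fragments above, and $b(v)$ the supply (positive) or demand (negative) at node $v$:

```latex
\min_{f} \sum_{e \in E} \mathrm{cost}(e)\, f(e)
\quad \text{s.t.} \quad
\sum_{e \in \delta^{+}(v)} f(e) - \sum_{e \in \delta^{-}(v)} f(e) = b(v) \;\; \forall v \in V,
\qquad
0 \le f(e) \le \mathrm{cap}(e) \;\; \forall e \in E,
```

where $\delta^{+}(v)$ and $\delta^{-}(v)$ are the edges leaving and entering $v$. The successive-shortest-path method repeatedly sends flow along a cheapest residual path from a supply node to a demand node until all demands are met.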
This model consid-ers the phenomenon that some vehicles may choose to stop at some places to avoid tra c jams. Graph search algorithms explore a graph either for general discovery or explicit search. Similar is story for vertex C. Can you move some of the vertices or bend. 8 If the graph is directed it is possible for a tree of shortest paths from s and a minimum spanning tree in G. Each cell of the matrix represents a cost to traverse through that cell. Handout MS2: Midterm 2 Solutions 2 eb, we obtain a new spanning tree for the original graph with lower cost than T, since the ordering of edge weights is preserved when we add 1 to each edge weight. In this case, as well, we have n-1 edges when number of nodes in graph are n. Consider an undirected graph containing nodes and edges. Using this answer, by finding the minimum cost closed walk (or just it's cost) of an arbitrary 4-regular planar graph, with weights 1, we can decide whether it has a Hamiltonian Path, but this problem is NP-complete. In this Java Program first we input the number of nodes and cost matrix weights for the graph ,then we input the source vertex. The Dijkstra's algorithm gradually builds a short path tree using links in the network. This week we finish our look at pathfinding and graph search algorithms, with a focus on the Minimum Weight Spanning Tree algorithm, which calculates the paths along a connected tree structure with the smallest value (weight of the relationship such as cost, time or capacity) associated with visiting all nodes in the tree. Additionally, the graph is expected to have very few edges, so the average degree is very small. BANSAL Department of Mathematics, A. Need the graph to be connected, and minimize the cost of laying the cables. Unlike the situation where you're trying to find a minimum cost Hamilton circuit, there is an algorithm. of biconnected graph a linear cost function on the face cycles. Steiner tree problem or so called Steiner Problem in Graphs (SPG) is a classic combinatorial optimization problem. The minimum-cost spanning tree produced by applying Kruskal's algorithm will always contain the lowest cost edge of the graph. Minimum spanning tree is a tree in a graph that spans all the vertices and total weight of a tree is minimal. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. By Ion Cozac. i have an adjacency list representation of a graph for the problem, now i am trying to implement dijkstra's algorithm to find the minimum cost paths for the 'interesting cities' as suggested by @Kolmar. cost a configuration that is a Nash Equilibrium can get in total cost to the minimum solution. In order to be able to run this solution, you will need. ) In this context, given an input graph G, one seeks a homomorphism f of G to H with minimum cost, i. e the Global Processing Via Graph Theoretic technique and comes in sem 7 exams. This paper contains two similar theorems giving con-ditions for a minimum cover and a maximum matching of a graph. If all edge lengths are equal, then the Shortest Path algorithm is equivalent to the breadth-first search algorithm. The Minimum Weight Spanning Tree excludes the relationship with cost 6 from D to E, and the one with cost 3 from B to C. Kruskal's algorithm is used for finding a minimum cost spanning tree. {positive b(v) is a supply {negative b(v) is a demand. Hence, the cost of path from source s to sink t is the sum of costs of each edges in this path. 
A minimum spanning tree of an undirected graph can be easily obtained using classical algorithms by Prim or Kruskal. We transform an dependency attack graph into a Boolean formula and assign cost metrics to attack variables in the formula, based on the severity metrics. We have to go from A to B. This is the single-source shortest paths problem. A number of algorithms have been proposed to enumerate all spanning trees of an undirected graph. Given an n-d costs array, this class can be used to find the minimum-cost path through that array from any set of points to any other set of points. To find minimum cost at cell (i,j), first find the minimum cost to the cell (i-1, j) and cell (i, j-1). Nota: Java knows the length of arrays, in. [costs] is an LxM matrix of minimum cost values for the minimal paths [paths] is an LxM cell containing the shortest path arrays [showWaitbar] (optional) a scalar logical that initializes a waitbar if nonzero. hey, I am trying to find the cost or the length of the path by the following code. The cost reduction strategy converts an existing ow into a ow of lower cost by nding negative cost cycles in the residual graph and adding ow to those cycles. The goal is to obtain an Eulerian Path that has a minimal total cost. – fsociety May 11 '15 at 7:43. , there exist a path from ) •Breadth-first search •Depth-first search •Searching a graph -Systematically follow the edges of a graph to visit the vertices of the graph. the distance of the right path (between robot 3's vertex and 2's goal) be x 1. Assume that $ C_v = 1 $ for all vertices $ 1 \leq v \leq n $ (i. particular, this package provides solving tools for minimum cost spanning tree problems, minimum cost arborescence problems, shortest path tree problems and minimum cut tree problem. [1] There are some theorems that can be used in specific circumstances, such as Dirac's theorem, which says that a Hamiltonian circuit must exist on a graph with n vertices if each vertex has degree n /2 or greater. Finally, we open the black box in order to generalize a recent linear-time algorithm for multiple-source shortest paths in unweighted undirected planar graphs to work in arbitrary orientable surfaces. We describe a simple deterministic lexicographic perturbation scheme that guarantees uniqueness of minimum-cost flows and shortest paths in G. Continue until every vertex is on some-edge you have chosen. acyclic graphs (DAGs) and propose structured sparsity penalties over paths on a DAG (called "path coding" penalties). We propose search several fast algorithms, which allow us to define minimal time cost path and minimal cost path. 2-vertex-connected subgraphs of low cost; no previous approximation algorithm was known for either problem. hk ABSTRACT The computation of Minimum Spanning Trees (MSTs) is a funda-mental graph problem with. Minimum spanning tree Given a connected graph G = (V, E) with edge weights c e, an MST is a subset of the edges T ⊆ E such that T is a spanning tree whose sum of edge weights is minimized. Find a minimum cost spanning tree on the graph below using Kruskal's algorithm. Conclusion We have to study the minimum cost spanning tree using the Prim's algorithm and find the minimum cost is 99 so the final path of minimum cost of spanning is {1, 6}, {6, 5}, {5, 4}, {4, 3}, {3, 2}, {2, 7}. Now I relax all edges leaving B,and set path cost of C as 3, and path cost to e as -2. 
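One fragment here states the multistage graph problem: find the minimum-cost path from source s to sink t through successive stages. It is solved by a backward dynamic program, where the best cost from a node is the minimum over its outgoing edges of edge cost plus the best cost from the head node; a sketch with an invented stage layout (it assumes every non-sink node has at least one forward edge):

```python
def multistage_min_cost(stages, cost, sink):
    """stages: list of lists of nodes, stages[0] = [source], last = [sink].
    cost: dict (u, v) -> edge cost for edges between consecutive stages.
    Returns the minimum source-to-sink cost."""
    best = {sink: 0}
    for stage in reversed(stages[:-1]):          # walk stages back from the sink
        for u in stage:
            best[u] = min(
                w + best[v]
                for (x, v), w in cost.items()
                if x == u and v in best
            )
    return best[stages[0][0]]

stages = [["s"], ["a", "b"], ["t"]]
cost = {("s", "a"): 2, ("s", "b"): 5, ("a", "t"): 3, ("b", "t"): 1}
print(multistage_min_cost(stages, cost, "t"))  # 5, via s -> a -> t
```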
In this chapter, we consider four specific network models—shortest-path prob-lems, maximum-flow problems, CPM-PERT project-scheduling models, and minimum-spanning. In this paper, we address this issue for directed acyclic graphs (DAGs) and propose structured sparsity penalties over paths on a DAG (called "path coding" penalties). Algorithms in graphs include finding a path between two nodes, finding the shortest path between two nodes, determining cycles in the graph (a cycle is a non-empty path from a node to itself), finding a path that reaches all nodes (the famous "traveling salesman problem"), and so on. Lecture notes on bipartite matching 3 Theorem 2 A matching M is maximum if and only if there are no augmenting paths with respect to M. Repeatedly augment along a minimum -cost augmenting path. Minimum spanning tree. graph find a minimum cost to find the shortest path between two points. Part 3: Remember that we are suppose to find the point (x,y) on the graph of the parabola, y = x 2 + 1, that minimizes d. A logarithmic algorithm for the minimum path problem in Knödel graphs is an open problem despite the fact that they are bipartite and highly symmetric. MCP ¶ class skimage. In this graph, cost of an edge (i, j) is represented by c(i, j). A tax of 1 cent per mile on commercial trucks' travel would have raised $2. Weighted Shortest Path Problem Single-source shortest-path problem: Given as input a weighted graph, G = ( V, E ), and a distinguished starting vertex, s, find the shortest weighted path from s to every other vertex in G. • Total cost: C = C(v, w, q) Minimum Total Cost is a function of input prices and output quantity. i have an adjacency list representation of a graph for the problem, now i am trying to implement dijkstra's algorithm to find the minimum cost paths for the 'interesting cities' as suggested by @Kolmar. First, we identify optimality conditions, which tell us when a given perfect matching is in fact minimum. One classical model that has resurfaced in many multi-assembly methods (e. This is not a trivial problem, because the shortest path may not be along the edge (if any) connecting two vertices, but rather may be along a path involving one or more intermediate vertices. Must Read: C Program To Implement Kruskal's Algorithm Every vertex is labelled with pathLength and predecessor. true Suppose a veteran is planning a visit to all the war memorials in Washington, D. In this section we shall show how to find a minimum-cost spanning tree for G. The multistage graph problem is finding the path with minimum cost from source s to sink t. A typical application for minimum-cost spanning trees occurs in the design of communications networks. Suppose that each edge in the graph has a weight of zero (while non-edges have a cost of $ \infty $ ). The output is either a single float (when a single vertex is provided) or a vector of floats corresponding to the vertex vector. In this paper, we implemented two graph theory methods that extend the least-cost path approach: the Conditional Minimum Transit Cost (CMTC) tool and the Multiple Shortest Paths (MSPs) tool. {Find ow which satis es supplies and demands and has minimum total cost. It can be said as an extension of maximum flow problem with an added constraint on cost(per unit flow) of flow for each edge. // Indexed in order of stages E is a set of edges. Shortest Path using Dijkstra's Algorithm is used to find Single Source shortest Paths to all vertices of graph in case the graph doesn't have negative edges. 
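The matching fragments above rest on the augmenting-path characterization (a matching is maximum iff no augmenting path exists). A minimal Kuhn-style sketch for the unweighted bipartite case follows; the weighted minimum-cost version needs the Hungarian algorithm, which is not shown, and the small adjacency list is invented:

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] = list of right-vertices adjacent to left-vertex u.
    Searches for an augmenting path from each free left vertex."""
    match_right = [-1] * n_right          # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched elsewhere:
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matched = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matched += 1
    return matched, match_right

adj = [[0, 1], [0], [1, 2]]               # hypothetical 3x3 bipartite graph
print(max_bipartite_matching(adj, 3, 3))  # (3, [1, 0, 2])
```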
As you can probably imagine, larger graphs have more nodes and many more possibilities for subgraphs. Generic approach: A tree is an acyclic graph. Consider a connected undirected graph G with not necessarily distinct edge costs. The cost of this spanning tree is (5 + 7 + 3 + 3 + 5 + 8 + 3 + 4) = 38. In case it can, what would be the minimum cost of such path?. Kruskal's algorithm is a minimum-spanning-tree algorithm which finds an edge of the least possible weight that connects any two trees in the forest. Prim's Algorithm. scrolling computer game in terms of nding a minimum-cost monotone path in the graph, G, that represents this game. Minimum Spanning Tree Problem Find a minimum-cost set of edges that connect all vertices of a graph at lowest total cost Applications Connecting "nodes" with a minimum of "wire" Networking Circuit design Collecting nearby nodes Clustering, taxonomy construction Approximating graphs Most graph algorithms are faster on trees. A tax of 1 cent per mile on commercial trucks' travel would have raised $2. I want to: Make it pythonic Improve readability Improve the abstracti. A third is in the process of obtaining a subset of the overall graph, called a Spanning Tree which connects every desired node with a path, but has no paths which can start and end on the same node (such a path is called a cycle). 3: Computing the Single Source Shortest Path in a graph. of biconnected graph a linear cost function on the face cycles. It's important to be acquainted with all of these algorithms - the motivation behind them, their implementations and applications. G is usually assumed to be a weighted graph. Both of these conditions depend on the concept of an alternating path, due to Petersen [2]. These fun activities will help students learn how to read and gather data and use tables to create. Looks at the successors of the current lowest cost vertex in the wavefront. If G is a weighted graph, then the minimum spanning tree Span(G) is the spanning tree over G with minimum weight. if there are multiple edges, keep the lowest cost one Iedge weights are g(x t;u t) Iadd additional target vertex z with an edge from each x 2X T with weight g T (x) Ia sequence of actions is a path through the unrolled graph from x 0 to z Iassociated objective is total, weighted path length 4. In this case, we start with single edge of graph and we add edges to it and finally we get minimum cost tree. To find minimum cost at cell (i,j), first find the minimum cost to the cell (i-1, j) and cell (i, j-1). Minimum spanning tree is the spanning tree where the cost is minimum among all the spanning trees. This model consid-ers the phenomenon that some vehicles may choose to stop at some places to avoid tra c jams. 32 to just. The first distinction is that Dijkstra's algorithm solves a different problem than Kruskal and Prim. Dijkstra's algorithm (also called uniform cost search) - Use a priority queue in general search/traversal. Conclusion We have to study the minimum cost spanning tree using the Prim's algorithm and find the minimum cost is 99 so the final path of minimum cost of spanning is {1, 6}, {6, 5}, {5, 4}, {4, 3}, {3, 2}, {2, 7}. We transform an dependency attack graph into a Boolean formula and assign cost metrics to attack variables in the formula, based on the severity metrics. Given a graph, the start node, and the goal node, your program will search the graph for a minimum-cost path from the start to the goal. 
The cost of a path is the sum of the costs of the edges and vertices encountered on the path. Weighted Graphs Data Structures & Algorithms 3 [email protected] ©2000-2009 McQuain Dijkstra's SSAD Algorithm* We assume that there is a path from the source vertex s to every other vertex in the graph. The limitation of this type of analysis is that only a single path is identified, even though alternative paths with comparable costs might exist. It works for non-loopy mazes which was already my goal. The cost of the spanning tree is the sum of the weights of all the edges in the tree. Repeatedly augment along a minimum -cost augmenting path. In this paper, the time dependent graph is presented. MCP(costs, offsets=None, fully_connected=True)¶. A heuristic is admissible if for any node, n, in the graph, the heuristic estimate of the cost of the path from n to t is less than or equal to the true cost of that path. There are nn–2 spanning trees of K n. (2)Then I process vertex b, and it is now included in S as it's shortest path from source is determined. The goal of the proposal is to obtain an optimal path with the same cost as the path returned by Dijkstra's algorithm, for the same origin and destination, but using a reduced graph. Maintains a cost to visit every vertex. 4 Problem 5. It is defined here for undirected graphs; for directed graphs the definition of path requires that consecutive vertices be connected by an appropriate directed edge. Dijkstra's Algorithm [2] successfully finds the lowest cost path for each journey. [1] There are some theorems that can be used in specific circumstances, such as Dirac's theorem, which says that a Hamiltonian circuit must exist on a graph with n vertices if each vertex has degree n /2 or greater. Total cost of a path to reach (m, n) is sum of all the costs on that path (including both source and destination). The problem is solved by using the Minimal Spanning Tree Algorithm. Starting from node , we select the lower weight path, i. Before increasing the edge weights, shortest path from vertex 1 to 4 was through 2 and 3 but after increasing Figure 1: Counterexample for Shortest Path Tree the edge weights shortest path to 4 is from vertex 1. Note : It is assumed that negative cost cycles do not exist in input matrix. The path is (0, 0) –> (0, 1) –> (1, 2) –> (2, 2). acyclic graphs (DAGs) and propose structured sparsity penalties over paths on a DAG (called "path coding" penalties). Determining a minimum cost path between two given nodes of this graph can take O(mlogn) time, where n = jV j and m = jEj: If this graph is huge, say n … 700000 and m. [Tree, pred] = graphminspantree(G) finds an acyclic subset of edges that connects all the nodes in the undirected graph G and for which the total weight is minimized. Given a square grid of size N, each cell of which contains integer cost which represents a cost to traverse through that cell, we need to find a path from top left cell to bottom right cell by which total cost incurred is minimum. [1] There are some theorems that can be used in specific circumstances, such as Dirac's theorem, which says that a Hamiltonian circuit must exist on a graph with n vertices if each vertex has degree n /2 or greater. A minimum-cost spanning tree is one which has the smallest possible total weight (where weight represents cost or distance). Computing a planarorthogonaldrawingofa planar graphwith the minimum number of bends over all possible embeddings is in general NP-hard [17,18]. 
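The admissibility condition quoted above is exactly what guarantees that A* returns an optimal path. A minimal grid sketch using the Manhattan distance, which never overestimates on a 4-connected unit-cost grid (the maze layout is invented):

```python
import heapq

def astar(grid, start, goal):
    """4-connected unit-cost grid; grid[r][c] == 1 means blocked.
    Manhattan distance is admissible here, so A* is optimal."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    pq = [(h(start), start)]              # priority = g + h
    while pq:
        f, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return g[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[(r, c)] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(pq, (ng + h((nr, nc)), (nr, nc)))
    return None                           # goal unreachable

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))  # 6: around the blocked middle row
```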
A typical application of this problem involves finding the best delivery route from a factory to a warehouse where the road network has some capacity and cost associated. min_paths(+Vertex, +WeightedGraph, -Tree) Tree is a tree of all the minimum-cost paths from Vertex to every other vertex in WeightedGraph. , there exist a path from ) •Breadth-first search •Depth-first search •Searching a graph -Systematically follow the edges of a graph to visit the vertices of the graph. Minimum Cost Flow Problem • Objective: determine the least cost movement of a commodity through a network in order to satisfy demands at certain nodes from available supplies at other nodes. Kruskal's algorithm is a minimum-spanning-tree algorithm which finds an edge of the least possible weight that connects any two trees in the forest. 1 Shortest path problem The shortest path problem is one of the simplest of all network ow problems. In kruskal's algorithm, edges are added to the spanning tree in increasing order of cost. Minimum spanning tree. G is usually assumed to be a weighted graph. ] Each set V i is called a stage in the graph. A spanning tree in a given graph is a tree built using all the vertices of the graph and just enough of its edges to obtain a tree. Assuming that you don't expect the paths to be more than 1000 steps long, you can choose p = 1/1000. Therefore, we can use the same reduction to also compute a minimum-cost maximum cardinality matching in O~(mn2=5) time. Operations Research Methods 8. • The edges for shortest path are , , and is the minimum cost of path 1 to 4 is "4", cost of path 1 to 2 is "6", and cost of path 4 to 3 is "5" of the given graph 6 + 4 + 5 = 15. This problem is similar to Finding possible paths in grid. 412 G orke et al. These algorithms carve paths through the graph, but there is no expectation that those paths are computationally optimal. This contradicts the maximality of M. We can, therefore define function for any V,. Investigate ideas such as planar graphs, complete graphs, minimum-cost spanning trees, and Euler and Hamiltonian paths. The cost of a path from s to t is the sum of costs of the edges on the path. i have an adjacency list representation of a graph for the problem, now i am trying to implement dijkstra's algorithm to find the minimum cost paths for the 'interesting cities' as suggested by @Kolmar. An expansion path provides a long-run view of a firm's production decision and can be used to create its long-run cost curves. It is expanded, yielding nodes B, C, D. In any graph G, the shortest path from a source vertex to a destination vertex can be calculated using Dijkstra Algorithm. Must Read: C Program To Implement Kruskal's Algorithm Every vertex is labelled with pathLength and predecessor. the original graph. As it turns out, the minimum cost flow problem is equivalent to minimum cost circulation problem and transshipment problem in the sense that they can be reduce to each other while blowing up the input size by a constant factor. Describe and analyze an e cient algorithm for nding a minimum-cost monotone path in such a graph, G. This problem is also called the assignment problem. 2) Areas less than the minimum core habitat percentage times the area of the foraging radius are eliminated 3) A cost surface is created from the habitat quality raster, cells of high quality have a low cost and vise versa 4) The remaining patches are grown outwards across the cost surface to a distance equal to the foraging radius. 
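The factory-to-warehouse example is the setting of the networkx helper whose signature is quoted earlier in this section; a small usage sketch (node names, capacities, and per-unit costs are invented):

```python
import networkx as nx

G = nx.DiGraph()
# Negative demand = supply at the factory; positive demand at the warehouse.
G.add_node("factory", demand=-4)
G.add_node("warehouse", demand=4)
G.add_edge("factory", "road_a", weight=3, capacity=3)   # weight = cost per unit
G.add_edge("factory", "road_b", weight=5, capacity=4)
G.add_edge("road_a", "warehouse", weight=1, capacity=3)
G.add_edge("road_b", "warehouse", weight=1, capacity=4)

flow = nx.min_cost_flow(G)           # dict: node -> {successor: flow}
print(flow)
print(nx.cost_of_flow(G, flow))      # 18: 3 units via road_a, 1 via road_b
```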
Shortest Path, Network Flows, Minimum Cut, Maximum Clique, Chinese Postman Problem, Graph Center, Graph Median etc. Negative Edge Costs. Single-Source Shortest-Path Problem: Given as input a weighted graph, G = (V,E), and a distinguished vertex, s, find the shortest weighted path from s to every other vertex in G. cost(e), but cannot be shared by more than cap(e) paths even if we pay the cost of e. Can you move some of the vertices or bend. Given a graph, the start node, and the goal node, your program will search the graph for a minimum-cost path from the start to the goal. An Extended Path Following Algorithm for Graph-Matching Problem, Zhi-Yong Liu, Hong Qiao, Senior Member, IEEE, and Lei Xu, Fellow, IEEE. Abstract—The path following algorithm was proposed recently to approximately solve the matching problems on undirected graph models and exhibited a state-of-the-art performance on matching accuracy. For example, consider the graph below. Abstract: Let G be an edge-weighted directed graph with n vertices embedded on a surface of genus g.
Cold molecules formation by shaping with light the short-range interaction between cold atoms: photoassociation with strong laser pulses
M. Vatasescu
Physics, 2009
Abstract: The paper investigates cold-molecule formation in the photoassociation of two cold atoms by a strong laser pulse applied at short interatomic distances, which leads to molecular dynamics taking place in the light-induced (adiabatic) potentials. A two-electronic-state model in the cesium dimer is used to analyse the effects of this strong coupling regime and to show specific results: i) acceleration of the ground state population to the inner zone due to a non-impulsive regime of coupling at short and intermediate interatomic distances; ii) formation of cold molecules in strongly bound levels of the ground state, where the population at the end of the pulse is much larger than the population photoassociated in bound levels of the excited state; iii) the final momentum distribution of the ground state wavepacket keeping the signatures of the maxima in the initial wavefunction continuum. It is shown that the topology of the light-induced potentials plays an important role in the dynamics.
Entanglement between electronic and vibrational degrees of freedom in a laser-driven molecular system
Mihaela Vatasescu
Physics, 2014, DOI: 10.1103/PhysRevA.88.063415
Abstract: We investigate the entanglement between electronic and vibrational degrees of freedom produced by a vibronic coupling in a molecular system described in the Born-Oppenheimer approximation. Entanglement in a pure state of the Hilbert space $\mathcal{H} = \mathcal{H}_{el} \otimes \mathcal{H}_{vib}$ is quantified using the von Neumann entropy of the reduced density matrix and the reduced linear entropy. Expressions for these entanglement measures are derived for the $2 \times N_v$ and $3 \times N_v$ cases of the bipartite entanglement, where 2 and 3 are the dimensions of the electronic Hilbert space $\mathcal{H}_{el}$, and $N_v$ is the dimension of $\mathcal{H}_{vib}$. We study the entanglement dynamics for two electronic states coupled by a laser pulse (a $2 \times N_v$ case), taking as an example a coupling between the $a^3\Sigma_{u}^{+} (6s,6s)$ and $1_g(6s,6p_{3/2})$ states of the Cs$_2$ molecule. The reduced linear entropy expression obtained for the $3 \times N_v$ case is used to follow the entanglement evolution in a scheme proposed for the control of the vibronic dynamics in a Cs$_2$ cold molecule, involving the $a^3\Sigma_{u}^{+}(6s,6s)$, $0_g^-(6s,6p_{3/2})$, and $0_g^-(6s,5d)$ electronic states, which are coupled by a non-adiabatic radial coupling and a sequence of chirped laser pulses.
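For reference, the two measures named in this abstract have standard textbook definitions (the following are the usual formulas, not the paper's specific derivation). Writing $\rho_{el} = \mathrm{Tr}_{vib}\,|\Psi\rangle\langle\Psi|$ for the reduced electronic density matrix with eigenvalues $\lambda_i$:

```latex
S_{vN} = -\,\mathrm{Tr}\!\left(\rho_{el}\ln\rho_{el}\right) = -\sum_i \lambda_i \ln \lambda_i,
\qquad
S_{L} = 1 - \mathrm{Tr}\!\left(\rho_{el}^{2}\right) = 1 - \sum_i \lambda_i^{2}.
```

Both vanish for a separable (product) state and grow with entanglement; in the $2 \times N_v$ case the Schmidt rank is at most 2, so at most two $\lambda_i$ are nonzero.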
Mid-term echocardiographic follow up of left ventricular function with permanent right ventricular pacing in pediatric patients with and without structural heart disease
Tchavdar Shalganov, Dora Paprika, Radu Vatasescu, Attila Kardos, Attila Mihalcz, Laszlo Kornyei, Andras Szatmari, Tamas Szili-Torok
Cardiovascular Ultrasound , 2007, DOI: 10.1186/1476-7120-5-13
Abstract: A group of 99 pediatric patients with previously implanted pacemaker was studied retrospectively. Forty-three patients (21 males) had isolated congenital complete or advanced atrioventricular block. The remaining 56 patients (34 males) had pacing indication in the presence of structural heart disease. Thirty-two of them (21 males) had isolated structural heart disease and the remaining 24 (13 males) had complex congenital heart disease. Patients were followed up for an average of 53 ± 41.4 months with 12-lead electrocardiogram and transthoracic echocardiography. Left ventricular shortening fraction was used as a marker of ventricular function. QRS duration was assessed using leads V5 or II on standard 12-lead electrocardiogram.Left ventricular shortening fraction did not change significantly after pacemaker implantation compared to preimplant values overall and in subgroups. In patients with complex congenital heart malformations shortening fraction decreased significantly during the follow up period. (0.45 ± 0.07 vs 0.35 ± 0.06, p = 0.015). The correlation between the change in left ventricular shortening fraction and the mean increase of paced QRS duration was not significant. Six patients developed dilated cardiomyopathy, which was diagnosed 2 months to 9 years after pacemaker implantation.Chronic right ventricular pacing in pediatric patients with or without structural heart disease does not necessarily result in decline of left ventricular function. In patients with complex congenital heart malformations left ventricular shortening fraction shows significant decrease.Chronic right ventricular (RV) apical pacing alters unfavorably left ventricular (LV) electrical activation, mechanical contraction, cardiac output, myocardial perfusion and histology. Permanent RV pacing may have detrimental effect on LV function and may promote to heart failure in adult patients with LV dysfunction [1-10]. The effect of chronic RV apical pacing on LV performance in pediatric pati
Efficient formation of strongly bound ultracold cesium molecules by photoassociation with tunneling
Mihaela Vatasescu,Claude M. Dion,Olivier Dulieu
Physics , 2006, DOI: 10.1088/0953-4075/39/19/S09
Abstract: We calculate the rates of formation and detection of ultracold Cs_2 molecules obtained from the photoassociation of ultracold atoms through the double-well 0g- (6S1/2 + 6P3/2) state. We concentrate on two features previously observed experimentally and attributed to tunneling between the two wells [Vatasescu et al 2000 Phys. Rev. A 61 044701]. We show that the molecules obtained are in strongly bound levels (v''=5,6) of the metastable a3Sigma_u+ (6S1/2 + 6S1/2) ground electronic state.
Optimizing the photoassociation of cold atoms by use of chirped laser pulses
Eliane Luc-Koenig,Mihaela Vatasescu,Francoise Masnou-Seeuws
Physics , 2004, DOI: 10.1140/epjd/e2004-00161-8
Abstract: Photoassociation of ultracold atoms induced by chirped picosecond pulses is analyzed in a non-perturbative treatment by following the wavepackets dynamics on the ground and excited surfaces. The initial state is described by a Boltzmann distribution of continuum scattering states. The chosen example is photoassociation of cesium atoms at temperature T=54 $\mu K$ from the $a^3 \Sigma_u^+(6s,6s)$ continuum to bound levels in the external well of the $0_g^-(6s+6p_{3/2})$ potential. We study how the modification of the pulse characteristics (carrier frequency, duration, linear chirp rate and intensity) can enhance the number of photoassociated molecules and suggest ways of optimizing the production of stable molecules.
Bizarre Parosteal Osteochondromatous Proliferation of the Skull in a Young Male [PDF]
Radu Baz, Cosmin Niscoveanu
Open Journal of Radiology (OJRad) , 2013, DOI: 10.4236/ojrad.2013.33022
Bizarre parosteal osteochondromatous proliferation (BPOP), as defined by Nora and colleagues in 1983 (also called Nora lesion), is a rare lesion. About 160 cases of BPOP have been presented in the literature to date. The lesion is an exophytic outgrowth from the cortical surface consisting of bone, cartilage and fibrous tissue. These types of lesions have been reported mostly in the hands and feet. Localization at the level of the skull is extremely rare. We report a case of a young man with multiple Nora's lesions with atypical localization in the skull and mandible.
ECONOMIC CRISIS AND THE COMPETITIVENESS OF TRANSNATIONAL COMPANIES
Liviu RADU,Carmen RADU
Lex et Scientia , 2012,
Abstract: In crisis situations, the competitiveness of transnational companies becomes a particularly complex concept, due to the fact that said business entities are continuously moving within the context of internationalization and increasing use of global strategies. Given the current economic context, one cannot merely assess the competitiveness level of any given transnational company from a static standpoint, depending on the turnover, sales volume or number of employees of said company, but such assessment needs to be made from a dynamic standpoint, in close connection with the internal and international business environment in which that company carries out its activity.
CAREER OPPORTUNITIES IN A DOWNTURN SOCIETY
Carmen RADU,Liviu RADU
Challenges of the Knowledge Society , 2011,
Abstract: The world crisis that began in 2008 has negative influences on financial and economic-social structures, mainly affecting the young working population. Young people are the most affected by the current economic and financial crisis. The job offer for young people seems to have decreased to a significant extent, and of all categories of job candidates they are the most affected, precisely because of their lack of experience and the high costs of training new employees under the current competitive labour market conditions. Data from a study by the National Employment Agency indicate that in 2010 only 6.36% of young unemployed (under the age of 25) found jobs within the first three months. At the same time, the main specializations for which personnel was still being recruited at the end of 2010 were IT, outsourcing, accountancy, engineering, retail and pharmaceuticals, according to recruitment agencies.
MULTINATIONAL CARTELS
Abstract: Improving the functioning of markets for the benefit of European consumers and companies remains an essential component of the European project. In 2007, competition policy made a significant contribution to the welfare of consumers by addressing the issue of cartels. Although the fight against cartels is becoming increasingly global and an ever larger challenge, the efforts of the European institutions in this fight have begun to take shape. The European leniency (clemency) policy has proven to be an extremely powerful weapon in encouraging companies to admit that cartels exist. Competition policy is increasingly aligned with the other policies of the European Commission. The recent revision of the Lisbon Strategy, as approved by the European Council, lists competition norms among the areas in which the EU can contribute specific expertise beneficial to its key partners. This is closely connected to the need to ensure fair competition and equal conditions worldwide. The present paper is a study of the positive and negative influences in the activity of cartels.
ch3sh intermolecular forces
What is the strongest intermolecular force in CH3SH? The Lewis structure of CH3SH shows a polar S-H bond, so CH3SH is a polar molecule: it exhibits dipole-dipole interactions, on top of the London dispersion forces present in every molecule. It cannot hydrogen bond, because hydrogen bonding has a strict requirement: the molecule must contain hydrogen bonded directly to one of the most electronegative elements, F, O, or N. In order of strength, ion-dipole forces are the strongest intermolecular force, followed by hydrogen bonding, then dipole-dipole forces, then dispersion forces (for molecules of comparable size).

Why does CH3OH boil at 65 °C while CH3SH boils at 6 °C? Methanol contains an O-H group and therefore forms hydrogen bonds, the strongest of the common intermolecular forces; methanethiol is limited to the weaker dipole-dipole interaction, so methanol has the much higher boiling point. For the same reason, CH3OCH3 (dimethyl ether) is polar but does not hydrogen bond with itself, since it has no hydrogen bonded to oxygen.

Why is Xe a liquid at atmospheric pressure and 120 K, whereas Ar is a gas under the same conditions? Both are nonpolar, so only London dispersion forces act between the atoms. Xe has far more electrons than Ar, is therefore more polarizable, and so has the stronger dispersion forces. The same reasoning explains the steady increase in boiling point along Ne, Ar, Kr, and Xe, whereas in the series HF, HCl, HBr, and HI the trend is broken by HF: neon and HF have approximately the same molecular mass, yet HF boils far higher because of its hydrogen bonding. Similarly, Kr (atomic weight 84) boils at 120.9 K, whereas Cl2 (molecular weight about 71) boils at 238 K; the elongated Cl2 molecule has a more polarizable electron cloud, giving it stronger dispersion forces despite its slightly lower mass. Acetone boils at 56 °C, whereas 2-methylpropane boils at -12 °C, because acetone's carbonyl group gives it a permanent dipole while the alkane has dispersion forces only.

Which member of each pair has the stronger intermolecular forces? (a) I2 over Br2 (more electrons, stronger dispersion forces); (b) H2O over H2S (H2S lacks the hydrogen bonding found in H2O); (c) NH3 over PH3 (PH3 lacks the hydrogen bonding found in NH3).

Ranking CH4, CH3CH3, CH3CH2Cl, and CH3CH2OH: CH4 and CH3CH3 have only dispersion forces, and CH4, with the lower molar mass, has the weaker ones. CH3CH2Cl is a polar molecule and therefore has dipole-dipole forces in addition to dispersion forces. CH3CH2OH contains hydrogen bonded to an electronegative element, so hydrogen bonding is possible; it has the highest boiling point of the four. Among CF4, CCl4, pentane (CH3CH2CH2CH2CH3), and CH2Cl2, all but CH2Cl2 have dispersion forces as their dominant intermolecular force; CH2Cl2 is polar and so also has dipole-dipole forces.

What type of intermolecular force is common to Xe and methanol (CH3OH)? Only London dispersion forces, since Xe is nonpolar. CH3OH and acetonitrile (CH3CN) share dipole-dipole forces (as well as dispersion), while NH3 and HF share hydrogen bonding.

Intramolecular versus intermolecular forces: intramolecular forces are the forces of attraction and repulsion within a molecule, and the forces binding atoms in a molecule are due to chemical bonding; an intermolecular force is weak compared to a covalent bond. The energy required to break a bond is called the bond energy. The average O-H bond energy in water is 463 kJ/mol: on average, 463 kJ is required to break 6.022 x 10^23 O-H bonds, or 926 kJ to convert 1.0 mol of water into 1.0 mol of O atoms and 2.0 mol of H atoms. In liquids, the attractive intermolecular forces are strong enough to hold molecules relatively close to each other, but not strong enough to keep them from moving past one another. As the intermolecular forces increase, the boiling point increases and the vapor pressure decreases; in each pair of liquids, the one with the higher vapor pressure is the one with the weaker intermolecular forces.

Identifying the forces present in a list of substances: for Ag, the major intermolecular force is just dispersion, because there is no electronegativity difference; for HBr, it is dipole-dipole, because of the polar nature of the molecule; and for nonpolar SnH4, it is dispersion only.
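The decision logic used repeatedly in these answers (ion? hydrogen on N, O, or F? permanent dipole? otherwise dispersion only) can be written as a short procedure. The Python sketch below is purely illustrative; the boolean feature flags are supplied by hand rather than derived from molecular structure.

```python
def dominant_imf(has_h_on_nof: bool, is_polar: bool, is_ion: bool = False) -> str:
    """Return the strongest intermolecular force a species can exhibit.

    Every molecule has London dispersion forces; stronger interactions
    are layered on top when the structural requirements are met.
    """
    if is_ion:
        return "ionic / ion-dipole"
    if has_h_on_nof:          # H bonded directly to N, O, or F
        return "hydrogen bonding"
    if is_polar:              # permanent dipole moment
        return "dipole-dipole"
    return "dispersion only"

examples = {
    "CH3OH": dominant_imf(has_h_on_nof=True, is_polar=True),    # hydrogen bonding
    "CH3SH": dominant_imf(has_h_on_nof=False, is_polar=True),   # dipole-dipole
    "Xe":    dominant_imf(has_h_on_nof=False, is_polar=False),  # dispersion only
    "HBr":   dominant_imf(has_h_on_nof=False, is_polar=True),   # dipole-dipole
    "SnH4":  dominant_imf(has_h_on_nof=False, is_polar=False),  # dispersion only
}
for molecule, force in examples.items():
    print(f"{molecule}: {force}")
```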
1. Applicable Mathematics in a Minimal Computational Theory of Sets
Avron, Arnon ; Cohen, Liron.
In previous papers on this project a general static logical framework for formalizing and mechanizing set theories of different strength was suggested, and the power of some predicatively acceptable theories in that framework was explored. In this work we first improve that framework by enriching it with means for coherently extending by definitions its theories, without destroying its static nature or violating any of the principles on which it is based. Then we turn to investigate within the enriched framework the power of the minimal (predicatively acceptable) theory in it that proves the existence of infinite sets. We show that that theory is a computational theory, in the sense that every element of its minimal transitive model is denoted by some of its closed terms. (That model happens to be the second universe in Jensen's hierarchy.) Then we show that already this minimal theory suffices for developing very large portions (if not all) of scientifically applicable mathematics. This requires treating the collection of real numbers as a proper class, that is: a unary predicate which can be introduced in the theory by the static extension method described in the first part of the paper.
2. Qualitative and Quantitative Monitoring of Spatio-Temporal Properties with SSTL
Nenzi, L. ; Bortolussi, L. ; Ciancia, V. ; Loreti, M. ; Massink, M..
In spatially located, large scale systems, time and space dynamics interact and drive the behaviour. Examples of such systems can be found in many smart city applications and Cyber-Physical Systems. In this paper we present the Signal Spatio-Temporal Logic (SSTL), a modal logic that can be used to specify spatio-temporal properties of linear time and discrete space models. The logic is equipped with a Boolean and a quantitative semantics for which efficient monitoring algorithms have been developed. As such, it is suitable for real-time verification of both white box and black box complex systems. These algorithms can also be combined with stochastic model checking routines. SSTL combines the until temporal modality with two spatial modalities, one expressing that something is true somewhere nearby and the other capturing the notion of being surrounded by a region that satisfies a given spatio-temporal property. The monitoring algorithms are implemented in an open source Java tool. We illustrate the use of SSTL analysing the formation of patterns in a Turing Reaction-Diffusion system and spatio-temporal aspects of a large bike-sharing system.
Section: Modal and temporal logics
3. Local Redundancy in SAT: Generalizations of Blocked Clauses
Kiesl, Benjamin ; Seidl, Martina ; Tompits, Hans ; Biere, Armin.
Clause-elimination procedures that simplify formulas in conjunctive normal form play an important role in modern SAT solving. Before or during the actual solving process, such procedures identify and remove clauses that are irrelevant to the solving result. These simplifications usually rely on so-called redundancy properties that characterize cases in which the removal of a clause does not affect the satisfiability status of a formula. One particularly successful redundancy property is that of blocked clauses, because it generalizes several other redundancy properties. To find out whether a clause is blocked---and therefore redundant---one only needs to consider its resolution environment, i.e., the clauses with which it can be resolved. For this reason, we say that the redundancy property of blocked clauses is local. In this paper, we show that there exist local redundancy properties that are even more general than blocked clauses. We present a semantic notion of blocking and prove that it constitutes the most general local redundancy property. We furthermore introduce the syntax-based notions of set-blocking and super-blocking, and show that the latter coincides with our semantic blocking notion. In addition, we show how semantic blocking can be alternatively characterized via Davis and Putnam's rule for eliminating atomic formulas. Finally, we perform a detailed complexity analysis and relate our novel redundancy properties to prominent redundancy properties from the […]
4. On the algebraic structure of Weihrauch degrees
Brattka, Vasco ; Pauly, Arno.
We introduce two new operations (compositional products and implication) on Weihrauch degrees, and investigate the overall algebraic structure. The validity of the various distributivity laws is studied and forms the basis for a comparison with similar structures such as residuated lattices and concurrent Kleene algebras. Introducing the notion of an ideal with respect to the compositional product, we can consider suitable quotients of the Weihrauch degrees. We also prove some specific characterizations using the implication. In order to introduce and study compositional products and implications, we introduce and study a function space of multi-valued continuous functions. This space turns out to be particularly well-behaved for effectively traceable spaces that are closely related to admissibly represented spaces.
5. The Complexity of Bisimulation and Simulation on Finite Systems
Ganardi, Moses ; Göller, Stefan ; Lohrey, Markus.
In this paper the computational complexity of the (bi)simulation problem over restricted graph classes is studied. For trees given as pointer structures or terms the (bi)simulation problem is complete for logarithmic space or NC$^1$, respectively. This solves an open problem from Balcázar, Gabarró, and Sántha. Furthermore, if only one of the input graphs is required to be a tree, the bisimulation (simulation) problem is contained in AC$^1$ (LogCFL). In contrast, it is also shown that the simulation problem is P-complete already for graphs of bounded path-width.
6. Codensity Lifting of Monads and its Dual
Katsumata, Shin-ya ; Sato, Tetsuya ; Uustalu, Tarmo.
We introduce a method to lift monads on the base category of a fibration to its total category. This method, which we call codensity lifting, is applicable to various fibrations which were not supported by its precursor, categorical TT-lifting. After introducing the codensity lifting, we illustrate some examples of codensity liftings of monads along the fibrations from the category of preorders, topological spaces and extended pseudometric spaces to the category of sets, and also the fibration from the category of binary relations between measurable spaces. We also introduce the dual method called density lifting of comonads. We next study the liftings of algebraic operations to the codensity liftings of monads. We also give a characterisation of the class of liftings of monads along posetal fibrations with fibred small meets as a limit of a certain large diagram.
7. Deciding Confluence and Normal Form Properties of Ground Term Rewrite Systems Efficiently
Felgenhauer, Bertram.
It is known that the first-order theory of rewriting is decidable for ground term rewrite systems, but the general technique uses tree automata and often takes exponential time. For many properties, including confluence (CR), uniqueness of normal forms with respect to reductions (UNR) and with respect to conversions (UNC), polynomial time decision procedures are known for ground term rewrite systems. However, this is not the case for the normal form property (NFP). In this work, we present a cubic time algorithm for NFP, an almost cubic time algorithm for UNR, and an almost linear time algorithm for UNC, improving previous bounds. We also present a cubic time algorithm for CR.
8. Extension by Conservation. Sikorski's Theorem
Rinaldi, Davide ; Wessel, Daniel.
Constructive meaning is given to the assertion that every finite Boolean algebra is an injective object in the category of distributive lattices. To this end, we employ Scott's notion of entailment relation, in which context we describe Sikorski's extension theorem for finite Boolean algebras and turn it into a syntactical conservation result. As a by-product, we can facilitate proofs of several related classical principles.
9. The Complexity of All-switches Strategy Improvement
Fearnley, John ; Savani, Rahul.
Strategy improvement is a widely-used and well-studied class of algorithms for solving graph-based infinite games. These algorithms are parameterized by a switching rule, and one of the most natural rules is "all switches" which switches as many edges as possible in each iteration. Continuing a recent line of work, we study all-switches strategy improvement from the perspective of computational complexity. We consider two natural decision problems, both of which have as input a game $G$, a starting strategy $s$, and an edge $e$. The problems are: 1.) The edge switch problem, namely, is the edge $e$ ever switched by all-switches strategy improvement when it is started from $s$ on game $G$? 2.) The optimal strategy problem, namely, is the edge $e$ used in the final strategy that is found by strategy improvement when it is started from $s$ on game $G$? We show $\mathtt{PSPACE}$-completeness of the edge switch problem and optimal strategy problem for the following settings: Parity games with the discrete strategy improvement algorithm of Vöge and Jurdziński; mean-payoff games with the gain-bias algorithm [14,37]; and discounted-payoff games and simple stochastic games with their standard strategy improvement algorithms. We also show $\mathtt{PSPACE}$-completeness of an analogous problem to edge switch for the bottom-antipodal algorithm for finding the sink of an Acyclic Unique Sink Orientation on a cube.
10. Do Hard SAT-Related Reasoning Tasks Become Easier in the Krom Fragment?
Creignou, Nadia ; Pichler, Reinhard ; Woltran, Stefan.
Many reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when restricting the structure of the formulas in a certain way, the situation is more opaque for more involved decision problems. We consider here the CardMinSat problem which asks, given a propositional formula $\phi$ and an atom $x$, whether $x$ is true in some cardinality-minimal model of $\phi$. This problem is easy for the Horn fragment, but, as we will show in this paper, remains $\Theta_2$-complete (and thus $\mathrm{NP}$-hard) for the Krom fragment (which is given by formulas in CNF where clauses have at most two literals). We will make use of this fact to study the complexity of reasoning tasks in belief revision and logic-based abduction and show that, while in some cases the restriction to Krom formulas leads to a decrease of complexity, in others it does not. We thus also consider the CardMinSat problem with respect to additional restrictions to Krom formulas towards a better understanding of the tractability frontier of such problems.
11. Intuitionistic Layered Graph Logic: Semantics and Proof Theory
Docherty, Simon ; Pym, David.
Models of complex systems are widely used in the physical and social sciences, and the concept of layering, typically building upon graph-theoretic structure, is a common feature. We describe an intuitionistic substructural logic called ILGL that gives an account of layering. The logic is a bunched system, combining the usual intuitionistic connectives, together with a non-commutative, non-associative conjunction (used to capture layering) and its associated implications. We give soundness and completeness theorems for a labelled tableaux system with respect to a Kripke semantics on graphs. We then give an equivalent relational semantics, itself proven equivalent to an algebraic semantics via a representation theorem. We utilise this result in two ways. First, we prove decidability of the logic by showing the finite embeddability property holds for the algebraic semantics. Second, we prove a Stone-type duality theorem for the logic. By introducing the notions of ILGL hyperdoctrine and indexed layered frame we are able to extend this result to a predicate version of the logic and prove soundness and completeness theorems for an extension of the layered graph semantics. We indicate the utility of predicate ILGL with a resource-labelled bigraph model.
12. Reasoning with Finite Sets and Cardinality Constraints in SMT
Bansal, Kshitij ; Barrett, Clark ; Reynolds, Andrew ; Tinelli, Cesare.
We consider the problem of deciding the satisfiability of quantifier-free formulas in the theory of finite sets with cardinality constraints. Sets are a common high-level data structure used in programming; thus, such a theory is useful for modeling program constructs directly. More importantly, sets are a basic construct of mathematics and thus natural to use when formalizing the properties of computational systems. We develop a calculus describing a modular combination of a procedure for reasoning about membership constraints with a procedure for reasoning about cardinality constraints. Cardinality reasoning involves tracking how different sets overlap. For efficiency, we avoid considering Venn regions directly, as done in previous work. Instead, we develop a novel technique wherein potentially overlapping regions are considered incrementally as needed, using a graph to track the interaction among the different regions. The calculus has been designed to facilitate its implementation within SMT solvers based on the DPLL($T$) architecture. Our experimental results demonstrate that the new techniques are competitive with previous techniques and can scale much better on certain classes of problems.
13. Game Characterization of Probabilistic Bisimilarity, and Applications to Pushdown Automata
Forejt, Vojtěch ; Jančar, Petr ; Kiefer, Stefan ; Worrell, James.
We study the bisimilarity problem for probabilistic pushdown automata (pPDA) and subclasses thereof. Our definition of pPDA allows both probabilistic and non-deterministic branching, generalising the classical notion of pushdown automata (without epsilon-transitions). We first show a general characterization of probabilistic bisimilarity in terms of two-player games, which naturally reduces checking bisimilarity of probabilistic labelled transition systems to checking bisimilarity of standard (non-deterministic) labelled transition systems. This reduction can be easily implemented in the framework of pPDA, allowing to use known results for standard (non-probabilistic) PDA and their subclasses. A direct use of the reduction incurs an exponential increase of complexity, which does not matter in deriving decidability of bisimilarity for pPDA due to the non-elementary complexity of the problem. In the cases of probabilistic one-counter automata (pOCA), of probabilistic visibly pushdown automata (pvPDA), and of probabilistic basic process algebras (i.e., single-state pPDA) we show that an implicit use of the reduction can avoid the complexity increase; we thus get PSPACE, EXPTIME, and 2-EXPTIME upper bounds, respectively, like for the respective non-probabilistic versions. The bisimilarity problems for OCA and vPDA are known to have matching lower bounds (thus being PSPACE-complete and EXPTIME-complete, respectively); we show that these lower bounds also hold for fully […]
14. Affine Sessions
Mostrous, Dimitris ; Vasconcelos, Vasco T..
Session types describe the structure of communications implemented by channels. In particular, they prescribe the sequence of communications, whether they are input or output actions, and the type of value exchanged. Crucial to any language with session types is the notion of linearity, which is essential to ensure that channels exhibit the behaviour prescribed by their type without interference in the presence of concurrency. In this work we relax the condition of linearity to that of affinity, by which channels exhibit at most the behaviour prescribed by their types. This more liberal setting allows us to incorporate an elegant error handling mechanism which simplifies and improves related works on exceptions. Moreover, our treatment does not affect the progress properties of the language: sessions never get stuck.
15. A Complete Quantitative Deduction System for the Bisimilarity Distance on Markov Chains
Bacci, Giorgio ; Bacci, Giovanni ; Larsen, Kim G. ; Mardare, Radu.
In this paper we propose a complete axiomatization of the bisimilarity distance of Desharnais et al. for the class of finite labelled Markov chains. Our axiomatization is given in the style of a quantitative extension of equational logic recently proposed by Mardare, Panangaden, and Plotkin (LICS 2016) that uses equality relations $t \equiv_\varepsilon s$ indexed by rationals, expressing that `$t$ is approximately equal to $s$ up to an error $\varepsilon$'. Notably, our quantitative deduction system extends in a natural way the equational system for probabilistic bisimilarity given by Stark and Smolka by introducing an axiom for dealing with the Kantorovich distance between probability distributions. The axiomatization is then used to propose a metric extension of a Kleene's style representation theorem for finite labelled Markov chains, that was proposed (in a more general coalgebraic fashion) by Silva et al. (Inf. Comput. 2011).
16. Separating regular languages with two quantifier alternations
Place, Thomas.
We investigate a famous decision problem in automata theory: separation. Given a class of languages C, the separation problem for C takes as input two regular languages and asks whether there exists a third one which belongs to C, includes the first one and is disjoint from the second. Typically, obtaining an algorithm for separation yields a deep understanding of the investigated class C. This explains why a lot of effort has been devoted to finding algorithms for the most prominent classes. Here, we are interested in classes within concatenation hierarchies. Such hierarchies are built using a generic construction process: one starts from an initial class called the basis and builds new levels by applying generic operations. The most famous one, the dot-depth hierarchy of Brzozowski and Cohen, classifies the languages definable in first-order logic. Moreover, it was shown by Thomas that it corresponds to the quantifier alternation hierarchy of first-order logic: each level in the dot-depth corresponds to the languages that can be defined with a prescribed number of quantifier blocks. Finding separation algorithms for all levels in this hierarchy is among the most famous open problems in automata theory. Our main theorem is generic: we show that separation is decidable for the level 3/2 of any concatenation hierarchy whose basis is finite. Furthermore, in the special case of the dot-depth, we push this result to the level 5/2. In logical terms, this solves separation for […]
17. Termination in Convex Sets of Distributions
Sokolova, Ana ; Woracek, Harald.
Convex algebras, also called (semi)convex sets, are at the heart of modelling probabilistic systems including probabilistic automata. Abstractly, they are the Eilenberg-Moore algebras of the finitely supported distribution monad. Concretely, they have been studied for decades within algebra and convex geometry. In this paper we study the problem of extending a convex algebra by a single point. Such extensions enable the modelling of termination in probabilistic systems. We provide a full description of all possible extensions for a particular class of convex algebras: For a fixed convex subset $D$ of a vector space satisfying an additional technical condition, we consider the algebra of convex subsets of $D$. This class contains the convex algebras of convex subsets of distributions, modelling (nondeterministic) probabilistic automata. We also provide a full description of all possible extensions for the class of free convex algebras, modelling fully probabilistic systems. Finally, we show that there is a unique functorial extension, the so-called black-hole extension.
18. Inducing syntactic cut-elimination for indexed nested sequents
Ramanayake, Revantha.
The key to the proof-theoretic study of a logic is a proof calculus with a subformula property. Many different proof formalisms have been introduced (e.g. sequent, nested sequent, labelled sequent formalisms) in order to provide such calculi for the many logics of interest. The nested sequent formalism was recently generalised to indexed nested sequents in order to yield proof calculi with the subformula property for extensions of the modal logic K by (Lemmon-Scott) Geach axioms. The proofs of completeness and cut-elimination therein were semantic and intricate. Here we show that derivations in the labelled sequent formalism whose sequents are `almost treelike' correspond exactly to indexed nested sequents. This correspondence is exploited to induce syntactic proofs for indexed nested sequent calculi making use of the elegant proofs that exist for the labelled sequent calculi. A larger goal of this work is to demonstrate how specialising existing proof-theoretic transformations alleviate the need for independent proofs in each formalism. Such coercion can also be used to induce new cut-free calculi. We employ this to present the first indexed nested sequent calculi for intermediate logics.
19. Reasoning About Bounds in Weighted Transition Systems
Hansen, Mikkel ; Larsen, Kim Guldstrand ; Mardare, Radu ; Pedersen, Mathias Ruggaard.
We propose a way of reasoning about minimal and maximal values of the weights of transitions in a weighted transition system (WTS). This perspective induces a notion of bisimulation that is coarser than the classic bisimulation: it relates states that exhibit transitions to bisimulation classes with the weights within the same boundaries. We propose a customized modal logic that expresses these numeric boundaries for transition weights by means of particular modalities. We prove that our logic is invariant under the proposed notion of bisimulation. We show that the logic enjoys the finite model property and we identify a complete axiomatization for the logic. Last but not least, we use a tableau method to show that the satisfiability problem for the logic is decidable.
20. Model Checking Flat Freeze LTL on One-Counter Automata
Lechner, Antonia ; Mayr, Richard ; Ouaknine, Joël ; Pouly, Amaury ; Worrell, James.
Freeze LTL is a temporal logic with registers that is suitable for specifying properties of data words. In this paper we study the model checking problem for Freeze LTL on one-counter automata. This problem is known to be undecidable in general and PSPACE-complete for the special case of deterministic one-counter automata. Several years ago, Demri and Sangnier investigated the model checking problem for the flat fragment of Freeze LTL on several classes of counter automata and posed the decidability of model checking flat Freeze LTL on one-counter automata as an open problem. In this paper we resolve this problem positively, utilising a known reduction to a reachability problem on one-counter automata with parameterised equality and disequality tests. Our main technical contribution is to show decidability of the latter problem by translation to Presburger arithmetic.
21. Taylor expansion in linear logic is invertible
de Carvalho, Daniel.
Each Multiplicative Exponential Linear Logic (MELL) proof-net can be expanded into a differential net, which is its Taylor expansion. We prove that two different MELL proof-nets have two different Taylor expansions. As a corollary, we prove a completeness result for MELL: We show that the relational model is injective for MELL proof-nets, i.e. the equality between MELL proof-nets in the relational model is exactly axiomatized by cut-elimination.
22. One-way definability of two-way word transducers
Baschenis, Félix ; Gauwin, Olivier ; Muscholl, Anca ; Puppis, Gabriele.
Functional transductions realized by two-way transducers (or, equally, by streaming transducers or MSO transductions) are the natural and standard notion of "regular" mappings from words to words. It was shown in 2013 that it is decidable if such a transduction can be implemented by some one-way transducer, but the given algorithm has non-elementary complexity. We provide an algorithm of different flavor solving the above question, that has doubly exponential space complexity. In the special case of sweeping transducers the complexity is one exponential less. We also show how to construct an equivalent one-way transducer, whenever it exists, in doubly or triply exponential time, again depending on whether the input transducer is sweeping or two-way. In the sweeping case our construction is shown to be optimal.
23. Axioms for Modelling Cubical Type Theory in a Topos
Orton, Ian ; Pitts, Andrew M..
The homotopical approach to intensional type theory views proofs of equality as paths. We explore what is required of an object $I$ in a topos to give such a path-based model of type theory in which paths are just functions with domain $I$. Cohen, Coquand, Huber and Mörtberg give such a model using a particular category of presheaves. We investigate the extent to which their model construction can be expressed in the internal type theory of any topos and identify a collection of quite weak axioms for this purpose. This clarifies the definition and properties of the notion of uniform Kan filling that lies at the heart of their constructive interpretation of Voevodsky's univalence axiom. (This paper is a revised and expanded version of a paper of the same name that appeared in the proceedings of the 25th EACSL Annual Conference on Computer Science Logic, CSL 2016.)
24. Subsumption Algorithms for Three-Valued Geometric Resolution
de Nivelle, Hans.
In our implementation of geometric resolution, the most costly operation is subsumption testing (or matching): One has to decide for a three-valued, geometric formula, if this formula is false in a given interpretation. The formula contains only atoms with variables, equality, and existential quantifiers. The interpretation contains only atoms with constants. Because the atoms have no term structure, matching for geometric resolution is hard. We translate the matching problem into a generalized constraint satisfaction problem, and discuss several approaches for solving it efficiently, one direct algorithm and two translations to propositional SAT. After that, we study filtering techniques based on local consistency checking. Such filtering techniques can a priori refute a large percentage of generalized constraint satisfaction problems. Finally, we adapt the matching algorithms in such a way that they find solutions that use a minimal subset of the interpretation. The adaptation can be combined with every matching algorithm. The techniques presented in this paper may have applications in constraint solving independent of geometric resolution. | CommonCrawl |
Effect of Covid-19 on households welfare in Afar Regional State, Ethiopia
Dagmawe Menelek Asfaw, Abdurhman Kedir Ali & Mohammed Adem Ali
Discover Sustainability volume 3, Article number: 25 (2022)
The main objective of this study was to analyze the effect of COVID-19 on social welfare in the Afar Regional State, Ethiopia, using panel data collected from a sample of 384 households in Asayita, Dubti, Samara-Logia, and Awash towns. Both descriptive statistics and econometric models were used to analyze the data. The descriptive analysis revealed that the main source of income was self-employment (81.67%); 70% of the households were engaged in the service sector; and, due to COVID-19, the income of 81% of households decreased while expenditure on food and food items (13%) and on services (15%) increased. After conducting the necessary pre- and post-estimation tests, the econometric model found that the three basic policy variables (number of COVID-19 victims, number of days with the COVID-19 disease, and the transportation ban) adversely affected the welfare of the society by lowering household income and raising household expenditure. Finally, considering regional experience and the econometric and descriptive results, this study recommends that the government and the concerned policy makers give more attention to and subsidize the service sector, support the self-employed and daily laborers, raise public awareness of the COVID-19 epidemic, and put in place alternative mechanisms to fill potential trade gaps.
The Novel Coronavirus, or COVID-19, pandemic is a global challenge that requires coordinated efforts from governments, individuals, businesses, and various stakeholders. The pandemic causes several shocks to occur at once, including health, supply, demand, and financial shocks, subjecting the world economy to historic and unprecedented disruption [1]. A general drop in economic activity, both locally and internationally, will inevitably result from government efforts to contain the COVID-19 epidemic through partial and complete business closures. If the pandemic continues over an extended length of time, this contraction in economic activity results in an economic recession. Societies with lower socioeconomic status are more susceptible to COVID-19 and to rising chronic illness rates, which are exacerbated by problems with the economy and social welfare. In turn, this lowers productivity even further and drives up health care expenses, increasing poverty and, by extension, disease: a "disease-driven poverty trap" [2]. From an economic standpoint, the main problem is not simply the number of COVID-19 cases, but also the degree of disruption of economic activity, which in turn increases the level of health risks [3].
The steps people and governments take to avoid the virus will have the greatest economic impact, and this response comes from three sources [4, 5]. First, the government places restrictions on particular commercial activities (such as eateries, stores, etc.). Second, businesses and institutions take preventive steps, such as shutting down, which cost workers their income, particularly in the informal sector where there is no paid time off. Third, individuals cut back on social activities like going out and traveling, which affects the demand side.
Four main channels, namely labor income, non-labor income, direct effects on consumption, and service disruption, are anticipated to transmit the impact of the rapid spread of COVID-19 in Tunisia and of any potential containment measures to poverty and inequality [3]. The effects on labor income may be direct, due to wages lost through illness, or indirect, due to changes in employment and wages. Changes in remittance and public transfer patterns may affect non-labor income. Consumption may be directly affected by price increases for goods that account for a sizable portion of household budgets or by an increase in out-of-pocket medical expenses. Finally, the closure of schools and the overburdening of the health care system might harm welfare in the long run [6].
On March 13, 2020, Ethiopia announced the discovery of its first COVID-19 case. By October 2020, there were 96,169 confirmed cases overall, and 1,469 people had died as a result of COVID-19. Along with negative consequences for health, households also experienced food and income shocks. A little more than 8.4% of households reported a job loss as a result of COVID-19 between March and October 2020. An income reduction or loss was experienced by three out of every four households. Of them, 32.3% took drastic measures to make up for the lost income, such as selling off assets or cutting back on non-food purchases. Additionally, in 31.4% of all households, at least one adult member went without meals for a full day due to a lack of resources [7, 8].
The COVID-19 pandemic has caused multi-dimensional effects across the Ethiopian economy. The pandemic worsened food insecurity and damaged the livelihoods of the people in Ethiopia [9]. The Ethiopian Economic Association (EEA) projected that Ethiopia's Gross Domestic Product (GDP) would fall by 127 billion Ethiopian Birr (ETB) in the 2019/20 Fiscal Year (FY) because of the COVID-19 pandemic. According to the EEA's estimation, the country's GDP growth will not exceed 0.6 percent under a severe scenario of the pandemic in 2020/2021 [8]. The pandemic heavily damaged the livelihoods of households in Ethiopia, with incomes reduced by more than half. Subjective income measures indicated that a large number of households were exposed to job loss or decreased incomes during the pandemic. The negative influence of the pandemic will be severe on the welfare of vulnerable households. The COVID-19 pandemic is likely to have adverse effects on agrarian households in Ethiopia; smallholder farmers are one of the vulnerable groups, who might be prevented from working on their land, getting to markets to sell their products, or buying seeds and other vital inputs [10]. In the Afar Regional State, the first confirmed COVID-19 case was recorded on April 22, 2020. Prices of staple food and other essential supplies have significantly increased in major market centers in the Afar region, particularly in the Ab'ala, Yallo, and Dalifage markets, because of the restrictions imposed [11].
Many studies [2, 10, 12,13,14,15,16,17,18,19,20] have examined the macroeconomic impact of epidemic disease. However, most of these papers, including those above, focused mainly on the macroeconomic effects of epidemic diseases such as COVID-19; they gave little insight into the social and microeconomic effects (social welfare in general), and no studies have addressed the region-level social welfare effect of COVID-19. Therefore, this study analyzes the effect of COVID-19 on households' welfare in the case of the Afar regional state.
2 Methodology
2.1 Description of the study area
This study was conducted in the Dubti, Samara-Logia, Asayita, and Awash districts of the Afar region.
2.2 Type and sources of data
Both primary and secondary data were used for this study. Primary data were principally employed, collected through questionnaires from representative samples of society in Asayita, Dubti, Awash, and Samara-Logia. Secondary data sources were governmental and non-governmental institutions, including both published and unpublished documents from the Afar Bureau of Health and the Ministry of Health, as well as online sources such as the Worldometer website, the WHO, and other relevant information sources.
2.3 Sampling technique and sample size
In this study, a two-stage sampling technique was employed. In the first stage, the major cities of the regional state (Asayita, Dubti, Awash, and Samara-Logia) were selected purposively, based on total population and the severity of COVID-19 exposure, driven mainly by traffic from Djibouti and Addis Ababa. In the second stage, we used simple random sampling to select the study sample, with the sample size determined by Cochran's formula (see Table 1).
Table 1 Proportion of sample households from each town of Afar regional state, 2021
$$\text{n}=\frac{{Z}^{2}pq}{{e}^{2}}=\frac{{\left(1.96\right)}^{2}(0.5)(0.5)}{{(0.05)}^{2}}=384.16\approx 384$$
where \(n\) is the sample size; \(z\) is 1.96, to achieve the 95% confidence level; \(p\) is the approximate proportion of the population that has the attribute in question (50% as a rule of thumb); and \(e\) is the margin of error, set to 0.05, that is, a 5% maximum discrepancy between the sample and the general population.
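The calculation above is easy to reproduce; the following is a minimal sketch (the function and variable names are ours, not from the paper):

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's formula n = z^2 * p * q / e^2 for a large population."""
    q = 1.0 - p                      # proportion without the attribute
    n = (z ** 2) * p * q / (e ** 2)  # = 384.16 for the defaults above
    return round(n)                  # the paper rounds 384.16 to 384

print(cochran_sample_size())  # -> 384
```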
2.4 Method of data analysis
Descriptive statistics and econometric models were employed to achieve the objective of the study. The descriptive statistics include means, standard deviations, minima, maxima, frequencies, and percentages; the econometric analysis uses a Social Welfare Function (SWF) to estimate the effect of COVID-19 on society's social welfare with the help of panel data analysis.
2.4.1 Model specification
The utilitarian or Benthamite social welfare function measures social welfare as the total or sum of individual incomes:
$$W={Y}_{1}+{Y}_{2}+{Y}_{3}+\dots +{Y}_{n}$$
$$W=\sum_{i=1}^{n}{Y}_{i}$$
where \(W\) is social welfare, \({Y}_{i}\) is the income of individual \(i\), and \(n\) is the number of individuals in society. In this case, maximizing social welfare means maximizing the total income of the people in the society. Therefore, in this study we proxy social welfare by individual income to analyze the effect of COVID-19 on social welfare (on the income side) as follows.
Fixed Effect Model
$${Y}_{it}={\alpha }_{i}+{\beta }_{1}{NCV}_{it}+{\beta }_{2}{ND}_{it}+{\beta }_{3}{FS}_{it}+{\beta }_{4}{EDU}_{it}+{\beta }_{5}{ACR}_{it}+{\beta }_{6}{TB}_{it}+{\beta }_{7}{LT}_{it}+{\beta }_{8}{AG}_{it}+{\beta }_{9}{G}_{it}+{\beta }_{10}{TSE}_{it}+{e}_{it}$$
where \({\alpha }_{i}\) is the intercept for each individual; NCV is the number of COVID-19 victims; ND the number of days; FS family size; EDU educational level; ACR access to credit; TB transportation ban; LT leisure time; AG age; G gender; TSE the type of economic sector; \(e\) the error term; and \(t\) indexes months.
Alternatively, social welfare can be expressed as a function W(X1,…, Xn) of the aggregate consumption expenditure (Xi) on goods by individuals i = 1,…,n, which means that the individualistic social welfare function is also a function of individual utility levels. If there are h consumers in the economy, then an individualistic social welfare function is written W(U1,…, Uh), where Uh is the utility of consumer h. The social welfare function can also be written as
$$W={U}_{1}+{U}_{2}+{U}_{3}+\dots +{U}_{n}$$
$$W=\sum_{i=1}^{n}{U}_{i}$$
We can now express the SWF as a function of aggregate consumption expenditure as follows:
$$W=\sum_{i=1}^{n}{X}_{i}$$
In the same way, we again proxy social welfare by individual consumption expenditure to analyze the effect of COVID-19 on social welfare (on the expenditure side) as follows.
Fixed effect model for expenditure equation
$${X}_{it}={\alpha }_{i}+{\beta }_{1}{NCV}_{it}+{\beta }_{2}{ND}_{it}+{\beta }_{3}{FS}_{it}+{\beta }_{4}{EDU}_{it}+{\beta }_{5}{ACR}_{it}+{\beta }_{6}{RLD}_{it}+{\beta }_{7}{AG}_{it}+{\beta }_{8}{G}_{it}+{\beta }_{9}{TSE}_{it}+{e}_{it}$$
where \({X}_{it}\) is consumption expenditure.
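As a hedged illustration, the two fixed-effect specifications above could be estimated in Python roughly as follows; the data file, column names, and index structure are hypothetical, since the paper reports using Stata 14 and E-views 9:

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical household-by-month panel mirroring the variables above.
df = pd.read_csv("afar_panel.csv").set_index(["household_id", "month"])

exog = ["NCV", "ND", "FS", "EDU", "ACR", "TB", "LT", "AG", "G", "TSE"]
# Entity (household) fixed effects; time-invariant regressors such as
# gender are absorbed by the household effect and must be dropped.
fe_income = PanelOLS(df["log_income"], df[exog],
                     entity_effects=True, drop_absorbed=True)
print(fe_income.fit(cov_type="clustered", cluster_entity=True).summary)

# The expenditure equation swaps the dependent variable for log
# consumption expenditure and uses the regressor set shown above.
```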
3 Results and discussion
3.1 Demographic characteristics of the sample households
The average number of household members living in one house among the sample households was about 4 persons, ranging between 1 and 10. The average age of the sample household heads was 39.13 years, with a maximum of 75 and a minimum of 18 years. This shows that the mean age of the sampled household heads fell within the economically active range, when people are most energetic (see Table 2).
Table 2 Demographic characteristics of the sample households
The mean education level of the sample households in the study area was 4.58, ranging from no schooling to a first degree. Table 2 reveals that the mean educational level of the sample households was very low. The correlation between the educational level of the household head and household income was significantly positive at the 1 percent probability level (see Table 2).
The sample comprised both male- and female-headed households. Of the total sampled household heads, about 84.79% were male and the remaining 15.21% were female. The majority of sample households in the study area (about 63.33%) were married, 23.33% were single, 13.33% were divorced, and none were widowed (Table 2). An F-test showed an association between the marital status of the respondents and their level of income.
3.2 Types of economic sector and sources of income
Sample respondents were engaged in three basic economic sectors. Most participated in the service sector, accounting for 69.79% of total respondents; the rest worked in the agricultural and industrial sectors, at 15.83% and 14.28% of the sample, respectively. An F-test showed an association between respondents' economic sector and their level of income (see Table 3).
Table 3 Economic sector and source of income
In the study area, income sources can be broadly divided into two groups: agricultural income (livestock rearing and crop production) and non-farm income. In total, six income sources were identified for the households in this study: farm income, non-formal wage employment, formal wage employment, self-employment, remittance income, and rental income. This result agrees with the finding of [21]. Of the total sample households, 81.67% received income from self-employment, while 35.83% and 33.33% derived income from non-formal wages and remittances, respectively. Around half of the sampled population generated income from the agricultural sector. The remaining sample households acquired income from formal wages and rent, accounting for 30% and 10.83% of sample households, respectively (see Table 3).
As seen in Table 4, the income of most sample households decreased due to the COVID-19 epidemic: 81.67% of sample household incomes fell because of COVID-19. Of the total sample, 14.17% saw their income increase and the remaining 4.17% saw no change. This may be because most households worked in the service sector and relied on self-employment income. According to a 2020 World Bank report, the service sector shrank by 38% due to the COVID-19 epidemic; economic activity in the service sector and the incomes of the self-employed were therefore hit harder by the epidemic than other sectors and income sources. However, some household incomes were unaffected or even increased. One reason may be opportunistic entrepreneurship created in the market by COVID-19, which generated additional income for those entrepreneurs. In addition, the incomes of some households were unaffected because they stemmed from formal wages and rent, sources that were not affected by the epidemic.
Table 4 Income trends of sample respondents
3.3 Expenditure related issues of the sample households
Just over half (51%) of the total expenditure of sample households went to food and food-related items. Expenditure on services such as transportation, barbering, and shoe shining accounted for 12% of total expenditure. Some sample households did not own property and therefore paid rent, amounting to 10% of their total monthly expenditure. Utility expenses and goods (excluding food items) accounted for 6% and 7% of the sample households' total monthly expenditure, respectively. Any money left after these expenditures could be saved; saving accounted for 14% of total household expenditure (see Table 5).
Table 5 Source, percentage share and average growth rate of expenditure
The COVID-19 epidemic has had social, economic, and psychological effects worldwide in general and in Ethiopia's Afar region in particular. Our findings show that from March 13, 2021 to June 13, 2021, overall household expenditure changed significantly, except for expenditure on utilities and rent. As Table 5 shows, expenditure on food and food-related items and on services increased relatively significantly after the COVID-19 epidemic. Total expenditure on food and food-related items increased by 13.3% per month on average after the epidemic. This may be due to difficulties in the distribution and transportation of food and food-related goods and a decline in production capacity (see Table 5).
Total expenditure on services increased by 15.7% on average per month, driven in particular by increased transportation costs. Expenditure on goods (excluding food-related items), such as clothing, shoes, kitchen materials, and construction materials, increased by 9.7% on average per month, again due to problems in distributing and transporting such products (the transportation ban). By contrast, sample household expenditure on rent (such as property rent) and on utilities (such as mobile airtime and internet packages, water, and electricity bills) showed no significant effect of the COVID-19 epidemic. Finally, the saving behavior of households was hurt by the disease: the share of total household expenditure devoted to saving decreased by 11.7% per month on average. In general, due to the COVID-19 epidemic, expenditure on food and food-related items and on services increased significantly relative to other expenditure items, and more than half of total expenditure went to food and food-related items. Therefore, the wellbeing and welfare of the sample households were strongly affected by the COVID-19 epidemic (see Table 5).
3.4 Effect of COVID-19 on income and consumption expenditure
The analysis was done using Stata 14 and E-views 9, as presented below. During estimation, we carried out the necessary pre-estimation tests (stationarity and Hausman) and post-estimation tests (heteroscedasticity, serial autocorrelation, cross-sectional dependence, multicollinearity, and endogeneity). The objective of the study was to analyze the effect of COVID-19 on the social welfare of the Afar regional state, and linear panel data analysis was used to investigate this effect. In the analysis, the explanatory variables were categorized into policy variables (number of COVID-19 victims, number of days, transportation ban, type of economic sector) and control variables (leisure time, gender, access to credit, family size, and educational level) (see Table 6). Variables assumed to influence social welfare in different contexts were tested in the model; seven of nine variables were found significant in the income equation and five of eight in the expenditure equation. Among the variables fitted into the income fixed-effect model, the number of COVID-19 victims, number of days, transportation ban, leisure time, gender, type of economic sector, access to credit, and family size were significant. In the expenditure fixed-effect model, the number of COVID-19 victims, number of days, transportation ban, gender, and access to credit were significant.
Table 6 Fixed effect estimation result for income and consumption expenditure
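The Hausman test mentioned above compares the fixed- and random-effects estimates; a minimal manual version, reusing the hypothetical `df` and `exog` from the earlier sketch, might look like this:

```python
import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

fe = PanelOLS(df["log_income"], df[exog], entity_effects=True,
              drop_absorbed=True).fit()
re = RandomEffects(df["log_income"], df[exog]).fit()

common = fe.params.index.intersection(re.params.index)
b = (fe.params[common] - re.params[common]).to_numpy()
V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
H = float(b @ np.linalg.pinv(V) @ b)   # Hausman chi-square statistic
p = stats.chi2.sf(H, df=len(common))
print(f"H = {H:.2f}, p = {p:.4f}")     # a small p favours fixed effects
```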
3.4.1 Number of COVID-19 victims
The model reveals that the number of COVID-19 victims has a significant relationship with household income (negative) and with household expenditure (positive), at the 5% and 10% probability levels, respectively. As the number of COVID-19 victims increases by one person, household income decreased by 0.27% and household expenditure increased by 0.28% per month (see Table 6). This may be because, as the number of COVID-19 victims increases, the government restricts the movement of labor, which reduces additional labor income, and bans the distribution of goods and services, which diminishes the quantity supplied and raises product prices; in addition, people may be frightened and unwilling to move from place to place to work. These factors may each contribute to decreasing household income and increasing household expenditure. This result is consistent with previous findings.
3.4.2 Number of days
The results also show a positive relationship between expenditure and the number of days since March 13, 2020,Footnote 1 and a negative relationship between household income and the number of days since March 13, 2020, at the 1% and 10% probability levels, respectively (see Table 6). When the number of days under the COVID-19 epidemic increased by one day, income decreased by 0.15% and household expenditure increased by 0.54% per month. This may be because, as the number of days under the epidemic increased, both intra- and inter-regional trade, investment, tourism, and manufacturing shrank, and agricultural production decreased slightly. Such factors contributed to decreasing household income and increasing household expenditure. This result is supported by [22, 23].
3.4.3 Transportation ban
This variable affects income negatively and expenditure positively, significant at the 10% probability level. Thus, holding other things constant, income decreased by 0.86% and expenditure increased by 0.18% when there was a transportation ban, compared to no ban. As noted above, 81.67% and 35.83% of total sampled households derived income from self-employment and non-formal (daily labor) wages, respectively, activities that require moving from place to place to work (see Table 6). Because the transportation ban made such activities difficult to carry out as usual, household income ultimately decreased. On the expenditure side, spending increased due to the transportation ban; for example, between Dubti, Logia, and Samara towns, the base fare increased by one hundred percent, indicating increased household expenditure [2, 24,25,26].
3.4.4 Leisure time
High wages earned by working very long hours diminish economic welfare. Leisure was quantified in hours, as the hours remaining out of 160 hours per month, and has economic value in increasing social welfare. The results show a negative relationship between leisure time and household income at the 5% probability level (see Table 6): when household leisure time increased by one hour, household income decreased by 0.18% per month, although welfare may still improve because of the additional leisure. The underlying reason is that, as noted above, 81.67% and 35.83% of sampled households derived income from self-employment and non-formal (daily labor) wages, so when they worked less than usual or took more leisure, their income fell. This finding is similar to [6, 27].
3.4.5 Gender
Male headship was found to have a negative and significant effect on household income at the 5 percent probability level. Holding other things constant, income decreased by 0.05% and household expenditure increased by 0.325% per month when the household head was male, compared to female (see Table 6). This may be because most income was generated by male rather than female household heads, so any factor affecting income hit male-headed households first. In addition, male household heads diversified their income sources more than female heads, participating in various non-formal income-generating activities [21]; however, those non-formal and self-employment income sources were the ones affected by the COVID-19 epidemic. Finally, most male heads spent more money than female heads, so household expenditure increased when the household head was male. This result is supported by [2, 28].
3.4.6 Types of economic sector
As expected, the type of economic sector was significant at the 1% probability level and negatively related to household income. Households in the service sector saw income decrease by 0.011% compared to the industrial and agricultural sectors (see Table 6). As noted above, around 70% of sample households were engaged in the service sector. The earliest and largest impacts are most likely in services, such as transport, retail sales, entertainment, tourism, and personal services, including the gig economy, rather than in agriculture, large manufacturing, and public, professional, ICT, and financial services. In addition, according to a 2020 World Bank report, the service sector contracted by 38% due to the COVID-19 epidemic. This finding is similar to [25, 26, 29,30,31].
3.4.7 Access to credit
Access to credit affects household income and household expenditure positively and significantly at the 1 percent level of significance, conforming to our prior expectation. Credit utilization by a household would increase income by 0.13% and expenditure by 0.8925% (see Table 6). That is, when households obtained additional credit, their additional income increased, which in turn increased their additional expenditure.
Generally, as the findings above show, due to the COVID-19 epidemic the expenditure of households increased significantly while the income of the sample households decreased. This finding is similar to [30, 32, 33].
4 Conclusions and policy implications
This study investigates the effect of COVID-19 on social welfare and trade in the Afar Regional State, using data collected from 384 randomly selected households in four towns (Samara-Logia, Asayita, Dubti, and Awash) of the Afar region, Ethiopia. Both descriptive analysis and econometric estimation (panel data analysis) were used to address the objective of the study. The descriptive statistics revealed that around 70% of sampled households earned their income in the service sector of the economy, while the remaining 16% and 14% were employed in the agricultural and industrial sectors, respectively. The main sources of household income were self-employment, agricultural income, and non-formal wages, from which 81.67%, 50%, and 35.83% of households, respectively, derived income.
Due to COVID-19, 81% of households' income decreased, 14% increased, and 5% remained the same. The expenditure items of sampled households changed after COVID-19: expenditure on food and food items and on services increased significantly compared to the remaining items, by around 13% and 15% per month, respectively. The control variables of the panel data model also had significant effects on welfare: leisure time and sex of the household head had negative effects, and access to credit a positive and significant effect, on household income; on the expenditure side, gender and access to credit had positive and significant effects. Considering regional experience and the econometric and descriptive results, this study recommends that the government and the concerned policy makers undertake the following policy actions against the adverse effects of COVID-19 on social welfare. The government should give more attention to the service sector (to contain unemployment) and support the self-employed and daily laborers with financial and material assistance. Responsible bodies should encourage investment in the production, processing, and distribution of food and food items, especially in the regional state, through measures such as tax exemptions and the provision of land, electricity, water, agricultural inputs, access to credit, and public transportation. The government and responsible bodies should also raise awareness, provide mitigation and health material kits, and prepare well-organized quarantines to curb the spread of the disease, monitor the health status of victims, and increase the number of recoveries. In addition, the regional government should not ban transportation in all respects, especially the distribution and trade of food and food items, and should put in place alternative mechanisms to fill potential trade gaps.
Mobilize social self-help institutions and link them with the formal structure to provide coordinated support to the most vulnerable population during the pandemic. Put in place alternative mechanisms to fill a potential import deficit, which may include planting short-season and early-maturing crop varieties and prioritizing irrigation schemes for selected food crops (e.g., potato and maize). With immediate effect, put in place measures that ensure uninterrupted supplies of agricultural inputs (chemical fertilizers, improved seeds, pesticides, and herbicides, as well as livestock medicine) to minimize the adverse effects of the pandemic on the agricultural sector. Initiate discussions with commercial banks on rescheduling bank loan repayments and writing off interest payments for severely affected sectors until the shock abates.
The National Bank of Ethiopia needs to consider relaxing reserve requirements to enhance banking liquidity, initiate discussions to reduce interest rates to stimulate the economy, and initiate discussions with financial institutions to support exporters by increasing foreign trade credits, deferring loan payments, and extending debt rollovers.
Data will be made available on reasonable request to the corresponding author.
The Federal Ministry of Health of Ethiopia confirmed the first coronavirus disease (COVID-19) case on March 13, 2020.
ILO. Covid and the world of work: impact and policy responses. Geneva: International Labour Organisation; 2020.
Goshu D, et al. Economic and welfare effects of COVID-19 and responses in Ethiopia: initial insights. 2020.
Triggs A, Kharas H. The triple economic shock of COVID-19 and priorities for an emergency G-20 leaders meeting. 2020.
Barcelo J, Lopez-Leyva S. Mitigating the COVID economic crisis: Act fast and do whatever it takes. Economia Sociedad Y Territorio 2021: 305–314.
Baldwin R. Keeping the lights on: Economic medicine for a medical shock. VoxEU. 2020.
Kokas D, et al. Impacts of COVID-19 on Household Welfare in Tunisia. Available at SSRN 3755395, 2020.
Aragie E, Taffesse AS, Thurlow J. Assessing the short-term impacts of COVID-19 on Ethiopia's economy: External and domestic shocks and pace of recovery. Vol. 153. 2020: Intl Food Policy Res Inst.
Beyene LM, Ferede T, Diriba G. The economywide impact of the COVID-19 in Ethiopia: Policy and Recovery options. 2020.
Kassegn A, Endris E. Review on socio-economic impacts of 'Triple Threats' of COVID-19, desert locusts, and floods in East Africa: evidence from Ethiopia. Cogent Soc Sci. 2021;7(1):1885122.
Asegie AM, Adisalem ST, Eshetu AA. The effects of COVID-19 on livelihoods of rural households: South Wollo and Oromia Zones, Ethiopia. Heliyon. 2021;7(12): e08550.
HLPE. Impacts of COVID-19 on food security and nutrition: developing effective policy responses to address the hunger and malnutrition pandemic. Issue Paper, Committee on World Food Security; 2020.
Mitigating the socio-economic impacts of COVID-19 in Ethiopia, with a focus on vulnerable groups. Annex I, Fiches VI.
Barro RJ, Ursúa JF, Weng J. The coronavirus and the great influenza pandemic: Lessons from the "spanish flu" for the coronavirus's potential effects on mortality and economic activity. National Bureau of Economic Research; 2020.
Correia S, Luck S, Verner E. Pandemics depress the economy, public health interventions do not: evidence from the 1918 flu. 2020.
Harris D, et al. The impact of COVID-19 in Ethiopia: policy brief. 2021.
Kohlscheen E, Mojon B, Rees D. The macroeconomic spillover effects of the pandemic on the global economy. Available at SSRN 3569554. 2020.
McKibbin W, Fernando R. The global macroeconomic impacts of COVID-19: seven scenarios. Asian Economic Papers. 2021;20(2):1–30.
Nechifor V, et al. COVID-19: socioeconomic impacts and recovery in Ethiopia. Publications Office of the European Union; 2020.
Nigussie H. The coronavirus intervention in Ethiopia and the challenges for implementation. Front Commun. 2021;6:93.
Baye K. COVID-19 prevention measures in Ethiopia: current realities and prospects. Vol. 141. 2020: Intl Food Policy Res Inst.
Adem M, Tesafa F. Intensity of income diversification among small-holder farmers in Asayita Woreda, Afar Region, Ethiopia. Cogent Econ Financ. 2020;8(1):1759394.
Abay K, et al. Sudan impacts of COVID 19 on production, household income & food systems. International Food Policy Research Institute. Cairo, Egypt. Draft report. 2020.
Andam KS, et al. Estimating the economic costs of COVID-19 in Nigeria. Vol. 63. 2020: Intl Food Policy Res Inst.
Amewu S, et al. The economic costs of COVID-19 in sub-Saharan Africa: insights from a simulation exercise for Ghana. Eur J Dev Res. 2020;32(5):1353–78.
World Bank. The impact of COVID-19 on the welfare of households with children: an overview based on high frequency phone surveys (English). Equitable Growth, Finance and Institutions Notes. Washington, D.C.: World Bank Group; 2020. http://documents.worldbank.org/curated/en/099230003092226699/P1776560f3b3cc0eb0b5b50ce9d88cf44f6.
Owusu LD, Frimpong-Manso K. The impact of COVID-19 on children from poor families in Ghana and the role of welfare institutions. J Child Serv 2020.
Baulch B, Botha R, Pauw K. Short-term impacts of COVID-19 on the Malawian economy: Initial results. 2020: Intl Food Policy Res Inst.
Outlook AE. Developing Africa's workforce for the future. Abidjan, Cote d'Ivoire: African Development Bank Group; 2020.
Aragie E, Taffesse AS, Thurlow J. The short-term economywide impacts of COVID-19 in Africa: insights from Ethiopia. Afr Dev Rev. 2021;33:S152–64.
Swinnen J, Vos R. COVID-19 and impacts on global food systems and household welfare: introduction to a special issue. Agric Econ. 2021;52(3):365–74.
Bundervoet T, Dávalos ME, Garcia N. The short-term impacts of COVID-19 on households in developing countries: an overview based on a harmonized dataset of high-frequency surveys. World Dev 2022: 105844.
Aragie E, et al. Assessing the economywide impacts of COVID-19 on Rwanda's economy, agri-food system, and poverty: a social accounting matrix (SAM) multiplier approach. Vol. 1. 2021: Intl Food Policy Res Inst.
Mendiratta V, Nsababera OU, Sam H. The impact of Covid-19 on household welfare in the Comoros. 2022.
Thanks to all academic staff of the Department of Economics and to the Research and Community Service Vice-President of Samara University.
The author(s) reported there is no funding associated with the work featured in this article.
Department of Economics, College of Business and Economics, Samara University, P.O.Box 132, Samara, Ethiopia
Dagmawe Menelek Asfaw, Abdurhman Kedir Ali & Mohammed Adem Ali
Dagmawe Menelek Asfaw
Abdurhman Kedir Ali
Mohammed Adem Ali
Conceptualization, analysis, methodology, investigation, supervision, and writing of the original draft by DM; analysis, review, editing, and software by AK; review, editing, investigation, and data curation by MA.
Correspondence to Dagmawe Menelek Asfaw.
Competing interests
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Asfaw, D.M., Ali, A.K. & Ali, M.A. Effect of Covid-19 on households welfare in Afar Regional State, Ethiopia. Discov Sustain 3, 25 (2022). https://doi.org/10.1007/s43621-022-00095-6
Panel data analysis
Fixed effect
Random effect
Wireless Personal Communications
pp 1–20
Performance Analysis of Opportunistic, Reactive and Partial Relay Selection with Adaptive Transmit Power for Cognitive Radio Networks
Nadhir Ben Halima
Hatem Boujemâa
In this paper, we derive the packet error probability of cognitive radio networks. Our analysis is valid when the transmit powers of the secondary source and relays are adaptive: the secondary source and relays adapt their transmit power so that interference to the primary receiver remains below a given threshold T. The analysis takes into account interference from the primary transmitter. Different relay selection techniques are investigated, such as opportunistic amplify-and-forward (AF) relaying, partial relay selection, and reactive relay selection. In opportunistic AF relaying, the selected relay offers the highest end-to-end signal-to-interference-plus-noise ratio (SINR). Partial relay selection activates the relay with the largest SINR of the first hop. Reactive relay selection activates the relay with the largest SINR of the second hop.
Cognitive radio networks · Adaptive transmit power · Packet error probability · Primary and secondary users
We can write
$$\begin{aligned}&P\left( \frac{T|g_{SD}|^{2}}{|g_{SP_{{R}}}|^{2}N_{0}}<x||g_{SP_{{R}}}|^{2}>\frac{T}{E^{\max }}\right) \nonumber \\&\quad =P\left( |g_{SD}|^{2}<\frac{x|g_{SP_{{R}}}|^{2}N_{0} }{T}||g_{SP_{{R}}}|^{2}>\frac{T}{E^{\max }}\right) \nonumber \\&\quad =e^{\frac{T}{\lambda _{SP_{{R}}}^{2}E^{\max }}}\int _{\frac{T}{E^{\max }} }^{+\infty }P\left( |g_{SD}|^{2}<\frac{xuN_{0}}{T}\right) \frac{e^{-\frac{u}{ \lambda _{SP_{{R}}}^{2}}}}{\lambda _{SP_{{R}}}^{2}}du \nonumber \\&\quad =e^{\frac{T}{\lambda _{SP_{{R}}}^{2}E^{\max }}}\int _{\frac{T}{E^{\max }} }^{+\infty }\left[ 1-e^{-\frac{xuN_{0}}{T\lambda _{SD}^{2}}}\right] \frac{e^{- \frac{u}{\lambda _{SP_{{R}}}^{2}}}}{\lambda _{SP_{{R}}}^{2}}du \nonumber \\&\quad =1-\frac{e^{-\frac{N_{0}x}{\lambda _{SD}^{2}E^{\max }}}}{1+\frac{\lambda _{SP_{{R}}}^{2}xN_{0}}{T\lambda _{SD}^{2}}} \end{aligned}.$$
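A quick Monte Carlo check of this closed form is straightforward; the sketch below uses illustrative parameter values (not taken from the paper) and numpy's parameterization of the exponential distribution by its mean:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_sd, lam_sp = 1.0, 0.5           # channel mean powers (illustrative)
T, Emax, N0 = 1.0, 5.0, 0.1
n = 1_000_000

g_sd = rng.exponential(lam_sd, n)   # |g_SD|^2   ~ Exp(mean lam_sd)
g_sp = rng.exponential(lam_sp, n)   # |g_SPR|^2  ~ Exp(mean lam_sp)
keep = g_sp > T / Emax              # the conditioning event above
snr = T * g_sd[keep] / (g_sp[keep] * N0)

x = 2.0
empirical = np.mean(snr < x)
closed = 1 - np.exp(-N0 * x / (lam_sd * Emax)) / (1 + lam_sp * x * N0 / (T * lam_sd))
print(empirical, closed)            # the two values should agree closely
```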
When the adaptive power would exceed \(P^{max}\), the secondary source transmit power is set equal to \(P^{max}\). The SINR between S and D is then expressed as
$$\Gamma _{S,D}=\frac{E^{max}|g_{SD}|^{2}}{E_{P_{T}}|g_{P_{T}D}|^{2}+N_{0}}$$
where \(E^{max}=T_sP^{max}\), \(T_s\) is the symbol period, \(E_{P_{T}}\) is the transmitted energy per symbol of primary transmitter \(P_{T}\), \(g_{P_{T}D}\) is the channel coefficient between \(P_T\) and node D. \(E_{P_{T}}|g_{P_{T}D}|^{2}\) is the interference at node D from \(P_{T}\).
The CDF of the SINR is expressed as
$$F_{\Gamma _{SD}}(\gamma )=P(E^{max}|g_{SD}|^{2}<\gamma (E_{P_{T}}|g_{P_{T}D}|^{2}+N_{0}))$$
For Rayleigh channels, \(Z_1=E^{max}|g_{SD}|^{2}\) follows an exponential distribution with mean \(E^{max}\lambda _{SD}^{2}\) where \(\lambda _{SD}^{2}=E(|g_{SD}|^{2})\). E(X) is the expectation of X. Also, \(Z_2=E_{P_{T}}|g_{P_{T}D}|^{2}\) follows an exponential distribution with mean \(E_{P_T}\lambda _{P_{T},D}^{2}\) where \(\lambda _{P_{T},D}^{2}=E(|g_{P_{T}D}|^{2})\)
Therefore, the CDF can be expressed as
$$\begin{aligned} F_{\Gamma _{SD}}(\gamma )& = P(Z_1<\gamma (N_{0}+Z_2)) \nonumber \\ & = \int _{N_{0}}^{+\infty }F_{Z_1}(\gamma u)f_{Z_2}(u-N_{0})du \end{aligned}$$
where \(F_{Z_1}(u)=P(Z_1<u)\) is the CDF of \(Z_1\) and \(f_{Z_2}(u)\) is the PDF of \(Z_2\).
Since \(Z_1\) and \(Z_2\) follow an exponential distribution, we have
$$\begin{aligned} F_{\Gamma _{SD}}(\gamma )& = \int _{N_{0}}^{+\infty }\left[ 1-e^{-\frac{ \gamma u}{E^{max}\lambda _{SD}^{2}}}\right] e^{-\frac{(u-N_{0})}{E_{P_T}\lambda _{P_{T},D}^{2}}}\frac{1}{E_{P_T}\lambda _{P_{T}D}^{2}}du \nonumber \\ & = 1-\frac{E^{max}\lambda _{SD}^{2}}{E^{max}\lambda _{SD}^{2}+\gamma E_{P_T}\lambda _{P_{T}D}^{2}} e^{-\frac{N_{0}\gamma }{E^{max}\lambda _{SD}^{2}}} \end{aligned}$$
The SINR (13) can be written as
$$\Gamma _{SD}=\frac{e_{1}U_{1}}{U_{2}(e_{2}+e_{3}U_{3})}$$
where \(U_{1}=|g_{SD}|^{2}\), \(U_{2}=|g_{SP_{R}}|^{2}\), \(U_{3}=|g_{P_{T}D}|^{2}\), \(e_{1}=T\), \(e_{2}=N_{0}\), and \(e_{3}=E_{P_{T}}\).
For Rayleigh channels, \(U_{1}\),\(U_{2}\) and \(U_{3}\) are exponentially distributed with mean \(\lambda _{i}=E(U_{i})\).
We have to compute
$$\begin{aligned} P\left( \Gamma _{SD}<x|\frac{T}{|g_{SP_{R}}|^{2}}<E^{\max }\right)& = P\left( \Gamma _{SD}<x|U_{2}>\frac{T}{E^{\max }}\right) \nonumber \\ & = P\left( e_{1}U_{1}<xU_{2}(e_{2}+e_{3}U_{3})|U_{2}>\frac{T}{E^{\max }} \right) \end{aligned}$$
Let \(U_{4}=e_{2}+e_{3}U_{3}\). The CDF of \(U_{4}\) is equal to
$$F_{U_{4}}(w)=F_{U_{3}}\left( \frac{w-e_{2}}{e_{3}}\right)$$
We deduce the PDF
$$f_{U_{4}}(w)=\frac{1}{e_{3}}f_{U_{3}}\left( \frac{w-e_{2}}{e_{3}}\right) .$$
Equation (51) can be expressed as
$$\begin{aligned}&P\left( e_{1}U_{1}<xU_{2}(e_{2}+e_{3}U_{3})|U_{2}>\frac{T}{E^{\max }}\right) \nonumber \\&\quad =\int _{\frac{T}{E^{\max }}}^{+\infty }\int _{e_{2}}^{+\infty }P(e_{1}U_{1}<xvw)f_{U_{2}|U_{2}>\frac{T}{E^{\max }}}(v)f_{U_{4}}(w)dvdw\nonumber \\&\quad =\int _{e_{2}}^{+\infty }\int _{\frac{T}{E^{\max }}}^{+\infty }e^{\frac{T}{ E_{\max }\lambda _{2}}}\left[ 1-e^{-\frac{xvw}{e_{1}\lambda _{1}}}\right] \frac{e^{-\frac{v}{\lambda _{2}}}}{\lambda _{2}}dv\frac{e^{-\frac{(w-e_{2})}{ e_{3}\lambda _{3}}}}{e_{3}\lambda _{3}}dw \end{aligned}$$
$$\int _{\frac{T}{E^{\max }}}^{+\infty }e^{\frac{T}{E_{\max }\lambda _{2}}} \left[ 1-e^{-\frac{xvw}{e_{1}\lambda _{1}}}\right] \frac{e^{-\frac{v}{ \lambda _{2}}}}{\lambda _{2}}dv=1-\frac{e^{-\frac{Txw}{E^{\max }e_{1\lambda _{1}}}}}{1+\frac{\lambda _{2}xw}{\lambda _{1}e_{1}}}$$
Using (54) and (55), we deduce
$$\begin{aligned} P\left( \Gamma _{SD}<x|\frac{T}{|g_{SP_{R}}|^{2}}<E^{\max }\right)& = \int _{e_{2}}^{+\infty }\left[ 1-\frac{e^{-\frac{Txw}{E^{\max }e_{1\lambda _{1}}}}}{1+\frac{\lambda _{2}xw}{\lambda _{1}e_{1}}}\right] \frac{e^{-\frac{ (w-e_{2})}{e_{3}\lambda _{3}}}}{e_{3}\lambda _{3}}dw\nonumber \\& = 1-\frac{e^{\frac{e_{2}}{e_{3}\lambda _{3}}}}{e_{3}\lambda _{3}} \int _{e_{2}}^{+\infty }\frac{e^{-w\left( \frac{1}{e_{3}\lambda _{3}}+\frac{xT }{E^{\max }e_{1}\lambda _{1}}\right) }}{1+\frac{\lambda _{2}xw}{\lambda _{1}e_{1}}}dw \end{aligned}$$
Using the change of variable
$$z=1+\frac{\lambda _{2}xw}{\lambda _{1}e_{1}}$$
We deduce
$$\begin{aligned}&P\left( \Gamma _{SD}<x|\frac{T}{|g_{SP_{R}}|^{2}}<E^{\max }\right) \nonumber \\&\quad =1-\frac{ e^{\frac{e_{2}}{e_{3}\lambda _{3}}}}{e_{3}\lambda _{3}}\frac{\lambda _{1}e_{1}}{\lambda _{2}x}\times \int _{1+\frac{\lambda _{2}xe_{2}}{\lambda _{1}e_{1}} }^{+\infty }\frac{e^{-\left( z-1\right) \frac{\lambda _{1}e_{1}}{\lambda _{2}x}\left( \frac{1}{e_{3}\lambda _{3}}+\frac{xT}{E^{\max }e_{1}\lambda _{1} }\right) }}{z}dz\nonumber \\&\quad =1-\frac{e^{\frac{e_{2}}{e_{3}\lambda _{3}}}}{e_{3}\lambda _{3}}\frac{ \lambda _{1}e_{1}}{\lambda _{2}x}e^{\frac{\lambda _{1}e_{1}}{e_{3}\lambda _{3}\lambda _{2}x}+\frac{T}{E^{\max }\lambda _{2}}}\times E_{i}(\left( \frac{ \lambda _{1}e_{1}}{\lambda _{2}x}+e_{2}\right) \left( \frac{1}{e_{3}\lambda _{3}}+\frac{xT}{E^{\max }e_{1}\lambda _{1}}\right) ) \end{aligned}$$
where \(E_{i}(x)\) is the exponential integral function defined as
$$E_{i}(x)=\int _{x}^{+\infty }\frac{e^{-t}}{t}dt.$$
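When evaluating the final expression numerically, note that the \(E_{i}(x)\) defined here, \(\int _{x}^{+\infty }e^{-t}/t\,dt\), is the standard exponential integral \(E_{1}(x)\); in Python it corresponds to `scipy.special.exp1`, not `scipy.special.expi` (which uses a different sign and principal-value convention):

```python
from scipy.special import exp1

# E_1(1) = integral from 1 to infinity of e^-t / t dt
print(exp1(1.0))  # 0.21938..., i.e. the E_i(x) defined above
```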
© Springer Science+Business Media, LLC, part of Springer Nature 2020
1.College of Computer Science and Engineering in YanbuTaibah UniversityMadinahSaudi Arabia
2.COSIM Lab.SUPCOMArianaTunisia
Halima, N.B. & Boujemâa, H. Wireless Pers Commun (2020). https://doi.org/10.1007/s11277-020-07027-5
Publisher Name Springer US
Inferring regulatory element landscapes and transcription factor networks from cancer methylomes
Lijing Yao1,
Hui Shen2,
Peter W Laird2,
Peggy J Farnham1 &
Benjamin P Berman1,3
Genome Biology volume 16, Article number: 105 (2015)
Recent studies indicate that DNA methylation can be used to identify transcriptional enhancers, but no systematic approach has been developed for genome-wide identification and analysis of enhancers based on DNA methylation. We describe ELMER (Enhancer Linking by Methylation/Expression Relationships), an R-based tool that uses DNA methylation to identify enhancers and correlates enhancer state with expression of nearby genes to identify transcriptional targets. Transcription factor motif analysis of enhancers is coupled with expression analysis of transcription factors to infer upstream regulators. Using ELMER, we investigated more than 2,000 tumor samples from The Cancer Genome Atlas. We identified networks regulated by known cancer drivers such as GATA3 and FOXA1 (breast cancer), SOX17 and FOXA2 (endometrial cancer), and NFE2L2, SOX2, and TP63 (squamous cell lung cancer). We also identified novel networks with prognostic associations, including RUNX1 in kidney cancer. We propose ELMER as a powerful new paradigm for understanding the cis-regulatory interface between cancer-associated transcription factors and their functional target genes.
ENCODE and other large-scale efforts have mapped transcription factor binding sites, histone modifications, and chromatin accessibility in a common set of cell lines [1, 2]. Integration of these genome-wide maps has led to the view that distinct epigenetic marks are not independent but rather that chromatin is organized into discrete functional states marked by particular combinations of individual features [3, 4]. Computational methods such as chromHMM [5] and Segway [6] have been developed to identify these states from individual histone and accessibility features, and the state most consistently linked to cellular identity is the 'active enhancer' state defined by the presence of histone H3 lysine 27 acetylation and low levels of the canonical promoter mark, H3 lysine 4 tri-methylation [5, 7, 8]. Active enhancers are enriched for sequences bound by cell-type specific transcription factors, reinforcing their preeminent role in encoding the cis-regulatory logic of the genome. Projects such as the NIH Roadmap [2, 9] and Blueprint [10] have also mapped histone modifications and chromatin accessibility in primary human tissues, identifying a large set of enhancers from many different cell types. Others have employed these datasets to identify large numbers of enhancer-promoter pairs in 12 human cell types [11, 12]. However, approaches such as ChIP-seq or DNAse hypersensitivity assays require careful tissue handling (to avoid protein degradation) and relatively large numbers of cells (106 to 107) and thus have not been applied to the identification of enhancers in primary tumor tissues.
Fortunately, enhancers can also be identified using patterns of 5-methylcytosine, an epigenetic mark that is maintained more stably than protein marks, and can be detected genome-wide in as few as 1,000 cells [13]. Historically, DNA methylation research has focused on gene promoter regions (reviewed in [14]). While early work suggested that DNA methylation could mark enhancer regions of interest [15], this was not widely appreciated until the first complete and unbiased study of DNA methylation in human cells revealed enhancer regions as being unmethylated in a cell-type specific manner [16]. A later study used the same whole-genome bisulfite sequencing (WGBS) approach to identify all genomic regions containing little or no methylation; these regions overwhelmingly corresponded to enhancers and other distal regulatory elements [17]. Cell-type specific demethylation of enhancers was confirmed by targeted bisulfite sequencing in the ENCODE project [1]. More recently, WGBS data from 30 diverse human cell types showed that enhancers had highly dynamic methylation patterns - roughly 30% of the most cell type-specific regions in the genome overlapped known enhancers (compared to 5% that overlapped gene promoters). The mechanism underlying these correlations is not well understood, but could involve de-methylation of DNA initiated by transcription factor binding ([17]; reviewed in [18]) and maintained by DNA methyltransferase protection by Histone H3 lysine 4 monomethyl groups [19].
In cancer tissues, recent studies have shown that cancer-specific enhancers and transcription factor binding sites can be identified from DNA methylation profiles. The first genome-scale analysis of transcription factor binding sites in cancer found that binding by transcription factors such as Sp1, NRF1, and YY1 could protect CpG island gene promoters from cancer-specific hypermethylation [20]. Our WGBS study of a human colon cancer identified all genomic regions that changed from a methylated state in the normal colon to an unmethylated state in the tumor; 90% of these regions overlapped known enhancers, and a highly disproportionate number contained binding sites for the AP-1 transcription factor [21]. A more recent study showed that DNA methylation changes at enhancer elements were significantly better than those at promoters for predicting gene expression changes of target genes in cancer [22]. WGBS was recently used to show that unmethylated regions were enriched for binding sites for subtype-specific transcription factors in pediatric medulloblastoma (LEF1 for the WNT subtype and GLI2 for the SHH subtype [23]).
Once an enhancer has been identified by DNA methylation, identification of the specific target gene or genes whose expression is modulated by that enhancer can be challenging because the target genes can be thousands to millions of base pairs away from the enhancer. A study using chromatin conformation sequencing (ChIA-PET) to study enhancer/promoter interactions found that the median distance between an enhancer and a promoter was approximately 50 kb, and that at least 40 % of enhancers skip one or more annotated genes to find their target promoter [24]. The ChIA-PET dataset was used in conjunction with DNA methylation and RNA-seq data from breast cancer cases in The Cancer Genome Atlas (TCGA) to identify enhancer/promoter pairs in vivo [25]. Other reports have also shown that methylation of distal regulatory sites is closely related to gene expression levels across the genome [26]. Here, we present a statistical framework for identification of cancer-specific enhancers and paired gene promoters, and use it to investigate approximately 3,000 cases from 11 tumors types in the TCGA 'Pan Cancer' analysis set [27]. Our R software package, ELMER, uses only methylation and expression data, and does not require any chromatin conformation or ChIP-seq data. Furthermore, by identifying transcription factor binding motifs present within enhancers, and incorporating expression patterns of upstream transcription factors, ELMER is able to infer transcription factor networks activated in specific cancer subtypes. This work suggests a general approach for identifying in vivo transcription factor networks and the associated regulatory control sequences altered in cancer.
Identifying cancer-specific DNA methylation changes in distal enhancer regions for 10 cancer types
To identify cancer-specific changes in DNA methylation, we obtained 3,381 DNA methylation datasets for 11 types of primary tumors from the TCGA Pan Cancer analysis set [27]. The cancer types we included in our analyses were leukemia (LAML), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), kidney renal clear cell carcinoma (KIRC), bladder urothelial carcinoma (BLCA), uterine corpus endometrioid carcinoma (UCEC), glioblastoma (GBM), head and neck squamous cell carcinoma (HNSC), breast cancer (BRCA), colon adenocarcinoma (COAD), and rectal adenocarcinoma (READ). Based on previous TCGA studies [28], COAD and READ are very similar and are often combined for analyses. Therefore we combined these two cancer types (indicated herein as CRC), resulting in 10 different primary tumor types. The TCGA ID numbers for all samples can be found in Additional file 1.
The DNA methylation datasets were produced using the Illumina Infinium HumanMethylation450 (HM450) BeadChip platform. The HM450 array allows the interrogation of more than 485,000 methylation sites at single-nucleotide resolution, covering 96 % of CpG islands and 99 % of RefSeq genes in the human genome. We used TCGA Level 3 data, which are normalized using platform-specific internal controls, and which mask out probes that failed or that overlap SNPs or repeats on the HumanMethylation450 array. Then, because we focused on distal enhancers, we selected only those probes that are greater than +/- 2 kb from a known TSS (defined using GENCODE v15 [29]), resulting in a set of 145,265 distal probes. We next wanted to limit the number of candidate probes tested, so we filtered based on two large enhancer databases. While these databases do not include a large number of primary tumors, they do include cancer cell lines and a large number of cell types. The largest enhancer set came from a combination of enhancers from the Roadmap Epigenomics Mapping Consortium (REMC) and the Encyclopedia of DNA Elements (ENCODE) Project, in which enhancers were identified using ChromHMM [30] for 98 tissues or cell lines [2, 9, 31]. We used the union of genomic elements labeled as EnhG1, EnhG2, EnhA1, or EnhA2 (representing intergenic and intragenic active enhancers) in any of the 98 cell types, resulting in a total of 389,967 non-overlapping enhancer regions. A total of 101,918 distal probes from the HM450 array overlapped with these enhancer regions. We also downloaded from FANTOM5 enhancers having associated eRNAs for 400 distinct cell types [32]. The set of FANTOM5 enhancers (43,011) was much smaller than the set of REMC/ENCODE enhancers and only added an additional 600 probes, resulting in a total of 102,518 distal probe regions that overlapped with a previously identified enhancer region (Fig. 1a). This set of 102,518 distal enhancer probes (Additional file 2) included at least one CpG for 15 % of all enhancers in our annotation set, suggesting that the HM450k array can be used to sample a meaningful subset of enhancers genome-wide. It also included the majority (70 %) of all 145,265 distal probes on the array, so we believe that the analysis described below covers the vast majority of identifiable enhancers based on the HM450k array design. The ELMER R package also allows a complete search of all distal probes on the array, without filtering out the 30 % not associated with any known enhancer.
Identifying cancer-specific DNA methylation changes in distal enhancer regions. a Out of 145,265 distal probes on the HM450k platform, 102,518 were contained within our annotated enhancer regions (with approximately 1/8 of all distal enhancers being covered by at least one probe). b The statistical method used to identify probes hypomethylated (or hypermethylated) in cancer (see Methods for additional details). The heatmap in the top panel shows the DNA methylation level at each probe pi for each sample from a particular cancer type (either an adjacent normal, or a tumor). Each cell is a methylation β value, reflecting the fraction of methylated DNA molecules at each CpG probe. The remainder of the panel illustrates our statistical test, which compares only the most extreme 20 % of normal samples to the most extreme 20 % of tumor samples, in order to identify probes hypomethylated in only a subset of tumors. c Shown is a histogram representing the number of cancer-specific hypomethylated (top graph) or hypermethylated (bottom graph) distal enhancer probes identified for each cancer type. The fraction of these probes shared by one or more other tumor types is indicated by the color bars (1 indicates that the probe is hypomethylated in only that tumor type, 2 indicates that it is hypomethylated in one other tumor type, and so on)
To identify enhancers that displayed cancer-specific changes in DNA methylation, we applied a t-test to identify enhancer probes that were significantly hypermethylated or hypomethylated within tumor samples of each cancer type, relative to TCGA adjacent normal samples from the same tissue (Fig. 1b; see Methods for details); a list of the identified hypermethylated or hypomethylated enhancer probes for each tumor type can be found in Additional file 3. We identified many more hypomethylated enhancer probes than hypermethylated probes for each of the 10 cancer types (Fig. 1c). Interestingly, most of the probes showing DNA methylation changes were found to have similar changes in DNA methylation in more than one cancer type. However, some probes were uniquely hypermethylated or hypomethylated in only one of the 10 tumor types. We note that it is not possible for us to be certain that the adjacent tissues collected by TCGA correspond to the same cell type from which the cancer arose, and therefore some of these methylation changes may correspond to tissue-specific differences rather than changes arising in the cancer. However, these differentially methylated probes are only candidates, as the next steps of ELMER (described below) use differences across all normal and tumor tissues (of the same cancer type) to determine true regulatory interactions.
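A minimal sketch of the per-probe extreme-quintile test of Fig. 1b, for the hypomethylation direction, is shown below; this is our own simplification (assuming a one-sided Welch t-test), while the ELMER package implements the full procedure in R:

```python
import numpy as np
from scipy import stats

def hypo_test(normal: np.ndarray, tumor: np.ndarray, frac: float = 0.2):
    """Compare the least-methylated quintiles of normals and tumors."""
    k_n = max(1, int(frac * len(normal)))
    k_t = max(1, int(frac * len(tumor)))
    low_normal = np.sort(normal)[:k_n]   # 20% least-methylated normals
    low_tumor = np.sort(tumor)[:k_t]     # 20% least-methylated tumors
    # One-sided Welch t-test: tumor quintile less methylated than normal
    t, p = stats.ttest_ind(low_tumor, low_normal,
                           equal_var=False, alternative="less")
    return t, p
```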
Linking methylation-affected enhancers to gene expression
Although we identified approximately 100,000 enhancer probes that showed DNA methylation changes, it was not clear whether all of these enhancers were actually involved in regulating gene expression. Previous studies have shown that only a portion of genomic regions classified as enhancers by chromatin marks or recruitment of histone acetyltransferases show activity in various assays [33, 34]. In addition, it is difficult to know which gene is regulated by each enhancer, since enhancers can work from a distance, in either orientation, and do not necessarily regulate the closest gene. For example, in a ChIA-PET study using an antibody for RNA polymerase II, Li et al. [24] identified approximately 20,000 to 30,000 enhancer-promoter loops in MCF7 or K562 cells. Of these, more than 40 % of the enhancers skipped over the nearest gene to loop to a farther one. In order to identify target genes regulated by the distal regulatory elements, we analyzed expression data (RNA-seq) for 10 genes upstream and 10 genes downstream from each distal regulatory element; these 20 nearby genes constituted candidate gene targets. We preferred this approach over methods that evaluate all genes within a fixed-length genomic window, because it controls statistical power for the large variation in gene density across the genome. Because not all TCGA samples had matched gene expression datasets, we selected the 2,841 TCGA samples that had matched gene expression (RNA-seq) and HM450k DNA methylation data (in Additional file 1). Although we realize that this method cannot identify target genes that are farther than ten genes away or on different chromosomes, we anticipated that many of the enhancers would regulate a gene within this distance [5]. Genes that are positively regulated by the enhancers should show a negative correlation between the DNA methylation level of the probe and expression of a putative target gene. We identified statistically significant CpG probe-gene pairs by comparing expression of the candidate gene in the upper vs. the lower quintile of samples, as measured by enhancer probe methylation. For this and all other downstream analyses, we included both normal and tumor samples, and only included samples within an individual cancer type (for example, UCEC), to avoid effects of tissue-specific differences and potential batch effects. We did not explicitly require expression changes between normal and tumor samples, because the number of normal samples with expression data was often quite limited. However, most genes identified did in fact show expression changes in the expected direction (downregulated for hypermethylated enhancers, and upregulated for hypomethylated enhancers; see the 'tumor vs. normal expression' worksheet in Additional file 4). To compare methylation quintiles vs. expression, we used a non-parametric U test, calculating an empirical P value based on random permutations of the methylation probe tested, and kept all pairs with an empirical P value <0.001 (Fig. 2a; see Methods for details). An example of one probe and its relationship to the expression of the 20 nearby genes in UCEC is shown in Fig. 2b. In this case, the probe showed an inverse correlation of methylation with expression of TFAP2A, which was the nearest gene upstream of the probe (approximately 7 kb away). A list of all putative enhancer-gene interactions can be found in Additional file 4.
Linking differentially methylated probes to expression of nearby genes. a Shown is an illustration of the method used to associate each differentially methylated enhancer probe with one or more genes based on gene expression (see Methods section for additional details). For each of n probes identified as hypomethylated in a given cancer type (shown as blue circles), 10 genes upstream and 10 genes downstream were considered, yielding 20n statistical tests, one for each probe-gene pair. Each statistical test is performed across the complete set of normal and tumor samples within a particular cancer type. For instance, we show a scatterplot to illustrate such a test across the 258 endometrial (UCEC) tumor samples and 10 UCEC adjacent normals, showing the desired inverse correlation between methylation (x axis) and expression of the nearby gene (y axis). A Mann-Whitney U test was then performed, with the null hypothesis that the gene expression of group M samples is greater than or equal to that of group U samples. The U group consists of the 20 % least methylated samples for probe Pi, and the M group consists of the top 20 % most methylated. The raw P value (Pr) was compared to a permutation-based distribution of null P values, generated by performing 10,000 U tests between the actual gene Gj and DNA methylation at a randomly selected distal non-enhancer probe. The empirical P value (Pe) was calculated from the rank of Pr within the 10,000 trials. b Each scatter plot shows the methylation level of an example probe cg09606832 in all UCEC samples plotted against the expression of one of 20 adjacent genes. Only one gene, TFAP2A, shows a significant Pe indicating negative correlation, and is considered the linked gene
Using this method, we identified a total of 11,972 hypomethylated probe-gene pairs and 2,308 hypermethylated probe-gene pairs in the set of 10 tumor types (Fig. 3a), with the number of hypomethylated probe-gene pairs ranging from 499 to 3,847 in different tumor types, and the number of hypermethylated probe-gene pairs ranging from 119 to 464 (see Additional file 5 for a breakdown by type). Analysis of the probe-gene pairs revealed that most of the identified pairs were only found in one cancer type, suggesting that each enhancer regulates a specific gene in a tumor type-specific manner (Fig. 3a). Because some enhancers contained two or more probe features, we clustered probes that were within 500 bp of each other into 6,068 hypomethylated and 1,288 hypermethylated enhancer regions. Each enhancer was associated with an average of 1.0 to 1.7 genes, depending on tumor type, and each gene was associated with an average of 1.2 to 2.1 enhancers (Fig. 3b). Our work is consistent with previous studies indicating that distal elements commonly loop to or are associated with expression from 1 to 3 promoters [35]. Although the enhancer-gene pairs that we identified were highly specific for a certain tumor type, we found that approximately 34 % of the genes identified as regulated by a hypomethylated probe and approximately 17 % of the genes identified as regulated by a hypermethylated probe were targets in more than one tumor type (Fig. 3a), suggesting that a gene could utilize different enhancers in different tumor types for cancer-specific regulation.
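To make the clustering step concrete, the following minimal R sketch merges probes lying within 500 bp of each other into single enhancer regions using GenomicRanges; the probe coordinates and IDs are invented for illustration and are not taken from our data.

```r
# Sketch: merging nearby HM450 probes into enhancer regions.
# reduce() with min.gapwidth merges ranges separated by gaps of < 500 bp.
suppressPackageStartupMessages(library(GenomicRanges))

probes <- GRanges(
  seqnames = "chr6",
  ranges   = IRanges(start = c(10200, 10450, 10900, 25000), width = 2),
  probeID  = c("cgA", "cgB", "cgC", "cgD")   # hypothetical probe IDs
)

# The first three probes collapse into one enhancer region;
# the fourth, > 500 bp away from the rest, remains its own region.
enhancer.regions <- reduce(probes, min.gapwidth = 500)
enhancer.regions
```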
Comparison of probe-gene pairs between the different cancer types. a For the hypomethylated (top) and hypermethylated (bottom) probe-gene pairs, shown are pie charts that indicate the percentage of probe-gene pairs, probes, and genes that are present in one (purple) or shared by more than one of the 10 cancer types. b Using all probe-gene pairs, the distribution of the number of genes per enhancer (top) and the number of enhancers per gene (bottom) is shown for each individual cancer type. The mean of each is shown as a number within the bar plot
To further investigate the relationships between putative enhancers and linked target genes, we determined the frequency with which the probe-gene pairs we identified were separated by specific distances, using window sizes of 50 or 200 kb (Fig. 4a). We found that both hypomethylated and hypermethylated probe-gene pairs were more frequent than random in the first 50 kb window, with the effect more pronounced for hypermethylated pairs. A previous study using Hi-C to identify promoter-enhancer loops found that approximately 25 % of enhancer-promoter pairs were within a 50 kb range and approximately 75 % spanned 100 kb or larger genomic distances, with a median distance of 124 kb [36], whereas a recent study using in situ Hi-C identified contact domains ranging in size from 40 kb to 3 Mb, with a median size of 185 kb [37]. We then selected the set of probe-gene pairs where a single enhancer was linked to only a single gene (the great majority), and determined how often the linked gene corresponded to the nearest TSS. In previous studies, enhancers have been shown to loop to the nearest promoter only 27 % to 40 % of the time, skipping over the nearest TSS to loop to promoters farther away [24, 35]. We found that only approximately 15 % to 30 % of the time did the correlated gene correspond to the nearest TSS, with the percentage being higher for hypermethylated probe-gene pairs than for hypomethylated probe-gene pairs (Fig. 4b). This was significantly higher than the frequency of an enhancer being linked to any other, more distant gene (4 % to 8 %); because our statistical test had no built-in preference for the nearest gene, the disproportionate number of first-gene linkages gave us confidence that many or most of our linkages were true cis-regulatory links, including those that linked to more distant genes. If the linked gene did not correspond to the nearest TSS, there was very little preference to link to a nearby gene; the one exception was that hypermethylated enhancers were more likely to link to either the closest or second closest gene. This analysis is shown individually for each of the 10 tumor types in Additional file 6.
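As an illustration of the distance analysis, the short sketch below bins probe-gene distances into 50 kb windows and compares them with a random-pairing background; all distances here are simulated, whereas the actual analysis used the real pair lists and 1,000 randomized datasets to derive confidence intervals.

```r
# Sketch: binning probe-gene distances (simulated) into 50 kb windows
# and comparing observed pairs to a random-pairing background.
set.seed(5)
obs.d  <- abs(rnorm(2000, mean = 0, sd = 1.5e5))   # invented observed distances
rand.d <- abs(runif(2000, min = 0, max = 1e6))     # invented random background

breaks    <- seq(0, 1e6, by = 5e4)                 # 50 kb bins up to 1 Mb
obs.frac  <- table(cut(pmin(obs.d,  1e6 - 1), breaks)) / length(obs.d)
rand.frac <- table(cut(pmin(rand.d, 1e6 - 1), breaks)) / length(rand.d)

# Proportion of pairs per bin, observed vs. random (first four bins shown)
round(cbind(observed = obs.frac, random = rand.frac)[1:4, ], 3)
```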
Physical characteristics of the probe-gene pairs. a A histogram of probe-gene distances for all pairs with a hypomethylated (green) or hypermethylated (yellow) probe. Shown is the distribution of the distance between linked distal enhancer probes and genes. The x axis shows distances in bins of 50 kb or 200 kb. The y axis shows the proportion of all probe-gene pairs in the category (hyper- or hypomethylated) that fall into each range. These were compared to randomized datasets (gray bars), which were generated by randomly selecting 1,000 probes from the full set of 145,265 distal probes, and randomly pairing each with one of its 20 adjacent genes. We generated 1,000 such datasets to generate 95 % confidence intervals for each bin (+/- 1.96 * SD). b For each probe in a probe-gene pair, the 20 adjacent genes were ranked by distance, and shown is the proportion of all probes linked to genes of a given rank. For this analysis, probes linked to more than one gene, as well as multiple probes linked to the same gene, were omitted
As indicated above, many of the genes that we identified as linked to enhancers with cancer-associated DNA methylation differences were actually identified in more than one cancer type, suggesting that they may have some common function in tumor initiation or progression. We selected all genes linked to an enhancer probe in more than one cancer type and performed a Gene Ontology enrichment analysis (Fig. 5). The 1,959 genes linked to hypomethylated (activated) enhancer probes correspond to genes upregulated in cancer, and the 284 genes linked to hypermethylated (inactivated) probes correspond to genes downregulated in cancer. Interestingly, we found that genes linked to hypermethylated (inactivated) enhancers were genes involved in development and differentiation. In contrast, genes linked to hypomethylated (activated) enhancers were classified as involved in the cell cycle and other cellular processes. Accordingly, we have identified known tumor suppressors (for example, TSG1, RBM6, SPRY2, CDKN1A, and UBE4B) in the set of genes potentially regulated by the hypermethylated enhancers and known oncogenes and cancer-associated genes (for example, MYC, TERT, ERBB3, ERBB4, FGFR3, VEGFA, CDK7, and CCND1) in the set of genes potentially regulated by the hypomethylated enhancers.
Gene Ontology (GO) enrichment analysis for genes identified in more than one cancer type. All genes identified in more than one cancer type by probe-gene pairs were analyzed for enrichment in particular GO categories, using the TopGO program. Activated genes (associated with hypomethylated enhancer probes) are shown in (a) and inactivated genes (associated with hypermethylated enhancer probes) are shown in (b). All GO categories with an adjusted enrichment P value of less than 0.01 (indicated next to the category name) and fold change more than 1.5 are included in the figure, and categories within the same biological process (color) are ordered by enrichment fold change (shown on the x axis). The adjusted enrichment P values are labeled in white in the graph
Identification of regulatory TFs in each cancer type
Changes in the methylation status of an enhancer region can be due to gain (for hypomethylated enhancers) or loss (for hypermethylated enhancers) of site-specific transcription factors. To obtain insight into which site-specific TFs may be involved in setting the tumor-specific DNA methylation patterns, we examined the correspondence between cancer-specific hypermethylated or hypomethylated probes and known regulatory factor recognition sequence motifs. We used a combined set of motifs present in the JASPAR-Core [38] and Factorbook [39] datasets. We selected the enhancer probes that were identified in probe-gene pairs (using an empirical P value cutoff of 0.001), then used the +/- 100 bp sequence around each probe to search for instances of the 145 transcription factor motifs. We calculated the frequency of each motif within the hypomethylated (or hypermethylated) probe set for a given cancer vs. the frequency of the motif within the entire enhancer probe set. An odds ratio (OR) was calculated from these two frequencies, and only those motifs with an OR greater than 1.1 (at a confidence interval of 95 %) were selected as enriched within the given cancer type (motifs with fewer than 10 instances within the given probe set were excluded). All enriched motifs are listed in Additional file 7. For hypermethylated loci, we found that many of the identified motifs (such as E2F, EGR1, NRF1, Sp1) were associated with promoter regions (Additional file 8), suggesting that many of the hypermethylated loci may actually correspond to previously uncharacterized promoter regions. This likely accounts for the relatively high percentage of hypermethylated probe-gene pairs that showed linkage to the nearest annotated gene (Fig. 4b), which could reflect RNA-seq tags from the unannotated transcript isoform. Because many of the hypermethylated cases might not represent true distal enhancers, and because some may in fact be the result of cancer-related CpG island promoter hypermethylation [14], we focused the remaining analyses on the 38 motifs found to be enriched within hypomethylated loci (Fig. 6a). Some of these motifs were common to many different cancers, such as AP1, which was enriched within nine of the 10 cancer types. Other motifs were enriched in two or more specific tumor types, while still others were limited to a single type, such as GATA in BRCA, TP53/TP63 in LUSC, and HNF1A/B in UCEC.
Identification of enhancer sets predicted to be co-regulated by the same transcription factor. a For 38 motifs enriched within hypomethylated probe-gene pairs in one or more cancer types, we calculated the 95 % confidence interval (CI) for the motif enrichment odds ratio; the lower bound of the 95 % CI is shown for each cancer type in the heatmap. b An illustration of the method for linking sets of enhancers with the same motif to an upstream TF regulator (see Methods for additional details). For each of the 38 (m) enriched motifs identified in panel (a), the average DNA methylation at all distal enhancer probes having that motif (in a specific tumor type) was compared to the expression levels of each of 1,777 (k) human TFs (Additional file 17). One such pair is shown as a scatter plot of all breast cancer (BRCA) tumor and adjacent normal samples, for the GATA motif and the GATA3 TF. BRCA samples (660) are color coded by integrated molecular subtypes defined by the TCGA Pan-Cancer project, and extremes are selected as the 20 % of samples with the lowest methylation (U) and the 20 % with the highest methylation (M). A Mann-Whitney U test was performed to obtain the raw P value (Pr). All 1,777 TFs were then ranked by Pr (plot at upper right), and the top 5 % of the ranked TFs (dashed blue line) were considered to be significantly associated. The top three ranked TFs, along with each member of the specific DNA-binding family (in this case, GATAs), are labeled. Additional file 10 contains ranked TF plots for all motifs and all cancer types. c One of the 230 hypomethylated probe-gene pairs in BRCA containing a GATA motif corresponds to a downstream enhancer of the CCND1 gene. ENCODE ChIP-seq data in the Luminal-subtype MCF7 cell line verify that this enhancer is bound by the ELMER-predicted GATA3 TF
Different members of a TF family have very similar DNA binding domains that can bind very similar or identical motifs. For example, we have previously shown that GATA1 and GATA2 bind to the same regulatory regions [40] and that members of the E2F family can bind to the same promoters [41]. Thus, identification of a motif does not uniquely identify the TF that binds in vivo to a region containing that motif. However, there is evidence to support the hypothesis that expression levels of a particular TF can correlate with levels of demethylation and subsequent gene expression [18, 42, 43]. To discover which members of a TF family are likely to be responsible for binding in vivo to the hypomethylated enhancer probes identified above and regulating expression of their putative target genes, we analyzed the correlation between methylation of the probes containing a particular motif and expression of all known TFs (Fig. 6b, left). We ranked all the TFs by the degree to which their expression inversely correlated with the methylation status of the enhancers containing the motif (Fig. 6b, right), which allowed us to determine the family member most likely to be involved in regulation of the putative target genes in that particular cancer. For example, the GATA motif was enriched in (expression-linked) enhancer probes in BRCA samples (Fig. 6a). There are six members of the GATA family, with different members being linked to different differentiation phenotypes. For example, GATA1–3 have been linked to the specification of different hematopoietic cell fates and GATA4–6 are involved in differentiation of cardiac and lung tissues [44–49]. GATA3 is one of the most highly enriched transcription factors in the mammary epithelium, has been shown to be necessary for mammary cell differentiation, and is specifically required to maintain the luminal cell fate [48, 49]. Studies of human breast cancers have shown that GATA3 is expressed in early stage, well-differentiated tumors but not in advanced invasive cancers. In addition, GATA3 expression is correlated with longer disease-free survival, and evidence suggests that it can prevent or reverse the epithelial to mesenchymal transition that is characteristic of cancer metastasis [50]. Not surprisingly, our analysis of the correlation of the methylation of the GATA motif-containing hypomethylated probes identified GATA3 as the most likely member of the GATA family to be responsible for the observed hypomethylation of GATA-containing enhancers in the BRCA samples (Fig. 6b). Not only was GATA3 the second most correlated transcription factor overall, but the extent of correlation made it easily distinguishable from other family members (GATA3 had a U test P value less than 10^−40, vs. P values greater than 10^−5 for all other GATA family members). Furthermore, expression of GATA3 and methylation of GATA-containing enhancer probes were co-linked to breast cancer subtypes. As shown using color-coding in the Fig. 6b scatterplot, Luminal tumors had high expression of GATA3 and low methylation of GATA-containing enhancer probes, while Basal-like subtype tumors showed the converse. Figure 6c shows an example of one of these GATA-containing enhancer probes (cg1396202), along with the target gene (CCND1) predicted by expression to be regulated by this putative enhancer. ENCODE ChIP-seq data in the Luminal-subtype MCF7 cell line confirm that this putative enhancer region is indeed bound by GATA3, confirming the relationship between transcription factor binding and demethylation shown in [25].
This case was among the easiest to detect, since breast cancer has two large subtypes (Luminal and Basal-like), which are molecularly quite distinct and are increasingly seen as two different diseases. As with all cancer genomic approaches, rarer subtypes will require larger numbers of samples to be identified by ELMER. Nevertheless, our results on other, more challenging cancer types were also promising, as described below.
The same correlation analysis was performed for all motifs enriched in hypomethylated enhancer probes, and the most highly correlated member of the TF family expected to bind to each motif was identified (Additional files 9 and 10). In all, we identified 38 enhancer-TF pairs in the 10 tumor types. Although some of these TFs have previously been implicated in tumor development in the cancer type in which they were identified (for example, GATA3 in BRCA), many other associations were novel and provide new hypotheses regarding basic cancer biology and new potential targets for cancer prevention and treatment. In order to investigate the potential clinical relevance of the new TF networks identified, we searched for cases where the TF found to be overexpressed in a subset of cases was also linked to patient survival. Our TF family member analysis showed that RUNX1, RUNX2, and RUNX3 were all within the top 5 % of TFs correlated with hypomethylation of RUNX-containing enhancer probes in clear cell renal carcinoma (KIRC) (Fig. 7a, b). Of these, RUNX1 and RUNX2 were very highly correlated, with RUNX3 being only moderately so (Fig. 7a, b). When we investigated patient survival in KIRC, RUNX1 and RUNX2 had highly significant associations with poor survival outcome after controlling for other covariates, while RUNX3 was more marginal (Fig. 7c and Additional file 11A). These results suggest that the identification of specific TFs based on enhancer methylation analysis may lead to new insights into tumor classification and clinical outcomes (other identified TFs with association to survival are listed in Additional file 11B).
High RUNX1 expression is associated with poor survival in clear cell renal carcinoma. a Shown are scatter plots for the average DNA methylation at hypomethylated-paired probes containing a RUNX motif, plotted against expression for RUNX family members RUNX1, RUNX2, and RUNX3. The number (and percentage) of hypomethylated-paired probes having a RUNX motif in each cancer type is indicated underneath the name of each cancer type. b The ranked TF plot, as described in Fig. 6, is plotted for the RUNX motif in clear cell renal carcinoma (KIRC); RUNX1, RUNX2, and RUNX3 are all within the top 5 % (dotted line) of all TFs. c Kaplan-Meier survival curves for TCGA KIRC samples, stratified by expression of RUNX1 (left), RUNX2 (middle), or RUNX3 (right). In each plot, the survival data for patients having tumors with the highest (top 30 %) vs. lowest (bottom 30 %) expression of the given RUNX family member are shown; the log-rank test P value between the high and low groups is indicated
In our studies, we have used tumor-specific changes of the DNA methylation status within distal enhancer regions to provide insight into the mechanisms of gene expression, transcription factor networks, and tumor classification. We have shown that this can be a powerful approach for generating hypotheses about master regulators in cancer, and we propose that ELMER analysis be applied along with other hypothesis-generating approaches in high throughput cancer genomics. For the TCGA Pan-Cancer dataset, we provide to the community prioritized lists of putative enhancer-target gene pairs for future validation, and lists of site-specific transcription factors that should be further investigated for their role in the development and progression of specific tumor types.
Starting with a set of approximately 100,000 distal enhancer probes, we identified tens of thousands of enhancer regions that showed changes in methylation status in primary human tumors (Fig. 8). We identified many more hypomethylated (ostensibly activated) enhancers than hypermethylated (ostensibly deactivated) enhancers and have focused mainly on the hypomethylated enhancers in this study. We identified from 5,147 to 26,787 hypomethylated probes in different tumor types, corresponding to between 4,841 and 21,374 distinct enhancer regions. However, only a small subset of these hypomethylated enhancer probes (a total of 6,559 for all tumor types combined) could be linked to a putative target gene (based on expression levels of the 10 nearest genes upstream and 10 nearest genes downstream of the enhancer), ranging from a low of approximately 200 enhancer-putative target gene pairs in acute myelogenous leukemia to approximately 4,000 enhancer-putative gene pairs in lung squamous cell carcinomas. We feel that the expression filtering step is important for identifying those regions truly associated with enhancer-specific methylation, as other long-range methylation changes (such as global hypomethylation [14]) may also affect enhancer probes.
Identification of in vivo TF networks, including upstream TFs and downstream enhancers and gene targets. The innermost black circle represents the 102,518 distal enhancer probes from the HM450 platform. The next level (labeled Hypo) shows the number of hypomethylated distal enhancer probes identified in each cancer type. The third level (labeled Paired hypo) shows the number of hypomethylated probes that were significantly linked to a putative target gene in each cancer type. The number in the outermost level corresponds to the number of putative target genes (each linked by expression level to a specific hypomethylated enhancer) predicted to be regulated by the indicated TF (fourth level); where multiple TF family members were identified, only the most strongly associated family member is listed
We found that most of the putative linkages between enhancer probes and local gene expression were cancer type-specific and that, within each cancer type, most enhancers correlated with the expression of only one gene. In keeping with previous looping studies, we found that the putative target gene was typically not the nearest gene. In fact, the gene identified was the nearest gene in only approximately 15 % of the hypomethylated enhancer-gene pairs. As in other studies [51, 52], we found that the set of all hypomethylated enhancers was composed of similar proportions of intragenic and intergenic enhancers. We found that, compared to the intergenic enhancers, intragenic enhancers were 75 % more likely to be linked to expression of the nearest TSS (which in 88 % of cases was the gene in which the enhancer resided); see Additional file 12. An intragenic enhancer can loop to regulate the 'upstream' promoter of the gene in which it resides but could also act as an alternative promoter. Although we have eliminated all known promoters from our set of distal probes, we cannot eliminate the possibility that some of the intragenic enhancers represent as-of-yet unannotated, tumor-specific alternative promoters for the gene in which they reside [53, 54].
Our linking method is based strictly on correlation and therefore cannot absolutely rule out indirect (trans) interactions. For instance, if the same transcription factor or set of factors regulates both enhancer X and enhancer Y, the methylation patterns of X and Y across samples may be so similar that we link enhancer X to a gene that is in fact the direct target of enhancer Y. We have used high-confidence statistical thresholds in order to rule out as many of these indirect interactions as possible. Our search within the nearest 20 genes is unbiased, so the fact that we disproportionately find linkages to the gene nearest the enhancer probe provides strong evidence that we are identifying true direct (cis) interactions. We have provided a robust set of predicted linkages that can serve as a starting point for future experimental validations. Of course, we realize that we are working under a largely untested assumption: that anti-correlation between enhancer methylation and the expression level of a nearby gene indicates functional regulation. While this study and prior correlative studies [22, 23, 25] provide strong supporting evidence, further experimental studies (for example, using CRISPR/Cas9 to delete the enhancers in appropriate tumor cell lines, followed by RNA-seq) will be needed to determine with certainty that the enhancers regulate their putative target genes, and what degree of correlation is required to infer functionality. Similarly, a comparison between our predicted enhancer-target pairs and global analyses of long-range chromatin looping would be of interest. Unfortunately, chromatin conformation assay data are not available for any of the tumor tissue samples and, in fact, very few studies of global chromatin looping have been completed for cancer cell lines. However, we have identified a set of chromatin loops derived from deep-sequenced ChIA-PET data from MCF7 cells [24]. Although MCF7 cells are not representative of all breast cancers (and are cultured cells, not tumor tissues), we did find that 166 of the 2,038 enhancer probe-gene pairs we identified in breast cancer tumors (approximately 8 %) were also identified as loops in the MCF7 ChIA-PET data. This was an almost four-fold enrichment over randomized enhancer probe-gene pairs (see Additional file 13 for an enrichment analysis, along with a complete list of BRCA enhancer-gene pairs falling within loops in MCF7 cells). We note that the various assays used to study looping are not yet optimized and do not always identify the same sets of loops [55]; in addition, some loops may not be related to transcriptional regulation. Thus, enhancer-gene pairs identified by expression assays are not necessarily concordant with the sets of promoter-enhancer loops identified by chromatin conformation assays. Future comparisons between indirect (that is, correlative) mapping of enhancer-gene interactions of the type we describe here, and direct physical mapping of enhancer-gene interactions, will be important to help resolve the different mechanisms involved. However, in addition to the genome-wide confirmation by ChIA-PET, we note that at least two of the putative enhancer-gene pairs from our analysis have been studied in functional models confirming our results. The putative CCND1 enhancer we identified in breast tumors (Fig. 6c) was shown to directly regulate the CCND1 gene in response to estradiol in breast cancer cells [56], and a putative MYC enhancer we identified in colon tumors (Additional file 14) was shown to be directly responsible for MYC expression in colon cancer cells [57], and in vivo in a mouse model of colorectal cancer [58].
We realize that the relationship between TF binding and DNA methylation can be complex [18]. For example, reduced DNA methylation in an enhancer region in a tumor cell relative to a normal cell could allow a TF to bind and regulate a target gene in a tumor-specific manner without changes in the expression level of that TF in the tumor. However, it is likely that increased levels of a TF in a tumor can result in higher binding at a partially methylated enhancer, directly leading to loss of DNA methylation [17]. Based on this second mechanism, we have attempted to identify TFs that regulate the target genes of enhancers that are hypomethylated in tumors. First, we identified a list of site-specific TF binding motifs that are enriched within the enhancers linked to putative target genes. Then, by examining the expression patterns of each of the TF family members expected to bind to these motifs, we have predicted the TF that regulates specific sets of genes in the different cancer types (Fig. 8). For example, in bladder cancer (BLCA) we have provided a list of 65, 208, and 65 genes that may be regulated by POU3F1, FOXA1, or CEBPA, respectively, by binding to a specific hypomethylated enhancer. In all, utilizing enhancer methylation patterns, expression of putative target genes, motif enrichment, and expression of TF family members that bind to the motif, we have derived a list of 4,280 enhancer-TF-putative target gene linkages.
Some of the cancer type-specific TF networks we show in Fig. 8 are already known to have a functional role in the same tumor type, such as PU.1 in AML [59] and TCF7L2 in colorectal cancer [28, 60–63]. Two of the four TFs we identified in squamous cell lung cancer (LUSC), TP63 and SOX2, are oncogenes that are overexpressed in LUSC through genomic amplification [64, 65]. Recently, SOX2 and TP63 were shown to interact functionally and co-localize to a large number of genomic binding sites in squamous cell lung cancer [66]. In a number of cases, incorporating TF expression data allowed us to resolve between different members of the same family that would be indistinguishable by binding motif alone. For instance, FOXA1 clearly appears to be responsible for hypomethylation of FOX-containing enhancers in breast (BRCA) and bladder (BLCA) cancers, while FOXA2 appears to be responsible in endometrial cancer (UCEC). Other TF networks we identified, such as RUNX1/2 and its association with poor outcome in kidney cancer, have never been reported and will form the basis for future studies.
The method we describe herein is based on detecting methylation and expression differences between samples of the same tumor type, and is therefore aimed at identifying changes that co-occur within particular subsets of cases. For instance, we found that GATA-containing enhancer hypomethylation occurred primarily in the subset of breast cancer cases belonging to the Luminal subtype, which also had high expression of the GATA3 gene (Fig. 6b, c). While GATA3 is a well-studied case, our method can be applied to identify, understand, and find biomarkers for novel molecular subtypes. Understanding the genome-wide transcriptional consequences of molecular subtypes will be particularly relevant for those that are defined by genetic mutation of transcriptional regulators; indeed, transcription factors make up the largest functional class within the list of 127 cancer genes with so-called 'driver' mutations identified by TCGA [67]. A number of the altered transcription factor networks we identified using ELMER (Fig. 8) were also present within the 30 or so transcription factors included in this TCGA driver gene list. These TFs included FOXA1, FOXA2, GATA3, NFE2L2, and SOX17. Intriguingly, ELMER often identified a particular TF in the same cancer type or types where it is most frequently mutated. For instance, FOXA1 is most frequently mutated in breast and bladder cancer, and ELMER identified it in these specific cancers. Likewise, FOXA2 and SOX17 are primarily mutated in endometrial cancers, and ELMER identified network alterations specifically in this cancer type (UCEC). NFE2L2 is most frequently mutated in lung squamous cell carcinoma (LUSC), the same cancer type where ELMER detected NFE2L2 alterations. It will take additional work to understand the relationship between genetic mutations of TFs and epigenetic/transcriptomic changes in each of these different examples, but the identification of important cancer driver genes underscores the power of studying enhancers, which sit at the cis-regulatory interface between transcription factors, epigenetic modifiers, and downstream effector genes.
We also note that in some cases, transcription factors that are not expected to bind to the specific motif being analyzed were identified as being highly correlated with the degree of enhancer hypomethylation. In all, we identified 186 TFs frequently correlated with multiple motifs that do not correspond to the known motif for that TF family (Additional file 15). These correlations could be due to indirect effects caused by TF networks. For example, transcription factors regulated by GATA3 may show a similar correlation of expression with the hypomethylated probes in BRCA as does GATA3 itself. Another possible cause is suggested by the case of AP-1. Our results indicate that hypomethylation of AP-1-containing enhancers is a common feature of many or most cancer types (including nine of our 10 cancer types; see Fig. 6a); this confirms our earlier whole-genome observations in colorectal cancer [21]. While the AP-1 motif is classically described as a binding sequence for FOS/JUN dimers, it is found to be enriched in many ChIP-seq datasets, including those using antibodies that recognize factors other than FOS or JUN family members [68]. Phosphorylation of JUN can lead to histone acetylation at AP-1 motif-containing enhancers by inhibiting their association with the Mbd3 component of the NuRD complex [69]. This could in turn allow binding of other positive transcriptional regulators, activation of downstream genes, and a proliferative expression program. Because JUN activity is regulated post-transcriptionally, it is logical that our method (which is based on expression) would miss JUN itself, and instead identify the positive regulators binding these regions (which are often cell-type specific). For instance, the TF most strongly associated with the AP-1 motif in kidney cancer is RUNX1, while in breast cancer it is FOXA1, suggesting that many of the AP-1 motif-containing sites may require AP-1-dependent de-repression along with positive RUNX1/FOXA activation.
Also included in the list of 186 'commonly correlated' TFs are around 50 zinc finger domain-containing TFs (known as ZNFs). Although ZNFs are the most abundant class of human site-specific TFs, comprising around half of all site-specific TFs [70–72], few of them have been well studied. One of the commonly correlated factors was ZNF703, which correlated with 16 different motifs in the BRCA samples. Interestingly, high expression of ZNF703 has been shown to correlate with poor prognosis in patients with luminal B breast cancer [73]. We suggest that our analyses can point to a role for other ZNFs in tumorigenesis. In fact, 11 of the identified ZNFs showed associations with survival of the cancer in which they were identified (Additional file 16). For example, ZNF273 was correlated with four motifs in CRC and ZNF683 was correlated with nine motifs in KIRC; neither of these TFs has ever been associated with cancer. However, there is a strong correlation between high expression of ZNF273 and ZNF683 with poor survival rates in colorectal and kidney cancers, respectively. Most of the time, the 186 'commonly correlated' TFs showed cancer type-specific correlations. However, one factor (GRHL2) was identified in the top 1 % of all correlations for 31 different motifs spread among five of the 10 different cancer types studied. GRHL2 has been shown to directly bind and activate the hTERT promoter and has been suggested to be involved in telomerase activation during cellular immortalization [74]. Perhaps GRHL2 plays an important role in tumor development in many cancer types.
The results we describe here use motif analysis primarily to help identify the transcription factors responsible for enhancer hypomethylation. However, the most important output of this work may actually be the identification of enhancers in which mutations in individual transcription factor binding sites can be responsible for cancer risk or cancer progression. A number of studies have shown that population risk alleles for cancer reside preferentially in enhancer regions [31, 75–79], and a recent paper demonstrated that these could be identified in breast cancer by combining DNA methylation and chromatin conformation capture data to identify putative enhancers [25]. Somatic enhancer mutations are predicted to affect cancer progression, although these have not yet been identified, due to the overwhelming use of exome sequencing as a means to identify new cancer mutations. The recent availability of whole-genome sequencing of tumors has started to allow the identification of non-coding mutations, which have been shown to affect transcription factor binding sites [80–82]. Methods like ELMER, which can identify in vivo enhancer regions in tumors, will be essential for analyzing non-coding cancer mutations arising from WGS studies.
Although our study is not comprehensive, due to the nature of the DNA methylation platform used by TCGA (which covers only 15 % of known enhancers) and because enhancers have not yet been mapped in all normal and tumor cell types, our analyses have allowed us to identify a number of cancer type-specific transcriptional regulators, along with the cis-regulatory sequences mediating their effects on target genes. Large-scale identification of such cis-regulatory regions will be critical for understanding the effects of non-coding genetic polymorphisms on cancer risk and of non-coding somatic mutations on cancer progression [28, 59, 60]. Complete tumor methylation profiles using whole-genome bisulfite sequencing [21, 23, 83] are rapidly becoming available, and these will dramatically increase the power of the ELMER approach to reconstruct complete transcription factor networks and identify important cis-regulatory regions.
Availability of source code and R package
All source code is available as an R package, ELMER, downloadable from the main Bioconductor repository [84] or from our GitHub repository [85]. Vignettes illustrating the use of the functions are available as part of the Bioconductor package, along with an example replicating the results described in this paper using the ELMER function TCGA.pipe. A user manual and tutorial can be downloaded from the GitHub repository [86], and a full manual can be downloaded from [87].
DNA methylation and RNA-seq datasets
TCGA Level 3 DNA methylation data based on the Illumina Infinium HumanMethylation450 BeadChip platform were downloaded from [88]. Only the samples whitelisted by the TCGA Pan-Cancer Analysis Working Group were used in the study. The whitelist can be downloaded from Sage Bionetworks Synapse [89] with identifier syn1571603. The version numbers and final sample IDs for each cancer type are listed in Additional file 1. The DNA methylation level at each CpG is referred to as a beta (β) value, calculated as M/(M+U), where M represents the methylated allele intensity and U the unmethylated allele intensity, both normalized using the TCGA standard pipeline. Beta values range from 0 to 1, reflecting the fraction of methylated alleles at each CpG in each sample; beta values close to 0 indicate low levels of DNA methylation and beta values close to 1 indicate high levels of DNA methylation. Since no normal tissues are available for acute myeloid leukemia (LAML) and glioblastoma multiforme (GBM) in TCGA, we also downloaded Infinium HM450K DNA methylation data from publicly available sources as normal tissue controls for these two cancer types. A set of 58 sorted glial cell samples from GEO (accession number GSE41826) was used as normal reference samples for glioblastoma. A set of 11 sorted blood samples from GEO (accession number GSE49618) was used as normal reference samples for leukemia. These data were generated at the USC Epigenome Center and were processed through the same data analysis pipeline that was used to create the TCGA Level 3 data files (all TCGA data were also generated by the USC Epigenome Center). The sample IDs are also listed in Additional file 1.
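For clarity, the beta-value computation can be expressed in a couple of lines of R; the intensity values below are invented for illustration.

```r
# Sketch: beta = M / (M + U), per CpG, from allele intensities (invented data).
M <- c(12000,  300, 6100)   # methylated allele intensities
U <- c(  800, 9500, 5900)   # unmethylated allele intensities
beta <- M / (M + U)
round(beta, 2)   # near 1 = highly methylated, near 0 = unmethylated
```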
TCGA Level 3 RNA-seq data were downloaded from [88]. The version number of each package downloaded is listed in Additional file 1. TCGA uses gene-level expression values, meaning any alternative isoforms are included in a single normalized RSEM expression value. TCGA data production and analysis pipelines are described elsewhere, but a brief description follows: all data were generated on the Illumina HiSeq platform, with the exception of UCEC, which was generated on the Illumina GAII platform. Within each cancer type, data were mapped with MapSplice and quantitated with RSEM (RNA-seq by Expectation Maximization). RSEM outputs expression values that are normalized across samples, so that the third quartile for each sample equals 1,000. Entrez gene IDs were used for mapping to genomic locations using GenomicRanges [90]. The final RNA-seq sample IDs used in our analyses are listed in Additional file 1.
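The upper-quartile scaling described above can be sketched as follows; this is a simplified illustration of the normalization (on simulated counts), not the actual TCGA/RSEM pipeline code.

```r
# Sketch: scale each sample so its third-quartile expression value equals 1,000.
set.seed(1)
expr <- matrix(rpois(20, lambda = 50), nrow = 5,
               dimnames = list(paste0("gene", 1:5), paste0("sample", 1:4)))

normalize.uq <- function(counts, target = 1000) {
  q3 <- apply(counts, 2, quantile, probs = 0.75)   # per-sample third quartile
  sweep(counts, 2, q3, "/") * target
}

expr.norm <- normalize.uq(expr)
apply(expr.norm, 2, quantile, probs = 0.75)        # each equals 1,000
```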
Selecting enhancer probes
Probes overlapping SNPs are removed as part of the standard TCGA Level 3 pipeline. Probes located less than 2 kb from an annotated transcription start site in GENCODE v15 were filtered out to remove promoter regions from our analysis. ENCODE/REMC ChromHMM data were downloaded from [91] and any HM450 probes falling within the genomic regions annotated as EnhG1, EnhG2, EnhA1, or EnhA2 were selected. FANTOM5 data were downloaded from [92] and any HM450 probes falling within regions annotated as eRNA were selected. This resulted in 102,518 enhancer probes, which are listed in Additional file 2. This functionality is implemented in the get.feature.probe function of the ELMER Bioconductor package.
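The probe-selection logic can be sketched with GenomicRanges as below; the coordinates are placeholders, and in practice this step is wrapped by the get.feature.probe function.

```r
# Sketch: keep distal probes (> 2 kb from any TSS) that overlap an enhancer.
# All ranges below are invented placeholders.
suppressPackageStartupMessages(library(GenomicRanges))

probes    <- GRanges("chr1", IRanges(c(5000, 52000, 120000), width = 2))
tss       <- GRanges("chr1", IRanges(c(4000, 200000), width = 1))
enhancers <- GRanges("chr1", IRanges(c(51000, 119500), width = 3000))

# 1. Distal filter: drop probes within 2 kb of an annotated TSS.
near.tss <- overlapsAny(probes, tss + 2000)   # 'tss + 2000' adds 2 kb flanks
distal   <- probes[!near.tss]

# 2. Enhancer filter: keep distal probes overlapping an enhancer annotation.
enhancer.probes <- subsetByOverlaps(distal, enhancers)
enhancer.probes
```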
Identifying enhancer probes with cancer-specific DNA methylation changes
Each of the 10 cancer types was processed independently to identify cancer-specific DNA methylation changes. For each enhancer probe, we first ranked tumor samples and normal samples (within the cancer type) by their DNA methylation beta values. To identify hypomethylated probes, we compared the lower normal quintile (the 20 % of normal samples with the lowest methylation) to the lower tumor quintile (the 20 % of tumor samples with the lowest methylation), using an unpaired one-tailed t-test. Only the lower quintiles were used because we did not expect all cases to be from a single molecular subtype, and we sought to identify methylation changes within cases from the same molecular subtype. Twenty percent (that is, a quintile) was picked as a cutoff to include enough samples to yield t-test P values that could overcome multiple hypothesis correction, yet remain low enough to capture changes in individual molecular subtypes occurring in 20 % or more of the cases. This number can be set arbitrarily as an input to the get.diff.meth function in the ELMER package, and should be tuned based on sample sizes in individual studies. The one-tailed t-test was used to rule out the null hypothesis μ_tumor ≥ μ_normal, where μ_tumor is the mean methylation within the lowest tumor quintile and μ_normal is the mean within the lowest normal quintile. Raw P values were adjusted for multiple hypothesis testing using the Benjamini-Hochberg method, and probes were selected when they had an adjusted P value less than 0.01. For additional stringency, probes were only selected if the methylation difference |Δ| = |μ_normal − μ_tumor| was greater than 0.3. This technique is illustrated in Fig. 1b, and carried out in the get.diff.meth function of the ELMER package. The same method was used to identify hypermethylated probes, except that we used the upper tumor quintile and upper normal quintile, and chose the opposite tail in the t-test. The full set of hypermethylated and hypomethylated probes we identified is provided in Additional file 3, and can be replicated using the TCGA.pipe vignette in the ELMER package.
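A minimal single-probe sketch of this test follows, with simulated beta values in which a hypomethylated subtype is present in a quarter of the tumors; across many probes, the raw P values would additionally be BH-adjusted as described above.

```r
# Sketch: lower-quintile comparison for calling one hypomethylated probe.
set.seed(42)
betas.normal <- rbeta(40, 8, 2)            # mostly methylated normals
betas.tumor  <- c(rbeta(30, 8, 2),         # most tumors unchanged...
                  rbeta(10, 1, 8))         # ...plus a hypomethylated subtype

lower.quintile <- function(x) sort(x)[seq_len(ceiling(0.2 * length(x)))]
u.n <- lower.quintile(betas.normal)
u.t <- lower.quintile(betas.tumor)

# One-tailed unpaired t-test; null hypothesis: mu_tumor >= mu_normal.
tt    <- t.test(u.t, u.n, alternative = "less")
delta <- mean(u.n) - mean(u.t)
# Probe is called hypomethylated if (BH-adjusted) P < 0.01 and delta > 0.3.
c(p.value = tt$p.value, delta = delta, hypo = tt$p.value < 0.01 & delta > 0.3)
```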
Linking enhancer probes with methylation changes to target genes with expression changes
For additional stringency and to avoid correlations due to non-cancer contamination, we selected only those enhancer probes that had differential methylation as defined above, and for which at least 5 % of all samples (combining tumor and normal) had beta values >0.3. Then, for each of these differentially methylated enhancer probes, the closest 10 upstream genes and the closest 10 downstream genes were tested for correlation between methylation of the probe and expression of the gene. To select these genes, the probe-gene distance was defined as the distance from the probe to a transcription start site specified by the TCGA RNA-seq Level 3 data files. We used the Level 3 TCGA RNA-seq data files; these represent expression at the gene level, and merge any alternate transcript isoforms into a single expression value for each gene. Thus, exactly 20 statistical tests were performed for each probe, as follows. For each probe-gene pair, the samples (all tumors and normals within a particular cancer type) were divided into two groups: the M group, which consisted of the upper methylation quintile (the 20 % of samples with the highest methylation at the enhancer probe), and the U group, which consisted of the lowest methylation quintile (the 20 % of samples with the lowest methylation). The 20 % cutoff is a configurable parameter in the get.pair function of ELMER. We used 20 % as a balance, which would allow us to identify changes in a molecular subtype making up a minority (that is, 20 %) of cases, while also yielding enough statistical power to make strong predictions. For each candidate probe-gene pair, the Mann-Whitney U test was used to test the null hypothesis that overall gene expression in group M was greater than or equal to that in group U. This non-parametric test was used in order to minimize the effects of expression outliers, which can occur across a very wide dynamic range. For each probe-gene pair tested, the raw P value (Pr) was corrected for multiple testing using a permutation approach, as follows (implemented in the get.permu function of the ELMER package). The gene in the pair was held constant, and 10,000 random methylation probes were used to perform the same one-tailed U test, generating a set of 10,000 permutation P values (Pp). We chose the 10,000 random probes only from among those that were 'distal' (greater than 2 kb from an annotated transcription start site), in order to make these null-model probes qualitatively similar to the probe being tested. We only used non-enhancer probes, as using enhancer probes would introduce large numbers of co-regulated enhancers. An empirical P value (Pe) was calculated using the following formula (which introduces a pseudo-count of 1):
$$ P_e = \frac{\operatorname{num}\left(P_p \le P_r\right) + 1}{10001} $$
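A condensed sketch of this probe-gene test and its permutation-based empirical P value is shown below; the data are simulated, and 100 random probes stand in for the 10,000 used in the actual analysis.

```r
# Sketch: U test for one probe-gene pair plus an empirical P value.
set.seed(7)
n    <- 200
meth <- runif(n)                          # beta values for the probe
expr <- 10 - 8 * meth + rnorm(n)          # inversely correlated target gene

pair.p <- function(meth, expr, frac = 0.2) {
  k <- ceiling(frac * length(meth))
  o <- order(meth)
  U <- expr[head(o, k)]                   # least methylated 20 % of samples
  M <- expr[tail(o, k)]                   # most methylated 20 % of samples
  # Null: expression in M >= expression in U; reject for inverse correlation.
  wilcox.test(M, U, alternative = "less")$p.value
}

p.raw  <- pair.p(meth, expr)
p.null <- replicate(100, pair.p(runif(n), expr))  # stand-ins for random probes
p.emp  <- (sum(p.null <= p.raw) + 1) / (100 + 1)  # pseudo-count of 1
c(raw = p.raw, empirical = p.emp)
```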
ChIA-PET analysis
MCF7 ChIA-PET linkage pairs were taken from a previous publication [24]. The random pairs were generated by randomly selecting the same number of probes from the set of distal enhancer probes, and pairing each with one or more of the 20 adjacent genes; the number of links made for each random probe was identical to that of the corresponding 'true' probe. Thus, the random linkage set has both the same number of probes and the same number of linked genes as the true set. One hundred such random datasets were generated to arrive at a 95 % CI (+/- 1.96 * SD).
Gene Ontology (GO) enrichment analysis
Genes associated with hypo- or hypermethylated enhancer probes in more than one cancer type were selected for GO analysis. GO analyses were performed using the R package 'topGO' [93]. The classic Fisher test was used to generate enrichment P values. To select the GO terms that pass a significance cutoff, P values were adjusted using the Benjamini-Hochberg method; only those GO terms with an adjusted P value <0.01 and a fold change >1.5 are shown in Fig. 5.
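A minimal topGO sketch in the spirit of this analysis is given below; the gene universe and 'significant' genes are randomly drawn Entrez IDs rather than our actual linked genes, and the final fold-change filter is noted but not implemented.

```r
# Sketch: GO enrichment with topGO (randomly drawn Entrez IDs as placeholders).
suppressPackageStartupMessages({ library(topGO); library(org.Hs.eg.db) })
set.seed(2)

universe  <- keys(org.Hs.eg.db, keytype = "ENTREZID")[1:2000]
sig.genes <- sample(universe, 100)       # stand-in for multi-cancer linked genes

gene.list <- factor(as.integer(universe %in% sig.genes))
names(gene.list) <- universe

GOdata <- new("topGOdata", ontology = "BP", allGenes = gene.list,
              annot = annFUN.org, mapping = "org.Hs.eg.db", ID = "entrez")

res <- runTest(GOdata, algorithm = "classic", statistic = "fisher")
tab <- GenTable(GOdata, classicFisher = res, topNodes = 20, numChar = 60)
tab$padj <- p.adjust(score(res)[tab$GO.ID], method = "BH")
subset(tab, padj < 0.01)   # the paper additionally requires fold change > 1.5
```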
Motif analyses
We used FIMO [94] with a P value <1e-4 to scan a +/- 100 bp region around each probe, using Factorbook motif position weight matrices (PWMs) [39, 95] and JASPAR-Core human motif PWMs generated from the R package MotifDb [96]. For each probe set tested (that is, the list of gene-linked hypomethylated probes in a given cancer type), a motif enrichment OR and a 95 % CI were calculated using the following formulas:
$$ \begin{aligned} p &= \frac{a}{a+b} \\ P &= \frac{c}{c+d} \\ \mathrm{OR} &= \frac{p/(1-p)}{P/(1-P)} \\ \mathrm{SD} &= \sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}} \\ \text{lower bound of the 95\,\% CI} &= \exp\left(\ln(\mathrm{OR}) - 1.96\cdot\mathrm{SD}\right) \end{aligned} $$
where a is the number of probes within the selected probe set that contain one or more motif occurrences; b is the number of probes within the selected probe set that do not contain a motif occurrence; and c and d are the corresponding counts within the entire enhancer probe set. A probe set was considered significantly enriched for a particular motif if the lower bound of the 95 % CI of the OR was greater than 1.1 and the motif occurred at least 10 times in the probe set. As described in the text, ORs were also used for ranking candidate motifs. This analysis is implemented in the get.enriched.motif function of the ELMER package.
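These formulas translate directly into a few lines of R; the counts below are invented for illustration.

```r
# Sketch: motif enrichment odds ratio with the lower bound of its 95 % CI.
motif.enrichment <- function(a, b, c, d) {
  p  <- a / (a + b)            # motif frequency in the selected probe set
  P  <- c / (c + d)            # motif frequency in the full enhancer probe set
  or <- (p / (1 - p)) / (P / (1 - P))
  sd <- sqrt(1/a + 1/b + 1/c + 1/d)
  c(OR = or, lowerCI = exp(log(or) - 1.96 * sd))
}

# e.g. 120 of 800 selected probes carry the motif vs. 4,000 of 100,000 overall;
# the motif counts as enriched if lowerCI > 1.1 and it occurs >= 10 times.
motif.enrichment(a = 120, b = 680, c = 4000, d = 96000)
```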
Associating TF expression with TF binding motif methylation
For each motif considered to be enriched within a particular probe set, we compared the average DNA methylation at all distal enhancer probes within +/- 100 bp of a motif occurrence to the expression of 1,777 human TFs ([97], with further refinements; see Additional file 17). A statistical test was performed for each motif-TF pair, as follows. The samples (all tumor and normal samples within a particular cancer type) were divided into two groups: the M group, which consisted of the 20 % of samples with the highest average methylation at all motif-adjacent probes, and the U group, which consisted of the 20 % of samples with the lowest methylation. The 20th-percentile cutoff is a parameter of the get.TFs function of the ELMER package, and was set to allow for identification of molecular subtypes present in 20 % of cases. For each candidate motif-TF pair, the Mann-Whitney U test was used to test the null hypothesis that overall gene expression in group M was greater than or equal to that in group U. This non-parametric test was used in order to minimize the effects of expression outliers, which can occur across a very wide dynamic range. For each motif tested, this resulted in a raw P value (Pr) for each of the 1,777 TFs. All TFs were ranked by -log10(Pr), and those falling within the top 5 % of this ranking were considered candidate upstream regulators. The best upstream TFs for each of these cases were automatically extracted as high-value candidates and are presented in Fig. 8. These high-value candidates are also shown in detail in Additional files 9 and 10.
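The sketch below simulates this ranking: one TF ('TF1') is constructed to be inversely correlated with the motif-level methylation, and the U test is applied to the extreme quintiles for every TF; all expression and methylation values are simulated.

```r
# Sketch: ranking TFs by inverse correlation of expression with average
# methylation at motif-adjacent probes (simulated data).
set.seed(3)
n.samples <- 300; n.tfs <- 500
avg.meth <- runif(n.samples)                      # mean beta at motif probes
tf.expr  <- matrix(rnorm(n.samples * n.tfs), n.samples, n.tfs,
                   dimnames = list(NULL, paste0("TF", seq_len(n.tfs))))
tf.expr[, "TF1"] <- 10 - 6 * avg.meth + rnorm(n.samples)  # planted regulator

k <- ceiling(0.2 * n.samples)
o <- order(avg.meth)
U <- head(o, k); M <- tail(o, k)                  # extreme methylation quintiles

# Null per TF: expression in group M >= expression in group U.
p.raw <- apply(tf.expr, 2, function(e)
  wilcox.test(e[M], e[U], alternative = "less")$p.value)

ranked     <- sort(p.raw)
candidates <- names(ranked)[seq_len(ceiling(0.05 * n.tfs))]  # top 5 %
head(names(ranked))   # 'TF1' should rank first
```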
Survival analyses
A Kaplan-Meier survival analysis was used to estimate the association of TF expression with patient survival. For each selected TF and cancer type combination, tumor samples with the highest (top 30 %) and lowest (bottom 30 %) transcription factor expression were compared using a log-rank test. Overall survival was calculated from the date of initial diagnosis to the date of disease-specific death (for patients whose vital status was recorded as dead) or to the date of last follow-up (for patients who were alive).
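A compact sketch of this survival comparison, using the survival package on simulated times, events, and expression values, is shown below; it is an illustration of the stratified log-rank comparison rather than the exact analysis code.

```r
# Sketch: Kaplan-Meier comparison of high vs. low TF expression (simulated).
library(survival)
set.seed(11)
n     <- 150
expr  <- rnorm(n)                                 # TF expression per patient
time  <- rexp(n, rate = 0.02 * exp(0.5 * expr))   # worse outcome at high expr
event <- rbinom(n, 1, 0.7)                        # 1 = death, 0 = censored

hi   <- expr >= quantile(expr, 0.7)               # top 30 % of expression
lo   <- expr <= quantile(expr, 0.3)               # bottom 30 %
keep <- hi | lo
grp  <- factor(ifelse(hi[keep], "high", "low"))

fit <- survfit(Surv(time[keep], event[keep]) ~ grp)
survdiff(Surv(time[keep], event[keep]) ~ grp)     # log-rank test
plot(fit, col = c("red", "blue"), xlab = "Time", ylab = "Survival fraction")
```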
The TCGA samples can be downloaded at https://tcga-data.nci.nih.gov/tcgafiles/ftp_auth/distro_ftpusers/anonymous/tumor/. The whitelist from the Pan-Cancer group is available on Synapse (https://www.synapse.org/) as syn1571603. The enhancer genomic coordinates can be downloaded at http://egg2.wustl.edu/roadmap/data/byFileType/chromhmmSegmentations/ChmmModels/coreMarks/jointModel/final/ and http://enhancer.binf.ku.dk/Welcome.html.
Bernstein BE, Birney E, Dunham I, Green ED, Gunter C, Snyder M. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012;489:57–74.
Roadmap Epigenomics Consortium. Integrative analysis of 111 reference human epigenomes. Nature. 2015;518:317–30.
Henikoff S. ENCODE and our very busy genome. Nat Genet. 2007;39:817–8.
Filion GJ, Van Bemmel JG, Braunschweig U, Talhout W, Kind J, Ward LD, et al. Systematic protein location mapping reveals five principal chromatin types in Drosophila cells. Cell. 2010;143:212–24.
Ernst J, Kheradpour P, Mikkelsen TS, Shoresh N, Ward LD, Epstein CB, et al. Mapping and analysis of chromatin state dynamics in nine human cell types. Nature. 2011;473:43–9.
Hoffman MM, Buske OJ, Wang J, Weng Z, Bilmes J, Noble WS. Unsupervised pattern discovery in human chromatin structure through genomic segmentation. Nat Methods. 2012;9:473–6.
Creyghton MP, Cheng AW, Welstead GG, Kooistra T, Carey BW, Steine EJ, et al. Histone H3K27ac separates active from poised enhancers and predicts developmental state. Proc Natl Acad Sci U S A. 2010;107:21931–6.
Rada-Iglesias A, Bajpai R, Swigut T, Brugmann SA, Flynn RA, Wysocka J. A unique chromatin signature uncovers early developmental enhancers in humans. Nature. 2011;470:279–83.
Bernstein BE, Stamatoyannopoulos JA, Costello JF, Ren B, Milosavljevic A, Meissner A, et al. The NIH Roadmap Epigenomics Mapping Consortium. Nat Biotechnol. 2010;28:1045–8.
Beck S, Bernstein BE, Campbell RM, Costello JF, Dhanak D, Ecker JR, et al. A blueprint for an international cancer epigenome consortium. A report from the AACR Cancer Epigenome Task Force. Cancer Res. 2012;72:6319–24.
He B, Chen C, Teng L, Tan K. Global view of enhancer-promoter interactome in human cells. Proc Natl Acad Sci U S A. 2014;111:E2191–9.
Sheffield NC, Thurman RE, Song L, Safi A, Stamatoyannopoulos JA, Lenhard B, et al. Patterns of regulatory activity across diverse human cell types predict tissue identity, transcription factor binding, and long-range interactions. Genome Res. 2013;23:777–88.
Pastor WA, Stroud H, Nee K, Liu W, Pezic D, Manakov S, et al. MORC1 represses transposable elements in the mouse male germline. Nat Commun. 2014;5:5795.
Bergman Y, Cedar H. DNA methylation dynamics in health and disease. Nat Struct Mol Biol. 2013;20:274–81.
Thomassin H, Flavin M, Espinás ML, Grange T. Glucocorticoid-induced DNA demethylation and gene memory during development. EMBO J. 2001;20:1974–83.
Lister R, Pelizzola M, Dowen RH, Hawkins RD, Hon G, Tonti-Filippini J, et al. Human DNA methylomes at base resolution show widespread epigenomic differences. Nature. 2009;462:315–22.
Stadler MB, Murr R, Burger L, Ivanek R, Lienert F, Schöler A, et al. DNA-binding factors shape the mouse methylome at distal regulatory regions. Nature. 2011;480:490–5.
Blattler A, Farnham PJ. Cross-talk between site-specific transcription factors and DNA methylation states. J Biol Chem. 2013;288:34287–94.
Ooi L, Wood IC. Chromatin crosstalk in development and disease: lessons from REST. Nat Rev Genet. 2007;8:544–54.
Gebhard C, Benner C, Ehrich M, Schwarzfischer L, Schilling E, Klug M, et al. General transcription factor binding at CpG islands in normal cells correlates with resistance to de novo DNA methylation in cancer cells. Cancer Res. 2010;70:1398–407.
Berman BP, Weisenberger DJ, Aman JF, Hinoue T, Ramjan Z, Liu Y, et al. Regions of focal DNA hypermethylation and long-range hypomethylation in colorectal cancer coincide with nuclear lamina-associated domains. Nat Genet. 2012;44:40–6.
Aran D, Sabato S, Hellman A. DNA methylation of distal regulatory sites characterizes dysregulation of cancer genes. Genome Biol. 2013;14:R21.
Hovestadt V, Jones DT, Picelli S, Wang W, Kool M, Northcott PA, et al. Decoding the regulatory landscape of medulloblastoma using DNA methylation sequencing. Nature. 2014;510:537–41.
Li G, Ruan X, Auerbach RK, Sandhu KS, Zheng M, Wang P, et al. Extensive promoter-centered chromatin interactions provide a topological basis for transcription regulation. Cell. 2012;148:84–98.
Aran D, Hellman A. DNA methylation of transcriptional enhancers and cancer predisposition. Cell. 2013;154:11–3.
Wiench M, John S, Baek S, Johnson TA, Sung MH, Escobar T, et al. DNA methylation status predicts cell type-specific enhancer activity. EMBO J. 2011;30:3028–39.
Weinstein JN, Collisson EA, Mills GB, Shaw KR, Ozenberger BA, Ellrott K, et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013;45:1113–20.
The Cancer Genome Atlas. Comprehensive molecular characterization of human colon and rectal cancer. Nature. 2012;487:330–7.
GENCODE gene annotations. [http://www.gencodegenes.org/releases/15.html]
Ernst J, Kellis M. ChromHMM: automating chromatin-state discovery and characterization. Nat Methods. 2012;9:215–6.
ENCODE Project Consortium. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012;489:57–74.
Andersson R, Gebhard C, Miguel-Escalada I, Hoof I, Bornholdt J, Boyd M, et al. An atlas of active enhancers across human cell types and tissues. Nature. 2014;507:455–61.
Kwasnieski JC, Fiore C, Chaudhari HG, Cohen BA. High-throughput functional testing of ENCODE segmentation predictions. Genome Res. 2014;24:1595–602.
Blow MJ, McCulley DJ, Li Z, Zhang T, Akiyama JA, Holt A, et al. ChIP-Seq identification of weakly conserved heart enhancers. Nat Genet. 2010;42:806–10.
Sanyal A, Lajoie BR, Jain G, Dekker J. The long-range interaction landscape of gene promoters. Nature. 2012;489:109–13.
Jin F, Li Y, Dixon JR, Selvaraj S, Ye Z, Lee AY, et al. A high-resolution map of the three-dimensional chromatin interactome in human cells. Nature. 2013;503:290–4.
Rao SS, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, et al. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159:1665–80.
Mathelier A, Zhao X, Zhang AW, Parcy F, Worsley-Hunt R, Arenillas DJ, et al. JASPAR 2014: an extensively expanded and updated open-access database of transcription factor binding profiles. Nucleic Acids Res. 2014;42:D142–7.
Wang J, Zhuang J, Iyer S, Lin XY, Greven MC, Kim BH, et al. Factorbook.org: a Wiki-based database for transcription factor-binding data generated by the ENCODE consortium. Nucleic Acids Res. 2013;41:D171–6.
Fujiwara T, O'Geen H, Keles S, Blahnik K, Linnemann AK, Kang YA, et al. Discovering hematopoietic mechanisms through genome-wide analysis of GATA factor chromatin occupancy. Mol Cell. 2009;36:667–81.
Xu X, Bieda M, Jin VX, Rabinovich A, Oberley MJ, Green R, et al. A comprehensive ChIP-chip analysis of E2F1, E2F4, and E2F6 in normal and tumor cells reveals interchangeable roles of E2F family members. Genome Res. 2007;17:1550–61.
Shakya A, Callister C, Goren A, Yosef N, Garg N, Khoddami V, et al. Pluripotency transcription factor oct4 mediates stepwise nucleosome demethylation and depletion. Mol Cell Biol. 2015;35:1014–25.
Lee DS, Shin JY, Tonge PD, Puri MC, Lee S, Park H, et al. An epigenomic roadmap to induced pluripotency reveals DNA methylation as a reprogramming modulator. Nat Commun. 2014;5:5619.
Bresnick EH, Lee HY, Fujiwara T, Johnson KD, Keles S. GATA switches as developmental drivers. J Biol Chem. 2010;285:31087–93.
Brewer A, Pizzey J. GATA factors in vertebrate heart development and disease. Expert Rev Mol Med. 2006;8:1–20.
Chou J, Provot S, Werb Z. GATA3 in development and cancer differentiation: cells GATA have it! J Cell Physiol. 2010;222:42–9.
Patient RK, McGhee JD. The GATA family (vertebrates and invertebrates). Curr Opin Genet Dev. 2002;12:416–22.
Kouros-Mehr H, Bechis SK, Slorach EM, Littlepage LE, Egeblad M, Ewald AJ, et al. GATA-3 links tumor differentiation and dissemination in a luminal breast cancer model. Cancer Cell. 2008;13:141–52.
Kouros-Mehr H, Kim JW, Bechis SK, Werb Z. GATA-3 and the regulation of the mammary luminal cell fate. Curr Opin Cell Biol. 2008;20:164–70.
Yan W, Cao QJ, Arenas RB, Bentley B, Shao R. GATA3 inhibits breast cancer metastasis through the reversal of epithelial-mesenchymal transition. J Biol Chem. 2010;285:14042–51.
Heintzman ND, Stuart RK, Hon G, Fu Y, Ching CW, Hawkins RD, et al. Distinct predictive chromatin signatures of transcriptional promoters and enhancers in the human genome. Nature Genetics. 2007;39:311–8.
Blattler A, Yao L, Witt H, Guo Y, Nicolet CM, Berman BP, et al. Global loss of DNA methylation uncovers intronic enhancers in genes showing expression changes. Genome Biol. 2014;15:469.
Maunakea AK, Nagarajan RP, Bilenky M, Ballinger TJ, D'Souza C, Fouse SD, et al. Conserved role of intragenic DNA methylation in regulating alternative promoters. Nature. 2010;466:253–7.
Kowalczyk MS, Hughes JR, Garrick D, Lynch MD, Sharpe JA, Sloane-Stanley JA, et al. Intragenic enhancers act as alternative promoters. Mol Cell. 2012;45:447–58.
Raviram R, Rocha PP, Bonneau R, Skok JA. Interpreting 4C-Seq data: how far can we go? Epigenomics. 2014;6:455–7.
Eeckhoute J, Carroll JS, Geistlinger TR, Torres-Arzayus MI, Brown M. A cell-type-specific transcriptional network required for estrogen regulation of cyclin D1 and cell cycle progression in breast cancer. Genes Dev. 2006;20:2513–26.
Yochum GS, Cleland R, Goodman RH. A genome-wide screen for beta-catenin binding sites identifies a downstream enhancer element that controls c-Myc gene expression. Mol Cell Biol. 2008;28:7368–79.
Konsavage Jr WM, Yochum GS. The myc 3' wnt-responsive element suppresses colonic tumorigenesis. Mol Cell Biol. 2014;34:1659–69.
Rosenbauer F, Wagner K, Kutok JL, Iwasaki H, Le Beau MM, Okuno Y, et al. Acute myeloid leukemia induced by graded reduction of a lineage-specific transcription factor, PU.1. Nat Genet. 2004;36:624–30.
Bass AJ, Lawrence MS, Brace LE, Ramos AH, Drier Y, Cibulskis K, et al. Genomic sequencing of colorectal adenocarcinomas identifies a recurrent VTI1A-TCF7L2 fusion. Nat Genet. 2011;43:964–8.
Frietze S, Wang R, Yao L, Tak YG, Ye Z, Gaddis M, et al. Cell type-specific binding patterns reveal that TCF7L2 can be tethered to the genome by association with GATA3. Genome Biol. 2012;13:R52.
Sur IK, Hallikas O, Vaharautio A, Yan J, Turunen M, Enge M, et al. Mice lacking a Myc enhancer that includes human SNP rs6983267 are resistant to intestinal tumors. Science. 2012;338:1360–3.
Pomerantz MM, Ahmadiyeh N, Jia L, Herman P, Verzi MP, Doddapaneni H, et al. The 8q24 cancer risk variant rs6983267 shows long-range interaction with MYC in colorectal cancer. Nat Genet. 2009;41:882–4.
The Cancer Genome Atlas. Comprehensive genomic characterization of squamous cell lung cancers. Nature. 2012;489:519–25.
Massion PP, Taflan PM, Jamshedur Rahman SM, Yildiz P, Shyr Y, Edgerton ME, et al. Significance of p63 amplification and overexpression in lung cancer development and prognosis. Cancer Res. 2003;63:7113–21.
Watanabe H, Ma Q, Peng S, Adelmant G, Swain D, Song W, et al. SOX2 and p63 colocalize at genetic loci in squamous cell carcinomas. J Clin Invest. 2014;124:1636–45.
Kandoth C, McLellan MD, Vandin F, Ye K, Niu B, Lu C, et al. Mutational landscape and significance across 12 major cancer types. Nature. 2013;502:333–9.
Worsley Hunt R, Wasserman WW. Non-targeted transcription factors motifs are a systemic component of ChIP-seq datasets. Genome Biol. 2014;15:412.
Aguilera C, Nakagawa K, Sancho R, Chakraborty A, Hendrich B, Behrens A. c-Jun N-terminal phosphorylation antagonises recruitment of the Mbd3/NuRD repressor complex. Nature. 2011;469:231–5.
Tupler R, Perini G, Green MR. Expressing the human genome. Nature. 2001;409:832–3.
Razin SV, Borunova VV, Maksimenko OG, Kantidze OL. Cys2His2 zinc finger protein family: classification, functions, and major members. Biochemistry (Mosc). 2012;77:217–26.
Vaquerizas JM, Kummerfeld SK, Teichmann SA, Luscombe NM. A census of human transcription factors: function, expression and evolution. Nat Rev Genet. 2009;10:252–63.
Reynisdottir I, Arason A, Einarsdottir BO, Gunnarsson H, Staaf J, Vallon-Christersson J, et al. High expression of ZNF703 independent of amplification indicates worse prognosis in patients with luminal B breast cancer. Cancer Med. 2013;2:437–46.
Kang X, Chen W, Kim RH, Kang MK, Park NH. Regulation of the hTERT promoter activity by MSH2, the hnRNPs K and D, and GRHL2 in human oral squamous cell carcinoma cells. Oncogene. 2009;28:565–74.
Schaub MA, Boyle AP, Kundaje A, Batzoglou S, Snyder M. Linking disease associations with regulatory information in the human genome. Genome Res. 2012;22:1748–59.
Maurano MT, Humbert R, Rynes E, Thurman RE, Haugen E, Wang H, et al. Systematic localization of common disease-associated variation in regulatory DNA. Science. 2012;337:1190–5.
Akhtar-Zaidi B, Cowper-Sal-lari R, Corradin O, Saiakhova A, Bartels CF, Balasubramanian D, et al. Epigenomic enhancer profiling defines a signature of colon cancer. Science. 2012;336:736–9.
Hardison RC. Genome-wide epigenetic data facilitate understanding of disease susceptibility association studies. J Biol Chem. 2012;287:30932–40.
Yao L, Tak YG, Berman BP, Farnham PJ. Functional annotation of colon cancer risk SNPs. Nat Commun. 2014;5:5114.
Fredriksson NJ, Ny L, Nilsson JA, Larsson E. Systematic analysis of noncoding somatic mutations and gene expression alterations across 14 tumor types. Nat Genet. 2014;46:1258–63.
Huang FW, Hodis E, Xu MJ, Kryukov GV, Chin L, Garraway LA. Highly recurrent TERT promoter mutations in human melanoma. Science. 2013;339:957–9.
Weinhold N, Jacobsen A, Schultz N, Sander C, Lee W. Genome-wide analysis of noncoding regulatory mutations in cancer. Nat Genet. 2014;46:1160–5.
Hansen KD, Timp W, Bravo HC, Sabunciyan S, Langmead B, McDonald OG, et al. Increased methylation variation in epigenetic domains across cancer types. Nat Genet. 2011;43:768–75.
BioConductor. [http://www.bioconductor.org/]
ELMER source code. [https://github.com/lijingya/ELMER.git]
ELMER usage vignette. [https://github.com/lijingya/ELMER/blob/master/vignettes/vignettes.pdf]
ELMER user manual. [https://github.com/lijingya/ELMER/blob/master/inst/doc/ELMER_manual.pdf]
TCGA data access. [https://tcga-data.nci.nih.gov/tcgafiles/ftp_auth/distro_ftpusers/anonymous/tumor/]
TCGA pan-can analysis (Synapse). [https://www.synapse.org/#]
Lawrence M, Huber W, Pages H, Aboyoun P, Carlson M, Gentleman R, et al. Software for computing and annotating genomic ranges. PLoS Comput Biol. 2013;9:e1003118.
Epigenomics Roadmap data access. [https://sites.google.com/site/epigenomeroadmapawg/project-updates/finalsignaltracksandalignmentfiles]
FANTOM enhancer annotations. [http://enhancer.binf.ku.dk/presets/]
Alexa A, Rahnenfuhrer J. topGO: Enrichment analysis for Gene Ontology. R package version 2180. 2010. [http://www.bioconductor.org/packages/release/bioc/html/topGO.html]
Grant CE, Bailey TL, Noble WS. FIMO: scanning for occurrences of a given motif. Bioinformatics. 2011;27:1017–8.
Wang J, Zhuang J, Iyer S, Lin X, Whitfield TW, Greven MC, et al. Sequence features and chromatin structure around the genomic regions bound by 119 human transcription factors. Genome Res. 2012;22:1798–812.
Shannon P. MotifDb: An annotated collection of Protein-DNA binding sequence motifs. Bioconductor. 2014, R package version 1.8.0. [http://www.bioconductor.org/packages/release/bioc/html/MotifDb.html]
Ravasi T, Suzuki H, Cannistraci CV, Katayama S, Bajic VB, Tan K, et al. An atlas of combinatorial transcriptional regulation in mouse and man. Cell. 2010;140:744–52.
We thank the ENCODE Project, the Roadmap Epigenome Mapping Consortium, and the FANTOM Consortium for the use of the genomic locations of enhancers [2, 9, 31, 32]. The results published here are largely based upon data generated by the TCGA Research Network (http://cancergenome.nih.gov/), and our use of these data is in accordance with the guidelines at: http://cancergenome.nih.gov/publications/publicationguidelines. We thank the TCGA Consortium members for use of these datasets. We thank Simon Coetzee and Toshinori Hinoue for suggestions and help with the ELMER BioConductor package. BPB and LY were supported in part by NCI grant 1U01CA184826, and BPB in part by 5R01HG006705. PJF and LY were supported in part by National Cancer Institute (NCI) grants 1U01ES017154, U54HG006996, and P30CA014089. HS and PWL were supported in part by the TCGA Consortium through NCI grant 1U24CA143882. High-performance computing support was provided by the USC High Performance Computing Center (HPCC).
Norris Comprehensive Cancer Center, Keck School of Medicine, University of Southern California, 1450 Biggy Street, NRT 6503, Los Angeles, CA, 90089-9601, USA
Lijing Yao, Peggy J Farnham & Benjamin P Berman
Center for Epigenetics, Van Andel Research Institute, Grand Rapids, MI, 49503, USA
Hui Shen & Peter W Laird
Bioinformatics and Computational Biology Research Center, Department of Biomedical Sciences, Cedars-Sinai Medical Center, AHSP Bldg., Suite A8111, Los Angeles, CA, 90048, USA
Benjamin P Berman
Lijing Yao
Hui Shen
Peter W Laird
Peggy J Farnham
Correspondence to Peggy J Farnham or Benjamin P Berman.
LY performed all analyses; BPB conceived of the project and participated in analysis and manuscript preparation; PJF participated in the enhancer and transcription factor analyses and drafted the manuscript. HS and PWL contributed key concepts to the analysis strategy used, and HS assisted with analyses. All authors read and approved the final manuscript.
TCGA DNA methylation and RNA-seq sample ID numbers. The data platform and archive version number are listed in the sheet named 'Version number'. The 'DNA methylation sample ID' sheet provides information concerning the TCGA sample ID, the tissue type (normal or tumor), and the cancer type for the DNA methylation datasets. The 'RNA-seq sample ID' sheet provides information concerning the TCGA sample ID, the tissue type (normal or tumor), and the cancer type for the RNA-seq datasets.
Distal enhancer probes on the HM450 array. The chromosomal location and the name of each of the 102,518 distal enhancer probes used in this study are indicated.
Hypo- and hypermethylated enhancer probes identified for each tumor type. Individual worksheets are provided that list the hypermethylated probes and hypomethylated probes identified for each specific cancer type.
Probe-gene pairs showing inverse correlations between methylation and expression. Individual worksheets are provided that list all of the significant probe-gene pairs for hypomethylated probes and hypermethylated probes in each cancer type. Pr represents the raw P value from the Mann-Whitney U test for each pair; Pe represents the empirical P value for each pair; the distance between the probe and the putative target gene is shown in the Distance column; the ranking based on the relative distance of the putative target gene among the 20 adjacent genes (10 on either side of the enhancer) is shown; and the cancer type (CT) is indicated. The 'P value for promoter methylation' column specifies the anti-correlation between methylation at the promoter itself and the expression level of the gene. This is calculated using the same Mann-Whitney statistic we use to evaluate enhancer-expression correlation, but we average beta values within the standard promoter methylation region, from -300 to +500 bp relative to the transcription start site (TSS). This region is consistently unmethylated in all active promoters based on whole-genome bisulfite sequencing (in cell lines [95] and primary TCGA tumors, manuscript in preparation). As with our enhancer method, we select for strong changes in methylation, filtering out any case where 95 % of samples have methylation less than 0.3; in the spreadsheet, these have a P value of 1.0. The worksheet 'tumor vs. normal expression' contains a table showing that the majority of target genes linked to hypermethylated enhancers have lower expression in tumors than in normal tissues, while the majority linked to hypomethylated enhancers have higher expression in tumors. The worksheet 'promoter methylation' shows the fraction of enhancer-linked genes in each cancer type that also have a significant correlation with methylation of the promoter; it is under 10 % of genes for all cancer types.
Quantitative summary of links, probes, and genes for each cancer type. (A) Shown are histograms representing the number of putative probe-gene pairs, the number of total probes in the set of paired probes, and the number of total genes in the set of paired probes for the set of hypomethylated (top) and hypermethylated (bottom) probe-gene pairs in each cancer type. For each plot, the number of probes identified in one or more tumor types is indicated by the colored bars. (B) Shown is a heatmap illustrating the similarity of probe-gene pairs, probes in the pairs, and genes in the pairs among the different cancer types. The color bar indicates the odds ratio (OR) for the similarity (overlap) between the indicated cancer types (a higher OR indicates a more significant similarity).
Rank of putative target gene according to distance in the enhancer-gene pairs for each cancer type. (A) Shown is the distribution for the ranking (by distance) of each putative target gene linked to an enhancer for enhancers that are significantly associated with more than one gene. (B) Shown is the distribution for the ranking (by distance) of each putative target gene linked to an enhancer for each cancer type. The left panel shows the pairs for which the enhancer is significantly associated with more than one gene and the right panel shows the pairs for which the enhancer is significantly associated with only one gene.
Summary of enriched motifs for enhancer-gene pairs with hypomethylated distal enhancers. On the worksheet entitled 'Summary', the fold enrichment of each indicated motif in a specific cancer type (CT) is shown. Shown in the Enhancer column is the number of paired enhancers (after clustering distal enhancer probes within 500 bp) containing the enriched motif (the percentage of total paired hypomethylated enhancers in that cancer type containing each motif is shown in parentheses). For each motif, also shown is the number of genes linked to the probes containing the motif and the number of probe-gene pairs. The worksheet entitled 'Detail' contains information for each individual probe linked to a putative target gene via a distal region containing an enriched motif. Pe represents the empirical P value for each pair; the distance between the probe and the putative target gene is shown in the Distance column; the ranking based on the relative distance of the putative target gene among the 20 adjacent genes (10 on either side of the enhancer) is shown; and the cancer type (CT) is indicated.
Motif enrichment heatmaps. (A) Shown are the heatmaps for motifs that are enriched in the sets of all hypomethylated probes (top panel) and all hypermethylated probes (bottom panel). (B) Shown are the heatmaps for motifs that are enriched in the sets of only those hypomethylated (top panel) or hypermethylated (bottom panel) probes that are linked to putative target genes.
Plots of association between all human TFs and DNA methylation at enriched motif sites. Shown are TF ranking plots based on the score (-log10(Pr)) of association between TF expression and DNA methylation of the motif in the cancer type in which the motifs are enriched. The dashed blue line indicates the boundary of the top 5 % association score. The top three associated TFs and the TF family members (dots in red) that are associated with that specific motif are labeled in the plot.
Additional file 10: Scatter plots for TF family members significantly associated with DNA methylation at distal enhancer regions having enriched motifs. Shown are scatter plots for average DNA methylation at probes having the indicated enriched motif (x axis, shown on the top of each set of panels) vs. the expression of the significantly correlated motif-relevant TF family members (y axis, shown on the right side of each panel). Each dot represents a different patient sample; red and green indicate the tumor and normal samples, respectively. Pairs that are within the top 5 % of TFs linked to a given motif are indicated with a number inside the cell. The number corresponds to the rank of the given TF relative to all 1,777 TFs (with '1' being the most strongly correlated).
Survival plots for TF family members significantly associated with DNA methylation at distal enhancer regions having enriched motifs. (A) The output of a Cox model regression analysis for the effects of expression of RUNX1 on survival within KIRC samples. The leukocyte methylation signature was calculated as in PMID 22120008, and staging information was taken from TCGA clinical data. The leukocyte methylation signature was included to rule out RUNX1 expression from contaminating leukocytes, which are the main source of non-cancer cells in KIRC samples. (B) Kaplan-Meier survival curves for TF family members significantly associated with DNA methylation at the distal enhancer regions with enriched motifs in the indicated cancer type. The survival data for patients having tumors with the highest (top 30 %) and lowest (bottom 30 %) transcription factor expression are shown; the Log Rank test P value between the high and low groups is indicated.
Proportion of intragenic vs. intergenic enhancers that regulate the nearest gene. Shown are bar graphs indicating the number of intergenic vs. intragenic enhancers, the number of each category that is associated with expression of the nearest gene, and the number of intragenic enhancers associated with expression of the nearest gene with that gene being the one in which the enhancer resides.
Overlapping analysis between putative probe-gene pairs in BRCA and interactions from ChIA-PET data from the MCF7 breast cancer cell line. A list of putative probe-gene pairs in BRCA that overlap with interactions from ChIA-PET data from the MCF7 cell line is provided. The bar graph shows the comparison of the number of probe-gene pairs identified within MCF7 ChIA-PET data using the putative pairs from BRCA vs. random pairs. The random pairs were generated by randomly selecting the same number of probes from the set of distal enhancer probes, and pairing each with one or more of the 20 adjacent genes; the number of links made for each random probe was identical to the corresponding 'true' probe. Thus, the random linkage set has both the same number of probes and the same number of linked genes as the true set. One hundred such random datasets were generated to arrive at a 95 % CI (+/-1.96* SD).
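For illustration, the random-pair null model described above might be sketched as follows (hypothetical inputs; count_chiapet_overlap stands in for the actual overlap computation and is not from the original work):

```python
import numpy as np

def random_overlap_ci(true_pairs, all_probes, count_chiapet_overlap, n_iter=100):
    """true_pairs: list of (probe, n_linked_genes) for the 'true' linkage set."""
    rng = np.random.default_rng(0)
    counts = []
    for _ in range(n_iter):
        rand_pairs = []
        for _probe, n_links in true_pairs:
            probe = rng.choice(all_probes)
            # link the random probe to n_links of its 20 adjacent genes,
            # matching the link count of the corresponding 'true' probe
            genes = rng.choice(20, size=n_links, replace=False)
            rand_pairs.extend((probe, g) for g in genes)
        counts.append(count_chiapet_overlap(rand_pairs))
    m, sd = float(np.mean(counts)), float(np.std(counts))
    return m - 1.96 * sd, m + 1.96 * sd   # 95 % CI = mean +/- 1.96*SD
```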
MYC 3' end enhancer regulates MYC expression in colorectal cancer tissue. (A) Shown is a scatter plot showing DNA methylation at probes located at the 3' end of the MYC gene vs. the expression of MYC RNA. Each dot represents a different patient sample; red and green indicate the tumor and normal samples, respectively. (B) Shown is the location of the MYC 3' enhancer and the ENCODE ChIP-seq histone and transcription factor tracks from the University of California, Santa Cruz genome browser. The green bar indicates the location of enhancer that has been previously identified to regulate MYC expression in the HCT116 colon cancer cell line [57, 58].
Transcription factors significantly associated with multiple different motifs. Each row represents individual transcription factors and each column represent different cancer types. The numbers in the table show the number of enriched motifs that the transcription factors associate with in each cancer type; the transcription factor must be in the top 1 % of all ranked TFs for that specific motif in that specific cancer type to be listed on the table.
Survival analysis of commonly identified ZNFs. (A) Shown is a table listing the subset of ZNFs (the entire list can be found in Additional file 17) which were identified in the top 1 % of ranked TFs, which were significantly associated with multiple different motifs in a specific cancer type (the number of motifs with which the TF was associated is listed in parentheses), and whose expression level significantly correlates with patient survival. The direction of correlation is labeled in the column labeled 'Survival' (red and green represent high expression correlated with worse or better survival, respectively) and the Log Rank test P value between the high and low expression groups is provided in the column labeled 'logRankP'. (B) Shown are example Kaplan-Meier survival curves for two ZNFs. The survival data for patients having tumors with the highest (top 30 %) and lowest (bottom 30 %) transcription factor expression are shown; the Log Rank test P value between the high and low groups is indicated.
List of the TFs used in this study. Shown is the list of TFs used to compare expression analysis of all TFs to motif methylation in the different cancer types, taken from [97].
Yao, L., Shen, H., Laird, P.W. et al. Inferring regulatory element landscapes and transcription factor networks from cancer methylomes. Genome Biol 16, 105 (2015). https://doi.org/10.1186/s13059-015-0668-3
Enhancer Region
Putative Target Gene
Squamous Cell Lung Cancer
Transcription Factor Network
Putative Enhancer
Seminars By Semester
22730 Wednesday 1/8
4:10 PM François Greer, Stony Brook University Enumerative geometry and modular forms
Speaker: François Greer, Stony Brook University
Title: Enumerative geometry and modular forms
Gromov-Witten invariants are counts of holomorphic curves on a smooth projective variety X. When assembled into a generating series, these invariants often produce special functions. A folklore conjecture predicts that when X admits an elliptic fibration, the Gromov-Witten generating functions are quasi-modular forms. I will discuss recent progress on this conjecture and a program to prove it in general.
22733 Thursday 1/9
4:10 PM Daxin Xu, California Institute of Technology Exponential sums, differential equations and geometric Langlands correspondence
Speaker: Daxin Xu, California Institute of Technology
Title: Exponential sums, differential equations and geometric Langlands correspondence
The understanding of various exponential sums plays a central role in the study of number theory. I will first review the relationship between Kloosterman sums and the classical Bessel differential equation. Recently, there have been two generalizations of this story (which corresponds to the GL_2 case) to arbitrary reductive groups using ideas from the geometric Langlands program, due to Frenkel-Gross and Heinloth-Ngô-Yun. In the end, I will discuss my joint work with Xinwen Zhu where we unify the previous two constructions from the p-adic aspect and identify the exponential sums associated to different groups, as conjectured by Heinloth-Ngô-Yun.
22732 Friday 1/10
4:10 PM Tetiana Shcherbyna, Princeton University Random matrix theory and supersymmetry techniques
Speaker: Tetiana Shcherbyna, Princeton University
Title: Random matrix theory and supersymmetry techniques
Starting from the works of Erdos, Yau, and Schlein with coauthors, significant progress has been made in understanding the universal behavior of many random graph and random matrix models. However, for random matrices with a spatial structure, our understanding is still very limited. In this talk I am going to give an overview of applications of another approach to the study of local eigenvalue statistics in random matrix theory, based on so-called supersymmetry techniques (SUSY). The SUSY approach is based on the representation of the determinant as an integral over Grassmann (anticommuting) variables. Combining this representation with the representation of an inverse determinant as an integral over a Gaussian complex field, SUSY allows one to obtain an integral representation for the main spectral characteristics of random matrices, such as the limiting density, correlation functions, the resolvent's elements, etc. This method is widely (and successfully) used in the physics literature and is potentially very powerful, but the rigorous control of the integral representations which can be obtained by this method is quite difficult, and it requires powerful analytic and statistical mechanics tools. In this talk we will discuss some recent progress in the application of SUSY to the analysis of local spectral characteristics of the prominent ensemble of random band matrices, i.e. random matrices whose entries become negligible if their distance from the main diagonal exceeds a certain parameter called the band width.
22736 Monday 1/13
4:10 PM Oliver Pechenik, University of Michigan $K$-theoretic Schubert calculus
Speaker: Oliver Pechenik, University of Michigan
Title: $K$-theoretic Schubert calculus
Schubert calculus studies the algebraic geometry and combinatorics of matrix factorizations. I will discuss recent developments in $K$-theoretic Schubert calculus, and their connections to problems in combinatorics and representation theory.
20713 Wednesday 1/15
3:00 PM Eli Matzri, Bar-Ilan University The vanishing conjecture for Massey products in Galois cohomology
Speaker: Eli Matzri, Bar-Ilan University
Title: The vanishing conjecture for Massey products in Galois cohomology
In this talk I will explain what Massey products are and focus on the vanishing conjecture due to Minac and Tan. I will survey the known results and the different methods used to obtain them, focusing on triple Massey products.
4:10 PM Nathan Dowlin, Columbia University Quantum and symplectic invariants in low-dimensional topology.
Speaker: Nathan Dowlin, Columbia University
Title: Quantum and symplectic invariants in low-dimensional topology.
Khovanov homology and knot Floer homology are two powerful knot invariants developed around two decades ago. Knot Floer homology is defined using symplectic techniques, while Khovanov homology has its roots in the representation theory of quantum groups. Despite these differences, they seem to have many structural similarities. A well-known conjecture of Rasmussen from 2005 states that for any knot K, there is a spectral sequence from the Khovanov homology of K to the knot Floer homology of K. Using a new family of invariants defined using both quantum and symplectic techniques, I will give a proof of this conjecture and describe some topological applications.
4:10 PM Joseph Waldron, Princeton University Birational geometry in positive characteristic
Speaker: Joseph Waldron, Princeton University
Title: Birational geometry in positive characteristic
Birational geometry aims to classify algebraic varieties by breaking them down into elementary building blocks, which may then be studied in more detail. This is conjecturally accomplished via a process called the log minimal model program. The program is now very well developed for varieties over fields of characteristic zero, but many of the most important proof techniques break down outside that situation. In this talk, I will give an overview of the main aims of the log minimal model program, and then focus on recent progress in the classification of varieties defined over fields of positive characteristic.
22744 Tuesday 1/21
4:10 PM Laure Flapan, Massachusetts Institute of Technology Modularity and the Hodge/Tate conjectures for some self-products
Speaker: Laure Flapan, Massachusetts Institute of Technology
Title: Modularity and the Hodge/Tate conjectures for some self-products
If X is a smooth projective variety over a number field, the Hodge and Tate conjectures describe how information about the subvarieties of X is encoded in the cohomology of X. We explore the role that certain automorphic representations, called algebraic Hecke characters, can play in understanding which cohomology classes of X arise from subvarieties. We use this to deduce the Hodge and Tate conjectures for certain self-products of varieties, including some self-products of K3 surfaces. This is joint work with J. Lang.
4:00 PM Brandon Bavier An Introduction to Hyperbolic Knot Theory
Student Geometry/Topology
Speaker: Brandon Bavier
Title: An Introduction to Hyperbolic Knot Theory
When studying knots, it is common to look at their complement to find invariants of the knot. One way to do this is to put a geometric structure on the complement, and look at common geometric invariants, such as volume. In this introductory level talk, we will cover the basics of hyperbolic geometry, and how we can use its properties to find invariants of hyperbolic knots, knots whose complement is hyperbolic.
4:10 PM Pei-Ken Hung, Massachusetts Institute of Technology Einstein's gravity and stability of black holes
Speaker: Pei-Ken Hung, Massachusetts Institute of Technology
Title: Einstein's gravity and stability of black holes
Though Einstein's fundamental theory of general relativity has already celebrated its one hundredth birthday, there are still many outstanding unsolved problems. The Kerr stability conjecture is one of the most important open problems, which posits that the Kerr metrics are stable solutions of the vacuum Einstein equation. Over the past decade, there have been huge advances towards this conjecture based on the study of wave equations in black hole spacetimes and structures in the Einstein equation. In this talk, I will discuss the recent progress in the stability problems with special focus on the wave gauge.
4:10 PM Felix Janda, IAS, Princeton University Enumerative geometry: old and new.
Speaker: Felix Janda, IAS, Princeton University
Title: Enumerative geometry: old and new.
For as long as people have studied geometry, they have counted geometric objects. For example, Euclid's Elements starts with the postulate that there is exactly one line passing through two distinct points in the plane. Since then, the kinds of counting problems we are able to pose and to answer have grown. Today enumerative geometry is a rich subject with connections to many fields, including combinatorics, physics, representation theory, number theory and integrable systems. In this talk, I will show how to solve several classical counting questions. I will then move to a more modern problem with roots in string theory which has been the subject of intense study for the last three decades: the computation of the Gromov-Witten invariants of the quintic threefold, an example of a Calabi-Yau manifold.
3:00 PM Tony Feng, MIT The Spectral Hecke Algebra
Speaker: Tony Feng, MIT
Title: The Spectral Hecke Algebra
We introduce a derived enhancement of local Galois deformation rings that we call the "spectral Hecke algebra", in analogy to a construction in the Geometric Langlands program. This is a Hecke algebra that acts on the spectral side of the Langlands correspondence, i.e. on moduli spaces of Galois representations. We verify the simplest form of derived local-global compatibility between the action of the spectral Hecke algebra on the derived Galois deformation ring of Galatius-Venkatesh, and the action of Venkatesh's derived Hecke algebra on the cohomology of arithmetic groups.
22748 Thursday 1/30
12:00 PM Willie Wong, MSU; Andrew Krause, MSU Deploying Computer-Based Lab Activities in Mainstream Calculus II
CoIntegrate Mathematics
Speaker: Willie Wong, MSU; Andrew Krause, MSU
Title: Deploying Computer-Based Lab Activities in Mainstream Calculus II
Place: 133F Erick
The course MTH133 is the second semester in our main calculus sequence, and focuses on integral calculus, sequences and series, and the calculus of planar curves. The majority of enrolled students (approximately 2000 per year) have declared interest in engineering and are in their first three semesters at MSU; the remainder are primarily students from the College of Natural Sciences. Over the past 4 years, we developed and piloted the lab activities, with an eye towards deploying them at scale. This year, the labs are in use across all MTH133 sections. We will begin our presentation with a detailed demonstration of one of the labs, mainly to showcase the student experience. We will follow this up with a discussion of our philosophy toward the "place" the labs occupy in calculus instruction, specifically in relation to the extant curriculum. We will also describe ongoing research aimed at understanding students' learning experiences with the labs, as well as some of our findings.
2:00 PM Kevin Sackel, Stony Brook TBA
Speaker: Kevin Sackel, Stony Brook
Title: TBA
20687 Friday 2/7
4:10 PM Jin Wang, Stony Brook University TBD
Speaker: Jin Wang, Stony Brook University
3:00 PM Yuan Liu, University of Michigan TBA
Speaker: Yuan Liu, University of Michigan
2:00 PM Ákos Nagy, Duke University TBD
Speaker: Ákos Nagy, Duke University
4:10 PM Chris Henderson, University of Arizona TBA
Speaker: Chris Henderson, University of Arizona
2:00 PM Spiro Karigiannis, University of Waterloo TBD
Speaker: Spiro Karigiannis, University of Waterloo
4:10 PM Jacob Tsimerman, University of Toronto TBD
Speaker: Jacob Tsimerman, University of Toronto
4:10 PM Katya Krupchyk, University of California, Irvine Inverse boundary problems for semilinear elliptic PDE
Analysis and PDE
Speaker: Katya Krupchyk, University of California, Irvine
Title: Inverse boundary problems for semilinear elliptic PDE
In this talk we shall discuss recent progress for partial data inverse boundary problems for semilinear elliptic PDE. It turns out that the presence of a nonlinearity allows one to solve inverse problems in situations where the corresponding linear counterpart is open. In the first part of the talk, we shall also discuss some previous work on partial data inverse boundary problems for linear elliptic PDE, focusing on the case of coefficients of low regularity. This talk is based on joint work with Gunther Uhlmann.
2:00 PM Aleksander Doan, Columbia University TBD
Speaker: Aleksander Doan, Columbia University
4:10 PM Lawrence Craig Evans, UC Berkeley TBD
Speaker: Lawrence Craig Evans, UC Berkeley
22731 Monday 3/9
2:00 PM Kevin Hughes, University of Bristol $L^p$-improving for spherical maximal functions
Speaker: Kevin Hughes, University of Bristol
Title: $L^p$-improving for spherical maximal functions
I will discuss recent work on $L^p$-improving estimates for spherical maximal functions - continuous and discrete. In the continuous setting, this is joint work with Anderson, Roos and Seeger, building on important work of Seeger-Wainger-Wright. In the discrete setting, these estimates were independently discovered by myself and by Kesler-Lace.
6:30 PM Laure Saint-Raymond, École normale supérieure de Lyon Disorder increases almost surely (First Phillips Lecture)
Speaker: Laure Saint-Raymond, École normale supérieure de Lyon
Title: Disorder increases almost surely (First Phillips Lecture)
Place: 105AB Kellogg Center
In everyday life, there are many examples of mixing phenomena: milk and water in the same container will not stay separate from each other, marbles in a bag will not line up spontaneously according to their color, ... In this first talk, we intend to study a simple mathematical model which explains why we can observe spontaneous mixing but not the reverse phenomenon.
4:00 PM Laure Saint-Raymond, École normale supérieure de Lyon Irreversibility for a hard sphere gas (Second Phillips Lecture)
Speaker: Laure Saint-Raymond, École normale supérieure de Lyon
Title: Irreversibility for a hard sphere gas (Second Phillips Lecture)
Place: 115 International Center
Consider a system of small hard spheres, which are initially (almost) independent and identically distributed. Then, in the low density limit, their empirical measure $\frac{1}{N} \sum_{i=1}^N \delta_{x_i(t), v_i(t)}$ converges almost surely to a non-reversible dynamics described by the Boltzmann equation.
4:00 PM Laure Saint-Raymond, École normale supérieure de Lyon The structure of correlations (Third Phillips Lecture)
Speaker: Laure Saint-Raymond, École normale supérieure de Lyon
Title: The structure of correlations (Third Phillips Lecture)
Although the distribution of hard spheres remains essentially chaotic in this low density regime, collisions give birth to small correlations, which keep part of the information. The structure of these dynamical correlations is amazing, going through all scales. This analysis provides actually a characterization of small fluctuations (central limit theorem), and large deviations.
4:10 PM June Huh, Institute for Advanced Study TBD
Speaker: June Huh, Institute for Advanced Study
2:00 PM Xuemiao Chen, University of Maryland TBD
Speaker: Xuemiao Chen, University of Maryland
4:10 PM Dimitri Shlyakhtenko, UCLA TBD
Speaker: Dimitri Shlyakhtenko, UCLA
4:10 PM James Murphy, Tufts University TBD
Speaker: James Murphy, Tufts University
4:10 PM Jianfeng Lu, Duke University TBA
Speaker: Jianfeng Lu, Duke University
4:10 PM Yilin Wang, MIT TBA
Speaker: Yilin Wang, MIT
4:10 PM Tan Bui-Thanh, University of Texas at Austin TBD
Speaker: Tan Bui-Thanh, University of Texas at Austin
2:00 PM Roger Casals, UC Davis TBA
Speaker: Roger Casals, UC Davis
4:10 PM Adrian Ioana, UC San Diego TBD
Speaker: Adrian Ioana, UC San Diego
2:00 PM Bahar Acu, Northwestern University TBD
Speaker: Bahar Acu, Northwestern University
4:10 PM Rafe Mazzeo, Stanford University TBD
Speaker: Rafe Mazzeo, Stanford University
3:00 PM J. Maurice Rojas, Texas A&M University and NSF TBA
Speaker: J. Maurice Rojas, Texas A&M University and NSF
Calculus Related Rates Problem:
As a snowball melts, its area decreases at a given rate. How fast does its radius change?
A spherical snowball melts symmetrically such that it is always a sphere. Its surface area decreases at the rate of $\pi$ in$^2$/min. How fast is its radius changing at the instant when $r = 2$ inches?
Calculus Solution
Let's unpack the question statement:
We're told that the snowball's area A is changing at the rate of $\dfrac{dA}{dt} = -\pi$ in$^2$/min. (We must insert the negative sign "by hand" since we are told that the snowball is melting, and hence its area is decreasing.)
As a result, its radius is changing, at the rate $\dfrac{dr}{dt}$, which is the quantity we're after.
The snowball always remains a sphere.
Toward the end of our solution, we'll need to remember that the problem is asking us about $\dfrac{dr}{dt}$ at a particular instant, when $r = 2$ inches.
To solve this problem, we will use our standard 4-step Related Rates Problem Solving Strategy.
1. Draw a picture of the physical situation.
See the figure.
2. Write an equation that relates the quantities of interest.
To develop your equation, you will probably use … a simple geometric fact.
This is the hardest part of a Related Rates problem for most students initially: you have to know how to develop the equation you need, how to pull that "out of thin air." By working through these problems you'll develop this skill. The key is to recognize which of the few sub-types of problem it is; we've listed each on our Related Rates page. In this problem, the diagram above reminds us that the snowball always remains a sphere, which is a Big Clue.
We need to develop a relationship between the rate we're given, $\dfrac{dA}{dt} = -\pi$ in$^2$/min, and the rate we're after, $\dfrac{dr}{dt}$. We thus first need to write down a relationship between the sphere's area A and its radius r. But we know that relationship since it's a simple geometric fact for a sphere that you could look up if you don't know it:
$$A = 4\pi r^2$$
That's it — that's the key relationship we need to be able to proceed with our solution.
3. Take the derivative with respect to time of both sides of your equation. Remember the chain rule.
$$\begin{align*}
\frac{d}{dt}A &= \frac{d}{dt}\left( 4\pi r^2\right) \\[12px]
\frac{dA}{dt} &= 4\pi\, \frac{d}{dt}\left( r^2\right) \\[12px]
&= 4\pi \,\left( 2r \frac{dr}{dt} \right) \\[12px]
&= 8 \pi \,r \frac{dr}{dt}
\end{align*}$$
Are you wondering why that $\dfrac{dr}{dt}$ appears? The answer is the Chain Rule.
While the derivative of $r^2$ with respect to r is $\dfrac{d}{dr}r^2 = 2r$, the derivative of $r^2$ with respect to time t is $\dfrac{d}{dt}r^2 = 2r\dfrac{dr}{dt}$.
Remember that r is a function of time t: the radius changes as time passes and the snowball melts. We could have captured this time-dependence explicitly by writing our relation as
$$A(t) = 4\pi [r(t)]^2$$
to remind ourselves that both A and r are functions of time t. Then when we take the derivative,
$$\begin{align*}
\frac{d}{dt}A(t) &= \frac{d}{dt}\left[ 4\pi [r(t)]^2\right] \\
\frac{dA(t)}{dt} &= 4\pi \, \frac{d}{dt}\left[ [r(t)]^2\right] \\
&= 4\pi \, [2r(t)] \left[\frac{d}{dt}r(t)\right] \\
&= 8\pi\, r(t) \left[\frac{dr(t)}{dt}\right]
\end{align*}$$
[Recall $\dfrac{dA}{dt} = -\pi$ in$^2$/min in this problem, and we're looking for $\dfrac{dr}{dt}$.]
Most people find writing the explicit time-dependence A(t) and r(t) annoying, and so just write A and r instead. Regardless, you must remember that r depends on t, and so when you take the derivative with respect to time the Chain Rule applies and you have the $\dfrac{dr}{dt}$ term.
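If you'd like to double-check that Chain Rule computation with a computer algebra system, here's a quick optional check in Python's SymPy (not something the solution requires):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')        # r is a function of time t, as discussed above
A = 4 * sp.pi * r(t)**2     # A = 4*pi*r^2, with r = r(t)
print(sp.diff(A, t))        # prints: 8*pi*r(t)*Derivative(r(t), t)
```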
4. Solve for the quantity you're after.
Solving the equation above for $\dfrac{dr}{dt}$:
$$\begin{align*}
\frac{dA}{dt} &= 8 \pi r \frac{dr}{dt} \\[12px]
\frac{dr}{dt} &= \frac{1}{8\pi r} \frac{dA}{dt}
\end{align*}$$
Now we just have to substitute values. Recall $\dfrac{dA}{dt} = -\pi$ in$^2$/min,
and the problem asks about when $r=2$ inches:
$$\begin{align*}
\frac{dr}{dt} &= \frac{1}{8\pi r} \frac{dA}{dt} \\[12px]
&= \frac{1}{8 \pi (2\, \text{in})} \left(-\pi \, \tfrac{\text{in}^2}{\text{min}}\right) \\[12px]
&= -\frac{1}{16} \text{ in/min} \quad \checkmark
\end{align*}$$
That's the answer. The negative value indicates that the radius is decreasing as the snowball melts, as we expect.
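(Optional.) If you like, you can verify the same arithmetic symbolically, for instance with Python's SymPy:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
dA_dt = -sp.pi                       # given: dA/dt = -pi in^2/min
dr_dt = dA_dt / (8 * sp.pi * r)      # solved from dA/dt = 8*pi*r*(dr/dt)
print(dr_dt.subs(r, 2))              # prints: -1/16   (inches per minute)
```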
Caution: IF you are using a web-based homework system and the question asks,
At what rate does the radius decrease?
then the system has already accounted for the negative sign and so to be correct you must enter a POSITIVE VALUE: $\boxed{\dfrac{1}{16}} \, \dfrac{\text{in}}{\text{min}} \quad \checkmark$
Role of the rank of the filter mask matrix in image processing?
I'm reading material that says a filter mask or kernel is separable if the matrix of the filter mask has rank 1. The two slides that describe this are below:
Reading these slides, they seem to be saying that the averaging filter is separable, while the Laplacian of Gaussian (LoG) is not. But that doesn't make sense to me: LoG is the combination of two filters, Laplacian and Gaussian, while the averaging filter is just one filter, so how can an averaging filter be separable?
I'm really confused on this matter. It would be helpful if you can make any sense out of this and explain me. Thanks.
the_naive
Separable just means you can do it in the x-direction and then in the y-direction and have it come out the same as if you did it in both dimensions simultaneously to begin with. It's not too hard to see that this will work for an average filter. If the filter is averaging over a 3x3 grid then in the 2-d case you take an average of nine values. In the separable filter case you first take three averages of three values. Then you average those three averages together. In both cases you get the same answer.
The separable case is much faster because you get to reuse some of the work you did in the x-dimension when you are doing the y-direction. In other words, each average of three values you computed in the x-direction will be used multiple times when filtering in the y-direction. The filters that can be made separable are precisely those whose matrix rank is one.
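To make this concrete, here is a small NumPy/SciPy sketch (my illustration, not part of the original answer) checking both claims for a 3x3 averaging filter: its matrix has rank 1, and filtering along one axis and then the other reproduces the full 2-D result.

```python
import numpy as np
from scipy.signal import convolve2d

avg = np.full((3, 3), 1 / 9)               # 3x3 averaging kernel
print(np.linalg.matrix_rank(avg))          # 1, so the kernel is separable

row = np.full((1, 3), 1 / 3)               # avg == col @ row (outer product)
col = np.full((3, 1), 1 / 3)

img = np.random.rand(64, 64)
full2d = convolve2d(img, avg, mode='same')
tmp = convolve2d(img, row, mode='same')    # filter in the x-direction...
sep2d = convolve2d(tmp, col, mode='same')  # ...then in the y-direction
print(np.allclose(full2d, sep2d))          # True
```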
Aaron
So it means the principle of separability is useful for lower computation cost, right? – the_naive Apr 25 '14 at 19:37
Yes. That's why separable filters are interesting. This is the point made at the bottom of the first slide you posted. – Aaron Apr 25 '14 at 20:30
Theoretical filters for image processing are naturally 2D functions. Yet, since discrete images are sampled along a rectangular grid, and 2D convolutions used to be very expensive, a lot of standard discrete 2D filters have compact support and are fast to compute. This includes being able to filter along rows or columns in an independent fashion. And separability is a way to do that. For background, one can check How to find out if a transform matrix is separable?.
In the given examples of rank 1 (Average, Gaussian, Sobel):
the average is evident: a non-zero constant matrix has rank $1$ (all rows or columns repeat one single row or column)
Sobel is a tensor product of $3$-point discrete derivative $[1,\,0,\,-1]$ and $3$-point discrete Gaussian approximation $[1,\,2,\,1]$
"Gaussian" is a tensor product of $3$-point discrete Gaussian approximations $[1,\,2,\,1]$
Those have a $3\times 3$ support. They are combinations of 1D discretized operators (hence separable). A lot of other classically taught operators are $3\times 3$, like the two Laplacian operators:
$$ \begin{bmatrix}0 &1 & 0\\1&-4 & 1\\0 &1 & 0\end{bmatrix} $$ and $$ \begin{bmatrix}1 &1 & 1\\1&-8 & 1\\1 &1 & 1\end{bmatrix} $$
They have higher rank, since they derive from limited-support approximations of the genuine 2D, continuous operator. For instance, you can find wider kernels ($5\times 5$, $7\times 7$); see for instance: Laplacian/Laplacian of Gaussian or Laplacian of Gaussian (LoG). More technical details can be found in Farid and Simoncelli or Kroon.
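A quick NumPy illustration of the rank test and the tensor-product construction described above (my sketch; Sobel sign/orientation conventions vary between sources):

```python
import numpy as np

smooth = np.array([1, 2, 1])            # 3-point Gaussian approximation
deriv = np.array([1, 0, -1])            # 3-point discrete derivative

sobel = np.outer(smooth, deriv)         # tensor product gives a Sobel kernel
print(sobel)                            # [[ 1  0 -1] [ 2  0 -2] [ 1  0 -1]]
print(np.linalg.matrix_rank(sobel))     # 1 -> separable

lap = np.array([[0,  1, 0],
                [1, -4, 1],
                [0,  1, 0]])
print(np.linalg.matrix_rank(lap))       # 2 -> not a single outer product
```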
Reality is viciously sexist
"Better Identification of Viking Corpses Reveals: Half of the Warriors Were Female" insists an article at tor.com. It's complete bullshit.
What you find when you read the linked article is an obvious, though (as it turns out) superficial, problem. The linked research doesn't say what the article claims. What it establishes is that a hair less than half of Viking migrants were female, which is no surprise to anyone who's been paying attention. The leap from that to "half the warriors were female" is unjustified and quite large.
There's a deeper problem the article is trying to ignore or gaslight out of existence: reality is, at least where pre-gunpowder weapons are involved, viciously sexist.
It happens that I know a whole lot from direct experience about fighting and training with contact weapons – knives, swords, and polearms in particular. I do this for fun, and I do it in training environments that include women among the fighters.
I also know a good deal about Viking archeology – and my wife, an expert on Viking and late Iron Age costume who corresponds on equal terms with specialist historians, may know more than I do. (Persons new to the blog might wish to read my review of William Short's Viking Weapons and Combat.) We've both read saga literature. We both have more than a passing acquaintance with the archeological and other evidence from other cultures historically reported to field women in combat, such as the Scythians, and have discussed it in depth.
And I'm calling bullshit. Males have, on average, about a 150% advantage in upper-body strength over females. It takes an exceptionally strong woman to match the ability of even the average man to move a contact weapon with power and speed and precise control. At equivalent levels of training, with the weight of real weapons rather than boffers, that strength advantage will almost always tell.
Supporting this, there is only very scant archeological evidence for female warriors (burials with weapons). There is almost no such evidence from Viking cultures, and what little we have is disputed; the Scythians and earlier Germanics from the Migration period have substantially more burials that might have been warrior women. Tellingly, they are almost always archers.
I'm excluding personal daggers for self-defense here and speaking of the battlefield contact weapons that go with the shieldmaidens of myth and legend. I also acknowledge that a very few exceptionally able women can fight on equal terms with men. My circle of friends contains several such exceptional women; alas, this tells us nothing about women as a class but much about how I select my friends.
But it is a very few. And if a pre-industrial culture has chosen to train more than a tiny fraction of its women as shieldmaidens, it would have lost out to a culture that protected and used their reproductive capacity to birth more male warriors. Brynhilde may be a sexy idea, but she's a bioenergetic gamble that is near certain to be a net waste.
Firearms changes all this, of course – some of the physiological differences that make them inferior with contact weapons are actual advantages at shooting (again I speak from experience, as I teach women to shoot). So much so that anyone who wants to suppress personal firearams is objectively anti-female and automatically oppressive of women.
Categorized as Martial Arts, Science
I'm guessing that you mean to say that males have a 50% advantage, i.e. are on average 1.5 times as strong, as opposed to 150%, or 2.5 times.
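To spell out the arithmetic: an advantage of p% means a strength ratio of 1 + p/100, so a 50% advantage is 1.5 times as strong, while a 150% advantage would be 2.5 times as strong.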
PapayaSF says:
Interfering with the current progressive narrative again, eh? OK, I'll listen.
Will Brown says:
I can't imagine you and the Mrs haven't seen this long since, but I found this YouTube video about Viking swords (and the modern recreation of one via historically accurate methods) most interesting:
https://www.youtube.com/watch?v=FKvRHaJ2w6w&list=TLxjxt0skCRkj-UXbjoLK2eaOkEhlCoUqJ
The video is 50+ minutes long and worth watching more than once for the wealth of detail, even for someone with no great interest in Viking history. Just how accurate is this video from (both of) your perspective?
> Just how accurate is this video from (both of) your perspective?
Extremely. I'm quite impressed. Didn't catch 'em in a single error or unsupported speculation.
The theory that the crucible steel used was imported over the Volga trade route from Central Asia is new to me but quite plausible. Curiously, I didn't know the name "Ulfberht" – I thought of these weapons as Rhineland swords after where they were made in the territory of the old Frankish kingdom of Austrasia.
Which, by the way, is exactly where my paternal-line ancestors were from. Some of them might well have been swordsmiths.
Supporting this, there is only very scant archeological evidence for female warriors (burials with weapons).
Most of your article is obviously true. Especially the part where the tor.com writer is conflating "Viking settler" with "Viking warrior". But this passage is circular. You're rejecting the claim that many of the warriors were women, because there are few if any women buried with weapons. But what the linked article says is that this is because most archaeologists automatically classify any body buried with weapons as male!
The researchers from WA claim that when you bother to actually look at the bones and sex them, you find significant numbers of female skeletons buried with weapons, which by your own admission means they were warriors. They do not claim that half the burials with weapons are women. What they do say is that the previous idea that the Viking settlements in the Danelaw were overwhelmingly male was based on the fact that there are far more burials with weapons than with female jewelry. And they say that when you actually look at the bones, you find enough women buried with weapons to erase that disparity.
What I would point to as a possible flaw is the very small sample size. They only examined 14 burials, and found 6 or 7 women, including some who were buried with female jewelry, but also including at least 3 buried with weapons. That's well and good, but the chance of a fluke seems quite high. This may be an indication of something, but it's nothing remotely like proof.
Shenpen says:
>And if a pre-industrial culture has chosen to train more than a tiny fraction of its women as shieldmaidens, it would have lost out to a culture that protected and used their reproductive capacity to birth more male warriors.
In the long run, with a constant rate of birth and attrition, yes. But reality tends to be a bit more chaotic than that. There is utility in suddenly doubling the number of warriors in short periods of extreme need, such as sieges or raids. Thus the smart strategy would be to treat women like a National Guard: short, basic training with frequent but brief repetitions, not investing too much time, but still being able to assist in defense somewhat. The idea would be the same as for other kinds of non-professional warriors who fight only in times of extreme need, such as peasants / serfs. The logic is similar: a population that had no men producing food would lose out, as raiding is not a stable form of food supply; yet a peasant man can be taught to do simple poking with a spear without losing too much time. It's a similar trade-off with peasant men: do you want more warriors, or a more secure food supply for the warriors? Same with women. Using a spear does not require that much strength either, and lower-class pre-industrial women tended to be strong enough, doing manual housework all day.
There is an entirely different reason it was generally not done with women. Basically, if women are armed and fighting, it makes them valid targets. Any sense of chivalry suddenly goes out of the window. And from a reproductive standpoint women are more important than men; populations can less afford to lose them.
Not arming women is a way of ensuring that women may be kidnapped, enslaved or raped, but usually not killed. And it is probably not even a conscious, rational strategy, it is probably evolved: any defeated population that had a gene for not arming women (crude way of putting it, I know, don't take it literally) would be likelier to have the women survive as forced concubines of the victors – and give this gene to their kids as well.
BTW you know more about HEMA than I do, but even a longsword is just 1.5kg. Why does upper body strength matter much? It is not boxing. I haven't tried it, but perhaps such an experiment should be tried: I would offer a bet that even a 12-year-old child could run a spear through a side of beef or pork, which would be a good enough approximation of the human body. It just does not look that hard. Courage is probably a more limiting factor than strength.
>Why does upper body strength matter much?
There have been several good responses to this question, but looking back at them I see one aspect that I think has not been emphasized: More strength gives you both better endurance and better force control.
If, in order to fight or train, you have to use 100% of your strength or close to it, you're going to have control issues. You'll have difficulty controlling and stopping the motions you start. You'll overcommit to attacks because you lack the physical option to not overcommit. You'll suck at precise targeting.
One of the biggest advantages my strength gives me in martial arts training is that I don't really have to exert myself at all to strike at drill speed, and even dialing it up to combat intensity isn't very taxing until I overheat. One thing the margin is good for is endurance. Tonight, for example, I was doing a knife-striking drill alternating forehand and backhand strikes against a rattan target. I can do that with speed and power for much longer, using the 30% of my strength it takes, than most people who are exerting 70-80%. When they're hitting anaerobic failure, I'm recruiting another subset of muscle fibers.
Think that'd win a fight or three? You betcha.
Now, control. The other thing I can do with the 70% of my strength that I'm not normally using is meter the force I exert very precisely by using the "spare" fibers as antagonists. When I spar with people who are advanced enough to understand the question, I ask "do you want cloth contact, skin contact, or light muscle contact?" Then I deliver it.
This is, by the way, far more difficult to do with weapons. My force control with a sword is good, but nowhere near as fine-grained as it is empty-hand.
Finally, there is a lot to be said for strength – or at least muscle density – as armor against hits. My last school was run by a 6'4″ karate master made of rawhide and whalebone who'd been a successful full-contact fighter. He was a good instructor who dutifully sparred with his students to train them. With me he enjoyed it – because he could dial up to something not exactly like full combat power but much closer to it than he normally got to exert, without sending me screaming for mama. For exactly the same reason I enjoyed the hell out of sparring with him – I just couldn't use that much power on another student.
We'd go out there and, by the school's normal standards, whack the stuffing out of each other – and be smiling the whole time. I mean, I saw people look at us and wince. But it was OK; we both had the right attitude and, at least at empty hand, decently toned bulk muscle makes fair armor.
Also relevant: http://en.wikipedia.org/wiki/Albanian_sworn_virgins
The interesting aspect is that the price for adopting a masculine role and being allowed to fight in blood feuds was strict virginity. I suspect it supports my idea. If there was a general taboo on killing women in motherly roles, then those women who fought and therefore could be killed must not have become mothers, in order not to erode that taboo; I think that must have been the reason.
Simon Smith says:
Woman with sword still beats man or woman without sword, though, yes?
And woman with sword and basic martial training takes longer to die when attacked by man with sword. And she might win. And two women with swords ganging up on one man…
Even if the local women are just militia, the very fact that a raider can't stroll through a settlement killing women and children without facing resistance, but instead has to fight for every kill, has to be of net benefit to that settlement. Often they wouldn't even get raided in the first place because it's a harder proposition.
BTW you know more about HEMA than I do, but even a longsword is just 1.5kg. Why does upper body strength matter much? It is not boxing.
As someone who's primarily done unarmed but has a smattering of melee-weapons experience, I'll say that adding the weight makes upper-body strength more of a factor. At least with basic strikes, the main virtue is how much momentum you can bring to bear when you hit, and increased speed compounds the blow by transferring the impulse over a shorter time period, resulting in greater force.
Once you start adding mass to the blows, you have to put more muscular effort into bringing it up to speed in the short space/time between initiation and impact. I've trained with some small-to-medium women who could land pretty vicious punches by precise placement and speed, but put something heavy in their hands, and they're worse off than if they'd just stayed unarmed.
Depends, and unless she's had the significant training necessary to be proficient with the sword, it's probably useless against a man who knows how to fight. All he has to do is get in close enough to disarm her. Thus the advantage already noted of ranged weapons.
Undoubtedly it's better to have some defensive capability than none, but that's a far cry from saying that the "militia" were warriors, and the Vikings' best defense was probably the same as their best offense: Their skill and technological prowess at sailing meant that they could strike far and wide, including across open seas or up shallow rivers, well away from their homes, and their enemies couldn't follow.
Thus the smart strategy would be to treat women like a National Guard: short, basic training with frequent but brief repetitions, not investing too much time, but still being able to assist in defense somewhat. The idea would be the same as for other kinds of non-professional warriors who fight only in times of extreme need, such as peasants / serfs.
To add to this, if all/most of your men are away raiding, your female National Guard can at least begin the basic training of the youngsters.
BTW you know more about HEMA than I do, but even a longsword is just 1.5kg. Why does upper body strength matter much? It is not boxing. I haven't tried it, but perhaps such an experiment should be tried: I would offer a bet that even a 12-year-old child could run a spear through a side of beef or pork, which would be a good enough approximation of the human body. It just does not look that hard.
I wouldn't call this definitive, but I'll give you what my experience comes down to.
The hollywoodized image of what you use the brute strength for is generally contradicted by the manuals. It's a naive way of looking at it ("hey if you're strong you can blow past that ward!") that anyone with some actual training will use against you (manipulating the bind is literally in every longsword manual). What strength is useful for comes down to two things for me… speed and control.
Fast attacks are pretty self-evidently better, and when you're maneuvering a longsword around you don't precisely need bear arms, but it helps. At the same time, a sword point flying spent off to the side is a wasted opportunity. What you really need is the strength to strike fast but still be able to end your strike near the centre line. 1.5kg isn't exactly heaven and earth, but wrench that through 270 degrees and it's got a whole heap of momentum going on.
In terms of what the manuals actually say, Fiore talks about the elephant as one of his animal virtues, but basically implies its value is mostly in the wrestling which is core to Fiore's sword art (the translated quote is "I am the elephant and I carry a castle as cargo, and I do not kneel nor lose my footing"). Recently a friend of mine found a late-19th-century British sabre training manual that talks about strength being a much smaller factor next to velocity for the purposes of actually causing damage.
One of the things I've noticed over the years of martial arts training (and even more reading about the subject) is that most of the time you work on defeating the opponent's attack, not making an attack yourself. That is the principal difference between simply hitting someone and fighting them. The discussion so far seems to point this out.
In the present example of women and swords (and, I suggest, non-projectile weapons generally), the distinction between being physically able to use a sword (even using it well), and fighting with one, is the determining factor governing the frequency of women participating in historical combat settings (in esr's term, shieldmaiden). Training to fight with a sword takes literally years of dedicated work; training to make effective spontaneous use of a sword in extremis takes the occasional hour or two every few months (and a vigorous lifestyle or regular strength training).
As esr noted in the title, a pre-industrial society's division of labor has the men doing the dedicated training to successfully perform with the tools of conflict and conquest, and the women making regular use of all the tools available on an ad hoc basis. Which would, logically enough, extend to the odd sword, axe, knife, shield, mace, or what-have-you that might be lying to hand come the need.
If a really accomplished farmer's daughter happens to make good enough use of a sword to qualify for the title of shieldmaiden on the day, then "well done her" say I. But she just doesn't have the time to devote to training for the job and help keep food on the family table too. I suspect the few examples that might have actually existed were themselves daughters of rich men willing to indulge them with expensive training and custom equipment.
@ JonCB
Let's you and I call a quick "time out" about an hour-and-a-half into the melee and have a brief conversation about your theory regarding the relative utility of "brute strength", shall we?
On second thought, let's don't; I'm 60 now and well past any hope of lasting more than maybe 5 minutes (at best) these days.
The point being, strength is the source from which speed is generated in any contest involving endurance (like fighting – with or sans sword). Compare any two fighters between the opening round and the final round of the bout; the relatively stronger of the two is that bit faster at the end.
I agree. I wouldn't really expect a for-realsies fight between this kind of "militia" and a raiding force to last much beyond the "opening round" one way or the other, but the point still stands.
across open seas or up shallow rivers, well away from their homes, and their enemies couldn't follow.
I'm assuming that the theoretical aggressor in this scenario is other raiders not retaliation from their targets. So neither open seas, shallow rivers nor distance are a defense here. All the amazing viking raid loot in the world still ends with an evolutionary loss if you come back to a torched village with no survivors.
"God made us all. Sam Colt made us all equal."
As for how you select friends, that selection seems to be based on either being dangerous to piss off or else of hackerly intelligence and sensibilities. Wonder how much overlap there is between the two? I'm no use in a hand-to-hand melee, but why get hand to hand when you can reach out and kill someone before they have a chance to get personal?
>Wonder how much overlap there is between the two?
Not a huge amount, but more than you might guess offhand. My sword-geek friends mostly aren't hackers, but the ones who aren't know enough to respect hacker culture.
Firearms changes all this, of course – some of the physiological differences that make them inferior with contact weapons are actual advantages at shooting (again I speak from experience, as I teach women to shoot).
Literature has no shortage of women driving men into combat, even leading them if necessary.
(Young) Women can be extremely fanatic and brutal. There is a famous German instruction to anti-terrorism forces:
Shoot the women first
http://www.ghi-dc.org/files/publications/bulletin/bu043/59.pdf
Reportedly, a lot of the "jihadists" traveling to ISIS from Europe are women. Many single.
>Reportedly, a lot of the "jihadists" traveling to ISIS from Europe are women. Many single.
It is highly unlikely they will be hand-to-hand or contact-weapon fighters though.
Women can be highly dangerous – "fanatic and brutal" – yes. If you get past the men and threaten their children they will fight with an utter lack of restraint or fear of injury that is chilling. This is not a response that can usually be evoked in battlefield conditions.
>The point being, strength is the source from which speed is generated in any contest involving endurance
That begs the definition of strength, as there are apparently multiple ones. Today, the popular definition is roughly like "bench press 1 rep max" and a few decades ago it was "100 push-ups". The latter actually tested muscle endurance, not strength. Yet, in the scenario you mention, it is literally about muscle endurance, not strength, at least definitely not strength in the one-rep-max sense.
It is a bit confusing, because before the recent body-building trend, people did tend to confuse strength and muscle endurance a lot, and say strength when they meant muscle endurance, because more often than not muscle endurance was more useful than strength. Even in my childhood, a tough guy would often be defined as someone who would not get tired fast when digging trenches, unloading wagons, and doing similar kinds of physical labor. Which is all about muscle endurance. OK, my background is about 50 years behind such trends compared to, say, the US, so I figure this kind of trench-digging attitude to defining strength is only familiar to your grandfather. For example, when I started going to the gym around 1990 and it was a very new thing here, most guys were contemptuous, and said stuff like "body builders just have show muscles, they could not last three hours laying bricks" etc., so clearly they cared about muscle endurance more, and tended to call that "real" strength.
>It is a bit confusing, because before the recent body-building trend, people did tend to confuse strength and muscle endurance a lot, and say strength when they meant muscle endurance, because more often than not muscle endurance was more useful than strength. Even in my childhood, a tough guy would often be defined as someone who would not get tired fast when digging trenches, unloading wagons, and doing similar kinds of physical labor. Which is all about muscle endurance. OK, my background is about 50 years behind such trends compared to, say, the US, so I figure this kind of trench-digging attitude to defining strength is only familiar to your grandfather. For example, when I started going to the gym around 1990 and it was a very new thing here, most guys were contemptuous, and said stuff like "body builders just have show muscles, they could not last three hours laying bricks" etc., so clearly they cared about muscle endurance more, and tended to call that "real" strength.
I think it's even more complicated than that, because the 'endurance vs. percentage of your strength used' curve is non-linear.
E.g., if you're doing a task that requires 100% of your strength you might last 'n' minutes. If you're using 80% of your strength you could last '2n' minutes. 50% of your strength, '6n' minutes. 10% of your strength and you could keep it up until you got bored. Of course I'm making those numbers up just from my own general experience with physical labor, but being stronger *will* mean that any given task uses a lower percentage of your available strength and that therefore you can keep doing that task much longer.
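Taken at face value, Greg's admittedly made-up numbers trace out something close to a power law. A throwaway sketch fitting one (the exponent comes from his invented points, not from any physiological data):

```python
import numpy as np

# Greg's invented points: fraction of max strength used -> endurance (in units of n minutes)
fracs = np.array([1.0, 0.8, 0.5])
endurance = np.array([1.0, 2.0, 6.0])

# Fit T(f) = f**(-k) in log space: log T = -k * log f
k = -np.polyfit(np.log(fracs), np.log(endurance), 1)[0]
print(f"fitted exponent k = {k:.2f}")                         # about 2.5
print(f"implied endurance at 10% effort: {0.1 ** -k:.0f} n")  # several hundred n
```

Which matches his "until you got bored" endpoint: at 10% effort the fitted curve predicts endurance two orders of magnitude beyond the 100% case.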
Timothy O'Neill says:
A problem with research (but particularly with research papers in the social sciences) is that journalists do not know enough science to read and interpret even the short header summaries. Some words do not mean what they think they mean, and summaries of statistical analysis are lost on naïve readers.
Two examples: Some years ago, Dan Rather reported a research finding that millions of American children were going to school hungry and were "in danger of starving." The paper his researcher cited simply reported a survey of children in middle school that suggested many wished they had eaten more for breakfast.
More recently: When I was a consultant to the Department of Transportation, there was a proposal for a major education program designed to reduce consumption of alcohol by long-haul truck drivers. This was prompted by a study that showed trucks being involved in a greater-than-expected number of fatal crashes involving drinking. What nobody asked was who was doing the drinking. A second look revealed that the truckers were almost invariably sober, but fatal truck-involved crashes almost always involve a truck and a passenger car. In virtually every case, it was the passenger car driver who was sloshed.
Bottom line: to understand a research finding, you have to understand research, including: What is the question being asked? Is it the right question? What does the answer mean? Is the question valid and the answer valid? Without this, reports of scientific studies are too often misused.
There are women suicide bombers.
https://en.wikipedia.org/wiki/Suicide_attack#Female_suicide_bombers
" If you get past the men and threaten their children they will fight with an utter lack of restraint or fear of injury that is chilling. This is not a response that can usually be evoked in battlefield conditions."
The history of European terrorist organizations (e.g., RAF) tells a different story. Women without children can be extremely fanatic and brutal for "random" causes. See also the woman from the German Neo Nazi NSU trio:
http://www.spiegel.de/international/germany/nsu-neo-nazi-terror-trial-enters-summer-recess-a-915356.html
I agree that it is uncommon to use women on battlefields for many reasons. Historically a woman with child would be a very bad soldier. And women tended to get many children from an early age.
>There are women suicide bombers.
Well, duh. Explosives. Like firearms, they largely eliminate the requirements for burst and endurance strength characteristic of pre-gunpowder weapons.
>The history of European terrorist organizations (e.g., RAF) tells a different story. Women without children can be extremely fanatic and brutal for "random" causes. See also the woman from the German Neo Nazi NSU trio:
All this tells us is that psychopathy swamps some gender differences. You can't take the behavior of women (or men) in an organization like RAF as predictive of what you should expect in the general population.
Paul Brinkley says:
Given your last paragraph, Eric, this seems as good a time as I'll get to ask you and the rest of this crowd about the mechanics of young children wielding fully automatic weapons. Or more precisely: people at the margin of upper body strength operating same. What's an advisable limit here? I know plenty of 9-year-olds that can work a shotgun, but I can't think of any working an Uzi, for example, either standing or prone.
(Yes, this is related to that accident in Arizona a few days ago.)
Trimegistus says:
There was also the question of mortality. A man who survived adolescence would likely live past 40. A woman would have a non-trivial chance of dying every time she gave birth. Investing time and surplus food in training a woman to fight, only to have her die in childbed, would be a tremendous waste. More useful to put her to "women's work" while you train her brother.
Technology and the germ theory of disease changed all that, but there's a weird and slightly Orwellian drive to make the past conform to how we prefer to live today.
>Given your last paragraph, Eric, this seems as good a time as I'll get to ask you and the rest of this crowd about the mechanics of young children wielding fully automatic weapons. Or more precisely: people at the margin of upper body strength operating same. What's an advisable limit here? I know plenty of 9-year-olds that can work a shotgun, but I can't think of any working an Uzi, for example, either standing or prone.
It's more a qualified user issue. Whatever people's physical ability, if they have the training, discipline and skill to control the weapon within the limits of their physical abilities, no problems.
People need training and familiarity before they can drive, so it's only to be expected they would need training and familiarity before attempting something like a select-fire firearm on full auto.
Lots of people have written on how to build up a new user while training, 'one in the mag, two in the mag, three in the mag single shot, then one in the mag, two in the mag, three in the mag full auto', etc etc
Recoil and muzzle rise happen, to different degrees on different weapons. Different people have different ability to manage the recoil and muzzle rise on long, continuous strings of fire (as in, mag dumps). But that's not the only way to shoot full auto, in fact we make fun of usually-Islamic 'fighters' who do that habitually. Bursts work, too. (There have been weapons commonly issued that were select fire, that *nobody* could control during long strings of continuous fire.)
So even if a person was physically unable to control a weapon while doing a mag dump, if they had the training and discipline to fire bursts that they *could* control, it's golden.
>(Yes, this is related to that accident in Arizona a few days ago.)
Wanted to add, even without being capable of full auto, it is MANDATORY that a user be able to effectively control a firearm within the limits of their physical capabilities. And that requires a qualified user, with training and some self-discipline.
People have killed themselves with semi-autos before, in the same way that the AZ instructor was killed; the only extra step is that they need to pull the trigger more than once (usually out of panic).
As in: the weapon fires, it recoils, and the muzzle rises. The user can't control it, and panics. Clenches the gun to try to control it, and thereby squeezes the trigger again. More recoil and muzzle rise. Repeat clenching and inadvertent trigger squeeze. Repeat until the gun is pointing up and back at the firer's head when it fires. Game over.
Jakub Narebski says:
I think that an important issue in deciding "to train or not" is how fighting and training injuries might affect the fertility of the person.
I grant that the RAF members were not mentally sound. But I seriously doubt that they were psychopaths.
Jorge Dujan says:
>some of the physiological differences that make them inferior with contact weapons are actual advantages at shooting
Regardless of the kind of gun? I mean: taking into account women's disadvantage at strength, does their advantage with guns start to dwindle as their training incorporates heavier guns?
I may be conflating strength with muscle endurance, as Shenpen said. But I'm curious.
Shit. Where I wrote "taking into account", I should have simply written "given". That's odd: I've heard of mistaking the map for the territory, but it seems I made the reverse mistake!
strongpoint says:
s/firearams/firearms, but the point is well made regardless.
LS says:
The sagas seem to show a world where men and women had clearly different roles and duties. I remember some dialog (I forgot from where) as, "I am a woman and may not fight."
OTOH, I haven't figured out the full meaning of the scene in the Greenland Saga where the prophetess runs a sword over her naked breasts and scares off the North Americans. Where did she get the weapon? Was she carrying it into battle? Was it meant to shame her menfolk into making a stand? (It all seems to be symbolic, as I'm sure that had she actually killed or wounded someone in the battle, the saga would have recorded that.)
Joshua Brule says:
> some of the physiological differences that make them inferior with contact weapons are actual advantages at shooting (again I speak from experience, as I teach women to shoot)
Has there been a formal study along these lines? My experience matches this – I've found that most women are naturally better shots than men and learn the skills faster (although it also seems that women have worse hunting instincts), but I don't have anything resembling a significant sample size.
Joseph W. says:
My experience matches this – I've found that most women are naturally better shots than men and learn the skills faster.
Massad Ayoob once observed the "faster learning" thing — he thought it was because women didn't feel a need to pretend they already knew about guns or shooting, so they could absorb the training faster.
he thought it was because women didn't feel a need to pretend they already knew about guns or shooting, so they could absorb the training faster.
I LOLed. Could well be true.
In the matter of women being better learners at certain skills, I believe that women make better use of 'the error signal'. Men try something, and if the result isn't so good, try something else. Women will pay more attention to what the gun is actually doing, and take that into account when correcting their technique.
This is certainly true in riding. Women generally make much better riders than men – they pay much more attention to the horse as they ride.
"What it establishes is that a hair less than half of Viking migrants were female, which is no surprise to anyone who's been paying attention."
And that's not news; we already knew that from the genetic evidence. Bryan Sykes discusses it in detail in his book _Saxons, Vikings, and Celts_.
Long story short: he says that in most cases an invading population consists primarily of men, but the women who bear their children are local and thus the mitochondrial line of descent goes through the original inhabitants. But genetic testing on modern-day descendants of the Danes who came to the British Isles show that the mitochondrial, matrilineal lines go back to Scandinavia.
As Sykes puts it, "They brought their women with them."
Rich Rostrom says:
There's been at least one "lab experiment" on this.
Dr. Richard Raskind, a middle-aged ophthalmologist, changed genders and became Renée Richards. Raskind had been a good amateur tennis player (won the U.S. Navy championship). Richards became the 20th-ranked player among women professionals. Richards even reached the finals of the U.S. Open doubles competition.
Richards was over 40 at the time, and according to Richards' own memoir, had lost substantial upper body strength due to female hormone treatments.
Roger Phillips says:
Every gender thread, there's more-or-less the same Winter post bringing up some exception or another to prop up his fantastical delusions.
Only if you take "no shortage" to mean "a few exceptions". I would like to see you post ONE thing that's more than fantasies of relevance from the losers of history. Next you'll be telling me about the great contributions of Africans to technology and science.
Fail Burton says:
Given this is Tor.com, this is just more of the new breed of feminists grasping at straws and bicycle-pumping women, as if a few obscure graves are going to overturn all of human history. Kameron Hurley won 2 Hugos doing this exact thing and even linked to the same Viking article.
This is my favorite example of the bicycle pump:
"Saladin Ahmed ?@saladinahmed 20h The Woman In The Green Mantle, erased by Crusader historians, immortalized by her impressed Muslim enemies. pic.twitter.com/2Mmey7lR0j 17 Aug 13"
"Kate Elliott ?@KateElliottSFF 16h @saladinahmed didn't you get the memo? Women never did anything back then !!!"
"Chia Evers ?@ChiaLynn 16h @KateElliottSFF @saladinahmed They certainly never stepped outside the bounds of culturally-proscribed femininity. That's unpossible."
"Kate Elliott ?@KateElliottSFF 16h @ChiaLynn @saladinahmed And our projections of what was proscribed/allowed back then must be accurate! "
It's wishful thinking carried to an absurd level of childish glee. White patriarchal European supremacists sought to hide the women (one) in their midst while the noble Arab PoC honored her in history. This is also the kind of race- and sex-baiting work that got Nebula-nominated this year. It's more clownish than anything else. Go occupy a military recruiting station and go kill ISIS. Then I'll be impressed.
So, you might say that combat is "sexist" in exactly the same way that Dr. Miriam Grossman (female) is also "sexist" except that she is talking about _lower_ body strength: http://www.miriamgrossmanmd.com
Jim Richardson says:
Discussing women as members of a home guard or militia equivalent is a whole different kettle of fish from discussing them as 'warriors'. As Eric points out, there are outliers, but the muscles don't lie, and muscles are important in a muscle-powered conflict.
Of course, all bets are off when Sam Colt steps in. My wife may not be *quite* as good a shot as I am with a pistol, but that's because I have a lot more practice, and she picked it up really quickly.
Hand/eye coordination counts, with firearms. Strength (and endurance) less so.
Roger Phillips > Next you'll be telling me about the great contributions of Africans to technology and science.
Peanuts and brain surgery. I can give you the names. That is a highly abridged list, but I think that's already more than you can handle.
As for "no shortage", when it comes to this sort of thing, you really only need "a few examples". Most times, women in the Bible are in their specialties (i.e. Joseph can't be the mother of Jesus and Mordecai can't marry Xerxes), where a woman fills an exceptional man's role in the Bible, it's because all the men in sight are either enemies or chicken shit (i.e. see Judges 4:8 and the verses around it.) Likewise there is "no shortage" of bullets randomly hitting each other over battlefields because it is so incredibly unlikely that you'd never expect to see even one, let alone "a few examples" (i.e. https://www.youtube.com/watch?v=GIek9VkrHnA)
@ Shenpen
That begs the definition of strength …?
And your reply takes this to a level I was trying to allude to without getting elbow-deep into it.
With the understanding that I'm basically an old(er) gym rat who reads a lot, my definition of "strength" can be summed up as a synthesis measure of gross weight/mass moved within a measured period of time. IOW, how much you can move (or kinetic impulse you can deliver, if you prefer) in a given period, as rapidly as you are able.
Max gross lift is a valid metric, but limited in what it tells you outside of the lift-technique context. Muscle endurance is also a valid metric, but focuses more on the athlete's ability to manage energy expenditure in a stipulated application. I don't think there can be a straightforward, simple definition of strength, to tell the truth; we don't apply our strength in simple, straightforward circumstances for the most part.
@Terry
Jesus Christ you are a moron.
Likewise there is "no shortage" of bullets randomly hitting each other over battlefields because it is so incredibly unlikely that you'd never expect to see even one, let alone "a few examples"
>Jesus Christ you are a moron.
Stop this. Terry's response was weak, but that doesn't excuse replying to it with content-free insults. Insults, if they nust be present, must be accompanied with counterargument. House rule, violated on pain of my displeasure. Persistent content-free insults are grounds for banning.
Yes, I know you're abrasive, misanthropic, and deficient in both social skills and empathy. I also know you can do better than this.
@esr Your house, your rules.
Roger Phillips > Jesus Christ you are a moron.
I'm not Jesus Christ. I'm also pretty sure He wasn't a moron.
To get back on topic, He was neither a woman nor African, but he was from a similarly persecuted ethnic group that has likely fallen far short of its potential contribution to the legacy of humankind because of such racial persecution. …an ethnic group that has proven consistently effective at both military intelligence and warfare throughout their entire recorded history, both biblical and modern. Their women have gotten more than their fair share of work men are usually better at, not just because of the Judges 4:8 problem, but because the Gestapo could spot an undercover Jewish man quite easily because of circumcision, a problem the women were inherently immune to. The TSA and PRNYPD aren't quite as bad as the Gestapo once was, but it seems they can hardly wait to get there, so this may again become a factor.
Unfortunately, women's liberation has probably hurt women more than it has helped them. Certainly, it has helped them get jobs that were normally for men, and in those jobs their more domestically-oriented strengths are sometimes assets. However, a career woman often can't be a family woman at the same time, and by the time she gets around to having children, it is too late. Full-term pregnancy protects against breast cancer, so there is some additional suffering on that front. These effects of women's lib are probably a major, if not _the_ major, factor in the steady reversal of population growth trends in the modern industrial world. Not all of the dual-income and bachelorette empty-nesters are happy about it either.
jim hurlburt says:
>> but I found this YouTube video about Viking swords

The blacksmithing is quite real as well. They spoke of crucible steel; I know it as a specific kind called wootz. It is a simple, very high carbon steel — ~1.5% carbon. Ordinary mild steel is about .25% carbon. Ordinary high carbon steel runs from .6% to .95%. It will be worked at a high yellow heat — 1700-1900 degrees F or so. Wootz will mush and sort of splatter at those kinds of temperatures; as in the video, you work it at a much lower temperature, a dull to medium red, perhaps 1200 degrees. Incidentally, a temperature that would not work with lower carbon steels — the steel would crack and tear if forged at that low a temperature.

Polished and etched wootz will have the patterns they showed in the video. Wootz is a very flexible steel; the stories of a sword that could be bent tip to hilt, then released and spring straight would almost certainly have been wootz.

I don't know of any modern use of wootz. Chrome moly alloy steels are much easier to work and would make a sword probably superior to wootz. It might be difficult to get the extreme flexibility that is attributed to wootz, but the alloy sword would probably be stronger and harder to break. I doubt that the flex was considered an inherently good thing, just that its presence would indicate a sword that would be very hard to break in combat.

A steel called graphitic steel is readily available and is used to make forming dies where there needs to be a sliding motion of the material as it is formed. Probably mostly for forming steel where pressure welding would be a problem. It too must be forged at a much lower temperature than alloy or lower carbon steels.

A very interesting video. If I had been that blacksmith I would probably (and quite likely he did) practice with the commercial very high carbon steels till I could do things well — then take the ingot made with medieval methods and start hammering out a sword.
Sorry to maybe ask an off-topic question, Mr. Raymond, but have you heard anything about what is going on in the video game communities that the Social Justice folks are trying to invade? If not, I recommend you check into it. There's been tons of censorship at the big gaming sites (which, incredibly, have been running article after article insulting their readers)… ah, enough. Anyway, I think you might find it interesting: while you don't identify as a gamer, I'm sure you have some interest in the feminist attack on all sorts of tech (heck, 'guy') culture in general.
>Sorry to maybe ask an off-topic question, Mr. Raymond, but have you heard anything about what is going on in the video game communities that the Social Justice folks are trying to invade?
I am. I have not yet formed a firm opinion about the merits of that argument. If and when I do I might write about it.
@Greg
>10% of your strength and you could keep it up until you got bored.
The trick is in the measurement. I was 10 years old when I told my dad my dumbbells were too light and it was time to build some new ones. I told him I could keep it up until I got bored. He gave me a challenge to do overhead presses without weights, just bare hands. My shoulders gave out at around 100-120 reps and I was quite surprised; I expected to be able to go on forever. What I did not take into consideration was that my arms have a weight of their own, probably far more than 10% of the dumbbells that I used back then. Probably more like 25-40%. Like most people who just do weights and no real sports, I half-assumed my body and its parts to be for practical purposes "weightless". No real athlete would think like that :-)
So yes, you are right, but body weight makes it weird. If you are 85 kg, minus calves call it 80, you would need an 800 kg weighted-squat one-rep max to make that body-weight squat 10%. And the world record is something like 575, so it never happens.
I assume this is something real athletes instinctively know, but for a gym-only guy like me these calculations are quite a bit surprising.
>he thought it was because women didn't feel a need to pretend they already knew about guns or shooting, so they could absorb the training faster.
This is just one golden truth about life in general. This is one of the major issues anyone who ever tries to teach anyone anything has to contend with. Another, worse scenario is when the trainee pretends to know _better_ – I had a lot of "fun" times training accountants to use accounting software who generally thought the software was wrong and they could design it better. The absorption was not particularly fast, to say the least.
Mr. Raymond, thank you for answering my question.
I won't say any more about this except that, if you do decide to cover it – well, Kotaku, Gamasutra, those sites basically aren't covering it but are running article after article attacking their base. I'll drop a few sources to look into: a YouTuber named Mundane Matt, Christina Hoff Sommers' Twitter feed, and Google searches for #gamergate and Zoey Quinn. Places like A Voice for Men have covered it, and even newspapers like the Guardian and Breitbart. It's ironic that other press organizations are picking at this when the gaming journalists won't. And of course, before I forget to mention, some smaller gaming websites have covered it. But from the big ones, either articles all parroting the same talking points or silence. And Reddit has been censored.
Anyway, this will be my last post on this unless you choose to throw your hat into the ring, but I wanted to give you a bit of a scoop so it will be easier to do research should you choose to. One side (or no side) is all you'll get from the big gaming sites. I have hope for your information gathering facilities because you know lots of programmers or at least you used to, though I'm not sure you ever had any contact with the game scene given the importance of lots of the stuff you have done in the technology field.
Tom Kratman says:
No. IIRC, for Musashi's most famous duel, he whittled a wooden sword of sorts from an oar as he was being ferried to the duel. Swords are still just long strong wedges, and not necessarily all that advantageous compared to a club.
Secondly, though, what difference if it were true if there was no reasonable prospect of said woman running into a disarmed man?
Thirdly, raiders don't usually stroll through a settlement killing the women. Remember your priorities of work: "First rape, then kill, then pillage, and _then_ burn." Except you don't kill the females, if you have a choice, you take them as slaves; to use, to sell, or to use and then sell. And rarely if ever have the wolves cared how many be the sheep.
"So much so that anyone who wants to suppress personal firearams is objectively anti-female and automatically oppressive of women."
Indeed. Looking past leftist lunacies, there may actually be something we might call, "rape culture." If so, it is whatever facilitates rape. Among those things are disarming women or, perhaps worse, telling them that in a fair and decent world they would not need to be armed, but then neglecting to tell them we do not live in that world, but in this one.
Cambias says:
Jim Hurlburt: I know of one use. This past weekend there was an article in the Wall Street Journal about high-end cooking products, and apparently one of the most expensive brands of chef's knife is one hand-crafted from what is obviously Damascus steel (=wootz). According to the article, the guy who makes the very top-end ones used to do them only on commission, but there were too many orders so now he just makes knives and _auctions them off_ as he finishes them.
TomA says:
In evolutionary time, that which works best endures. If employing women as warriors had provided comparative advantage, then they would have evolved a muscular body type suitable for hand-to-hand combat. As it is, they have small torsos with large breasts, which implies that their greater utility lies in nurturing offspring. Present-day female morphology is a record of a few million years of selection that cannot be undone by a few ambiguous burial sites.
Chris Gerrib says:
Eric:
First, the Tor article has been updated (in text, not headline) to reflect a 10% figure for women in combat. (Based on what I've seen, the TV series "Vikings" seems to be following a similar 10% rule for offensive operations.) Second, I think there's a difference between "optimal" and "effective." It would be optimal to have all strong young men in your line of battle. This doesn't mean weaker and/or older men or women wouldn't be effective.
Third, as Stalin said, quantity is a quality all its own. You need enough bodies to fill a line, and at least some raids will consist of making an armed demonstration outside the target followed by some kind of negotiated payoff. (Danegeld was a real thing.)
it would have lost out to a culture that protected and used their reproductive capacity to birth more male warriors.

The current theory is that overpopulation was driving the entire Viking culture of raiding and emigration. If overpopulation is a problem, one of the ways to fix that is for women to have fewer children. In short, shieldmaidens may have been a feature, not a bug.
@Cambias
I knew that high-end knife makers use it, mostly these days for looks; again, the best alloy steels make better knives and are easy to get and (compared to the labor that goes into hand-making a knife) cheap.
I know of no commercial production use — the guy you spoke of is still doing one-off hand work, and while his knives are no doubt very, very good, as a practical matter there are mass-produced ones that are far more than adequate for preparing food. I currently use a ceramic knife — which is a marvelous thing for the task.
Quite admittedly I would like to have one of his knives — just not very many hundreds of dollars of want. And I'm certainly not criticizing him for finding a way to make good money at his art.
goth1856 says:
As a trucker I've had female co-drivers. They were great at the driving itself – shifting gears, backing up, etc. But when it came to following directions, road maps, and how the national (numerical) road system works, they were totally lost. One woman driver cost me half a day's work by going in the wrong direction after I had given directions on where to go while I caught some sleep. Imagine my surprise when I woke and saw that we were on the wrong road, over 200 miles from where we should have been. Our brains are def wired differently.
Tor.com! Now With 40% More White Male Privilege and Square-Jawed White Men Fixes All!!!!!! JOIN US ON 'N0-SUPREMACY' FRIDAYS!!!!
And remember: don't underreport the rape of 1,400 children – cuz "rape culture!" could be on the march in a neighborhood near you!
Vee Half Owlways Fot!!!
Lambert says:
I have read that the art of forging Damascus steel has been lost, most likely after the trade in wootz declined. What is called 'Damascus' is just fancy stripy steel.
TWS says:
Leif Erickson's sister took the sword from a dead Viking (he had been killed by a stone to the head, if I recall) and slapped her breast with the blade in defiance of the skraelings. I am sure that seeing several men killed with the same type of weapon, and now seeing this pregnant, big-boobed woman smacking herself with it, convinced them she was fierce and not likely to be hurt.
The reason it made it into the sagas is because it was rare. Look at the standards that are used to evaluate whether or not a woman can be a police officer (commonly called the 'Cooper Test'). These tests are designed for women to pass them at a rate no less than 4/5ths that of men. The test is adjusted for age. At no time do the women ever meet the standards for men in running, much less pushups. I think there is a tiny overlap in crunches between the twenty-year-old woman and the fifty-year-old man, but it's been years since I looked at it. When the first fire breaks out on a US Navy ship with too many women, we'll see how they do as firefighters. I think the good Col. Kratman has already mentioned the likely outcome.
Women are not built for combat. That's not a mistake of nature or a function of our nurture, it ensures that women will live to do the most important and difficult thing they can, bear and raise children.
> Unfortunately, women's liberation has probably hurt women more than it has helped them.
So you think that women are worse off than when they:
1. Were considered little more than the property of their husbands and fathers
2. Were legally subject to physical violence from their husbands or fathers
3. Could not own property
4. Did not have the same legal rights as men, such as the ability to sue in court
5. Could not vote
6. Had reduced rights to sue for divorce
7. Were largely unable to function in society without a husband or father
8. Had no right to control their sexuality in face of the demands of their husband
9. Were socially oppressed to the point where they were sluts if they had more than one sex partner, whereas men in the same situation were studs.
10. Were legally barred from certain professions
11. Were denied access to education
Not all of these applied in all countries or at all times, but that is what women's liberation has fixed for women. Some things that women's "liberation" has done, especially some recent, more extreme developments, has been detrimental to either women or men or both. But the idea that on the whole women's liberation has been a bad thing is just silly.
>But the idea that on the whole women's liberation has been a bad thing is just silly.
Some rather surprised social scientists have reported that measures of female happiness have declined, both absolutely and relative to men, since "liberation". So the counter-case isn't entirely crazy, on that level.
Myself, I prefer a world with un-oppressed women in it. But I have doubts that the ideology of sexual equality is biologically sustainable. I have written about this before.
I don't think women's lib has hurt women at all. I know damn well I'd trade some slippery-slope theory about a society that may never come true for the right to be treated as an equal before the law in my actual and real life in the here and now.
I think what we're talking about here is confused because of semantics. Third Wave Intersectionalists call themselves "feminists" but they are not. They are a racist sexist supremacist cult. They have nothing to do with feminism, or liberalism for that matter.
Yes, they are damaging women because in true Orwellian style, intersectionalists are co-opting terms to camouflage their weird narcissism, as witness the sub-cult of semantics at the Ministry of Tor, where the word "fought" or "racism" could mean almost anything, depending on skin and sex. Imagine America as baseball but with the strike zone changing every minute based on the race and sex of pitcher and batter and imagine America destroyed.
The recent SF anthology webzine issue called "Women Destroy Science Fiction", with its heavy-handed moronic irony, is a case in point.
A true title would've been "Third Wave QUILTBAG Intersectionalists Destroy Science-Fiction," and they'd have been right. There is no irony to THAT title, only bald-faced truth. Just look at the table of contents. The idea that it's just "the gals" is ludicrous. If that really was just "the gals" – half of all humans – I'd strip away a woman's right to vote in about 2 seconds, cuz destroying the country.
The winners of the Nebulas and Hugos this year were a celebration of intersectionalism – not SF, not a literary movement, not art, not feminism, not liberalism, not women, not the genre. It was a celebration of identity. Given the obvious reality of what happens to any cultural expression where identity or social position trumps talent, yes, "women" have destroyed SF, and mainstreaming such insanity into America will wreck the joint.
This is a cult who in actual and real fact do NOT prosecute real rape based on skin and prosecute NO rape based on skin. In typical PC madness, access to due process has institutionally and legally come to depend on your skin and sex, as witness the new law in California for colleges and the rapes in the U.K. So-called "feminists" led the charge on Ferguson and Gaza and sit on their Twitter hands about ISIS. Trust me, there'll be no #NotAllMuslims coming from this insane cult of endemic liars.
Random832 says:
> And if a pre-industrial culture has chosen to train more than a tiny fraction of its women as shieldmaidens, it would have lost out to a culture that protected and used their reproductive capacity to birth more male warriors.
What if it had other advantages that outweighed this cost? Something has to actually win against it for it to "lose out" – for example, to get rid of the mammalian (I thought it was all vertebrates, actually) eye, all mammals would have to go extinct.
>What if it had other advantages that outweighed this cost?
If there were any substantial net advantage here, the historical evidence for women warriors wouldn't be as thin and fragmentary as it is.
> Some rather surprised social scientists have reported that measures of female happiness have declined, both absolutely and relatively to men, since "liberation"
I'd like to read those reports. "Reports" from social scientists are always dubious, especially about metrics like "happiness" that are both extremely hard to measure, and extremely easy to tweak to get the result you want.
I'm also curious why "liberation" is in quotes. Do you think that lifting some of the legal impediments I listed is somehow non-liberating, irrespective of the consequences for happiness?
>I'm also curious why "liberation" is in quotes.
Because some (not all) of the claims and causes travelling under that banner have not been liberating at all, to women or anyone else. It's no longer a term I feel I can use completely without irony. You have Third Wave difference feminism and Fail Burton's intersectionalists to thank for this.
@Roger Phillips
"Only if you take "no shortage" to mean "a few exceptions"."
I think both Elizabeth I and Catherine the Great drove considerable numbers of men onto the battlefield. So did quite a number of wives of Chinese emperors.
In European history we know of examples of women fighting off armies. Our local hero is Kenau from Haarlem. We also know of Jeanne d'Arc. I am too lazy to sift through the whole of European history for more lesser-known examples.
"I would like to see you post ONE thing that's more than fantasies of relevance from the losers of history."
Women the losers of history? That is a strange worldview.
"Next you'll be telling me about the great contributions of Africans to technology and science."
Do you mean "Africans" as in people from the continent or "Africans" as in black people from south of the Sahara?
North African people have contributed a lot to Western development.
I would like to see that social science report. I think the conclusion you draw here is bogus.
If there is one thing that has come out of happiness research, it is that more control over your life leads to more happiness. That holds for men and women in every culture.
If someone could please let Keynesians know, we might get somewhere.
> Because some (not all) of the claims and causes travelling under that banner have not been liberating at all,
Ah, OK, well FWIW, I agree with that assessment.
>>I have read that the art of forging Damascus steel has been lost.
Clearly false. Case in point: the blacksmith in the video. Also, while I have never worked at it (my one attempt with O6 graphitic steel was a failure), I expect that, given a real incentive to put in the effort, I would be able to make a functional knife in 40-200 hours of work. The second one would be better and much faster; the twentieth would probably be a pretty nice weapon/tool.
"Because some (not all) of the claims and causes travelling under that banner have not been liberating at all, to women or anyone else."
What is wrong with:
http://www.merriam-webster.com/dictionary/liberation
lib·er·a·tion
noun \ˌli-bə-ˈrā-shən\
: the act or process of freeing someone or something from another's control : the act of liberating someone or something
: the removal of traditional social or sexual rules, attitudes, etc.
The Paradox of Declining Female Happiness
By many objective measures the lives of women in the United States have improved over the past 35 years, yet we show that measures of subjective well-being indicate that women's happiness has declined both absolutely and relative to men. The paradox of women's declining relative well-being is found across various datasets, measures of subjective well-being, and is pervasive across demographic groups and industrialized countries. Relative declines in female happiness have eroded a gender gap in happiness in which women in the 1970s typically reported higher subjective well-being than did men. These declines have continued and a new gender gap is emerging — one with higher subjective well-being for men.
They are my intersectionalists only by default. I have no idea why people in the SFF community who oppose what's happening continue to ignore the evidence right in front of their eyes. These morons clearly look up to Donna Haraway and her "Cyborg Manifesto," Joanna Russ and her "How to Suppress Women's Writing," any sacred word Octavia Butler ever wrote, and Peggy McIntosh's "Unpacking the Invisible Knapsack," and they actually use the word "intersectionalism" themselves.
None of that defaults to any "liberalism" I ever heard of, or feminism, unless dreaming of a world without men and a queer future and putting the genocidal colonial privilege mark of Cain on straight white men is liberalism and feminism nowadays. It may be, but changing the name of a horse to a donkey doesn't create a donkey, nor will it run fast.
Is the neo-Nazi view of white supremacy and Jews as the world's great evil liberalism then? Forget identity, race and gender, donkeys and horses; in principle Intersectionalism and neo-Nazism are one and the same.
Orwell once wrote a little book about not being able to make such comparisons. But there are no rat cages on our heads and no excuse for being conned by morons.
Well, I don't know that 'more control leads to more happiness'; or perhaps I should phrase it this way: not all happiness is equal. Give a woman the right to divorce her husband, take his money, have random sex with random strangers, use drugs and alcohol while pregnant, hell, abort her children.
She sure has lots more control over her life, and she's likely to be an unhappy shrew with messed-up kids. Or let's say she is happy. I doubt her happiness, or her children's (those that are living), translates to well-being and a happy life for all.
My great grandmother was happy her entire life, my grandmother was mostly happy, my mother almost never, and she was a dedicated feminist with extensive post-grad education and all the correct political attitudes.
When I compare the spirits and happiness of my great grandmother and her sisters (who were adults before suffrage) with those of my generation or younger generations, there is no comparison. They were happier before Susan B. Anthony and 'liberation'. Their lives were better, they had happier kids, and they lived better, more fulfilling lives despite not having liberation. Now, your mileage may vary, but society was much happier before women were given agency in every facet of their lives.
Sorry for kind of being off topic here.
@TWS
"Give a woman the right to divorce her husband, take his money, have random sex with random strangers, use drugs and alcohol while pregnant, hell abort her children.
She sure has lots more control over her life and she's likely to be an unhappy, shrew, with messed up kids."
Both men and women have these rights nowadays. They hardly use them. Such are the wonders of control over your life: You can do as you wish. Say, by not doing all these things you mention. By not having control over your life, you have to endure that others enforce all kind of nasty things onto you.
"When I compare the spirits and happiness of my great grandmother and her sisters (who were adults before suffrage) with those of my generation or younger generations I do not see any comparison."
Sorry, but the plural of anecdote is not data. Statistics tell us a different story. There are nice tables and explanations in the report below.
http://www.earth.columbia.edu/sitefiles/file/Sachs%20Writing/2012/World%20Happiness%20Report.pdf
Winter, you apparently missed the part where I mentioned that happiness does not equal well-being or good choices. I'm sure my children would be happier with more 'control' over their lives. They'd go to sleep when they wanted, eat what they wanted, etc. But neither happiness nor satisfaction equals good choices or good outcomes.
If you really think the world is a better place than it was when most people did not have to lock their doors or cars, worry about their kids playing outside, or find condoms and needles in the schoolyard, good for you. At least somebody is happy with the world the feminist/progressive mindset has created.
Research shows that more people are happier in modern, liberal, feminist countries (Scandinavia, Netherlands, etc.) than anywhere else in the world.
Happiness is most certainly correlated with well-being.
You mean the "modern, liberal, feminist countries" that have extremely low birthrates and so are set to die off in the next century or so, unless they go back to the pre-feminist practice of having babies?
Just a couple of odd factoids to throw into the pot:
Women fighters include the "Dahomey Amazons" (19th century). They used muskets, so swords were not a factor.
There was a Scientific American article about Damascus steel swords published a few years ago. It mentioned that the steel came in the form of round cakes from India. The source dried up some time in the 1800s.
> You mean the "modern, liberal, feminist countries" that have extremely low birthrates and so are set to die off in the next century or so […]?
Is a fertility rate of 1.7 to 1.9 children per woman "extremely low"?
Do you have reasons to believe that lower rates in, say, Bosnia, Romania or Vatican City are caused by significantly higher influence of feminism in those countries?
But I have doubts that the ideology of sexual equality is biologically sustainable.
The ideology of "sexual equality" is feelgood bullshit. What next? Children and adults are equals? Dumb and smart people are equals? Strong and weak people are equals? I don't take you for a nihilist.
Did you make any attempt at all to understand the text you quoted? IMAGINE if it were up to people like you to interpret historical evidence. Discovery of the hot dog contest leads to belief that obscene displays of competitive eating were a common 21st century habit. "No shortage of examples!" your braindead 25th century spawn will say. Pop-science article reads: "hot dogs responsible for 21st century obesity epidemic?"
No, what's strange is that your idea of "reading" is to skim for a few juicy keywords and guess what was meant. So the fact that "women" and "losers of history" appear somewhere in the same vicinity is in your illiterate mind grounds for sticking them together into one sentence and treating that sentence as though it had come directly from my mouth.
It seems my influence is finally starting to rub off on you. So far as I can remember this is the first honest attempt you've made to understand something I wrote. But you still fall short by trying to "catch me out". Which is why you present two possible interpretations then talk about the unfavorable one. If you think about what I said, and think about all the possible interpretations you will find your query becomes redundant. You will not advance at all until you learn that the purpose of reading is to understand what the writer meant. To see what was in his brain at the time he was writing. That is why I call you a quibbler.
Is a fertility rate of 1.7 to 1.9 children per woman "extremely low"?
Yes, when the replacement rate is 2.1, and with social security/pension plans which are unsustainable with a shrinking population.
Low birth rates can have different causes. I was just commenting on the "modern, liberal, feminist" countries.
Brian Marshall says:
To go off topic in a slightly more on-topic direction, I quote Kipling – from "THE YOUNG BRITISH SOLDIER". (If I have quoted this before, I apologize.) Women can be (or at least have been) scary on the battlefield after the battle was over…
When you're wounded and left on Afghanistan's plains,
An' the women come out to cut up what remains,
Jest roll to your rifle an' blow out your brains
An' go to your Gawd like a soldier.
In my parents' generation, the family archetype predominantly consisted of fathers working and mothers staying home to care for the kids. Typically, male/female roles were very differentiated, with little overlap.
Then things changed as a result of the modern women's liberation movement, and more women (including mothers) started working and pursuing careers. In addition, out of necessity, many husbands started helping out with home and child rearing tasks.
One consequence of this change was that men and women acquired differing perspectives whenever they compared themselves with their own fathers/mothers. Men frequently acquired high self-esteem because they saw themselves as doing much more than their fathers ever did, e.g. supporting the family financially while also taking on significant additional roles at home.
For women, the opposite was occurring whenever they compared themselves against their stay-at-home mothers. They typically viewed themselves as doing less than their mothers and often felt a sense of guilt when work conflicted with family obligations. This effect may explain why some modern women feel less happy in their personal lives, even though they may be more empowered.
That individual women are famous for such feats, while men are less so, speaks more to the scarcity of them than it does to their presence. It's the same reason that crashes of small aircraft are newsworthy, while crashes of automobiles are not: far more of the latter happen.
@ Jay
The Marie Curie effect.
William O. B'Livion says:
Shenpen
http://www.havokjournal.com/fitness/2014/3/25/military-and-special-operations-fitness
Strength matters in ALMOST EVERYTHING.
Being in good condition will make everything easier.
Simon Smith:
A very well trained woman who is defending home, hearth and family? Probably. Assuming the male is *unarmed*, not just not as armed. And there are techniques for unarmed against a sword, which is almost the definition of "a bad day".
Anything less than that and an aggressive trained male will probably overcome her.
JonCB:
You're joking right?
And endurance, and in soaking up the little injuries and insults that go along with fighting with hand weapons.
When someone comes with an overhand blow, putting most of what they have behind it, and you block, I'm not saying it necessarily hurts, but there's some sting there. And you get bruised here and there. The stronger you are, the more of this you can handle.
And ultimately if I'm significantly stronger I can find ways to use that (like locking your weapon up and just busting you in the face with my fist).
Will Brown on 2014-09-03 at 06:24:50 said:
One of the things I've noticed over the years of martial arts training (and even more reading about the subject) is that most of the time you work on defeating the opponent's attack, not making an attack yourself. That is the principal difference between simply hitting someone and fighting them.
That's because generations of martial arts in America have been sold to American Soccer Moms as "self defense and discipline". Not "How to launch an attack that will break bones, rend flesh and spill blood".
Jay Maynard
I'm no use in a hand-to-hand melee, but why get hand to hand when you can reach out and kill someone before they have a chance to get personal?
Because unless you go hunting bad guys you're most likely to meet them at close ranges, and they're likely to be the one launching the attack.
https://www.youtube.com/watch?v=I4EqqSH871A
Which one do you think you'll be? The guy in the blue shirt, or the guy in black that got the barrel shoved in his face?
The other reason is that the world isn't binary. I shoot when I can, and I carry ALMOST everywhere.
But the world isn't binary. There are a lot of times when physical force might be needed, but escalating immediately to lethal force might be unacceptable (the example I usually use is "Aunt Sally" having too much sherry at a family function and getting a little too physical. Yeah, you'd LIKE to shoot her. Heck, 90% of the people there would, but there's no articulable threat to life or of serious physical injury, and, not being in Texas, "she needed killing" isn't useful. So you'd like to be able to take care of that sort of problem).
Shenpen:
Most people who can do a significant "one-rep max" can do pretty well in a 60-second or 2-minute pushup test.
When looking at work done by hand there are always three aspects: one is the actual muscular effort, another is the training of the neuromuscular system (aka "neuromuscular adaptation"), and the third is flat-out skill.
In the martial art I study–based on Japanese battlefield techniques–there is a concept called "living off the mist"–that on a battlefield you need to preserve strength. It's the same when shoveling coal or digging a ditch. Yes, they are physically demanding, but there are ways of moving the body–even when bucking bales or stacking wood–that take less energy. Part of this comes from the neuromuscular adaptation. The rest is just how you move. "Taijutsu," as we call it.
http://www.havokjournal.com/fitness/2014/3/25/military-and-special-operations-fitness
That's an excellent article. I've never been SOF, but it's congruent with what I've learned from a quarter of a century of martial-arts training. Anyone who didn't read it the first time the link was posted should now. Among other things, it indirectly explains why all but a tiny percentage of females are hopelessly outclassed on the battlefield.
Sigh. It confirms what I suspected from other sources. I have the upper-body and core strength required to hack it in that environment. And the warrior spirit, too, I believe. What I do not have is the minimum agility/mobility required – I just can't run well enough. And never will, dammit.
Oh well. I come a lot closer to making the physical quals than most guys pushing 56 could even dream of. That's something, I guess.
>That's something, I guess.
It is. And don't forget about your intellectual and professional accomplishments. Your writings have influenced me, and I'm certainly not the only one who can claim that.
>And don't forget about your intellectual and professional accomplishments.
I don't. But I internalized the Heinleinian ideal of becoming the omnicompetent man early, so even knowing I'm really good at some things doesn't prevent some disappointment when I find there are other skills I respect that I'll never be A-list at.
There's a LOT more than just the upper body strength and such.
When I was in boot camp I came >< that close to maxing the Marine Corps PFT. I ran a 19:30 three-mile instead of an 18-minute one. I think I might have come close one other time, but was too busy retching (and I was a smoker at the time).
There has never been a day in my life I could meet the standards of the SEALs or Delta. Maybe MAYBE I could have made it in a Ranger battalion physically, but mentally I never had the focus or the self discipline.
As Neal Stephenson put it:
"Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world. If I moved to a martial-arts monastery in China and studied real hard for ten years. If my family was wiped out by Colombian drug dealers and I swore myself to revenge. If I got a fatal disease, had one year to live, and devoted it to wiping out street crime. If I just dropped out and devoted my life to being bad."
This is actually not true. Most men maintain this delusion WELL into their 40s.
I've met these guys. I've shot with these guys.
Hell, one of the groomsmen at my wedding quit his job and went back in the army (several years after 9/11) and at 39 went through SFQC. It was literally the case that there were people there young enough to be his son…If he'd started having sex when he first went to college… (my groomsman wound up General Counsel and Commercial Director for Freescale in EMEA for a while. Damn that was a high powered party).
I could run with them, but not for far.
>This is actually not true. Most men maintain this delusion WELL into their 40s.
I was a late bloomer. I was only beginning to discover my inner badass at 30, and I wasn't then nearly as strong as I would later become – the bulk muscle popped out around 32 or 33.
The upside, I suppose, is that I'm a capable – even intimidating – hand-to-hand fighter now at an age when most men are hitting the physical skids. Gonna suck when my connective tissues lose their elasticity, though. If I'm lucky I'll get another decade…
Foo Quuxman says:
Is it necessarily an either-or proposition on liberation (the real kind, not the insanity)?
To give an extreme example from our favorite rooster: since blacks are inferior as a group, one can either claim that they are equal and then genocide any non-black who does better, or claim that every individual black is about to destroy civilization and must be actively held down.
Or to come back to the current topic: why does the fact that affirmative-action-style policies have thoroughly screwed up the male-female dynamic necessarily mean that the proper solution is for women not to have agency?
In other news, the way to solve a food shortage due to price controls is to nationalize the agricultural industry.
>Yes, when the replacement rate is 2.1, and with social security/pension plans which are unsustainable with a shrinking population.
Ah, pension plans. I thought you said "dying off in the next century".
Even at below-replacement fertility, a population may (and usually does) continue to increase for quite a while, or hold constant, e.g. when the effects of sub-replacement fertility are compensated by increased life expectancy, among other things.
That doesn't fix the greying population problem, but it does mean we shouldn't expect to get rid of the Dutch by next century.
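That arithmetic is easy to sanity-check with a toy cohort model. Here's a minimal sketch in Python (the 25-year bands, the starting numbers, and the `project` helper are all invented for illustration, not taken from any demographic dataset):

```python
def project(cohorts, tfr, generations=8):
    """Project total population across 25-year generations.

    cohorts: head count per 25-year age band, youngest first;
             everyone lives exactly len(cohorts) bands.
    tfr: lifetime births per woman (replacement is roughly 2.1).
    """
    totals = [sum(cohorts)]
    for _ in range(generations):
        # The youngest band ages into its childbearing years this step;
        # roughly half of each cohort are women.
        births = (cohorts[0] / 2) * tfr
        cohorts = [births] + cohorts[:-1]  # everyone ages one band; the oldest dies
        totals.append(sum(cohorts))
    return [round(t, 1) for t in totals]

# A young population with fertility of 1.8 -- below replacement:
print(project([40, 30, 20, 10], tfr=1.8))
# [100, 126.0, 138.4, 137.6, 123.8, 111.4, 100.3, 90.3, 81.2]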
You've absorbed the standard equivocation here, Eric. A lack of oppression means that women have free choice about what role to take on, and a sizeable proportion of women's choice is something similar to a traditionally-roled relationship with a man. I've been informed that men who identify as dominants on sites such as Fetlife tend to get swamped with responses from women (in contrast to even the dynamics at traditional matchmaking sites), and Madison Young is a vocal proponent of the claim that freedom to choose includes a voluntary submissive role.
In contrast, I see widespread and blatant hypocrisy in the communities that self-apply labels such as "feminism" and "women's liberation", who say that they want freedom for women but then condemn those Auntie Toms who choose to be full-time mothers and homemakers instead of dropping their kids off to be raised by strangers at 6 weeks old.
There's no contradiction at all between fighting against actual genuine oppression and simultaneously saying that many women would prefer a more traditional arrangement and that that choice is okay.
@PapayaSF
> The Paradox of Declining Female Happiness
I read this paper, it was interesting, and it leads me to three comments:
1. The methodology seems sound although sample sizes were not apparent to me.
2. The conclusions did not obviously follow from the data from what I could see (there were a couple of early outlier peaks in female happiness, and after that it all looked like statistical noise)
3. The data started in 1970, which is pretty significant because it was around then that women's "liberation" moved from being dominated by the removal of real legal and social impediments of the kind I listed to more affirmative-action-type things, which I do not favor.
So although it is interesting, I don't find this paper particularly supportive of the original claim that liberation has hurt women more than helped.
VD says:
"Give me a 180-200lb guy that can squat, deadlift, press, clean, and snatch close to the "accepted" standards for athletic performance. Add in cardio to his regimen – sprints, preferably. Every once in a while, with safety in mind, force him to work longer than 40 minutes."
It seems I missed my calling. I weigh 185, my max bench is 330, and I'm a former NCAA D1 sprinter. I still play veteran's soccer at a fairly high level; in the off-season I train to be able to complete our 40-minute halves.
On the other hand, I'd really rather not have people shooting at me.
"Why does upper body strenght matter much?"
It's connected to speed and reaction time. Stan Lee's Superhumans or whatever it is called tested a freakishly strong guy who could rip phone books in half. It turned out that his muscles were unusually fast. I noticed that I lost my ability to bench twice my weight around the same time I lost my sub-11 speed in my mid-30s. I could bench more at 170 pounds than I can at 185 and it's not because I put on a lot of fat. I lift more on most exercises, but the explosive bench press is gone.
Presence of women among Viking settlers hardly startling, however: "IF you think you're going to sneak off and see that trollop in Whitby, Bjorn Bjornsson, no better than she should be and no knickers either even if they haven't been invented yet you've got another think coming. When do we sail?"
UK Author Liz Williams on FB
Question for you Eric, with respect to your female friends. I think one advantage that women tend to have is that they are better multi-taskers, and I think for good evolutionary reasons.
My experience is that although I am not as strong as most of the guys, I do relatively better with regard to multiple attackers, because I seem to be able to hold their positions and anticipate their moves better than the guys do.
That might just be a peculiarity of me, but I'm interested to know if you have seen anything similar.
I don't do swords and stuff much, I am talking hand to hand combat here.
FWIW, I often wonder about this with respect to football. A quarterback is much less about body strength and much more about broad situational awareness, something I would have thought women would have an advantage in. However, women, insofar as they play football, are almost always kickers. So I don't know what that says, really. I suppose QBs are a target, and so women are less robust to take the hits.
>That might just be a peculiarity of me, but I'm interested to know if you have seen anything similar.
I have. I do think women average better at multitasking than men do. Also, better average reaction time.
Also, better G tolerance. Decades ago, some military planners looked at these facts and thought "Hmmm…we ought to be able to train women into combat pilots that would beat men, on average". It didn't work out. Aggression and 3D visualization turn out to be more important.
As I've mentioned before, Orwell warned us of the sheen of semantics. As an example, (#notallmen) I think many would look at the last 100 or even 50 years of "feminism" in America and find it supportable in the beginning and insupportable now. What's changed? I haven't. Has the word changed? It's been hijacked, and with intent. The irony there is intersectionalist "feminists" reject the previous first and second waves yet keep the name. In fact these PC gals have thrown in the kitchen sink and kept the word.
Feminista tyrant and new Deanette of SF John Scalzi linked us to a PDF as he asked us to "bone up" on "intersectionalism." As "vectors of oppression," it lists "age, attractiveness, body type, caste, citizenship, education, ethnicity, height and weight assessments, immigration status, income, marital status, mental health status, nationality, occupation, physical ability, religion, sex, sexual orientation."
That reminds me of the Orwellian Monty Python sketch about "the first man to cross the Atlantic on a tricycle. His tricycle, specially adapted for the crossing, was ninety feet long, with a protective steel hull, three funnels, seventeen first-class cabins and a radar scanner."
So now "feminists" sexually, racially and institutionally harass straight white men and it's just a coincidence their behavior mimics neo-Nazis to a tee. Even funnier, if you disagree with the attacks and say something, you are a misogynist, pro-oppression and harassing women online, cuz these mentally deranged women are the Pope for all women on Earth. Even funnier, anyone who disagrees also defaults to "right wing" and "conservative." Just ask neo-SF wizard John Scalzi about the "frothosphere" we apparently belong to by fiat.
After a few million years of evolution, men and women have characteristically different body types. This outcome is neither bigotry nor intelligent design; just plain old selection-driven advantage. The fact that the sexes have different morphology implies that the species benefits from these differences, which include superior upper body strength in men. This trend is likely to continue because both men and women tend to make mating decisions based, in part, upon appearance selection. Women are choosing to mate with big, strong men. Men are not choosing to mate with big, strong (male-looking) women.
Ms. Boxer:
Almost no one can multitask effectively (I think something like 1-2 percent can actually do it). Women who think they're good at multitasking *tend* to wind up in positions where the work they do allows fast "task switching"–the "stack" involved in a particular task is relatively short, so they can push and pop fairly quickly (this is why some of my posts get disjointed. I get popped in the middle of writing one and don't quite get back on track).
BTW, this is true of guys as well. Systems Administrators like to talk about "multitasking", but really they (we) are just failing to commit totally to any one task.
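The push/pop metaphor maps onto a literal stack. A minimal sketch (the `Worker` class below is my own toy illustration of the metaphor, not anyone's model of cognition):

```python
class Worker:
    """Toy sketch of fast task switching: an interruption pushes the
    current task onto a stack; finishing a task pops the previous one.
    If each task carries little context, pushes and pops are cheap, and
    from the outside the rapid switching looks like multitasking."""

    def __init__(self):
        self.current = None
        self._suspended = []

    def interrupt(self, task):
        if self.current is not None:
            self._suspended.append(self.current)  # shelve the old context
        self.current = task

    def finish(self):
        done = self.current
        # Pop back to whatever was interrupted, if anything.
        self.current = self._suspended.pop() if self._suspended else None
        return done

w = Worker()
w.interrupt("write a blog comment")
w.interrupt("answer a ticket")  # the comment gets pushed mid-sentence
print(w.finish())               # answer a ticket
print(w.current)                # back to: write a blog comment
```

The deeper the suspended "stack" for a given task, the more context has to be rebuilt on each pop, which is exactly why deep-focus work tolerates interruption so badly.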
Ms Boxer:
A quarterback is much less about body strength and much more about broad situational awareness, something I would have thought women would have an advantage in. However, women, insofar as they play football, are almost always kickers. So I don't know what that says, really. I suppose QBs are a target, and so women are less robust to take the hits.
Insofar as I understand American football, the QB is a legitimate target, but a kicker is not. There is a specific penalty for "roughing the kicker," mostly because if you nail someone who's got their ankle extended over their head, you're very likely to break something.
Fail Burton: So then, "vectors of oppression" more or less translates as "anything that humans use to make judgments about other people"?
OK, to be fair, it doesn't list "behavior" and "intelligence." But what does that matter? If I criticize Obama or Mike Brown for their actions, I can still be called "racist." If I criticize Nancy Pelosi as unintelligent, I can still be called "sexist." So, in effect, anything can count as "oppression," which means nothing does.
>So, in effect, anything can count as "oppression," which means nothing does.
Charles Lutwidge Dodgson, updated:
'When I use a word,' Social Justice Warrior said, in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less.'
'The question is,' said Alice, 'whether you can make "oppression" mean so many different things.'
'The question is,' said SJW, 'which is to be master — that's all.'
@Fail Burton
> So now "feminists" sexually, racially and institutionally harass straight white men and it's just a coincidence their behavior mimics neo-Nazis to a tee.
But surely you would agree that there are a small number of men who actually do treat women as stupid, incapable of rational thought, and nothing more than sex objects? I am sure that there are a small number of men who would deny women the vote, or think that it is OK for a husband to force his wife to have sex with him, or think that beating her is perfectly acceptable.
So feminists have radical assholes and the men's movement has radical assholes. I don't judge all men's movement people as if they were misogynist pigs who want to chain me to the kitchen sink and keep me quiet, barefoot and pregnant, notwithstanding the fact that some small number of them are. Why should the men's movement judge reasonable women (and men) who want to advocate for equal rights for women, or encourage change in social structures and attitudes that are detrimental to women?
We are all adults here, can't we have a civilized conversation where two people from two ends of the spectrum eschew the extremist assholes from their group and have an honest conversation about the legitimate concerns that both ends have?
I have several friends who are practicing Muslims. I have no fear that they are going to cut off my head, despite the horrible monsters that roam free in Northern Iraq; and despite the recent situation in Rotherham, in England, I have no fear that they will kidnap me and traffic me as a sex slave.
Yet it seems that whenever this subject comes up here the feminists always are portrayed as being characterized by their most extreme wing…
I don't know why I am even engaging in this, because this conversation always ends up badly and unpleasantly for me. I'd rather talk about programming, politics or feline behavior. So perhaps I should stick to that.
Jessica, that's all valid to a point, but the symmetry isn't perfect. It's been true for at least a generation now that left extremists have far more power and influence than right extremists. Government, the education system, and the mainstream media have far more of the former than the latter. You'd have a hard time finding anyone in those areas who could be connected to Nazis or the KKK or who want to disenfranchise women, but there are plenty on the other side with strong connections to Communism, minority-centric racial hate groups, or outright misandry.
John D. Bell says:
@Jessica Boxer –
Very possibly because the "most extreme wing" is dominating the public discourse, either not realizing that their extreme positions are not representative of reality, or worse, knowing that they speak nonsense, and wishing to foist that worldview on all the rest of us.
It seems to me that it is very similar to the 'poisoning of the debate' that has occurred in the 2nd Amendment arena – the demonstrable fact that the anti-gun people are lying hasn't stopped them from continuing to try to whip up the middle ground with their rhetoric.
bpsouther says:
> FWIW, I often wonder about this with respect to football. A quarterback is much less about body strength and much more about broad situational awareness, something I would have thought women would have an advantage in. However, women, insofar as they play football, are almost always kickers. So I don't know what that says, really. I suppose QBs are a target, and so women are less robust to take the hits.
Throwing a football 50 yards with speed and accuracy is very much about upper body strength.
That doesn't diminish the need for broad situational awareness or a host of other required attributes for the job. There are very few Tom Bradys in the world.
I think that the reason may be that, in the civilized world, for the most part, "good feminism" – re: voting, violence, etc. – is a done deal. There are men who think that they can punch their wives, but this is now considered to be a crime. The best of feminism is history.
Today, women who are actively into feminism are more likely to be the "Third Wave" types – nuts.
Iconochasm says:
Jessica, to add on to what Brian just said, some 90%+ of Americans support economic, intellectual and political equality for women. Fewer than 20% will self-identify as feminists. Essentially, the good feminists have decided that the remaining issues don't warrant a massive time- and energy-consuming movement. What remain, in the diminished cohort claiming the term, are the nut jobs. I've seen the term "evaporative cooling" used to describe the way cults become *more* radical after a huge, embarrassing failed prediction, because the more moderate and sane members get fed up and leave, increasing the relative crazy in the group that persists. Feminism has experienced a very similar phenomenon as it has achieved its goals. Almost all women will fight for the right to vote, or to divorce an abusive husband. Comparatively few will fight for the "right" to be considered utterly incapable of moral agency after half a beer.
More like "evaporative condensation," because they don't get cooler, they get more dense. ;-)
A related process happens when reform movements succeed. They rarely just declare victory and disband. Instead, members who wish to be "at the forefront" just move the goalposts. Hence the feminist movement has gone from legitimate protests (women simply not allowed to have some jobs, etc.) to things like assuming that anything that's less than 50% female is proof positive of discrimination.
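The "evaporative cooling" dynamic is easy to see in a toy simulation (the scores, the distribution, and the exit threshold below are all invented for illustration):

```python
import random

random.seed(42)
# Give each member a "radicalism" score; most people are moderate.
members = [random.gauss(0.5, 0.2) for _ in range(10_000)]

def embarrassing_failure(members, exit_threshold):
    """After the embarrassment, the most moderate members leave."""
    return [m for m in members if m >= exit_threshold]

def mean(xs):
    return sum(xs) / len(xs)

print(len(members), round(mean(members), 2))  # full group, mean near 0.5
members = embarrassing_failure(members, exit_threshold=0.45)
print(len(members), round(mean(members), 2))  # smaller group, higher mean
# Each exodus of moderates leaves the remainder more extreme on average.
```

No one in the group has to become more radical for the group's average to climb; the selective departures do it on their own.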
@John D. Bell
> Very possibly because the "most extreme wing" is dominating the public discourse, either not realizing that their extreme positions are not representative of reality, or worse,
Which public discourse? The mainstream media? I don't think they even matter anymore (partly because people are tired of all that nonsense). The government? You really think that is the worst thing about the government? It is a flyswat. I agree with you about schools, and I have expressed my concerns about how schools deal with all kids of all genders.
But if you guys think the feminist game is over, that we won and it is time to move on, I respectfully disagree. I am not appealing to the legislature (unless it is to get rid of some of the silly "pro-women" rules that are actually anti-women). No, I am talking about some deep-running threads in the memetics of our culture that are quite appalling. If you doubt me, read this article; I can assure you it corresponds very closely with my experience.
http://www.psmag.com/navigation/health-and-behavior/women-arent-welcome-internet-72170/
I'm not by any means advocating legislative action to fix this horrible problem, but part of the problem is that the whole counter-meme that women are whiny-man-hating-bitches-who-are-stealing-our-kids-and-our-money-and-don't-put-out-except-when-they-want-something-from-you offers some sort of tentative justification for this sort of behavior. Do the divorce laws need fixing? Yes. I embrace the men's movement if you hack off the crazy loons. But just because men get a raw deal in some areas doesn't mean that women don't have legitimate complaints about certain prevailing attitudes in society, some of which are extraordinarily sexually dimorphic and destructive.
Let's talk about reality here, Jessica: there is no "men's movement" in SFF – none. What there is in larger America is small and unimportant. Were they to morph into a thing like intersectionalism, I'd have no use for them either.
Small numbers either way is not the issue but how successful such movements are in mainstreaming what is nothing more than hate speech. Even were such a movement to arise among men it would have no traction in America. They have no suffrage, Jim Crow or anti-homosexuality laws from the past to point to.
You talk about two ends as if there is an other side to intersectionalism and there's just not. In the SFF community there are no straight white males recommending literature because the writers are straight, white or male. The other way 'round is an avalanche. Straight white men are painted as bigots by default because of "privilege." Thus they can't help but mimic racist and sexist scenarios simply by existing. It's a very clever demonization tactic.
Go read SFF author Kate Elliott's Twitter feed from the past day about Martin Petto's recent article in the LA Review of Books for a taste of how one must not even dare unfavorably compare a woman SF author to a man.
The reason "feminists" are being characterized by their most extreme wing is because their dogma is the go-to orthodoxy today not only in SFF, but the Dem Party. They have actually got the Calif. legislature to act on their insane "rape culture" at universities and trigger warnings are being institutionlized into our classrooms. So men don't need due process; they're men – every one a potential rapist. Intersectionalists have no use for law, which is based on principle, the complete opposite of their identity-worship.
There are still traditional feminists, but as a movement they have been marginalized as out of touch. Intersectionalists hate Eve Ensler, of all people. Read what Lauren Chief Elk says about her. Chief Elk is fanatically anti-white and anti-male.
Adele Wilde-Blavatsky is a staunch feminist, advocating the abolishment of the terms "Mr." and "Mrs." throughout Europe. Even she's not radical enough. Read this exchange:
"Mikki Kendall ?@Karnythia 11 Jan Y'all I understand the urge to argue with bigoted white feminists. I do. But…can we leave me out of that fight today? Kthxbye."
"Adele Wilde-Blavatsk ?@lionfaceddakini 12 Jan Looks like the intersectional Left are becoming more misogynistic and racist in their Twitter insults than sexist men, so sad @jacobinism"
"Pippi Långstrumpf ?@pippi_esfumarse 11 Jan This bitch @lionfaceddakini has to be one of the dumbest white women I've ever wasted the time to read her tweets. Retweeted by Adele Wilde-Blavatsk"
As Blavatsky herself puts it, "Intersectionality has been hijacked."
We need not talk about Muslims. That is not an ethnic group, and it is not the same thing as being profiled for your skin, sex and sexual expression. Islam is an actual ideology. One can agree or disagree with it. One cannot agree or disagree with being a Jew, an Arab, a man, gay or a woman.
Here's the important point: intersectionalists stipulate that being straight, white and male in and of itself constitutes a white male supremacist ideology. Intersectionalists will not read Golden Age SF simply because of who wrote the books, not what's in them. Intersectionalists are almost insanely racist and sexist, and the core in SFF pushing all this is supremacist down to their bones.
>Jessica, to add on to what Brian just said, some 90%+ of Americans support economic, intellectual and political equality for women.
That may well be so (and I have no reason to doubt it), but I think the point is that you can hardly mention anything related to the economic, intellectual and political situation of women on this blog without triggering a flood of replies about the extremes of feminism, and only that.
Somewhat like you can hardly mention welfare or social security on this blog without a consequent litany about the evils of Stalin, and nothing else. Or express doubt about unlimited gun ownership without being called a supporter of government-organized genocide.
Hmm. Unpacking that article – part of me wishes Hess had put the studies up front rather than the examples; the examples were anecdotal and had the extra problem of being directed at journalists, who routinely get wide dissemination of what they write, and hence are not ordinary Internet users, regardless of what Hess believes – there's a potentially worthwhile experiment mentioned, in that it might be reproducible.
Anyone care to create and use an account with a female username, and see what happens in various chatrooms?
The closest I got to this was when I used to play World of Warcraft for several years. (From vanilla through the end of the second expansion.) I played equal amounts of male and female avatars, and the worst I got was being called "babe" for helping someone with quests while playing my female priest. Meanwhile, I got hassled a lot more on my male avatars. None of it was sexual, but it was violent.
In either case, it was extraordinarily rare. I'm led to wonder what chatrooms those U of MD researchers were in, and what they said, that they managed to get *100* bad messages a day. Something doesn't quite add up there. But again, if it's really that bad, it ought to be reproducible.
"…I am talking about some deep running threads in the memetics of our culture…"
Now you're doing it yourself, Jessica. You are conflating unquantifiable anecdotes and cultural threads with institutions, laws and ideology, as if they are a thing you can measure and in which you have enrolled me as a member.
I can do that with blacks if I want, or any group. It's called racial profiling and bigotry.
You are again acting as if two things are equal just because. The existence of an ideology doesn't magically create an opposition ideology in exacting measure.
As for the internet, the idea men aren't harassed is ludicrous. As always, intersectionalists resort to the magic trick of simply ignoring it or calling it "criticism."
SFF author and SFWA member Beth Bernobich recently Tweeted "I want LC (Larry Correia) and his fucking minions to die in a fire," and I've seen scores more vulgar and vicious insults of whites and men just from within the SFF community, and I have the quotes to back that up. An SFF blogger called Requires Only That You Hate made death threats against male SFF authors. It was dismissed as "performance rage." The moronic Hugo and Nebula nominated author who used that term scrubbed it from her website when I publicized it. Now ROTYH has been nominated for awards at WorldCon 2 years in a row.
In other words, it's a double standard wide enough to hold the grand canyon, and your link impresses me not at all.
There is no institutionalized and ideological "counter-meme that women are whiny-man-hating-bitches-who-are-stealing-our-kids-and-our-money-and-don't-put-out-except-when-they-want-something-from-you." It's anecdotes you've read and puffed up into what sounds like an actual cogent theory, like Marxism or intersectionalism, when it's not.
Sorry @Fail, but I don't know what SFF means, so can't really comment on it. I tried an acronym finder online, but I am assuming it doesn't mean "Scottish Fishermen's Foundation." Oh, and I don't know what "intersectionalism" means either.
And if you can dismiss the information on that link, which is very representative of the experience of a lot of female online participants — particularly in male dominated forums — then I'm afraid I don't have any other information to share with you.
"A related process happens when reform movements succeed. They rarely just declare victory and disband."
This is the March of Dimes problem. By the time victory is achieved, a lot of the leaders are dependent on the issue for their livelihoods, and can't see themselves doing anything else. Without The Issue, they realize they'd have to go back to burger-flipping.
Jessica, I just read the cyberbullying article about women being harassed with threatening messages via new internet-based technologies. I have no doubt that this new form of bullying is a serious concern for the recipients, but I don't see how this can be construed as a feminist cause issue. There is no practicable way to rid society of cowardly nutjobs (of either sex), and consequently pitching a bitch in broad terms unfairly paints all men as co-conspirators. That's not a road to a solution; it's senseless warfare. If the modern feminist movement wants to be taken seriously, then it needs to choose its battles carefully and come to the party with a strategy that is more than "maybe if I bitch loud enough, somebody will do something."
Yeah… You are always going to have guys with grudges against women, and the Internet makes it so easy… I don't see this going away, and I don't really see it as a Women's Liberation issue.
@Brian and @Tom
Sorry, you guys are missing the point. I'd characterize it as something considerably stronger than "bullying"; however, the point is that there is a structure in the memetics of society that allows this sort of thing, or accepts it. It is absolutely a legitimate women's rights issue to attempt to change that sort of way of thinking in society.
The men's movement objects to the way men are portrayed as hen pecked pathetic idiots on TV, the Raymond Barone syndrome. I completely agree with the men's movement on this matter. I find it a gross stereotype of the least appealing type of men. I don't think there should be a law to fix it, but I think there should be a movement to change the way people think about this stuff, and to object to it loudly.
The women's movement (and the men's movement) isn't just about changing the law, it is also about changing the memetics of society.
SFF = science fiction and fantasy.
http://www.thedailybeast.com/articles/2014/09/04/men-are-harassed-more-than-women-online.html
I'm generally not one to side with the more agressive side of the "Men's Rights" movement, but frankly women *do* wind up whinging more than men.
The male workplace has, at least since the 1950s (tales from my father) been the sort of place where rough language, scatological humor and sexual innuendo were common. It used to be that men of breeding didn't act that way around women, but that was because they had places–the club or the pub–where they *could* act like that, and men of coarser upbringing, well, that's just the way it was.
Nowadays those men of coarser upbringing aren't just the laborers and the rougher artisans; they're in every layer and niche of the workplace, from the docks to the board room. Have been since the 60s.
I can tell you straight up that 95% of the women out there have NO desire to be treated equally on the job, especially once they know what "equally" is. Men will *routinely* make comments about other men's sexual proclivities and abilities (we have one guy at work right now who is very open about being treated for an enlarged prostate. Jokes about finger waves, performance issues, etc., are common.)
It has been this way in 90% of the jobs I've had. In fact the ONLY place I've worked where there was none of this behavior was a rather well known non-profit in Sillycon Valley where everyone had their own office and barely spoke to each other.
Yes, women do get harassed online. So do men. I know this because back in the day /I/ abused the hell out of people who came in to "my" spaces and were impolite, stupid or otherwise transgressed local norms. On that one particular space I was in the /faq/ about it.
It is one thing to insist that there be (to the extent that biology permits) equality between men and women, and that there be parity in compensation for work done. It is another thing entirely to demand that men change their behavior because your sensibilities are being bruised.
I was raised differently than most men. I don't particularly feel the need to express my masculinity through dick and fart jokes and so on. I can even generally avoid getting into politics at work these days, as long as everyone else does. But if you're going to go there, I will too.
Of course then *I* am the one who's creating the hostile work environment?
"…unfairly paints all men as co-conspirators."
Right. That is what classic bigotry does. All blacks have some complicity in crime, all Arabs have some complicity in terrorism, all Jews have some complicity in financing.
"Privilege" is purpose-built to have the same effect, so is "rape culture." Radical intersectionalism needs to stop acting as if it is an arm of law enforcement but whose "evidence" are anecdotes, myths, rumors, innuendoes, lying, racial and sexual profiling and demonization tactics. What kind of "law" only targets straight white men?
Using terms like "memetics" is no different from using "privilege." In fact it is a term that represents something that cannot be measured or used to predict something but which is being used as if it is a rule. I can use "memetics" to assert anything if I target only one group.
Mycroft Jones says:
This may be of interest:
http://sciencenordic.com/what-vikings-really-looked
The skeletons reveal another difference between us and the Vikings: men's and women's faces were more similar in appearance in the Viking Age than they are today.
"It's actually more difficult to determine the gender of a skeleton from the Viking era," says Harvig. "The men's skulls were a little more feminine and the women's skulls a little more masculine than what we're seeing today. Of course, this doesn't apply to all skeletons from the Viking period, but generally it's quite difficult to determine the gender of a Viking Age skeleton."
She explains that Viking women often had pronounced jawbones and eyebrows, whereas in the men, these features were more feminine than what archaeologists are accustomed to when trying to determine the gender of ancient skeletons.
"The men's skulls were a little more feminine and the women's skulls a little more masculine than what we're seeing today."
What? You mean Viking men were not all craggy-faced hypermasculine stud-gods? Inconceivable!
Next you'll be telling me they didn't wear horned helmets. The very idea…
And getting back to the immediate topic at hand, it is about something we already knew: a clique within Tor.com acts as a propaganda arm for intersectionalism. There is no other reason to have this article, nor the recent one taking down Guardians of the Galaxy, nor Liz Bourke's relentless women-centric reviews that puff up SFF novels by women far beyond their actual status as art, nor the recent anti-white, anti-male podcast where Kate Elliott and N.K. Jemisin assert epic fantasy is white "comfort food fiction" enjoyed by white men because it "embraces white male power fantasies."
And again take note of the twinned acts intersectionalism always performs, as in the case of the article about Viking burials: it puffs up women at the expense of men, in this instance making a case for women warriors while remarking that men somehow didn't quite want to admit there were women warriors. Supremacy and bigotry always go hand in hand. Intersectionalists couldn't be happy only sweeping the Nebulas this year; they had to twist the knife by having a Hugo-winning and Nebula-nominated author Tweet "At @SFWA's #NebulaAwards, only one award went to a white male and that wasn't one of the ones voted on by the membership," and a panelist at that SFWA awards weekend Tweet "Not a single white man won an award tonight. OPPRESSION." Kameron Hurley won two Hugos for an article that falsely puffs up women's military history with an equally false assertion that men have erased that historic record.
Try and imagine a movement within SFF that has all-men Kickstarter anthologies, relentlessly promotes work by heterosexuals, and assembles lists of authors and editors who are white. Imagine them Tweeting "Yaay! Eurofuturism" and holding 2-day symposiums on ethnic European SFF. Imagine this movement creating demonization theories about why women are this and that, why non-whites are this and that, and why each is appropriating a male- and white-owned cultural creation. That's in fact what intersectionalism does daily within SFF, and that dogma has default support throughout all of SFF's core institutions.
@ Jessica – "the point is that there is a structure in the memetics of society that allows this sort of thing"
To my knowledge, there are no prominent male voices in society advocating for cyberbullying. Most men I know would kick the shit out of one of those degenerate assholes if the opportunity to do so arose. If your complaint is that not enough public opinion leaders are actively condemning this scourge, then I share your lament. We are living through a death valley of strong leadership in almost all venues of current life.
However, this is not a strong case for feminists to take on. Back in the 1990s, Bill Clinton was hailed as a feminist icon, and he happened to be an actual serial rapist. My guess is that most cyberbullies are true cowards rather than a tangible personal threat. But even if I'm wrong about that, a woman's best defense is still likely to be a sidearm and the skill to use it.
Of course, the flip-side of redefining cultural norms is that many women who would rather stay at home and raise kids are pressured into a stressful job just to fulfill the need to feel 'successful' and 'empowered'. (And for every woman who decides that she wants to raise kids rather than continue with work, another disparity is blamed on the 'glass ceiling'.)
As for whether feminism is crazy nowadays: I predict it soon will be. Today, only 20% of women identify as feminists, and with luck, sane women will cease to feel the need to identify as feminists any more than they feel the need to identify as supporting the emancipation of the serfs. Until then, I will still have some uneasy interactions with not-crazy friends who identify as feminists.
@William O. B'Livion
> The male workplace has, at least since the 1950s (tales from my father) been the sort of place where rough language, scatological humor and sexual innuendo were common.
Good, but it isn't the male workplace anymore. People, regardless of their gender, have a perfect right to be treated respectfully in their workplace. Sorry if that cramps your style, but it is nothing to do with gender, it is a basic expectation that everyone has. Lots of men don't like that kind of thing, and lots of women have mouths like a sailor.
However, that is an entirely different thing. I am talking about guys who go online and say "I know your address, I am going to come when you least expect it, torture you, rape you, then kill you." Is that a holdover from the "male workplace of the 1950s"? Should we girls just suck it up and laugh it off as some male sophomoric prank?
I forgot to mention that the Mikki Kendall in those Wilde-Blavatsky Tweets has been a ReaderCon SFF convention panelist and is well-known in SFF circles among the intersectionalist crowd. Think about her remark (and there are tons more) about being a panelist. She of course went nuts over Ferguson and even announced she was going. I don't know if she did.
Then think about the non-stop profiling of white folks by SFF author N.K. Jemisin, the fact Vox Day got booted from the SFWA over remarks about Jemisin, and then think about the phrase "compared to what?" If law operated like that in America, tall people wouldn't get prosecuted for burglaries – only short people. The word "racism" in the SFF community literally has no neutral meaning.
Simple comparisons are not available to identity freaks; they have no principles.
@TomA:
Bill Clinton was hailed as a feminist icon and he happened to be an actual serial rapist.
Under the laws of this country, that might actually be an actionable statement.
To my knowledge, there are no prominent male voices in society advocating for cyberbullying.
OK, I don't know what you mean by "prominent." I don't keep up at all with SFF, and had no idea what was apparently going on there, or even who any of these people who are being quoted here are.
But I have heard of Rush Limbaugh. And although I imagine that most people's definitions of cyberbullying have some of that amorphous "I know it when I see it" quality, if there were an objective cyberbullying standard that included all of these women, I would imagine that I could easily find examples of Rush being Rush that would fall under such a standard, as well as him egging on his audience to participate. I think there are also probably a few Fox News commentators who could similarly be caught by such a standard.
Of course, I'm sure that in person, Rush is kind and considerate, and overall much better to women than Clinton ever was…
@ Patrick Maupin – "Under the laws of this country, that might actually be an actionable statement."
Patrick, I can't tell if you're pretending to be an attorney or insinuating a cyber threat. Please use more clarity in the future.
In addition, you may want to go back and re-read Jessica's original post in which she linked to a feminist article on cyberbullying and nutjobs that were making direct threats of murder and rape to specific individuals. That is the context.
I don't listen to Rush Limbaugh, as apparently you do, but my guess is that he does not use his radio program to directly threaten murder and rape of specific individuals. That degree of cyberbullying is both serious and harmful, but it is still not in the same league as actual violence and rape.
Juanita Broaddrick went on national television and described in excruciating detail her rape by then Arkansas Attorney General Bill Clinton. Bill Clinton went on TV and said "I did not have sexual relations with that woman, Miss Lewinsky," which was later shown to be a lie. Believe whomever you wish, but numerous women are on record recounting how they were sexually assaulted by Bill Clinton.
Hardcore feminists are incapable of understanding how much damage they did to their cause when they chose Bill Clinton as a mascot. That was the point I was trying to make.
"insinuating a cyber attack" "Please use more clarity in the future". Yeah, interesting.
I don't listen to Rush Limbaugh, as apparently you do,
I see that I was so perfectly clear in that section that you know everything about my radio listening habits. Whatever.
No, it's pretty clear from that comment that Patrick does not listen to Rush, and instead has formed his opinion by reading selective and distorted accounts by political opponents.
>I see that I was so perfectly clear in that section that you know everything about my radio listening habits. Whatever.
So you aren't familiar with any of the feminist SFF crowd and what they've done that people are complaining about, but you feel well-enough informed to argue with people who are.
Then you trot out a moral equivalence prop but you choose one that you have no actual direct knowledge of, then actually get snippy with people who assume you'd at least use something you knew something about.
My Rush-listening experience is pretty limited, mainly when my father-in-law is around, but I have at least spent some time listening to his show. Rush does mock what he sees as foolishness, but he is surprisingly intelligent, almost witty, gracious and funny. There are reasons he has an audience.
I guess the general theme of this comment is, 'please inform yourself'.
Rush Limbaugh once tried to throw me off the fantail of an aircraft carrier. I sued him and was awarded Mars.
>Rush Limbaugh once tried to throw me off the fantail of an aircraft carrier. I sued him and was awarded Mars.
If this is parody, I'm not getting it. If it's not, what drugs are you on?
@ Jessica – "Should we girls just suck it up and laugh it off as some male sophomoric prank?"
Perhaps; buy a sidearm or marry a Viking. Bitching in a public forum isn't going to improve your personal safety. And to play the memetic infection game at the national level requires a lot of resources and know-how.
People just make up stuff about people they have decided in advance they don't like. That raises the question of why they don't like them in the first place. I may have mentioned recently that an intersectionalist Tweeted she'd need trigger warnings to read Golden Age SF she admittedly had never read. People who don't like people like Limbaugh or Larry Correia, or literature written during a certain era, can rarely tell you exactly why in a factual manner. However, they never stop asserting that disdain, no matter what facts they're presented with.
Within intersectionalism in SFF, the idea that old-school SFF was created by a colonialist and racist impulse is simply taken for granted. Quotes by Correia are never produced, nor is the great trend of racist and colonialist SFF stories it would take to make them the bedrock of a genre. The truth is we're just talking about prejudiced attitudes and outright bigotry and lying.
They're still talking about the "massacre" of 250 people at Lydda in Palestine in 1948, while ISIS may have done that many this week alone. Three women in Pakistan are victims of honor killings every day, and 1,400 kids were raped in the U.K., but that goes into a special intersectionalist file along with black slavery and Arab colonialism. Intersectionalists Tweet every video of anything done to a black person in America, but if you present the far greater number of videos of blacks beating the crap out of whites for no reason, you're a racist.
Nothing a bigot says ever makes any sense. All skeins of so-called "logic" lead back to whatever justifies the disdain and defamation of the bigot's intended target. Since that disdain precedes facts, it is no wonder intersectionalists seem as if they are living in an alternate reality. And it is a particularly mad reality. Who writes millions of words each year to buttress what is, after all, a very simple thing: supremacy and hatred? What kind of supremacists and racists don't know they're supremacists and racists? Even Nazis knew they just didn't like Jews. The answer is they wrote a lot to convince the mainstream, and that's what intersectionalists do. These people just don't like us – period. They don't like anything about us.
Who dislikes Baen Books based on myth? I understand prejudice is part of the human condition but it is particularly galling when it comes from people who stake a very loud claim that they have examined their bias and racism and you need to do the same.
If one can write, without a hint of reality in sight, that old SFF authors used to substitute aliens for black folks, why not just claim an SFF novel once tried to kill me, or that blue is red? The joke is that one Hugo winner has written that living in America is like being punched in the face, and a double Hugo winner announces she keeps a file for the FBI in case of her untimely demise, presumably by being dragged to death behind a pick-up truck.
I showed you how intersectionalists took a possible single female archer and not only made an army out of her, but asserted it was all a cover-up. Our FBI-ready Hugo winner won her Hugos for asserting the exact same thing, and the tone of the article in question about Vikings unsurprisingly does the same thing as well.
The racist web site MedievalPoC asserts there is an ongoing cover-up to hide PoC medieval Europe, while they themselves conspicuously hide any PoC colonialists of that same Europe. Every time a movie about Egypt comes out, the same assertion comes out of a conspiracy by white supremacists to hide black Egypt from the world.
The entire cabal is out of touch with reality. If they can alter reality at will then I can enjoy my homestead on Mars.
Parody.
According to The Narrative, Rush Limbaugh is one of the most despicable, hateful, and baby-eating blowhards to have ever walked the planet. Even worse than that, he is male.
Ah, so you are the one who keeps blocking my view of Neptune.
Please move Mars out of the way before I have to take steps.
As soon as I'm done pulling this hyar planet around.
http://s29.postimg.org/bx25spomf/Planet_Movers.jpg
What is the alternative?
@ Fail Burton
I thought the deal was you are to mention intersectionalism in every comment.
@Brian
Planetary orbits can intersect.
I think we are officially off topic now…
@TomA on 2014-09-06 at 17:16:53 said:
> Perhaps, buy a sidearm
I already have one, and I am a damn good and merciless shot.
> or marry a Viking.
Seriously Tom? You always struck me as a sensible, reasonable person. Don't you think that suggestion is a little offensive?
Regardless, sidearms are the last line of defense. The first line of defense is a social attitude that discourages the dangerous behavior.
> Bitching in a public forum isn't going to improve your personal safety.
Bitching? Really? That is how you would characterize my argument that the ubiquity of specific personal threats of extreme violence against women, at a dramatically higher frequency than against men, points to a deeply flawed thread in society? It isn't about my personal safety; it is about communicating to a group of smart people a perspective that they might not be aware of, given that they don't experience it as frequently as I do.
> And to play the memetic infection game at the national level requires a lot of resources and knowhow.
I don't agree. In fact, I think there is a thread I see in a lot of responses here, this idea that a top-down approach to changing these kinds of things is the only way to do it. I think that is wrong; it is such a statist way of thinking. I think especially in today's connected world it is perfectly possible to create a meme bottom-up. Three obvious examples are pouring water on your head; the Tea Party, an organization largely ruined when it became controlled top-down rather than bottom-up; and the "Occupy" movement, which was bottom-up but was much more quickly co-opted by the top-downers, which, given their agenda, is hardly surprising.
And I might add, on the specific subject matter, that the suffragettes were a mostly bottom-up movement. In fact, there is an argument that the only way real change takes place is when a small group of committed individuals is willing to do what it takes to make it happen. I don't think that is entirely true, but there is certainly a lot of truth in it.
>In fact there is an argument that the only way real change takes place is when a small group of committed individuals are willing to do what it takes to make it happen.
Ah, yes. It was Margaret Mead who said that, I think.
Anyway, if you're planning a campaign against the disturbing Internet tendency you've described, I'd like to help–however small my contribution could be.
Yes, it is bitching because the nutjobs aren't going to disappear as a result of your grassroots meme campaign. You're harping on a problem with no realistic prospect of solution.
And the nutjobs are unlikely to be intimidated by your "social attitude that discourages dangerous behavior." That's why they're nutjobs in the first place. Keep the sidearm handy and don't waste your time being macho in public to ward off potential danger. Most men are normal and we like women who are both feminine and a good shot.
Last, I wasn't being condescending with the Viking reference. You clearly have a warrior spirit and will make someone a damn fine wife. Aim high and find your equal in life. You deserve nothing less.
I read the Huff-Po article (but not the paper it was based on) a few days ago and I sure thought they said that a significant number of burials that used to be identified "male" were actually women once they went by the bones instead of what was in the grave. I had accepted that, but I hadn't accepted the interpretation that this meant that those women were warriors. Or if they did train seriously to fight it was as defensive forces and not expeditionary forces.
I did want, very much, to thank you for this:
William O. B'Livion on 2014-09-05 at 14:36:55 said:
The QB is a legitimate target while he is holding the football. Once he has thrown the ball, there is a penalty for "roughing the passer." It only applies if the defender starts the tackle after the ball is away; one will often see a QB knocked down by a charging defender a fraction of a second after releasing the ball.
As to kickers (and punters): they are indeed extremely vulnerable in action, so there is a serious penalty for intentional contact ("roughing the kicker"); there is also a lesser penalty for accidental contact ("running into the kicker"). Placekickers never hold the ball, so they are never legitimate targets. Punters hold the ball, and sometimes run or pass instead of kicking (the "fake punt" play). A punter holding the ball before kicking or instead of kicking would be a legitimate target.
"The closest I got to this was when I used to play World of Warcraft for several years. (From vanilla through the end of the second expansion.) I played equal amounts of male and female avatars, and the worst I got was being called "babe" for helping someone with quests while playing my female priest."
I've played MMOs since EverQuest was released, and have played at least a half dozen different ones… as a woman playing exclusively female characters who I designed to be sexy, because if you're going to be a fantasy character, why would you want to look dumpy? I can think of *once* when someone I was chatting with wanted to talk dirty. I responded with some sort of, "Um, I think I'm gonna go over there… good hunting to you, good bye." The most common thing is that no one actually believes that you're a girl, which might be a tiny bit irritating if a person actually cared. But the thing is… I don't flirt. If I decide that my toon has a "boyfriend" for the purpose of role-play, it's one of the NPCs… or my husband if he's playing. And if you don't flirt, you don't accidentally signal that you're open to sexy-chat.
Which is an extremely long-winded way of saying… yeah, me neither.
A young girl was telling me and a couple of others the other day about this guy who'd been stalking her on social media/texting, etc. for four years. Never violent stuff, just that he clearly had a fantasy life where they were a couple and he'd refer to stuff they supposedly did together. Probably harmless, just not right in the head.
I told her if she ever ever EVER saw him physically stalking her that she was to come to me and I'd teach her to shoot so she could defend herself. This was seconded by one of the other people there, a guy, who was just as alarmed as I was. She just thought it was funny and was "laughing it off".
In any case… the answer is NOT to laugh it off. A threat needs to be reported to police. And steps need to be taken to deal with the possibility the person making the threat wasn't just blowing hot air.
Winter on 2014-09-04 at 14:01:49 said:
I think both Elizabeth I and Catherine the Great were driving considerable numbers of men into the battlefield.
They were political leaders, not military commanders. If political leadership of a country at war is considered evidence of personal military prowess, then the most elite of warriors would be elderly wheelchair-bound paralytics. Because it was just such an individual who was commander-in-chief of the most powerful military force in history, victorious in the largest war in history.
In European history we know examples of women fighting off armies. Our local hero is Kenau from Haarlem. We also know of Jeanne d'Arc.
Jeanne d'Arc didn't fight off any armies. She inspired an army consisting entirely of men and commanded by men, leading them while carrying a banner. AFAIK, she never swung a sword or shot an arrow. Kenau of Haarlem is famous in legend, but all that is actually known about her is that she helped carry earth to rebuild damaged ramparts. The Spanish did not even arrest her after the city fell.
Otherwise… Of the tens of thousands of Swiss mercenaries who served across Europe, how many were women? Zero. Of the thousands of generals in the Napoleonic Wars, how many were women? Zero. Of the hundreds of thousands of Janissaries of the Ottoman Empire, how many were women? Zero. Et cetera, et cetera, et cetera.
Let's look at an earlier era. There were some famous "warrior queens" in the ancient world: Cleopatra, Boudicca, Zenobia, and… well, that's about it. Those three are the only women comparable to… the hundreds of Roman Emperors and Consuls, hundreds of chieftains of barbarian tribes, and hundreds of Greek and Persian kings recorded as commanding and usually leading armies.
Of the hundreds of thousands of Roman legionaries and Greek hoplitoi, how many were women? Zero.
What about non-western societies? There is no evidence of any female warriors among the Mongols, the Gurkhas, the Maori, the Zulus, or the Apaches.
Exceptio probat regulam means "the exception tests the rule": is the rule still true in spite of a single exception (or a few)? The rule that warriors are men, not women, is clearly such a rule.
Those threats will not end until women stop being soft targets. Take responsibility for your own safety. Nobody else will do so, nor should they.
Jay Maynard on 2014-09-07 at 02:45:28 said: Those threats will not end until women stop being soft targets. Take responsibility for your own safety. Nobody else will do so, nor should they.
Women are, as a class, smaller and weaker than men. Their "soft target" status is intrinsic to their physical qualities. That's unarguable. Men, who are larger and stronger in general, and who constitute nearly all the very strong, have an obligation to defend the weak: women, children, the elderly, the sick. That's what we're here for.
See the behavior of a herd menaced by predators. Females and young move to the center; the old bulls form a defensive perimeter.
Also, is it really plausible that every 100-lb woman should or even could maintain the level of fitness, martial skills, and armament required to guarantee her security against 250-lb attackers?
>Women are, as a class, smaller and weaker than men. Their "soft target" status is intrinsic to their physical qualities. That's unarguable. Men, who are larger and stronger in general, and who constitute nearly all the very strong, have an obligation to defend the weak: women, children, the elderly, the sick. That's what we're here for.
Whenever possible, yes.
>See the behavior of a herd menaced by predators. Females and young move to the center; the old bulls form a defensive perimeter.
People aren't herd animals and don't live in herds. At least not anywhere I want to be. As such, individuals are often called on to be, individually, not helpless.
>Also, is it really plausible that every 100-lb woman should or even could maintain the level of fitness, martial skills, and armament required to guarantee her security against 250-lb attackers?
It is an ideal to be hoped for, but unrealistic to expect it to actually happen. Fortunately, it is only necessary for some percentage of women to be armed and capable, and *known* to be armed and capable, to introduce doubt in the minds of predators and establish deterrence. (The truly crazy don't really respond to incentives, positive and negative, like everyone else, and when they get to the point of initiating violence they must be physically stopped.)
I would like to add that, fortunately, most of those who make threats as if they are truly crazy are *not* actually truly crazy; they are just speaking that way because they have previously been immune to any consequences for such actions. One example: sheltered, often protected-class, bubble-dwelling blowhards (tie-in to the SFF SJW crazies).
They are the *perfect* audience to be corrected by the negative incentive that, if you try to turn your threats into acts the person you are attacking may just shoot you dead.
JohnMc says:
The problem is contact combat is not equivalent to sticking a pig with a sword. The attacker has to get past the defensive components, i.e. shields, breastplate, etc., to land a blow that is definitive. Take a sword and go take some whacks on a tree. You will find out very quickly that it takes considerable force to chip off even a small chunk. And poking at it yields little at all. It all boils down to —
F = M * A
Swords are built as light as possible for control and repeatability of stroke. Imagine two warriors, one with an axe, one with a sword. If the axe has any heft at all, which it should for its intended function, it forces the warrior to follow through with the stroke for best effect, then decelerate that mass for the next stroke. The sword carrier, on the other hand, will attempt to accelerate a lighter mass toward the opponent. The strike, if made, will decelerate the sword for the attacker, requiring no follow-through, and he can immediately attempt another stroke. Rinse and repeat.
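To put rough numbers on that tradeoff, here is a minimal sketch; the masses and swing speeds are assumed, illustrative values, not measurements of real weapons:

```python
# Back-of-the-envelope comparison of an axe stroke vs. a sword stroke.
# Masses (kg) and swing speeds (m/s) are assumed, illustrative values.

def stroke(mass_kg, speed_mps):
    momentum = mass_kg * speed_mps         # p = m*v: what must be arrested before the next stroke
    energy = 0.5 * mass_kg * speed_mps**2  # KE = (1/2)*m*v^2: what the blow delivers
    return momentum, energy

axe_p, axe_ke = stroke(2.0, 15.0)      # heavy head, harder to redirect
sword_p, sword_ke = stroke(1.2, 20.0)  # lighter blade, easier to accelerate

print(f"axe:   p = {axe_p:.0f} kg*m/s, KE = {axe_ke:.0f} J")
print(f"sword: p = {sword_p:.0f} kg*m/s, KE = {sword_ke:.0f} J")
```

With these assumed numbers the sword delivers comparable energy while carrying less momentum, which is why the swordsman can recover and repeat the stroke faster.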
>Take a sword and go take some whacks on a tree. You will find out very quickly that it takes considerable force to chip off even a considerable chunk.
Even cutting a large piece of fresh meat with a chopping or slicing strike from distance is rather more difficult than you'd expect, if your experience is limited to kitchen knives and chopping boards.
Hexe Froschbein says:
Tacitus describes in his book Germania how the women of the Germanic tribes would kill their own fleeing warriors and their attackers once the battle was lost.
Then there is also the Japanese naginata (a long stick with a blade), which was mainly used by women, as a last-resort defence.
How effective does a 'warrior' have to be? The skill level required for a raiding party is different from what's required on the defensive. I doubt that the Vikings would have taken along old men, women, and small men or young boys. But if you were to try and attack a Viking settlement, or the baggage train of their raiding party/army, everyone would have taken up arms, effective or not.
Finally, weapons in the grave can also have just ceremonial purposes — a wife or daughter buried with her lost husband's, father's or brother's weapon is also a likely scenario.
Just as the cops can't be around all the time, neither can we men as a class. A woman can't depend on anyone but herself.
You know what? That's just what the feminists say. They say that women should be empowered to look after themselves and their own interests. I couldn't agree more.
Women don't need white knights. They need the ability and the freedom to take care of themselves.
Why is it, then, that feminists want to destroy the one thing that does more to empower women than anything else: personal carry of firearms?
ESR> Firearms changes [sic] all this, of course – some of the physiological differences that make them inferior with contact weapons are actual advantages at shooting (again I speak from experience, as I teach women to shoot).
No, Firearms do not change all this. The fundamental reason why women are not trained to go off to war is based on their reproductive value in an environment of high infant/child mortality. The fundamental question is how many girls a woman births who survive long enough to become mothers themselves. The loss of a fertile woman (or of a girl who is not yet fertile) represents the loss to the society of countless future fertile women, which are the only source that can produce the men in those future generations. Modern nutrition, sanitation, and medicine are what made it demographically acceptable for women to be in actual combat roles, as well as doing other dangerous jobs that could prune their branches from our collective family tree. (To the extent that attitudes about this are programmed into our DNA, we have to deliberately patch in memes to override the firmware, or we are reflexively repulsed by the very notion of women going off to war.)
Men are individually expendable; those who survive the wars can keep multiple wives in the baby-production business. This practice leads to the strongest warriors getting more chances to reproduce, improving the stock for the wars those future generations fight.
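The demographic asymmetry in this argument can be made concrete with a toy projection; the fertility figure and the 10% loss are assumed, purely illustrative:

```python
# Toy projection: births are limited by the number of fertile women, not men,
# so female war losses propagate into every later generation.
# Assumed: 2.5 surviving children per woman, half of them girls.

def project_women(women, births_per_woman=2.5, generations=5):
    for _ in range(generations):
        women = women * births_per_woman * 0.5  # daughters become the next cohort
    return women

baseline = project_women(1000.0)         # no female losses
after_war = project_women(1000.0 * 0.9)  # lose 10% of fertile women once

print(baseline, after_war, after_war / baseline)  # the 10% deficit never heals
```

Losing 10% of the men, by contrast, changes nothing in this model so long as the survivors can father the same number of children, which is the point about male expendability.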
Now, if women are trained to fight defending the village against invaders after their husbands have already fallen in battle, that is an entirely different matter. But there is a certain evolutionary advantage to genes that program women not to resist those invaders, but instead to accept them as their new mates, rather than dying along with the men of their tribe. But in stating this obvious and historically well-established fact, I am guilty of the thoughtcrime of "perpetuating rape culture". (There can be no argument that the practice of killing the enemy tribe's men and taking their women is literally "rape culture" under the modern understanding of the term.)
The irony of that is that the very Leftists who say that rather than arming themselves to resist rapists, women should instead wear "rape whistles" or urinate on their predators, are implicitly accepting the very strategy outlined above.
>No, Firearms do not change all this. The fundamental reason why women are not trained to go off to war is based on their reproductive value in an environment of high infant/child mortality.
As I mentioned in the OP, in fact. What changes is that firearms neutralize the male strength advantage. Women warriors are still not a great idea en masse, but at least with gunpowder weapons they can defend themselves effectively. You know this; don't be unnecessarily contentious.
Eric Cowperthwaite says:
Perhaps this will settle the strength and combat question, although I notice that not everyone seems to be convinced that physical combat requires strength, endurance, mobility and speed. But I'm going to try.
For the record, I served in the US Army for 11 years, including in combat. I was a tank crewman. I would never suggest that I was on par with the special operator types referenced in the Havok article, but I wholeheartedly agree with everything in that article. This part I put in just to establish my personal bona fides to talk on this subject. My experience is real, not theoretical.
Someone said:
This really made me chuckle, to be honest. I would suggest that the commenter go get an axe, which weighs perhaps 2 kg, and find a small tree, say a diameter of 25 cm, and chop the tree down. Report back on how long it took, how winded you were, and how cleanly the tree was cut.
Here is the reality of combat. You have had very little sleep for many days on end. You have marched/driven/ridden for many, many miles. You are carrying 30-50% of your body weight in terms of gear and weapons. You probably haven't eaten enough calories to sustain your level of exertion and so your body is burning fat stores to provide enough energy. If you are a modern warrior on a tank, you must load shells that weigh 25 kg into the gun in less than 6 seconds every time the gun is fired, as an example of the level of exertion.
The old school warrior, with a sword, armor of some sort, maybe a shield, has to swing that sword at the level of effort of tree chopping for the length of a battle. Battles last a long time, as such things go. An hour, or more. He has to run, jump, physically check his enemies' bodies, carry all that gear. And all while fear and anger are dumping massive amounts of adrenaline into his body.
There is a reason why soldiers train to be as strong and as fast as possible. Because anything less than as strong and as fast as possible means death. Even today, with firearms rather than swords. Fighting with swords was a million times harder. Want some idea of how tough it is? Watch Lone Survivor. True story and the physical trauma is real.
The idea that a woman could engage in the sort of combat that Vikings engaged in during the 8th century is pretty ludicrous, really. PS My wife is of Viking stock, actually. She is tall, big, strong … for a woman. She's 5'6″ and I'm 5'8″. She weighs 155 lbs and I weigh 180 lbs. And she always hands me the axe when we need to cut a tree down on our property. This is reality.
Go find a spear. Then go to your local butcher shop and get a dead pig that is ready for a luau. Now hang the pig by its feet from a tree branch and start trying to kill the pig with your spear. Notice how hard it is. Now imagine the pig is a 6′ tall, 190 lb man in armor with his own spear. Now tell me again about how it doesn't take much strength.
The problem, I fear, with understanding this is that the average joe today does not do physically demanding work, and thus has no real understanding of what it means to work all day. Someone who has "worked all day" has a serious appreciation for what an hour of physical combat will mean.
I assume you're joking, but maybe not. I definitely get the sense people think this is a pet peeve or even obsession of mine, but it's actually one most of us here share: a disgust for political correctness. The dividing line is one of research and knowledge, not my inaccuracy or exaggeration. It is the same divide which exists for people who have a casual knowledge of Islam and tend to lump it all together, rather than talking about Qutbists, Wahhabis, Sufism, Salafis, Ibadis and arguments about the first two Caliphs compared to the first four.
One can call all Egyptians throughout history by the name "Egyptians" and ask who the guy is who keeps mentioning "Mamluks," but the Mamluks existed, whether one likes it or knows it or not, and they were not "Egyptians" any more than Cleopatra was, or Mohammed Ali, whose 150-year dynasty ended with Nasser. Ali was an ethnic Albanian Ottoman born in Macedonia. The bottom line is names are important; that is why Orwell created "Newspeak."
PC is a term that doesn't truly serve. "Intersectional" is far more accurate but also covers much of the same ground. The difference is one of labels to a certain extent, but it's a mistake to blend "intersectionalism" into "PC," and more so into "Left," in the same way it would be to blend the KKK into "conservatism" or Ali into "Egyptian." The huge difference is the KKK doesn't possess the anti-oppression camouflage that enables intersectionalism to mainstream hate speech. The fact so many people on my side think my use of the term is weird is a testament to that camouflage. The KKK can't hide; intersectionalism can, for the simple reason they don't look like the KKK or have the same targets. But in fact intersectionalism does use the same rhetoric in principle as white supremacy. Unless one thinks the Dem Party is an analog to a KKK which demonizes heterosexual ethnic European men, it's time to start using correct names; this is a specific ideology.
No one chides anyone for saying "PC" or "Left" in every comment, simply because those terms are familiar. The problem is they don't adequately cover the topic.
The irony is this post about the Vikings is addressing core intersectionalism 101. More irony regarding your comment is that, whether they know it or not, every single pro- and anti-PC blog post in SFF today is about intersectionalism, not liberalism, not Leftism, not socialism, not Marxism. Both the Hugo and Nebula winners this year were the result of core intersectionalism in its most fundamental application and desires, not the Dem Party.
The bottom line is I learned the term from the people we're up against, and they make no bones about what they are in that regard. People can keep calling them "Egyptians" but they may be "Mamluks" or "Albanians." In fact in that analogy of labels, they are.
One last point on the difference between my wife and me. It's not just weight that matters. It's lean body mass compared to total weight. Lean body mass is everything other than fat: muscle, organs, bone, cartilage. A woman who is in good shape will be somewhere around 20-25% body fat, in general. In other words, my wife (who is in fine shape!) has 125 lbs of lean body mass. An average conditioned male will have about 20% body fat. I am in average shape and thus carry about 145 lbs of lean body mass. So, my wife, who works out (heavy weights plus walking for cardio), has 20 lbs less muscle, bone, etc. than I do, even though I exercise much less than she does (moderate weights, plus walking).
At my peak, when I was 25 and had been training for 7 years in the military, I weighed 170 lbs and my body fat was in the 12-15 percent range. That would be pretty comparable, physically, to warriors in the 8th century. I had 150 lbs of lean body mass, and my LBM-to-total-mass ratio was nearly 0.9. This is almost impossible for a woman to achieve. And this is the sort of physical conditioning combat requires if you want to live.
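A minimal sketch of that arithmetic, assuming lean body mass is simply total weight times (1 - body-fat fraction); the weights and body-fat figures are the ones quoted above:

```python
# Reproduce the lean-body-mass (LBM) arithmetic quoted above (weights in lbs).
# LBM is approximated as total weight * (1 - body fat fraction).

def lean_body_mass(total_lbs, body_fat_fraction):
    return total_lbs * (1.0 - body_fat_fraction)

wife = lean_body_mass(155, 0.20)     # ~124 lbs, matching the quoted ~125
husband = lean_body_mass(180, 0.20)  # ~144 lbs, matching the quoted ~145
peak = lean_body_mass(170, 0.12)     # ~150 lbs at the low end of 12-15% body fat

print(wife, husband, peak, round(peak / 170, 2))  # ratio ~0.88, "nearly 0.9"
```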
The question seems to be why women do not get involved in armed fights as much as men.
I think we can get a long way by assuming that the cost/benefit ratios are different.
The obvious suggestions from the world's literature and history are:
1) For a man, loot and status mean more women, while the low-end expectation is no status or money, no women. Normally, every woman could get a man to take (some) care of her.
2) Hormones (testosterone) make men more competitive and willing to take gambles. They also predispose men to do anything to get a woman. Much less so for women it seems.
3) Historically, the men of the losing side were killed or labored to death (and lost access to women). Women were kept alive.
4) Babies have to be fed. A mother going into war means risking the lives of her children much more than a father.
Women seem to have less to win by taking up arms than (young) men. So why should they?
Winter said:
Why do you ignore the obvious? Women are not physically capable of winning armed fights, as a general rule (yes, I know there are exceptions), and women are necessary to the survival of the race. Thus, any tribe that had women warriors was contra-survival because all the women would end up dead.
Your 4 points are downstream from this, a result of this reality.
To summarize . . .
Evolution has not equipped women to be actual warriors, except as needed in defense.
Women have essential utility as baby makers, and hence are more valuable than men.
If men want to be winning-side warriors, they must maximize their strength & conditioning.
And I would add, as a personal note, that women can be warriors in spirit and thereby contribute as a force-multiplier to male warrior commitment and ferocity. Long live Sparta!
@PapayaSF:
No, it's pretty clear from that comment that Patrick does not listen to Rush
Do you understand the difference between "does not listen to" and "has not heard?" Can you imagine one without the other?
, and instead has formed his opinion by reading selective and distorted accounts by political opponents
That's funny. I'll tell you what — I'd love to sit down with Fred Reed and buy him a beer, but if Rush showed up, I'd probably tell him to get lost.
@Greg:
So you aren't intelligent enough to read and understand that I was responding to a single point, but feel smart enough to set up a strawman.
Then you trot out a moral equivalence prop but you choose one that you have no actual direct knowledge of,
Again with the reading comprehension and the strawman.
he is surprisingly intelligent almost witty, gracious and funny.
He may be all those things at various times, but he's also a professional asshole. Maybe it's all an act, and he's a teddy bear in real life, but maybe it's just not possible to be that much of an asshole professionally unless you really are one.
@Jay Maynard:
If you're scared of firearms, then you're simply not going to carry one, and since you're not armed, maybe you're going to do your best to make sure nobody else is armed either.
I assume you're joking, but maybe not.
I was joking.
Now, from my history, I am close to the last person that should be instructing people on what belongs in this blog, but you do seem to work "intersectionalism" into a lot of comments.
From your comments and a quick look into the matter, "intersectionalism" seems to mean that the only thing better than being oppressed is being oppressed for more than one reason – unless you are a guy.
I found this concept to be interesting (although not surprising).
I don't see how intersectionalism relates to this post about women Vikings. We are talking about women, but generally in a rational way (ie. not as oppressed victims) and I don't see multiple lines of oppression involved here.
>He may be all those things at various times, but he's also a professional asshole. Maybe it's all an act, and he's a teddy bear in real life, but maybe it's just not possible to be that much of an asshole professionally unless you really are one.
…says the person who has never listened to him.
You have a gaping credibility gap on this one, sorry.
Eric Cowperthwaite on 2014-09-07 at 12:50:40 said: Go find a spear. Then go to your local butcher shop and get a dead pig that is ready for a luau. Now hang the pig by its feet from a tree branch and start trying to kill the pig with your spear.
Shades of the Adventure of Black Peter! "Have you tried to drive a harpoon through a body? No? Tut, tut, my dear sir, you must really pay attention to these details. My friend Watson could tell you that I spent a whole morning in that exercise. It is no easy matter, and requires a strong and practised arm. But this blow was delivered with such violence that the head of the weapon sank deep into the wall. Do you imagine that this anaemic youth was capable of so frightful an assault?"
ESR: What changes is that firearms neutralize the male strength advantage.
They reduce it substantially, especially as regards personal security, but they don't neutralize it. Women, in general, are most effective with smaller, lighter, and less powerful handguns, for instance. Manufacturers have responded with models such as the Charter Arms "Pink Lady" series.
When it comes to military arms, size and strength still matter even more. Few men and essentially no women can manhandle heavy weapons such as machine guns and mortars.
Also, in current-era combat, soldiers carry a lot of gear. Between helmet, body armor, comms, sensors, water, food, ammo, and miscellaneous gear, U.S. troops carry 50 to 100 pounds of "battle rattle". (A while back I saw an essay by an officer arguing that current Army physical standards overweigh endurance and neglect strength – soldiers mostly ride in vehicles rather than march, but in action they have to move all that weight around.)
And there are many non-fighting tasks such as digging trenches or filling and stacking sandbags which require strength, and the more the better.
I saw another recent essay, by a female combat engineer officer who served in Afghanistan. She succeeded and survived, but trying to keep up with the physical burden borne by her troops left her permanently damaged (back, hips, knees). There was a similar case in the Panama operation (female commander of an MP platoon which got into combat).
Technology has created some combat roles that women can do as well as men – UAV operator, radar operator, maybe sniper. And for most combat roles, the gender difference in capacity is much less than in the pre-gunpowder era. But overall, it's still dominant.
You're wrong, Brian. If you wish a short education all in one post, The Other McCain has been writing often and well about this for some time.
http://theothermccain.com/2014/09/06/rachel-maddow-feminist-lesbian-heteronormative-patriarchy/
If you knew who Kameron Hurley is, why she won 2 Hugos, who Liz Bourke is, what she does at Tor.com, and what a small but persistent slice of Tor.com promotes, and 1,000 other facts, you'd realize the Viking piece is classic radical feminist propaganda, and by no means at Tor by accident. I am not shoe-horning in anything anywhere, but speaking directly to the post at hand based on facts and observation of those facts, not wishful thinking.
… says the person who has never listened to him.
Your reading comprehension is even worse than I thought. PapayaSF at least had the excuse that I hadn't yet pointed out that there might be a difference between "does not listen to" and "has not heard". At this point, about the only way that you can make such an excuse is if you admit that you read so slowly that you only bothered to read the part of my comment that had your name attached to it.
Hmm. Let me try to use really small words here.
Both my father and my maternal grandfather listened to Rush assiduously in the late 80s and early 90s. We live in Texas, and they were both squarely within Rush's target demographic.
So I heard a fair amount of Rush, without actively listening to Rush. But then one day, maybe 15 years ago or so, Rush came on the radio when I was riding in the car with my Dad, and my Dad changed the channel. I said "I thought you liked listening to Rush" and he replied "I like a lot of what he says, but sometimes I don't particularly care for how he says it."
To be fair, that was a long time ago, so maybe Rush isn't like that any more. I don't know for sure, because, as I said, I don't listen to him. On the other hand, I have no real reason to doubt the accuracy of the transcript that Wikipedia has of the Sandra Fluke incident, so I doubt that either my dad or I would change our opinions based on further listening.
@Rich Rostrom:
In a post above, esr mentioned that women were actually better suited physically in some ways to be combat pilots than men (they can sustain higher G-forces, for example), but that they weren't as well suited mentally.
If that's true, then for a recon UAV, those differences probably don't matter, but for a combat UAV, they might.
Interestingly, I read (don't remember where) that shooting people from UAVs is actually more stressful than doing it from a manned aircraft. Apparently it's easier to emotionally justify shooting to kill when you are yourself being shot at.
I have no idea how well suited men vs women are to this task, but it is the sort of task that lends itself to the most realistic possible simulations, so it's certainly an ideal candidate task for completely removing up-front sex discrimination and seeing how people do as individuals.
Actually, considering the way everyone here seems to have observed feminist harpies 'go for the jugular' in their rhetoric, I can't see how anyone can doubt that women would make fine combat killers, given suitable weapons.
Lady MacBeth, anyone?
bsouther says:
> Yes, it is bitching because the nutjobs aren't going to disappear as a result of your grassroots meme campaign. You're harping on a problem with no realistic prospect of solution.
The best example of a memetic change that I can think of is the change in attitude about drinking and driving. This was extremely prevalent in the 70s and 80s, and it was very common for people to brag around the water cooler about how drunk they were when they drove home.
The people who brought this change about were not bitching, in the hope that drunk drivers would hear them and go away. They exposed the practice for what it was and appealed to the intelligence and reason of the majority of the population as well as law enforcement and the law makers.
They made drinking and driving "not cool" anymore, and that had a powerful effect on the amount of it that occurs. The guy who still drinks too much and drives doesn't brag about it, because he doesn't want to look like a tool.
A truly demented person will still make rape threats no matter the consequences, but if more people condemned the trolls and sophomoric clowns who also do this, and if the sites and hosting companies who allow this to continue in their spaces as free speech were to start losing subscribers and ad revenue, we would quickly see these incidents narrowed down to the actual rapists. It would be a lot easier for law enforcement to learn how to start dealing with this.
Even if you don't fully agree and don't want to take part in helping out, do you really want to put yourself in the camp with people dismissing this attempt as "bitching"?
You're wrong Brian.
I assume that you are referring to my description:
intersectionalism seems to mean that the only thing better than being oppressed is being oppressed for more than one reason – unless you are a guy
I was being somewhat facetious (as usual). Historically, intersectionalism included Female/Black and possibly Female/Disabled in addition to Female/Lesbian. I'm not sure – "Disabled" might only count for black and/or lesbian women. I don't know because I certainly don't care enough to get any deeper into the history.
You seem to have made the points that:
– intersectionalism is now all about being Female/Lesbian
– intersectionalists and third-wave feminists may call themselves "feminists", but "they are not. They are a racist sexist supremacist cult."
– They are doing bad things in the SFF community.
In relation to the last point, I am sure you have the sympathy of most of us here, but it seems to be totally off-topic.
This entire subject seems to be off-topic, particularly the lesbian aspect. Andrew W, in the comment to ESR's linked article, suggests that it is "modern sexism" to conclude that a body was male because it was buried with battle weapons. I think that calling this "sexism" is silly, but hey, I don't have a PhD.
So, other than this (maybe silly) point, what does intersectionalism have to do with this post?
@Patrick:
Having actually listened to Rush, and having seen the Fluke transcript… I do wonder where you find the 'full of threats and hate' part. Seriously.
And you are continuing to string together data points that you are a very obnoxious sonofabitch, for no visible reason. What exactly the fuck is wrong with you?
>And you are continuing to string together data points that you are a very obnoxious sonofabitch, for no visible reason.
This, um, isn't actually like Patrick, in my experience. I wonder if he's OK.
Bitching? Really? That is how you would characterize my argument that the ubiquity of specific personal threats of extreme violence against women, at a dramatically higher frequency than against men, points to a deeply flawed thread in society?
Shrill tumblrettes and twitterbirds are not a representative sample, nor are hashtags etc.
It's called "bitching" because it's all coming from people living in a fantasy pieced together from postings on twitter and tumblr (much of the counter-frothing comes about in the same way). Reality (that thing outside your front door) is something very different. This is all due to selection effects.
"Professional asshole" is a good description of most comedians at one time or another. They insult people for money.
I didn't write about threats, and AFAICT the definition of cyberbullying does not require threats. But how is it not hateful to call somebody (a real person who is still only in college) a slut and a prostitute? Especially when it's done in a calculated fashion to throw red meat, by someone with an extremely big soapbox?
Even if you can rationalize that it's not hateful, how can it possibly be helpful? If you want to convince people that there's no "rape culture" that engages in "slut shaming", wouldn't censuring people who use the word "slut" to describe people with different political views be a good start?
And you are continuing to string together data points that you are a very obnoxious sonofabitch
Yes, I certainly can be.
, for no visible reason.
I've been told to "inform myself" because I'm apparently not allowed to have the opinion that there are prominent male voices who engage in bullying, because I have no "actual knowledge" according to someone who knows nothing about me, but who nonetheless is more than willing to make all sorts of assumptions (including the major projection that I have never heard Rush).
Then when I call him out on the fact he didn't read or respond to what I actually wrote, he says I have a gaping credibility gap, while still not managing to fully respond to what I wrote.
What exactly the fuck is wrong with you?
Nothing that will be cured by preachy passive-aggressive comments from the peanut gallery.
ESR:
> What changes is that firearms neutralize the male strength advantage.
I think you might be failing to realize just how much of a difference there is between a fight and combat.
No, let me rephrase that. There is a difference between a woman attempting to repulse a point attack and combat, and I don't think you've quite grokked it yet.
A point attack is, depending on what's happening, decided relatively quickly: the attack is going to succeed or fail within the first minute, but might take much more time to end. In these sorts of events a firearm is The Great Leveler. A single shot to the face will dissuade all but the luckiest of the most determined attackers, and even multiple body shots will ensure that the fight ends fairly rapidly.
A single .38 revolver deployed well – a function of skill and reflexes – will stop 2-3 attackers dead, yes, pun intended.
Combat is a different thing. It takes place over a longer stretch of time, it involves more players, it is rarely resolved quickly.
A pistol is (in most hands) a defensive tool. Rarely do you shoot into your second magazine.
A rifle is an offensive tool, and rarely do soldiers leave base with less than 180 rounds for a 5.56 rifle. That alone ups the physical requirements. Ceramic plates for body armor are 4 to 8 pounds EACH, and this does not include the plate carriers or the kevlar panels that cover other bits. The helmet is uncomfortable and heavy. IIRC there were 4 plates on the "interceptor" kit I had to wear in Baghdad (no, I was not a soldier or a "shooter" there, but the regs were: outside the wire, wear the gear. Even when flying at 10,000 feet).
Unmounted fighting with a rifle is BRUTALLY hard work.
Patrick Maupin wrote upthread:
[…] if there were an objective cyberbullying standard that included all of these women, I would imagine that I could easily find examples of Rush being Rush that would fall under such a standard, as well as him egging on his audience to participate.
(Emphasis added.) And then he wonders why people object? One could classify Rush's original Fluke comments as over the line, and he did apologize for them. Calling it "bullying" seems like a stretch, and while I'm no Rush expert, "egging on his audience to participate" doesn't sound like him at all. AFAIK he avoids anything along the lines of "call your congresscritter" or whatever. So I think it's appropriate to ask Patrick for actual evidence of what he "imagines" he can "easily find."
Jessica Boxer on 2014-09-06 at 10:28:12 said:
> @William O. B'Livion
>> The male workplace has, at least since the 1950s (tales from my father) been the sort of
>> place where rough language, scatological humor and sexual innuendo were common.
> Good, but it isn't the male workplace anymore. People, regardless of their gender, have a
> perfect right to be treated respectfully in their workplace. Sorry if that cramps your style, but it is
So, a few sides to this:
1) Women demanded access to these workplaces, then got pissy when they were treated marginally *better* than men treat each other.
2) This *is* how men treat other men with respect. You demanded special privileges.
3) You missed the part where I said I was raised…differently than that. I *don't* like it, but as long as it's the culture in that environment you've got to adjust. I can't come into a place and insist that everyone else conform to my expectations.
>However, that is an entirely different thing. I am talking about guys who go online and say
> "I know your address, I am going to come when you least expect it, torture you, rape you
> then kill you." Is that a holdover from the "male workplace of the 1950s"? Should we
> girls just suck it up and laugh it off as some male sophomoric prank?
That's not just women getting that. Hell, *I* had people threaten to kick my ass. I posted my address and phone number and told them to call in advance to make sure I was home. Admittedly this was 1996/7, I would not do the same today, mostly because of SWATTING. I seriously don't want the SWAT team kicking in my door.
There is quite a bit of evidence, including anecdotal evidence in my wanderings, that yes, women catch shit, and they catch ugly shit. But in many areas they are *less* likely to catch shit than the men, and are treated as gentlemen have always treated women: most treat them with respect and deference, and a small number of assholes will sneak around and, well, be assholes.
Eric Cowperthwaite on 2014-09-07 at 12:50:40 said:
exertion and so your body is burning fat stores to provide enough energy. If you are a modern warrior on a tank, you must load shells that weigh 25 kg into the gun in less than 6 seconds every time the gun is fired, as an example of the level of exertion.
This is what he's talking about:
https://www.youtube.com/watch?feature=player_detailpage&v=HyrAqNv1odM#t=152
25 kilos is roughly 55 pounds.
There is no f*king way I'd do that job. Tanks are group coffins.
@Eric Cowperthwaite
"Thus, any tribe that had women warriors was contra-survival because all the women would end up dead. "
Evolution works in wondrous ways. That also holds for cultural evolution.
Telling us a just-so story about how in (really) ancient times people died doing certain things does not tell us how nature and nurture proceed to prevent people from doing similar things now.
You tell us that it decreased female fitness to enter combat. I listed a number of "economic" and biological factors that lead modern (Iron Age ;-)) women to shun combat.
The difference is between ultimate and proximate causes. What counts now is how men and women differ so that they differ in their eagerness to enter combat. During most of human history, combat was a euphemism for raiding neighboring tribes.
My 4 points are some of the influences that work in a modern person's life. Your point worked by killing people thousands of years ago in circumstances that are not effective anymore.
I think I'm OK, but maybe not. Who ever really knows? Life has been stressful and busy lately.
Absolutely! But is Rush "just" a comedian?
One could classify Rush's original Fluke comments as over the line, and he did apologize for them. Calling it "bullying" seems like a stretch
If the timeline and transcripts at the wikipedia page are at all accurate, then he dumped fuel on the fire for 3 days and didn't actually half-heartedly apologize until he started losing advertisers. I'd call that classic bullying (which only stops when the bully starts feeling pain). But I haven't seen any bullying definitions with bright lines, so opinions could certainly vary.
, and while I'm no Rush expert, "egging on his audience to participate" doesn't sound like him at all. AFAIK he avoids anything along the lines of "call your congresscritter" or whatever. So I think it's appropriate to ask Patrick for actual evidence of what he "imagines" he can "easily find."
The Sandra Fluke transcript was similar to (if a bit more extreme than) the sorts of things I remember hearing from Rush when riding with my dad 20+ years ago. I thought I also remembered some exhortations to do this or that, but maybe that's a faulty memory, or maybe he doesn't do that any more.
Since the beginning of the 16th century, European combined-arms tactics have included roles women should have been able to fill, namely musketeers, archers, crossbowmen, and perhaps cannoneers. Those ranks of specialists supported the pikemen, cavalry, and swordsmen, where superior strength was more of an issue.
Women's combat deaths went from statistically zero in WW II and Vietnam to 2.3% in Afghanistan and Iraq. Presumably that will rise in the future. The question becomes whether that can rise to 50%, and if it doesn't, how much can be attributed to things innate to women's natures, e.g., pregnancy and psychology, and how much to cultural custom and practice, as in something like the so-called "patriarchy." In a volunteer and professional army, birthrates affecting the nation would be a non-issue. Only in total war, a thing like WW II, would that become an issue. I seem to remember reading that at the end of WW II we had 15 million in the armed services in a country of 140 million. Manpower issues would've been worse for the European combatants. Germany, instead of putting its women to work, had only a fraction of the number of women working in war-effort factories that Great Britain did, and for no real reason.
"If the timeline and transcripts at the wikipedia page are at all accurate, then he dumped fuel on the fire for 3 days and didn't actually half-heartedly apologize until he started losing advertisers. "
I just read the transcripts. I am at a loss here.
Quote from the transcripts:
What does it say about the college co-ed Susan Fluke [sic], who goes before a congressional committee and essentially says that she must be paid to have sex, what does that make her? It makes her a slut, right? It makes her a prostitute.
Is Mr Limbaugh seriously unaware that 30-year-old students tend to have stable heterosexual relationships? And that even monogamous couples need contraceptives?
Or is he convinced these are the beliefs of his audience?
@ bsouther – "Even if you don't fully agree and don't want to take part in helping out, do you really want to put yourself in the camp with people dismissing this attempt as 'bitching'?"
Where does it end? How many causes? How many crusades?
How many of these cyberbully nutjobs are out there? Dozens, hundreds, perhaps a few thousand. If you're going to hold a bitchfest, why aim low?
Obesity is a huge epidemic affecting tens of millions and foreshadowing a healthcare cost crisis when the diabetes, heart disease, and joint-replacement surgeries start flooding hospitals. Michelle Obama wants to put diet Nazis in public schools. Would you like to volunteer?
A majority of Americans are now hooked on government handouts. Entitlement junkies are people too, and they need help. How about a 12-step program for the lazy?
What about rap music? Likely more actual violence against women has been incited by this scourge than by a few cyberbullies. Next time you see a brother on the street listening to his tunes, tell him what a jerk he is.
We have become so damn affluent, privileged, and whiny in this country that we now view every hardship in life as a crisis. Hardship is what makes us stronger, and eliminating every manifestation of it makes us weaker.
>Where does it end? How many causes? How many crusades?
I get you. This is a tirade that I've made myself, a lot.
But don't you think that what JB is talking about is different: using the internet to make specific threats against a particular person, often over multiple media? How far should a person be allowed to go in making another person's life miserable before the community starts to step on him/her?
Could you picture Eric (as far from a squishy lefty type as they get) allowing someone to use this blog to make these types of threats against any of the women who post here?
Winter, what Rush was objecting to was the idea that because Sandra Fluke was having sex, that meant that someone else should be forced to pay for her birth control.
@Winter:
Is Mr Limbaugh seriously unaware that 30-year-old students tend to have stable heterosexual relationships?
It wouldn't surprise me if he knew that. He's not stupid or uninformed.
Bearing in mind that my actual listening experiences are around 20 years old:
Limbaugh never simply says what he thinks his audience wants to hear. He's always trying to move the needle. I have no real problem with that (especially as there are plenty of people tugging the needle the other way on most issues), but to me, his methods, even when he's not being cruel, leave a lot to be desired. His arguments are often specious.
It may be that the format he works in makes it impossible to both get rich and be logically coherent, so he's settled for getting rich and moving the needle. I honestly don't know.
And it was obviously impossible for him to do that without using those inflammatory words.
@bpsouther:
There's another good reason to support a reasonable memetic campaign on this, and libertarians, in particular, should be at the forefront. What we seriously need to avoid is laws attempting to define and criminalize bullying, because laws are always blunt instruments that are severely misused by prosecutors, whenever available.
As you point out, the drinking and driving campaign was so spectacularly successful that legal changes accompanied the social changes. To me, this is a warning that you should think very carefully about the sort of memetic campaign you support. You would definitely want the "I support your right to say that" meme right in front of "but it's uncool", "but not on my blog", "but I'm not going to listen to it" or whatever.
Much of what Rush does amounts to live improv comedy about politics. Do that for 15 hours a week for years, and some jokes will cause offense. I have no doubt that even if they didn't agree with him politically, Lenny Bruce and George Carlin would have defended Rush on that one.
@ jsouther
Like you and most of the A&D family, I am actually quite fond of Jessica, even though I have never met her. She is exceptionally intelligent, courageous, and does not back down. If we could clone her, we might stand a fighting chance of reversing the decline this country has been in for the past half century.
My guess is that she is sensitive about cyberbullying because it has affected her directly, and possibly traumatically. If so, a big part of me would like to track down the SOB and provide a memetic education he would not likely forget. However, that is a fantasy, and Jessica lives in the real world, so I would sleep better at night knowing she kept a 9mm close at hand.
As to the root issue, I agree with Jay Maynard. We need tougher women, not a TSA of the internet. Evolution will take us there eventually, if we don't fuck it up. I'm all for speaking out against cyberbullying (of either sex); I just don't think it's a major feminist cause, and you're not likely to recruit real men to help by making it about feminism.
Mr. Maupin:
> Absolutely! But is Rush "just" a comedian?
"just" isn't relevant.
Limbaugh is an entertainer, part of his schtick is comedy and parody.
I believe one of his taglines is "We're Illustrating Absurdity by Being Absurd."
I don't think this is quite accurate, but he seems to think so.
First off, Limbaugh used the wrong word. The correct word would have been "whore".
To understand why you would need some additional information not generally provided by the press.
Ms. Fluke had made it clear at one point that she deliberately chose to attend Georgetown University *because* it was a Jesuit college whose insurance program did not cover birth control, in order to advance her career as a "progressive" activist. She was a *part-time* student for 9 years when the controversy broke.
So yeah, calling her a slut was wrong. Calling her a whore was déclassé, but frankly, given the hate and venom embedded in the left-wing agitprop these days, phuq them.
Fluke also made the absurd claim that birth control costs $3,000 a year, and did so while having a millionaire boyfriend. So her whole "unaffordable" argument was b.s. from the start. (And note how progressives, who have spent much of the last three decades promoting condom use, suddenly forget about them when they demand someone else pay for their birth control.)
> If there were any substantial net advantage here,
I'm not talking about there being a net advantage from this factor, I'm talking about the possibility of a culture that happens to have female warriors having other unrelated advantages. The field isn't so large that every possibility is going to have competition that differs by only one individual factor.
"First off Limbaugh used the wrong word."
I don't agree with him at all, but you've forgotten the specifics of Limbaugh's argument. He asserted that the only reason for wanting birth control is if you are having enough sex that it's economically cheaper than condoms.
As to the root issue, I agree with Jay Maynard. We need tougher women, not a TSA of the internet.
Is that even what's being requested here? It sounds like another TSA is the last thing she wants. She mostly just wants people to clean their own act up, to the extent it's dirty. (And she wants a little cred for the feminist movement, to the extent it helped that along.) And she's all for tougher women, too – your disagreement sounds rather mild here.
I have no doubt that even if they didn't agree with him politically, Lenny Bruce and George Carlin would have defended Rush on that one.
Well, obviously they're not around to ask. Interestingly, if you google for Rush Limbaugh and George Carlin, you can hear a George Carlin monologue that many seem to think is about Limbaugh. Carlin's not around to ask about that either, and it's probably impossible to know because he didn't actually name any names.
@William O. B'Livion:
> "just" isn't relevant.
> Limbaugh is an entertainer, part of his schtick is comedy and parody.
But, for example, George Carlin didn't see anything as sacred and Rush (a) is unabashedly partisan and (b) claims he's a comedian when it's useful, and claims he's a commentator when that's useful.
> I believe one of his taglines is "We're Illustrating Absurdity by Being Absurd."
He certainly does some of that, but there are plenty of conservative absurdities he chooses not to illustrate.
> I don't think this is quite accurate, but he seems to think so.
I don't know that you can tell what he believes based on his professed beliefs. As pointed out, he is a professional entertainer, and he has apparently described himself first and foremost as a businessman. I don't think you would find Lenny Bruce or George Carlin doing that. Thinking about it, that self-description may actually be part of the catalyst for the Carlin monologue about businessmen and their cigars.
Ms. Fluke had made it clear at one point that she deliberately chose to attend Georgetown University *because* it was a Jesuit college
I'd like to see a cite on that. What I read is that she did enough due diligence to figure out the various insurance options, but in the end decided that she wasn't going to forgo the education at Georgetown simply because the insurance wasn't up to par.
That may, of course, be a later sanitized version, and even so, if that's what happened, "you knew what you were getting into" may be a valid argument, but it's still not the same argument as "you deliberately chose a particular college because you wanted to make a scene."
Rush (a) is unabashedly partisan
I would say ideological, not partisan. He criticizes Republicans and has been known to praise Democrats.
Re Fluke, I found this:
Fluke came to Georgetown University interested in contraceptive coverage: She researched the Jesuit college's health plans for students before enrolling, and found that birth control was not included.
If Carlin didn't see anything as sacred, he still conspicuously chose not to satirize anything held sacred by the hippie crowd, beyond the point of that being mere coincidence. (I've listened to a lot of Carlin, but not everything, for sure, so if you know of a bit where he rags on hippies in his later years, I'd be mildly surprised.)
If this holds, then Limbaugh is the analog of this, for conservatives. And rather tangential to social conservatives at that; his rants don't dwell on the Bible much, and he doesn't seem much for buttoned-up, puritanical asceticism. Ann Althouse referred to him as a "shameless sybarite" for this reason.
>If Carlin didn't see anything as sacred, he still conspicuously chose not to satirize anything held sacred by the hippie crowd
That is so not true. Who can forget "Toledo Window Box"? https://www.youtube.com/watch?v=TjKaVPWTW08
Or, for that matter, the Hippy-Dippy Weatherman? https://www.youtube.com/watch?v=4z2yIOM-R-w
I would say ideological, not partisan.
I would concur with this. There's an interview he did for the Today Show where he vigorously criticized then-Republican Party chairman Michael Steele for being "off message".
Fluke also made the absurd claim that birth control costs $3,000 a year
There are a few different issues here. The first is that hormones are tricky things and some women suffer side effects with some birth control pills and not others, and of course, the more recent formulations designed to be less problematic are going to cost more. A second thing is, of course, the ever-present drug salesmen in physicians' offices. The third thing has to do with the way health "insurance" is run in this country. It is inimical to the free market, does not give consumers any reason to shop around on price, and often raises prices for the uninsured, because the insurers only pay 80% of "usual and customary." How exactly does that work?
The other problem with the third thing is that it caused so many problems that we now have even less of a free market. If you want a free market in health care, you could do worse than mandating that people can take whatever insurance their employer supplies, or not, with after-tax money.
Having said all that, for people who need fancy birth control, $3000 may not be that far out of line. Here's a generic drug:
http://www.goodrx.com/loseasonique
I wonder what it cost when it wasn't generic?
@ Paul Brinkley – "It sounds like another TSA is the last thing she wants."
Agreed, but as was pointed out in other posts using MADD as an example, even a worthwhile grassroots movement can lead to new legislation. Who's to say that a radical feminist organization won't co-opt her noble grassroots movement and turn it into a political and legal inquisition? Not such a stretch based upon recent history (see rape-culture legislation in California).
> found that birth control was not included.
Yeah, that's what I read. But I read the next sentence too:
"I decided I was absolutely not willing to compromise the quality of my education in exchange for my health care," says Fluke
I think it's admirable, when comparing colleges, to look at all the things that are important to you. It's also admirable to try to make things more fair, which, in her mind, she was attempting. You can certainly disagree about what's fair, but most of the valid arguments against mandated contraceptive coverage are also valid against any employer based insurance coverage. Get employers out of the insurance business and you won't have to worry about the details.
Now if somebody points out something that truly shows she was being more calculating, then I'd like to see it, but absent more information about her mindset pre-law-school, Occam's Razor says she probably really did want to go to a school with contraceptive coverage, and that was a negative for Georgetown in her calculus.
Pointing to one kind of birth control pill that seems to cost $100-$200 for a three-month supply (i.e., roughly $400-$800 a year) is not a good way of "proving" that birth control can cost $3,000/year. But even if you can find an outlier at the fabled $3,000 price, the fact remains that most birth control pills cost a fraction of that, other forms of birth control cost even less, and any Planned Parenthood clinic will help out anyone who asks. The basic point is that Fluke lied because she's an activist with money pretending to be a poor oppressed college student. Rush called her on that, with edgy humor that some found offensive. Overall, I think he was closer to the truth than she was.
"I decided I was absolutely not willing to compromise the quality of my education in exchange for my health care,"
She's from Pennsylvania, and went to college at Cornell. A quick look at law school rankings shows that U. of Penn. and Cornell both rank higher than Georgetown. If she wanted to be in Virginia, U. of Virginia also ranks higher than Georgetown.
Face it: she chose Georgetown in order to be an activist there. You may consider that an interest in "fairness," but I don't think hassling a Catholic school to pay for birth control counts, any more than would demanding a Jewish caterer serve ham sandwiches on a Saturday.
> I would say ideological, not partisan.
I would somewhat accept that characterization, except:
@Paul Brinkley:
To me, that sounds partisan, as in "I'm part of the team and I have a say in the messaging."
If Carlin didn't see anything as sacred, he still conspicuously chose not to satirize anything held sacred by the hippie crowd, beyond the point of that being mere coincidence.
I'd like to know what exactly you have in mind. I'm sure he would have criticized it if he had thought of it.
> That is so not true.
Yeah, and don't forget:
http://vhemt.org/carlinsaveplanet.htm
> Pointing to one kind of birth control pill that seems to cost $100-$200 for a three-month supply…
Hmmm, my experiences with buying birth control for my family sometimes involved a co-pay that approached $1000/year, so $3000 didn't seem implausible to me.
Since my experiences were month to month, I didn't notice the 91-day-supply thing. You may be right. I have no idea about current birth control prices or whether they have changed. Maybe a few patents have expired since then, or maybe she managed to find the most expensive contraceptive at the most expensive pharmacy in the most expensive neighborhood around Georgetown.
Interestingly, I've read recently that a lot of Republicans are suggesting that birth control should be OTC. That's an excellent solution to the problem — since insurance companies don't cover OTC, Hobby Lobby has nothing to complain about, and OTC status generally makes drugs cheaper because the free market comes into play. Expect the drug company lobbyists to fight this tooth and nail.
A quick look at law school rankings shows that U. of Penn. and Cornell both rank higher than Georgetown. If she wanted to be in Virginia, U. of Virginia also ranks higher than Georgetown.
Without knowing which, if any, of those she got accepted into, and how much, if any, aid she was offered, and the conditions, if any, of that aid, or her preferred geographic location (maybe nearby her boyfriend?), that information by itself is completely useless.
Face it: she chose Georgetown in order to be an activist there.
In your opinion. Without knowing more, I'd give that low odds.
comment in moderation queue
That is so not true. Who can forget…
First, that's really old Carlin, from before (I think) he got his comic politics more congealed. Secondly… that's just it. I claim the hippie crowd holds none of what he's satirizing sacred. "Marijuana makes you forgetful" is cheerfully acceptable. "Marijuana is a complete waste of your time" would be more sacred, and he doesn't go there. Al Sleet is loopy, but loved.
(Matter of fact, I'd say every entertainer holds something sacred. When they say something and the audience claps and cheers because they're Speaking Truth, you'll know you've found it.)
My previous comment will probably give you more triangulation, but I'll try to be even clearer: I'm distinguishing between earnest environmentalists (like the VHEMT guy) and hippies who take a laid-back view on everything.
I really believe Carlin's core groove was that narrow, and that definite. What I'd consider more of a counterexample to my claim is a bit in which Carlin suggested that buttoning down and getting serious beats taking it easy and riding along on planet earth.
I have something in the moderation queue, but here is another version of it:
http://directorblue.blogspot.com/2011/08/flashback-george-carlin-obliterates.html
Oh, forgot to mention: the reason I find Limbaugh's criticism of Steele ideological rather than partisan is precisely because Limbaugh perceived a conservative ideology to stick to, even if the Republican Party or its leadership left it, as Limbaugh felt Steele was doing. Partisan would be if he closed ranks with their leadership.
@Paul:
Sorry, was confused about what was in the queue. You've seen what I was referring to.
I don't see Limbaugh as being anything like an ideologue. He's a paid one-sided polemicist and not nearly as neutral as Christopher Hitchens, who would've argued with jaybirds in a pinch.
Bigboss says:
Women as warriors doesn't make sense. They make decent guerrillas, sappers, etc., but the bottom line is that the actual heat of battle comprises a small percentage of the time an Army spends "in combat." A soldier on the battlefield carries a 7 lb rifle plus ammo (itself heavy), food, and water (also heavy) over great distances, spends weeks at a time in unsanitary conditions without so much as an underwear change, and performs a plethora of other tasks that women are simply less suited for than men. When I spent 2 weeks pounding the bush, I and my comrades came out with jock itch and all other forms of minor irritations related to not showering. Women in such conditions would be in danger of serious health problems.
You heard about women in combat in Iraq in 2003, but not so much by 2006. That's because the Army figured out FAST that it was a disaster.
I suppose we've settled the question of the OP, but for the three of you who don't read Instapundit, there's a link there to:
http://www.missedinhistory.com/blog/raining-on-your-parade-about-those-women-viking-warriors/
@TomA
> As to the root issue, I agree with Jay Maynard. We need tougher women, not a TSA of the internet.
First, sorry, I faded out; I had a work crisis… but seriously, how can anyone who has ever read anything I have written here think I want a TSA of the internet? I barely ever get on airplanes anymore because I hate the TSA so much.
Look, consider something that is far less bad than the stuff I am talking about, namely the Westboro Baptist "church". What a bunch of nasty scum those guys are. I would NEVER advocate laws to ban their speech, though I think laws against trespass and nuisance should be vigorously enforced, and I think an occasional fist down a few throats may well qualify under the EED affirmative defense.
However, I would never suggest that the mother of the dead Marine they are calling a "fag lover" should just toughen up and deal with it, or that the poor people grieving whatever celebrity died and had his funeral picketed should stop being a bunch of crybabies, or quit their bitching and whining.
However, I also don't think they should go unopposed. Opposed by ostracizing them, opposed by shouting down their nonsense, opposed by jumping all over anyone who starts with the sentence "well I don't agree with their methods but they do have a point…" Opposed by mocking their patheticness, or revealing what a bunch of nobodies they are. And, in this particular case, opposed by simply ignoring them, which is about the worst thing you could do to them. It always makes me laugh to read the furrowed-brow press reporting on them, as they suck at the very teat they criticize.
But again, what they do is tame compared to what many women who participate in online forums have to suffer, and, as I said, I think it is just a manifestation of a deep thread in society that needs to be rooted out. Hell, it has even happened here in this bastion of clear thinking (not too often, thankfully), but those who have been around for a while will remember the whole "fluffy girl in a man's world" comment. Hardly a death threat, but a small dose of sexism right here on A&D.
I claim the hippie crowd holds none of what he's satirizing sacred. "Marijuana makes you forgetful" is cheerfully acceptable. "Marijuana is a complete waste of your time" would be more sacred, and he doesn't go there. Al Sleet is loopy, but loved.
So his target audience cheerfully lets him caricature them.
What I'd consider more of a counterexample to my claim is a bit in which Carlin suggested that buttoning down and getting serious beats taking it easy and riding along on planet earth.
I guess I'm still not seeing your distinction — I don't see how that wouldn't just be another caricature in the same vein. After all, if you can afford tickets to see Carlin, you've probably got a job, right?
Well-said.
Speaking of which, this just showed up at popehat:
http://www.popehat.com/2014/09/06/u-c-berkeley-chancellor-nicholas-dirks-gets-free-speech-very-wrong/
If you can afford tickets to see Carlin now, you probably have supernatural powers.
Or someone just found a gig that puts scalping to shame. But moving on.
It's possible that you're looking too hard or too closely for the distinction. I can't tell from here. I don't think my claim is that bold, although I also think it's better than the null hypothesis. I'll try to put it yet another way: there exists (multiple) X such that X != "nothing is sacred" (no Russell paradoxes here, please) and Carlin held X to be sacred and the hippie crowd takes X seriously, too. Oh, and X is not a universally shared belief (as in not even 97% or what have you).
Now, I'll grant that hippies stand out as one of those crowds that can take a joke at itself, and that this is to their credit, but they still have values beyond this, including stuff like "none of us truly owns anything" and "Bush stole the 2000 election" and "children should ask more questions". Carlin didn't ridicule this class of things. At least, I don't remember him poking at any of them.
This all feels a bit off-topic from a thread that was itself off-topic, so I'll try to bring it back: Carlin had his own form of partisanship. I don't see anything wrong with that (rather, the error I see would be in claiming that he did not). In fact, practically every entertainer I can think of has partisan beliefs, and incidentally, if I've heard of them, then they're likely to have enough affluence to be able to afford their partisanship affecting their professional choices. Again, nothing wrong with that. It's just that on that front, I don't see Rush Limbaugh standing out particularly (modulo his specific entertainment form necessarily revolving around it).
Jessica, when a person asserts a thing they couldn't possibly know the truth of, it shows more about what that person wants to believe than the truth.
Here's the bottom line to all of this: are these anonymous threats by 5 people, 80, a thousand? We don't know. If you don't know, you can't measure a thing, and if you can't measure a thing, you can't compare it to something else. How many men are abused online? Given the absurdly great numbers involved on the internet, it may be impossible to know.
It may not be obvious to you but it is obvious to me from your comments that when you think of the abuse of human beings it's women first, men second. Imagine if law worked like that and then imagine why I reject your feminist slant on the world and conflating common insults with sexism. Has it occurred to you your very comment is sexist?
"Fluffy girl in a man's world" – I still chuckle at that. What was that, a year ago now? (Of course, I feel I can laugh at it, because I felt you gave as good as you got…)
\begin{Sarcasm}
Because whenever someone calls for changes that have to be externally enforced, especially when it's a woman, they mean they want an overbearing government agency to be created to enforce it.
\end{Sarcasm}
True dat. Somehow when I condensed my response, it wound up in present tense…
In fact, practically every entertainer I can think of has partisan beliefs, and incidentally, if I've heard of them, then they're likely to have enough affluence to be able to afford their partisanship affecting their professional choices.
True dat, too, but Carlin's focus was entertainment, and Rush's… isn't.
In fact, practically every entertainer I can think of has partisan beliefs, and incidentally, if I've heard of them, then they're likely to have enough affluence to be able to afford their partisanship affecting their professional choices. Again, nothing wrong with that. It's just that on that front, I don't see Rush Limbaugh standing out particularly (modulo his specific entertainment form necessarily revolving around it).
Rush does particularly stand out. To the extent his audience primarily listens to him for the humor factor, 42% more of Rachel Maddow's audience does the same thing, which is actually too much for me to think about without getting a headache right now. In any case, it's pretty clear from this chart that a typical Stewart or Colbert fan thinks he's getting comedy, and a typical Limbaugh fan thinks he's getting news and opinions:
http://www.people-press.org/2010/09/12/section-4-who-is-listening-watching-reading-and-why/
If this chart is accurate, then it says to me that the argument put forth by several here — that Rush is only or mostly an entertainer, who can be expected to screw it up occasionally because he's joking 15 hours a week, and who should get the same leeway as any comedian on jokes because everybody knows he's always joking — is actually not incompatible with the argument that most of his audience are idiots.
> How many men are abused online? Given the absurdly great numbers involved on the internet,
I suggest you read the article I mentioned. It quantifies this.
> It may not be obvious to you but it is obvious to me from your comments that when you think of the abuse of human beings it's women first, men second.
Isn't that the whole point of this thread? Men are warriors because they are disposable, and the uterus carriers need to be protected against all assaults? Boobies and babies rule the world. Chicks before dicks!!
I think I noted how that article quantified male abuse vs. female abuse in an earlier comment. To recap: something about that study doesn't add up for me. First, it didn't come anywhere close to comporting with my own experience playing a female avatar (actually several) online. (Anecdotal, yes; but still, nowhere near.) Second, exactly how does one go about getting 100 abusive messages per day? I haven't trolled in years, but I'm extremely certain I'd have to be the second coming of Hitler to get that much.
Rush does particularly stand out. To the extent his audience primarily listens to him for the humor factor, 42% more of Rachel Maddow's audience does the same thing, which is actually too much for me to think about without getting a headache right now.
I'm having trouble resolving your referents here in a way that strengthens your point, your own headache notwithstanding. (I've never listened to or watched Maddow, but I get links to her stuff from others, giving me the impression she's less of a commentator than Rush.)
As for that chart: it's kinda interesting how much work went into it, but I note that it's essentially a poll. Incidentally, if I guess correctly and you infer 42% more of Maddow's audience watches her for entertainment than of Limbaugh's from that 7-vs.-10 figure in the entertainment column, I'll note that the "mix" column has Maddow at 15 and Limbaugh at 28. What if most of Limbaugh's 28 are mostly there for entertainment, along with news, and same for Maddow?
Which raises another question: this is what people say they go to each source for, but not how much from each source they actually absorb as news or fact. This is critical, and also virtually impossible to tell for sure. Maybe people leave CNN on in the background "for news" and pay more attention to the Daily Show. Or Rush. I can't tell. I'm not even sure how Pew verifies that the polled actually regularly watch each source. It says it's based on that, but not how they found that out. And in the end, it's only a few hundred at most per source, which honestly strains my credulity that this is a representative sample of millions. (I may be overly jaded on that last part; my statistics textbook is still on my reading list.)
Incidentally, if I guess correctly and you infer 42% more of Maddow's audience watches her for entertainment than of Limbaugh's from that 7-vs.-10 figure in the entertainment column, I'll note that the "mix" column has Maddow at 15 and Limbaugh at 28. What if most of Limbaugh's 28 are mostly there for entertainment, along with news, and same for Maddow?
Even if you allocate all of Rush's 28% mix to entertainment, that still only adds up to 35% of the total (7% + 28%), so a typical Rush listener still isn't there primarily for the entertainment.
Which raises another question: this is what people say they go to each source for, but not how much from each source they actually absorb as news or fact.
Sure, but tangential to the point I was making.
This is critical, and also virtually impossible to tell for sure.
It may be critical to determining how much information is being received by the audience, but is completely immaterial to how the audience perceives the show.
Maddow and Limbaugh and all the Fox commentator audiences think they are getting news and opinion; Colbert and Stewart audiences think they are getting entertainment (and opinion to a lesser extent).
The claim here is that Limbaugh is an entertainer, just like Carlin. But if this poll is accurate, then Limbaugh's audience does not perceive him as an entertainer, but (I claim, based on the Colbert and Stewart numbers) if Carlin were alive and performing a similar gig, he would be perceived as an entertainer.
There may be other possible reasons for this discrepancy I'm missing, but the first two possibilities that come to mind are that (a) Limbaugh is not primarily an entertainer, or (b) he is primarily an entertainer, but over two-thirds of his audience is somehow incognizant of this fact.
And note that my claim about how Carlin would be perceived today is completely separate from and not an antecedent to my claim that Limbaugh's numbers, if accurate, show that if he is primarily an entertainer (as several here have implied), then most of his audience is too dumb to realize it.
I, at least, have not meant to imply that Rush is "primarily an entertainer." I would say he's a commentator who uses a lot of humor in order to be more entertaining. It's silly to try to unscramble that omelette, and I don't think his audience is "too dumb to realize" what category he fits into.
I, at least, have not meant to imply that Rush is "primarily an entertainer."
Yet you seemed to characterize his entire show as "improv":
Much of what Rush does amounts to live improv comedy about politics. Do that for 15 hours a week for years, and some jokes will cause offense.
This is the "same as Colbert" theory.
It's silly to try to unscramble that omelette, and I don't think his audience is "too dumb to realize" what category he fits into.
I don't actually think that of his audience, but because of that I do think it's unfair to give him the same pass on bad jokes I would give Colbert or Stewart or Carlin. Additionally, the Carlin sketch that may or may not be about Rush didn't name names, so there's a false equivalence there about that particular joke.
I can't know whether Carlin would defend him or not (and am not presumptuous enough to say), but I can say that I, personally, don't put him in the same category as a pure entertainer — and apparently, neither does most of his audience.
No, that's why I wrote "Much of…." Perhaps I was misleading when I wrote "15 hours a week." I didn't mean that he does live improv comedy 15 hours a week, I meant that live improv comedy accounted for much of those 15 hours.
Personally, I give passes on bad jokes pretty freely, whether it's to 100% pro comedians, or Joe Schmoe trying to be funny, or anybody in between. And if someone tells lots of topical jokes on touchy subjects, they are bound to have some clunkers. Eh, so what? I'm not one of those neo-bluenoses who go around looking for offensive things to get upset about. I think the Rush/Fluke kerfuffle was largely manufactured by his enemies.
Personally, I give passes on bad jokes pretty freely,
Yeah, but what about the doubling-down for the next two days?
I think the Rush/Fluke kerfuffle was largely manufactured by his enemies.
Sure, if by "manufactured" you mean "handed to"…
On the one hand, this was one of my thoughts, and you express it well.
On the other hand, if you view it purely from a biological/historical imperative perspective, then yes, it's in most men's interest that (faithful) women be physically protected (and that rapists and adulterers be killed, of course), but perhaps mental protection is a whole 'nother thing.
Obviously, nobody cares about harassment that is bad enough to make guys commit suicide, because guys are disposable. (Unless, of course, they are gay guys, because queer!)
On the surface, it would seem that harassment bad enough to make girls commit suicide would detract from the breeding pool and be a bad thing, but maybe most women really are strong enough to cope (after all, historically, most bullying has been junior high school girls doing it to each other), so maybe the bullying helpfully weeds out the borderline psychos who would have drowned their kids in the bathtub and then committed suicide (after some poor guy had already devoted ten years of his life and resources to them). In which case, of course, the earlier suicides are a good thing.
If that's how it is, then no, there would not be a biological imperative to treat women any better than men in this aspect, and bullying in general strengthens the herd.
But maybe (to your point, I think) the bullying we are seeing now is so much worse than what we had before that women who wouldn't have drowned their kids are now committing suicide. (Of course, I am discussing the most extreme cases here, and the reality is that if this is the case, it would almost certainly mean that a lot of other women are leading lives that are significantly worse than they would have been absent the new, improved bullying.)
(As an aside, to put a slightly different spin on one of the points TomA made, if we really are seeing worse bullying now than we had before, maybe it's kind of like how we see more allergies when our immune systems have fewer external threats.)
Anyway, if this is the case, then yes, it might play into the extreme feminist and extreme religious right rhetoric that women are extra special and have to be coddled. Unfortunately for the extreme feminists, this is one of the few areas where the logic of the extreme religious right may actually be better than that of its opponents.
Unless we can somehow build a consensus that the sort of bullying you are talking about has crossed a line that no human should have to endure, and divorce it from gender.
I'm sure that we can agree there are examples floating around on the internet (real or exaggerated) of behavior that nobody should have to put up with.
But (to sort of segue back to cops and Ferguson) when discussing such behavior, it is not necessarily helpful to point out that it mostly happens to women, because if we stamp out such behavior it will be helpful to guys, too.
Anyway, I was sort of surprised at the intensity of the argument you got involved in, so I re-read your posts, and I have a couple of belated comments:
I don't think there should be a law to fix it, but I think there should be a movement to change the way people think about this stuff, and to object to it loudly.
When I read the article you pointed to, at the top of my browser window (i.e., embedded in the HTML) it says "The next civil rights issue: Why women aren't welcome on the internet."
Of course, the article leaves that first part off. Maybe it used to have it — couldn't say. The effect on my browser window is subtle enough to qualify as subliminal, but we all know that "civil rights issues" require legislation, so when you point out an article like that as being gospel, that is certainly going to color how people view your opinion — probably whether they realize it or not.
I am talking about guys who go online and say "I know your address, I am going to come when you least expect it, torture you, rape you, then kill you." Is that a holdover from the "male workplace of the 1950s?"
That's an actionable threat of violence, already covered by laws in most jurisdictions I know of. If the authorities aren't taking that sort of thing seriously, that's a huge problem, but we don't need additional norms to codify things that we all agree are bad enough that we already have laws against them.
Anyway, thanks for sticking around — I gain a fair amount of insight from a lot of your comments.
That's an actionable threat of violence, already covered by laws in most jurisdictions I know of. If the authorities aren't taking that sort of thing seriously, that's a huge problem…
Forgot to mention that, from what I've seen, the more usual problem is that authorities actually take this sort of thing far too seriously.
Peter Donis says:
If you doubt me read this article, and I can assure you it corresponds very closely with my experience.
The Last Psychiatrist has a take on this article that I think is worth reading:
http://thelastpsychiatrist.com/2014/05/cyberbll.html
@ Jessica – "Men are warriors because they are disposable"
I prefer the term "expendable." If we make it back alive, there's nothing quite like victory sex.
"and the uterus carriers need to be protected against all assaults"
An implicit duty that all men relish.
"Chicks before dicks!!"
OK, we had that one coming after gloating about our evolution-derived massive torsos and biceps.
> I prefer the term "expendable" (to "disposable")
Depending on which dictionary you look at, you might change your mind (or not)…
@Peter Donis:
Thanks for the link. I love that website but don't think to go there often.
Eating some crow.
http://mancave.cbslocal.com/2014/09/09/ass-kickers-of-antiquity-khutulun-the-real-warrior-princess/
>Eating some crow.
I'm skeptical. This has all the earmarks of propaganda intended to overawe the Mongol Empire's subject peoples. "Look! Even our women are tougher than you will ever be!" There may have been a historical Khutulun who defeated some men at wrestling, but the huge round numbers say folktale to me.
Jessica, I think the point that is being missed is that an ideology structured around the superior morality of women, such as Laurie Penny and Anita Sarkeesian represent, is of course going to get pushback mostly from men. Prominent activists have large platforms that reach a lot of people, so I'm not surprised men outnumber women in these instances.
A more apt comparison would be to compare prominent women personalities and critics to their male counterparts, not prominent women to an audience. And who do we think a large audience of men targeted by a women's movement are going to target in return? Arabs?
Go to any place like The Guardian or ESPN and the commenters will outnumber the single author of the post many times over. If that author comes from a space where they conspicuously self-identify as gay or female, and they target Arabs, the pushback will come mostly from Arabs and will be aimed at what is then perceived as womanhood or gayness, because feminists and gay activists always maintain they represent all women and gays. They are certainly not doing all women and gays any favors.
The truth is that failure exists on a human level. If you maintain that is false, and that Arabs or gays are prone to failure, the angry reaction will naturally come mainly from Arabs and gays, thereby proving the insane theory in the first place, in a merry-go-round of logic I cannot penetrate.
> an ideology structured around the superior morality of women, such as Laurie Penny and Anita Sarkeesian represent, is of course going to get pushback mostly from men.
I sure hope I misunderstand what you are saying, Fail, because it seems to me that what you are saying is "because these women are bitches they deserve this treatment," though you did decorate it in much nicer words.
FWIW, I don't know anything about either of the two women to whom you are referring.
The subjects that provoke these attacks are not, generally speaking, "women's issues," whatever that means; in a sense, that is the point. It is the dismissal and threat against these women because they are women that makes it especially pernicious. And it is responses like yours, which are dismissive of these concerns (irrespective of what other injustices there might be in the world), that are frankly part of the problem.
If I remember rightly, the "fluffy girl" dismissal was over a discussion on firearms, though Google couldn't help me find the original thread.
And FWIW, I can't comment on the prevalence of these things in online games like WoW since I am not a gamer and don't really know too many gamers. The gestalt might be different there.
"There may have been a historical Khutulun who defeated some men at wrestling, but the huge round numbers say folktale to me."
Also, in modern competitive sports a lot of effort goes into screening out female contestants who are not 100% female. The customary male/female dichotomy is a big simplification.
In other words, not everyone with female genitalia has the body of a woman and vice versa. And there are various intermediate forms.
Hence the call for nonbinary gender in SF, which was dismissed as so much intersectionalist claptrap by the SF old-guard establishment.
>Hence the call for nonbinary gender in SF, which was dismissed as so much intersectionalist claptrap by the SF old-guard establishment.
Quite as it should have been, because the call wasn't an appeal for rational objectivity about intersexes. Rather, it was a demand that authors treat centrally important facts about non-intersexed humans as though they are mutable social constructions when they are not.
It is the dismissal and threat against these women because they are women that makes it especially pernicious
But it's hard to tell if it's that. I suspect a man who said and did the same things (roughly "You're all violent sexist homophobes who need to change and the games you like should be more PC") would get dismissed and threatened, too. And just as some claim every attack on Obama is "racism," some like to pretend that every attack on (or defensive action regarding) a female social justice warrior is "sexism." I have no doubt that sexist things have been said about them, but I think this distinction should be made.
>If I remember rightly, the "fluffy girl" dismissal was over a discussion on firearms, though Google couldn't help me find the original thread.
Here it is: Objective evidence against racism
Jeff Read: It's "intersectionalist claptrap" because nobody is stopping anyone from writing "non binary gender" fiction, but many resent being told what they must write.
"because these women are bitches they deserve this treatment"
You're doing it again. That is a fake argument that treats any pushback against defamation as "misogyny." Are white supremacists "bitches?" Is that the issue? Really? Do you have race- and sex-neutral definitions of "supremacy?" How about "group defamation?"
Penny is out-of-this-world defamatory against men and whites. Her writings are ludicrously moronic. The idea that Sarkeesian is some no-dog-in-the-hunt social scientist who by an amazing coincidence only sees flaws in men is laughable. The idea that there is a cogent ideology of women-hating men infesting America that is an analogue to the KKK or insane intersectionalism is laughable. On the other hand, the tenets of radical feminism, with its crazy "trigger warnings," "rape culture," "privilege" and postcolonial groups that are mysteriously never post-Muslim Spain, have penetrated America's core institutions. The argument then goes "Well, then there magically exists the opposite." No. That only works in human nature, not in an actual formulated ideology. It's like saying that if I write a book, another that is its opposite magically pops up somewhere in America, or that Nazism automatically created an anti-white-supremacy movement. There is no men's version of radical feminism in America.
Are you pranking me? Why even respond? How do you know they are being dismissed because they are women? Do I dismiss the KKK because they are men? We have dictionaries that tell us when people are being racist, sexist or gender expression-phobes. We have radar guns that tell us when people are speeding. It is not necessary to find out if the driver is gay, non-white or a woman before we consult the radar gun and then decide if the person over the speed limit was speeding or a bitch.
I don't live in a world where I can't critique a person because women couldn't vote a century ago, or because slavery once existed, or because of anti-homosexuality laws. I look at what people actually do. Sarkeesian and Penny are not social journalists but androphobes who are either too stupid to know that or figure lighting up men is easier than working a forklift.
The call for non-binary SFF was another shill of bigotry and supremacy posing as diversity. When actual comments were still being left at Tor.com the author, Alex MacFarlane, got on Twitter and wrote "cis peeeeooooople." She has plenty of troublesome quotes like that, including how great a world without men would be. What would she and all the PC call someone who Tweeted "homo peeeeooooople" in exasperation at the innate short-sightedness of gay people? In fact anyone publicly doing that at Tor.com or Google or Publisher's Weekly would be summarily fired.
I'm constantly amazed by how outright sexual bigots and racists in SFF skate clean away merely by constantly saying the words "anti-racist" or "social justice." It's like gays, non-whites and women are held to some otherworldly status like magic leprechauns or something.
Jaymee Goh, co-founder of WisCon's demented racially segregated "safer-space" routinely makes the most insanely racist comments and calls what she does "anti-oppression work" and "great justice."
Yet Goh has also referred to whites as "sour dough-faced," mocked their "white tears," and written "lately every week is white stupidity week" and "The truth about which white people are innocent of racist acts? Yeah, I'll admit to not caring about that." It's no coincidence she has "third world intersectional feminist writing sff" at the head of her Twitter feed.
It's like some insane upside-down world where any "marginalized" group has a special dispensation from the Pope to act as racist as they want and people avert their eyes, like the criminal no one saw in that old Damon Knight SF story, "The Country of the Kind." The fact this is happening in SF makes that all the more ironic. We don't take the trouble to respect principled morality tales that define all of us as flawed but instead give awards to idiocy like Rachel Swirsky's defamatory "great justice" literature.
Did her anti-oppression work involve moving any ZIGs?
My readings suggest that the Mongols did field significant numbers of fierce female warriors. But that doesn't invalidate your core premise: Mongol warfare strongly favored the bow. In sword- or pike-based warfare, short work would have been made of those women.
> You're doing it again. That is a fake argument that treats any pushback against defamation as "misogyny."
Death threats are not "pushback". Either you are changing the subject and attacking a straw man, or you are a very troubling individual.
The funny thing about Goh is that she's not an American citizen, never shuts up about whites and how America is a white supremacy, but won't go live in her own country, Malaysia. It's typical of the PC that they complain about the West they won't leave and won't live in the places they find most noble. What kind of person won't live among the people they defend the most and won't leave those they hate? A deluded liar. That's what radical feminism is: one giant shill, barkers at an old carnival sideshow who have your snake-oil tonic that cures all.
I've asked you before: what death threats? By whom, how many? Are you in law enforcement, where you can see across a national culture you are admittedly unfamiliar with and launch an investigation?
First of all, the idea that anyone in America is somehow not against death threats, which you just suggested, is idiotic. Now THAT'S a straw man. Feminists do the same thing with rape, as if men are indifferent to it. Why not just bravely declare you are against murder?
Second of all, there are death threats and death threats. By now everyone knows the anonymity of the internet allows for the most vulgar and silly remarks. In short, if you want to make a case for this occurring in feminist Ladyland, start listing the dead. I imagine the fakes outnumber the real ones, such as the ex-U of Nebraska lesbian who carved a cross on her chest and called the police to report an assault. She was prosecuted. Or the U of Wyoming "feminist" who faked a rape threat, complete with later candlelight vigils. She was also prosecuted.
So who's making these threats?
Publicity sells. Both Sarkeesian and Penny routinely announce they are scared to stay in their residences or even to leave them. Why is that? I am not some chump who simply believes anything anyone says. If you have a case to make, make it, but the idea that I'm just going to always believe a culture that demonstrably has it in for men is silly.
Here's an interesting article on who is getting harassed:
@ ESR – "I'm skeptical."
Actually, I offered that reference as peacemaking humor, but here is another source describing Khutulun, who apparently existed and was known to Marco Polo. The exception proves the rule.
http://www.laphamsquarterly.org/roundtable/roundtable/the-wrestler-princess.php
@Jessica Sorry, but you're a straight up fool. The "death threats" these people receive are part of their business model. "Fail Burton" has nailed you to the post by getting you to admit that you don't know anything about these people – just that you heard seventh-hand that they are receiving "death threats". This is the classic ploy (and Fail Burton alluded to this) of parlaying drummed-up sympathy into new avenues. Not that you've understood anything the guy wrote (or will understand what I'm writing now).
kjj says:
@Jeff Read on 2014-09-10 at 12:12:39 said:
Non-binary cases are either invisible or rare. For example, if a genetic man, for whatever reason, developed hormonally as a woman, she would have been "barren" until the last few decades. (By the way, I'm not making the case that this is the situation for the Mongol woman, or any of the viking warrior women, or really anyone in particular.)
But that doesn't matter much. The call mentioned is about trying to equate the few people with biological conditions with the much larger cohort that makes a choice. It has been going on for so long now that I may just get flamed to hell for saying it.
The appropriate place for non-binary gender in fiction is when it matters to the story. If you shoehorn it in because "inclusive", you'll get groans from most people, but kudos from your choir, at least for a few moments before they switch to patting themselves on the back for having praised you, often without buying or even reading your book.
Notice that there is no call to pointlessly include other rarities in fiction. Should fiction include more hunchbacks, harelips and conjoined twins?
Roger Phillips says:
@Jessica Sorry, but you're a straight up fool.
Ah Roger, I see you have decided to return to your rude, curmudgeonly ways.
I think if you both actually took the time to read what I wrote rather than what you think I wrote, and actually treated me as an individual rather than as a representative of the feminazis, you might find me a little less foolish than you think.
However, I appreciate you both reminding me why I avoid discussing these issues here. The anger and vitriol do not make for an enjoyable discussion, and unfortunately bring out the worst in me too.
Jeff, your conclusion does not follow from your premise. If the story calls for such, then it should be included. If the gender of the character is not important to the story, I can see throwing it in to help round out the character – in the same proportions that it occurs in the overall population. For example, I had no problems with the gay man in Old Man's War, because it rounded out the story a bit.
But gratuitously making lots of characters have characteristics that aren't common in the overall population? You'd better be able to justify that within the context of your story, not just "because these people are underrepresented and I'm going to fix it".
@ Jay Manard – "not just "because these people are underrepresented and I'm going to fix it".
That's an interesting insight. Have we degenerated to the point where we now have advocacy fiction as mainstream practice? Instead of enjoying a novel for the story and writing skill, do we now have to actively filter for political correctness and covert agenda? This could be a new branch of memetic subversion and possibly has been ported over from the use of public school textbooks for PC indoctrination.
What is this bullshit? This is _exactly what you are doing_. I know exactly what you mean, have not misinterpreted you once. If you're going to accuse me of misinterpretation, point out the sentence I misinterpreted and the sentence where I misinterpret it. This is not some optional thing you can either deign to do or not. If you make shit like this up you look like a fool, and I will call you a fool. I am well aware you're not a "feminazi". Even this paragraph accusing me of misinterpreting is a misinterpretation – the whole point of my post was that you're a "useful idiot" for the feminists, not that you are a "feminazi". That you think this has anything to do with what I said shows you are a surface reader like Winter who skims for what you want and leaves the rest.
On my part it has nothing to do with the issues and everything to do with the fact that you're unable to exercise reading comprehension, and then turn around and accuse ME of misrepresenting. And you call ME rude. Believe it or not, this is not me "angry". This is me coldly spelling out your errors.
@Jessica Boxer on 2014-09-10 at 14:23:15 said:
This is the internet. Death threats practically have their own DSCP. It's almost a *sport* in some places.
Larry Correia probably gets a couple an *hour*.
Yes, there are misogynists out there, but there are lots more who are just pathetic, powerless little f*ks who harass *anyone* who doesn't fit their worldview.
"On my part it has nothing to do with the issues and everything to do with the fact that you're unable to exercise reading comprehension, and then turn around accuse ME of misrepresenting."
Funny, I got the same response from you, almost the exact same words.
It seems you are encountering many people that are unable to understand your crystal clear prose.
> What is this bullshit? This is _exactly what you are doing_. I know exactly what you mean,
No actually you don't. For example, you said I was "nailed" because I had no idea who the two random people that that other guy brought up were. But I never brought these people up, I never discussed their work, and I know nothing about them, I have no idea if I agree with them or disagree with them. So how exactly does that mean I am "nailed"? You misinterpreted the thread entirely.
> Believe it or not, this is not me "angry". This is me coldly spelling out your errors.
Yes that is apparently your shtick. I wonder about it. Your thing here is that you love to sit in the background and snipe and pick at other people's shortcomings, and rarely have much to contribute yourself. It must be a pretty depressing way to live, always in the negative, never in the positive.
If I remember rightly you are from Australia. My impression of Australia is "Home and Away", "Neighbours", Kylie Minogue, golden beaches, sun and barbecue. Not some grumpy academic stewing in his black hearted schadenfreude. You are TOTALLY ruining Australia for me.
> Funny, I got the same response from you, almost the exact same words.
Now Winter, here is a guy who I disagree with a lot, but he makes me think. He has lots of interesting things to say, and I have learned a whole bunch from him.
Winter is Dutch. The Netherlands has no beaches, no sun, it is cold, and it rains all the time. But Winter totally makes me want to go there again. The people seem really cool.
BTW, I learned something yesterday about the Dutch. Apparently, according to Wikipedia, the Dutch are the second largest exporter of agricultural products in the world, after only the USA. Pretty amazing for such a teeny weeny country.
Blast it all, Roger! How could you do that to Australia!
"The Netherlands has no beaches, no sun, it is cold, and it rains all the time"
http://www.ontdekdenhelder.com/wordpress/wp-content/uploads/strand-van-scheveningen1.jpg
@Fail Burton: "Jessica, I think the point that is being missed is that an ideology structured around the superior morality of women such as Laurie Penny and Anita Sarkeesian represent is of course going to have pushback mostly from men."
I think this is where the conversation took a turn. Previously, Jessica Boxer was talking about death threats as a general thing, and then you brought up "ideology" and those specific names (because you think they're the only women who have ever received death threats, I suppose?) and you and others on your side ("just that you heard seventh-hand that they are receiving "death threats".") proceeded to assert that she was talking about them.
@esr, I would like to say, that if you are considering changing software, please consider one that has a "reply" button that provides a nicely formatted quote with a back-link to the previous comment. That's a nice middle ground between threaded discussion and the kind of unstructured flat discussion you have now. Backtracing who was replying to what and who said what when was painful.
@Random832
> @esr, I would like to say, that if you are considering changing software, please consider one that has a "reply" button
He actually had something like that a while ago, and he smartly pulled it, because it kind of sucked. I guess it is easier just to read the tail, especially for observers of a thread. One thing I think would be good, though, is to do with the moderation queue. It tends to trap legitimate things; the problem is that if Eric, who is generally pretty diligent about these things, doesn't get to it for a few hours, the volume here is such that the approved response gets stuck way back in the discussion thread, and tends to get lost. If there were some way that, when a comment was approved, its date/time changed to the date/time of approval rather than the date/time of submission, that would not happen.
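Something like this sketch of the idea (a Python illustration only; the names are made up and this is not any real blog engine's API):

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Comment:
        author: str
        body: str
        submitted_at: datetime
        display_at: datetime = field(init=False)

        def __post_init__(self):
            # Comments normally sort by when they were submitted.
            self.display_at = self.submitted_at

    def approve(comment: Comment) -> None:
        # Releasing a comment from the moderation queue re-stamps it, so it
        # surfaces at the tail of the thread instead of being buried hours
        # back at its submission time.
        comment.display_at = datetime.now()

    # Threads then render in display_at order:
    # thread.sort(key=lambda c: c.display_at)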
"He actually had something like that a while ago, and he smartly pulled it, because it kind of sucked. I guess it is easier just to read the tail, especially for observers of a thread."
I'm not sure I understand what you mean by "easier to just read the tail", unless what you are talking about is threaded display [which is specifically not what I was proposing]. The only difference would be that each post that is made in reply to something, instead of merely containing an @name and maybe manually pasted quote, would have a link to the previous post. The order things are displayed in would remain chronological. This is how numerous online forums do things.
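To spell out the data model I have in mind (a hypothetical sketch, not any particular forum's schema):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        id: int
        author: str
        body: str
        in_reply_to: Optional[int] = None  # back-link to the post replied to

    def render(thread):
        # Display stays strictly chronological (the list order); the only
        # addition is a link back to the parent post. No nesting.
        by_id = {p.id: p for p in thread}
        for p in thread:
            header = f"#{p.id} {p.author}"
            if p.in_reply_to is not None:
                header += f" (in reply to #{p.in_reply_to} by {by_id[p.in_reply_to].author})"
            print(header)
            print("  " + p.body)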
I second Random832's suggestion.
By the way, I never understood what's so great about beaches. Frankly, I'd be okay with a country with "no beaches, no sun", where it were cold and rained all the time.
And even if I liked beaches, what could a given beach – in this case, a Dutch one – offer me that was absent from the others? They're all alike, aren't they? Besides, I'm pretty sure the Netherlands has much to offer outside of its beach(es); in particular, I find Amsterdam's canals very appealing.
Mr. Random832, I don't think you understand the radical feminist ideology and theory in play here.
There are no bad individual men in feminist theory. That's why they keep pounding home this thing about the death threats as a "systemic" male problem rather than holding the individuals who do it responsible as an anomaly. The simple reason for that is they are not considered an anomaly but typical men.
That's why Tor.com ran this article. 5 women buried with swords turn into WARRIOR WOMEN. A single crusader archer I mentioned is PROOF of a medieval cover-up of WARRIOR WOMEN. And by the way, uncovered by noble Muslims, because all Muslims are noble and all Christian Europeans suspect. Why? In feminist theory white men are racists, by default. Have you once heard of the "straight black man" or the "straight Arab man"? In intersectionalism, they literally don't exist, and the reason is that the term is considered a pejorative. Christianity is the only religion attacked in feminist theory.
The way law works, if a man hits a woman he is prosecuted. In feminist theory, men hit women. That's why they're going after the Ravens' Ray Rice right now. Rice isn't just one man, he is all men. These death threats are not made by wacky individuals – it is "misogyny" embedded in OUR culture. Get that? Social Justice Warriors like Penny and Sarkeesian have no interest in talking to actual real men with names who do a thing. All men are guilty of either creating and promoting misogyny or not speaking up against it. It is the black hole of a circular kafkatrap like "white privilege" that is purposefully set up so there's no escape.
Feminists have no interest in a court room, only legislatures. That should tell you everything you need to know. They have no use for individual prosecutions of rape. That's why colleges across the country are robbing individual male students of due process at the behest of a feminist drive; men are half-guilty just by existing – guilty until proven innocent because of "rape culture."
So yes, she absolutely was talking about death threats as a general thing – a general thing that is somehow smeared onto me and not her. But that is not the way law works, and I have no interest in listening to subtle smears that enroll me in a crime I have nothing to do with.
> There are no bad individual men in feminist theory.
> Feminists have no interest in a court room, only legislatures.
I figured we were done here, but this is too much for me to just pass by. I wonder if you can see the obvious problem here? In the first you criticize "feminists" for seeing men not as individuals but as a whole morally equivalent class. And yet you yourself do exactly the same thing to feminists in your second statement. So which is it?
Perhaps it is particularly noticeable in this thread since I am apparently the representative "feminist" and yet I have stated three or four times explicitly that I am not seeking government action, which seems rather in contrast to your second statement above.
Apparently you can't see the difference between Nazis and Jews. One is an ideology, one is something you are born as.
It is the same difference as the KKK and black folks.
One is not born a radical feminist; it is an ideology. There is no such thing as racially or sexually defaming an ideology. There is no reason for me to view people who join a club with a specific ideology as simply individuals. Feminists are not just "the gals." Men on the other hand do not comprise a club. Saying they do and stipulating it is a conspiracy to discomfit women is defamation. Feminist ideology pretends it's just "women" and just being a guy is an ideology. It is a mad hatter upside-down ideology.
As for your personal rhetoric, it has been feminist cant insofar as it subtly attaches me to this gamer-culture threat brigade while you subtly detach yourself. In truth your complicity is the same as mine: none.
> Apparently you can't see the difference between Nazis and Jews.
Normally, I'd just invoke Godwin's law here and ignore you, but, ridiculous baggage-laden metaphor notwithstanding, there is enough there to merit an answer.
Feminism is not a tight ideology like Nazism, it is a broad collection of ideas centered around the general principle that women should be treated equally in all aspects of society. What that means and how that is interpreted varies a lot, and so feminists are no more a unified moral mass than men are, in fact many men are also feminists.
One does not "join" feminism. I don't have a membership card. I am a libertarian too, but I didn't join any libertarian group, I just advocate libertarian views as I advocate the equal treatment of women in all aspects of society. "Equal" is a word that also has a lot of baggage and nuance attached to it, and I don't intend to pick apart the minutia of it here.
FWIW, I also advocate the equal treatment of men in all aspects of society, and find some of the sexist attitudes of the courts in divorce proceedings particularly repugnant. As I have said several times in this forum, I think the rape of men is a huge moral outrage that may exceed the levels of rape of women, and it is largely ignored since it mostly happens to people we consider "bad".
So I'm afraid your dissonance stands unresolved, and it should really have been apparent to you, since I am an obvious counterexample to the very specific accusation you made.
Fair warning, if you continue with the Nazi metaphor then I play the Godwin's law card. I find being compared to a Nazi rather unpleasant.
As best I can tell, you're relatively new here, and seem to be enjoying your jousting match with Jessica. She's no pushover, which I suspect is a big part of the thrill for you. However, you should know that Jessica is not a doctrinaire Feminazi, nor part of a secret cabal out to castrate all men as part of some ideological imperative. You're going to have to look elsewhere for that kind of combatant.
As you can tell from my posts here and on other threads, I am no fan of the modern Feminist Movement; however my main criticism is that a war between the sexes is absurd, unnecessary, anti-evolutionary, and downright stupid. No matter how hardcore the radical feminists may become, the right woman is still preferable to the left hand.
Gonna have to concur with Jessica's assertion that feminism is too broad to generalize about, beyond being any claim that, at least on the surface, favors women. Given that, I think anyone wanting to say something about feminism, pro or con, would necessarily have to preface it with the specific variety they're referring to.
Corollary: I'm not very keen on judging a group by its worst. The gun rights crowd, for example, gets plenty of shrapnel from some sort of nebulous "gun nut" attribution that apparently only applies to a precious few individuals and moreover shifts depending on the speaker. I find that kind of projection tiresome, and I think this crowd is smart enough to be more scrupulous about it.
(I'm probably coming off as sniffier than usual about this. For the second time in about three months I've met someone ranting about not being able to discuss gun control with "rational" people only to dig deeper and find them holding a belief irrationally. Yeah, yeah, I probably need to get out more. Not your problem, and this is a topic drift as it is.)
What's happening here is you are so far out of your depth, and know so little, that you don't even know what is relevant and what's not. So much so that you boldly venture to rebut me on the basis that you don't know any of the salient facts. I take your silence on your total misunderstanding of my post as an admission that you got it wrong. What a desperate little attempt to "even the score" with me. lol.
Let me explain to you the real reason my thoughts always appear "negative" to you. First, it is because you are such a small intellect that it's like a microbe living on the surface of a sculpture in the making; all you perceive is random destruction of your surroundings. Second, that insofar as I am actually sometimes "negative" it is because I am so overwhelmingly "positive".
I also find children don't understand everything I say. But as with you, it is not mutual.
Godwin's Law is an empty meme I pay no attention to.
I was not talking about Nazism but an intellectual and philosophical space that in principle supersedes such considerations, which is why I used it.
I am not talking about feminism A to Z, but specifically about Third Wave Intersectionalism, which is a racist cult. One certainly does join that ideology by subscribing to it.
There is no dissonance on my part and I never compared Jessica to a Nazi. I used it to show being German isn't being a de facto Nazi. Being a Nazi is. Similarly, being a male white supremacist isn't simply being white and a man, which is the contention "white male privilege" makes.
As for TomA's remark about a "thrill," I have no idea what that even means. I have never said Jessica is a Feminazi. I questioned her specific rhetoric.
As for Paul's remark, I agree feminism is too broad to generalize. That's why I don't do it. However in not doing it, I get lit up for talking about "intersectionalism," which is at the root of all this, not a woman's right to vote. They are racists – period.
For anyone who wants a short sharp education, I will once again refer you to The Other McCain. Or check out Feminist Wire. I have read it extensively; they are insane. They are not feminists though they use the name. They are a largely gay, non-white and women's supremacist racist cult. They are intersectionalists. You may not know the rules, but the rules know you. That's why any too-white too-male list of classic SF gets lit up by these morons, which just occurred on Twitter over a list at a site called PopSugar.
At this point, I'm not sure that Fail isn't at least partially trolling, but one of the most substantive issues is that "feminism" is always conveniently undefined and a juicy target for equivocation. Jessica's a "feminist" in the sense of, e.g., demanding equality before the law, but not to the extent of imputing collective guilt, and is generally coherent and consistent in her positions, which makes her a lamentable anomaly.
I am not trying to spar with anyone. I am presenting information that is true and real and relevant to this post. I will say this again: you may not know the rules but the rules know you. These are not First Wave 1900-60 feminists. These are not 1960-90 Second Wave feminists. Third Wave Intersectionalists specifically reject those waves as too white and too heterosexual. Let me give you a quote about what forms of oppression concern these people:
"…age, attractiveness, body type, caste, citizenship, education, ethnicity, height and weight assessments, immigration status, income, marital status, mental health status, nationality, occupation, physical ability, religion, sex, sexual orientation,"
Do you know where I got that quote from? It's from a PDF the ex-president of the Science Fiction Writers of America John Scalzi linked to on his blog as he asked us to "please bone up on the concept of intersectionality."
I might ask a few people here to do the same and stop acting like I'm pulling this stuff out of thin air. Every Hugo and Nebula winner this year is either an intersectionalist or supports it. That is not an opinion but a cold hard fact I can easily support by way of quotes. I consider it "sparring" to deny that without presenting a counter-argument showing I'm wrong. And please tell me in what world I am trolling in any way at all.
"Third Wave Intersectionalists specifically reject those waves as too white and too heterosexual."
Could you give us some statistics on the importance of the subsection of Third Wave Feminists you object to? I have never met a "feminist" that embraced these visions. Are they a US specific breed? Do they matter at all?
For all I know they could be as influential (and obnoxious) as UFO spotters and Chemtrail believers.
Here are just the statistics to shed some light Mr. Winter.
http://www.readingrockets.org/article/what-research-tells-us-about-reading-comprehension-and-comprehension-instruction
I fail to see how this is relevant to this discussion. So, I assume it is some kind of joke implying that I am unable to grasp the meaning of your writings.
It is entirely possible that other people understand how the women you are demonizing are a danger to civilization as we know it. But from the comments I have read here, these people seem to keep that understanding to themselves.
And I am still in the dark about the real threat posed by these demonic witches that you claim are undermining the good old USA, or civilization as we know it.
You don't see how mainstreaming and institutionalizing hate-speech is a threat?
If I understand you correctly, within the world of SFF authorship, there is a group of women who espouse extreme anti-white-male rhetoric and have seized control of the SF awards apparatus, which I assume is a personal attack on you. Consequently, you are playing the role of the canary-in-the-coalmine in order to warn the rest of us here at A&D about this emerging scourge, and perhaps recruit some new blood into the resistance.
Jessica wants me to join her grassroots movement to go after the nutjob cyberbullies (a cause I suspect is personal for her) and you apparently want me to join your lynch mob to go after the SFF crazies that are ruining your playground. I think the two of you have more in common than you realize.
Not to rain on either your parade or Jessica's, but as a country, we are in far deeper shit in other macro venues than the impact of two fringe nutjob groups.
No Tom, you don't understand me correctly. That's because you don't understand who created the Ferguson uproar, why the DOJ is now involved and who keeps any opposition to illegal immigration branded as racist. SFF is only a symbolic microcosm. In and of itself it is of no account. Go read The Other McCain stuff. If you want to respond to my words, please read all of them, not just the parts that somehow keep you off The Feminist Wire and The Other McCain. Otherwise, do as you want. I'm not recruiting anyone. If you don't care if your son loses the right to due process in college cuz some radical nutjob who he rejected knows she can destroy his life with a simple accusation, then what do I care?
OK, we agree that there are serious macro problems in the country. You seem to be implying that it is a "radical women's cabal" that is at the root of it. If so, I disagree. There is plenty of blame to go around, as we did not dig this hole this deep just by allowing a few crazy women to run amok. As I have argued elsewhere, as a country we have been too affluent for too long, and we no longer have strong hardship drivers to push our evolutionary advancement. It's not just that our women are more combative and disruptive nowadays, but that our men are also becoming overweight limp-dick metrosexuals. You need to back up a few steps and see the whole battlefield.
> Godwin's Law is an empty meme I pay no attention to.
Actually, it isn't; it is quite a useful rule, based on a powerful insight.
Any metaphor carries a whole bunch of baggage with it beyond the intended point of comparison. It is a rhetorical technique people use all the time to try to imply that the similarity on the point of comparison implies similarity on other points. Hence people skeptical of the CAGW theory aren't CAGW skeptics, but climate change deniers, to deliberately put them in with holocaust deniers, or people who advocate the RKBA aren't second amendment advocates but militia gun nuts, to tie them in with the likes of Tim McVeigh.
Instead of "you can't see the difference between Nazis and Jews." you could have said "you can't see the difference between "Irish dancers and Irish people" or "habitat for humanity volunteers and Cubans" or even "herpetologists and rattlesnakes". But you instead chose a point of comparison that portrayed one side with the some of worst people in the world (Nazis and the KKK) against people who are the most oppressed (Wartime Jews and Jim Crow era blacks.)
I don't know if you did this deliberately as a rhetorical device, carelessly, or if it came from a subconscious belief about the two sides; either way, it doesn't reflect well on your argument.
Godwin's law exists to provide a Schelling point for when this technique has been taken to its extreme, and it is a pretty useful tool for weeding out mendacity in arguments.
> Let me explain to you the real reason my thoughts always appear "negative" to you.
Perhaps my tiny mind didn't explain it very well, but my assessment is that you are like that with everyone here. Honestly, I am a little pissed about that, because I'd rather it was just me, that way I'd feel special.
> First, it is because you are such a small intellect that it's like a microbe living on the surface of a sculpture
So if it is everyone here, if everyone's brain is the size of a gnat's, what my silly brain wonders is why you hang out here and spend so much time? I mean if we are all a bunch of drooling idiots why bother? Surely there are some intellectual giants out there that can offer you true stimulation for your capacious brain? Is it the same fascination as that little kid who pulls the legs off spiders or cremates them with a magnifying glass? The schadenfreude of the intellectual god tormenting his peon-like inferiors? That isn't a very intellectually healthy pursuit, don't you think? Wouldn't being a benevolent god be a happier way to go?
I'm not talking about Irish dancers but a philosophical and intellectual shared space. Before Nazis or the KKK can start oppressing people they must have an ideology. In larger principle that ideology is simple hate. In its specifics it has its own targets. In both cases it requires more than simply looking like someone who sends off people to camps, hangs them, or makes death threats.
I have no problem comparing intersectionalists to neo-Nazis cuz that is the ideology they most resemble. Unless you think the KKK only started to be a problem when they started hanging people, it's important to realize hate-speech must first be mainstreamed before it can get enough people to act in concert. Radical feminists are sick and crazy people. Demonizing them is no more demonizing women than demonizing the KKK is demonizing men.
I did not use metaphor but stipulated that intersectionalism is supremacist and racist. In its bare bones it has been around 50 years, but it is only in the last 5 or 6 that it has really started to gain acceptance in our institutions as a morality one defaults to. Everything is whites and men nowadays – they are the new Jews within this ideology of hate.
Godwin's Law is an empty meme used to shut down legitimate comparisons. When those comparisons are not legitimate, I hardly need a law to tell me Bush isn't Hitler or that climate change deniers don't reside with mad hatters who think millions of people committed mass suicide or never existed. There is an opposite problem Americans have with this idea, and that is the assumption that a thing like Nazism will only ever have one specific form and face. That is why I constantly mention Orwell's 1984. He not only warned us of crazy dictatorships but that they would always insinuate themselves with a different face. We already have a situation in America where it is now broadly assumed non-whites can never be racist, women are such darlings they should be believed just for being women, and that gays never hate straights. What's stupider than that, or more supremacist? And the result? By an amazing coincidence we have a free-fire zone where women, non-whites and gays say the most amazingly bigoted things without consequence that would get a white straight man fired in a heartbeat, and do.
The Huffington Post has "black voices" and that's just fine. I'm happy there is no such thing as "white voices" but if anyone even tried it, they'd be fitted with a hood. Mainstreaming racial and sexual double standards is dangerous. Eventually they find their way into law, and that is exactly what is happening in America today.
Do I spend a lot of time here? I did once, certainly. Now I drop by a few times a month, if that. And the premise of the question is bogus; the greatest brain would have a hunger for small intellects too, because it is able to tease useful information from everything. And I am benevolent.
Fail Burton
> I'm not talking about Irish dancers but a philosophical and intellectual shared space.
OK, I am very sorry @Fail, I plainly misunderstood you. I assumed you were doing the usual hyperbolic comparison to the Nazis, but apparently you were not. You were making a literal comparison.
Is it your view that the Feminists (whom you equate with the Nazis) are planning on rounding up all the men (whom you equate with the Jews) to send them to death camps in a kind of feminist final solution? I sure hope we freeze enough of your sperm before we do that.
Is it your view that the Feminists (whom you equate with the KKK) are planning on lynching any men (whom you equate with Jim Crow era blacks) who get out of line? Are we going to cut off their balls (literally not figuratively) if they look disrespectfully at one of us? Are we going to have them eat at separate counters or sit in the back of the bus? Hah! We already demand we have separate restrooms!! It has started already! Call Dr. King, perhaps he can help you poor oppressed men.
Perhaps you don't think that the Feminists are planning on those specific actions, but by the comparison, and by your dismissal of my "you're overplaying your hand" argument, you are claiming that they are leading toward something at a similar level of outrage. If you think the situation of the white American male resembles, or will any time soon resemble, in any serious way, whole populations being marched to the death camps, then I really don't know what else to say. The problem with reductio ad absurdum is that there is always someone who will actually accept the absurd premise.
Jessica, while I think your examples of lynching and castration aren't on the table at this point, false imprisonment–possibly including complete financial ruin and unjust incarceration for decades–is a serious issue because of rules of evidence in place right now that make it impossible for men accused of rape to mount a thorough defense. The Obama administration has effectively ordered universities to implement guilt-by-accusation policies for sexual misconduct, defined so broadly as to include kissing without getting explicit, spoken, and "enthusiastic" consent first. These kangaroo courts have already started expelling men in hearings where they are explicitly forbidden to introduce evidence or cross-examine witnesses.
I just figured out that evil Amazons conquering Earth and enslaving all the white males is an SF trope. Now I understand how all this fits together. I think I'll wait for the movie.
@Christopher Smith
> These kangaroo courts…
I'm with you on that, Christopher. There is a lot wrong with the rape laws that needs fixing, and the college thing is an outrage. Why in rape cases is the putative victim's identity protected but not the accused's? After all, the accused is supposed to be innocent until proven guilty.
So given that we agree on that, will you join me also against the equally dreadful situation suffered by many men who while incarcerated are subject to constant ongoing rape with apparently no real recourse? 25% of men by some accounts. In fact, in many respects the public does not only refuse to demand this be fixed, they even think it is a good thing under the title of "prison justice".
And you do not seem to realize that blaming the downfall of civilization on demonic women has a very long history. At times, such women were burned at the stake.
Whatever is wrong with the world today cannot be attributed to women who made a pact with the devil, or with the Third Feminist Wave.
I must admit that this widespread acceptance of prison rapes gives me a sick suspicion about how many people in the USA might feel about rape outside of prisons. Too often I encounter responses along the lines of "boys will be boys".
@Christopher Smith
"These kangaroo courts have already started expelling men in hearings where they are explicitly forbidden to introduce evidence or cross-examine witnesses."
From across the pond, the USA legal system looks completely dysfunctional. That is not limited to rape laws, but is found across the board. So people should stop blaming women for their pet legal peeves and start focusing on the criminal system itself.
Jessica, if you want to know the goals of these people, you need only read them. bell hooks' favorite phrase is the "white supremacist capitalist patriarchy." She has no interest in being equal within that but in tearing it down. It's no coincidence these folks are fighting for illegal immigration to accomplish that and openly brag about whites becoming a minority by 2050.
I am not claiming these radical feminists have any violent plans for men or whites. This is a case where the meek shall inherit. They work through institutions and legislation. If you think the Jim Crow analogy is inapt, remember what is happening in SFF as an example. Remember, one cannot have a thing like a new Jim Crow in America without it looking like a completely different thing and working in completely different ways (remember Orwell).
Is there an intellectual and de facto analogy to Jim Crow in America? Absolutely it is beginning. There is a racially segregated "safer-space" at the WisCon SF convention for non-whites only – an unofficial non-white dinner too. There was a non-white gathering at the recent DetCon. There are black SF symposiums and black SF associations and there are countless manifestations of this across America. The Congressional Black Caucus, the Asian-Pacific Congressional Caucus, the Assoc. of Black Mayors, Police Chiefs – the list goes on and on. Meanwhile, there is zilch on the other side – the so-called white supremacist side. Remember, a simple racial majority isn't the same thing as a racial ideology. An ideology is an ideology, not a skewed demographic.
I don't believe there is any analogy to what Nazis did specifically. Remember, I am talking about an intellectual space, one of identity supremacy and defamation. What comes out of that will be determined by the people who reside in it. I see no violence from this crowd and I don't maintain there will be any. What need is there for that when they have the DOJ to sue police forces across America for improperly asking illegals for identification or having colleges rob men of due process due to the sexual smear of "rape culture," in which all men are enrolled the same way all are enrolled in "white privilege." And look who these folks go after. If the idea of "diversity" is a good one, surely it is good for all, but they never target skewed non-white, non-male demographics.
The reason the SFF scene is so interesting is because it is a microcosm of what they would do in larger America. In SFF, they do in fact discriminate against whites and men, they do advocate literature by skin and sex, they do kick white men out of organizations for the exact same racially charged rhetoric they give black women awards for.
Remember what SFWA member and multiple award-winner Mary Robinette Kowal Tweeted after the Nebulas this year: "At @SFWA's #NebulaAwards, only one award went to a white male and that wasn't one of the ones voted on by the membership.#diversityinSFF"
Why would she do that? Because SFF has been an anti-women KKK? For a mere skewed demographic for decades? Reacting like that is itself sexist, racist ideology. And why not then go after middleweight boxing and the NBA? Aren't those then racial conspiracies rather than coincidental demographics?
As for Mr. Winter, he is reduced to straw men like "demonic women" and "the world today" because he will not do the simple research I have pointed him to. I have never maintained either. Women do not equal an ideology, and these are not witches any more than the KKK are warlocks. These people exist, you need only read their words, not mine. I am not asking anyone to believe anything I say, but to do your homework and come to your own conclusions. But don't just go to Wiki, look up "intersectionalism" and then expect to debate me. When you know who Mikki Kendall, Suey Park, Adele Wilde-Blavatsky, Lauren Chief Elk and the 80 radfem academics who signed a petition for the Feminist Wire denouncing one of their own as a "racist" (for which she was expelled from the Wire) for the simple act of saying a "hoodie" is NOT the same as having to wear a veil in Islamic societies are, you'll understand how frothing mad, racist and intolerant these people are. Read this:
http://theothermccain.com/2014/09/01/kate-milletts-tedious-madness/
and Google "The Hounding of Adele Wilde-Blavatsky."
@Fail Burton:
> if you want to know the goals of these people, you need only read them.
My vague understanding from what has been written here is that the institutions of sci-fi have been co-opted by angry women who hate me just because of my genes. Yet, my vague understanding is also that the science fiction that is selling well does not come from that quarter.
Kinda sucks for those who formed the club and then later got kicked out of the clubhouse, but not really relevant to my daily life, and apparently not even really all that relevant to the bread and butter of those who write good sci-fi.
As for Mr. Winter, he is reduced to straw men like "demonic women" and "the world today" because he will not do the simple research I have pointed him to. I have never maintained either. Women do not equal an ideology, and these are not witches any more than the KKK are warlocks.
Do you really think that Winter thinks there are witches, or thinks that you think there are witches?
I'm sure I could pick any possible goal, and then find somebody who espouses it, and then read them. Not interested, and Winter probably isn't either. As Winter says, "So people should stop blaming women for their pet legal peeves and start focusing on the criminal system itself."
It's hard to characterize your opponents as frothing at the mouth without frothing at the mouth. It's really hard to do that when people haven't even heard of your opponents. I think I understand the frustration — you apparently think "here is a source of a meme which is going to destroy us all" and yet we're not comprehending because we haven't even heard of that source and don't understand the hold the meme apparently has on us.
But the source of the meme isn't really important, and if you have to focus on the source, or even worse, explain who the source is and why you are sure we are being corrupted by them even though we've never heard of them, to get your point across, you've already lost.
@Jessica:
To one of Fail's points, I think, we should try to be cognizant of the biases of the people we get our information from. The article you linked to earlier was by Amanda Hess and she's…
Well, let's just say that if you want to see a good analysis of some of what she says, you could do worse than to google for things that have been written about her at the simple justice blog.
She was a very vocal proponent of the Orwellian-named "yes means yes" law in California, and the simple justice blog does a good job of analyzing that law, too.
It's becoming obvious that you have a deep, personal wound arising from this battle of the sexes within the SF community. And it may well be a foreshadowing of a memetic plague that will soon sweep through our society, so your warning is not trivial. May I suggest another tactic however.
In real combat, there are principles that history has validated. One of these states that, when ambushed, immediately attack directly into the threat.
As a writer, prepare yourself a vaunting libertarian speech a la Ayn Rand, and then stiffen your back and march yourself into one of those anti-white-male SF convention gatherings and give them all you've got. Win or lose, I guarantee you will emerge tougher, stronger, and better equipped to face future hardships.
Just one point, to Winter's comment about the American criminal justice system as applied to the current college persecution of males:
It's not criminal at all. It's not at any point connected to the criminal justice system. In fact, it's not connected to the legal system at all. The federal government is mandating that colleges run internal kangaroo courts where the woman's word is taken as ironclad gospel and the man has not even the most minimal due process rights. A woman can accuse a man of rape and have his life destroyed solely on her own word.
So no, whatever you think of the American legal system, this is worse. And it's entirely a creature of feminists.
If I had a son, I would strongly advise him not to have sex at all while in college without a signed, notarized consent form spelling out exactly what acts were being consented to, step by step.
Patrick Maupin
> She was a very vocal proponent of the Orwellian-named "yes means yes" law in California
I often wonder if the people who advocate some of these laws and rules have ever actually had a sexual encounter, since their viewpoint seems so far removed from reality.
However, much though the law might be stupid and dangerous, I don't know if "Orwellian" is an appropriate adjective. 1984 was about the massive use of propaganda and a panopticon state to manipulate and control people. This law is about changing the definition of consent in a para-legal setting. Men can certainly abide by this stupid rule if they want, and it won't be good, but it will presumably change the behavior of horny women eventually. And I suppose they can leave the college girls to their own devices and find willing ladies off campus too.
Just as a comment though about all the outrage over these campus codes, let's be clear they are almost entirely about welfare. All these rules are put in place as a response to a requirement from some government agency demanding that change under threat of withdrawing welfare money. What we need are colleges that get their funds from private individuals paying for their kids to have an education or grants to produce specific research. This would fix both this problem and a plethora of other nonsenses that happen in our ivory towers.
With this change we no doubt would have a diversity of colleges and parents could send their boys to places where such terrible injustices were less likely to happen.
So given that we agree on that, will you join me also against the equally dreadful situation suffered by many men who while incarcerated are subject to constant ongoing rape with apparently no real recourse? 25% of men by some accounts.
That figure is nonsense. To see that, check the Bureau of Justice Statistics Survey of Sexual Violence in Adult Correctional Facilities, 2009–11 – Statistical Tables, NCJ 244227. (Links to PDFs seem to rathole my comments, and the BJS site is not very user-friendly, but if you google "NCJ 244227" it shows up on the first page.) The tables start on page 7 – though important definitions are right before. They were looking at state and federal prisons with about 1.4 million prisoners…who in 2011 reported 2,002 "nonconsensual sexual acts" by other prisoners, of which prison officials "substantiated" 133 (not all nonconsensual sexual acts are rapes, by the definition they use). They looked at local jails with about 350,000 inmates…who reported 615 "nonconsensual sexual acts" of which prison officials "substantiated" 54.
(This is separate from complaints against prison officials….which are in the same tables…but they report "sexual misconduct" instead of "nonconsensual sex acts"…the numbers are similar.)
You can also try the BJS's anonymous surveys of prisoners…NCJ 241399….which of course has no screening for "substantiation" by prison officials…but even there only about 1% of prisoners claimed "they were forced or pressured to have nonconsensual sex with another inmate, including manual stimulation and oral, anal, or vaginal penetration" (with, again, a similar number making accusations of "abusive sexual contact" against prison staff…I once asked a coworker who'd been a prison guard about that; he tells me this comes largely from frisks, since the prisoners will squawk if they think you're searching their groins too closely). As that report says: "Since participation in the survey is anonymous and reports are confidential, the survey does not permit any follow-up investigation or substantiation of reported incidents through review."
The smaller numbers make sense to me because I've had court-martial clients in local jails and military prisons…and the military clemency system is quite robust (I had one guy get 10 days off his 90-day sentence because he'd been denied phone calls with his wife)…it would be hugely in their interests to report it if they were being raped; not a one ever did; and I think it's because it wasn't happening, not because it's happening all over the place and no one reports it.
Long ago I asked where you were getting the numbers you quote, and you pointed me to a website that pointed back to the Bureau of Justice Statistics reports…but I couldn't reconcile the one with the other. I think you've been tricked.
One is too many, and it oughtn't to be laughed off the way it sometimes is, but "25% raped" is the wrong order of magnitude, and there's no basis in the figures for the "ongoing…daily basis" language you quote.
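To put numbers on the order-of-magnitude point, here is the arithmetic on the counts quoted above (the percentages are mine, computed from the BJS figures):

    # Rates implied by the quoted BJS figures (NCJ 244227, NCJ 241399).
    prison_pop = 1_400_000   # state and federal prisoners covered
    reported = 2_002         # inmate-on-inmate nonconsensual sexual acts reported, 2011
    substantiated = 133      # of those, substantiated by officials

    print(f"reported:      {reported / prison_pop:.3%}")        # 0.143%
    print(f"substantiated: {substantiated / prison_pop:.4%}")   # 0.0095%

    # Even the anonymous-survey figure of about 1% of inmates falls a
    # factor of 25 short of the "25%" claim.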
However, much though the law might be stupid and dangerous, I don't know if "Orwellian" is an appropriate adjective.
Apologies — I wasn't clear. I was applying "Orwellian", not to the law, which is bad enough, but to the joyous characterization of it as "yes means yes."
The law itself states that consent must be ongoing and may be withdrawn at any time. This, actually, I have no problem with. There are certain times where it would be kind of mean to withdraw consent, but if agency means anything, then of course, consent can be withdrawn.
I suppose "yes means yes until I change my mind and tell you no (but of course you already knew that no means no, right?)" isn't as catchy, but still, I think Orwell would be proud.
"As for your personal rhetoric, it has been feminist cant as far as subtly attaching me to this gamer culture threat brigade"
Once again, who the hell brought up gamer culture? You are imagining a connection to the Sarkeesian thing where she made none.
@Jessica: I'm completely agreed on the appalling atrocity that is the prison system.
@Winter: This isn't about the US legal system per se, which has plenty of serious issues, but only to about the same level of outrage as perpetrated by various EU courts (superinjunctions, the "right to be forgotten", and the continual circus that is Italy). These are university tribunals.
If you can deliver a speech in the style of Rand without experiencing either a giggle fit or pangs of horror at what you're doing, you must be extremely strong already. Either that or extremely cold-hearted.
@ Jeff Read – "If you can deliver a speech in the style of Rand without experiencing either a giggle fit or pangs of horror "
I don't find either humor or horror in the passionate defense of individualism. I even support your right to be an eager ant in the collective, if you so choose.
Learn to read, Random. She dropped a link about women being harassed and threatened on the net. Sarkeesian's the big story now, so's gamergate. I know that's a stunning leap but try it on for size.
> I don't find either humor or horror in the passionate defense of individualism. I even support your right to be an eager ant in the collective, if you so choose.
You're defending the substance of her speeches.
I think Jeff was speaking to the "style".
How many pages of the John Galt speech did you actually read before self-abridging?
The style and the substance. Rand's writing is tedious, but perhaps it is deliberately so, as the thesis is fundamentally psychopathic.
Look, in the USA we've already experimented with really free markets and laissez-faire capitalism. It was called the Gilded Age. A time when the poor struggled to make ends meet and children were forced to work 14-hour days to help their families survive, leading to the eventual formation of labor unions to combat the exploitation of labor. Meanwhile, Cornelius Vanderbilt set a large tray of sand on his enormous banquet table and passed out tiny spades to his dinner guests. The guests — wealthy high-society types themselves — used the spades to dig out precious jewels from the sand, which they got to keep as mementos.
Such was the vast disparity between the opulence of the rich and the struggles of the poor. What Rand advocated amounted to a return to that. Sure she, as do modern libertarians, dressed it up with nice language about individual initiative and freedom. But at its root it is a carefully crafted, corporate-sponsored meme complex intended to convince you that what's good for Corporate America will be good for you. And now thanks to influential Rand fans like Alan Greenspan and the Koch brothers, we may yet see a return to those days.
It was called the Gilded Age. A time when the poor struggled to make ends meet and children were forced to work 14-hour days to help their families survive.
Implying, as usual, that Cornelius Vanderbilt stole an idyllic existence from these poor who before only worked 8×5 on their farms.
No question that the Gilded Age had its share of injustices, particularly in the failure to fix responsibility for worker injuries, but the bald fact that families poured into the cities to take up these jobs, dangerous and demanding and low-paying as they were, demonstrates that people in a lot better place to judge the trade-offs than you are thought they were an improvement over what they had.
And we're still seeing repeats of this mass migration in industrializing economies around the world. "Sweatshops" may not be great workplaces by Western standards, but they're apparently preferable to rice paddies, and a funny thing happens in these economies: Once the industry scales up, incomes start rising rapidly, and there's a tipping point where more qualitative issues such as worker safety and a pleasant environment (in all its senses) come to the fore in a way they can't when subsistence farmers see no other option than leveling rain forests.
"But at its root it is a carefully crafted, corporate-sponsored meme complex intended to convince you that what's good for Corporate America will be good for you."
Opposed by a carefully crafted, leftist-sponsored, Soviet-originated meme complex that actively denies corporations as the engines of the economy.
@ Jeff Read – "Rand's writing is tedious, but perhaps it is deliberately so, as the thesis is fundamentally psychopathic."
Ayn Rand fled Soviet Russia as a young woman, found freedom and liberty in the United States, and wrote novels that to date have sold more than 40 million copies. Her contemporary, Josef Stalin, ruled the Union of Soviet Socialist Republics and was the most prolific murderer of the 20th Century. Yeah, I guess you're right, Rand was the psychopath and your hero was the sane one.
I don't get it. Are you arguing that Stalin wasn't successful, or that Rand wasn't psychopathic enough to make a good CEO?
in the USA we've already experimented with really free markets and laissez-faire capitalism. It was called the Gilded Age.
The robber barons, like Vanderbilt, were not capitalists who thrived at others' expense in a free market with no government intervention. They were capitalists who failed to compete in a free market, and so switched to plan B: buy government intervention to protect their companies from competition. They then thrived at others' expense in the non-free market that ensued.
Interestingly, Peter Thiel is busy arguing that competition is for losers and monopoly is the way to go:
http://online.wsj.com/articles/peter-thiel-competition-is-for-losers-1410535536
I should have stopped reading when he conflated created value with revenue in the second paragraph…
@Peter Donis
"They were capitalists who failed to compete in a free market, and so switched to plan B: buy government intervention to protect their companies from competition. They then thrived at others' expense in the non-free market that ensued."
This is a very naive vision of capitalist markets. The whole "Free Market equilibrium" is based on the assumption of no barriers to entry for new competitors and no positive feedback loops (the Matthew effect).
Neither of these conditions holds in industrial economies. For instance, starting a new steel plant or oil business is hard, and an existing brand will attract more customers and investors than a new brand. Also, a monopolist can effectively tax the nation.
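To illustrate that feedback loop in the abstract, here is a toy rich-get-richer simulation (a sketch only, not a model of any real market):

    import random

    def matthew_effect(firms=10, customers=100_000):
        # Each new customer picks a firm with probability proportional to
        # its current share; an initial share of 1 seeds new entrants.
        share = [1] * firms
        for _ in range(customers):
            winner = random.choices(range(firms), weights=share)[0]
            share[winner] += 1
        return sorted(share, reverse=True)

    random.seed(42)
    print(matthew_effect())
    # A couple of firms typically end up with most of the market, even
    # though no firm in the model offers a better product than any other.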
Historically, those robber barons became successful and very rich first. As a consequence, they became extremely powerful and were able to use their power to drive out competitors from the market. One part of their power was the ability to co-opt the institutions of the state.
This was no different from how Microsoft and Intel were able to drive out (almost) all competition in computers.
From "free markets and laissez-faire capitalism […] a time when the poor struggled to make ends meet and children were forced to work 14-hour days" to "Josef Stalin [..] the most prolific murderer of the 20th Century […] and your hero" — in not more than 3 posts …
" …to Josef Stalin [..] the most prolific murderer of the 20th Century"
Is there a Stalin analogue to Godwin's law?
>It's hard to characterize your opponents as frothing at the mouth without frothing at the mouth. It's really hard to do that when people haven't even heard of your opponents. I think I understand the frustration — you apparently think "here is a source of a meme which is going to destroy us all" and yet we're not comprehending because we haven't even heard of that source and don't understand the hold the meme apparently has on us.
This whole thread seems to be deserving of a 'tl;dr'. (Not talking about you specifically here.)
The SFF SJW thing is important insofar as the remaining institutions of conventional old-style publishing still have meaning. Because what this is, is a continuation of the Long March through the Institutions. These SJW types, these radical feminists (yes Jessica, they call themselves feminists, just like you call yourself a feminist, even though what they want is *not* what you want, and they do it on purpose as camouflage and to *use* people like you) and their allies and enablers have taken control of several of the remaining institutions of traditional SFF publishing. Like the SF writers' trade organization, and one of the larger SFF publishers.
This is the usual tactic of controlling the culture by controlling the gatekeepers of the culture's means of communication. I wonder if Tor (yes, the publisher) will survive?
>No actually you don't. For example, you said I was "nailed" because I had no idea who the two random people that that other guy brought up were. But I never brought these people up, I never discussed their work, and I know nothing about them, I have no idea if I agree with them or disagree with them. So how exactly does that mean I am "nailed"? You misinterpreted the thread entirely.
May I step in?
Most people in this thread, myself included, would be inclined to step in to defend Jessica from anyone attacking her. She is intelligent, honest and well meaning among other things.
But there is one persistent blind spot I have noticed her display on several different occasions, and it may be creeping up here. That is, she seems to have no concept of unknown unknowns, and makes insufficient allowance for the fact that she might be wrong, not because of bad reasoning, but because she just doesn't know enough about the subject at hand, and doesn't know that she doesn't know.
Good general rule is, subjects you don't know about are just as complex as subjects you do know about (and everyone here knows about some subjects that are real doozies).
>That is, [Jessica] seems to have no concept of unknown unknowns, and makes insufficient allowance for the fact that she might be wrong, not because of bad reasoning, but just because she just doesn't know enough about the subject at hand, and doesn't know that she doesn't know.
A common failing in people who are used to being the smartest one in the room.
@ Winter – "Is their a Stalin analogue to Godwin's law?
I could have cited Mao; he came in a close second. All the great mass murderers of the 20th Century are within your pantheon of collectivist political archetypes. The first rule of socialism is, "first kill all the productive individualists." There is no equivalent homicidal imperative in capitalism.
But the link she posted doesn't contain any suggestion that it's about Sarkeesian.
Sarnac says:
I just read through this entire thread
(and recorded lots of fascinating insights and thoughts into my amnesia files)
But I did not see anyone who mentioned that the 5 women warriors might have been buried that way not because of how they *lived*, but because of how they *died* …
i.e. those-who-did-not-live-as-warriors may have died in a heroic defense situation where they took up arms to defend their homes (and children), and presumably their side won because they were buried with honors.
Clearly, they did not die-as-warriors *offensively* because that would mean dying elsewhere and being buried way-over-there
(exception: wounded and brought back and dying of infection after weeks)
This is specifically answerable … lived-as-a-warrior-skeletons show seriously odd bone density variations-from-normal of the striking arm (or archer-arms) vs gardener bones …
if you see warrior-arm-bones, then they lived as warriors,
if you don't, BUT you see female-who-died-of-unhealed-sword-strikes, then she took up arms in -desperate-defense and was buried with military-honors
Random – so what?
Not that I know of.
Godwin's law describes a pattern that's a bit different. For example, in a thread about Microsoft's predatory business practices, eventually some gormless individual is bound to compare Microsoft to Nazi Germany. That's Godwin's law.
What we're seeing here is the tendency for libertarians or conservatives to blither, "b-but Stalin!" whenever confronted with a view of economics that's anywhere to the left of, say, Hayek. A form of the slippery slope argument taken to comical extremes. Not quite the same thing as the Nazi comparisons, but structurally similar. Never mind the fact that in developed countries with stricter economic controls than the USA, not only are there no meatgrinders, gas chambers, or gulags for "productive individualists", but people tend to enjoy more social freedom than in the USA. Never mind that they enjoy access to better schools, health care, municipal infrastructure, and lower crime rates. Never mind that their police aren't militarized jackboots who single out people of certain skin colors for special oppression. The spectre of Stalin and the gulag archipelago still taints their entire state apparatus.
Maybe there should be a Godwin analogue for this kind of argument. I suspect that there isn't because there are actually quite few free-market libertarians in modern discussion fora; most people online with any intellectual chops acknowledge the need for social welfare programs and market regulation.
>That is, she seems to have no concept of unknown unknowns
Thanks for stepping in, but I'm afraid that to me it is unknown what unknown unknowns that you know (or perhaps don't know) that I am supposed to unknowingly know but apparently are unknown to me.
@TomA > I could have cited Mao,
Stalin or Mao, that was hardly the point.
Jeff Read touched one of the sacred cows by noticing there were downsides to free market and/or laissez faire capitalism, and it earned him an immediate reply of ad hominem and, predictably, argumentum ad Hitlerum. Whether you cast Stalin or Mao in the role of Hitler is of no importance; you illustrated the mechanism beautifully, and I agree with Winter that this should invoke Godwin's law.
If you're looking for blind spots, you might also want to look at what Jessica says, and what the people who reply seem to assume she meant, or assume about her hidden agenda, and attack her for their assumptions rather than her words. Look at Random32's recent posts, they illustrate this.
That's a reasonable tl;dr and a reasonable question, but the next question is: does it really matter (other than for Tor investors and employees)?
There is a lot of gnashing of teeth about what google means by "don't be evil" and whether or not they are serious about it. My personal take is that they understand perfectly that their actions are destroying, or at least blunting the power of, the traditional gatekeepers, and that they are making a serious promise that they will do their best to screen content for relevance and do their best not to screen content for ideology.
I think they are actually doing a pretty good job of this, and that sucks if your business is running a link farm, or if you let your business be so co-opted by ideologues that you stop producing content that people find relevant.
@ kn – "Stalin or Mao, that was hardly the point."
Agreed, but I'm not the one who wandered down the path of absurdity and distraction.
My relevant post was a constructive suggestion for Fail Burton that happened to include an Ayn Rand reference which was appropriate to the context (spirited defense of the individual versus the collective tyranny of the SFF Amazons). Jeff Read took this off on a tangent, calling her a psychopath and raving about her politics and economic philosophy. I merely pointed out that Stalin was her contemporary antecedent and would likely have murdered her had she remained in the Soviet Union. Hopefully you and I can agree that murder is a real psychopathic behavior here. Why is it you collectivists are so defensive about your ideological track record? And no, I'm not claiming that Sweden and the Netherlands are bad places, just that when collectivism goes bad, history demonstrates that there are a non-trivial number of dead bodies.
Sarnac, they found some 9,000 year old body they called "Kennewick Man" and they were able to figure out every little thing he did even though the bones were scattered by erosion. They figured out he threw things with one arm a lot, how he worked lifting things towards him, what he ate.
I wonder if this Viking thing is too new or they just can't throw those kinds of resources at it like they did Kennewick.
"And no, I'm not claiming that Sweden and the Netherlands are bad places, just that when collectivism goes bad, history demonstrates that there are a non-trivial number of dead bodies."
Sorry, but both Sweden and the Netherlands are old countries with a "collectivist" culture (400+ years for the Netherlands). During that time, neither of them has engaged in mass murdering their own population.
On the other hand, anti-communist general Suharto murdered more than one million "communist" Indonesians in a well organized campaign. Mr Suharto was brought into power and supported by the USA.
I assume that the ambiguity of whom you refer to with "people" is intentional.
>That's a reasonable tl;dr and a reasonable question, but the next question is: does it really matter (other than for Tor investors and employees)?
I really really hope it doesn't, because I am very uncomfortable with the idea of people like the SJWs we've talked about in this thread being able to control what people are able to read, by controlling what is published.
For much the same reason, I'm not keen on traditional media in general.
>Thanks for stepping in, but I'm afraid that to me it is unknown what unknown unknowns that you know (or perhaps don't know) that I am supposed to unknowingly know but apparently are unknown to me.
Was that as hard to type as it was to read?
What I'm talking about comes across as trusting uninformed intuition over facts, reality and experience reflected in historical record. Seeming to display the belief that 'if I'm not aware of it, it's not relevant or important', or worse assuming 'if I'm not aware of it it doesn't exist'. Comes up every once in a while, and it does happen to everyone from time to time.
There's more than one type of feminist, more than one meaning of the word feminism, and that's deliberate. Some of those meanings of feminism, and the people who embody them, are not benign. In fact they try to hide themselves behind the more benign meaning of the word, making deliberate use of linguistic confusion, as protection and to draw support from the unwary.
Please excuse me if I'm wrong, I kind of skimmed some of this discussion, but it seems you've let yourself fall victim to that dishonesty and become one of the unwary I just mentioned. Possibly because you're too nice, too honest, and you project that honesty onto people who really don't deserve it. (Not knowing details about the nastier SJW 'feminists', and assuming if you don't know it then it doesn't matter.)
@ Winter
Aren't you conveniently neglecting to mention the bloody history of the Dutch colonization of South Africa? Or of the other Dutch colonies in the East Indies? Or of the period in your history when the Netherlands was ruled by aristocracy?
The Dutch did their share of atrocities. But the victims were not Dutch, but conquered people on the other side of the globe.
Your point was a government massacring their own people.
Greg, I suspect Tor is more than a little miffed Twilight, Potter, Hunger Games and now Divergent were poached right out of their back yard by non-genre competitors. Where are Tor's so-called expert editors in their field?
The agents and editors who groomed those into best-sellers and films weren't wasting their time on Twitter harassing straight white men about "privilege" while grooming no-talents like John Scalzi for nothing. At some point one of these morons will figure out that having authors who light up their audience as bigots will garner more awards than cold hard cash.
Although this may startle some people, SF fans don't read SF to hear daily radical feminism or inquisitions into their personal beliefs, their status as males, or their skin color. Imagine a football team doing that to their fans and then imagine a stadium emptying.
And Greg, your comment about words is directly to the point. Take the Nebulas and Hugos. 40 years ago those were watchwords for excellence in SF anthologies. Now those same words are watchwords for hate-speech, radical feminism and serial defamation. In short, all the literary air has been sucked right out of those words, just like the word "feminism" has been hijacked.
As an SF fan, I was introduced to The Hugo Winners and SFWA administered SF Hall of Fame anthologies and I loved them. Now I despise those organizations as mainstreamers of hate speech and shitty SFF. I certainly haven't changed, but the number of racist award-nominated SFF authors on Twitter each and every day complaining about misogyny this and privilege that has, and it is stunning, and disgusting.
>Hopefully you and I can agree that murder is a real psychopathic behavior here.
I don't know much about psychopathology, but indeed, I do not condone murder, and have pretty strong objections to all forms of killing people.
>Why is it you collectivists are so defensive about your ideological track record?
Hm. This sounds a bit like a "Have you stopped beating your wife?" sort of question. I'm not ideologically affiliated with Stalin (or Mao, or …). I would not want to live in the sort of society that was the USSR – under Stalin or any other dictator.
On the other hand, I don't have any problem with a wealthy society providing a basic income for its unemployed, affordable schools, affordable healthcare, or similar collective facilities.
Does that make me a "collectivist"? I don't know – I'm not quite sure what that word means to you, but apparently you already had me labelled as such.
Does it make me a communist, extreme leftist or Stalin worshipper? Not in my book. But I'm not sure your book makes that sort of distinction.
Fail: I'll dispute your contention that Scalzi is talentless. I've read and enjoyed his work, at least when he's not been using it as a vehicle for his hard-left ideology. And no, he doesn't do that with everything.
kn: The problem with your leftist socialtopia is that there's not enough wealth to make it happen without dragging the wealthy down to the level of everyone else. What are you going to do when you run out of other people's money to spend?
"kn: The problems with your leftist socialtopia is that there's not enough wealth to make it happen without dragging the wealthy down to the level of everyone else."
So what is an (un)acceptable difference in wealth?
An RP10% (richest 10% divided by poorest 10%) of 16 (USA) or 6 (Sweden)?
https://en.wikipedia.org/wiki/List_of_countries_by_income_equality
And how are the Swedes "poor"?
"What are you going to do when you run out of other people's money to spend?"
And when did that ever happen? When did the Swedes, Norse, or Dutch run out of "other people's money"? This quote comes from Thatcher, who quipped it as a matter of dogma. The UK before or after her never even tried in earnest to reduce income inequality.
>And how are the Swedes "poor"?
In terms of purchasing-power parity, the Swedes discovered a few years back they were about on a level with Alabama – the poorest, most rural state in the U.S. Remarkably, they actually did something effective about it – went a long way towards scrapping Swedish-model socialism. Some improvement has followed. They may have made it up to the level of a mid-tier U.S. state by now. Or maybe not; government share of GDP has not shrunk to American levels yet.
Mean or median PPP?
>Mean or median PPP?
I read an English translation of the original "Sweden is Alabama" study too many years ago to remember. Google is not giving useful hits.
In any case, Sweden's Gini coefficient is low, so the difference between mean and median is unlikely to be large.
Winter: "So what is an (un)acceptable difference in wealth?"
I reject your premise. The only way incomes become more equal is to level everyone down. It's nothing but the naked politics of jealousy.
But Alabama has a high Gini coefficient.
It's Mississippi that's the poorest US state, rather than Alabama.
I googled and found http://www.freerepublic.com/focus/news/678536/posts According to this it is median PPP that they're making claims about.
@ Winter – "The Dutch did their share of attrocities. But the victims were not Dutch"
Are you implying that state-sponsored genocide is acceptable if not practiced against your own population? Or that there is moral superiority in committing genocide against "inferior" peoples versus homeland's superior race?
I would argue that this is the mindset that typically leads to tyranny.
Income inequality isn't, even for those who talk about it, the real problem – if everyone had enough, then some people having more isn't objectively bad. People talk about income inequality, but the concrete injustices they point to tend to be about poor people, not rich people.
Well, talentless is a "compared to what?" kind of thing. As much as Scalzi laughs at Dan Brown and his Da Vinci Code as a hack, Scalzi is the same animal. His SF is as interesting and shallow as a bad version of Johnny Quest.
In one of the opening scenes to a Human Division story he has aliens startling humans because of their alien custom of spitting gobs of water to close a deal, in this case into the faces of human diplomats. OH MAN LOOK AT THAT, THAT'S REALLY ALIEN! That's the kind of thing I expect to see on The Cartoon Channel. That might've been clever in the 1890s, during the Cis-Het Victorian Patriarchy. The truth is Scalzi has never written anything even as modern or searching as E.M. Forster's "The Machine Stops," published in 1909. "The Maker of Moons," an 1896 no-account short story by Robert W. Chambers has more intrinsic weirdness and interesting dialogue than the redneck conformity of Scalzi. It is at least blessedly devoid of fart jokes, as is the entirety of Golden Age SF for some reason no one understands.
Throw in the fact he's decided to be an anti-racist proxy voice for the racist bigotry of an insulated cult of gay black radicalized feminist dogma and you're not exactly walking down Perception Alley.
I wouldn't be surprised to see Scalzi set aside his insults of Brown and do a startlingly original collaboration together about Capt. Zemo and his Space Templers Twenty Thousand Parsecs Under the Galactic Lens.
Are you implying that state-sponsored genocide is acceptable if not practiced against your own population?
Are you implying that statists are the only ones who engage in aggressions that kill "others"?
Income inequality isn't, even for those who talk about it, the real problem – if everyone had enough, then some people having more isn't objectively bad.
Objectively, no, but humans' firmware keys on relative difference rather than absolute plenty, and we've spent sixty years actively suppressing the tools (e.g., clear reasoning) necessary to get past evolutionarily adaptive but outdated thought patterns.
"Or that there is moral superiority in committing genocide against "inferior" peoples versus homeland's superior race?"
As Patrick already wrote, killing others for fun and profit is older than humanity itself. The discussion here was about collectivist governments being extra prone to mass murder their subjects. Your examples did not apply as they did not involve Dutch subjects, but conquered foreigners.
Random832,
Ensuring everyone has enough is going to be tricky with free-market solutions in the world we're entering, one in which human labor becomes increasingly fungible with machine labor. There is an easy and obvious solution — guaranteed basic income — but if that were ever seriously proposed, up would come the cries of "collectivist!" and "redistributionist!" from the dogmatic conservative/libertarian reactionaries. Clearer-headed libertarians have a more nuanced view.
But there's more. It's well known in counterinsurgency circles that civil unrest arises not from conditions of absolute poverty, but from conditions of relative poverty. Furthermore, the research of Richard Wilkinson and Kate Pickett strongly indicates that societies with more equal incomes are happier and healthier overall. So income inequality is either a real problem, or it's a reliable indicator for a deeper problem which is strongly linked to other forms of civil strife and poor health outcomes. My suspicion is the latter; societies which venerate property rights and individualism above all, like the USA, tend to foster less cooperation, more rivalry, and more dissonance between social strata than do societies which revere social harmony and civic duty. But I can't back that up.
Here is another estimate of median PPP household income (2004)
http://en.wikipedia.org/wiki/Household_income
Differences are not as extreme as in that earlier study which was compiled with a much narrower focus: ~$20k vs $27k
So, where does this discrepancy originate?
"My suspicion is the latter; societies which venerate property rights and individualism above all, like the USA, tend to foster less cooperation, more rivalry, and more dissonance between social strata than do societies which revere social harmony and civic duty."
I think the power inequalities that are at the root of income inequality also affect "happiness" and "social harmony".
> The problem with your leftist socialtopia is that there's not enough wealth to make it happen without dragging the wealthy down to the level of everyone else. What are you going to do when you run out of other people's money to spend?
and don't understand why you call it utopia. It's happening in real life every day. Do you expect the creation of wealth to come to a stop just because a part of it is redistributed in the form of public works like road building, running a police force and an army, or the funding of schools and hospitals?
^ that first sentence should read "I dont understand …"
don't know what happened there
> guaranteed basic income
I think that is a super idea. How about you set up a charity and convince rich people to contribute to it out of their moral and ethical sense of duty? After all, rich people are pretty generous. But that isn't what you want, right? You want to force people, all people who are above some arbitrary line, to contribute to your scheme. You want people who bust their butts 80 hours a week, and live frugally and carefully, to subsidize people who sit on their butts watching Oprah and complaining about how the rich are screwing them?
Of course not all poor people are lazy or non industrious, and not all wealthy are hard working and contributory, but when charity becomes an entitlement gratitude turns into resentment, when charity becomes an entitlement the pleasure of helping others turns into bitterness at bureaucratic goons, and when charity becomes an entitlement there is a word for the working poor — suckers.
> But there's more. It's well known in counterinsurgency circles that civil unrest arises not from conditions of absolute poverty, but from conditions of relative poverty.
So you believe the successful and hard working should genuflect to the blackmailing threats of the lazy and non contributors? Again, by no means are all the poor or low income lazy or non contributory, but the ones who are worthy of consideration are too busy working and trying to improve their situation to be professional protesters or perpetually offended.
> My suspicion is the latter; societies which venerate property rights and individualism above all, like the USA, tend to foster less cooperation, more rivalry, and more dissonance between social strata than do societies which revere social harmony and civic duty.
But your suspicion isn't borne out by reality, where the USA, that country bristling with capitalist pigs and oppressed poor, is regularly rated as the most charitable and generous nation and group of people on earth.
There have been experiments with basic income. They tend to succeed.
http://en.wikipedia.org/wiki/Basic_income#Worldwide
In the Netherlands, we have a welfare system that will grant a basic income provided you are trying to get a job. But there is no time limit. That is why this subject is off the political agenda.
In the Netherlands, we have a welfare system that will grant a basic income provided you are trying to get a job. But there is no time limit.
Which sounds a lot like what the US calls "unemployment benefits". I find it interesting that Paul Krugman, back when he was writing on actual economic research, argued against long terms for unemployment. (In particular, one of the problems I observe is that when an economy is overinflated, wages will have to come down at some point to a realistic level, but most people have a vehement loathing to take a significant pay cut, especially when "I can't find a job that pays 'what I'm worth'" is good enough to keep the unemployment benefits coming.)
"…a basic income…"
"I think it's a wonderful idea. Tell you what. I'll go convince the poor…you go convince the rich."
– Herschel Ostropolier
Forgive me for being blunt, but you and Jeff Read appear to have an extraordinarily casual attitude about mass murder (e.g. ho-hum, it's been around since dirt and everyone does it, even non-collectivists).
Sorry, but in my mind, that attitude is a psychopathology and is also the reason that plebeian collectivists such as yourself are so prone to forming tyrannies. Libertarians inherently recoil against all forms of murder and neither excuse it nor justify it based upon some convenient political mantra. To date, history has not produced a single libertarian mass murderer.
Whoa, whoa, whoa. You and I may believe both of them are overly casual about government measures that ultimately lead to such atrocities, but Winter, at least, isn't being casual about mass murder; he's just saying the example put forth doesn't uphold the claim that such measures lead to the murder of that nation's own people. Which was in fact what the claim was. Let's not score own-goals by criticizing him for something he didn't say.
One of the problems that college males (and some non-college males) are facing is that women are attempting (and in some cases succeeding) in withdrawing consent hours to years *after* the act has completed.
I don't understand
You could have just stopped there (I applied your patch).
why you call it utopia. It's happening in real life every day. Do you expect the creation of wealth to come to a stop just because a part of it is redistributed in the form of public works like road building, running a police force and an army, or the funding of schools and hospitals?
This will probably be about as fruitful as talking to Read, but:
The creation of wealth *does* stop, or will slow down to the point where it is essentially invisible once the government can redistribute wealth at whim.
There was little to no wealth creation in the USSR during the cold war. Most of their innovation was stolen from the west, and in some ways was active wealth destruction because useful materials were wasted and/or frittered away.
There is NO wealth creation happening in Zimbabwe, quite the opposite.
Australia's doing ok, mostly because they can let the Chinese mine their ores for them and get paid for it, but how much better could they be doing if *they* mined their ores and had native industries producing the goods?
Also the general objection isn't the building of roads or the support of some level of military–those are things that nations do (military) or that we all use daily.
The problems that people like me have is that we're subsidizing sloth and corruption.
Those sorts of things do tend to work when you have a culture as homogenous as the Netherlands were until recently–people tend to look at members of their own tribe as "unlucky" or "having a hard time" rather than "lazy worthless parasites".
These things also seem to work better when the dominant culture express values of hard work and not being a parasite. This means that people generally are incentivized to actually *find* work.
As a culture diversifies, especially if one side or the other is seen as getting a larger share than it puts in, well, you'll see it back on the agenda.
Also, according to this:
http://www.shrm.org/hrdisciplines/global/articles/pages/netherlands-law-contract-employment.aspx
You're wrong.
Decreased Unemployment Benefits
The maximum duration of unemployment benefits from social security will gradually decrease from 38 months to 24 months between Jan. 1, 2016, and 2019.
Additionally, as of July 1, 2015, a person receiving unemployment benefits for a period of six months or more will be obliged to accept any available job as suitable employment. Under current rules, the duration is one year.
So not only is it not "unlimited", it's not even off the agenda–it's being reformed.
@ Paul Brinkley
I don't think you have been following this thread accurately. I have never claimed that all mass murderers are collectivists, nor that all collectivists are mass murderers. Only that most of the major mass murderers of the 20th Century have been tyrants leading socialist (e.g. collectivist) governments. And these tyrants murdered lots of people, both internally and externally, so the citizenship criterion is meaningless.
In addition, I have never claimed that the Netherlands government has been a mass murderer of its own people, only that it has a bloody track record in its colonization history. Winter seems to take great pride in the fact that his government has not yet started mass murdering its own people, which is indeed fortunate, but absurd to cite as a virtue.
@Christopher Smith:
most people have a vehement loathing to take a significant pay cut, especially when "I can't find a job that pays 'what I'm worth'" is good enough to keep the unemployment benefits coming.
Yeah, the art of setting the right level of unemployment benefits seems to be difficult to manage, but…
In my experience, the people in the trenches do know the score, and apparently try to implement systems to compensate.
In 1993, the small company I was working for laid off everybody except one tech-support guy and the accountant (around 15 people). It was not a good time to be looking for work (I actually wound up moving from Texas to Canada for a year), so after about a week, I went down to the unemployment office, which was an enlightening experience.
I don't know how it is now in Texas, but at the time, the unemployment benefits maxed out at $245 per week. We all went to an indoctrination room, got a lecture, watched a video, etc., and then queued up for individual sessions with counselors.
From where I was sitting, I could easily overhear the counselor haranguing the person in front of me for around 20 minutes, about how she was required to check the postings at the unemployment office, get on the phone, go to interviews, and submit paperwork proving that she had applied to at least three companies each and every week.
So when it's my turn, I'm about ready to tell the counselor that seems like a heckuva lotta work for such an insignificant amount of scratch, so they can just keep their money, but being polite, I let him go first. He looks at his computer screen (which shows him my income for the last who-knows-how-many-years, because the state collects unemployment insurance from employers), then turns to me and says "Sorry, I don't think we're going to be able to help you find a job at all. Your checks should start coming in two weeks. Please let us know when you find a job. NEXT!!!"
That link only shows a few actual implementations, and has NO comments on their effectiveness, other than in one country where folks in a single DESPERATELY poor village were given about 12 dollars a month per person for a year, then 10 dollars a month for a while later.
That it "decreased childhood malnutrition and increased school attendance" was seen as a sign of success, and in a grand case of begging the question came to the conclusion that since other people were migrating to the village that this should be implemented nation wide.
What's your bet on their belief going in?
I have absolutely no doubt in my mind that in countries where the mean annual income is well below the poverty level that providing some sort of support to get people to or above the poverty level will make certain metrics better, and that in the long run this might lift these nations out of perpetual poverty.
That is not the case (by definition) anywhere in the developed world.
In contrast we have this: http://www.city-journal.org/printable.php?id=6114 where a bunch of progressives PAID poor people to do what the middle class does (or used to do) reflexively. Stay in school, go to the doctor etc. And even then they often failed.
Which really does make sense.
In the US it isn't hard for *anyone*, black, white, latino, whatever, to climb out of poverty. Yeah, there's racism, sexism and some bias against the transgendered (whatever that means this week), but those are things that generally take a bit off the top, not keep you at the bottom. All it takes to live a comfortable life in the US is hard work and a bit of thrift. You live somewhere where it's cheap to get to work – in Chicago or NYC this means you wrap your life around the mass transit systems (did that for 7 years in Chicago), and you don't waste money on Air Jordans or go out to eat.
My wife and I have "Liberal Arts" degrees. Mine's even worse – Fine Art. During the first 2 years post college we were barely scraping by – I was hard-headed about the work I'd take, and she was working a job that started at $4 an hour. Of course, within 12 months she'd been promoted to over $7 an hour, and my hard-headedness wound up with me taking a part time job at a major publisher, which led to a full time job, which led to a rather lucrative profession as a Linux SA. Yeah, like Randy Waterhouse I got out of college with a degree, a girlfriend and more than a working knowledge of Unix. Fortunately my wife wasn't a left wing feminist. Unfortunately this meant I didn't go to the Philippines and meet a hot half-Filipina treasure hunter. Oh, and I should mention 30k in student loan debt that got paid off more-or-less on time.
Let's quote from that City Journal article:
Of course, it's ludicrous to suppose that what keeps America's inner-city residents poor across generations is a struggle for subsistence in an economy of limited opportunities. The main drivers of poverty in America are family breakdown (in 2004, single-parent households nationally were six times as likely to be poor as married families) and nonwork (only 5 percent of all families with one full-time worker were poor in New York City from 2005 to 2007, compared with 47 percent of families with no workers). The antisocial behaviors that contribute to multigenerational poverty also have nothing to do with suffocating economic pressures: very few inner-city students cut classes or drop out of school to help their parents work; they do so because their peer culture is toxic and because their parents exercise little control over their lives.
In the US a *single person* can survive, if they are careful and thrifty, on our current minimum wage (roughly $15,080 USD; PPP is about 1.1, so if I did the conversions right, about 12,841.20 EUR). This isn't a lot of money, but the numbers say it can be done. And frankly if you *can* do the things you need to survive you have the sorts of skills that mean you won't be working a minimum wage job very long.
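(A quick sanity check of those figures, as a minimal sketch; the $7.25/hr federal rate is an assumption on my part, since the comment only gives the annual total:)

```python
# Sanity-checking the minimum-wage numbers quoted above.
# Assumption: the 2009-era US federal minimum wage of $7.25/hour (not stated in the comment).
hourly = 7.25
annual = hourly * 40 * 52              # full-time, 52 weeks/year
print(annual)                           # 15080.0 -- matches the $15,080 figure

# The quoted EUR amount implies the conversion factor actually used:
print(15080 / 12841.20)                 # ~1.174, i.e. closer to 1.17 than to the stated 1.1
```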
The truth is that in the US very, very few people work at the minimum wage, and many who do are "tipped" employees, which means they probably don't report all of their tips as income.
So no, I don't think there are any cases of an industrial society that "successfully" implemented a guaranteed basic income.
> After all, rich people are pretty generous.
Compared to what?
http://www.theatlantic.com/magazine/archive/2013/04/why-the-rich-dont-give/309254/
Maupin:
This is EXACTLY the difference between the poor and the not-poor in America.
In America the "poor" tend to treat unemployment insurance as a paid holiday. The non-poor treat it as a bridge (I should note that I've applied for UI 3 times, and been on it once, intermittently, for about 3 months total) to keep from getting further in debt.
I've been unemployed about 9 of the last 21 months, partially because of moving continents. Now I'm working a distinctly "not poor" job. I make enough by myself to let my wife stay home and raise our daughter rather than farm that out to the teachers unions and the state.
Because I didn't *expect* anyone to "give" me a job. I expected to earn it, and wanted to maximize my utility.
@William O'Blivion
You are referring to the wrong benefit. These are above minimum unemployment benefits (~70% of last income, but that could have been changed again, WW in Dutch) which are indeed time limited.
After you lose these, you will be eligible for "Bijstand" (Law of work and assistance). In this, a family would get somewhat less than the minimum wage as long as they make an effort to get a job or are in some way incapacitated. After retirement you simply get this amount without conditions.
This is not a basic income as you do not get it if you have equity, savings, or any kind of other income.
For those who can read Dutch (and therefore would already know about this):
https://nl.wikipedia.org/wiki/Wet_werk_en_bijstand
"Only that most of the major mass murderers of the 20th Century have been tyrants leading socialist (e.g. collectivist) governments. And these tyrants murdered lots of people, both internally and externally, so the citizenship criteria is meaningless."
There were a few linked together around WWII, one of which is only considered socialist by USA libertarians. Those who suffered from it considered it extreme right wing.
"In addition, I have never claimed that the Netherlands government has been a mass murderer of its own people, only that it has a bloody track record in its colonization history."
I interpreted your comment as predicting mass murder in Sweden and the Netherlands, mostly because your remarks had been made before in the context of such accusations. So I was wrong, sorry.
"Winter seems to take great pride in the fact that his government has not yet started mass murdering its own people, which is indeed fortunate, but absurd to cite as a virtue."
No, I commented that the link between "strong state" and "mass murdering subjects" that is generally made here is simply false. As data points I gave two examples of "old" countries that never engaged in mass murdering their own subjects.
I consider the behavior in our colonies (including Suriname) abject and would love to have those responsible dragged in court for war crimes, if that would be possible. But we cannot redress the atrocities of history as those responsible are dead.
Why you think I am in some way proud of those atrocities is a mystery to me.
"That link only shows a few actual implementations, and has NO comments on their effectiveness, other than in one country where folks in a single DESPERATELY poor village were given about 12 dollars a month per person for a year, then 10 dollars a month for a while later. "
You did not follow the leads:
http://www.cbc.ca/news/canada/manitoba/1970s-manitoba-poverty-experiment-called-a-success-1.868562
http://www.dominionpaper.ca/articles/4100
http://opinionator.blogs.nytimes.com/2014/01/18/what-happens-when-the-poor-receive-a-stipend/?_php=true&_type=blogs&_r=0
"Which really does make sense."
No, you are repeating dogma. Empirically, it tends to work almost everywhere.
But maybe "mainstream Americans" simply are unable to make such a thing work?
The USA also seems to be unable to get universal health-care coverage working. Every other OECD country got it working decades ago (and way cheaper).
FSVO "working". The US has the finest health care system in the world…or at least did until Barack Obama got his socialist fingers into it.
Or are you ignoring all of the news about third world hospital conditions in the UK's NHS?
>Or are you ignoring all of the news about third world hospital conditions in the UK's NHS?
Or, for that matter, the bureaucratic patient-murdering machine at the Veteran's Administration.
Except that 10% of the population was without any health insurance and many of the rest have caps. Didn't we discuss a case higher up where a woman could not get reimbursement for needed hormonal treatment?
The whole basic concept of Breaking Bad is inconceivable in other OECD countries. I know too many people who were in need of very expensive (cancer) treatments. None of them were denied very long term treatments, and none of them received bills or were threatened by caps.
You are dreaming:
http://www.commonwealthfund.org/publications/fund-reports/2014/jun/mirror-mirror
The UK is beaten only by the USA in low quality (=healthy lives). My country is "average" (=in need of improvement).
US health care is 11th in this survey. And even in the one area it boasts about, quality of care, it is 11th, because so many people are denied the care they need. The US is also the most expensive by far.
http://www.commonwealthfund.org/~/media/Files/Publications/Issue%20Brief/2011/Jul/1532_Squires_US_hlt_sys_comparison_12_nations_intl_brief_v2.pdf
About the healthcare system, one thing that often isn't considered is that all these public healthcare systems in foreign lands ride the coat tails of the American medical system. The large majority of investment and new discovery in medical technology happens here, and is mostly paid for here.
A perfect illustration would be with respect to drug pricing. You can often buy the same drug in Canada cheaper than you can here in the USA. Why? Why would the big drug companies be less money grubbing overseas than here at home?
The economics of this are not complicated. Let's use a simple worked example: wonder drug Raymonix, which cures people of their irrational thinking. Total investment in developing the drug is $200M, FDA certification costs are another $200M, marginal cost to produce one pill $0.10.
We expect 100M doses per year in the USA, and if we are looking for an ROI of 3 years and a profit of 10%, then we charge about $1.60 for each pill (someone check my math). However, on the margin, each pill produces a profit of $1.50. Consequently, if Canada passes a law that says Raymonix should be sold for less than 50 cents, selling it in Canada is still viable, since the marginal profit is still 40 cents per pill.
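(For what it's worth, the arithmetic checks out, give or take rounding; here is a minimal sketch using only the figures from the worked example, where Raymonix and every number are the comment's own hypotheticals:)

```python
# Re-running the Raymonix worked example from the comment above.
development   = 200e6    # $200M development cost (hypothetical)
certification = 200e6    # $200M FDA certification cost (hypothetical)
marginal      = 0.10     # $ per pill to manufacture
doses_per_year = 100e6   # expected US volume
roi_years     = 3
profit        = 0.10     # 10% target margin

fixed_per_pill = (development + certification) / (doses_per_year * roi_years)
us_price = (fixed_per_pill + marginal) * (1 + profit)
print(round(us_price, 2))             # ~1.58, i.e. "about $1.60"
print(round(us_price - marginal, 2))  # ~1.48 marginal profit per pill ("$1.50")
print(round(0.50 - marginal, 2))      # 0.40 margin even at a price-capped $0.50
```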
Consequently, Canadians get Raymonix for a third of the cost because they are being subsidized by the American healthcare market. Then they carp on about how much better the Canadian healthcare system is than the American, while Americans subsidize their crowing.
If the American market did not subsidize Raymonix in this way, or passed a law saying Raymonix should cost 50 cents, then the drug would never have been developed in the first place and we'd all be worse off, or if post facto, the drug company would either go out of business and stop making it, or they would never develop another drug again.
Of course these consequences of price controls are pretty invisible, it is hard to see something not being done, but that is why politicians love that part of economics — visible benefit to them, invisible costs.
Which is to say all those publicly funded healthcare systems are not just funded by the local government, but by a massive tax on the American healthcare system. Which makes it doubly unpleasant for us, since not only are we funding your medical care, but you are mocking us for how stupid and backward we are.
There is a continuum between the manically productive at one end of the spectrum and the utterly parasitic at the other end of the spectrum. Most people are somewhere in between during most of their adult lives. To the extent that a country or society has members with a centroid of distribution nearer the productive side of the spectrum, they tend to be wealthier and more advanced.
High taxation and income redistribution pushes the centroid toward the parasitic tail because it rewards parasitic conduct. This is anti-evolutionary because the parasites have no incentive to become productive. What happens when the gravy train runs out of track?
Still, the USA is unable to deliver health care to all. And you cannot blame that failure on Canada or Europe.
The USA and the drug companies are free to charge and distribute money and drugs the way they want to. Also, the drug companies spend more on marketing than on development.
Note that quite a number of drug companies are non-USA. Switzerland is big in this respect.
"High taxation and income redistribution pushes the centroid toward the parasitic tail because it rewards parasitic conduct."
This sounds like received libertarian dogma. Please supply empirical data that a basic income has this effect. All field experiments resulted in the exact opposite.
> The large majority of investment and new discovery in medical technology happens here, and is mostly paid for here.
true for some values of 'mostly'
there's a list of the top 12 pharmaceutical companies at wikipedia. The combined R&D budget (reported, 2012) of the US companies in that list is 44% of the total.
Switzerland : 31%.
> The USA and the drug companies are free to charge and distribute money and drugs the way they want to.
That isn't actually true. In many countries they are subject to price controls, and in all countries they are subject to regulatory controls. However, that isn't especially important.
> Also, the drug companies spend more on marketing than on development.
This is an old saw. What difference does that make? Advertising makes drugs cheaper for consumers by increasing the market and consequently distributing the costs. Whether that takes the form of lower drug costs or more money to develop new drugs, or rewarding investors making it easier to raise capital for new drugs is irrelevant, either way it is a good thing.
> Note that quite a number of drug companies are non USA. Switzerland is big in this respect.
Right, but that isn't important in the economic calculation. The calculation doesn't matter as to where the drugs are made, only as to where the market price is relatively free or relatively controlled. The gigantic American drug economy subsidizes those Swiss made drugs too.
Just as a data point, I did a quick eyeball, so my count might be off a little, but in the 70 years since the end of the second world war, US scientists have won the Nobel Prize in medicine 85 times, and many of the others did their research in American institutions. Some were shared for sure, but I think "mostly" would be a fair assessment of the American contribution to medicine.
I think the phrase you are searching for is "thank you very much America."
>The creation of wealth *does* stop, […] once the government can redistribute wealth at whim.
Because of taxes, right?
But then, building roads and maintaining an army are also funded with taxes. The creation of wealth could also stop if the government spends its revenue and then wants more tax.
What they spend it on isn't the issue; welfare or social security isn't really the problem. It's more a problem of balancing revenue against expenses, and setting priorities.
That's why that Thatcher quote is wrong. (But it sounds catchy, I'll give you that).
>The problems that people like me have is that we're subsidizing sloth and corruption.
I graduated during the crisis in the 1980s and didn't get a job until the 90s. Eventually, after a number of odd jobs, I started taking classes in IT, which I could afford to because of the unemployment benefit I received and the school being heavily subsidized by the government.
I then quite easily got me a job as a system administrator, and I've been doing that ever since, with pleasure. Yay.
How's this a story about sloth and corruption?
Only if you redefine "subjects" to mean "resident citizens".
@ Winter – "Please supply empirical data that a basic income has this effect?"
The history of the Soviet Union is a relevant case study. Feel free to choose any time period from the early 1920s through its demise in 1991.
"The history of the Soviet Union is a relevant case study."
This remark suggests that your understanding of how work and income were distributed in the Soviet Union must be next to zero.
Or you do not understand the concept of a basic income.
(Or both)
Maybe subjects is not the right word. "Resident citizens" is ambiguous too; resident where?
The locals (natives) of the colonies had no representation in the ruling class nor among the officers of the armed forces. Moreover, those who ruled these colonies were born and raised in the Netherlands and worked only on a temporary basis in these colonies.
In no way were the natives considered citizens of the Netherlands and they also had none of the legal rights of citizens.
You are the one making the extraordinary claim, namely that "basic income" has the magical ability to invert the well known and well documented properties of all other forced wealth redistribution schemes. The evidence you've presented so far is laughable, and doesn't come close to supporting your conclusion.
@Jessica, Winter, etc:
Statistics about medical systems tend to favor the US when they are about medical care, and disfavor the US when they are about things other than medical care. Creating metrics that the US does poorly on is so easy that any fool can do it, and many do. The most meaningful metric is simply "Where do the rich and powerful go when they are afraid?" and everyone knows the answer to that question.
The drug pipeline is so complex and poorly understood by laymen that it is nearly pointless to discuss it in public. For a peek, I suggest http://pipeline.corante.com/
@kjj
"You are the one making the extraordinary claim, namely that "basic income" has the magical ability to invert the well known and well documented properties of all other forced wealth redistribution schemes."
My claim is no more than that basic income can reduce poverty under the definitions as given in the links. Also, basic income improves living conditions and prospects of the beneficiaries.
The empirical evidence supports this claim, see the studies mentioned. And I have yet to see the documents you claim prove otherwise. Feel free to supply links to these sources that so well document the opposite of what I see around me (see the account of kn above).
You can do the math.
The USA spends around $8,600 per capita on health care. That is around $2.5 trillion. Drugs make up only a small fraction of this amount. Also, drug companies are very profitable businesses, so they could do with fewer subsidies from US patients. Still, with all this money, the USA is unable to supply everyone the care they need. Other countries spend way less per capita and are able to reach everyone.
The result is that the USA trails the developed world in population health statistics.
http://en.m.wikipedia.org/wiki/Health_care_in_the_United_States
I have seen no financial or demographic reasons why the US cannot organise comprehensive health coverage, only political and ideological reasons.
@ Winter – "This remark suggests that your understanding of how work and income were distribited in the Soviet Union must be next to zero. Or you do not understand the concept of a basic income."
These tutorials are getting a little tedious for me, but I will try again.
In an earlier post, I asserted that high taxation and income redistribution created the anti-evolutionary effect of encouraging parasitic behavior in a society. You disagreed and started yammering about a "basic income" and invoking this mantra as if it somehow proved that collectivism was the perfect form of political economy.
This is the key problem with all incipient collectivist tyrants. They all think that their particular unique form of coercion is the one that will finally work and produce the promised Utopia.
The specific details of the Winter "basic income" model are not determinative here. It is the underlying principle (and human nature driver) that matters. A principle that is well reflected in the Soviet failed experiment.
Let me try to explain this principle in direct terms. Collectivist government confiscates wealth from the productive element of society and then uses this booty to purchase the votes of the parasitic element of society. The standard of living of the parasites goes up (positive reinforcement) and they respond by voting more diligently for the collectivist government. As the cycle continues, eventually the productive individuals start to rebel and government must resort to tyranny in order to keep them in line. At the extremis of this cycle, you get labor camps, gulags, and gas chambers.
You do not pay attention. First, taxes are as old as history itself. Human societies have survived them for thousands of years. Redistribution is even older than history, as it is found in every hunter gatherer society.
Basic income is supplemental. The crucial part is that the beneficiaries can increase their income with work. There are many such safety-net systems in the world.
The reason your reference to the Soviet or Maoist system is irrelevant is that in these systems people were assigned a job. They were not allowed to move to a better job if they wanted. The only ways to increase your income were to suck up to the party hierarchy or to work the black market. Both routes attracted a lot of ambitious, hard-working people. This is fundamentally different from everything that happens in Western economies.
But I suppose the dogmatic, fact free, responses I get from you are a sign you will not bother to look at how people actually behave in the real world. Or even to try to understand why well off Europeans vote to keep and even extend welfare programs from which they themselves do not get "handouts".
> The USA spends around $8600 per capita on health care. That is around $2.5 trillion. Drugs make up only a small fraction of this amount.
Right, but the same economic calculus that applies to drugs applies to all aspects of medical care, to a greater or lesser extent. So perhaps our healthcare costs are high, but that is because we are subsidizing the Dutch healthcare system, and the Canadian healthcare system, and the NHS in Britain, and so forth.
When some Dutch mother doesn't have to spend the last years of her life unable to remember her children's names, or some Canadian child doesn't die from leukemia, it is almost certainly because of all the high cost of American healthcare paying for the research, and the American scientists, Universities and drug companies investing in the drug research. Of course other countries contribute, but the USA is far and away the dominant source.
Thank god that drug companies are so profitable; profitable enough to fund the high risk of drug research and consequently make us all happier and healthier.
> Still, with all this money, the USA is unable to supply everyone the care they need. Other countries spend way less per capita and are able to reach everyone.
This isn't really true at all. Medical care is available even to the very poorest in various government-provided forms. Is it as good as the top quality care the people paying for their own care get? Of course not, but I believe earlier I discussed the problems of entitlements versus charity. There are plenty of flaws in the American medical system, nearly all of them to do with government interference, but let's not change the subject from cost, and how America subsidizes all those putatively fabulous exemplary publicly funded medical systems around the world.
So it might be true that "Other countries spend way less per capita and are able to reach everyone." but that is only because Americans are paying part of your bill.
> The result is that the USA trails the developed world in population health statistics.
America often doesn't top the statistics on population health, but it is always at the very top on healthcare. The two things are not the same at all. Much of the problems with American health comes from the fact that our poor are so rich (in terms of buying power) that they consume way too much and do too little, and have access to many things that poverty prevents in many other countries. But health is a different question than healthcare.
> I have seen no financial or demographic reasons why the US cannot organise comprehensive health coverage, only political and ideological reasons.
Because if America did what the Dutch do, then everyone in the world would be sicker and die earlier, and medical research would slow down a very large amount for the reasons already stated earlier. Again the phrase you should be thinking of is "Thank you America".
Having said that there is lots and lots wrong with the American medical system, but lack of central control is not one of them.
Dogmatic and fact free? You ignore the repeated experience with socialist wealth-distribution schemes over the past century. You're no different from those who say that we have no real experience with socialism or Communism because the true version has never been practiced.
The reality is that the true version, according to their definition, is impossible, because it depends on those holding the reins of power to be saints – and there is a distinct lack of saintliness in people who take absolute power in that regard.
The real world has places like North Korea and Venezuela and China in it…which European leftists either ignore entirely or handwave away.
You mean like Jews in Germany in 1940?
> You mean like Jews in Germany in 1940?
More like blacks in the US in the 18th and 19th century.
> The real world has places like North Korea and Venezuela and China in it…which European leftists either ignore entirely or handwave away.
So, how near to or how far along the slippery slope towards becoming the next North Korea or, why not, the new USSR, do you think welfare states such as Germany, Switzerland, the Netherlands, Belgium, … actually are?
The difference is universal health care, or lack thereof, period. Historically, when it came to population health outcomes, Canadians were virtually indistinguishable from Americans… until the 1970s when Canada started offering universal health care. Then the Canadian health outcomes looked better and better, surpassing the USA's.
There's no denying that if you happen to be rich and powerful, America provides fantastic health care — perhaps even the best in the world. But a society that treats its rich well and fucks over its poor is hardly desirable. And it's hardly the mark of a democratic developed nation. More the mark of a South American dictatorship.
>until the 1970s when Canada started offering universal health care. Then the Canadian health outcomes looked better and better, surpassing the USA's.
Suuuuure. That's why Americans with serious health problems and the practical choice to do so go north to Canadian hospitals to get treatment.
Oh, wait. No. In the real world, it's the other way around.
Which reveals all these claims about the superiority of Canadian health care outcomes to be exactly like every other claim about the shiny wonderful outcomes of collectivism. That is, lies.
@ Winter – "taxes are as old as history itself. Human societies have survived them for thousands of years."
So this is your Utopian vision, a world burdened by taxes and redistribution but still just barely surviving. Or is it maximizing the parasitic consumption to just below the point where the host dies? Or is it enslaving foreigners in the colonies in order to fund the handouts being provided to the citizen parasites? Or is it spending nothing on your own national defense (and instead relying on the US defense umbrella) in order to play Santa Claus with extra welfare goodies and then brag about how magnanimous you are?
> The difference is universal health care, or lack thereof, period…. Then the Canadian health outcomes looked better and better, surpassing the USA's.
Even were one to accept your claim, and even if you ignore the many other things that the government did to the poor in America at this time, and even if you ignore the systematic political attempts to destroy a true free market in healthcare in the US going back 100 years, the conclusion to your statement is obvious — our American poor can't afford to pay the premium that all the foreign healthcare systems are imposing on the American one.
I haven't really studied the cross-border medical stuff, but like everything else, there's probably a lot of chaff to sift through. This looks like a reasonable (if a bit dated) start on sifting:
http://content.healthaffairs.org/content/21/3/19.full
Wikipedia has a page that discusses "medical tourism."
Interestingly, one study purports to show that in 2007, the number of people leaving the US for medical care was around 10X the number of people entering the US for medical care.
http://en.wikipedia.org/wiki/Medical_tourism#United_States
And, reading between the lines, one reason you probably don't read much about Americans going to Canada for medical care may well be that the Canadian hospitals are treading very carefully. They wouldn't want the appearance of putting foreign revenue ahead of the well-being of the locals:
http://www.theglobeandmail.com/news/national/sunnybrook-hospital-accepts-international-patients/article17751151/
> Interestingly, one study purports to show that in 2007, the number of people leaving the US for medical care was around 10X the number of people entering the US for medical care.
There are two different things going on here. Medical tourism for regular medical care to foreign countries because of lower prices, and medical tourism for treatments that are not available in your home country (or are behind some massive waiting line due to national health system rationing.)
The latter is really the point I think Eric was making. No Americans are going to Canada because they can't find someone at home willing and qualified to do their brain surgery.
The former is simply a consequence of what I was talking about earlier. People import drugs from Canada because they can buy them without (much of) the premium Americans have to pay to subsidize the whole world's medical care. It is a little leak in the abstraction. Going overseas for a hip replacement or a glaucoma treatment is just the same thing (though there is also a little more to it than that.)
@esr&jessica etc.
So it must be easy to show me statistics that prove all citizens in the USA get the health care they need, even for chronic conditions (which make up the bulk of the costs). That is, Breaking Bad is completely wrong: A teacher in the USA will never have to turn to crime to pay for cancer medication.
Do you really believe the USA denies its own population universal coverage to subsidize Canadians and Europeans?
Drugs make up less than $300B (out of ~$2.5 trillion) of the total. The bulk of health care costs are paying people (nurses, doctors, etc.) and infrastructure (hospitals, labs).
https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/downloads/highlights.pdf
In addition, the research of the NIH is not part of health care costs. So, there is no way that caps on drug prices in Canada are the reason the USA cannot deliver universal coverage.
Also, medical bills are the leading cause of personal bankruptcy in the USA.
What happens to care for a chronic condition after you go bankrupt?
Americans leave the US for relatively simple procedures because our government has completely and utterly destroyed the pricing mechanisms for just about everything medical. You fly into Mexico and your procedure can be done at market prices (cheap). Also, without onerous layers of bureaucracy and administration, typically just cash.
Again, no one with any serious problem is flying out of the US for help. Note that the wikipedia article linked above makes this point clear.
Even cosmetic surgery costs are deranged here, despite typically being cash jobs. Every medical interaction is a spin on the billion dollar jury award lottery wheel, and the malpractice insurance costs (included in your bill) reflect that.
"You fly into Mexico and your procedure can be done at market prices (cheap)."
Mexico has universal health care coverage.
http://www.ibtimes.co.uk/mexico-universal-health-care-insurance-lancet-decade-374135
"Again, no one with any serious problem is flying out of the US for help."
Not to Canada? Why not? Are Canadian hospitals worse than those in the USA?
It seems you focus on acute medical help. People tend to die from chronic conditions. You cannot treat chronic conditions by flying to another country.
http://www.cdc.gov/chronicdisease/overview/
http://www.who.int/mediacentre/factsheets/fs310/en/
I notice that you do not come up with any data to support your claims. I still am waiting for statistics or studies that show basic income does work like your dogma requires. I am not holding my breath.
"So this is your Utopian vision, a world burdened by taxes and redistribution but still just barely surviving."
Your vision of a tax-free society is utopian. Taxes are the hard reality of human history. And FYI, the Western world outside the USA is way above "barely surviving". So much for your economic insights.
"Or is it maximizing the parasitic consumption to just below the point where the host dies?"
I know of no country that is anywhere near this point. Please come up with an example supported with evidence.
"Or is it enslaving foreigners in the colonies in order to fund the handouts being provided to the citizen parasites?"
That was a thing of the past, as far as Europe is concerned. The welfare state was built after the colonies were lost.
"Or is it spending nothing on your own national defense (and instead relying on the US defense umbrella) in order to play Santa Claus with extra welfare goodies and then brag about how magnanimous you are?"
We do spend less on defense than the USA. Actually, every country in the world spends less on defense than the USA. The USA does not look kindly upon countries that try to match its defense spending. The US does a lot to prevent anyone in the world from building up a capacity to fend off the US armed forces. So it seems a little pointless to try to do so.
The NATO deal was that Europe would not build up a military capacity to defend itself in return for protection by the USA.
Winter, not all citizens of the USA get the health care they need. You see, health care costs money. It's a resource. Consumption of a resource has a cost. We do not steal from every citizen in the form of increased taxes to pay that cost. …well, we did not, at least, until Barack Obama came along.
What's the difference between this and giving every citizen housing and food and the other resources needed to stay alive? None. So why not go whole hog, comrade, and institute Communism? From each according to his abilities, to each according to his needs, don't you know.
The end of that road is the gulag, and the mass grave.
Oh, and unlike your European utopias, the NIH is not the sole funder of medical research in the US. As for the health care market in Mexico being cheaper than the US, that they have socialized medicine has exactly zero to do with it.
"We do not steal from every citizen in the form of increased taxes to pay that cost. …well, we did not, at least, until Barack Obama came along."
Then do not whine and twist words and simply admit that you do not want universal coverage. That you see people having no access to health care as a feature of a free market.
"What's the difference between this and giving every citizen housing and food and the other resources needed to stay alive?"
A basic income? Most European countries have such kinds of safety nets. Social housing developments are indeed common. But you have to start somewhere.
There seems to be a very long road between universal health care and the gulags. We have yet to see any kind of a ghost of a gulag in the UK. Meanwhile, more people die early due to lack of medical care in the USA than are murdered in UK gulags.
What's the difference between this and giving every citizen housing and food and the other resources needed to stay alive? None.
Hence — basic income.
The fact is that when you consider the transaction costs of determining who is eligible for government assistance and who is a "parasite", as in the current U.S. welfare system, it is cheaper to simply hand out enough money to survive on to everyone. This is why even some libertarians favor a basic income system (replacing current welfare systems).
Conservatives and libertarians would assert that it would be even cheaper to eliminate government entitlement programs altogether. But I'm not so sure that's the case. With machines replacing human jobs at grand scale — even thinky jobs that we thought only humans could do — there are going to be an awful lot of "parasites" in society when hardworking folks find themselves laid off. So the cost of providing everyone a basic income may yet be less than the costs incurred by the civil unrest brought about by millions of the recently jobless.
"Then do not whine and twist words and simply admit that you do not want universal coverage."
When did I ever give you the idea that I considered universal, government-run health care to be in the least bit desirable? How could I have possibly led you that far astray?
I am not whining or twisting words. I will come right out and say what I have said all along: Government-run, universal health care is antithetical to the idea of a free market, and just one step down the road to the gulag. Not only do I not want it, I think it is profoundly evil.
"So the cost of providing everyone a basic income may yet be less than the costs incurred by the civil unrest brought about by millions of the recently jobless."
Fomented, no doubt, by SEIU and Al Sharpton and the rest of the hard-left Communist front crowd.
Here's a hint: It is fundamentally wrong to take from those who produce and give to those who do not. Regardless of the putative cost savings. Morality is not for sale.
"Government-run, universal health care is antithetical to the idea of a free market, and just one step down the road to the gulag. Not only do I not want it, I think it's is profoundly evil."
So, you want to let people die due to lack of care now to prevent them from being killed in a possible gulag later?
I have problems seeing the logic of your reasoning.
>So, you want to let people die due to lack of care now to prevent them from being killed in a possible gulag later?
Supposing this is what Jay wanted, it would be a pretty good trade. The lethality of governments is very high.
But Jay doesn't want poor people to die. What he wants is for the government to be barred from coercing others to feed and house them, which is not the same thing. He believes (and I agree) that people who care strongly about the condition of the poor by pooling their own money to solve the problem.
I want the world's economies to work well enough that anyone who wants to work can do so, and can earn their own living instead of depending on the largesse of a benevolent government. The less of a drag that government imposes on the economy by taking its cut straight off the top, the better the economy performs, the more people can be put to work, and the greater will be the sum of human happiness and dignity.
Those that choose not to work must accept the consequences of that choice.
Yes, there are those who genuinely cannot support themselves. That's what charity is for.
"I want the world's economies to work well enough that anyone who wants to work can do so, and can earn their own living instead of depending on the largesse of a benevolent government."
That is a nice Utopia to live for. Meanwhile, quite a lot of people are going to die needlessly while the world waits for its arrival in the (very) distant future.
> But Jay doesn't want poor people to die. What he wants is for the government to be barred from coercing others to feed and house them, which is not the same thing. He believes (and I agree) that people who care strongly about the condition of the poor by pooling their own money to solve the problem.
And yet you who so freely talk about the "track record" of "collectivists" don't seem bothered by the fact that this has never actually worked. Oh, sure, private charities help some of the poor people, some of the time. But if that's the only play in your book… well, at least the "collectivists" occasionally try a new variation on the welfare state.
"What he wants is for the government to be barred from coercing others to feed and house them, which is not the same thing. He believes (and I agree) that people who care strongly about the condition of the poor by pooling their own money to solve the problem."
As Random832 already remarked, this has never worked. You are putting a lot of lives on the line to prevent a very hypothetical threat, because these high-lethality states were found in only a few (3-4) cases, around some pretty awful (regular and civil) wars. They had never been seen before, and there is little indication that they will ever repeat.
On the other hand, the UK has had an NHS since 1948 and has since then not seen any gulags or mass murders by its government. The UK has had taxes since before the Romans took over. You must remind me again about the masses of gulags these taxing governments have installed.
This whole gulag mass murdering thing is just a bogeyman to scare people into accepting a miserable life and death in the name of "Free Market" economics.
The whole developed world has had universal health care coverage for decades without the North Korean gulags and work camps. Only the USA is too scared and dogmatic (or incompetent?) to organize that.
"He believes (and I agree) that people who care strongly about the condition of the poor [should act] by pooling their own money to solve the problem."
(I think that's what Eric meant.)
It's the same thing as leftist billionaires who want to raise everyone's taxes…but won't put their money where their mouths are. Drives me nuts.
I'll believe Warren Buffett is serious about reducing the deficit when he writes a 10-figure check to the US Treasury.
Sounds a lot like the old patronage system in classical Roman times. It did not give much stability, nor were people free.
The "poor" in our country are literally dying of obesity-related diseases, not starving. The list of government handouts here (vote-bribes) is staggeringly long; and does includes extensive indigent medical care, even including such things as birth control, prenatal care, and in-hospital birthing. That is why pregnant women are flocking here from other countries.
You are arguing that we need a new class of vote-bribe called "basic income" so that we can further the addiction of our parasites and make them even more dependent, obese, and useless. Do you have any idea how insane that sounds?
> The "poor" in our country are literally dying of obesity-related diseases, not starving.
you keep repeating that, as if you believe it's proof that 'your poor' are actually well-paid and well-fed.
http://esr.ibiblio.org/?p=6172&cpage=1#comment-1072074
@ kn – "you keep repeating that, as if you believe it's proof that "your poor' are actually well-paid and well-fed."
They are compared to most other countries on the planet. That's why many millions of them have been entering the US illegally from across the border with Mexico over the past few decades.
Your side is the one that's constantly using misleading language. "Universal coverage" smuggles two false premises in one term: That "coverage", rather than provision of service, is the desired end result; and that government-run systems actually provide everyone the service that they want or need.
That the ObamaCare reforms started off by clamping down on some of the most effective means the US had for actually reducing the cost of health care, such as the ability to buy over-the-counter medications with HSA funds, demonstrates blatantly that nobody on the left honestly cares about the actual medical services involved. The only metric they throw around is "coverage", even when the premium price of that coverage is drastically higher than the premiums plus out-of-pocket cap for a plan that actually provides insurance.
Furthermore, every government-run medical system faceplants on the iron fact that demand for $FOO is asymptotically infinite, and that spending more money on more bureaucracy has a surprising tendency to reduce the money spent on actual care. Britain's NHS, the poster child of smugness, routinely turns what should be outpatient surgeries into amputations because of unconscionable delays in treatment, and even Labour is saying that it's going to need carefully-undefined "reform". The VA, held up as the ideal model for medicine in the United States, is imploding even worse.
> They are compared to most other countries on the planet.
Ah. I thought you brought up the obesity to suggest that the "poor" are in fact quite wealthy and can afford to buy unlimited amounts of food etc. My bad.
I now understand you actually meant that they don't have a problem because there exist other people who are worse off, and that they are not starving but suffering from a form of malnutrition.
Can one show that $\sum_{n=1}^N\frac{1}{n} -\log N - \gamma \leqslant \frac{1}{2N}$ without using the Euler-Maclaurin formula?
I would like to prove that $$ \sum_{n=1}^N\frac{1}{n} -\log N - \gamma \leqslant \frac{1}{2N} $$ without using the Euler-Maclaurin summation formula. The motivation for this is that I have come very close to doing so (see the answer provided below) but annoyingly have not actually proved the above.
Some may ask why I don't just use the formula. I'm writing a set of analytic number theory notes for my own use and it seems an unwieldy result to introduce and prove, given that the above inequality is all I need, and given that I have gotten so close without using Euler-Maclaurin!
sequences-and-series inequality harmonic-numbers
Sputnik
Let $$\gamma_n = \sum_{k=1}^n \frac{1}{k} - \log n.$$ Our goal is to show that $$\gamma_n - \lim_{m \to \infty} \gamma_m \leq \frac{1}{2n}.$$ It is enough to show that, for $n<m$, we have $$\gamma_n - \gamma_m \leq \frac{1}{2n}.$$ This has the advantage of dealing solely with finite quantities.
Now, $$\gamma_n - \gamma_m = \int_{n}^m \frac{dt}{t} - \sum_{k=n+1}^m \frac{1}{k} =\sum_{j=n}^{m-1} \int_{j}^{j+1} \left( \frac{1}{t} - \frac{1}{j+1} \right) \cdot dt .$$
At this point, if I were at a chalkboard rather than a keyboard, I would draw a picture. Draw the hyperbola $y=1/x$ and mark off the interval between $x=n$ and $x=m$. Divide this into $m-n$ vertical bars of width $1$. Each bar stretches up to touch the hyperbola at its right corner. There is a little wedge, bounded by $x=j$, $y=1/(j+1)$ and $y=1/x$. We are adding up the area of each of these wedges.[1]
Because $y=1/x$ is convex, the area of this wedge is less than that of the right triangle with vertices at $(j,1/(j+1))$, $(j+1, 1/(j+1))$ and $(j,1/j)$. This triangle has base $1$ and height $1/j - 1/(j+1)$, so its area is $(1/2) (1/j - 1/(j+1))$. So the quantity of interest is $$\leq \sum_{j=n}^{m-1} \frac{1}{2} \left( \frac{1}{j} - \frac{1}{j+1} \right) = \frac{1}{2} \left( \frac{1}{n} - \frac{1}{m} \right) \leq \frac{1}{2n}.$$
Of course, this is just a standard proof of Euler-Maclaurin summation, but it is a lot more geometric and easy to follow in this special case.
[1] By the way, since this area is positive, we also get the corollary that $\gamma_n - \gamma_m > 0$, so $\gamma_n - \gamma >0$, another useful bound.
David E Speyer
(+1) Just to point out a typo: In "Draw the hyperbola y=1/x and mark off the interval between x=n and x=n.", the second n should be an m. – John Bentin Jun 9 '11 at 13:52
It is really annoying. This is exactly the kind of geometric proof I went for, using areas, and I always failed! Thanks for showing how it's done +1. – Sputnik Jun 18 '11 at 10:20
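A quick numerical sanity check of the two bounds above (an editorial addition, not from the original thread; the value of $\gamma$ below is simply the standard Euler–Mascheroni constant, hard-coded):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant (standard value)

for n in (1, 10, 100, 10_000):
    # H_n computed with compensated summation to keep rounding error tiny
    H_n = math.fsum(1.0 / k for k in range(1, n + 1))
    diff = H_n - math.log(n) - GAMMA
    # the answer shows 0 < gamma_n - gamma <= 1/(2n)
    assert 0.0 < diff <= 1.0 / (2 * n), (n, diff)
    print(f"n={n:>6}: H_n - log n - gamma = {diff:.6e} <= 1/(2n) = {1 / (2 * n):.6e}")
```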
What follows is a variant of the method suggested by Fahad Sperinck, which almost gave the desired bound. Although we obtain a pretty short proof of the inequality, I think that the "right" proof is the one in the post by David Speyer. (A proof based on geometry is "right," as is a combinatorial proof.)
Let us start as Fahad Sperinck did, from $$ \int_n^{n+1} \frac{x-[x]}{x^2}\: dx = \log\Big(\frac{n+1}{n}\Big) - \frac{1}{n+1} < \frac{1}{n} - \frac{1}{n+1} -\frac{1}{2n^2} +\frac{1}{3n^3}. $$
Ultimately, we will be summing from $N$ to infinity. If we keep this fact in mind, the chunk $$ \frac{1}{n}-\frac{1}{n+1} $$ sums beautifully to $1/N$, and should be left as is. If we could show that the part that is taken away, namely $$\frac{1}{2n^2}-\frac{1}{3n^3}$$ is bigger than $$\frac{1}{2}\left(\frac{1}{n}-\frac{1}{n+1}\right),$$ we would be finished.
Now I will do some unofficial scribbling, don't look. I want to show that $1/2n^2-1/3n^3 \ge 1/2(n)(n+1)$, so I want to show that $(3n-2)/6n^3\ge 1/2n(n+1)$, so I want to show that $(3n-2)/3n^2 \ge 1/(n+1)$, so I want to show that $(3n-2)(n+1) \ge 3n^2$, and this is clearly true if $n \ge 2$, just multiply out the stuff on the left.
Now if I had the energy I would hide my tracks, and have the desired inequality drop out as if by magic.
Comment: Somehow, one acquires the habit of thinking of $n^2$ and $1/n^2$ as "nice" and of $n(n+1)$ and $1/n(n+1)$ as not so nice. In many ways, the opposite is true. Certainly that is the case from the combinatorial point of view.
The calculations in the post were fine, the problem was that of giving away a tiny bit too much. That was, maybe, because the strategy was directed at getting to something that looks like $1/n^2$, which was viewed as tractable and desirable. But $1/n(n+1)$, aka $1/n-1/(n+1)$, arises naturally in the problem, and is much more tractable.
André Nicolas
Nice exposition! I am a big fan of Penn and Teller's videos where they do magic tricks while showing you how they are done, and this has the same feel. – David E Speyer Jun 9 '11 at 14:41
The beauty, clarity and simplicity of your posts are such an enrichment of this site. Many thanks for all the time and effort you invest in your contributions. – t.b. Aug 10 '11 at 14:05
One can check that $S(N):=\sum_{n=1}^N\frac{1}{n} -\log N - \gamma = \int_N^\infty \frac{x-[x]}{x^2} \: dx$, where $[x]$ is the integer part of $x$. Moreover $$ \int_n^{n+1} \frac{x-[x]}{x^2}\: dx = \log\Big(\frac{n+1}{n}\Big) - \frac{1}{n+1} < \frac{1}{n} - \frac{1}{n+1} -\frac{1}{2n^2} +\frac{1}{3n^3}, \qquad (1) $$ by the Taylor series for $\log(1+x)$. But we have that $$ n(n+1)(3n-1) = 3n^3 + 2n^2 -n > 3n^3 $$ so $$ \frac{1}{n(n+1)} < \frac{3n-1}{3n^3} = \frac{1}{n^2} - \frac{1}{3n^3}. $$ Therefore, from equation $(1)$ we find $$ S(N) < \sum_{n=N}^\infty \frac{1}{n(n+1)} -\frac{1}{2n^2} +\frac{1}{3n^3} < \sum_{n=N}^\infty \frac{1}{n^2} - \frac{1}{3n^3} -\frac{1}{2n^2} +\frac{1}{3n^3}, $$ and so finally, $$ S(N) < \frac{1}{2}\sum_{n=N}^\infty \frac{1}{n^2} < \frac{1}{2(N-1)}, $$ for all $N \in \mathbb{N}$, by a standard approximation for $\sum \frac{1}{n^2}$.
By the way, does the difference between $1/2(N-1)$ and $1/2N$ really matter? Unless you are heading for hard bounds in the end, I would guess that $1/2N + O(1/N^2)$ is good enough for whatever you want, and you already have that. – David E Speyer Jun 9 '11 at 12:11
@David: I'll admit that it doesn't really matter, but it is just nicer to be able to write that something is $O(\frac{1}{N})$ with a simple implied constant like $\frac{1}{2}$, which is actually the best possible constant as well. I guess it was more a matter of elegance for me! – Sputnik Jun 18 '11 at 10:18
Efficient learning with robust gradient descent
Matthew J. Holland
Kazushi Ikeda
Special Issue of the ECML PKDD 2019 Journal Track
Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many observations in order to generalize well. To achieve better performance under less stringent requirements, we introduce a procedure which constructs a robust approximation of the risk gradient for use in an iterative learning routine. Using high-probability bounds on the excess risk of this algorithm, we show that our update does not deviate far from the ideal gradient-based update. Empirical tests using both controlled simulations and real-world benchmark data show that in diverse settings, the proposed procedure can learn more efficiently, using fewer resources (iterations and observations) while generalizing better.
Keywords: Robust learning · Stochastic optimization · Statistical learning theory
Editors: Karsten Borgwardt, Po-Ling Loh, Evimaria Terzi, Antti Ukkonen.
This work was partially supported by the Grant-in-Aid for JSPS Research Fellows.
Our generic data shall be denoted by \({\varvec{}}{z}\in {\mathcal {Z}}\). Let \(\mu \) denote a probability measure on \({\mathcal {Z}}\), equipped with an appropriate \(\sigma \)-field. Data samples shall be assumed independent and identically distributed (iid), written \({\varvec{}}{z}_{1},\ldots ,{\varvec{}}{z}_{n}\). We shall work with loss function \(l:{\mathbb {R}}^{d} \times {\mathcal {Z}}\rightarrow {\mathbb {R}}_{+}\) throughout, with \(l(\cdot ;{\varvec{}}{z})\) assumed differentiable for each \({\varvec{}}{z}\in {\mathcal {Z}}\). Write \({{\,\mathrm{{\mathbf {P}}}\,}}\) for a generic probability measure, most commonly the product measure induced by the sample. Let \(f:{\mathcal {Z}}\rightarrow {\mathbb {R}}\) be a measurable function. Expectation is written \({{\,\mathrm{{\mathbf {E}}}\,}}_{\mu }f({\varvec{}}{z}) :=\int f \, d\mu \), with variance \({{\,\mathrm{var}\,}}_{\mu }f({\varvec{}}{z})\) defined analogously. For d-dimensional Euclidean space \({\mathbb {R}}^{d}\), the usual (\(\ell _{2}\)) norm shall be denoted \(\Vert \cdot \Vert \) unless otherwise specified. For function F on \({\mathbb {R}}^{d}\) with partial derivatives defined, write the gradient as \(F^{\prime }({\varvec{}}{u}) :=(F^{\prime }_{1}({\varvec{}}{u}),\ldots ,F^{\prime }_{d}({\varvec{}}{u}))\) where for short, we write \(F^{\prime }_{j}({\varvec{}}{u}) :=\partial F({\varvec{}}{u})/\partial u_{j}\). For integer k, write \([k] :=\{1,\ldots ,k\}\) for all the positive integers from 1 to k. Risk shall be denoted \(R({\varvec{}}{w}) :={{\,\mathrm{{\mathbf {E}}}\,}}_{\mu }l({\varvec{}}{w};{\varvec{}}{z})\), and its gradient \({\varvec{}}{g}({\varvec{}}{w}) :=R^{\prime }({\varvec{}}{w})\). We make a running assumption that we can differentiate under the integral sign in each coordinate (Ash and Doleans-Dade 2000; Talvila 2001), namely that
$$\begin{aligned} {\varvec{}}{g}({\varvec{}}{w}) = \left( {{\,\mathrm{{\mathbf {E}}}\,}}_{\mu }\frac{\partial l({\varvec{}}{w};{\varvec{}}{z})}{\partial w_{1}}, \ldots , {{\,\mathrm{{\mathbf {E}}}\,}}_{\mu }\frac{\partial l({\varvec{}}{w};{\varvec{}}{z})}{\partial w_{d}}\right) . \end{aligned}$$
Smoothness and convexity of functions shall also be utilized. For convex function F on convex set \({\mathcal {W}}\), say that F is \(\lambda \)-Lipschitz if, for all \({\varvec{}}{w}_{1},{\varvec{}}{w}_{2} \in {\mathcal {W}}\) we have \(|F({\varvec{}}{w}_{1})-F({\varvec{}}{w}_{2})| \le \lambda \Vert {\varvec{}}{w}_{1}-{\varvec{}}{w}_{2}\Vert \). We say that F is \(\lambda \)-smooth if \(F^{\prime }\) is \(\lambda \)-Lipschitz. Finally, F is strongly convex with parameter \(\kappa > 0\) if for all \({\varvec{}}{w}_{1},{\varvec{}}{w}_{2} \in {\mathcal {W}}\),
$$\begin{aligned} F({\varvec{}}{w}_{1})-F({\varvec{}}{w}_{2}) \ge \langle F^{\prime }({\varvec{}}{w}_{2}), {\varvec{}}{w}_{1}-{\varvec{}}{w}_{2} \rangle + \frac{\kappa }{2}\Vert {\varvec{}}{w}_{1}-{\varvec{}}{w}_{2}\Vert ^{2} \end{aligned}$$
for any norm \(\Vert \cdot \Vert \) on \({\mathcal {W}}\), though we shall be assuming \({\mathcal {W}}\subseteq {\mathbb {R}}^{d}\). If there exists \({\varvec{}}{w}^{*}\in {\mathcal {W}}\) such that \(F^{\prime }({\varvec{}}{w}^{*})=0\), then it follows that \({\varvec{}}{w}^{*}\) is the unique minimum of F on \({\mathcal {W}}\). Let \(f:{\mathbb {R}}^{d} \rightarrow {\mathbb {R}}\) be a continuously differentiable, convex, \(\lambda \)-smooth function. The following basic facts will be useful: for any choice of \({\varvec{}}{u},{\varvec{}}{v}\in {\mathbb {R}}^{d}\), we have
$$\begin{aligned} f({\varvec{}}{u})-f({\varvec{}}{v})&\le \frac{\lambda }{2}\Vert {\varvec{}}{u}-{\varvec{}}{v}\Vert ^{2} + \langle f^{\prime }({\varvec{}}{v}), {\varvec{}}{u}-{\varvec{}}{v}\rangle \qquad (14) \end{aligned}$$
$$\begin{aligned} \frac{1}{2\lambda }\Vert f^{\prime }({\varvec{}}{u})-f^{\prime }({\varvec{}}{v})\Vert ^{2}&\le f({\varvec{}}{u})-f({\varvec{}}{v}) - \langle f^{\prime }({\varvec{}}{v}), {\varvec{}}{u}-{\varvec{}}{v}\rangle . \qquad (15) \end{aligned}$$
Proofs of these results can be found in any standard text on convex optimization, e.g. (Nesterov 2004).
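As a quick illustration (an addition to the original text, not part of it), the following snippet numerically checks (14) and (15) for the simple \(\lambda \)-smooth convex function \(f(u)=\lambda u^{2}/2\) in one dimension; for this quadratic both inequalities hold with equality, so the small tolerances only guard against floating-point rounding.

```python
import random

lam = 3.0                       # smoothness constant lambda
f = lambda u: 0.5 * lam * u * u
fp = lambda u: lam * u          # gradient f'

random.seed(0)
for _ in range(1000):
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    gap = f(u) - f(v) - fp(v) * (u - v)
    # inequality (14): f(u) - f(v) <= (lam/2)|u-v|^2 + <f'(v), u-v>
    assert gap <= 0.5 * lam * (u - v) ** 2 + 1e-9
    # inequality (15): |f'(u) - f'(v)|^2 / (2 lam) <= f(u) - f(v) - <f'(v), u-v>
    assert (fp(u) - fp(v)) ** 2 / (2.0 * lam) <= gap + 1e-9
print("inequalities (14) and (15) hold for all sampled pairs")
```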
We shall leverage a special type of M-estimator here, built using the following convenient class of functions.
Definition 13
(Function class for location estimates) Let \(\rho :{\mathbb {R}}\rightarrow [0,\infty )\) be an even function (\(\rho (u)=\rho (-u)\)) with \(\rho (0)=0\) and the following properties. Denote \(\psi (u) :=\rho ^{\prime }(u)\).
\(\rho (u) = O(u)\) as \(u \rightarrow \pm \infty \).
\(\rho (u)/(u^{2}/2) \rightarrow 1\) as \(u \rightarrow 0\).
\(\psi ^{\prime } > 0\), and for some \(C>0\), and all \(u \in {\mathbb {R}}\),
$$\begin{aligned} -\log (1-u+Cu^{2}) \le \psi (u) \le \log (1+u+Cu^{2}). \end{aligned}$$
Of particular importance in the proceeding analysis is the fact that \(\psi =\rho ^{\prime }\) is bounded, monotonically increasing and Lipschitz on \({\mathbb {R}}\), plus the upper/lower bounds which let us generalize the technique of Catoni (2012).
(Valid \(\rho \) choices) In addition to the Gudermannian function (Sect. 2 footnote), functions such as \(2(\sqrt{1+u^{2}/2}-1)\) and \(\log \cosh (u)\) are well-known examples that satisfy the desired criteria. Note that the wide/narrow functions of Catoni do not meet all these criteria, nor does the classic Huber function.
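As a sanity check (again an editorial addition), the bracketing condition on \(\psi \) from Definition 13 can be verified numerically for the examples just named; the constant \(C=1\) and the test interval \([-3,3]\) are illustrative choices, since the definition only requires that some \(C>0\) work.

```python
import math

# psi = rho' for three of the rho choices named in the text, in closed form.
psi_choices = {
    "2(sqrt(1+u^2/2)-1)": lambda u: u / math.sqrt(1.0 + u * u / 2.0),
    "log cosh(u)":        lambda u: math.tanh(u),
    "Gudermannian":       lambda u: 2.0 * math.atan(math.tanh(u / 2.0)),
}

C = 1.0  # illustrative constant; Definition 13 only asks for some C > 0
for name, psi in psi_choices.items():
    for k in range(-300, 301):
        u = k / 100.0
        lo = -math.log(1.0 - u + C * u * u)
        hi = math.log(1.0 + u + C * u * u)
        assert lo <= psi(u) <= hi, (name, u)
    print(f"{name}: psi bounds hold on [-3, 3] with C = 1")
```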
Proof of Lemma 3
For cleaner notation, write \(x_{1},\ldots ,x_{n} \in {\mathbb {R}}\) for our iid observations. Here \(\rho \) is assumed to satisfy the conditions of Definition 13. A high-probability concentration inequality follows by direct application of the specified properties of \(\rho \) and \(\psi :=\rho ^{\prime }\), following the general technique laid out by Catoni (2009, 2012). For \(u \in {\mathbb {R}}\) and \(s>0\), writing \(\psi _{s}(u) :=\psi (u/s)\), and taking expectation over the random draw of the sample,
$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}\exp \left( \sum _{i=1}^{n}\psi _{s}(x_{i}-u)\right)&\le \left( 1 + \frac{1}{s}({{\,\mathrm{{\mathbf {E}}}\,}}x-u) + \frac{C}{s^{2}}{{\,\mathrm{{\mathbf {E}}}\,}}(x^{2}+u^{2}-2xu) \right) ^{n}\\&\le \exp \left( \frac{n}{s}({{\,\mathrm{{\mathbf {E}}}\,}}x-u) + \frac{Cn}{s^{2}}({{\,\mathrm{var}\,}}x + ({{\,\mathrm{{\mathbf {E}}}\,}}x - u)^{2}) \right) . \end{aligned}$$
The inequalities above are due to an application of the upper bound on \(\psi \), and the inequality \((1+u) \le \exp (u)\). Now, letting
$$\begin{aligned} A&:=\frac{1}{n}\sum _{i=1}^{n}\psi _{s}(x_{i}-u)\\ B&:=\frac{1}{s}({{\,\mathrm{{\mathbf {E}}}\,}}x-u) + \frac{C}{s^{2}}({{\,\mathrm{var}\,}}x + ({{\,\mathrm{{\mathbf {E}}}\,}}x - u)^{2}) \end{aligned}$$
we have the bound \({{\,\mathrm{{\mathbf {E}}}\,}}\exp (nA) \le \exp (nB)\). By Markov's inequality, we then have
$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}\{A> B + \varepsilon \}&= {{\,\mathrm{{\mathbf {P}}}\,}}\{\exp (nA) > \exp (nB+n\varepsilon )\}\\&\le \frac{{{\,\mathrm{{\mathbf {E}}}\,}}\exp (nA)}{\exp (nB+n\varepsilon )}\\&\le \exp (-n\varepsilon ). \end{aligned}$$
Setting \(\varepsilon =\log (\delta ^{-1})/n\) for confidence level \(\delta \in (0,1)\), and for convenience writing
$$\begin{aligned} b(u) :={{\,\mathrm{{\mathbf {E}}}\,}}x - u + \frac{C}{s}({{\,\mathrm{var}\,}}x + ({{\,\mathrm{{\mathbf {E}}}\,}}x-u)^{2}), \end{aligned}$$
we have with probability no less than \(1-\delta \) that
$$\begin{aligned} \frac{s}{n}\sum _{i=1}^{n} \psi _{s}(x_{i}-u) \le b(u) + \frac{s\log (\delta ^{-1})}{n}. \end{aligned}$$
The right hand side of this inequality, as a function of u, is a polynomial of order 2, and if
$$\begin{aligned} 1 \ge D :=4\left( \frac{C^2{{\,\mathrm{var}\,}}x}{s^2} + \frac{C\log (\delta ^{-1})}{n}\right) , \end{aligned}$$
then this polynomial has two real solutions. In the hypothesis, we stated that the result holds "for large enough n and \(s_{j}\)." By this we mean that we require n and s to satisfy the preceding inequality (for each \(j \in [d]\) in the multi-dimensional case). The notation D is for notational simplicity. The solutions take the form
$$\begin{aligned} u = \frac{1}{2}\left( 2{{\,\mathrm{{\mathbf {E}}}\,}}x + \frac{s}{C} \pm \frac{s}{C}\left( 1-D\right) ^{1/2}\right) . \end{aligned}$$
Looking at the smallest of the solutions and noting \(D \in [0,1]\), this can be simplified as
$$\begin{aligned} u_{+}&:={{\,\mathrm{{\mathbf {E}}}\,}}x + \frac{s}{2C}\frac{(1-\sqrt{1-D})(1+\sqrt{1-D})}{1+\sqrt{1-D}}\nonumber \\&= {{\,\mathrm{{\mathbf {E}}}\,}}x + \frac{s}{2C}\frac{D}{1+\sqrt{1-D}}\nonumber \\&\le {{\,\mathrm{{\mathbf {E}}}\,}}x + sD/2C, \end{aligned}$$
where the last inequality is via taking the \(\sqrt{1-D}\) term in the previous denominator as small as possible. Now, writing \({\widehat{x}}\) as the M-estimate using s and \(\rho \) as in (4), note that \({\widehat{x}}\) equivalently satisfies \(\sum _{i=1}^{n}\psi _{s}({\widehat{x}}-x_{i})=0\). Using (16), we have
$$\begin{aligned} \frac{s}{n}\sum _{i=1}^{n}\psi _{s}(x_{i}-u_{+}) \le b(u_{+}) + \frac{s\log (\delta ^{-1})}{n} = 0, \end{aligned}$$
and since the left-hand side of (16) is a monotonically decreasing function of u, we have immediately that \({\widehat{x}}\le u_{+}\) on the event that (16) holds, which has probability at least \(1-\delta \). Then leveraging (17), it follows that on the same event,
$$\begin{aligned} {\widehat{x}}-{{\,\mathrm{{\mathbf {E}}}\,}}x \le sD/2C. \end{aligned}$$
An analogous argument provides a \(1-\delta \) event on which \({\widehat{x}}-{{\,\mathrm{{\mathbf {E}}}\,}}x \ge -sD/2C\), and thus using a union bound, one has that
$$\begin{aligned} |{\widehat{x}}-{{\,\mathrm{{\mathbf {E}}}\,}}x| \le 2\left( \frac{C {{\,\mathrm{var}\,}}x}{s} + \frac{s\log (\delta ^{-1})}{n} \right) \end{aligned}$$
holds with probability no less than \(1-2\delta \). Setting the \(x_{i}\) to \(l_{j}^{\prime }({\varvec{}}{w};{\varvec{}}{z}_{i})\) for \(j \in [d]\) and some \({\varvec{}}{w}\in {\mathbb {R}}^{d}\), \(i \in [n]\), and \({\widehat{x}}\) to \({\widehat{\theta }}_{j}\) corresponds to the special case considered in this Lemma. Dividing \(\delta \) by two yields the \((1-\delta )\) result. \(\square \)
Proof of Lemma 5
For any fixed \({\varvec{}}{w}\) and \(j \in [d]\), note that
$$\begin{aligned} |{\widehat{\theta }}_{j} - g_{j}({\varvec{}}{w})|&\le \varepsilon _{j}\nonumber \\&:=2\left( \frac{C{{\,\mathrm{var}\,}}_{\mu }l_{j}^{\prime }({\varvec{}}{w};{\varvec{}}{z})}{s_{j}} + \frac{s_{j}\log (2\delta ^{-1})}{n} \right) \nonumber \\&= 2 \sqrt{\frac{\log (2\delta ^{-1})}{n}} \left( \frac{C{{\,\mathrm{var}\,}}_{\mu }l_{j}^{\prime }({\varvec{}}{w};{\varvec{}}{z})}{{\widehat{\sigma }}_{j}} + {\widehat{\sigma }}_{j} \right) \end{aligned}$$
$$\begin{aligned}&\le \varepsilon ^{*} :=2 \sqrt{\frac{V\log (2\delta ^{-1})}{n}} c_{0} \end{aligned}$$
holds with probability no less than \(1-\delta \). The first inequality holds via direct application of Lemma 3, which holds under (11) and using \(\rho \) which satisfies (8). The equality follows immediately from (6). The final inequality follows from A4 and (10), along with the definition of \(c_{0}\).
Making the dependence on \({\varvec{}}{w}\) explicit with \({\widehat{\theta }}_{j} = {\widehat{\theta }}_{j}({\varvec{}}{w})\), an important question to ask is how sensitive this estimator is to a change in \({\varvec{}}{w}\). Say we perturb \({\varvec{}}{w}\) to \(\widetilde{{\varvec{}}{w}}\), so that \(\Vert {\varvec{}}{w}- \widetilde{{\varvec{}}{w}}\Vert = a > 0\). By A2, for any sample we have
$$\begin{aligned} \Vert l^{\prime }({\varvec{}}{w};{\varvec{}}{z}_{i}) - l^{\prime }(\widetilde{{\varvec{}}{w}};{\varvec{}}{z}_{i})\Vert \le \lambda \Vert {\varvec{}}{w}- \widetilde{{\varvec{}}{w}}\Vert = \lambda a, \quad i \in [n] \end{aligned}$$
which immediately implies \(|l^{\prime }_{j}({\varvec{}}{w};{\varvec{}}{z}_{i}) - l^{\prime }_{j}(\widetilde{{\varvec{}}{w}};{\varvec{}}{z}_{i})| \le \lambda a\) for all \(j \in [d]\) as well. That is, the maximum that any data point can move in either direction is \(\lambda a\). Given a sample of \(n \ge 1\) points, consider the impact that the a-sized shift from \({\varvec{}}{w}\) to \(\widetilde{{\varvec{}}{w}}\) has on \({\widehat{\theta }}_{j}({\varvec{}}{w})\) shifting to \({\widehat{\theta }}_{j}(\widetilde{{\varvec{}}{w}})\). Without loss of generality, say all the points shifted to the right by the maximum amount, that is, \(l^{\prime }_{j}(\widetilde{{\varvec{}}{w}};{\varvec{}}{z}_{i})-l^{\prime }_{j}({\varvec{}}{w};{\varvec{}}{z}_{i}) = \lambda a\) for all \(i \in [n]\). Then note that
$$\begin{aligned} \frac{s}{n}\sum _{i=1}^{n}\psi _{s}\left( {\widehat{\theta }}_{j}({\varvec{}}{w})+ \lambda a - l^{\prime }_{j}(\widetilde{{\varvec{}}{w}};{\varvec{}}{z}_{i})\right)&= \frac{s}{n}\sum _{i=1}^{n}\psi _{s}\left( {\widehat{\theta }}_{j}({\varvec{}}{w})+ \lambda a - (l^{\prime }_{j}({\varvec{}}{w};{\varvec{}}{z}_{i})+\lambda a)\right) \\&= \frac{s}{n}\sum _{i=1}^{n}\psi _{s}\left( {\widehat{\theta }}_{j}({\varvec{}}{w}) - l^{\prime }_{j}({\varvec{}}{w};{\varvec{}}{z}_{i})\right) \\&= 0 \end{aligned}$$
by definition of \({\widehat{\theta }}_{j}({\varvec{}}{w})\). It thus follows that \({\widehat{\theta }}_{j}(\widetilde{{\varvec{}}{w}})={\widehat{\theta }}_{j}({\varvec{}}{w})+\lambda a\). Next, modify our assumption on the data to allow for at least one point to have moved to the right less than the maximum amount, namely that \(l^{\prime }_{j}(\widetilde{{\varvec{}}{w}};{\varvec{}}{z}_{i})-l^{\prime }_{j}({\varvec{}}{w};{\varvec{}}{z}_{i}) \le \lambda a\) for some i. In this case, by monotonicity of \(\psi \), it follows that \({\widehat{\theta }}_{j}(\widetilde{{\varvec{}}{w}}) \le {\widehat{\theta }}_{j}({\varvec{}}{w})+\lambda a\). An identical argument can be made for movement in the negative direction, implying that no matter how the points are distributed after the perturbation from \({\varvec{}}{w}\) to \(\widetilde{{\varvec{}}{w}}\), the change from \({\widehat{\theta }}_{j}({\varvec{}}{w})\) to \({\widehat{\theta }}_{j}(\widetilde{{\varvec{}}{w}})\) can be no more than \(\lambda a\). That is, smoothness of the loss function immediately implies a Lipschitz property of the estimator:
$$\begin{aligned} |{\widehat{\theta }}_{j}({\varvec{}}{w})-{\widehat{\theta }}_{j}(\widetilde{{\varvec{}}{w}})| \le \lambda \Vert {\varvec{}}{w}- \widetilde{{\varvec{}}{w}}\Vert . \end{aligned}$$
Considering the vector of estimates \(\widehat{{\varvec{}}{\theta }}({\varvec{}}{w}) :=({\widehat{\theta }}_{1}({\varvec{}}{w}),\ldots ,{\widehat{\theta }}_{d}({\varvec{}}{w}))\), we then have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w})-\widehat{{\varvec{}}{\theta }}(\widetilde{{\varvec{}}{w}})\Vert \le \sqrt{d}\lambda \Vert {\varvec{}}{w}- \widetilde{{\varvec{}}{w}}\Vert . \end{aligned}$$
This will be useful for proving uniform bounds on the estimation error shortly.
First, let's use these one-dimensional results for statements about the vector estimator of interest. In d dimensions, using \(\widehat{{\varvec{}}{\theta }}({\varvec{}}{w})\) just defined for any pre-fixed \({\varvec{}}{w}\), then for any \(\varepsilon > 0\) we have
$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w})-{\varvec{}}{g}({\varvec{}}{w})\Vert> \varepsilon \right\}&= {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w})-{\varvec{}}{g}({\varvec{}}{w})\Vert ^{2}> \varepsilon ^{2} \right\} \\&\le \sum _{j=1}^{d} {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ |{\widehat{\theta }}_{j}({\varvec{}}{w}) - {\varvec{}}{g}_{j}({\varvec{}}{w})| > \frac{\varepsilon }{\sqrt{d}} \right\} . \end{aligned}$$
Using the notation of \(\varepsilon _{j}\) and \(\varepsilon ^{*}\) from (19), filling in \(\varepsilon = \sqrt{d}\varepsilon ^{*}\), we thus have
$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w})-{\varvec{}}{g}({\varvec{}}{w})\Vert> \sqrt{d}\varepsilon ^{*} \right\}&\le \sum _{j=1}^{d}{{\,\mathrm{{\mathbf {P}}}\,}}\left\{ |{\widehat{\theta }}_{j}({\varvec{}}{w}) - g_{j}({\varvec{}}{w})|> \varepsilon ^{*} \right\} \\&\le \sum _{j=1}^{d}{{\,\mathrm{{\mathbf {P}}}\,}}\left\{ |{\widehat{\theta }}_{j}({\varvec{}}{w}) - g_{j}({\varvec{}}{w})| > \varepsilon _{j} \right\} \\&\le d\delta . \end{aligned}$$
The second inequality is because \(\varepsilon _{j} \le \varepsilon ^{*}\) for all \(j \in [d]\). It follows that the event
$$\begin{aligned} {\mathcal {E}}({\varvec{}}{w}) :=\left\{ \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w})-{\varvec{}}{g}({\varvec{}}{w})\Vert > 2 \sqrt{\frac{dV\log (2d\delta ^{-1})}{n}} c_{0} \right\} \end{aligned}$$
has probability \({{\,\mathrm{{\mathbf {P}}}\,}}{\mathcal {E}}({\varvec{}}{w}) \le \delta \). In practice, however, \(\widehat{{\varvec{}}{w}}_{(t)}\) for all \(t > 0\) will be random, and depend on the sample. We seek uniform bounds using a covering number argument. By A1, \({\mathcal {W}}\) is closed and bounded, and thus compact, and it requires no more than \(N_{\epsilon } :=\lfloor (3\varDelta /2\epsilon )^{d} \rfloor \) balls of \(\epsilon \) radius to cover \({\mathcal {W}}\), where \(\varDelta \) is the diameter of \({\mathcal {W}}\). Write the centers of these \(\epsilon \) balls as \(\{\widetilde{{\varvec{}}{w}}_{1},\ldots ,\widetilde{{\varvec{}}{w}}_{N_{\epsilon }}\}\). Given \({\varvec{}}{w}\in {\mathcal {W}}\), denote by \(\widetilde{{\varvec{}}{w}}= \widetilde{{\varvec{}}{w}}({\varvec{}}{w})\) the center closest to \({\varvec{}}{w}\), which satisfies \(\Vert {\varvec{}}{w}- \widetilde{{\varvec{}}{w}}\Vert \le \epsilon \). Estimation error is controllable using the following new error terms:
$$\begin{aligned} \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w}) - {\varvec{}}{g}({\varvec{}}{w})\Vert \le \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w})-\widehat{{\varvec{}}{\theta }}(\widetilde{{\varvec{}}{w}})\Vert + \Vert {\varvec{}}{g}({\varvec{}}{w}) - {\varvec{}}{g}(\widetilde{{\varvec{}}{w}})\Vert + \Vert \widehat{{\varvec{}}{\theta }}(\widetilde{{\varvec{}}{w}}) - {\varvec{}}{g}(\widetilde{{\varvec{}}{w}})\Vert . \end{aligned}$$
The goal is to be able to take the supremum over \({\varvec{}}{w}\in {\mathcal {W}}\). We bound one term at a time. The first term can be bounded, for any \({\varvec{}}{w}\in {\mathcal {W}}\), by (21) just proven. The second term can be bounded by
$$\begin{aligned} \Vert {\varvec{}}{g}({\varvec{}}{w}) - {\varvec{}}{g}(\widetilde{{\varvec{}}{w}})\Vert \le \lambda \Vert {\varvec{}}{w}- \widetilde{{\varvec{}}{w}}\Vert \end{aligned}$$
which follows immediately from A2. Finally, for the third term, fixing any \({\varvec{}}{w}\in {\mathcal {W}}\), \(\widetilde{{\varvec{}}{w}}=\widetilde{{\varvec{}}{w}}({\varvec{}}{w}) \in \{\widetilde{{\varvec{}}{w}}_{1},\ldots ,\widetilde{{\varvec{}}{w}}_{N_{\epsilon }}\}\) is also fixed, and can be bounded on the \(\delta \) event \({\mathcal {E}}(\widetilde{{\varvec{}}{w}})\) just defined. The important fact is that
$$\begin{aligned} \sup _{{\varvec{}}{w}\in {\mathcal {W}}} \left\| \widehat{{\varvec{}}{\theta }}(\widetilde{{\varvec{}}{w}}({\varvec{}}{w})) - {\varvec{}}{g}(\widetilde{{\varvec{}}{w}}({\varvec{}}{w})) \right\| = \max _{k \in [N_{\epsilon }]} \left\| \widehat{{\varvec{}}{\theta }}(\widetilde{{\varvec{}}{w}}_{k}) - {\varvec{}}{g}(\widetilde{{\varvec{}}{w}}_{k}) \right\| . \end{aligned}$$
We construct a "good event" naturally as the event in which the bad event \({\mathcal {E}}(\cdot )\) holds for no center on our \(\epsilon \)-net, namely
$$\begin{aligned} {\mathcal {E}}_{+} = \left( \bigcap _{k \in [N_{\epsilon }]} {\mathcal {E}}(\widetilde{{\varvec{}}{w}}_{k}) \right) ^{c}. \end{aligned}$$
Taking a union bound, we can say that with probability no less than \(1-\delta \), for all \({\varvec{}}{w}\in {\mathcal {W}}\), we have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{\theta }}(\widetilde{{\varvec{}}{w}}({\varvec{}}{w})) - {\varvec{}}{g}(\widetilde{{\varvec{}}{w}}({\varvec{}}{w}))\Vert \le 2 \sqrt{\frac{dV\log (2d N_{\epsilon } \delta ^{-1})}{n}} c_{0}. \end{aligned}$$
Taking the three new bounds together, we have with probability no less than \(1-\delta \) that
$$\begin{aligned} \sup _{{\varvec{}}{w}\in {\mathcal {W}}} \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w}) - {\varvec{}}{g}({\varvec{}}{w})\Vert \le \lambda \epsilon (\sqrt{d}+1) + 2 \sqrt{\frac{dV\log (2d N_{\epsilon } \delta ^{-1})}{n}} c_{0}. \end{aligned}$$
Setting \(\epsilon = 1/\sqrt{n}\) we have
$$\begin{aligned} \sup _{{\varvec{}}{w}\in {\mathcal {W}}} \Vert \widehat{{\varvec{}}{\theta }}({\varvec{}}{w}) - {\varvec{}}{g}({\varvec{}}{w})\Vert \le \frac{\lambda (\sqrt{d}+1)}{\sqrt{n}} + 2c_{0} \sqrt{\frac{dV(\log (2d\delta ^{-1}) + d\log (3\varDelta \sqrt{n}/2))}{n}}. \end{aligned}$$
Since every step of Algorithm 1 (with orthogonal projection if required) has \(\widehat{{\varvec{}}{w}}_{(t)} \in {\mathcal {W}}\), the desired result follows from this uniform confidence interval. \(\square \)
Proof of Lemma 7
Given \(\widehat{{\varvec{}}{w}}_{(t)}\), running the approximate update (3), we have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(t+1)}-{\varvec{}}{w}^{*}\Vert&= \Vert \widehat{{\varvec{}}{w}}_{(t)}-\alpha \widehat{{\varvec{}}{g}}(\widehat{{\varvec{}}{w}}_{(t)})-{\varvec{}}{w}^{*}\Vert \\&\le \Vert \widehat{{\varvec{}}{w}}_{(t)}-\alpha {\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})-{\varvec{}}{w}^{*}\Vert + \alpha \Vert \widehat{{\varvec{}}{g}}(\widehat{{\varvec{}}{w}}_{(t)})-{\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})\Vert . \end{aligned}$$
The first term looks at the distance from the target given an optimal update, using \({\varvec{}}{g}\). Using the \(\kappa \)-strong convexity of R, via Nesterov (2004, Thm. 2.1.15) it follows that
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(t)}-\alpha {\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})-{\varvec{}}{w}^{*}\Vert ^{2} \le \left( 1-\frac{2\alpha \kappa \lambda }{\kappa +\lambda }\right) \Vert \widehat{{\varvec{}}{w}}_{(t)}-{\varvec{}}{w}^{*}\Vert ^{2}. \end{aligned}$$
Writing \(\beta :=2\kappa \lambda /(\kappa +\lambda )\), the coefficient becomes \((1-\alpha \beta )\).
To control the second term simply requires unfolding the recursion. By hypothesis, we can leverage (7) to bound the statistical estimation error by \(\varepsilon \) for every step, all on the same \(1-\delta \) "good event." For notational ease, write \(a :=\sqrt{1-\alpha \beta }\). On the good event, we have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(t+1)}-{\varvec{}}{w}^{*}\Vert&\le a^{t+1}\Vert \widehat{{\varvec{}}{w}}_{(0)}-{\varvec{}}{w}^{*}\Vert + \alpha \varepsilon \left( 1+a+a^{2}+\cdots +a^{t}\right) \\&= a^{t+1}\Vert \widehat{{\varvec{}}{w}}_{(0)}-{\varvec{}}{w}^{*}\Vert + \alpha \varepsilon \frac{(1-a^{t+1})}{1-a}. \end{aligned}$$
To clean up the second summand,
$$\begin{aligned} \alpha \varepsilon \frac{(1-a^{t+1})}{1-a}&\le \frac{\alpha \varepsilon (1+a)}{(1-a)(1+a)}\\&= \frac{\alpha \varepsilon (1+\sqrt{1-\alpha \beta })}{\alpha \beta }\\&\le \frac{2\varepsilon }{\beta }. \end{aligned}$$
Taking this to the original inequality yields the desired result. \(\square \)
Proof of Theorem 8
Using strong convexity and (14), we have that
$$\begin{aligned} R(\widehat{{\varvec{}}{w}}_{(T)}) - R^{*}&\le \frac{\lambda }{2}\Vert \widehat{{\varvec{}}{w}}_{(T)} - {\varvec{}}{w}^{*}\Vert ^{2}\\&\le \lambda (1-\alpha \beta )^{T}D_{0}^{2} + \frac{4\lambda \varepsilon ^{2}}{\beta ^{2}}. \end{aligned}$$
The latter inequality holds by direct application of Lemma 7, followed by the elementary fact \((a+b)^{2} \le 2(a^{2}+b^{2})\). The particular value of \(\varepsilon \) under which Lemma 7 is valid (i.e., under which (7) holds) is given by Lemma 5. Filling in \(\varepsilon \) with this concrete setting yields the desired result. \(\square \)
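To make the overall procedure concrete, here is a minimal one-dimensional sketch of the robust gradient descent loop analyzed above (an illustration under simplifying assumptions, not the authors' reference implementation): each iteration replaces the empirical mean of the per-example gradients with an M-estimate of location, using a fixed scale \(s\) for simplicity rather than the variance-based setting in (6).

```python
import math
import random

def psi(u):
    # Gudermannian function: bounded, increasing, psi(u)/u -> 1 near zero
    return 2.0 * math.atan(math.tanh(u / 2.0))

def robust_mean(xs, s, iters=50, tol=1e-9):
    """M-estimate of location: root of (s/n) * sum_i psi((x_i - theta)/s)."""
    theta = sum(xs) / len(xs)  # initialize at the sample mean
    for _ in range(iters):
        step = s * sum(psi((x - theta) / s) for x in xs) / len(xs)
        theta += step
        if abs(step) < tol:
            break
    return theta

# Toy task: least squares, l(w; (x, y)) = (y - w x)^2 / 2, so l'(w; z) = -(y - w x) x.
random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]
data = [(x, 3.0 * x + random.gauss(0.0, 1.0)) for x in xs]
data[0] = (data[0][0], 500.0)  # one heavy-tailed corruption

w, alpha, s = 0.0, 0.25, 2.0
for t in range(100):
    grads = [-(y - w * x) * x for x, y in data]
    w = w - alpha * robust_mean(grads, s)  # update (3) with the robust estimate
print("learned w:", round(w, 3), "(target 3.0)")
```

The bounded influence function caps the contribution of the corrupted point, so the learned coefficient lands near the target, whereas the plain sample mean of the gradients would be dragged off by the single outlier.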
Proof of Lemma 11
As in the result statement, we write
$$\begin{aligned} \varSigma _{(t)} :={{\,\mathrm{{\mathbf {E}}}\,}}_{\mu }\left( l^{\prime }(\widehat{{\varvec{}}{w}}_{(t)};{\varvec{}}{z}) - {\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})\right) \left( l^{\prime }(\widehat{{\varvec{}}{w}}_{(t)};{\varvec{}}{z}) - {\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})\right) ^{T}, \quad {\varvec{}}{w}\in {\mathcal {W}}. \end{aligned}$$
Running this modified version of Algorithm 1, by minimizing the bound on the right-hand side of the inequality in Lemma 3 as a function of scale \(s_{j}\), \(j \in [d]\), plugging in the optimal scale setting to Lemma 3 yields that the estimates \({\varvec{}}{{\widehat{\theta }}}_{(t)}=({\widehat{\theta }}_{1},\ldots ,{\widehat{\theta }}_{d})\) at each step t incur an error of
$$\begin{aligned} |{\widehat{\theta }}_{j} - g_{j}(\widehat{{\varvec{}}{w}}_{(t)})| > 4 \left( \frac{C{{\,\mathrm{var}\,}}_{\mu }l^{\prime }_{j}(\widehat{{\varvec{}}{w}}_{(t)};{\varvec{}}{z})\log (2\delta ^{-1})}{n}\right) ^{1/2} \qquad (25) \end{aligned}$$
with probability no greater than \(\delta \). For clean notation, let us also denote
$$\begin{aligned} A :=4 \left( \frac{C\log (2\delta ^{-1})}{n} \right) ^{1/2}, \quad \varepsilon ^{*} :=A\sqrt{{{\,\mathrm{trace}\,}}(\varSigma _{(t)})}. \end{aligned}$$
For the vector estimates then, we have
$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ \Vert {\varvec{}}{{\widehat{\theta }}}_{(t)}-{\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})\Vert> \varepsilon ^{*} \right\}&= {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ \sum _{j=1}^{d}\frac{({\widehat{\theta }}_{j} - g_{j}(\widehat{{\varvec{}}{w}}_{(t)}))^{2}}{A^{2}}> {{\,\mathrm{trace}\,}}(\varSigma _{(t)}) \right\} \\&= {{\,\mathrm{{\mathbf {P}}}\,}}\left\{ \sum _{j=1}^{d}\left( \frac{({\widehat{\theta }}_{j} - g_{j}(\widehat{{\varvec{}}{w}}_{(t)}))^{2}}{A^{2}}-{{\,\mathrm{var}\,}}_{\mu }l_{j}^{\prime }(\widehat{{\varvec{}}{w}}_{(t)};{\varvec{}}{z})\right)> 0 \right\} \\&\le {{\,\mathrm{{\mathbf {P}}}\,}}\bigcup _{j=1}^{d} \left\{ \frac{({\widehat{\theta }}_{j} - g_{j}(\widehat{{\varvec{}}{w}}_{(t)}))^{2}}{A^{2}} > {{\,\mathrm{var}\,}}_{\mu }l_{j}^{\prime }(\widehat{{\varvec{}}{w}}_{(t)};{\varvec{}}{z}) \right\} \\&\le d\delta . \end{aligned}$$
The first inequality uses a union bound, and the second inequality follows from (25). Plugging in A and taking confidence \(\delta /d\) implies the desired result. \(\square \)
Proof of Theorem 12
From Lemma 11, the estimation error has exponential tails, as follows. Writing
$$\begin{aligned} A_{1} :=2d, \quad A_{2} :=4\left( \frac{C{{\,\mathrm{trace}\,}}(\varSigma _{(t)})}{n}\right) ^{1/2}, \end{aligned}$$
for each iteration t we have
$$\begin{aligned} {{\,\mathrm{{\mathbf {P}}}\,}}\{\Vert {\varvec{}}{{\widehat{\theta }}}_{(t)}-{\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})\Vert > \varepsilon \} \le A_{1} \exp \left( -\left( \frac{\varepsilon }{A_{2}}\right) ^{2}\right) . \end{aligned}$$
Controlling moments using exponential tails can be done using a fairly standard argument. For a random variable \(X \in {\mathcal {L}}_{p}\) with \(p \ge 1\), we have the classic identity
$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}|X|^{p} = \int _{0}^{\infty } {{\,\mathrm{{\mathbf {P}}}\,}}\{|X|^{p}>t\}\,dt \end{aligned}$$
as a starting point. Setting \(X = \Vert {\varvec{}}{{\widehat{\theta }}}_{(t)}-{\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})\Vert \ge 0\), and using substitution of variables twice, we have
$$\begin{aligned} {{\,\mathrm{{\mathbf {E}}}\,}}|X|^{p}&= \int _{0}^{\infty } {{\,\mathrm{{\mathbf {P}}}\,}}\{X>t^{1/p}\} \, dt\\&= \int _{0}^{\infty } {{\,\mathrm{{\mathbf {P}}}\,}}\{X > t\}pt^{p-1} \, dt\\&\le A_{1}p \int _{0}^{\infty } \exp \left( -\left( t/A_{2}\right) ^{2}\right) t^{p-1} \, dt\\&= \frac{A_{1}A_{2}^{p}p}{2} \int _{0}^{\infty }\exp (-t)t^{p/2-1} \, dt. \end{aligned}$$
The last integral on the right-hand side, written \(\varGamma (p/2)\), is the usual Gamma function of Euler evaluated at \(p/2\). Setting \(p=2\), we have \(\varGamma (1)=0!=1\), and plugging in the values of \(A_{1}\) and \(A_{2}\) yields the desired result. \(\square \)
Guarantees in the case without strong convexity
Lemma 15
(Comparing trajectories) Comparing (2) and (3), assume that \(\widehat{{\varvec{}}{g}}\) satisfies (7). Setting \(\alpha _{(t)} \in (0,1)\) for all \(0 \le t < T\), and initializing to \(\widehat{{\varvec{}}{w}}_{(0)}={\varvec{}}{w}^{*}_{(0)}\), with probability at least \(1-T\delta \), we have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(T)}-{\varvec{}}{w}^{*}_{(T)}\Vert \le \frac{\varepsilon }{\lambda }\left( \prod _{t=0}^{T-1}(1+\lambda \alpha _{(t)})-1\right) . \end{aligned}$$
For arbitrary step t, comparing the results of updates (2) and (3) with common step size \(\alpha _{(t)}\), we have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(t+1)}-{\varvec{}}{w}^{*}_{(t+1)}\Vert&\le \Vert \widehat{{\varvec{}}{w}}_{(t)} - {\varvec{}}{w}^{*}_{(t)}\Vert + |\alpha _{(t)}|\left( \Vert \widehat{{\varvec{}}{g}}(\widehat{{\varvec{}}{w}}_{(t)})-{\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)}) \Vert + \Vert {\varvec{}}{g}(\widehat{{\varvec{}}{w}}_{(t)})-{\varvec{}}{g}({\varvec{}}{w}^{*}_{(t)})\Vert \right) \nonumber \\&\le \Vert \widehat{{\varvec{}}{w}}_{(t)} - {\varvec{}}{w}^{*}_{(t)}\Vert \left( 1+\lambda \alpha _{(t)}\right) + \alpha _{(t)}\varepsilon . \end{aligned}$$
The latter inequality follows from the \(\varepsilon \)-accuracy and \(\lambda \)-smoothness in A2. Next, note that for any \(t \ge 1\), if we have
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(t)} - {\varvec{}}{w}^{*}_{(t)}\Vert \le \frac{\varepsilon }{\lambda } \left( \prod _{k=0}^{t-1}\left( 1+\lambda \alpha _{(k)}\right) -1\right) , \end{aligned}$$
then using (27), it follows that in the next iteration
$$\begin{aligned} \Vert \widehat{{\varvec{}}{w}}_{(t+1)} - {\varvec{}}{w}^{*}_{(t+1)}\Vert&\le \frac{\varepsilon }{\lambda } \left( \prod _{k=0}^{t-1}\left( 1+\lambda \alpha _{(k)}\right) -1\right) \left( 1+\lambda \alpha _{(t)}\right) + \alpha _{(t)}\varepsilon \\&= \frac{\varepsilon }{\lambda }\left( \prod _{k=0}^{t}\left( 1+\lambda \alpha _{(k)}\right) - 1\right) . \end{aligned}$$
Finally, noting that we have the base case
$$\Vert\widehat{\boldsymbol{w}}_{(1)}-\boldsymbol{w}^{*}_{(1)}\Vert \le \alpha_{(0)}\varepsilon = \frac{\varepsilon}{\lambda}\left((1+\lambda\alpha_{(0)})-1\right),$$
taking the form assumed in the induction step, the desired bound follows by mathematical induction; a union bound over the $T$ applications of (7) gives the stated probability of at least $1-T\delta$. $\square$
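The induction can also be checked numerically. The sketch below (our own illustration, with arbitrary constants) iterates the per-step bound with a constant step size and compares it against the closed form; the two agree exactly, up to floating-point rounding.

```python
# Iterate e_{t+1} = e_t * (1 + lam*alpha) + alpha*eps from e_0 = 0 and
# compare with the closed form (eps/lam) * ((1 + lam*alpha)^T - 1).
lam, alpha, eps, T = 2.0, 0.05, 0.1, 25
e = 0.0
for _ in range(T):
    e = e * (1.0 + lam * alpha) + alpha * eps
closed_form = (eps / lam) * ((1.0 + lam * alpha) ** T - 1.0)
print(e, closed_form)  # identical up to rounding
```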
Without strong convexity, control of the risk becomes slightly more cumbersome, but weaker guarantees can be derived in a straightforward manner. Using A2 and (14):
$$\begin{aligned} R(\widehat{\boldsymbol{w}}_{(T)}) - R^{*} &= R(\widehat{\boldsymbol{w}}_{(T)}) - R(\boldsymbol{w}^{*}_{(T)}) + R(\boldsymbol{w}^{*}_{(T)}) - R^{*}\\ &\le \frac{\lambda}{2}\Vert\widehat{\boldsymbol{w}}_{(T)}-\boldsymbol{w}^{*}_{(T)}\Vert^{2} + \langle\boldsymbol{g}(\boldsymbol{w}^{*}_{(T)}), \widehat{\boldsymbol{w}}_{(T)}-\boldsymbol{w}^{*}_{(T)}\rangle + R(\boldsymbol{w}^{*}_{(T)}) - R^{*}\\ &\le \frac{\lambda}{2}\Vert\widehat{\boldsymbol{w}}_{(T)}-\boldsymbol{w}^{*}_{(T)}\Vert^{2} + \Vert\boldsymbol{g}(\boldsymbol{w}^{*}_{(T)})\Vert\,\Vert\widehat{\boldsymbol{w}}_{(T)}-\boldsymbol{w}^{*}_{(T)}\Vert + R(\boldsymbol{w}^{*}_{(T)}) - R^{*}. \end{aligned}$$
Furthermore, using $\boldsymbol{g}(\boldsymbol{w}^{*})=0$ and (15), we have
$$\begin{aligned} \Vert\boldsymbol{g}(\boldsymbol{w}^{*}_{(T)})\Vert^{2} &= \Vert\boldsymbol{g}(\boldsymbol{w}^{*}_{(T)})-\boldsymbol{g}(\boldsymbol{w}^{*})\Vert^{2}\\ &\le 2\lambda\left(R(\boldsymbol{w}^{*}_{(T)})-R(\boldsymbol{w}^{*})-\langle\boldsymbol{g}(\boldsymbol{w}^{*}), \boldsymbol{w}^{*}_{(T)}-\boldsymbol{w}^{*}\rangle\right)\\ &= 2\lambda\left(R(\boldsymbol{w}^{*}_{(T)})-R(\boldsymbol{w}^{*})\right). \end{aligned}$$
By convexity and A3, we have $R^{*} = R(\boldsymbol{w}^{*})$. Writing $A := \Vert\widehat{\boldsymbol{w}}_{(T)}-\boldsymbol{w}^{*}_{(T)}\Vert^{2}$ and $B := R(\boldsymbol{w}^{*}_{(T)})-R(\boldsymbol{w}^{*})$, it follows that
$$R(\widehat{\boldsymbol{w}}_{(T)}) - R^{*} \le \frac{\lambda A}{2} + \sqrt{2\lambda AB} + B.$$
Control of the estimation error $A$ follows from a direct application of Lemmas 5 and 15, which yield
$$\Vert\widehat{\boldsymbol{w}}_{(T)} - \boldsymbol{w}^{*}_{(T)}\Vert \le \frac{\widetilde{\varepsilon}}{\sqrt{n}}\left((1+\lambda\alpha)^{T}-1\right)$$
with probability at least $1-\delta$.
As for the optimization error $B$, this can be controlled using Theorem 2.1.14 of Nesterov (2004), as
$$B \le \frac{2R_{0}D_{0}^{2}}{2D_{0}^{2} + T\alpha(2-\lambda\alpha)R_{0}} = \left(\frac{T\alpha(2-\lambda\alpha)}{2D_{0}^{2}} + \frac{1}{R_{0}}\right)^{-1},$$
which is valid under A2 and (12). Plugging these in as upper bounds on $A$ and $B$ in the risk control inequality gives the desired result.
Here we discuss precisely how to compute the implicitly-defined M-estimates of (4) and (6). Assuming \(s>0\) and real-valued observations \(x_{1},\ldots ,x_{n}\), we first look at the program
$$\begin{aligned} \min _{\theta } \frac{1}{n} \sum _{i=1}^{n}\rho _{s}\left( x_{i}-\theta \right) \end{aligned}$$
assuming \(\rho \) is as specified in Definition 13, with \(\psi = \rho ^{\prime }\). Write \({\widehat{\theta }}\) for this unique minimum, and note that it satisfies
$$\begin{aligned} \frac{s}{n} \sum _{i=1}^{n}\psi _{s}\left( x_{i}-{\widehat{\theta }}\right) = 0. \end{aligned}$$
Indeed, by monotonicity of \(\psi \), this \({\widehat{\theta }}\) can be found via \(\rho \) minimization or root-finding. The latter yields standard fixed-point iterative updates, such as
$$\begin{aligned} {\widehat{\theta }}_{(k+1)} = {\widehat{\theta }}_{(k)} + \frac{s}{n}\sum _{i=1}^{n}\psi _{s}\left( x_{i}-{\widehat{\theta }}_{(k)}\right) . \end{aligned}$$
Note the right-hand side has a fixed point at the desired value. In our routines, we use the Gudermannian function
$$\rho(u) := \int_{0}^{u}\psi(x)\,dx, \quad \psi(u) := 2\arctan(\exp(u)) - \pi/2,$$
which can be readily confirmed to satisfy all requirements of Definition 13.
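A minimal Python sketch of the location routine follows. It is our own illustration, assuming $\psi_s(u) = \psi(u/s)$ with a fixed scale $s$; it is not the authors' published code.

```python
import numpy as np

def psi(u):
    # Gudermannian function: psi(u) = 2*atan(exp(u)) - pi/2.
    # np.exp may overflow for large u, but arctan(inf) = pi/2 is the
    # correct limiting value (only a RuntimeWarning is emitted).
    return 2.0 * np.arctan(np.exp(u)) - np.pi / 2.0

def location_mest(x, s=1.0, tol=1e-10, max_iter=500):
    """Fixed-point iteration for the location M-estimate."""
    theta = np.median(x)  # robust initial guess
    for _ in range(max_iter):
        theta_new = theta + (s / x.size) * np.sum(psi((x - theta) / s))
        if abs(theta_new - theta) < tol:
            break
        theta = theta_new
    return theta

x = np.random.default_rng(0).standard_t(df=2, size=500)  # heavy-tailed sample
print(location_mest(x, s=1.0))
```

Since $0 < \psi'(u) \le 1$, the derivative of the update map lies in $[0, 1)$, which is why the fixed-point iteration converges.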
For the dispersion estimate to be used in re-scaling, we introduce a function \(\chi \), which is even, non-decreasing on \({\mathbb {R}}_{+}\), and satisfies
$$\begin{aligned} 0< \left| \lim \limits _{u \rightarrow \pm \infty } \chi (u)\right|< \infty , \quad \chi (0) < 0. \end{aligned}$$
In practice, we take dispersion estimate \({\widehat{\sigma }}>0\) as any value satisfying
$$\begin{aligned} \frac{1}{n} \sum _{i=1}^{n} \chi \left( \frac{x_{i}-\gamma }{{\widehat{\sigma }}}\right) = 0 \end{aligned}$$
where \(\gamma = n^{-1}\sum _{i=1}^{n}x_{i}\), computed by the iterative procedure
$$\begin{aligned} {\widehat{\sigma }}_{(k+1)} = {\widehat{\sigma }}_{(k)}\left( 1-\frac{1}{\chi (0)n}\sum _{i=1}^{n}\chi \left( \frac{x_{i}-\gamma }{{\widehat{\sigma }}_{(k)}}\right) \right) ^{1/2} \end{aligned}$$
which has the desired fixed point, as in the location case. Our routines use the quadratic Geman-type \(\chi \), defined
$$\begin{aligned} \chi (u) :=\frac{u^{2}}{1+u^{2}}-c \end{aligned}$$
with parameter $c > 0$, noting $\chi(0) = -c$. Writing the first term as $\chi_{0}$, so that $\chi(u) = \chi_{0}(u) - c$, we set $c = \mathbf{E}\chi_{0}(x)$ for $x \sim N(0,1)$. Computed via numerical integration, this gives $c \approx 0.34$.
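The corresponding dispersion routine can be sketched analogously (again our own illustration, not the authors' code):

```python
import numpy as np

def chi(u, c=0.34):
    # Quadratic Geman-type chi with chi(0) = -c.
    return u ** 2 / (1.0 + u ** 2) - c

def scale_mest(x, c=0.34, tol=1e-10, max_iter=500):
    """Fixed-point iteration for the dispersion estimate."""
    gamma = np.mean(x)
    sigma = np.std(x) if np.std(x) > 0 else 1.0  # initial guess
    for _ in range(max_iter):
        m = np.mean(chi((x - gamma) / sigma, c))
        # sigma_{k+1} = sigma_k * (1 - m/chi(0))^{1/2}, with chi(0) = -c;
        # the argument 1 + m/c is non-negative since m >= -c by construction.
        sigma_new = sigma * np.sqrt(1.0 + m / c)
        if abs(sigma_new - sigma) < tol:
            break
        sigma = sigma_new
    return sigma

x = np.random.default_rng(1).normal(0.0, 2.0, size=1000)
print(scale_mest(x))  # close to 2 for Gaussian data, by the choice of c
```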
Additional test results
In this section, we provide some additional experimental results obtained via the tests of Sect. 4. In particular, we consider the regression application at the end of Sect. 4.1, where due to space limitations, we only showed results for four distinct families of noise distributions. Here, we consider all of the following distribution families: Arcsine (asin), Beta Prime (bpri), Chi-squared (chisq), Exponential (exp), Exponential-Logarithmic (explog), Fisher's F (f), Fréchet (frec), Gamma (gamma), Gompertz (gomp), Gumbel (gum), Hyperbolic Secant (hsec), Laplace (lap), Log-Logistic (llog), Log-Normal (lnorm), Logistic (lgst), Maxwell (maxw), Pareto (pareto), Rayleigh (rayl), Semi-circle (scir), Student's t (t), Triangle (asymmetric tri_a, symmetric tri_s), U-Power (upwr), Wald (wald), Weibull (weibull).
The content of this section is as follows:
Figs. 10–11: performance as a function of sample size n.
Figs. 12–13: performance over noise levels, with fixed n and d.
Figs. 14–15: performance as a function of d, with fixed n / d ratio and noise level.
Figs. 10–11. Prediction error over sample size \(12 \le n \le 122\), fixed \(d=5\), noise level = 8. Each plot corresponds to a distinct noise distribution (Color figure online)
Figs. 12–13. Prediction error over noise levels, for \(n=30, d=5\). Each plot corresponds to a distinct noise distribution (Color figure online)
Figs. 14–15. Prediction error over dimensions \(5 \le d \le 40\), with ratio \(n/d = 6\) fixed, and noise level = 8. Each plot corresponds to a distinct noise distribution (Color figure online)
Abramowitz, M., & Stegun, I. A. (1964). Handbook of mathematical functions with formulas, graphs, and mathematical tables. National Bureau of Standards Applied Mathematics Series, vol. 55. US National Bureau of Standards.
Alon, N., Ben-David, S., Cesa-Bianchi, N., & Haussler, D. (1997). Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM, 44(4), 615–631.
Ash, R. B., & Doleans-Dade, C. (2000). Probability and measure theory. Academic Press.
Bartlett, P. L., Long, P. M., & Williamson, R. C. (1996). Fat-shattering and the learnability of real-valued functions. Journal of Computer and System Sciences, 52(3), 434–452.
Bartlett, P. L., & Mendelson, S. (2003). Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3, 463–482.
Brownlees, C., Joly, E., & Lugosi, G. (2015). Empirical risk minimization for heavy-tailed losses. Annals of Statistics, 43(6), 2507–2536.
Catoni, O. (2009). High confidence estimates of the mean of heavy-tailed real random variables. arXiv preprint arXiv:0909.5366.
Catoni, O. (2012). Challenging the empirical mean and empirical variance: A deviation study. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 48(4), 1148–1185.
Chen, Y., Su, L., & Xu, J. (2017a). Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. arXiv preprint arXiv:1705.05491.
Chen, Y., Su, L., & Xu, J. (2017b). Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 1(2), 44.
Daniely, A., & Shalev-Shwartz, S. (2014). Optimal learners for multiclass problems. In 27th Annual Conference on Learning Theory, Proceedings of Machine Learning Research (vol. 35, pp. 287–316).
Devroye, L., Lerasle, M., Lugosi, G., & Oliveira, R. I. (2015). Sub-Gaussian mean estimators. arXiv preprint arXiv:1509.05845.
Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121–2159.
Feldman, V. (2016). Generalization of ERM in stochastic convex optimization: The dimension strikes back. Advances in Neural Information Processing Systems, 29, 3576–3584.
Finkenstädt, B., & Rootzén, H. (Eds.). (2003). Extreme values in finance, telecommunications, and the environment. CRC Press.
Frostig, R., Ge, R., Kakade, S. M., & Sidford, A. (2015). Competing with the empirical risk minimizer in a single pass. arXiv preprint arXiv:1412.6606.
Holland, M. J., & Ikeda, K. (2017a). Efficient learning with robust gradient descent. arXiv preprint arXiv:1706.00182.
Holland, M. J., & Ikeda, K. (2017b). Robust regression using biased objectives. Machine Learning, 106(9), 1643–1679. https://doi.org/10.1007/s10994-017-5653-5
Hsu, D., & Sabato, S. (2016). Loss minimization and parameter estimation with heavy tails. Journal of Machine Learning Research, 17(18), 1–40.
Huber, P. J., & Ronchetti, E. M. (2009). Robust statistics (2nd ed.). Wiley.
Johnson, R., & Zhang, T. (2013). Accelerating stochastic gradient descent using predictive variance reduction. Advances in Neural Information Processing Systems, 26, 315–323.
Kearns, M. J., & Schapire, R. E. (1994). Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48, 464–497.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kolmogorov, A. N. (1993). \(\varepsilon\)-entropy and \(\varepsilon\)-capacity of sets in functional spaces. In A. N. Shiryayev (Ed.), Selected works of A. N. Kolmogorov, volume III: Information theory and the theory of algorithms (pp. 86–170). Springer.
Le Roux, N., Schmidt, M., & Bach, F. R. (2012). A stochastic gradient method with an exponential convergence rate for finite training sets. Advances in Neural Information Processing Systems, 25, 2663–2671.
Lecué, G., & Lerasle, M. (2017). Learning from MOM's principles. arXiv preprint arXiv:1701.01961.
Lecué, G., Lerasle, M., & Mathieu, T. (2018). Robust classification via MOM minimization. arXiv preprint arXiv:1808.03106.
Lerasle, M., & Oliveira, R. I. (2011). Robust empirical mean estimators. arXiv preprint arXiv:1112.3914.
Lin, J., & Rosasco, L. (2016). Optimal learning for multi-pass stochastic gradient methods. Advances in Neural Information Processing Systems, 29, 4556–4564.
Luenberger, D. G. (1969). Optimization by vector space methods. Wiley.
Lugosi, G., & Mendelson, S. (2016). Risk minimization by median-of-means tournaments. arXiv preprint arXiv:1608.00757.
Minsker, S., & Strawn, N. (2017). Distributed statistical estimation and rates of convergence in normal approximation. arXiv preprint arXiv:1704.02658.
Minsker, S. (2015). Geometric median and robust estimation in Banach spaces. Bernoulli, 21(4), 2308–2335.
Murata, T., & Suzuki, T. (2016). Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems. arXiv preprint arXiv:1603.02412.
Nesterov, Y. (2004). Introductory lectures on convex optimization: A basic course. Springer.
Nocedal, J., & Wright, S. (1999). Numerical optimization. Springer Series in Operations Research. Springer.
Prasad, A., Suggala, A. S., Balakrishnan, S., & Ravikumar, P. (2018). Robust estimation via robust gradient estimation. arXiv preprint arXiv:1802.06485.
Rakhlin, A., Shamir, O., & Sridharan, K. (2012). Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning (pp. 449–456).
Shalev-Shwartz, S., & Zhang, T. (2013). Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14, 567–599.
Talvila, E. (2001). Necessary and sufficient conditions for differentiating under the integral sign. American Mathematical Monthly, 108(6), 544–548.
van der Vaart, A. W. (1998). Asymptotic statistics. Cambridge University Press.
Vardi, Y., & Zhang, C. H. (2000). The multivariate \(L_{1}\)-median and associated data depth. Proceedings of the National Academy of Sciences, 97(4), 1423–1426.
Holland, M.J. & Ikeda, K. Mach Learn (2019). https://doi.org/10.1007/s10994-019-05802-5
Research on laser range profiles based on spatial domain
Yanhui Li,* Di Gao, and Hong Liao
School of Physics and Optoelectronic Engineering, Xidian University, Xi'an 710071, China
*Corresponding author: [email protected]
Yanhui Li, Di Gao, and Hong Liao, "Research on laser range profiles based on spatial domain," OSA Continuum 3, 1049-1057 (2020)
Original Manuscript: January 20, 2020
Revised Manuscript: March 30, 2020
Manuscript Accepted: March 31, 2020
Information about geometric features and surface material can be obtained by analyzing the laser range profile (LRP) acquired from a target. When the laser intensity has a different spatial distribution, the LRP can change markedly, which may hinder target recognition. In this paper, an LRP equation is proposed for a single-site radar at an arbitrary location detecting the target, and LRPs of an inclined plate and a cone are simulated for plane-wave illumination. For LRPs based on Gaussian beams, a beam factor is introduced. By analyzing cone LRPs under different intensity distributions, several abnormal intensity-range profiles are found, which may lead to misjudgment of the target. As groundwork for studying LRPs under different weather conditions, a cone LRP in atmospheric turbulence is also simulated.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction

The laser range profile (LRP) is a relatively new field of research, especially for high-resolution imaging applications. Unlike intensity profiles, which reflect the intensity of each part of the target, laser range profiles (LRPs), a vital method for target recognition in the optical band, can recover the 3-D shape and range information of a target from a single pulse [1,2]. The basic idea of the LRP is that when a pulsed laser illuminates the target, the detector receives a time-varying backscattered signal that contains information about the target. By analyzing the features of this signal, details of the target structure along the lidar direction can be discovered. With benefits such as good range resolution, a simple system structure, and fast target identification, LRPs have found application in target recognition and future intelligent weapons [3,4].
The laser radar emits a pulse that is backscattered from the target's surface; the returned power is collected by the receiving aperture, and the intensity-range profile of the target is thereby obtained [4]. In this process, however, the LRP changes whenever the laser beam, and hence the intensity distribution on the target, changes.

In recent years, because of the widespread use of laser beams in optical communication, a great deal of attention has been paid by statistical-optics researchers to the propagation of laser beams in various random media [5]. Beam spread and beam wander are the most perceptible effects of atmospheric turbulence on propagating laser beams [6]; they are closely related to the beam radius and the beam center axis, and they strongly affect the intensity distribution. It is therefore necessary to set up physical models of these situations and study them.

Some earlier studies of the radar echo have been made. Ove Steinvall's simulation results show that the target shape and reflection characteristics are important in determining the shape and magnitude of the laser return [7]. Our research group has already carried out experimental measurements and theoretical simulations of the laser range profile [8,9]. In those papers, LRPs of cones in various postures are simulated, with analysis illustrating how the pulse width and the side length alter the LRPs. The beam center axis in those papers is generally placed symmetrically about the regular target, whereas a deviation of the beam center axis leads to a different intensity distribution on the target.

The previous studies all focus on the target shape or the beam shape. In a real scene the laser may deviate from its original aiming direction, and the intensity distribution on the beam's radial section can also be changed by the atmosphere. It is therefore necessary to study the spatial distribution of the laser intensity. This paper gives an LRP equation for laser light irradiating the target at any spatial location. Besides, a Gaussian beam factor is proposed to distinguish the plane wave from the Gaussian beam. By analyzing LRPs of Gaussian beams with different beam radii and shifted beam center axes, we find that changes in the intensity distribution may cause difficulties for target recognition.

In the past few decades, much research has been carried out to find suitable ways to reduce the effects of the turbulent atmosphere on laser beams [10]. Since atmospheric turbulence also alters the laser intensity distribution, it is important to study the laser range profile in the spatial domain.
2. Laser one-dimensional range profile
As shown in Fig. 1, the laser pulse beam is incident on the target. The laser incidence direction is parallel to the Z-axis, and $Oxyz$ is the coordinate system of the target. The whole target is illuminated by a single laser pulse. The received backscattered pulse clearly differs with the shape of the target and the character of its surface. By the lidar range equation, the backscattered power of a laser pulse can be written as [11]:
$$P_s = \frac{P_t}{4\pi R_t^2}\,\frac{\sigma}{4\pi R_r^2}\,A_r G_r \tag{1}$$
where $P_s$ is the received signal power, $P_t$ is the transmitted power, $A_r$ is the clear aperture of the detector, $G_r$ is the gain function, $R_t$ and $R_r$ are the distances from the target to the transmitter and receiver, respectively, and $\sigma$ is the laser scattering cross-section.
Fig. 1. Schematic diagram of the laser pulse beam illuminating the target.
When $P_t$ has the pulse form $P(t)$ and the target is extended, $R_t$ and $R_r$ are the distances from the zero point to the transmitter and receiver, respectively. For a single-station radar, $R_t = R_r = R_0$. Then Eq. (1) can be written as:
$$P_s(t) = \int \mathrm{d}\sigma\;\frac{P(t')}{4\pi R_0^2}\,\frac{A_r G_r}{4\pi R_0^2} \tag{2}$$
where $t' = t - (R_r + R_t)/c - 2Z/c$ and $\sigma$ is the laser scattering cross-section.

When the time $t_0 = (R_r + R_t)/c$ is chosen as time zero, the pulse has propagated to position $z_t$ at time $t$. In this case, Eq. (2) becomes
$$P_s(z_t) = \frac{A_r G_r}{4\pi R_0^2 \cdot 4\pi R_0^2}\int \mathrm{d}\sigma\; P(2z_t/c - 2Z/c) \tag{3}$$
Equation (3) is the LRP expression and applies to rough targets.
With Eq. (3), LRPs of the sloped plane and the circular cone under plane-wave illumination can be obtained:

Figure 2(a) shows LRPs of a sloped plane of length $l = 0.5\,\mathrm{m}$ with slope angle $\alpha = 45^\circ$ for pulse width $T_0 = 0.1\,\mathrm{ns}$, while Fig. 2(b) shows LRPs of a circular cone of height $h = 0.5\,\mathrm{m}$ whose half-cone angle satisfies $\tan\alpha = 0.25$, for the same pulse width. The geometric center of the sloped plane and the apex of the cone are chosen as the respective zero points. Because the intensity distribution of a plane wave is uniform, the LRPs essentially show the effective scattering cross-section covered by the radar resolution unit at each depth $Z$. Hence, in Fig. 2(a) the peak length coincides with the radial size of the sloped plane, while in Fig. 2(b) the rapidly falling edge corresponds to the bottom of the circular cone.
Fig. 2. LRPs at normal conditions: (a) LRPs of the sloped plane; (b) LRPs of the circular cone.
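Under plane-wave illumination, Eq. (3) reduces to a convolution of the pulse power with the cross-section density per unit depth; for a tilted flat plate this density is constant over the depth extent $l\sin\alpha$, so the LRP is a pulse-smoothed rectangle. The following Python sketch is our own illustration of this reduction (the discretization and normalization are assumptions, not the authors' code):

```python
import numpy as np

c = 3.0e8                        # speed of light (m/s)
T0 = 0.1e-9                      # pulse width (s)
l, alpha = 0.5, np.radians(45)   # plate length (m) and slope angle

z = np.linspace(-0.4, 0.4, 4001)                  # depth coordinate Z (m)
dz = z[1] - z[0]
depth = l * np.sin(alpha)                         # depth extent of the plate
density = (np.abs(z) <= depth / 2).astype(float)  # uniform cross-section density
pulse = np.exp(-2.0 * (2.0 * z / (c * T0)) ** 2)  # Gaussian pulse power at t = 2z/c
lrp = np.convolve(density, pulse, mode="same") * dz
lrp /= lrp.max()  # normalized LRP: a rectangle of width l*sin(alpha), smoothed by the pulse
```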
3. Gaussian beam factor
In most cases the incident laser is a Gaussian beam with an uncertain incident angle (the angle between the incident direction and the normal direction of the target) and an uncertain position of the beam center axis; consequently, an LRP equation valid under broader conditions is necessary.

For the Gaussian beam, the amplitude can be written as:
$$u(\vec{r}^{\,\prime}, Z) = E_0\,\frac{\omega_0}{\omega(Z)}\,\exp\!\left[-\frac{g_0(\vec{r}^{\,\prime})}{\omega^2(Z)}\right] \tag{4}$$
where $\omega_0$ is the waist radius, $\omega(Z)$ is the beam radius, $\vec{r}^{\,\prime}$ is the position of a point on the target relative to the target center, and $g_0(\vec{r}^{\,\prime})$ is the squared distance between a point on the target and the beam center axis. The power is proportional to the square of the amplitude,
$$P(\vec{r}^{\,\prime}) = |u(\vec{r}^{\,\prime}, Z)|^2 = P_i\,\frac{\omega_0^2}{\omega^2(Z)}\,\exp\!\left[-\frac{2 g_0(\vec{r}^{\,\prime})}{\omega^2(Z)}\right] \tag{5}$$
Define the Gaussian beam far-field divergence angle $\phi$:
$$\phi \approx \tan\phi = \frac{\omega(Z)}{Z} = \frac{2}{k_0\omega_0} \tag{6}$$
where $k_0$ is the modulus of the incident wave vector. For the Gaussian beam, the LRP equation gains one factor relative to the plane-wave case,
$$\frac{\omega_0^2}{\omega^2(Z)}\,\exp\!\left[-\frac{2 g_0(\vec{r}^{\,\prime})}{\omega^2(Z)}\right] \tag{7}$$
which is called the Gaussian beam factor; it accounts for the intensity distribution.
Clearly, for the same laser wavelength and transmission distance, beams with different waist radii have quite different spatial intensity distributions. Figure 3 shows the normalized transverse intensity distribution at fixed laser power for λ = 1.06 µm and Z = 1500 m, with w0 = 4000 µm, 3000 µm, 2000 µm, 1000 µm and 500 µm and the corresponding w(Z) = 0.127 m, 0.169 m, 0.253 m, 0.506 m and 1.012 m, as well as the plane beam (w(Z) = +∞).
Fig. 3. Intensity distribution of the Gaussian beam as the beam radius changes.
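The beam radii quoted above follow directly from Eq. (6); a short check of our own:

```python
import numpy as np

# From Eq. (6) with k0 = 2*pi/lambda:  w(Z) = 2*Z/(k0*w0) = lambda*Z/(pi*w0).
lam, Z = 1.06e-6, 1500.0
for w0 in (4000e-6, 3000e-6, 2000e-6, 1000e-6, 500e-6):
    print(f"w0 = {w0 * 1e6:6.0f} um  ->  w(Z) = {lam * Z / (np.pi * w0):.3f} m")
# -> 0.127, 0.169, 0.253, 0.506 and 1.012 m, respectively
```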
Moreover, the incident angle and the position of the beam center axis should be added to the LRP equation.
Fig. 4. Schematic diagram of Gaussian beam scattering from a rough target.
The target surface equation can be written as:
$$f(x, y, z) = 0 \tag{8}$$
To simplify the situation, the target is supposed to be a convex quadric body (Fig. 4). With a target coordinate system $Oxyz$ and an incident coordinate system $OXYZ$, where $\theta$ is the zenith angle, the transformation between the coordinate system $XYZ$ and the target coordinate system $xyz$ can be written as:
$$\begin{pmatrix} x\\ y\\ z \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} X\\ Y\\ Z \end{pmatrix} \tag{9}$$
Equations (8) and (9) can be combined as:
$$f(x(X,Y,Z),\, y(X,Y,Z),\, z(X,Y,Z)) = 0 \tag{10}$$
To calculate the backscattering between the target and the incident laser, the laser scattering cross-section per unit surface element can be written as,
$$d\sigma(\vec{r}^{\,\prime}) = 4\pi f_r(\beta)\cos^2\!\beta\, dS = 4\pi f_r(\beta)\cos\beta\, dX\, dY \tag{11}$$
where $f_r(\beta)$ is the scattering coefficient and $\cos\beta$ is the cosine of the scattering angle, which can be further written as,
$$\cos\beta = \frac{-f_Z}{\sqrt{f_X^2 + f_Y^2 + f_Z^2}} = \frac{\sin\theta\, f_y - \cos\theta\, f_z}{\sqrt{f_x^2 + f_y^2 + f_z^2}} \tag{12}$$
The position of the laser source is $O'(0,0,0)$, and $R_0$ is the slant range between the laser source and the target center point. Furthermore, the beam center axis, whose location is defined by $X = X_0, Y = Y_0$, is parallel to the incident direction (the positive direction of the Z axis).

Combining this with the radar range equation, the LRP equation for any incident angle to the target can be written as,
$$P_r(Z_0) = \frac{A_r G_r}{4\pi R_0^2 \cdot 4\pi R_0^2}\int_{Z_0-\Delta/2}^{Z_0+\Delta/2} dZ'\int_{C_0} P_i\,\frac{\omega_0^2}{\omega^2(Z)}\,\exp\!\left[-\frac{2 g_0(\vec{r}^{\,\prime})}{\omega^2(Z)}\right] d\sigma(\vec{r}^{\,\prime}) \tag{13}$$
$$\left(C_0:\ f(x(X,Y,Z'),\, y(X,Y,Z'),\, z(X,Y,Z')) = 0,\quad g_0(\vec{r}^{\,\prime}) = (X-X_0)^2 + (Y-Y_0)^2,\quad \cos\beta > 0\right)$$
where $\Delta = cT_0/2$ is the radar resolution unit.
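A rough numerical sketch of Eq. (13) for the cone at normal incidence is given below. It is our own illustration under simplifying assumptions: a constant (Lambertian) scattering coefficient, the apex facing the radar at $Z = 0$, and the observation that at $\theta = 0^\circ$ the local incidence angle is the same everywhere on the cone's lateral surface, so the $\cos\beta$ factors only rescale the profile.

```python
import numpy as np

c, T0 = 3.0e8, 0.1e-9            # speed of light (m/s), pulse width (s)
h, tan_a = 0.5, 0.25             # cone height (m) and tan(half-cone angle)
R = h * tan_a                    # base radius r
w = 2.0 * R                      # beam radius w(Z), fixed at 2r here
X0, Y0 = 0.0, 0.0                # beam center axis offset (d = 0)

n = 401
X, Y = np.meshgrid(np.linspace(-R, R, n), np.linspace(-R, R, n))
rho = np.hypot(X, Y)
inside = rho <= R                # projected footprint of the cone
Z = rho / tan_a                  # depth of the illuminated surface point
gauss = np.exp(-2.0 * ((X - X0) ** 2 + (Y - Y0) ** 2) / w ** 2)  # Eq. (7) factor

z0 = np.linspace(0.0, h + 0.05, 300)
lrp = np.empty_like(z0)
for i, zi in enumerate(z0):
    pulse = np.exp(-2.0 * (2.0 * (zi - Z) / (c * T0)) ** 2)  # pulse power, Eq. (15)
    lrp[i] = np.sum(pulse * gauss * inside)
lrp /= lrp.max()
# Shrinking w toward R, or moving (X0, Y0) off axis, reproduces the peak
# shifts and convex-to-concave changes discussed in the next section.
```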
4. Influences on LRPs by the intensity distribution
According to the discussion above, the Gaussian beam factor influences the LRPs. To calculate the effect more exactly, simulations of the circular-cone LRP under a Gaussian beam have been made. Combining the surface equation of the circular cone with Eqs. (12) and (13), the LRP equation of the circular cone is obtained. The circular cone height ($h$) is 0.5 m, the bottom radius is $r = h\tan\alpha$, and the incident angle ($\theta$) is 0°.

The Gaussian incident pulse adopted can be written as,
$$u_i(t) = E_0\exp(-t^2/T_0^2 + i\omega t) \tag{14}$$
Corresponding to Eq. (14), the pulse power can be written as,
$$P_i(t) = E_0^2\exp(-2t^2/T_0^2) \tag{15}$$
When the laser scans from the top to the bottom of the cone, the target cross-section increases while the light intensity decreases; both factors contribute to the backscattered power and determine the trend of the LRPs. As shown in Fig. 5(a), when the beam radius is as small as $\omega(Z) = r$, the intensity is concentrated near the beam center axis, so the peak lies near the balance point where the two contributions neutralize each other, rather than at the position corresponding to the bottom of the cone. On the other hand, as the beam radius increases beyond $\omega(Z) = 4r$, the intensity distribution becomes nearly uniform and the LRP approaches that of the plane wave ($\omega(Z) = \infty$). Two further incident angles, $\theta = 15^\circ$ and $\theta = 45^\circ$, are adopted in Figs. 5(b) and 5(c).
Fig. 5. LRPs of the circular cone as the beam radius changes, at different incident angles: (a) $\theta = 0^\circ$; (b) $\theta = 15^\circ$; (c) $\theta = 45^\circ$; with the center of the cone on the beam center axis.
LRPs in off-axis situations are also worth studying, because the target center is generally not on the beam center axis. Several simulations of this case have therefore been made.

In Fig. 6(a), $d$ denotes the distance between the cone's center axis and the beam center axis. As the beam center axis deviates with increasing $d$ at $\omega(Z) = r$, the total intensity on the $Z$ section decreases and the intensity distribution influences the LRP relatively less than the target cross-section; as a result, the peak is lower and the peak position moves toward the position corresponding to the cone bottom. Moreover, as $d$ increases, the rate of intensity attenuation becomes ever lower while the scattering cross-section keeps increasing steadily, so the backscattered power rises faster and faster and the LRP shape changes from convex to concave.
Fig. 6. LRPs of the circular cone as the beam center axis deviates, for $\theta = 0^\circ$: (a) $\omega(Z) = r$; (b) $\omega(Z) = 2r$; (c) $\omega(Z) = 4r$.
Figure 6(b) shows LRPs for $\omega(Z) = 2r$, $\theta = 0^\circ$ as the beam center axis deviates. Comparing Fig. 6(b) with Fig. 6(a), we find that the peak position no longer changes and corresponds to the bottom of the circular cone, which indicates that the peak position is controlled only by the scattering cross-section once the beam radius is large enough that the intensity change caused by moving the beam center axis can be ignored.

As shown in Fig. 6(c), the peak positions as well as the trends of the curves are highly consistent, a result of the nearly uniform overall intensity distribution; this can be verified by comparing the LRP for $\omega(Z) = 4r$ with the LRP for $\omega(Z) = \infty$ in Fig. 5(a).

In summary, the intensity distribution changes with the beam radius, which leads to diverse LRPs of the same target. When the beam radius is four times larger than the target radius, the intensity distribution of the Gaussian beam is close to that of the plane wave, so the LRPs describe the target's profile shape well. When the beam radius is comparable to the target radius, the laser intensity is highly concentrated near the beam center, so the LRPs affected by this intensity distribution take diverse shapes, which makes it hard to identify the profile shape of the target. LRPs for two more beam aiming angles are given in the following part of this article.
When the incident angle is $\theta \ne 0^\circ$, the circular cone is no longer symmetric about the target center axis; we must therefore distinguish right deviations from left deviations, and the right is taken as the positive direction. As is clear from Fig. 7(a), there are still changes in the peak position and the concavity when the beam radius is small enough.
Fig. 7. LRPs of the circular cone as the beam center axis deviates, for $\theta = 15^\circ$: (a) $\omega(Z) = r$; (b) $\omega(Z) = 2r$; (c) $\omega(Z) = 4r$.
From Fig. 7(b), the LRPs are quite different on the two sides of the center axis, as shown by the dotted line. In this case, the intensity distribution remains the main driver of the change in backscattered power when the beam center axis lies on the left side of the center axis ($d < 0$), while the cross-section area dominates when the beam center axis lies on the right side ($d > 0$). Similar patterns can be found in Fig. 7(c).

From Figs. 8(a), 8(b) and 8(c), similar conclusions can be drawn. Moreover, the LRPs under different intensity distributions show obvious disparity, which may complicate target recognition.
5. Research on laser range profiles in atmospheric turbulence based on spatial domain
The amplitude and phase of a laser undergo random fluctuations when it propagates in a random medium (such as atmospheric turbulence) [12,13]. We study only the spatial domain, without considering the time factor; thus the intensity in atmospheric turbulence can be written as [14,15],
$$I(i,j) = I_0(i,j)\exp[4\chi(\rho, 0)] \tag{16}$$
where $I_0(i,j)$ is the backscattered power of a scattering cross-section unit without atmospheric turbulence, the intensity fluctuation factor is given by the exponential, $\chi(\rho, 0)$ is a Gaussian random variable with mean $-\delta_x^2$ and variance $\delta_x^2$, and $\delta_x^2$ is the log-amplitude variance determined by the atmospheric turbulence distribution. Sample results for the beam intensity distribution on a radial section are shown in Fig. 9 and described below.
Fig. 9. Intensity distribution of Gaussian beam
The intensity flicker caused by turbulence makes the beam indistinct: the intensity distribution is no longer smooth but undulating. From Fig. 9, the amplitude fluctuation of the Gaussian beam becomes more and more intense as $\delta_x^2$ increases, which can result in LRPs of poor quality. For a more intuitive display, LRPs of a Gaussian beam under atmospheric turbulence are simulated.
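A minimal sketch of the flicker model of Eq. (16) (our own illustration): each resolution unit's return is modulated by $\exp[4\chi]$ with $\chi$ drawn as stated above.

```python
import numpy as np

rng = np.random.default_rng(2)
delta_x2 = 0.1                            # log-amplitude variance
I0 = np.ones(10_000)                      # placeholder unperturbed returns
chi = rng.normal(-delta_x2, np.sqrt(delta_x2), size=I0.shape)
I = I0 * np.exp(4.0 * chi)                # Eq. (16)
print(I.mean(), I.std())                  # fluctuations grow rapidly with delta_x2
```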
As shown in Fig. 10, the LRP of the circular cone varies in amplitude as the log-amplitude variance changes. The LRP of the cone is simulated with a radar-target distance of 5 km, a beam aiming angle of 0, and zero deviation between the beam center and the target center. When $\delta_x^2 = 0$, there are no turbulence effects on the LRP, and the LRP has a good shape. As $\delta_x^2$ increases, the radar echo intensity acquires a wavy motion. When $\delta_x^2$ is as large as 0.5, the LRP of the cone is too fluctuating for its profile shape to be recognized.
Fig. 10. LRPs of the circular cone in atmospheric turbulence.
6. Conclusion

The spatial intensity distribution strongly influences target recognition by the LRP and may even lead to misjudgments of the target. Our study is therefore significant for radar imaging and target recognition. Many factors besides the spatial intensity affect the LRP; in further work, our group will study how atmospheric turbulence affects radar imaging.
Funding. National Natural Science Foundation of China (61431010, 61475123); Higher Education Discipline Innovation Project (B17035).

Disclosures. The authors declare that there are no conflicts of interest related to this article.
References

1. R. Williams, J. Westerkamp, D. Gross, and A. Palomino, "Automatic target recognition of time critical moving targets using 1D high range resolution (HRR) radar," IEEE Aerosp. Electron. Syst. Mag. 15(4), 37–43 (2000).
2. H. Li and S. Yang, "Using range profiles as feature vectors to identify aerospace objects," IEEE Trans. Antennas Propag. 41(3), 261–268 (1993).
3. R. Stratton, "Target identification from radar signatures," in IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1978), pp. 223–227.
4. Y. Mou, Z.-s. Wu, Z.-j. Li, and G. Zhang, "Geometric detection based on one-dimensional laser range profiles of dynamic conical target," Appl. Opt. 53(35), 8335–8341 (2014).
5. M. Yousefi, S. Golmohammady, A. Mashal, and F. D. Kashani, "Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence," J. Opt. Soc. Am. A 32(11), 1982–1992 (2015).
6. M. Charnotskii, "Optimal beam focusing through turbulence," J. Opt. Soc. Am. A 32(11), 1943–1951 (2015).
7. O. Steinvall, "Effects of target shape and reflection on laser radar cross sections," Appl. Opt. 39(24), 4381–4391 (2000).
8. Y. H. Li and Z. S. Wu, "Targets recognition using subnanosecond pulse laser range profiles," Opt. Express 18(16), 16788–16796 (2010).
9. L. Yanhui, W. Zhensen, G. Yanjun, and Z. Geng, "Analytical model of a laser range profile from rough convex quadric bodies of revolution," J. Opt. Soc. Am. A 29(7), 1383–1388 (2012).
10. T. Yang, Y. Xu, H. Tian, D. Die, Q. Du, B. Zhang, and Y. Dan, "Propagation of partially coherent Laguerre Gaussian beams through inhomogeneous turbulent atmosphere," J. Opt. Soc. Am. A 34(5), 713–720 (2017).
11. R. Schoemaker and K. Benoist, "Characterisation of small targets in a maritime environment by means of laser range profiling," Proc. SPIE 8037, 803705 (2011).
12. Z.-S. Wu and Y.-Q. Li, "Scattering of a partially coherent Gaussian–Schell beam from a diffuse target in slant atmospheric turbulence," J. Opt. Soc. Am. A 28(7), 1531–1539 (2011).
13. Y. Zhang, M. Cheng, Y. Zhu, J. Gao, W. Dan, Z. Hu, and F. Zhao, "Influence of atmospheric turbulence on the transmission of orbital angular momentum for Whittaker-Gaussian laser beams," Opt. Express 22(18), 22101–22110 (2014).
14. H. Ahlberg, S. Lundqvist, D. Letalick, I. Renhorn, and O. Steinvall, "Imaging Q-switched CO2 laser radar with heterodyne detection: design and evaluation," Appl. Opt. 25(17), 2891 (1986).
15. J. H. Shapiro, B. A. Capron, and R. C. Harney, "Imaging and target detection with a heterodyne-reception optical radar," Appl. Opt. 20(19), 3292–3313 (1981).
Pointwise bounds for the Green's function for the Neumann-Laplace operator in $ \text{R}^3 $
David Hoff
Indiana University, Department of Mathematics, Bloomington, IN, USA
Bob Glassey and I often discussed the pedagogy of applied analysis, agreeing in particular that elementary facts should have elementary proofs. This work is offered in that spirit and in his memory
Received June 2021; revised October 2021; early access November 2021.
We derive pointwise bounds for the Green's function and its derivatives for the Laplace operator on smooth bounded sets in $ {\bf R}^3 $ subject to Neumann boundary conditions. The proofs require only ordinary calculus, scaling arguments and the most basic facts of $ L^2 $-Sobolev space theory.
Keywords: Pointwise bounds, Green's function.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: David Hoff. Pointwise bounds for the Green's function for the Neumann-Laplace operator in $ \text{R}^3 $. Kinetic & Related Models, doi: 10.3934/krm.2021037
WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations
T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 15 November 2022, e058
We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$. The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{\odot}$. The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5\sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8}\,(D_{\rm L}/\mathrm{100\,Mpc})^{2}\,{\rm M}_{\odot}$ across 50 spectral channels (${\approx}200\,\mathrm{km\,s}^{-1}$) and a $5\sigma$ H i column density sensitivity of about $8.6 \times 10^{19}\,(1+z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels (${\approx}20\,\mathrm{km\,s}^{-1}$) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts still affect the data quality. Most notably, there are systematic flux errors of up to several tens of percent caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey.
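As a back-of-envelope check (ours, not from the paper), the quoted $5\sigma$ H i mass sensitivity follows from the standard H i mass relation $M_{\rm HI}/{\rm M}_\odot = 2.356 \times 10^5\,(D_{\rm L}/{\rm Mpc})^2\,S_{\rm int}/({\rm Jy\,km\,s^{-1}})$, assuming independent channel noise; both the relation and the channel-width conversion below are our assumptions, not statements from the paper.

```python
import numpy as np

sigma = 1.6e-3                       # Jy per beam per channel
nchan = 50
dv = 2.998e5 * 18.5e3 / 1.4204e9     # channel width in km/s at the HI rest frequency
S_lim = 5.0 * sigma * np.sqrt(nchan) * dv  # 5-sigma integrated flux limit (Jy km/s)
M_HI = 2.356e5 * 100.0 ** 2 * S_lim        # at D_L = 100 Mpc
print(nchan * dv, S_lim, M_HI)             # ~195 km/s, ~0.22 Jy km/s, ~5.2e8 Msun
```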
Dietary diversity and depression: cross-sectional and longitudinal analyses in Spanish adult population with metabolic syndrome. Findings from PREDIMED-Plus trial
Naomi Cano-Ibáñez, Lluis Serra-Majem, Sandra Martín-Peláez, Miguel Ángel Martínez-González, Jordi Salas-Salvadó, Dolores Corella, Camille Lassale, Jose Alfredo Martínez, Ángel M Alonso-Gómez, Julia Wärnberg, Jesús Vioque, Dora Romaguera, José López-Miranda, Ramon Estruch, Ana María Gómez-Pérez, José Lapetra, Fernando Fernández-Aranda, Aurora Bueno-Cavanillas, Josep A Tur, Naiara Cubelos, Xavier Pintó, José Juan Gaforio, Pilar Matía-Martín, Josep Vidal, Cristina Calderón, Lidia Daimiel, Emilio Ros, Alfredo Gea, Nancy Babio, Ignacio Manuel Gimenez-Alba, María Dolores Zomeño-Fajardo, Itziar Abete, Lucas Tojal Sierra, Rita P Romero-Galisteo, Manoli García de la Hera, Marian Martín-Padillo, Antonio García-Ríos, Rosa M Casas, JC Fernández-García, José Manuel Santos-Lozano, Estefanía Toledo, Nerea Becerra-Tomas, Jose V Sorli, Helmut Schröder, María A Zulet, Carolina Sorto-Sánchez, Javier Diez-Espino, Carlos Gómez-Martínez, Montse Fitó, Almudena Sánchez-Villegas
Journal: Public Health Nutrition , First View
Published online by Cambridge University Press: 19 July 2022, pp. 1-13
To examine the cross-sectional and longitudinal (2-year follow-up) associations between dietary diversity (DD) and depressive symptoms.
An energy-adjusted dietary diversity score (DDS) was assessed using a validated FFQ and was categorised into quartiles (Q). The variety in each food group was classified into four categories of diversity (C). Depressive symptoms were assessed with the Beck Depression Inventory-II (Beck II) questionnaire, and depression cases were defined as physician-diagnosed depression or Beck II ≥ 18. Linear and logistic regression models were used.
Spanish older adults with metabolic syndrome (MetS).
A total of 6625 adults aged 55–75 years from the PREDIMED-Plus study with overweight or obesity and MetS.
Total DDS was inversely and statistically significantly associated with depression in the cross-sectional analysis conducted; OR Q4 v. Q1 = 0·76 (95 % CI (0·64, 0·90)). This was driven by high diversity compared to low diversity (C3 v. C1) of vegetables (OR = 0·75, 95 % CI (0·57, 0·93)), cereals (OR = 0·72, 95 % CI (0·56, 0·94)) and proteins (OR = 0·27, 95 % CI (0·11, 0·62)). In the longitudinal analysis, there was no significant association between the baseline DDS and changes in depressive symptoms after 2 years of follow-up, except for DD in vegetables (C4 v. C1: β = 0·70, 95 % CI (0·05, 1·35)).
According to our results, DD is inversely associated with depressive symptoms, but eating a more diverse diet does not seem to reduce the risk of future depression. Additional longitudinal studies (with longer follow-up) are needed to confirm these findings.
Mental impact of Covid-19 among Spanish healthcare workers. A large longitudinal survey
J. Alonso, G. Vilagut, I. Alayo, M. Ferrer, F. Amigo, A. Aragón-Peña, E. Aragonès, M. Campos, I. del Cura-González, I. Urreta, M. Espuga, A. González Pinto, J. M. Haro, N. López Fresneña, A. Martínez de Salázar, J. D. Molina, R. M. Ortí Lucas, M. Parellada, J. M. Pelayo-Terán, A. Pérez Zapata, J. I. Pijoan, N. Plana, M. T. Puig, C. Rius, C. Rodriguez-Blazquez, F. Sanz, C. Serra, R. C. Kessler, R. Bruffaerts, E. Vieta, V. Pérez-Solá, P. Mortier, MINDCOVID Working group
Journal: Epidemiology and Psychiatric Sciences / Volume 31 / 2022
Published online by Cambridge University Press: 29 April 2022, e28
Longitudinal data on the mental health impact of the coronavirus disease 2019 (Covid-19) pandemic in healthcare workers is limited. We estimated prevalence, incidence and persistence of probable mental disorders in a cohort of Spanish healthcare workers (Covid-19 waves 1 and 2) and identified associated risk factors.
8996 healthcare workers evaluated between 5 May and 7 September 2020 (baseline) were invited to a second web-based survey (October–December 2020). Major depressive disorder (PHQ-8 ≥ 10), generalised anxiety disorder (GAD-7 ≥ 10), panic attacks, post-traumatic stress disorder (PCL-5 ≥ 7), and alcohol use disorder (CAGE-AID ≥ 2) were assessed. Distal (pre-pandemic) and proximal (pandemic) risk factors were included. We estimated the incidence of probable mental disorders (among those without disorders at baseline) and persistence (among those with disorders at baseline). Logistic regression estimates of individual-level [odds ratios (OR)] and population-level (population attributable risk proportions) associations were obtained, adjusting for all distal risk factors, health care centre and time of baseline interview.
4809 healthcare workers participated in the four-month follow-up (cooperation rate = 65.7%; mean = 120 days, s.d. = 22 days from baseline assessment). Follow-up prevalence of any disorder was 41.5% (v. 45.4% at baseline, p < 0.001); incidence, 19.7% (s.e. = 1.6) and persistence, 67.7% (s.e. = 2.3). Proximal factors showing significant bivariate-adjusted associations with incidence included: work-related factors [prioritising Covid-19 patients (OR = 1.62)], stress factors [personal health-related stress (OR = 1.61)], interpersonal stress (OR = 1.53) and financial factors [significant income loss (OR = 1.37)]. Risk factors associated with persistence were largely similar.
Our study indicates that the prevalence of probable mental disorders among Spanish healthcare workers during the second wave of the Covid-19 pandemic was similarly high to that after the first wave. This was in good part due to the persistence of mental disorders detected at baseline, together with a relevant incidence of about 1 in 5 among HCWs without mental disorders during the first wave. Health-related stress, work-related factors and interpersonal stress are important risk factors for both the persistence and the incidence of mental disorders; adequately addressing them might have prevented a considerable amount of the pandemic's mental health impact among this vulnerable population and might substantially reduce the prevalence of these disorders. Study registration number: NCT04556565
Informal Mining in Colombia: Gender-Based Challenges for the Implementation of the Business and Human Rights Agenda
Lina M Céspedes-Báez, Enrique Prieto-Ríos, Juan P Pontón-Serra
Journal: Business and Human Rights Journal / Volume 7 / Issue 1 / February 2022
Published online by Cambridge University Press: 02 March 2022, pp. 67-83
This paper analyses whether the implementation of business and human rights (BHR) frameworks in Colombia properly responds to the challenges posed by informal mining and gender-based violence and discrimination in the context of conflict and peacebuilding. The mining sector has been considered key in Colombia to promote economic growth, but it is also characterized by significant informality. Informal mining in Colombia has been linked to gender-based violence and discrimination. We contend that while informality has been identified as a substantial hurdle to the realization of human rights, BHR frameworks still fall short in addressing this aspect of business. By examining the specific measures Colombia has devised to implement BHR, including two National Action Plans on BHR, we demonstrate the urgency of addressing informal economies in BHR and of continuing to develop particular insights to properly protect, respect and remedy the human rights wrongs women experience in the context of informal mining.
Australian square kilometre array pathfinder: I. system description
Australian SKA Pathfinder
A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier
Published online by Cambridge University Press: 05 March 2021, e009
In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$ 12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.
P01.18 Role of pro-inflammatory cytokines in Down's syndrome
M.G. Carta, M.C. Hardoy, P.E. Manconil, P. Serra, A. Barrancal, C.M. Caffarelli, E. Mancal, A. Ghianil
Journal: European Psychiatry / Volume 15 / Issue S2 / October 2000
Published online by Cambridge University Press: 16 April 2020, p. 325s
Psychiatric Emergency Service use in Coimbra University Hospitals: Results from a 6-Month Cross-Sectional Study Sample
J. Cerejeira, H. Firmino, I. Boto, H. Rita, G. Santos, J. Teixeira, L. Vale, P. Abrantes, A. Vaz Serra
Journal: European Psychiatry / Volume 24 / Issue S1 / January 2009
Published online by Cambridge University Press: 16 April 2020, p. 1
The Psychiatric Emergency Service (PES) is an important part of the mental health care system for the management of acute conditions requiring prompt intervention, also representing a significant part of the workload of specialists and trainees. The objective of this study was to characterize the clinical features of patients observed in the PES of Coimbra University Hospitals.
During the first 6 months of 2008, demographic and clinical data were obtained for all patients observed by the first author of the study, together with a specialist in Psychiatry.
The sample consisted of 159 patients, 103 females and 56 males. Mean age was 45.9 ± 18.367 years. The majority of patients presented in the emergency room either alone (56.6%) or with a first-degree relative (34.6%), on their own initiative and with a past psychiatric history (71.1%). Disturbing mood symptoms (depression, anxiety or both) were the motive of assessment in 58% of patients, but several other causes were reported, including behavioural symptoms, agitation, psychosis, drug- or alcohol-related disorders, and sleep and cognitive disorders. The average Clinical Global Impression score was 4.12 ± 1.177. After the psychiatric assessment, several diagnoses were made, namely Major Depressive Episode (14.5%), Adaptation Disorders (13.9%), Schizophrenia and related disorders (13.8%), Anxiety Disorder Not Otherwise Specified (11.9%) and Drug or Alcohol related disorders (8.2%). Most patients were discharged without referral (50.3%).
A significant percentage of patients went to the PES for conditions that could have been treated by a primary care physician or in an outpatient clinic setting.
1954 – Intervention Group In Patients With Chronic Low Back Pain: a Multidisciplinary Approach
P. Lusilla, C. Castellano-Tejedor, E. Barnola-Serra, C. Ramos Rodon, T. Biedermann-Villagra, M.L. Torrent-Bertran, G. Costa-Requena, L. Camprubí-Roca, A. Palacios-González, A. Cuxart-Fina, A. Ginés-Puertas, A. Bosch-Graupera
Journal: European Psychiatry / Volume 28 / Issue S1 / 2013
Non-specific chronic low back pain is one of the most common causes of disability and a recurrent medical complaint with high costs. Rehabilitative medicine offers physiotherapy programs and general postural recommendations. Although this treatment is aimed at reducing disability, severity of pain and anxiety-depressive symptoms, many patients report only partial improvement and recurrence of pain. Therefore, a new approach to treating this pathology, with a broader focus on the psychosocial issues that might modulate pain and its evolution, is required.
Aims and hypothesis
To assess the effectiveness of two interventions complementary to physiotherapy: relaxation techniques (specifically, sophrology) and cognitive behavioral intervention. It is hypothesized that the intervention groups will significantly improve their adherence to physiotherapy and will gain control over their pain. Ultimately, this will foster better quality of life.
Longitudinal design with pre-post intervention measures and follow-up appointments (at 6 and 12 months) carried out in a sample of 66 participants. The sample will be divided into three groups: control (physiotherapy), intervention group 1 (physiotherapy & sophrology) and intervention group 2 (physiotherapy & cognitive behavioral intervention). In all groups biomedical aspects regarding type, evolution and characterization of pain as well as several psychosocial factors will be assessed.
Preliminary results are expected by December 2013.
If the hypotheses are confirmed, we will be able to provide empirical evidence to justify a multidisciplinary care model for chronic low back pain, which will favor a significant cost reduction in terms of health care and human suffering.
Motivations behind suicide attempts: A study in the ER of Maggiore hospital – Novara
D. Marangon, C. Gramaglia, E. Gattoni, M. Chiarelli Serra, C. Delicato, S. Di Marco, A. Venesia, L. Castello, G.C. Avanzi, P. Zeppegno
Journal: European Psychiatry / Volume 41 / Issue S1 / April 2017
Published online by Cambridge University Press: 23 March 2020, pp. S398-S399
A previous study conducted in the province of Novara stated that, from an epidemiological and clinical point of view, being female, being a migrant, the warmer months of the year, and suffering from an untreated psychiatric disease are associated with suicide attempts. The literature suggests there is a positive relation between negative life events and suicidal behaviours. In this study, we intend to deepen this knowledge by identifying the motivations and meanings underlying suicidal behaviours. This appears a meaningful approach to integrate studies and initiatives in order to prevent suicide and suicidal behaviours.
To examine possible correlation between socio-demographic and clinical characteristics and motivations underlying suicide attempts.
Patients aged > 16 years admitted for attempted suicide to the Emergency Room of the AOU Maggiore della Carità Hospital, Novara, Italy, were studied retrospectively from 1st January 2015 to 31st December 2016. Each patient was assessed by an experienced psychiatrist with a clinical interview; socio-demographic and clinical features were gathered. Analyses were performed with SPSS.
Data collection is still ongoing; results and implications will be discussed. We expect to find different motivations in relation to socio-demographic and clinical characteristics [1,2].
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Early clinical predictors and correlates of long-term morbidity in bipolar disorder
G. Serra, A. Koukopoulos, L. De Chiara, A.E. Koukopoulos, G. Sani, L. Tondo, P. Girardi, D. Reginaldi, R.J. Baldessarini
Journal: European Psychiatry / Volume 43 / June 2017
Identifying factors predictive of long-term morbidity should improve clinical planning, limiting the disability and mortality associated with bipolar disorder (BD).
We analyzed factors associated with total, depressive and mania-related long-term morbidity and their ratio D/M, as %-time ill between a first-lifetime major affective episode and last follow-up of 207 BD subjects. Bivariate comparisons were followed by multivariable linear regression modeling.
Total % of months ill during follow-up was greater in 96 BD-II (40.2%) than 111 BD-I subjects (28.4%; P = 0.001). Time in depression averaged 26.1% in BD-II and 14.3% in BD-I, whereas mania-related morbidity was similar in both, averaging 13.9%. Their ratio D/M was 3.7-fold greater in BD-II than BD-I (5.74 vs. 1.96; P < 0.0001). Predictive factors independently associated with total %-time ill were: [a] BD-II diagnosis, [b] longer prodrome from antecedents to first affective episode, and [c] any psychiatric comorbidity. Associated with %-time depressed were: [a] BD-II diagnosis, [b] any antecedent psychiatric syndrome, [c] psychiatric comorbidity, and [d] agitated/psychotic depressive first affective episode. Associated with %-time in mania-like illness were: [a] fewer years ill and [b] (hypo)manic first affective episode. The long-term D/M morbidity ratio was associated with: [a] anxious temperament, [b] depressive first episode, and [c] BD-II diagnosis.
Long-term depressive morbidity greatly exceeded mania-like morbidity in BD patients. BD-II subjects spent 42% more time ill overall, with a 3.7-times greater D/M morbidity ratio, than BD-I. More time depressed was predicted by agitated/psychotic initial depressive episodes, psychiatric comorbidity, and BD-II diagnosis. Longer prodrome and any antecedent psychiatric syndrome were respectively associated with total and depressive morbidity.
The Psycho-geriatric Patient in the Emergency Room (ER) of the Maggiore della Carità Hospital in Novara
E. Di Tullio, C. Vecchi, A. Venesia, L. Girardi, C. Molino, P. Camera, M. Chiarelli Serra, C. Gramaglia, A. Feggi, P. Zeppegno
Due to population aging, the health system will face increasing challenges in the coming years. Mental disorders are major public health issues in late life, with mood and anxiety disorders being among the most common mental disorders in the elderly. For this reason, increasing attention has to be paid to the evaluation of the elderly in psychiatric emergency settings.
To evaluate the socio-demographic and clinical features of over-65 patients referred for psychiatric consultation in the ER of the "Maggiore della Carità" Hospital in Novara over a 7-year period.
The analysis of the characteristics of the study sample could be potentially useful in resource planning in order to better serve this important segment of the general population.
Determinants of ER visits for over-65 patients referred for psychiatric evaluation were studied retrospectively from 2008 to 2015.
Elderly patients made up 14.7% (n = 458) of all psychiatric evaluations in the ER (n = 3124). About two thirds (65.9%) were females and one third were males (34.1%). The mean age of patients recruited was 75.11 years. The majority of subjects (68.6%) presented without an Axis I diagnosis according to DSM-IV. The most frequent diagnoses otherwise were "cognitive disorders" (11.4%) and "mood disorders" (10.9%).
The large proportion of patients without an Axis I diagnosis could be related to a misunderstanding of the psychosocial aspects of aging. Preliminary results highlight the importance of research on this topic, considering population aging and the impact of mental disorders in late life.
The recurrent nuclear activity of Fornax A and its interaction with the cold gas
F. M. Maccagni, P. Serra, M. Murgia, F. Govoni, K. Morokuma-Matsui, D. Kleiner
Journal: Proceedings of the International Astronomical Union / Volume 15 / Issue S359 / March 2020
Print publication: March 2020
Sensitive (noise ∼16 μJy beam⁻¹), high-resolution (∼10″) MeerKAT observations of Fornax A show that its giant lobes have a double-shell morphology, where dense filaments are embedded in a diffuse and extended cocoon, while the central radio jets are confined within the host galaxy. The spectral radio properties of the lobes and jets of Fornax A reveal that its nuclear activity is rapidly flickering. Multiple episodes of nuclear activity must have formed the radio lobes, the last of which stopped 12 Myr ago. More recently (∼3 Myr ago), a less powerful and short (≲1 Myr) phase of nuclear activity generated the central jets. The distribution and kinematics of the neutral and molecular gas in the centre give insight into the interaction between the recurrent nuclear activity and the surrounding interstellar medium.
Active and passive surveillance for bat lyssaviruses in Italy revealed serological evidence for their circulation in three bat species
S. Leopardi, P. Priori, B. Zecchin, G. Poglayen, K. Trevisiol, D. Lelli, S. Zoppi, M. T. Scicluna, N. D'Avino, E. Schiavon, H. Bourhy, J. Serra-Cobo, F. Mutinelli, D. Scaravelli, P. De Benedictis
Journal: Epidemiology & Infection / Volume 147 / 2019
Published online by Cambridge University Press: 04 December 2018, e63
The wide geographical distribution and genetic diversity of bat-associated lyssaviruses (LYSVs) across Europe suggest that similar viruses may also be harboured in Italian insectivorous bats. Indeed, bats were first included within the passive national surveillance programme for rabies in wildlife in the 1980s, while active surveillance has been performed since 2008. The active surveillance strategies implemented allowed us to detect neutralizing antibodies directed towards European bat 1 lyssavirus in six of the nine maternity colonies studied across the whole country. Seropositive bats were Myotis myotis, M. blythii and Tadarida teniotis. By contrast, the virus was not detected through either passive or active surveillance, suggesting that fatal neurological infection is rare even in seropositive colonies. Although the number of tested samples has steadily increased in recent years, submission turned out to be rather sporadic and did not include carcasses from bat species that account for the majority of LYSV cases in Europe, such as Eptesicus serotinus, M. daubentonii, M. dasycneme and M. nattereri. A closer collaboration with bat handlers is therefore mandatory to improve passive surveillance and decrypt the significance of the serological data obtained up to now.
Genetic parameters of backfat fatty acids and carcass traits in Large White pigs
R. Davoli, G. Catillo, A. Serra, M. Zappaterra, P. Zambonelli, D. Meo Zilio, R. Steri, M. Mele, L. Buttazzoni, V. Russo
Journal: animal / Volume 13 / Issue 5 / May 2019
Print publication: May 2019
Subcutaneous fat thickness and fatty acid composition (FAC) play an important role in seasoning loss and the organoleptic characteristics of seasoned hams. The dry-cured ham industry prefers meats with low contents of polyunsaturated fatty acids (PUFA) because these negatively affect fat firmness and ham quality, whereas consumers require higher contents of those fatty acids (FA) for their positive effect on human health. A population of 950 Italian Large White pigs from the Italian National Sib Test Selection Programme was investigated with the aim of estimating heritabilities and genetic and phenotypic correlations of backfat FAC, Semimembranosus muscle intramuscular fat (IMF) content and other carcass traits. The pigs were reared in controlled environmental conditions at the same central testing station and were slaughtered on reaching 150 kg live weight. Backfat samples were collected to analyze FAC by gas chromatography. Carcass traits showed heritability levels from 0.087 for estimated carcass lean percentage to 0.361 for hot carcass weight. Heritability values of FA classes were low-to-moderate, ranging from 0.245 for n-3 PUFA to 0.264 for monounsaturated FA (MUFA). Polyunsaturated fatty acids showed a significant genetic correlation with loin thickness (0.128), backfat thickness (−0.124 for backfat measured by Fat-O-Meat'er and −0.175 for backfat measured by calibre) and IMF (−0.102). Obviously, C18:2(n-6) shows similar genetic correlations with the same traits (0.211 with loin thickness, −0.206 with backfat measured by Fat-O-Meat'er, −0.291 with backfat measured by calibre and −0.171 with IMF). Monounsaturated FA, except with the backfat measured by calibre (0.068; P<0.01), do not show genetic correlations with carcass characteristics, whereas a negative genetic correlation was found between MUFA and saturated FA (SFA; −0.339; P<0.001). These results suggest that the MUFA/SFA ratio could be increased without interfering with carcass traits. The level of genetic correlations between FA and carcass traits should be taken into account in the development of selection schemes aimed at modifying carcass composition and/or backfat FAC.
The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array
D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. Wilson
Published online by Cambridge University Press: 09 September 2016, e042
We describe the performance of the Boolardy Engineering Test Array (BETA), the prototype for the Australian Square Kilometre Array Pathfinder telescope. BETA is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of BETA's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science that it has produced and draw lessons from operating BETA that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Pathfinder telescope.
Iodine status and thyroid function among Spanish schoolchildren aged 6–7 years: the Tirokid study
L. Vila, S. Donnay, J. Arena, J. J. Arrizabalaga, J. Pineda, E. Garcia-Fuentes, C. García-Rey, J. L. Marín, M. Serra-Prat, I. Velasco, A. López-Guzmán, L. M. Luengo, A. Villar, Z. Muñoz, O. Bandrés, E. Guerrero, J. A. Muñoz, G. Moll, F. Vich, E. Menéndez, M. Riestra, Y. Torres, P. Beato-Víbora, M. Aguirre, P. Santiago, J. Aranda, C. Gutiérrez-Repiso
Journal: British Journal of Nutrition / Volume 115 / Issue 9 / 14 May 2016
Published online by Cambridge University Press: 10 March 2016, pp. 1623-1631
Print publication: 14 May 2016
I deficiency is still a worldwide public health problem, with children being especially vulnerable. No nationwide study had been conducted to assess the I status of Spanish children, and thus an observational, multicentre and cross-sectional study was conducted in Spain to assess the I status and thyroid function in schoolchildren aged 6–7 years. The median urinary I (UI) and thyroid-stimulating hormone (TSH) levels in whole blood were used to assess the I status and thyroid function, respectively. A FFQ was used to determine the consumption of I-rich foods. A total of 1981 schoolchildren (52 % male) were included. The median UI was 173 μg/l, and 17·9 % of children showed UI<100 μg/l. The median UI was higher in males (180·8 v. 153·6 μg/l; P<0·001). Iodised salt (IS) intake at home was 69·8 %. IS consumption and intakes of ≥2 glasses of milk or 1 cup of yogurt/d were associated with significantly higher median UI. Median TSH was 0·90 mU/l and was higher in females (0·98 v. 0·83; P<0·001). In total, 0·5 % of children had known hypothyroidism (derived from the questionnaire) and 7·6 % had TSH levels above reference values. Median TSH was higher in schoolchildren with family history of hypothyroidism. I intake was adequate in Spanish schoolchildren. However, no correlation was found between TSH and median UI in any geographical area. The prevalence of TSH above reference values was high and its association with thyroid autoimmunity should be determined. Further assessment of thyroid autoimmunity in Spanish schoolchildren is desirable.
Morphology of the oxyurid nematodes Trypanoxyuris (T.) cacajao n. sp. and T. (T.) ucayalii n. sp. from the red uakari monkey Cacajao calvus ucayalii in the Peruvian Amazon
D.F. Conga, E.G. Giese, N.M. Serra-Freire, M. Bowler, P. Mayor
Journal: Journal of Helminthology / Volume 90 / Issue 4 / July 2016
Cacajao calvus ucayalii (Thomas, 1928) (Primates: Pitheciidae), a subspecies endemic to the Peruvian Amazon, occurs in patchy and sometimes isolated populations in north-eastern Peru and is in a vulnerable situation, mainly due to habitat loss and hunting. This rareness and remote distribution mean that, until now, parasitological studies have been limited. Based on optical and scanning electron microscopy of specimens of both sexes, we report two new species of Trypanoxyuris pinworms occurring in the large intestine of the Peruvian red uakari, namely Trypanoxyuris (Trypanoxyuris) cacajao and Trypanoxyuris (Trypanoxyuris) ucayalii. Both species showed a distinct morphology of the lips and cephalic structure. Sexual dimorphism in the lateral alae was observed in both male and female worms, with ventral ornamentation shown in the oesophageal teeth. The finding of these new pinworm species highlights the possibility of discovering other species.
Volatiles in raw and cooked meat from lambs fed olive cake and linseed
R. S. Gravador, A. Serra, G. Luciano, P. Pennisi, V. Vasta, M. Mele, M. Pauselli, A. Priolo
Journal: animal / Volume 9 / Issue 4 / April 2015
Print publication: April 2015
This study was conducted to determine the effects of feeding olive cake and linseed to lambs on the volatile organic compounds (VOCs) in raw and cooked meat. Four groups of eight male Appenninica lambs each were fed: conventional cereal-based concentrates (diet C), concentrates containing 20% on a dry matter (DM) basis of rolled linseed (diet L), concentrates containing 35% DM of stoned olive cake (diet OC), or concentrates containing both rolled linseed (10% DM) and stoned olive cake (17% DM; diet OCL). The longissimus dorsi muscle of each lamb was sampled at slaughter and was subjected to VOC profiling through the use of SPME-GC-MS. In the raw meat, the concentration of 3-methylpentanoic acid was higher in treatment C as compared with treatments L, OC and OCL (P<0.01). Moreover the level of nonanoic acid was greater in treatments C and OC than in treatment L (P<0.05). With respect to alcohols, in raw meat the amount of 2-phenoxyethanol in treatment OCL was lower than in treatments C (P<0.01) and OC (P<0.05), while in cooked meat the amount of 1-pentanol was higher in treatment C than in treatment OC (P<0.05). Apart from these compounds, none of the lipid oxidation-derived volatiles was significantly affected by the dietary treatment. Therefore, the results suggest that the replacement of cereal concentrates with linseed and/or olive cake did not cause appreciable changes in the production of volatile organic compounds in lamb meat.
On the connection between the thick disk and the galactic bar
A. Spagna, A. Curir, R. Drimmel, M.G. Lattanzi, P. Re Fiorentin, A.L. Serra
Journal: European Astronomical Society Publications Series / Volume 68 / 2014
Published online by Cambridge University Press: 17 July 2015, p. 405
Print publication: 2014
Although the thick disk in our Galaxy was revealed more than thirty years ago, its formation scenario is still unclear. Here, we analyze a chemo-dynamical simulation of a primordial disk population representative of the Galactic thick disk and investigate how the spatial, kinematic, and chemical properties are affected by the presence of a central bar.
The use of stoned olive cake and rolled linseed in the diet of intensively reared lambs: effect on the intramuscular fatty-acid composition
M. Mele, A. Serra, M. Pauselli, G. Luciano, M. Lanza, P. Pennisi, G. Conte, A. Taticchi, S. Esposto, L. Morbidini
Journal: animal / Volume 8 / Issue 1 / January 2014
Print publication: January 2014
The aim of the present study was to evaluate the effect of the inclusion of stoned olive cake and rolled linseed in a concentrate-based diet for lambs on the fatty-acid composition of polar and non-polar intramuscular lipids of the longissimus dorsi muscle. To achieve this objective, 32 Appenninica lambs were randomly distributed into four groups of eight lambs each and were fed conventional cereal-based concentrates (diet C); concentrates containing 20% on a dry matter (DM) basis of rolled linseed (diet L); concentrates containing 35% DM of stoned olive cake (diet OC); and concentrates containing both rolled linseed (10% DM) and stoned olive cake (17% DM; diet OCL). The concentrates were administered together with grass hay at a 20:80 forage:concentrate ratio. Growing performances and carcass traits were evaluated. The fatty-acid composition was analysed in the total intramuscular lipids, as well as in the polar and neutral lipids. The average feed intake and the growth performance of lambs were not affected by the dietary treatments, as a consequence of similar nutritional characteristics of the diets. The inclusion of rolled linseed in the L and OCL diets increased the content of C18:3 n-3 in intramuscular total lipids, which was threefold higher in meat from the L lambs and more than twofold higher in meat from the OCL lambs compared with the C and OC treatments. The n-6:n-3 ratio significantly decreased in the meat from lambs in the L and OCL groups, reaching values below 3. The L treatment resulted in the highest level of trans-18:1 fatty acids in the muscle. Regardless of the dietary treatment, the t10-18:1 was the major isomer, representing 55%, 45%, 49% and 45% of total trans-18:1 for C, L, OC and OCL treatments, respectively. Neutral lipids from the OC-fed lambs contained the highest amount of c9-18:1 (more than 36% of total fatty acids); however, the content of c9-18:1 did not differ between the OC and C lambs, suggesting an intensive biohydrogenation of dietary c9-18:1 in the case of OC treatment. The highest content of c9,t11-18:2 was detected in the intramuscular fat from the L-fed lambs, followed by the OCL treatment. A similar trend was observed in the neutral lipid fraction and, to a lower extent, in the polar lipids.
The Emperor bans the number seven on pain of death. Does this inevitably disrupt a medieval society? (trade and commerce) [closed]
The medieval Emperor is up to his tricks again. He has decided that the number seven is unlucky. Despite advice from his courtiers that it can't be done, he bans its use by any of his subjects on pain of death.
This is a medieval society so they can get by without terribly complex mathematics. They mostly do add, subtract, multiply and divide. They don't have a concept of negative numbers. Apart from scribes and sages, most people do arithmetic by lining up groups of pebbles.
Here are some of the dilemmas the subjects face.
If you have eight sheep and one dies then you must immediately kill another one.
The local coinage is the Grundy. You must never be caught with exactly 7 Grundies on your person.
Counting and adding appear to be inconsistent: When children learn to count on their fingers, it goes as follows: 1, 2, 3, 4, 5, 6, 8, 9, 10, 11 ... Thus they all come to the conclusion that they have eleven digits. Except that when they count their hands separately and add, they get 5 + 5 = 10.
Similarly if you add 3 + 4 then the answer is not allowed to be 7, but how to get around this?
Is there any consistent way that arithmetic can legally be done or will numbers just descend into chaos, thus disrupting trade and commerce?
If you think this kind of legislation is implausible then look at this (thanks to Alexis for drawing my attention): http://www.indianalegalarchive.com/journal/2015/3/14/legislating-pi
You cannot simply invent a new word or symbol for 7. That is just as bad. It is forbidden to have exactly seven of something regardless of what you call that number.
Most people do arithmetic by lining up groups of pebbles. Scribes and sages can use quill and paper for keeping accounts but no-one has yet mastered mental arithmetic.
The Emperor has decreed that from now on there are 6 days in a week.
Just to illustrate that the Emperor isn't alone in this sort of madness, have a look at this article: 7 Modern Dictators Way Crazier Than You Thought Possible http://www.cracked.com/article_18850_7-modern-dictators-way-crazier-than-you-thought-possible.html
chasly - supports Monica
You might have already heard of this similar case: indianalegalarchive.com/journal/2015/3/14/legislating-pi
– Alexis
I think there's a fault in the question, at least with your finger-counting example. If none of your children knows what seven is, how do you enforce that they don't use it?
– Jedediah
@Giter - Both. Any symbol or combination of symbols that signifies 7 is banned, and having seven of anything is also banned.
– chasly - supports Monica
Who is enforcing this? Are there sheep-counters travelling the empire? Realistically, people wouldn't do much different until someone gives them clear rules that are enforced, understandable, and able to be carried out by the average person; you mentioned a couple. It will take a lot of time to work out those rules. Soon the emperor will pass on and things will be back to normal - unless your people are irrational. I don't think the link adds any "this could happen" to the case. If your people are however irrational, you have a problem: we cannot tell you what will happen anymore
– Raditz_35
Just how is this going to be enforced? "You, varlet have seven sheep!" "Eh? Wot you talking about?" "You have seven sheep, in violation of the Emperor's edict! One has to go!" "Wot's 'seven', then?"
– nzaman
Yes - you use a Base 6 system which has been done before
Quite a few cultures used Base 6 counting systems in the past, and it is actually quite logical because it is the natural outcome of counting on one hand.
As well as being popular a long time ago, it is even in use today with some native cultures, such as in Papua New Guinea, the Congo and the Ural Mountains.
The basic premise is that one hand gives you six counting positions: zero, then one through five. Using this method, you never get to the number 7, but instead roll over to the number 10.
As an example, a counting sequence looks like this: 0, 1, 2, 3, 4, 5 (next group) 10, 11, 12, 13, 14, 15 (next group) 20, 21, 22, 23, 24, 25, and so on.
A monk in England called Saint Bede demonstrated the full range of this by counting to 10,000 using this technique. Because of its ubiquity it is also common in Chinese number gestures.
So you never actually have 7 of anything, you have 11 instead. You don't even have two 7's (14), you have 22. Your emperor simply changes the counting system to suit Base 6, and as has been shown in the past, the system will work out fine (until you get to SI/metric units, hundreds of years later, where it gets very complicated).
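As a minimal sketch of this (mine, not part of the original answer; the function name to_base6 is made up), here is how a scribe's conversion table from ordinary decimal counts to the Emperor's base-6 numerals could be generated:

def to_base6(n):
    # Repeatedly divide by 6, collecting remainders as base-6 digits.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 6))
        n //= 6
    return "".join(reversed(digits))

# Seven objects are written "11" and fourteen are written "22":
# the digit 7 never appears anywhere in the system.
for count in range(1, 15):
    print(count, "->", to_base6(count))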
flox
I thought of this but the question says you can't create another definition for '7' to avoid this. You would just have 11 in base 6 count as an illegal number now too.
– Lio Elbammalf
@LioElbammalf But it isn't a 'number 7', as you never got to 7 in the first place. Just as we don't get to the 'number A' in a hexadecimal system because we use a decimal system. Just think of it as the number 7 just hasn't been invented yet. Let's say we alter the question to 'Emperor bans the number A in a hexadecimal system, can we still count to 10 perfectly well' then changing to a Decimal system will be the same as we do now, which works out fine, and we don't go around counting 'A'.
– flox
'Natural consequence of counting on one hand' - So why don't we use base 11?? :-p
– Joe Bloggs
Nothing happens, because most folks are smarter than that.
What happens is that everybody at the court simply lies to the Mad Emperor about how great the new law is going over. They probably have long experience spinning such fantasies already, and lots of previous mad edicts probably never made it out the gate either.
Either the Emperor is sane enough to understand that the edict is mad...and so doesn't do it in the first place.
Or the Emperor is barking mad, in which case nobody pays him mind when out of his sight. Usually, Emperors like this get overthrown or usurped rather quickly since they cannot comprehend reality enough to defend themselves, so it might be a very short-term problem (like a spot of poison at tonight's dinner).
In some situations, a coalition at court wants to keep a mad monarch in power for their own purposes. However, since their own wealth and power rely upon accurate accounting, they will certainly not actually implement such foolishness. After all, they have underlings, too. And they eat dinner, too.
en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes
They don't all get overthrown quickly although many do eventually. I've added an EDIT to my question. Meanwhile here's the link. "7 Modern Dictators Way Crazier Than You Thought Possible", cracked.com/…
I'm going to attempt an answer to my own question.
Ban all odd numbers
Thus you are allowed to have: 0, 2, 4, 6, 8, 10, etc. sheep.
Similarly count your fingers in pairs - then everything will add up correctly.
This will always work for addition, subtraction and multiplication. Unfortunately there is a problem with division. Let us just hope that the empire can get by without it.
Note - If you lose a finger you'll have to chop one off to match.
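A quick sanity check of this scheme (my sketch, not part of the original answer): the even numbers are closed under addition, subtraction and multiplication, so pebble arithmetic done in pairs stays legal, while division is exactly where it escapes.

evens = range(0, 20, 2)
# Sums, differences and products of even numbers are always even
# (ignoring, for the check below, the question's ban on negatives).
assert all((a + b) % 2 == 0 for a in evens for b in evens)
assert all((a - b) % 2 == 0 for a in evens for b in evens)
assert all((a * b) % 2 == 0 for a in evens for b in evens)
# Division can leave the system: 6 / 2 = 3, an illegal odd quantity.
print(6 / 2)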
Could they use something like "four-and-three" or "five-and-two," thus technically breaking it up into two different numbers? You haven't counted to seven fingers, you counted to four and then to three.
Jack C.
I don't think the Emperor would take kindly to this. He would soon cotton on that everyone was saying "three'n'four" or whatever instead of "seven" and would count that as a new symbol.
Welcome to Worldbuilding.SE. We usually recommend to take the tour. We might also interest you in the help center and Worldbuilding Chat if you have any question.
– clem steredenn
Strictly speaking, no. There is no possible way of doing this the way you've specified without chaos.
Imagine you have fourteen sheep — an officious local lord looks at them, counts them, and once they reach seven, declares one of that set has to die. They start again, get to seven again, and by the end of the process you've lost 8 sheep... except the slaughterhouse had to stop at six, so two sheep that can't be killed or not-killed are now wandering around like giant woolly Schrödinger's cats.
However, the lack of precise mathematical rigour in your society may save you: those within it wouldn't call it this, but it would effectively be switching to base-6 counting.
You can't have 7 Grundies, but if 6 Grundies makes a Höefer, then everyone who used to have 7 Grundies will now have 1 Höefer and 1 Grundy, which isn't much weirder than pounds/shillings/pence in the UK before 1971.
Likewise, 9 sheep would be {1 {group-of-6} and 3} sheep — this reminds me a bit of French names for numbers.
This doesn't solve the problem of there still being 7 fingers if you hold up 4 on one hand and 3 on the other, but nothing can deal with that.
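A small sketch of the Grundy/Höefer bookkeeping (my illustration; the answer names the coins, but the function is made up):

def to_hoefer_and_grundies(total_grundies):
    # 6 Grundies make 1 Hoefer, so a legal purse never needs
    # to hold seven loose Grundies.
    hoefer, grundies = divmod(total_grundies, 6)
    return hoefer, grundies

print(to_hoefer_and_grundies(7))   # (1, 1): 1 Hoefer and 1 Grundy
print(to_hoefer_and_grundies(9))   # (1, 3): the {group-of-6 and 3} sheep above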
BenRW
Yes but the lord isn't allowed to say "seven" so he will go from 6 to 8. Counting to base 6 is definitely a useful concept. I think that might be the answer. Would you care to elaborate?
If he's not allowed to say "seven", but there are 7, then he's just renamed 7 as "eight", which you explicitly didn't want to permit. As I write this, I realise I should've called it base-7, for the same reason that 10 is not a digit in base-10. What about base-6 (/base-7) would you like me to elaborate about?
– BenRW
Let 6 people get caught
The emperor cannot kill the next person or, at some point, he will have created a law which has caught seven people.
The law cannot go on for more than six days
The law itself must be banned on the seventh.
For that matter it can't go on for more than seven seconds (or milliseconds)
What would happen:
Either everyone would follow it and 7 seconds in, one of his advisors would have him arrested for creating a law that had been around for seven seconds.
Or everyone would be aware that the law was terrible and pretend to the emperor that the rule is in place and everyone follows it when, in fact, they just avoid using the number 7 in his presence.
Lio ElbammalfLio Elbammalf
Yes but when they are counted it will go: 1,2,3,4,5,6,8.
But then people are just replacing 7 with 8 and you've got the problem you mentioned earlier with 'four and three' as a replacement
@Lio Elbammalf - I know. That's the problem!
So all that will happen is he will say '8 actually means 7 so whether we're calling it 7 dead people or 8 dead people the law is in conflict with itself' so the law gets outlawed under its own rules.
The Emperor was advised not to do it and he is hopeless at maths. I'm trying to get my head around what will happen and hence my question.
In a historically accurate feudal society, this would not be the disruption that we think it is in the modern day. There are many workarounds to this that were actually used at some point. Some are still in use in places today.
Bartering and Credit
Many feudal societies did not use cash for trade. They either traded goods directly or kept a ledger of what each customer owed. In a time when it was costly and rare for people to travel far, local small business owners established trust with customers and kept track of debts for long periods of time. When most people worked in agriculture, it was not unusual to wait until the annual harvest to pay off debts.
Bargaining and Haggling
Merchants were wealthy, skilled salesmen who convinced customers to buy their goods at the highest price they were willing to pay; they performed many of the roles a multimillion-dollar advertising industry does today. In a society where haggling and bargaining are widespread, a ban on a number would be a trivial inconvenience.
"Those are $8 each"
"I'll give you $14 for 2"
Split Bills or Tipping
It would also be fairly straightforward to split or combine bills to avoid explicitly charging 7 units of currency. Either pay for each item separately if a bill comes to a total with 7, or wait until the bill doesn't contain one to settle the tab. In a society with tipping, it would be fairly easy to round up the bill to avoid charging 7 units as well.
Using valuable goods as currency
It was also common for valuable items, rather than cash, to be used as currency. In the past, gold, silver, rice, and salt have all been used as currency. I'll pay with this piece of gold that happens to weigh 7 grams, but the shopkeeper will weigh it discreetly.
Using foreign currency
When a country becomes economically unstable, it is commonplace for foreign currencies to be used in place of or alongside the local currency. The US dollar, Euro, and Japanese Yen are often still used in Asian and African countries where hyperinflation is an issue. A dual currency system or black market for foreign currency would be simple to implement if it became impractical to use the local currency. The 2 currencies could also be used interchangeably to avoid using 7s.
"That'll be ¥13,000"
pays ¥20,000
"Here's $100 change" (not ¥7000)
Lack of enforcement
A King or Emperor does not have as much power as many believe. Often they were only a figurehead. Even those with real power required a committee or cabinet of advisors and the loyalty of local Lords to effectively rule. Before modern transport and communications infrastructure and technology, it would be unthinkable for one person to run an entire country, especially one composed of many diverse states. Even if an Emperor could convince his own government to enact such a whimsical law, it would be difficult to enforce.
Many states, provinces, and prefectures had far more autonomy than today. This was necessary to prevent revolts in a large disparate Empire. Many Emperors struggled to even collect taxes from the outer regions of their dominions, for example the Tokugawa Shogunate of Japan and the British Colonies in America. This was not a trivial issue to solve. Feudal states often had their own currencies, units of measurement, dialects, and militias. Feudal Lords were in charge of collecting taxes from their domains and passing on a share to the Emperor. Currency was either issued by local banks or measured in local units of weight. It was not uncommon for Lords to be dishonest about how much tax revenue they collected to avoid paying more to the Emperor (this was one of the issues with Imperial units: they differed between regions). When each region has its own language, it would be nearly impossible to ban a word, either. They'll just use a different one that the imperial authorities don't know. It would be far too difficult to enforce to be worth the effort for many regional authorities.
Predicted Outcome
If each state is following its own laws and the Emperor has little control over them, how would he even know whether the regional Lords were complying with the law or enforcing it? What is likely to happen is that the Emperor's decree would largely be ignored. Perhaps a few people will be made an example of, but it won't be widely enforced. For those in the provinces, it will be just another day of politics in the Capital: a moment of vanity of an Emperor with a short-lived reign to be quickly overturned by their successor, or a flagrant excuse for the Emperor to purge any who oppose them.
It would be a few days of chaos in the capital before workarounds are in place, with little to no impact on trade or the economy as a whole. Feudal societies were already a chaotic patchwork of regional systems and it was already difficult to trade between regions, even within an Empire (so exotic goods were very expensive). A minor inconvenience would not impact such an economic system that is already so complex and inefficient.
Tom Kelly
Your question is: "Is there any consistent way that arithmetic can legally be done or will numbers just descend into chaos?"
There is a difference between having 7 stones, sheep or anything else and doing formal arithmetic, unless you use an abacus with 7 pearls on the string. Basically, you have already given the answer to your question yourself when presenting the loophole in the argumentation: children/people count "1, 2, 3, 4, 5, 6, 8, 9, 10, 11". This is just counting in an arithmetic system with 9 digits instead of 10, using 9 as its basis - a nonal system, if you want. Therefore, children have 11 (in the 9-system) = 1*9 + 1*1 fingers, which is arithmetically perfectly consistent.

Our most commonly used number systems are decimal because humans naturally use 10 fingers for counting. If we had fewer fingers, or, like the Maya, counted our fingers and toes and consistently used 20 as the basis for the number system, we would consider a basis other than 10 as the natural one. Therefore, there is no reason why an arithmetic number system with 0, 1, 2, 3, 4, 5, 6, 8, 9 as its digits and 10 meaning 1*9 + 0*1 = 9 (in our decimal number system) would not work.

Most people can barely count or do any calculations and therefore will not notice the difference. You would mainly have to train your money changers and people dealing with foreign currency or lists of goods to transform numbers given in the decimal system into your nonal system. From then on, everything within the realm of your kingdom can be handled consistently with your own arithmetical number system.
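A sketch of this nonal system (my illustration; the digit string and function name are made up): an ordinary positional base-9 system whose nine digit symbols simply skip the character '7'.

DIGITS = "012345689"  # nine symbols; the character '7' does not exist

def to_nonal(n):
    # Ordinary base-9 conversion, rendered with the digit set above.
    if n == 0:
        return DIGITS[0]
    out = []
    while n > 0:
        out.append(DIGITS[n % 9])
        n //= 9
    return "".join(reversed(out))

print(to_nonal(7))   # '8'  -- counting goes 1, 2, 3, 4, 5, 6, 8
print(to_nonal(10))  # '11' -- the eleven fingers the children arrive at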
Alex2006
Related: cowbirdsinlove.com/43
So there's all sorts of fun solutions, like base 6, or restricting arithmetic to the even integers $2\mathbb{Z}$ (that's the formal way of saying "ban all odd numbers" from chasly's self answer). We could also operate in environments which aren't fields.
But this question caught my eye, because I've already asked it. Well, I asked something similar. On Music Theory, I asked whether Chinese music avoids counting in 4's, given that 4 is bad luck (it is a homophone for the word for "death"). It seemed tricky to my Western mind, given that 4/4 music is so popular. The answer showed the kind of creativity that shows up.
One way of expressing meter in traditional Chinese music is in terms of ban and yan - 'beats' and 'eyes'. The 'ban' represents the main beat, or the pulse of the bar, while the 'yan' (eye) represents a weak beat. Some common meters were
One ban followed by three yans : ban - yan - yan - yan - ban - yan - yan - yan
Alternating : ban - yan - ban - yan
Constant strong beat : ban - ban - ban - ban
You can probably see the similarity of the first one to a time signature involving the dreaded unlucky number you mentioned - but expressing it in this way, we've been able to avoid mentioning it!
Now I find this interesting for a few reasons. First off, it's not just fantasy -- they actually count that way. The second is that it looks an awful lot like "counting in base 4," but it points out a subtle detail. They don't think of it that way. They think of it as counting something larger, rather than thinking of it as counting a fixed-size collection. It doesn't really matter how many eyes fit under a beat (as long as it's not four!)
This seems no more unreasonable than using the phrase "He Who Shall Not Be Named" to get around naming someone. The engineer in me thinks this is silly, because you just gave him a name that happens to have 6 words in it... but linguistically, we find people just don't work that way.
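As a toy illustration of that last point (mine, not Cort Ammon's), a rhythm can be spelled out as a stream of named beats, so the size of the bar is never stated as a number at all:

from itertools import cycle, islice

def meter(pattern, bars=2):
    # Spell a rhythm out beat by beat instead of naming a count per bar.
    return " - ".join(islice(cycle(pattern), len(pattern) * bars))

print(meter(["ban", "yan", "yan", "yan"]))  # ban - yan - yan - yan - ban - ...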
Cort Ammon
If it's just about the word and the symbol, there's no real problem. If it's about the visible grouping of equal or equivalent objects, then people can resort to gratuity (destruction of goods is not necessary): they can give things away and establish relationships based on mutual favors. But first they will try to store things somewhere else. Very poor people will have a hard time finding legitimate storage alternatives, but they can bury objects in the common lands. At the marketplace, traders will round up by pricing at either 6 or 8, but the better off will always offer to buy two for the price of fourteen. Of course, if it's about the very idea or concept of seven, then you have to get rid of the decimal system and use the base-6 system described by flox.
Tomás
Except that when they count their hands separately and add, they get 5 + 5 = 10.
I think you should reconsider that requirement, because then the answer is just a plain no. The children of your example have already found a contradiction, so the system cannot possibly be made consistent. In what follows I work out an answer that bends the question a bit.
It is just about labels
Let's call this system the hepto-phobic numeral system, or HPNS for short. It is unwise for merchants to venture into your emperor's country if they are not aware of the conversions between HPNS and our more familiar system.
Luckily, the Encyclopedia Debilica provides the following guidelines
HPNS works similarly to the numeral system we are used to. Any statement made in one system can also be made in the other. The difference is purely a matter of vocabulary.
Converting numbers from one system to another
As far as naming goes integers from 0 to 6 are the same as usual. Integers from 7 onwards (7, 8, 9, 10, ...) are simply re-labelled 8, 9, 10, 11, etc.
This is why when a child from the Empire tells you that they have 11HPNS fingers, you should understand that they mean 10 fingers.
Below is a python implementation (Encyclopedia Debilica is targeted at educated readers learning about not-so-gifted people) to ease the conversion both ways
def regular_to_HPNS(regular_number):
    if regular_number < 7:
        return regular_number
    return regular_number + 1

def HPNS_to_regular(HPNS_number):
    if HPNS_number == 7:
        raise PermissionError("This number is not allowed in HPNS.")
    elif HPNS_number < 7:
        return HPNS_number
    return HPNS_number - 1
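For instance (a quick sanity check of my own, matching the fingers example above):

print(regular_to_HPNS(10))  # 11 -- ten fingers are reported as 11 in HPNS
print(HPNS_to_regular(8))   # 7  -- the HPNS label 8 stands for our 7
HPNS_to_regular(7)          # raises PermissionError: 7 is not a valid HPNS label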
Other operations
When a citizen of the Empire asks you to perform an operation (say $3_{HPNS} \times 11_{HPNS} = ?$), the easiest way to go is to convert all the numbers involved to the system you are familiar with, get the result, and lastly convert it back to HPNS.
$$ 3_{HPNS} + 11_{HPNS} = 3 + 10 = 13 = 14_{HPNS}$$
$$ 8_{HPNS} - 1_{HPNS} = 7 - 1 = 6 = 6_{HPNS} $$
$$ 14_{HPNS} - 3_{HPNS} = 13 - 3 = 10 = 11_{HPNS} $$
$$ 3_{HPNS} \times 11_{HPNS} = 3 \times 10 = 30 = 31_{HPNS}$$
$$ 31_{HPNS} / 11_{HPNS} = 30 / 10 = 3 = 3_{HPNS}$$
Of course, citizens of the Empire do not resort to our numeral system. They just never talk about 7 and have learned their multiplication tables that way. Notice how the common identities are preserved ($a + b - a = b$, etc.), so formal calculations are not a problem either.
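For instance, chaining the addition and subtraction examples above verifies one such identity entirely within HPNS:

$$ (3_{HPNS} + 11_{HPNS}) - 3_{HPNS} = 14_{HPNS} - 3_{HPNS} = 11_{HPNS} $$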
Python implementation
def HPNS_function(regular_function):
    def wrapper(*HPNS_arguments):
        arguments = [HPNS_to_regular(hpns_arg) for hpns_arg in HPNS_arguments]
        result = regular_function(*arguments)
        return regular_to_HPNS(result)
    return wrapper

# Example usage: the decorated body works on ordinary numbers;
# the wrapper handles the HPNS conversions on the way in and out.
@HPNS_function
def HPNS_addition(a, b):
    return a + b
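To see the wrapper reproduce the worked examples above (HPNS_multiplication is a second decorated function added here for illustration):

@HPNS_function
def HPNS_multiplication(a, b):
    return a * b

print(HPNS_addition(3, 11))        # 14, matching the addition example above
print(HPNS_multiplication(3, 11))  # 31, matching the multiplication example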
Alexis
It'll take me a while to get to grips with this! I'll have to come back to it.
Reading tip: the Python is mostly for showing off -- though it demonstrates that any operation can be made sensical.
Asian Pacific Journal of Cancer Prevention
Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회)
The Asian Pacific Journal of Cancer Prevention is a monthly electronic journal publishing papers in all areas of cancer control. It is indexed on PubMed (impact factor for 2014: 2.514) and its scope is wide-ranging, including descriptive, analytical and molecular epidemiology; experimental and clinical histopathology/biology of preneoplasias and early neoplasias; assessment of risk and beneficial factors; experimental and clinical trials of primary preventive measures/agents; screening approaches and secondary prevention; clinical epidemiology; and all aspects of cancer prevention education. All of the papers published are freely available as pdf files downloadable from www.apjcpcontrol.org, directly or through PubMed, or obtainable from the first authors. The APJCP is financially supported by the UICC Asian Regional Office and the National Cancer Center of Korea, where the Editorial Office is housed.
Volume 17 Issue sup3
Phage Particles as Vaccine Delivery Vehicles: Concepts, Applications and Prospects
Jafari, Narjes;Abediankenari, Saeid 8019
https://doi.org/10.7314/APJCP.2015.16.18.8019
The development of new strategies for vaccine delivery for generating protective and long-lasting immune responses has become an expanding field of research. In recent years, it has been recognized that bacteriophages have several potential applications in the biotechnology and medical fields because of their intrinsic advantages, such as ease of manipulation and large-scale production. Over the past two decades, bacteriophages have gained special attention as vehicles for protein/peptide or DNA vaccine delivery. In fact, whole phage particles are used as vaccine delivery vehicles to achieve the aim of enhanced immunization. In this strategy, the carried vaccine is protected from environmental damage by phage particles. In this review, phage-based vaccine categories and their development are presented in detail, with discussion of the potential of phage-based vaccines for protection against microbial diseases and cancer treatment. Also reviewed are some recent advances in the field of phage-based vaccines.
β-Adrenergic Receptors: New Target in Breast Cancer
Wang, Ting;Li, Yu;Lu, Hai-Ling;Meng, Qing-Wei;Cai, Li;Chen, Xue-Song 8031
Background: Preclinical studies have demonstrated that ${\beta}$-adrenergic receptor antagonists could improve the prognosis of breast cancer. However, the conclusions of clinical and pharmacoepidemiological studies have been inconsistent. This review was conducted to re-assess the relationship between beta-adrenoceptor blockers and breast cancer prognosis. Materials and Methods: The literature was searched in the PubMed, EMBASE and Web of Science (Thomson Reuters) databases using key terms such as breast cancer and beta-adrenoceptor blockers. Results: Ten publications met the inclusion criteria. Six suggested that receiving beta-adrenoceptor blockers reduced the risk of breast cancer-specific mortality, and in three of them this reached statistical significance (hazard ratio (HR)=0.42; 95% CI=0.18-0.97; p=0.042). Two studies reported that the risks of recurrence and distant metastasis (DM) were both significantly reduced. One study demonstrated that relapse-free survival (RFS) was significantly improved with beta-blockers (BBS) (HR=0.30; 95% CI=0.10-0.87; p=0.027). One reported a longer disease-free interval (Log Rank (LR)=6.658; p=0.011) in BBS users, but there was no significant association between overall survival (OS) and BBS (HR=0.35; 95% CI=0.12-1.0; p=0.05) in five studies. Conclusions: Through careful consideration, it is suggested that beta-adrenoceptor blocker use may be associated with improved prognosis in breast cancer patients. Nevertheless, larger studies are needed to further explore the relationship between beta-blocker use and breast cancer prognosis.
Identification of HPV Integration and Genomic Patterns Delineating the Clinical Landscape of Cervical Cancer
Akeel, Raid-Al 8041
Cervical cancer is one of the most common cancers in women worldwide. During their lifetime, the vast majority of women become infected with human papillomavirus (HPV), but interestingly only a small portion develop cervical cancer, and in the remainder infection regresses to a normal healthy state. Beyond HPV status, the associated molecular characterization of disease has yet to be established. However, initial work suggests the existence of several different molecular classes, based on the biological features of differentially expressed genes in each subtype. This suggests that additional risk factors play an important role in the outcome of infection. Host genomic factors play an important role in the outcome of complex or multifactorial diseases such as cervical cancer and are also known to regulate the rate of disease progression. The aim of this review was to compile advances in the field of host genomics of HPV-positive and -negative cervical cancer and their association with clinical response.
Potential Roles of Protease Inhibitors in Cancer Progression
Yang, Peng;Li, Zhuo-Yu;Li, Han-Qing 8047
Proteases are important molecules that are involved in many key physiological processes. Protease signaling pathways are strictly controlled, and disorders in protease activity can result in pathological changes such as cardiovascular and inflammatory diseases, cancer and neurological disorders. Many proteases have been associated with increasing tumor metastasis in various human cancers, suggesting important functional roles in the metastatic process because of their ability to degrade the extracellular matrix barrier. Proteases are also capable of cleaving non-extracellular matrix molecules. Inhibitors of proteases to some extent can reduce invasion and metastasis of cancer cells, and slow down cancer progression. In this review, we focus on the role of a few proteases and their inhibitors in tumors as a basis for cancer prognostication and therapy.
Potential Benefit of Metformin as Treatment for Colon Cancer: the Evidence so Far
Abdelsatir, Azza Ali;Husain, Nazik Elmalaika;Hassan, Abdallah Tarig;Elmadhoun, Wadie M;Almobarak, Ahmed O;Ahmed, Mohamed H 8053
Metformin is known as a hypoglycaemic agent that regulates glucose homeostasis by inhibiting liver glucose production and increasing muscle glucose uptake. Colorectal cancer (CRC) is one of the most common cancers worldwide, with about a million new cases diagnosed each year. The risk factors for CRC include advanced age, smoking, black race, obesity, low fibre diet, insulin resistance, and the metabolic syndrome. We have searched Medline for the metabolic syndrome and its relation to CRC, and metformin as a potential treatment of colorectal cancer. Administration of metformin alone or in combination with chemotherapy has been shown to suppress CRC. The mechanism that explains how insulin resistance is associated with CRC is complex and not fully understood. In this review we have summarised studies which showed an association with the metabolic syndrome as well as studies which tackled metformin as a potential treatment of CRC. In addition, we have also provided a summary of how metformin at the cellular level can induce changes that suppress the activity of cancer cells.
DNA Methylation Biomarkers for Nasopharyngeal Carcinoma: Diagnostic and Prognostic Tools
Jiang, Wei;Cai, Rui;Chen, Qiu-Qiu 8059
Nasopharyngeal carcinoma (NPC) is a common tumor in southern China and south-eastern Asia. Effective strategies for the prevention or screening of NPC are limited. Exploring effective biomarkers for the early diagnosis and prognosis of NPC continues to be a rigorous challenge. Evidence is accumulating that DNA methylation alterations are involved in the initiation and progression of NPC. Over the past few decades, aberrant DNA methylation in single or multiple tumor suppressor genes (TSGs) in various biologic samples have been described in NPC, which potentially represents useful biomarkers. Recently, large-scale DNA methylation analysis by genome-wide methylation platform provides a new way to identify candidate DNA methylated markers of NPC. This review summarizes the published research on the diagnostic and prognostic potential biomarkers of DNA methylation for NPC and discusses the current knowledge on DNA methylation as a biomarker for the early detection and monitoring of progression of NPC.
Long Non-coding RNAs and Drug Resistance
Pan, Jing-Jing;Xie, Xiao-Juan;Li, Xu;Chen, Wei 8067
Background: Long non-coding RNAs (lncRNAs) are emerging as key players in gene expression that govern cell developmental processes, thus contributing to diseases, especially cancers. Many studies have suggested that aberrant expression of lncRNAs is responsible for drug resistance, a substantial obstacle in cancer therapy. Drug resistance not only results from individual variations in patients, but also from genetic and epigenetic differences in tumors. It has been reported that drug resistance is tightly modulated by lncRNAs which change the stability and translation of mRNAs encoding factors involved in cell survival, proliferation, and drug metabolism. In this review, we summarize recent advances in research on lncRNAs associated with drug resistance and the underlying molecular or cellular mechanisms, which may suggest helpful approaches for the development of new therapeutic strategies to overcome treatment failure.
Sarcopenia in Cancer Patients
Chindapasirt, Jarin 8075
Sarcopenia, characterized by a decline in skeletal muscle mass plus low muscle strength and/or physical performance, has emerged as an important prognostic factor for advanced cancer patients. It is associated with poor performance status, toxicity from chemotherapy, and shorter duration of tumor control. There are limited data about sarcopenia in cancer patients and associated factors. Moreover, knowledge about changes in muscle mass during chemotherapy and their impact on response and toxicity to chemotherapy is still lacking. This review aimed to provide understanding about sarcopenia and to emphasize its importance to cancer treatment.
Benefits of Metformin Use for Cholangiocarcinoma
Kaewpitoon, Soraya J;Loyd, Ryan A;Rujirakul, Ratana;Panpimanmas, Sukij;Matrakool, Likit;Tongtawee, Taweesak;Kootanavanichpong, Nusorn;Kompor, Ponthip;Chavengkun, Wasugree;Kujapun, Jirawoot;Norkaew, Jun;Ponphimai, Sukanya;Padchasuwan, Natnapa;Pholsripradit, Poowadol;Eksanti, Thawatchai;Phatisena, Tanida;Kaewpitoon, Natthawut 8079
Metformin is an oral anti-hyperglycemic agent and the most commonly prescribed medication in the treatment of type-2 diabetes mellitus. It is purportedly associated with a reduced risk for various cancers, mainly exerting anti-proliferative effects on various human cancer cell types, such as pancreas, prostate, breast, stomach and liver. This mini-review highlights the risks and benefits of metformin use for cholangiocarcinoma (CCA) prevention and therapy. The results indicated metformin might be a quite promising strategy for CCA prevention and treatment, one mechanism being inhibition of CCA tumor growth by cell cycle arrest both in vitro and in vivo. The AMPK/mTORC1 pathway in intrahepatic CCA cells is targeted by metformin. Furthermore, metformin inhibited CCA tumor growth via the regulation of Drosha-mediated expression of multiple carcinogenic miRNAs. The use of metformin seems to be safe in patients with cirrhosis, and provides a survival benefit. Once hepatic malignancies are already established, however, metformin does not offer any therapeutic potential. Clinical trials and epidemiological studies of the benefit of metformin use for CCA should be conducted. To date, the value of metformin as a prospective chemotherapeutic for CCA is still questionable and awaits further attention.
HPV Infection and Cervical Abnormalities in HIV Positive Women in Different Regions of Brazil, a Middle-Income Country
Freitas, Beatriz C;Suehiro, Tamy T;Consolaro, Marcia EL;Silva, Vania RS 8085
Human papillomavirus is a virus that is distributed worldwide, and persistent infection with high-risk genotypes (HR-HPV) is considered the most important factor for the development of squamous cell cervical carcinoma (SCC). However, by itself it is not sufficient, and other factors may contribute to the onset and progression of lesions. For example, infection with other sexually transmitted agents such as human immunodeficiency virus (HIV) may be a factor. Previous studies have shown a relationship between HPV infection and SCC development among HIV-infected women in many regions of the world, with great emphasis on low- and middle-income countries (LMICs). Brazil is considered an LMIC and has great disparities across different regions. The purpose of this review was to highlight the current knowledge about HPV infection and cervical abnormalities in HIV+ women in Brazil, because this country is an ideal setting to evaluate HIV impact on SCC development and serves as a model of LMICs and low-resource settings.
Evaluation of the MTHFR C677T Polymorphism as a Risk Factor for Colorectal Cancer in Asian Populations
Rai, Vandana 8093
Background: Genetic and environmental factors play important roles in the pathogenesis of digestive tract cancers such as those of the esophagus, stomach and colorectum. Folate deficiency and methylenetetrahydrofolate reductase (MTHFR), an important enzyme of folate and methionine metabolism, are considered crucial for DNA synthesis and methylation. MTHFR variants may cause genomic hypomethylation, which may lead to the development of cancer, and MTHFR gene polymorphisms (especially C677T and A1298C) are known to influence predispositions for cancer development. Several case-control association studies of MTHFR C677T polymorphisms and colorectal cancer (CRC) have been reported in different populations with contrasting results, possibly reflecting inadequate statistical power. Aim: The present meta-analysis was conducted to investigate the association between the C677T polymorphism and the risk of colorectal cancer. Materials and Methods: A literature search of the PubMed, Google Scholar, SpringerLink and Elsevier databases was carried out for potentially relevant articles. Pooled odds ratios (OR) with corresponding 95% confidence intervals (95% CI) were calculated to assess the association of MTHFR C677T with susceptibility to CRC. Cochran's Q statistic and the inconsistency index (I2) were used to check study heterogeneity. Egger's test and funnel plots were applied to assess publication bias. All statistical analyses were conducted with MetaAnalyst and MIX version 1.7. Results: Thirty-four case-control studies involving a total of 9,143 cases and 11,357 controls were retrieved according to the inclusion criteria. Overall, no significant association was found between the MTHFR C677T polymorphism and colorectal cancer in Asian populations (for T vs. C: OR=1.03; 95% CI=0.92-1.5; p=0.64; for TT vs. CC: OR=0.88; 95% CI=0.74-1.04; p=0.04; for CT vs. CC: OR=1.02; 95% CI=0.93-1.12; p=0.59; for TT+CT vs. CC: OR=1.07; 95% CI=0.94-1.22; p=0.87). Conclusions: Evidence from the current meta-analysis indicated that the C677T polymorphism is not associated with CRC risk in Asian populations. Further investigations are needed to offer better insight into any role of this polymorphism in colorectal carcinogenesis.
Psychometric Validation of the Bahasa Malaysia Version of the EORTC QLQ-CR29
Magaji, Bello Arkilla;Moy, Foong Ming;Roslani, April Camilla;Law, Chee Wei;Raduan, Farhana;Sagap, Ismail 8101
Background: This study examined the psychometric properties of the Bahasa Malaysia (BM) version of the European Organization for Research and Treatment of Cancer (EORTC) Colorectal Cancer-specific Quality of Life Questionnaire (QLQ-CR29). Materials and Methods: We studied 93 patients recruited from University Malaya and Universiti Kebangsaan Medical Centers, Kuala Lumpur, Malaysia, using a self-administered method. Tools included the QLQ-C30, QLQ-CR29 and Karnofsky Performance Scales (KPS). Statistical analyses included Cronbach's alpha, test-retest correlations, multi-trait scaling and known-groups comparisons. A p value ${\leq}0.05$ was considered significant. Results: The internal consistency coefficients for the body image, urinary frequency, blood and mucus and stool frequency scales were acceptable (Cronbach's alpha ${\alpha}{\geq}0.65$). However, the coefficients were low for the blood and mucus and stool frequency scales in patients with a stoma bag (${\alpha}=0.46$). Test-retest correlation coefficients were moderate to high (range: r = 0.51 to 1.00) for most of the scales except anxiety, urinary frequency, buttock pain, hair loss, stoma care related problems, and dyspareunia (r ${\leq}0.49$). Convergent and discriminant validity were achieved in all scales. Patients with a stoma reported significantly higher symptoms of blood and mucus in the stool, flatulence, faecal incontinence, sore skin, and embarrassment due to the frequent need to change the stoma bag (p < 0.05) compared to patients without a stoma. None of the scales distinguished between patients based on the KPS scores. There were no overlaps between scales in the QLQ-C30 and QLQ-CR29 (r < 0.40). Conclusions: The BM version of the QLQ-CR29 showed acceptable psychometric properties in most of the scales, similar to the original validation study. This questionnaire could be used to complement the QLQ-C30 in assessing HRQOL among the BM-speaking population with colorectal cancer.
Psychometric Validation of the Malaysian Chinese Version of the EORTC QLQ-C30 in Colorectal Cancer Patients
Magaji, Bello Arkilla;Moy, Foong Ming;Roslani, April Camilla;Law, Chee Wei;Sagap, Ismail 8107
Background and Aims: Colorectal cancer is the second most frequent cancer in Malaysia. We aimed to assess the validity and reliability of the Malaysian Chinese version of the European Organization for Research and Treatment of Cancer (EORTC) core Quality of Life Questionnaire (QLQ-C30) in patients with colorectal cancer. Materials and Methods: Translated versions of the QLQ-C30 were obtained from the EORTC. A cross-sectional study design was used to obtain data from patients receiving treatment at two teaching hospitals in Kuala Lumpur, Malaysia. The Malaysian Chinese version of the QLQ-C30 was self-administered in 96 patients, while the Karnofsky Performance Scales (KPS) were generated by attending surgeons. Statistical analysis included reliability, convergent and discriminant validity, and known-groups comparisons. Statistical significance was based on a p value ${\leq}0.05$. Results: The internal consistencies of the Malaysian Chinese version were acceptable [Cronbach's alpha (${\alpha}{\geq}0.70$)] in the global health status/overall quality of life (GHS/QOL) and functioning scales, except the cognitive scale (${\alpha}{\leq}0.32$) at all levels of analysis and the social/family functioning scale (${\alpha}=0.63$) in patients without a stoma. All questionnaire items fulfilled the criteria for convergent and discriminant validity except question number 5, with correlations with role (r = 0.62) and social/family (r = 0.41) functioning higher than with the physical functioning scale (r = 0.34). The test-retest coefficients for the GHS/QOL and functioning scales and for most of the symptom scales were moderate to high (r = 0.58 to 1.00). Patients with a stoma reported statistically significantly lower physical functioning (p=0.015) and social/family functioning (p=0.013), and higher constipation (p=0.010) and financial difficulty (p=0.037), compared to patients without a stoma. There was no significant difference between patients with high and low KPS scores. Conclusions: The Malaysian Chinese version of the QLQ-C30 is a valid and reliable measure of HRQOL in patients with colorectal cancer.
Glehnia littoralis Root Extract Induces G0/G1 Phase Cell Cycle Arrest in the MCF-7 Human Breast Cancer Cell Line
de la Cruz, Joseph Flores;Vergara, Emil Joseph Sanvictores;Cho, Yura;Hong, Hee Ok;Oyungerel, Baatartsogt;Hwang, Seong Gu 8113
Glehnia littoralis (GL) is widely used as an oriental medicine for cough, fever, stroke and other disease conditions. However, the anti-cancer properties of GL on MCF-7 human breast cancer cells have not been investigated. In order to elucidate anti-cancer properties and underlying cell death mechanisms, MCF-7 cells ($5{\times}10^4/well$) were treated with Glehnia littoralis root extract at $0-400{\mu}g/ml$. A hot water extract of GL root inhibited the proliferation of MCF-7 cells in a dose-dependent manner. Analysis of the cell cycle after treatment of MCF-7 cells with increasing concentrations of GL root extract for 24 hours showed significant cell cycle arrest in the G1 phase. RT-PCR and Western blot analysis both revealed that GL root extract significantly increased the expression of p21 and p27 with an accompanying decrease in both CDK4 and cyclin D1. Our results indicated that GL root extract arrested the proliferation of MCF-7 cells in G1 phase through inhibition of CDK4 and cyclin D1 via increased induction of p21 and p27. In summary, the current study showed that GL could serve as a potential source of chemotherapeutic or chemopreventive agents against human breast cancer.
Anti-tumor and Chemoprotective Effect of Bauhinia tomentosa by Regulating Growth Factors and Inflammatory Mediators
Kannan, Narayanan;Sakthivel, Kunnathur Murugesan;Guruvayoorappan, Chandrasekaran 8119
Cancer is a leading cause of death worldwide. Due to the toxic side effects of the commonly used chemotherapeutic drug cyclophosphamide (CTX), the use of herbal medicines with fewer side effects but potential anti-cancer efficacy has become increasingly popular. The present study sought to investigate the effects of a methanolic extract of Bauhinia tomentosa against Dalton's ascites lymphoma (DAL) induced ascites as well as solid tumors in BALB/c mice. Specifically, B. tomentosa extract was administered intraperitoneally (IP) at 10 mg/kg body weight starting just after tumor cell implantation and thereafter for 10 consecutive days. In the ascites tumor model hosts, administration of extract resulted in a 52% increase in life span. In solid tumor models, co-administration of extract and CTX significantly reduced tumor volume (relative to untreated hosts) by 73%, compared to 52% with the extract alone. Co-administration of the extract also mitigated CTX-induced toxicity, including decreases in WBC count, bone marrow cellularity and ${\alpha}$-esterase activity. Extract treatment also attenuated increases in serum levels of $TNF{\alpha}$, iNOS, IL-$1{\beta}$, IL-6, GM-CSF, and VEGF seen in tumor-bearing hosts. This study confirmed that the potent antitumor activity of B. tomentosa extract may be associated with immune modulatory effects through regulation of anti-oxidant and cytokine levels.
Back Massage to Decrease State Anxiety, Cortisol Level, Blood Pressure, Heart Rate and Increase Sleep Quality in Family Caregivers of Patients with Cancer: A Randomised Controlled Trial
Pinar, Rukiye;Afsar, Fisun 8127
Background: The objective of this study was to evaluate the effect of back massage on the anxiety state, cortisol level, systolic/diastolic blood pressure, pulse rate, and sleep quality in family caregivers of patients with cancer. Materials and Methods: Forty-four family caregivers were randomly assigned to either the experimental or control group (22 intervention, 22 control) after they were matched on age and gender. The intervention consisted of back massage for 15 minutes per day for a week. Main research outcomes were measured at baseline (day 1) and follow-up (day 7). Unpaired t-test, paired t-test and chi-square test were used to analyse the data. Results: The majority of the caregivers were women, married, secondary school educated and housewives. State anxiety (p<0.001), cortisol level (p<0.05), systolic/diastolic blood pressure (p<0.001, p<0.01 respectively), and pulse rate (p<0.01) were significantly decreased, and sleep quality (p<0.001) increased after the back massage intervention. Conclusions: The study results show that family caregivers of patients with cancer can benefit from back massage to improve state anxiety, cortisol level, blood pressure, heart rate, and sleep quality. Oncology nurses can take advantage of back massage, a non-pharmacologic and easily implemented method, as an independent nursing action to support caregivers of patients with cancer.
Cell Cycle Modulation of MCF-7 and MDA-MB-231 by a Sub-Fraction of Strobilanthes crispus and its Combination with Tamoxifen
Yaacob, Nik Soriani;Kamal, Nik Nursyazni Nik Mohamed;Wong, Kah Keng;Norazmi, Mohd Nor 8135
Background: Cell cycle regulatory proteins are suitable targets for cancer therapeutic development since genetic alterations in many cancers also affect the functions of these molecules. Strobilanthes crispus (S. crispus) is traditionally known for its potential benefits in treating various ailments. We recently reported that an active sub-fraction of S. crispus leaves (SCS) caused caspase-dependent apoptosis of human breast cancer MCF-7 and MDA-MB-231 cells. Materials and Methods: Considering the ability of SCS to also promote the activity of the antiestrogen, tamoxifen, we further examined the effect of SCS in modulating cell cycle progression and related proteins in MCF-7 and MDA-MB-231 cells alone and in combination with tamoxifen. Expression of cell cycle-related transcripts was analysed based on a previous microarray dataset. Results: SCS significantly caused G1 arrest of both types of cells, similar to tamoxifen and this was associated with modulation of cyclin D1, p21 and p53. In combination with tamoxifen, the anticancer effects involved downregulation of $ER{\alpha}$ protein in MCF-7 cells but appeared independent of an ER-mediated mechanism in MDA-MB-231 cells. Microarray data analysis confirmed the clinical relevance of the proteins studied. Conclusions: The current data suggest that SCS growth inhibitory effects are similar to that of the antiestrogen, tamoxifen, further supporting the previously demonstrated cytotoxic and apoptotic actions of both agents.
Extended Low Anterior Resection with a Circular Stapler in Patients with Rectal Cancer: a Single Center Experience
Talaeezadeh, Abdolhasan;Bahadoram, Mohammad;Abtahian, Amin;Rezaee, Alireza 8141
Background: To evaluate the outcome of stapled colo-anal anastomoses after extended low anterior resection for distal rectal carcinoma. Materials and Methods: A retrospective study of fifty patients who underwent colo-anal anastomoses after extended low anterior resection was conducted at Imam Hospital from September 2007 to July 2012. Results: The distance of the tumor from the anal verge was 3 to 8 cm. Anastomotic leakage developed in 6% of patients and defecation problems in 16%. One-year local recurrence was 6%, while three-year local recurrence was 4%. One-year systemic recurrence was seen in 22%, while three-year systemic recurrence was seen in 20%. Conclusions: Colo-anal anastomoses after extended low anterior resection for distal rectal carcinoma can be conducted safely.
Clinical Significance of Atypical Squamous Cells of Undetermined Significance among Patients Undergoing Cervical Conization
Nishimura, Mai;Miyatake, Takashi;Nakashima, Ayaka;Miyoshi, Ai;Mimura, Mayuko;Nagamatsu, Masaaki;Ogita, Kazuhide;Yokoi, Takeshi 8145
Background: Atypical squamous cells of undetermined significance (ASCUS) feature a wide variety of cervical cells, including benign and malignant examples. The management of ASCUS is complicated. Guidelines for office gynecology in Japan recommend performing a high-risk human papillomavirus (HPV) test as a rule. The guidelines also recommend repeat cervical cytology after 6 and 12 months, or immediate colposcopy. The purpose of this study was to determine the clinical significance of ASCUS. Materials and Methods: Between January 2012 and December 2014, a total of 162 patients underwent cervical conization for cervical intraepithelial neoplasia grade 3 (CIN3), carcinoma in situ, squamous cell carcinoma, microinvasive squamous cell carcinoma, and adenocarcinoma in situ at our hospital. The results of cervical cytology prior to conization, the pathology after conization, and high-risk HPV testing were obtained from clinical records and analyzed retrospectively. Results: Based on cervical cytology, 31 (19.1%) of 162 patients were primarily diagnosed with ASCUS. Among these, 25 (80.6%) were positive for high-risk HPV, and the test results of the remaining 6 patients (19.4%) were uncertain. In the final pathological diagnosis after conization, 27 (87.1%) and 4 patients (12.9%) were diagnosed with CIN3 and carcinoma in situ, respectively. Conclusions: Although ASCUS is known as a low-risk abnormal cervical cytology, approximately 20% of patients who underwent cervical conization had ASCUS. The relationship between the cervical cytology of ASCUS and the final pathological results for CIN3 or invasive carcinoma should be investigated statistically. In cases of ASCUS, we recommend HPV tests or colposcopic examination rather than cytological follow-up, because of the risk of missing CIN3 or more advanced disease.
Malignant Neoplasm Prevalence in the Aktobe Region of Kazakhstan
Bekmukhambetov, Yerbol;Mamyrbayev, Arstan;Jarkenov, Timur;Makenova, Aliya;Imangazina, Zina 8149
An oncopathological state assessment was conducted among adults, children and teenagers in the Aktobe region for 2004-2013. Overall, the burden of mortality was in the range of 94.8-100.2 per 100,000 population, without any obvious trend over time. Ranking by pathology, the highest incidences among women were registered for cancers of the breast (5.8-8.4), cervix uteri (2.9-4.6), ovary (2.4-3.6) and corpus uteri, stomach and esophagus, without any marked change over time except for a slight rise in cervical cancer rates. In males, first in rank was trachea, bronchus and lung, followed by stomach and esophagus, and then by bladder and lymphoid and hematopoietic tissue pathology. Again, no clear trends were apparent over time. In children, the main localizations in cancer incidence were blood (acute lymphocytic leukemia, lymphosarcoma, acute myeloid leukemia, Hodgkin's disease), brain and central nervous system, bones and articular cartilages, kidneys, and the eye and its appendages, in both sexes. Similarly, in young adults, the major percentage was in blood and lymphatic tissues (acute myeloid leukemia, acute lymphocytic leukemia, Hodgkin's disease), with a significant percentage accruing to lymphosarcoma, lymphoma, other myeloid leukemias and hematological malignancies, as well as tumors of the brain and central nervous system and bones and articular cartilages. This initial survey provides the basis for more detailed investigation of cancer epidemiology in Aktobe, Kazakhstan.
Prognostic Significance of Two Dimensional AgNOR Evaluation in Local Advanced Rectal Cancer Treated with Chemoradiotherapy
Gundog, Mete;Yildiz, Oguz G;Imamoglu, Nalan;Aslan, Dicle;Aytekin, Aynur;Soyuer, Isin;Soyuer, Serdar 8155
The prognostic significance of AgNOR proteins in stage II-III rectal cancers treated with chemoradiotherapy was evaluated. Silver staining was applied to $3{\mu}m$ sections of paraffin-embedded tissues from 30 rectal cancer patients who received 5-FU based chemoradiotherapy from May 2003 to June 2006. The microscopic images of the cells were transferred to a computer via a video camera. AgNOR area and nucleus area values were determined as the nucleolus organizer region area/total nucleus area ratio (NORa/TNa). The mean NORa/TNa value was found to be $9.02{\pm}3.68$. Overall survival and disease-free survival in the high NORa/TNa (>9.02) patients were 52.2 months and 39.4 months respectively, as compared to 100.7 months and 98.4 months in the low NORa/TNa (<9.02) cases (p<0.001 and p<0.001 respectively). In addition, the prognosis in high NORa/TNa patients was worse than in low NORa/TNa patients (p<0.05). In terms of overall survival and disease-free survival, a statistically significant negative correlation was found with the NORa/TNa value in correlation tests. Cox regression analyses demonstrated that overall survival and disease-free survival were associated with lymph node status (negative or positive) and the NORa/TNa value. We suggest that two-dimensional AgNOR evaluation may be a safe and usable parameter for prognosis and an indicator of cell proliferation instead of AgNOR dots.
Epidemiological Study on Breast Cancer Associated Risk Factors and Screening Practices among Women in the Holy City of Varanasi, Uttar Pradesh, India
Paul, Shatabdi;Solanki, Prem Prakash;Shahi, Uday Pratap;Srikrishna, Saripella 8163
Background: Breast cancer is the second most common cause of cancer death (1.38 million, 10.9% of all cancers) worldwide after lung cancer. In the present study, we assessed the knowledge and level of awareness of risk factors and screening practices, especially breast self-examination (BSE), among women, considering the non-feasibility of diagnostic tools such as mammography for breast screening in the holy city of Varanasi, Uttar Pradesh, India. Materials and Methods: A cross-sectional population-based survey was conducted. The investigation tool adopted was a self-administered questionnaire. Data were analysed using SPSS version 20, with the chi-square test to determine significant associations between education groups and awareness and knowledge; analysis of variance was applied in order to establish significance. Results: Among 560 women, 500 (89%) responded (age group 18-65 years); 53.8% were married. Knowledge about BSE was very low (16%), and of these, 15.6% had practised BSE only once in their lifetime. The study showed that the most common age at which women achieved parity was 20 years; among the 500 participants, 224 women achieved parity between ages 18 and 30. The best-known risk factors for breast cancer were alcohol (64.6%) and smoking (64%), and the least-known were early menarche (17.2%) and consumption of red meat (23%). The most familiar recovery factors in breast cancer cases were doctors' support (95%) and family support (94.5%). Conclusions: The study revealed that awareness about risk factors and practice of BSE among women in Varanasi is extremely low in comparison with other cities and countries (Delhi, Mumbai, Himachal Pradesh, Turkey and Nigeria). However, doctors and health workers may promote the early diagnosis of breast cancer.
Evaluation of Nutritional Status of Cancer Patients during Treatment by Patient-Generated Subjective Global Assessment: a Hospital-Based Study
Sharma, Dibyendu;Kannan, Ravi;Tapkire, Ritesh;Nath, Soumitra 8173
Cancer patients frequently experience malnutrition. Cancer and cancer therapy affect nutritional status through alterations in the metabolic system and reduction in food intake. In the present study, fifty-seven cancer patients were selected as subjects from the oncology ward of Cachar Cancer Hospital and Research Centre, Silchar, India. Evaluation of the nutritional status of cancer patients during treatment was carried out by scored Patient-Generated Subjective Global Assessment (PG-SGA). The findings of PG-SGA showed that 15.8% (9) were well nourished, 31.6% (18) were moderately malnourished or suspected of being malnourished and 52.6% (30) were severely malnourished. The prevalence of malnutrition was highest in lip/oral (33.33%) cancer patients. The study showed that the prevalence of malnutrition (84.2%) was high in cancer patients during treatment.
HeLa Cells Containing a Truncated Form of DNA Polymerase Beta are More Sensitized to Alkylating Agents than to Agents Inducing Oxidative Stress
Khanra, Kalyani;Chakraborty, Anindita;Bhattacharyya, Nandan 8177
The present study was aimed at determining the effects of alkylating and oxidative stress inducing agents on a newly identified variant of DNA polymerase beta ($pol{\beta}{\Delta}_{208-304}$) specific for ovarian cancer. $Pol{\beta}{\Delta}_{208-304}$ has a deletion of exons 11-13, which lie in the catalytic part of the enzyme. We compared the effects of these chemicals on HeLa cells and HeLa cells stably transfected with this variant cloned into the pcDNAI/neo vector by MTT, colony forming and apoptosis assays. $Pol{\beta}{\Delta}_{208-304}$ cells exhibited greater sensitivity to an alkylating agent and less sensitivity towards $H_2O_2$ and UV when compared with HeLa cells alone. It has been shown that cell death in $Pol{\beta}{\Delta}_{208-304}$ transfected HeLa cells is mediated by the caspase 9 cascade. Exon 11 has nucleotidyl selection activity, while exons 12 and 13 have dNTP selection activity. Hence deletion of this part may affect polymerizing activity, although single strand binding and double strand binding activity may remain the same. The lack of this part may adversely affect the catalytic activity of DNA polymerase beta, so that the variant may act as a dominant negative mutant. This could have clinical significance if translated into a clinical setting, because resistance to radiation or chemotherapy during relapse of the disease could potentially be overcome by this approach.
Detection of High-Risk Human Papillomaviruses in the Prevention of Cervical Cancer in India
Baskaran, Krishnan;Kumar, P Kranthi;Karunanithi, Santha;Sethupathy, Subramanian;Thamaraiselvi, B;Swaruparani, S 8187
Human papillomaviruses (HPVs) are small, non-enveloped, double-stranded DNA viruses that infect epithelial tissues. Specific genotypes of human papillomavirus are the single most common etiological agents of cervical intraepithelial lesions and cervical cancer. Cervical cancer usually arises in the squamous metaplastic epithelium of the transformation zone (TZ) of the cervix, featuring infection with one or more oncogenic or high-risk HPV (HR-HPV) types. A hospital-based study in a rural setting was carried out to understand the association of HR-HPV with squamous intraepithelial lesions (SILs) and cervical cancer. In the present study, HR-HPV was detected in 65.7% of low-grade squamous intraepithelial lesions (LSILs), 84.6% of high-grade squamous intraepithelial lesions (HSILs) and 94% of cervical cancers, as compared to 10.7% of controls. The association of HPV infection with SIL and cervical cancer was analyzed with the chi-square test (p<0.001). The significant association found confirmed that detection of HR-HPV is a suitable candidate for early identification of cervical precancerous lesions and for the prevention of cervical cancer in India.
Identification and Pharmacological Analysis of High Efficacy Small Molecule Inhibitors of EGF-EGFR Interactions in Clinical Treatment of Non-Small Cell Lung Carcinoma: a Computational Approach
Gudala, Suresh;Khan, Uzma;Kanungo, Niteesh;Bandaru, Srinivas;Hussain, Tajamul;Parihar, MS;Nayarisseri, Anuraj;Mundluru, Hema Prasad 8191
Inhibition of EGFR-EGF interactions forms an important therapeutic rationale in the treatment of non-small cell lung carcinoma. Established inhibitors have been successful in reducing the proliferative processes observed in NSCLC; however, patients suffer serious side effects. Considering the narrow therapeutic window of present EGFR inhibitors, the present study centred on identifying high efficacy EGFR inhibitors through structure-based virtual screening strategies. The established inhibitors Afatinib, Dacomitinib, Erlotinib, Lapatinib and Rociletinib formed parent compounds to retrieve similar compounds by a linear fingerprint based Tanimoto search with a threshold of 90%. The compounds (parents and respective similars) were docked at the EGF binding cleft of EGFR. PatchDock supervised protein-protein interactions were established between EGF and ligand (query and similar) bound and free states of EGFR. Compounds ADS103317, AKOS024836912, AGN-PC-0MXVWT, GNF-Pf-3539 and SCHEMBL15205939 were retrieved as respectively similar to Afatinib, Dacomitinib, Erlotinib, Lapatinib and Rociletinib. The compound AGN-PC-0MXVWT, akin to Erlotinib, showed the highest affinity for EGFR amongst all the compounds (parent and similar) assessed in the study. Further, AGN-PC-0MXVWT brought about significant blocking of EGFR-EGF interactions and in addition showed appreciable ADMET properties and pharmacophoric features. In this study, we report AGN-PC-0MXVWT to be an efficient and high efficacy inhibitor of EGFR-EGF interactions identified through computational approaches.
FHIT Gene Expression in Acute Lymphoblastic Leukemia and its Clinical Significance
Malak, Camelia A Abdel;Elghanam, Doaa M;Elbossaty, Walaa Fikry 8197
Background: To investigate the expression of the fragile histidine triad (FHIT) gene in acute lymphoblastic leukemia and its clinical significance. Materials and Methods: The level of FHIT mRNA expression in peripheral blood from 50 patients with acute lymphoblastic leukemia (ALL) and in 50 peripheral blood samples from healthy volunteers was measured via RT-PCR. Correlation analyses between FHIT gene expression and clinical characteristics (gender, age, white blood count, immunophenotype of acute lymphoblastic leukemia and percentage of blast cells) of the patients were performed. Results: The FHIT gene was expressed at $2.49{\pm}7.37$ in ALL patients against $14.4{\pm}17.9$ in the healthy volunteers. The difference in expression levels between ALL patients and healthy volunteers was statistically significant. The rate of gene expression did not significantly vary with immunophenotype subtype. Gene expression was also found to be correlated with increased total leukocyte counts and decreased platelets, but not with age, gender, immunophenotype or percentage of blast cells. Conclusions: FHIT gene expression is low in acute lymphoblastic leukemia and could be a useful marker to monitor minimal residual disease. This gene is also a candidate target for the immunotherapy of acute lymphoblastic leukemia.
Is Immunohistochemical Sex Hormone Binding Globulin Expression Important in the Differential Diagnosis of Adenocarcinomas?
Bulut, Gulay;Kosem, Mustafa;Bulut, Mehmet Deniz;Erten, Remzi;Bayram, Irfan 8203
Adenocarcinomas (AC) are the most frequently encountered carcinomas. It may be quite challenging to detect the primary origin when these carcinomas metastasize and the first finding is a metastatic tumor. This study evaluated the role of sex hormone binding globulin (SHBG) positivity in tumor cells in the subclassification and detection of the organ of origin of adenocarcinomas. Between 1994 and 2008, 64 sections of normal tissue belonging to ten organs, and 116 cases diagnosed as adenoid cystic carcinoma and mucoepidermoid carcinoma of the salivary gland, lung adenocarcinoma, invasive ductal carcinoma of the breast, adenocarcinoma of the stomach, colon, gallbladder, pancreas and prostate, endometrial adenocarcinoma, and serous and mucinous adenocarcinoma of the ovary, were sent to the laboratory at the Department of Pathology at the Yuzuncu Yil University School of Medicine, where they were stained immunohistochemically using antibodies against SHBG. SHBG immunoreactivity in both the tumor cells and normal cells, together with the type, diffuseness and intensity of the staining, was then evaluated. In the differential diagnosis of adenocarcinomas of organs containing glandular structures, impressively valuable results are encountered in the tumor cells, whether SHBG immunopositivity is evaluated alone or together with other IHC markers. Further extensive research with a larger number of cases, including instances of cholangiocarcinoma and cervix uteri AC [which we could not include in the study for technical reasons], should be performed in order to appropriately evaluate the role of SHBG in the differential diagnosis of AC.
Levels of Conscience and Related Factors among Iranian Oncology Nurses
Gorbanzadeh, Behrang;Rahmani, Azad;Mogadassian, Sima;Behshid, Mojhgan;Azadi, Arman;Taghavy, Saied 8211
Background: Having a conscience is one of the main pre-requisites of providing nursing care. Knowledge regarding levels of conscience among nurses in eastern countries is limited, so the purpose of this study was to examine the level of conscience and its related factors among Iranian oncology nurses. Materials and Methods: This descriptive-correlational study was conducted in 3 hospitals in Tabriz, Iran. Overall, 68 nurses were selected using a non-probability sampling method. The Perceptions of Conscience Questionnaire was used to identify the levels of conscience among nurses. The data were analyzed using SPSS version 13.0. Results: The mean level of conscience score was 72.7. Nurses acquired higher scores in the authority and asset sub-scales, while mean scores in the burden and depending-on-culture sub-scales were the lowest. Also, there was no statistical relationship between demographic characteristics of participants and their total score on the Perceptions of Conscience Questionnaire. Conclusions: According to the study findings, Iranian nurses had high levels of conscience. However, understanding all the factors that affect nurses' perception of conscience requires further studies.
Oral non Squamous Cell Malignant Tumors in an Iranian Population: a 43 year Evaluation
Mohtasham, Nooshin;Saghravanian, Nasrollah;Goli, Maryam;Kadeh, Hamideh 8215
Background: The prevalence of non-squamous cell malignant tumors of the oral cavity has not been evaluated extensively in Iran. The aim of this study was to evaluate epidemiological aspects of oral malignancies of non-squamous cell origin during a 43-year period at the Faculty of Dentistry, Mashhad University of Medical Sciences, Iran. Materials and Methods: In this retrospective study, the records of all patients referred to the dental school of Mashhad University of Medical Sciences in northeast Iran during the period 1971-2013 were evaluated. All confirmed samples of oral non-squamous cell malignant tumors were included in this study. Demographic information, including age, gender and location of the lesions, was extracted from patients' records. Data were analyzed using SPSS statistical software, with chi-square and Fisher's exact tests. Results: Among 11,126 patients, 188 (1.68%) non-squamous cell malignant tumors were found, with a mean age of 39.9 years, ranging from 2 to 92 years. The most common tumors were mucoepidermoid carcinoma (33 cases) and lymphoma (32 cases). Non-squamous cell malignant tumors occurred almost equally in men (94 cases) and women (93 cases). Most (134 cases) were located peripherally, with high frequency in the salivary glands (89 cases), and 52 cases were central, with high frequency in the mandible (38 cases). Conclusions: Most findings in this survey were similar to those reported in other studies, with differences in some cases; this may be due to variation in sample size and geographic and racial differences in tumors.
Misclassification Adjustment of Family History of Breast Cancer in a Case-Control Study: a Bayesian Approach
Moradzadeh, Rahmatollah;Mansournia, Mohammad Ali;Baghfalaki, Taban;Ghiasvand, Reza;Noori-Daloii, Mohammad Reza;Holakouie-Naieni, Kourosh 8221
Background: Misreporting of self-reported family history may lead to biased estimations. We used Bayesian methods to adjust for exposure misclassification. Materials and Methods: A hospital-based case-control study was used to identify breast cancer risk factors among Iranian women. Three models were jointly considered: an outcome, an exposure and a measurement model. All models were fitted using Bayesian methods and run to achieve convergence. Results: Bayesian analysis in the model without misclassification showed that the odds ratios for the relationship between breast cancer and family history under different prior distributions were 2.98 (95% CRI: 2.41, 3.71), 2.57 (95% CRI: 1.95, 3.41) and 2.53 (95% CRI: 1.93, 3.31). In the misclassification model, the adjusted odds ratios in the different situations were 2.64 (95% CRI: 2.02, 3.47), 2.64 (95% CRI: 2.02, 3.46), 1.60 (95% CRI: 1.07, 2.38), 1.61 (95% CRI: 1.07, 2.40), 1.57 (95% CRI: 1.05, 2.35), 1.58 (95% CRI: 1.06, 2.34) and 1.57 (95% CRI: 1.06, 2.33). Conclusions: It was concluded that self-reported family history may be misclassified in different scenarios. Due to the lack of validation studies in Iran, more attention to this matter in future research is suggested, especially when obtaining results in accordance with sensitivity and specificity values.
Polymorphisms in Heat Shock Proteins A1B and A1L (HOM) as Risk Factors for Oesophageal Carcinoma in Northeast India
Saikia, Snigdha;Barooah, Prajjalendra;Bhattacharyya, Mallika;Deka, Manab;Goswami, Bhabadev;Sarma, Manash P;Medhi, Subhash 8227
Background: To investigate polymorphisms in heat shock proteins A1B and A1L (HOM) and the associated risk of oesophageal carcinoma in Northeast India. Materials and Methods: The study included oesophageal cancer (ECA) patients attending the general outpatient department (OPD) and endoscopic unit of Gauhati Medical College. Patients were diagnosed based on endoscopic and histopathological findings. Genomic DNA was typed for HSPA1B1267 and HSPA1L2437 SNPs using the polymerase chain reaction with restriction fragment length polymorphisms. Results: A total of 78 cases and 100 age- and sex-matched healthy controls were included in the study, with a male:female ratio of 5:3 and a mean age of $61.4{\pm}8.5$ years. Clinico-pathological evaluation showed 84% had squamous cell carcinoma and 16% adenocarcinoma. Dysphagia grades 4 (43.5%) and 5 (37.1%) were observed by endoscopic and histopathological evaluation. The frequency of genomic variation of A1B from wild type A/A to heterozygous A/G and mutant G/G showed a positive association [chi sq=19.9, p<0.05], and the allelic frequency also showed a significant correlation [chi sq=10.3, cases vs. controls, OR=0.32, $p{\leq}0.05$]. The genomic variation of A1L from wild type T/T to heterozygous T/C and mutant C/C was positively associated [chi sq=7.02, p<0.05] with development of ECA, while analysis of the allelic frequency showed no significant association [chi sq=3.19, OR=0.49, p=0.07]. Among the risk factors, betel quid [OR=9.79, chi square=35.0, p<0.05], tobacco [OR=2.95, chi square=10.6, p<0.05] and smoking [OR=3.23, chi square=10.1, p<0.05] demonstrated significant differences between consumers and non-consumers regarding ECA development. Alcohol did not show any significant association independently [OR=1.34, chi square=0.69, p=0.4]. Conclusions: The present study provides marked evidence that polymorphisms of the HSP70 A1B and HSP70 A1L genes are associated with the development of ECA in a population in Northeast India, A1B having the stronger influence. Betel quid consumption was found to be a highly significant risk factor, followed by smoking and tobacco chewing. Although alcohol was not a potent risk factor independently, alcohol consumption along with tobacco, smoking and betel nut was found to contribute to the development of ECA.
Clinico-Pathological Profile and Haematological Abnormalities Associated with Lung Cancer in Bangalore, India
Baburao, Archana;Narayanswamy, Huliraj 8235
Background: Lung cancer is one of the most common types of cancer, causing high morbidity and mortality worldwide. An increasing incidence of lung cancer has been observed in India. Objectives: To evaluate the clinico-pathological profile and haematological abnormalities associated with lung cancer in Bangalore, India. Materials and Methods: This prospective study was carried out over a period of 2 years. A total of 96 newly diagnosed and histopathologically confirmed cases of lung cancer were included in the study. Results: Our lung cancer cases had a male to female ratio of 3:1. Age varied from 40 to 90 years, with the major contribution in the age group between 61 and 80 years (55.2%). Smoking was the commonest risk factor, found in 69.7% of patients. The most frequent symptom was cough (86.4%), followed by loss of weight and appetite (65.6%) and dyspnea (64.5%). The most common radiological presentation was a mass lesion (55%). The most common histopathological type was squamous cell carcinoma (47.9%), followed by adenocarcinoma (28.1%) and small cell carcinoma (12.5%). Distant metastasis at presentation was seen in 53.1% of patients. Among the haematological abnormalities, anaemia was seen in 61.4% of patients, leucocytosis in 36.4%, thrombocytosis in 14.5% and eosinophilia in 19.7%. Haematological abnormalities were more commonly seen in non-small cell lung cancer. Conclusions: Squamous cell carcinoma was found to be the most common histopathological type, and smoking remains the major risk factor for lung cancer. Haematological abnormalities are frequently observed in lung cancer patients, anaemia being the commonest of all.
Human Papillomavirus E6 Knockdown Restores Adenovirus Mediated-estrogen Response Element Linked p53 Gene Transfer in HeLa Cells
Kajitani, Koji;Ken-Ichi, Honda;Terada, Hiroyuki;Yasui, Tomoyo;Sumi, Toshiyuki;Koyama, Masayasu;Ishiko, Osamu 8239
The p53 gene is inactivated by the human papillomavirus (HPV) E6 protein in the majority of cervical cancers. Treatment of HeLa S3 cells with siRNA for HPV E6 permitted adenovirus-mediated transduction of a p53 gene linked to an upstream estrogen response element (ERE). Our previous study in non-siRNA treated HHUA cells, which are derived from an endometrial cancer and express estrogen receptor ${\beta}$, showed enhancing effects of an upstream ERE on adenovirus-mediated p53 gene transduction. In HeLa S3 cells treated with siRNA for HPV E6, adenovirus-mediated transduction was enhanced by an upstream ERE linked to a p53 gene carrying a proline variant at codon 72, but not for a p53 gene with arginine variant at codon 72. Expression levels of p53 mRNA and Coxsackie/adenovirus receptor (CAR) mRNA after adenovirus-mediated transfer of an ERE-linked p53 gene (proline variant at codon 72) were higher compared with those after non-ERE-linked p53 gene transfer in siRNA-treated HeLa S3 cells. Western blot analysis showed lower ${\beta}$-tubulin levels and comparatively higher p53/${\beta}$-tubulin or CAR/${\beta}$-tubulin ratios in siRNA-treated HeLa S3 cells after adenovirus-mediated ERE-linked p53 gene (proline variant at codon 72) transfer compared with those in non-siRNA-treated cells. Apoptosis, as measured by annexin V binding, was higher after adenovirus-mediated ERE-linked p53 gene (proline variant at codon 72) transfer compared with that after non-ERE-linked p53 gene transfer in siRNA-treated cells.
Promoter Methylation Status of Two Novel Human Genes, UBE2Q1 and UBE2Q2, in Colorectal Cancer: a New Finding in Iranian Patients
Mokarram, Pooneh;Shakiba-Jam, Fatemeh;Kavousipour, Soudabeh;Sarabi, Mostafa Moradi;Seghatoleslam, Atefeh 8247
Background: The ubiquitin-proteasome system (UPS) degrades a variety of proteins that are tagged with specific signals. The ubiquitination pathway facilitates degradation of damaged proteins and regulates growth and stress responses. This pathway is altered in various cancers, including acute lymphoblastic leukemia, head and neck squamous cell carcinoma and breast cancer. Recently it has been reported that expression of two newly characterized human genes, UBE2Q1 and UBE2Q2, putative members of the ubiquitin-conjugating enzyme family (E2), is also changed in colorectal cancer. Epigenetics is one of the fastest-growing areas of science and has become a central issue in biological studies of disease. Given the lack of information about the role of epigenetic changes in the expression profiles of UBE2Q1 and UBE2Q2, and the presence of CpG islands in the promoters of these two human genes, we decided to evaluate the promoter methylation status of these genes as a first step. Materials and Methods: The promoter methylation status of UBE2Q1 and UBE2Q2 was studied by methylation-specific PCR (MSP) in tumor samples of 60 colorectal cancer patients compared to adjacent normal tissues and 20 non-malignant controls. The frequency of methylation for each gene was analyzed by the chi-square method. Results: MSP results revealed that the UBE2Q2 gene promoter was predominantly unmethylated, while a higher level of methylated alleles was observed for UBE2Q1 in tumor tissues compared to the adjacent normal tissues and the non-malignant controls. Conclusions: UBE2Q1 and UBE2Q2 genes show different methylation profiles in CRC cases.
Plasma Soluble CD30 as a Possible Marker of Adult T-cell Leukemia in HTLV-1 Carriers: a Nested Case-Control Study
Takemoto, Shigeki;Iwanaga, Masako;Sagara, Yasuko;Watanabe, Toshiki 8253
Elevated levels of soluble CD30 (sCD30) are linked with various T-cell neoplasms. However, the relationship between sCD30 levels and the development of adult T-cell leukemia (ATL) in human T-cell leukemia virus type 1 (HTLV-1) carriers remains to be clarified. We investigated whether plasma sCD30 is associated with risk of ATL in a nested case-control study within a cohort of HTLV-1 carriers. We compared sCD30 levels between 11 cases (i.e., HTLV-1 carriers who later progressed to ATL) and 22 age-, sex- and institution-matched control HTLV-1 carriers (i.e., those with no progression). The sCD30 concentration at baseline was significantly higher in cases than in controls (median 65.8, range 27.2-134.5 U/mL vs. median 22.2, range 8.4-63.1 U/mL, P=0.001). In univariate logistic regression analysis, higher sCD30 (≥30.2 U/mL) was significantly associated with ATL development (odds ratio 7.88, 95% confidence interval 1.35-45.8, P=0.02). Among cases, the sCD30 concentration tended to increase at the time of diagnosis of aggressive-type ATL, but remained stable in those developing the smoldering type. This suggests that sCD30 may serve as a predictive marker for the onset of aggressive-type ATL in HTLV-1 carriers.
Upregulation of miR-34a in AGS Gastric Cancer Cells by a PLGA-PEG-PLGA Chrysin Nanoformulation
Mohammadian, Farideh;Abhari, Alireza;Dariushnejad, Hassan;Zarghami, Faraz;Nikanfar, Alireza;Pilehvar-Soltanahmadi, Yones;Zarghami, Nosratollah 8259
Background: Nano-therapy has the potential to revolutionize cancer therapy. Chrysin, a natural flavonoid, was recently recognized as having important biological roles in chemical defenses and nitrogen fixation, with anti-inflammatory and anti-oxidant effects, but the poor water solubility of flavonoids limits their bioavailability and biomedical applications. Objective: Chrysin-loaded PLGA-PEG-PLGA was assessed for improvement of solubility, drug tolerance, adverse effects and accumulation in a gastric cancer cell line (AGS). Materials and Methods: Chrysin-loaded PLGA-PEG copolymers were prepared using the double emulsion method (W/O/W). The morphology and size distributions of the prepared PLGA-PEG nanospheres were investigated by 1H NMR, FT-IR and SEM. The in vitro cytotoxicity of pure and nano-chrysin was tested by MTT assay and miR-34a was measured by real-time PCR. Results: 1H NMR, FT-IR and SEM confirmed the PLGA-PEG structure and the loading of chrysin on nanoparticles. MTT assays on the AGS cell line showed IC50 values of 68.2, 56.2 and 42.3 μM for free chrysin and 58.2, 44.2 and 36.8 μM for chrysin-loaded nanoparticles after 24, 48 and 72 hours of treatment, respectively. Real-time PCR showed that expression of miR-34a was upregulated to a greater extent by nano-chrysin than by free chrysin. Conclusions: Our study demonstrates that chrysin-loaded PLGA-PEG promises a natural and efficient system for anticancer drug delivery against gastric cancer.
Cost-Utility of "Doxorubicin and Cyclophosphamide" versus "Gemcitabine and Paclitaxel" for Treatment of Patients with Breast Cancer in Iran
Hatam, Nahid;Askarian, Mehrdad;Javan-Noghabi, Javad;Ahmadloo, Niloofar;Mohammadianpanah, Mohammad 8265
Purpose: A cost-utility analysis was performed to assess neoadjuvant chemotherapy regimens containing doxorubicin and cyclophosphamide (AC) versus paclitaxel and gemcitabine (PG) for locally advanced breast cancer patients in Iran. Materials and Methods: This cross-sectional study at Namazi hospital in Shiraz, in the south of Iran, covered 64 breast cancer patients. Using random numbers, the patients were divided into two groups, 32 receiving AC and 32 PG. Costs were identified and measured from a community perspective, including direct medical, direct non-medical and indirect costs, recorded on a data collection form. To assess the utility of the two regimens, the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire-Core30 (EORTC QLQ-C30) was applied. Using a decision tree, we calculated the expected costs and quality-adjusted life years (QALYs) for both regimens; the incremental cost-effectiveness ratio (ICER) was also assessed. Results: The decision tree showed that in the AC arm the expected cost was 39,170 US$ with an expected 3.39 QALYs, while in the PG arm the expected cost was 43,336 US$ with an expected 2.64 QALYs. Sensitivity analysis confirmed the cost-effectiveness of AC, with an ICER of -5,535 US$ per QALY. Conclusions: Overall, the results showed AC to be superior to PG in the treatment of patients with breast cancer, being less costly and more effective.
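As a quick check, the reported ICER follows from the standard definition applied to the expected values above; plugging in the rounded figures from the abstract:

$$\text{ICER} = \frac{C_{AC} - C_{PG}}{E_{AC} - E_{PG}} = \frac{39{,}170 - 43{,}336}{3.39 - 2.64} \approx -5{,}555 \ \text{US\$/QALY}$$

which matches the reported -5,535 US$ up to rounding of the expected costs and QALYs. A negative ICER arising from lower cost together with higher effectiveness means AC dominates PG.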
Altered Cell to Cell Communication, Autophagy and Mitochondrial Dysfunction in a Model of Hepatocellular Carcinoma: Potential Protective Effects of Curcumin and Stem Cell Therapy
Tork, Ola M;Khaleel, Eman F;Abdelmaqsoud, Omnia M 8271
Background: Hepato-carcinogenesis is multifaceted in its molecular aspects. Among the interplaying agents are altered gap junctions, the proteasome/autophagy system, and mitochondria. The present experimental study was designed to outline the roles of these players and to investigate the tumor suppressive effects of curcumin with or without mesenchymal stem cells (MSCs) in hepatocellular carcinoma (HCC). Materials and Methods: Adult female albino rats were divided into normal controls and animals with HCC induced by diethyl-nitrosamine (DENA) and CCl₄. Additional groups treated after HCC induction were: Cur/HCC, which received curcumin; MSCs/HCC, which received MSCs; and Cur+MSCs/HCC, which received both curcumin and MSCs. For all groups there were histopathological examination and assessment of gene expression of connexin43 (Cx43), ubiquitin ligase-E3 (UCP-3), the autophagy marker LC3 and coenzyme-Q10 (Mito.Q10) mRNA by real-time, reverse transcription-polymerase chain reaction, along with measurement of the LC3II/LC3I ratio for estimation of autophagosome formation in the rat liver tissue. In addition, the serum levels of ALT, AST and alpha fetoprotein (AFP), together with the proinflammatory cytokines TNFα and IL-6, were determined in all groups. Results: Histopathological examination of liver tissue from animals which received DENA-CCl₄ only revealed the presence of anaplastic carcinoma cells and macro-regenerative nodules. Administration of curcumin and MSCs, each alone or combined, to rats after induction of HCC improved the histopathological picture. This was accompanied by significant reduction in α-fetoprotein together with proinflammatory cytokines, significant decrease of various liver enzymes, and upregulation of Cx43, UCP-3, LC3 and Mito.Q10 mRNA. Conclusions: Improvement of Cx43 expression, nonapoptotic cell death and mitochondrial function can repress tumor growth in HCC. Administration of curcumin and/or MSCs has tumor suppressive effects, as these agents can target these mechanisms. However, further research is still needed to verify their effectiveness.
High Prevalence of Helicobacter pylori Resistance to Clarithromycin: a Hospital-Based Cross-Sectional Study in Nakhon Ratchasima Province, Northeast of Thailand
Tongtawee, Taweesak;Dechsukhum, Chavaboon;Matrakool, Likit;Panpimanmas, Sukij;Loyd, Ryan A;Kaewpitoon, Soraya J;Kaewpitoon, Natthawut 8281
Background: Helicobacter pylori is a cause of chronic gastritis, peptic ulcer disease, and gastric malignancy, and infection is a serious health problem in Thailand. Recently, clarithromycin-resistant H. pylori strains have represented the main cause of treatment failure. This study therefore aimed to determine the prevalence and pattern of H. pylori resistance to clarithromycin at Suranaree University of Technology Hospital, Nakhon Ratchasima province, northeast Thailand. Materials and Methods: This hospital-based cross-sectional study was carried out between June 2014 and February 2015 with 300 infected patients, who were interviewed and from whom gastric mucosa specimens were collected and proven positive by histology. The gastric mucosa specimens were tested for H. pylori and clarithromycin resistance by 23S ribosomal RNA point mutation analysis using real-time polymerase chain reaction. Correlation of eradication rates with patterns of mutation was analyzed by chi-square test. Results: Of 300 infected patients, the majority were aged between 47-61 years (31.6%), female (52.3%), with monthly income between 10,000-15,000 Baht (57%), and had a history of alcohol drinking (59.3%). Patient symptoms were abdominal pain (48.6%), followed by iron deficiency anemia (35.3%). Papaya salad consumption (40.3%) was a possible risk factor for H. pylori infection. The prevalence of H. pylori strains resistant to clarithromycin was 76.2%. Among clarithromycin-resistant strains tested, all were due to the A2144G point mutation in the 23S rRNA gene. The pure wild-type genotype, pure mutant genotype, and mixed wild-type and mutant genotype accounted for 23.8%, 35.7% and 40.5% of cases, respectively. With the clarithromycin-based triple therapy regimen, the efficacy decreased by 70% for H. pylori eradication (P<0.01). Conclusions: These results indicate a high rate of H. pylori resistance to clarithromycin. The mixed wild-type and mutant genotype is the most common resistant genotype in Nakhon Ratchasima province; therefore the use of clarithromycin-based triple therapy is not advisable as an empiric first-line regimen for H. pylori eradication in the northeast region of Thailand.
Thymidylate Synthase Polymorphisms and Risk of Lung Cancer among the Jordanian Population: a Case Control Study
Qasem, Wiam Al;Yousef, Al-Motassem;Yousef, Mohammad;Manasreh, Ihab 8287
Background: Thymidylate synthase (TS) catalyzes the methylation of deoxyuridylate to deoxythymidylate and is involved in DNA methylation, synthesis and repair. Two common polymorphisms have been reported, tandem repeats in the promoter-enhancer region (TSER) and a 6bp ins/del in the 3'UTR, that are implicated in a number of human diseases, including cancer. The association of the two polymorphisms with risk of lung cancer (LC) was here investigated in the Jordanian population. Materials and Methods: An age-, gender-, and smoking-matched case-control study involving 84 lung cancer cases and 71 controls was conducted. The polymerase chain reaction/restriction fragment length polymorphism (PCR-RFLP) technique was used to detect the polymorphisms of interest. Results: Individuals bearing the ins/ins genotype were 2.5 times more likely to have lung cancer [(95%CI: 0.98-6.37), p=0.051]. Individuals who were 57 years old or younger and carried the ins/ins genotype were 4.6 times more susceptible to lung cancer [OR (≤57 vs >57 years): 4.6 (95%CI: 0.93-22.5), p=0.059]. Genotypes and alleles of TSER were distributed similarly between cases and controls. Weak linkage disequilibrium existed between the two loci of interest (Lewontin's coefficient D') (LC: D'=0.03, r²=0.001, p=0.8; controls: D'=0.29, r²=0.08, p=0.02). Carriers of the "3 tandem repeats_insertion" haplotype (3R_ins) were 2 times more likely to have lung cancer [2 (95%CI: 1.13-3.48), p=0.061]. Conclusions: Genetic polymorphism of TS at the 3'UTR and its haplotype analysis may modulate the risk of lung cancer in Jordanians. The 6bp ins/del polymorphism of TS at the 3'UTR is more informative than the TSER polymorphism in predicting increased risk.
Retrospective Evaluation of Risk Factors and Immunohistochemical Findings for Pre-Neoplastic and Neoplastic Lesions of Upper Urinary Tract in Patients with Chronic Nephrolithiasis
Desai, Fanny Sharadkumar;Nongthombam, Jitendra;Singh, Lisam Shanjukumar 8293
Background: Urinary stones are known predisposing factors for upper urinary tract carcinoma (UUTC), which is commonly detected at an advanced stage with poor outcome because of its rarity and the lack of specific criteria for early detection. Aims and objectives: The main aim was to evaluate the impact of age, gender and stone characteristics on the risk of developing UUTC in patients with chronic nephrolithiasis. We also discuss the role of aberrant angiogenesis (AA) and immunohistochemical expression of p53, p16INK4a, CK20 and Ki-67 in the diagnosis of pelvicalyceal neoplastic (NL) and pre-neoplastic lesions (PNL) in these patients. Materials and Methods: Retrospective analysis of pelvicalyceal urothelial lesions from 88 nephrectomy specimens was carried out in a tertiary care centre from June 2012 to December 2014. Immunohistochemistry (IHC) was performed on 37 selected cases. Computed image analysis was performed to analyse aberrant angiogenesis. Results: All UUTC (5.7%) and metaplastic lesions were found to be associated with stones. Some 60% were pure squamous cell carcinoma and 40% were transitional cell carcinoma. Odds ratios for developing NL and PNL lesions in the presence of renal stones, impacted stones, and multiple and large staghorn stones were 9.39 (95% CI 1.15-76.39, p value 0.05), 6.28 (95% CI 1.59-24.85, p value <0.001) and 7.4 (95% CI 2.29-23.94, p value 0.001), respectively. When patient age was ≥55 years, the odds ratio for developing NL was 3.43 (95% CI 1.19-9.88, p value 0.019). IHC analysis showed that mean Ki-67 indices were 3.15 ± 3.63% for non-neoplastic lesions, 10.0 ± 9.45% for PNL and 28.0 ± 18.4% for NL. Sensitivity and specificity of CK20, p53, p16INK4a and AA were 76% and 95.9%; 100% and 27.5%; 100% and 26.5%; and 92.3% and 78.8%, respectively. Conclusions: Age ≥55 years, large staghorn stones, multiple stones and impacted stones were found to be associated with increased risk of NL and PNL in the upper urinary tract. For flat lesions, a panel of markers, a Ki-67 index >10 and the presence of aberrant angiogenesis were more useful than individual markers.
Plasma Circulating Cell-free Nuclear and Mitochondrial DNA as Potential Biomarkers in the Peripheral Blood of Breast Cancer Patients
Mahmoud, Enas H;Fawzy, Amal;Ahmad, Omar K;Ali, Amr M 8299
Background: In Egypt, breast cancer is estimated to be the most common cancer among females. It is also a leading cause of cancer-related mortality. Use of circulating cell-free DNA (ccf-DNA) as a non-invasive biomarker is a promising tool for diagnosis and follow-up of breast cancer (BC) patients. Objective: To assess the role of circulating cell-free DNA (nuclear and mitochondrial) in diagnosing BC. Materials and Methods: Multiplex real-time PCR was used to detect the levels of ccf nuclear and mitochondrial DNA in the peripheral blood of 50 breast cancer patients together with 30 patients with benign lesions and 20 healthy controls. Laboratory investigations, histopathological staging and receptor studies were carried out for the cancer group. Receiver operating characteristic curves were used to evaluate the performance of ccf-nDNA and mtDNA. Results: The levels of both nDNA and mtDNA in the cancer group were significantly higher in comparison to the benign and healthy control groups. There was a statistically significant association between nDNA and mtDNA levels and well-established prognostic parameters, namely histological grade, tumour stage, lymph node status and hormonal receptor status. Conclusions: Our data suggest that nuclear and mitochondrial ccf-DNA may be used as non-invasive biomarkers in BC.
Pharmacophore Development for Anti-Lung Cancer Drugs
Haseeb, Muhammad;Hussain, Shahid 8307
Lung cancer is one of the deadliest and most common cancers. Treatment is with chemotherapy, radiation therapy and surgery depending on the type and stage of the disease. Given the drugs used for chemotherapy and their associated side effects, there is a need to design and develop new anti-lung cancer drugs with minimal side effects and improved efficacy. The pharmacophore model appears to be a very helpful tool in the design and development of new lead compounds. In this paper, pharmacophore analysis of 10 novel anti-lung cancer compounds was validated for the first time. Using LigandScout, the pharmacophore features were predicted, and 3D pharmacophores were extracted via VMD software. Training set data were collected from the literature and the proposed model was applied to the training set, validating and verifying activity similar to that of the most active compounds. Pharmacophore development can therefore be recommended for further studies.
In Vitro Anti-Neuroblastoma Activity of Thymoquinone Against Neuro-2a Cells via Cell-cycle Arrest
Paramasivam, Arumugam;Raghunandhakumar, Subramanian;Priyadharsini, Jayaseelan Vijayashree;Jayaraman, Gopalswamy 8313
We have recently shown that thymoquinone (TQ) has a potent cytotoxic effect and induces apoptosis via caspase-3 activation with down-regulation of XIAP in mouse neuroblastoma (Neuro-2a) cells. Interestingly, our results showed that TQ was significantly more cytotoxic towards Neuro-2a cells when compared with primary normal neuronal cells. In this study, the effects of TQ on cell-cycle regulation and the mechanisms that contribute to this effect were investigated using Neuro-2a cells. Cell-cycle analysis performed by flow cytometry revealed cell-cycle arrest at G2/M phase and a significant increase in the accumulation of TQ-treated cells at sub-G1 phase, indicating induction of apoptosis by the compound. Moreover, TQ increased the expression of p53, p21 mRNA and protein levels, whereas it decreased the protein expression of PCNA, cyclin B1 and Cdc2 in a dose-dependent manner. Our finding suggests that TQ could suppress cell growth and cell survival via arresting the cell-cycle in the G2/M phase and inducing apoptosis of neuroblastoma cells.
Epidemiology of Hydatidiform Moles in a Tertiary Hospital in Thailand over Two Decades: Impact of the National Health Policy
Wairachpanich, Varangkana;Limpongsanurak, Sompop;Lertkhachonsuk, Ruangsak 8321
Background: The incidence of hydatidiform mole (HM) differs among regions but has declined significantly over time. In Thailand, the initiation of universal health coverage in 2002 resulted in a change of medical services countrywide. However, the impacts of these policies on gestational trophoblastic disease (GTD) cases in Thailand have not been reported. This study aimed to find the incidence of hydatidiform mole in King Chulalongkorn Memorial Hospital (KCMH) from 1994-2013, comparing the periods before and after implementation of the universal coverage health policy. Materials and Methods: All cases of GTD in KCMH from 1994-2013 were reviewed from medical records. The incidence of HM, patient characteristics, treatment and remission rates were compared between the two study decades, 1994-2003 and 2004-2013. Results: Hydatidiform mole cases decreased from 204 in the first decade to 111 in the second decade. Overall incidence of HM was 1.70 per 1,000 deliveries. The incidence of HM in the first and second decades was 1.70 and 1.71 per 1,000 deliveries, respectively (p=0.65, 95%CI 1.54-1.88). Referred cases of nonmolar gestational trophoblastic neoplasia (GTN) increased from 12 (4.4%) to 23 (14.4%, p<0.01). Vaginal bleeding was the most common presenting symptom, decreasing from 89.4% to 79.6% (p=0.02). Asymptomatic HM patients increased from 4.8% to 10.2% (p=0.07). The rate of postmolar GTN was 26%. Conclusions: The number of HM cases in this study decreased over the two decades but the incidence was unchanged. Referrals of malignant cases became more common after initiation of the universal health coverage policy. The classic clinical presentation decreased significantly in the last decade.
Breast Cancer in Lampang, a Province in Northern Thailand: Analysis of 1993-2012 Incidence Data and Future Trends
Lalitwongsa, Somkiat;Pongnikorn, Donsuk;Daoprasert, Karnchana;Sriplung, Hutcha;Bilheem, Surichai 8327
Background: The recent epidemiologic transition in Thailand, with decreasing incidence of infectious diseases along with increasing rates of chronic conditions, including cancer, is a serious problem for the country. Breast cancer has the highest incidence rates among females throughout Thailand. Lampang is a province in the upper part of Northern Thailand. A study was needed to identify the current burden and the future trends of breast cancer in upper Northern Thai women. Materials and Methods: Here we used cancer incidence data from the Lampang Cancer Registry to characterize and analyze the local incidence of breast cancer. Joinpoint analysis, an age-period-cohort model and the Nordpred package were used to investigate the incidence of breast cancer in the province from 1993 to 2012 and to project future trends from 2013 to 2030. Results: Age-standardized incidence rates (world) of breast cancer in the upper parts of Northern Thailand increased from 16.7 to 26.3 cases per 100,000 female population, equivalent to an annual percentage change of 2.0-2.8% according to the method used. Linear drift effects played a role in shaping the increase in incidence. All three projection methods suggested that incidence rates will continue to increase in the future, with incidence for women aged 50 and above increasing at a higher rate than for women below the age of 50. Conclusions: The current early detection measures increase detection rates of early disease. Preparation of a budget for treatment facilities and human resources, both in surgical and medical oncology, is essential.
Comparative Investigation of Single Voxel Magnetic Resonance Spectroscopy and Dynamic Contrast Enhancement MR Imaging in Differentiation of Benign and Malignant Breast Lesions in a Sample of Iranian Women
Faeghi, Fariborz;Baniasadipour, Banafsheh;Jalalshokouhi, Jalal 8335
Purpose: To compare single voxel magnetic resonance spectroscopy (SV-MRS) and dynamic contrast enhancement (DCE) MRI for differentiation of benign and malignant breast lesions in a sample of Iranian women. Materials and Methods: A total of 30 women with abnormal breast lesions detected on mammography, ultrasound, or clinical breast exam were examined with DCE and SV-MRS. The tCho (total choline) resonance in MRS spectra was qualitatively evaluated, and detection of a visible tCho peak at 3.2 ppm was defined as a positive finding for malignancy. The types of DCE curves were persistent (type 1), plateau (type 2), and washout (type 3). Lesions were first classified according to choline findings and type of DCE curve, and finally compared to pathological results as the reference standard. Results: This study included 19 patients with malignant lesions and 11 patients with benign ones. While 63.6% of benign lesions (7 of 11) showed type 1 DCE curves and 36.4% (4 of 11) showed type 2, 57.9% (11 of 19) of malignant lesions were type 3 and 42.1% (8 of 19) type 2. Choline peaks were detected in 18 of 19 malignant lesions and in 3 of 11 benign counterparts. One malignant and 8 benign cases did not show any visible resonance at 3.2 ppm, so SV-MRS featured 94.7% sensitivity, 72.7% specificity and 86.7% accuracy. Conclusions: The present findings indicate that a combined approach using MRS and DCE MRI can improve the specificity of MRI for differentiation of benign and malignant breast lesions.
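The diagnostic accuracy figures quoted above follow directly from the reported counts (18 of 19 malignant lesions choline-positive, 8 of 11 benign lesions choline-negative, 30 lesions in total):

$$\text{Se} = \frac{TP}{TP+FN} = \frac{18}{19} \approx 94.7\%, \qquad \text{Sp} = \frac{TN}{TN+FP} = \frac{8}{11} \approx 72.7\%, \qquad \text{Acc} = \frac{18+8}{30} \approx 86.7\%$$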
Outcome and Cost Effectiveness of Ultrasonographically Guided Surgical Clip Placement for Tumor Localization in Patients undergoing Neo-adjuvant Chemotherapy for Breast Cancer
Masroor, Imrana;Zeeshan, Sana;Afzal, Shaista;Sufian, Saira Naz;Ali, Madeeha;Khan, Shaista;Ahmad, Khabir 8339
Background: To determine the outcome and cost saving of placing ultrasound-guided surgical clips for tumor localization in patients undergoing neo-adjuvant chemotherapy for breast cancer. Materials and Methods: This retrospective cross-sectional analytical study was conducted at the Department of Diagnostic Radiology, Aga Khan University Hospital, Karachi, Pakistan from January to December 2014. A sample of 25 women fulfilling our selection criteria was taken. All patients came to our department for ultrasound-guided core biopsy of suspicious breast lesions and clip placement in the index lesion prior to neo-adjuvant chemotherapy. All the selected patients had biopsy-proven breast cancer. Results: The mean age was 45 ± 11.6 years. There were no complications after clip placement in terms of clip migration or hemorrhage. The cost of a commercially available marker was approximately PKR 9,000 (US$ 90), and that of a surgical clip was PKR 900 (US$ 9). The cost of surgical clips in 25 patients was PKR 22,500 (US$ 225), compared to PKR 225,000 (US$ 2,250) had commercially available markers been used. The total cost saving for 25 patients was PKR 202,500 (US$ 2,025), or PKR 8,100 (US$ 81) per patient. Conclusions: The results of our study show that ultrasound-guided surgical clip placement in index lesions prior to neo-adjuvant therapy is a safe and cost-effective method to identify the tumor bed and assess response to treatment for further management.
Colorectal Cancer Awareness and Screening Preference: A Survey during the Malaysian World Digestive Day Campaign
Suan, Mohd Azri Mohd;Mohammed, Noor Syahireen;Hassan, Muhammad Radzi Abu 8345
Background: Although the incidence of colorectal cancer in Malaysia is increasing, awareness of this cancer, including its symptoms, risk factors and screening methods, remains low among the Malaysian population. This survey was conducted with the aim of (i) ascertaining the level of awareness of colorectal cancer symptoms, risk factors and screening among the general population and (ii) assessing public preference and willingness to pay for colorectal cancer screening. Materials and Methods: The questionnaire was distributed in eight major cities in West Malaysia during the World Health Digestive Day (WDHD) campaign. Two thousand four hundred and eight respondents participated in this survey. Results: Generally, awareness of colorectal cancer was found to be relatively good. Symptoms such as change in bowel habit, blood in the stool, weight loss and abdominal pain were well recognized by 86.6%, 86.9%, 83.4% and 85.6% of the respondents, respectively. However, common risk factors such as positive family history, obesity and old age were acknowledged by fewer than 70% of the respondents. Almost 80% of the respondents were willing to take a screening test even without any apparent symptoms. Colonoscopy was the preferred screening method, but only 37.5% were willing to pay out of pocket for early colonoscopy. Conclusions: Continuous cancer education should be promoted, with more involvement from healthcare providers, in order to make future colorectal cancer screening programs successful.
Automatic Electronic Cleansing in Computed Tomography Colonography Images using Domain Knowledge
Manjunath, KN;Siddalingaswamy, PC;Prabhu, GK 8351
Electronic cleansing is an image post-processing technique in which tagged colonic content is subtracted from the colon in CTC images. There are post-processing artefacts, like: 1) soft tissue degradation; 2) incomplete cleansing; 3) misclassification of polyps due to pseudo-enhanced voxels; and 4) pseudo soft tissue structures. The objective of the study was to subtract the tagged colonic content without losing the soft tissue structures. This paper proposes a novel adaptive method to solve the first three problems using a multi-step algorithm. It uses a new edge model-based method which involves colon segmentation, a priori information on the Hounsfield units (HU) of different colonic contents at specific tube voltages, subtracting the tagging materials, restoring the soft tissue structures based on selective HU, removing the boundary between air and contrast, and applying a filter to clean minute particles due to improperly tagged endoluminal fluids which appear as noise. The main finding of the study was that submerged soft tissue structures were fully preserved and the pseudo-enhanced intensities were corrected without any artifact. The method was implemented with multithreading for parallel processing on a high-performance computer. The technique was applied to a fecal-tagged dataset (30 patients) in which the tagging agent had not been completely removed from the colon. The results were then qualitatively validated by radiologists for any image processing artifacts.
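To make the HU-threshold idea concrete, here is a minimal numpy/scipy sketch of threshold-based cleansing. The threshold values, the function name and the assumption of a pre-computed colon mask are illustrative only, not the authors' actual implementation or tube-voltage-specific parameters:

```python
import numpy as np
from scipy import ndimage

# Illustrative Hounsfield-unit constants (assumed values for this sketch).
AIR_HU = -1000
TAGGED_MIN_HU = 200          # tagged fluid/stool is strongly enhanced

def electronic_cleansing(volume_hu, colon_mask):
    """Minimal sketch of threshold-based electronic cleansing.

    volume_hu  : 3D numpy array of Hounsfield units
    colon_mask : boolean 3D array from a prior colon segmentation step
    """
    cleansed = volume_hu.copy()

    # 1) Classify tagged colonic content inside the segmented colon.
    tagged = (volume_hu >= TAGGED_MIN_HU) & colon_mask

    # 2) Subtract the tagging material by replacing it with air.
    cleansed[tagged] = AIR_HU

    # 3) Soften the former air-contrast boundary: take the one-voxel shell
    #    around the tagged region and push sub-zero (partial-volume) voxels
    #    toward air as well.
    shell = ndimage.binary_dilation(tagged) & colon_mask & ~tagged
    cleansed[shell & (volume_hu < 0)] = AIR_HU

    # 4) Median filter to suppress speckle from improperly tagged fluid.
    return ndimage.median_filter(cleansed, size=3)
```

A real implementation would add the soft-tissue restoration and pseudo-enhancement correction steps the abstract describes; the sketch only shows the subtraction core.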
Breast Cancer in Lopburi, a Province in Central Thailand: Analysis of 2001-2010 Incidence and Future Trends
Sangkittipaiboon, Somphob;Leklob, Atit;Sriplung, Hutcha;Bilheem, Surichai 8359
Background: Thailand has undergone an epidemiologic transition with decreasing infectious diseases and an increasing burden of chronic conditions, including cancer. Breast cancer has the highest incidence rates among females throughout Thailand. This study aimed to identify the current burden and the future trends of breast cancer in Lopburi, a province in Central Thailand. Materials and Methods: We used cancer incidence data from the Lopburi Cancer Registry to characterize and analyze the incidence of breast cancer in Central Thailand. With joinpoint and age-period-cohort analyses, the incidence of breast cancer in the province from 2001 to 2010 was investigated and future trends from 2011 to 2030 were projected. Results: Age-adjusted incidence rates of breast cancer in Lopburi increased from 23.4 to 34.3 cases per 100,000 female population during the period, equivalent to an annual percentage change of 4.3% per year. Both period and cohort effects played a role in shaping the increase in incidence. Joinpoint projection suggested that incidence rates will continue to increase in the future, with incidence for women aged 50 years and above increasing at a higher rate than for women below the age of 50. Conclusions: The current situation, in which early detection measures are being promoted, could increase detection rates of the disease. Preparation of a sufficient budget for treatment facilities and human resources, both in surgical and medical oncology, is essential for future medical care.
Association between Shammah Use and Oral Leukoplakia-like Lesions among Adult Males in Dawan Valley, Yemen
Al-Tayar, Badr Abdullah;Tin-Oo, Mon Mon;Sinor, Modh Zulkarnian;Alakhali, Mohammed Sultan 8365
Background: Shammah is a traditional form of snuff dipping tobacco (a smokeless tobacco form) that is commonly used in Yemen. Oral mucosal changes due to the use of shammah can usually be observed on the mucosal surfaces that the product touches. The aim of this study was to determine the association between shammah use and oral leukoplakia-like lesions. Other associated factors were also determined. Materials and Methods: A cross-sectional study was conducted on 346 randomly selected adult males. Multi-stage random sampling was used to select the study location. After completing structured questionnaire interviews, all participants underwent clinical examination for screening of oral leukoplakia-like lesions. Clinical features of oral leukoplakia-like lesions were characterized based on the grades of Axéll et al (1976). Univariable and multivariable logistic regression were used to assess the potential associated factors. Results: Of 346 male participants aged 18 years and older, 68 (19.7%) reported being current shammah users. The multivariable analysis revealed that age, non-formal or primary level of education, former shammah use, current shammah use, and frequency of shammah use per day were statistically associated with the presence of oral leukoplakia-like lesions [adjusted odds ratio (AOR)=1.03; 95% confidence interval (CI): 1.01, 1.06; P=0.006], (AOR=8.65; 95% CI: 2.81, 26.57; P=0.001), (AOR=3.65; 95% CI: 1.40, 9.50; P=0.008), (AOR=12.99; 95% CI: 6.34, 26.59; P=0.001), and (AOR=1.17; 95% CI: 1.02, 1.36; P=0.026), respectively. Conclusions: The results revealed oral leukoplakia-like lesions to be significantly associated with shammah use. Therefore, it is important to develop comprehensive shammah prevention programs in Yemen.
Factors Associated with Adherence to Colorectal Cancer Screening among Moderate Risk Individuals in Iran
Taheri-Kharameh, Zahra;Noorizadeh, Farsad;Sangy, Samira;Zamanian, Hadi;Shouri-Bidgoli, Ali Reza;Oveisi, Helaleh 8371
Background: Colorectal cancer is one of the most common neoplasms in Iran. Secondary prevention (colorectal cancer screening) is an important and valuable method for early diagnosis of this cancer. The objectives of this study were to determine the factors associated with colorectal cancer screening adherence among Iranians 50 years and older, using the Health Belief Model. Materials and Methods: This cross-sectional study was conducted from June 2012 to May 2013. A convenience sample of 200 individuals aged 50 and older was recruited from the population at outpatient clinics in teaching hospitals. Data gathering tools were Champion's Health Belief Model Scale (CHBMS), together with socio-demographic background and CRC screening information. Multiple logistic regression was performed to identify factors associated with colorectal cancer screening adherence. Results: The mean age of participants was 62.5 ± 10.8 years and 75.5% were women. A high percentage of the participants had not heard or read about colorectal cancer (86.5%) or CRC screening (93.5%). Perceived susceptibility to colorectal cancer had the lowest percentage of all of the subscales. Participants who perceived more susceptibility (OR=2.99; 95% CI: 1.23-5.45), those who reported higher knowledge (OR=1.29; 95% CI: 1.86-3.40), and those who reported fewer barriers (OR=0.37; 95% CI: 0.21-0.89) were more likely to have undergone colorectal cancer screening. Conclusions: Our findings indicated that CRC knowledge, perceived susceptibility and barriers were significant predictors of colorectal cancer screening adherence. Strategies to increase knowledge and overcome barriers in at-risk individuals appear necessary. Education programs should be promoted to overcome knowledge deficiency and negative perceptions in elderly Iranians.
Association of PNPLA3 Polymorphism with Hepatocellular Carcinoma Development and Prognosis in Viral and Non-Viral Chronic Liver Diseases
Khlaiphuengsin, Apichaya;Kiatbumrung, Rattanaporn;Payungporn, Sunchai;Pinjaroen, Nutcha;Tangkijvanich, Pisit 8377
Background: The aim of this study was to evaluate any association between a single nucleotide polymorphism (SNP) in the patatin-like phospholipase domain containing 3 (PNPLA3) gene (rs738409, C>G) and the development and prognosis of hepatocellular carcinoma (HCC). Materials and Methods: Two hundred healthy controls and 388 HCC cases were included: 211 with HBV, 98 with HCV, 29 with alcoholic steatohepatitis (ASH) and 52 with non-alcoholic steatohepatitis (NASH). The SNP was determined by real-time PCR based on TaqMan assays. Results: The prevalence of rs738409 genotypes CC, CG and GG in controls was 91 (45.5%), 88 (44.0%), and 21 (10.5%), respectively, while the corresponding genotypes in all patients with HCC were 158 (40.7%), 178 (45.9%), and 52 (13.4%). The GG genotype was significantly more frequent in patients with ASH/NASH-related HCC compared with controls (OR=2.34, 95% CI=1.16-4.71, P=0.018) and with viral-related HCC cases (OR=2.15, 95% CI=1.13-4.08, P=0.020). However, the frequency of the GG genotype was similar between controls and patients with viral-related HCC. At initial diagnosis, HBV-related HCCs were larger and at more advanced BCLC stage than in the other HCC groups. There were no significant differences between the GG and non-GG groups regarding clinical characteristics, tumor stage and overall survival. Conclusions: These data suggest an influence of the PNPLA3 polymorphism on the occurrence of HCC in patients with ASH/NASH but not among those with chronic viral hepatitis. However, the polymorphism was not associated with the prognosis of HCC.
Treatment of Oral Leukoplakia with Diode Laser: a Pilot Study on Indian Subjects
Kharadi, Usama A Rashid;Onkar, Sanjeev;Birangane, Rajendra;Chaudhari, Swapnali;Kulkarni, Abhay;Chaudhari, Rohan 8383
Background: To evaluate the safety, convenience and effectiveness of a 940 nm diode laser for treatment of homogenous leukoplakia. Materials and Methods: Ten patients with clinically diagnosed homogenous leukoplakia were selected from an Indian dental educational institution for the study. Toluidine blue stain was applied locally over the lesion. The area showing increased uptake of stain was excised using a 940 nm EZLASE™ diode laser (BIOLASE, USA). Results: Although various treatment modalities have been tried and the search continues for novel approaches to complete removal of homogenous leukoplakia, the results of our preliminary pilot study make clear that the 940 nm diode laser is a good alternative treatment modality for homogenous leukoplakia. Healing was complete without any complication within 1 month. Pain intensity was mild, and zero on the VAS scale at the 1-month follow-up. Conclusions: 940 nm diode lasers are safe and can be used effectively as a treatment modality for homogenous leukoplakia, without complications and without compromising the health and oral function of patients. Considering the possibility of recurrence, long-term follow-up of patients is a must.
Comparison between Use of PSA Kinetics and Bone Marrow Micrometastasis to Define Local or Systemic Relapse in Men with Biochemical Failure after Radical Prostatectomy for Prostate Cancer
Murray, Nigel P;Reyes, Eduardo;Fuentealba, Cynthia;Orellana, Nelson;Jacob, Omar 8387
Background: Treatment of biochemical failure after radical prostatectomy for prostate cancer is largely empirically based. PSA kinetics have been used as a guide to determine local or systemic treatment of biochemical failure. We here compared PSA kinetics with detection of bone marrow micrometastasis as methods to determine local or systemic relapse. Materials and Methods: A cross-sectional study was conducted of men with biochemical failure, defined as a serum PSA >0.2 ng/ml after radical prostatectomy. Consecutive patients having undergone radical prostatectomy and with biochemical failure were enrolled, and clinical and pathological details were recorded. Bone marrow biopsies were obtained from the iliac crest and touch prints were made, with micrometastases (mM) detected using anti-PSA antibodies. The clinical parameters of total serum PSA, PSA velocity, PSA doubling time and time to biochemical failure, age, Gleason score and pathological stage were registered. Results: A total of 147 men, mean age 71.6 ± 8.2 years, with a median time to biochemical failure of 5.5 years (IQR 1.0-6.3 years), participated in the study. Bone marrow samples were positive for micrometastasis in 98/147 (67%) of patients at the time of biochemical failure. The results of bone marrow micrometastasis detected by immunocytochemistry were not concordant with local relapse as defined by PSA velocity, time to biochemical failure or Gleason score. In men with a PSA doubling time of less than six months or a total serum PSA of >2.5 ng/ml at the time of biochemical failure, the detection of bone marrow micrometastasis was significantly higher. Conclusions: The detection of bone marrow micrometastasis could be useful in defining systemic relapse, this minimally invasive procedure warranting further studies with a larger group of patients.
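For reference, the PSA doubling time used as a kinetic parameter above is conventionally computed from two PSA measurements under an assumption of exponential growth (this is the standard definition, not a formula specific to this study):

$$\text{PSADT} = \frac{\ln 2 \cdot \Delta t}{\ln(\text{PSA}_2) - \ln(\text{PSA}_1)}$$

For example, a PSA rising from 0.4 to 0.8 ng/ml over 6 months gives PSADT = 6 months, placing a patient at the six-month threshold used in the analysis above.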
Nutritional Status among Rural Community Elderly in the Risk Area of Liver Fluke, Surin Province, Thailand
Kaewpitoon, Soraya J;Namwichaisirikul, Niwatchai;Loyd, Ryan A;Churproong, Seekaow;Ueng-Arporn, Naporn;Matrakool, Likit;Tongtawee, Taweesak;Rujirakul, Ratana;Nimkhuntod, Porntip;Wakhuwathapong, Parichart;Kaewpitoon, Natthawut 8391
Thailand is becoming an aging society, and this presents serious problems, especially regarding health. Chronic diseases found frequently in the elderly may be related to dietary intake and lifestyle. Surin province has been reported as a risk area for liver fluke, with a high incidence of cholangiocarcinoma especially in the elderly. Therefore, this study aimed to determine the nutritional status and associated factors among the elderly in Surin province, northeast Thailand. A community-based cross-sectional study was conducted among 405 people aged 60 years and above between September 2012 and July 2014. The participants were selected through a randomized systematic sampling method and completed a pre-designed questionnaire covering general information, food records, weight, height, waist circumference, and food consumption behavior related to liver fluke infection. The data were analyzed using descriptive statistics and Spearman's rank correlation coefficients. The majority of participants were female (63.5%), aged between 60 and 70 years (75.6%), with elementary school education (96.6%), living with family (78.9%), and having underlying diseases (38.3%). Carbohydrate consumption (95.3%) was in need of improvement. The participants demonstrated under-nutrition (24.4%), over-nutrition (16.4%), and obesity (15.4%). Waist circumference was higher than the normal level in 34.0% of the elderly. Female gender, age 71-80 years, elementary school education and underlying diseases were significantly associated with poor nutritional status. The majority had high knowledge (43.0%), moderate attitude (44.4%), and moderate practice (46.2%) regarding food consumption related to liver fluke infection. In conclusion, these findings indicate that the elderly often have an under- or over-nutritional status. Carbohydrate consumption needs to be improved. Some elderly show food consumption behavior related to liver fluke infection that needs to be improved, so health education pertaining to good nutrition is required.
Association of Histopathological Markers with Clinico-Pathological Factors in Mexican Women with Breast Cancer
Bandala, Cindy;De la Garza-Montano, Paloma;Cortes-Algara, Alfredo;Cruz-Lopez, Jaime;Dominguez-Rubio, Rene;Gonzalez-Lopez, Nelly Judith;Cardenas-Rodriguez, Noemi;Alfaro-Rodriguez, A;Salcedo, M;Floriano-Sanchez, E;Lara-Padilla, Eleazar 8397
Background: Breast cancer (BCa) is the most common malignancy in Mexican women. A set of histopathological markers has been established to guide BCa diagnosis, prognosis and treatment. Nevertheless, only in a few Mexican health services, such as that of the Secretariat of National Defense (SEDENA for its acronym in Spanish), are these markers commonly employed for assessing BCa. The aim of this study was to explore the association of Ki67, TP53, HER2/neu, estrogen receptors (ERs) and progesterone receptors (PRs) with BCa risk factors. Materials and Methods: Clinical histories provided background patient information. Immunohistochemical (IHC) analysis was conducted on 48 tissue samples from women diagnosed with BCa and treated with radical mastectomy. The chi-square test or Fisher exact test, together with Pearson and Spearman correlations, were applied. Results: On average, patients were 58 ± 10.4 years old. The most common finding was invasive ductal carcinoma (95.8%), histological grade 3 (45.8%), with a poor Nottingham Prognostic Index (NPI; 80.4%). ERs and PRs were associated with smoking and alcohol consumption, metastasis at diagnosis and Ki67 expression (p<0.05). PR+ was also related to urea and ER+ (p<0.05). Ki67 was associated with TP53 and elevated triglycerides (p<0.05), and HER2/neu with ER+, the number of pregnancies and tumor size (p<0.05). TP53 was also associated with a poor NPI (p<0.05) and CD34 with smoking (p<0.05). Triple negative status (ER-/PR-/HER2/neu-) was related to smoking, alcohol consumption, exposure to biomass, number of pregnancies, metastasis and a poor NPI (p<0.05). Moreover, the luminal B subtype was associated with histological type (p=0.007), tumor size (p=0.03) and high cholesterol (p=0.02). Conclusions: Ki67, TP53, HER2/neu, ER and PR proved to be related to several clinical and pathological factors. Hence, it is crucial to determine this IHC profile in women at risk for BCa. Certain associations require further study to understand the underlying physiological/biochemical/molecular processes.
Single Nucleotide Polymorphisms in STAT3 and STAT4 and Risk of Hepatocellular Carcinoma in Thai Patients with Chronic Hepatitis B
Chanthra, Nawin;Payungporn, Sunchai;Chuaypen, Natthaya;Piratanantatavorn, Kesmanee;Pinjaroen, Nutcha;Poovorawan, Yong;Tangkijvanich, Pisit 8405
Hepatitis B virus (HBV) infection is the leading cause of hepatocellular carcinoma (HCC) development. Recent studies demonstrated that single nucleotide polymorphisms (SNPs) rs2293152 in signal transducer and activator of transcription 3 (STAT3) and rs7574865 in signal transducer and activator of transcription 4 (STAT4) are associated with chronic hepatitis B (CHB)-related HCC in the Chinese population. We hypothesized that these polymorphisms might be related to HCC susceptibility in the Thai population as well. Study subjects were divided into 3 groups consisting of CHB-related HCC (n=192), CHB without HCC (n=200) and healthy controls (n=190). The studied SNPs were genotyped using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). The results showed that the distribution of genotypes for both polymorphisms was in Hardy-Weinberg equilibrium (P>0.05). Our data demonstrated a positive association of rs7574865 with HCC risk when compared to healthy controls under an additive model (GG versus TT: odds ratio (OR)=2.07, 95% confidence interval (CI)=1.06-4.03, P=0.033). This correlation remained significant under allelic and recessive models (OR=1.46, 95% CI=1.09-1.96, P=0.012 and OR=1.71, 95% CI=1.13-2.59, P=0.011, respectively). However, no significant association between rs2293152 and HCC development was observed. These data suggest that SNP rs7574865 in STAT4 might contribute to progression to HCC in the Thai population.
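To illustrate how such allelic- and recessive-model odds ratios are derived from genotype counts, here is a minimal, self-contained Python sketch. The genotype counts are made-up placeholders, not the study's data, and the Woolf (log-OR) confidence interval is one common choice:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio with Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical genotype counts (placeholders, not the study's data).
cases    = {"GG": 52, "GT": 90, "TT": 50}   # HCC group
controls = {"GG": 21, "GT": 88, "TT": 81}   # healthy controls

# Allelic model: G versus T allele counts (two alleles per subject).
g_ca = 2 * cases["GG"] + cases["GT"]
t_ca = 2 * cases["TT"] + cases["GT"]
g_co = 2 * controls["GG"] + controls["GT"]
t_co = 2 * controls["TT"] + controls["GT"]
print("allelic OR, 95% CI:", odds_ratio(g_ca, t_ca, g_co, t_co))

# Recessive model: GG versus (GT + TT) carriers.
print("recessive OR, 95% CI:", odds_ratio(
    cases["GG"], cases["GT"] + cases["TT"],
    controls["GG"], controls["GT"] + controls["TT"]))
```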
Inhibition of NF-κB, Bcl-2 and COX-2 Gene Expression by an Extract of Eruca sativa Seeds during Rat Mammary Gland Carcinogenesis
Abdel-Rahman, Salah;Shaban, Nadia;Haggag, Amany;Awad, Doaa;Bassiouny, Ahmad;Talaat, Iman 8411
The effect of Eruca sativa seed extract (SE) on nuclear factor kappa B (NF-κB), cyclooxygenase-2 (COX-2) and B-cell lymphoma-2 (Bcl-2) gene expression levels was investigated in rat mammary gland carcinogenesis induced by 7,12-dimethylbenz(α)anthracene (DMBA). DMBA increased NF-κB, COX-2 and Bcl-2 gene expression levels and lipid peroxidation (LP), while decreasing glutathione-S-transferase (GST) and superoxide dismutase (SOD) activities and total antioxidant concentration (TAC) compared to the control group. After DMBA administration, SE treatment reduced NF-κB, COX-2 and Bcl-2 gene expression levels and LP. Hence, SE treatment reduced inflammation and cell proliferation, while increasing apoptosis, GST and SOD activities and TAC. Analysis revealed that SE has high concentrations of total flavonoids, triterpenoids, alkaloids and polyphenolic compounds such as gallic, chlorogenic, caffeic, 3,4-dicaffeoylquinic, 3,5-dicaffeoylquinic, tannic and cinnamic acids, catechin and phloridzin. These findings indicate that SE may be considered a promising natural product from cruciferous vegetables against breast cancer, especially given its high antioxidant properties.
Aqueous Extract of Anticancer Drug CRUEL Herbomineral Formulation Capsules Exerts Anti-proliferative Effects in Renal Cell Carcinoma Cell Lines
Verma, Shiv Prakash;Sisoudiya, Saumya;Das, Parimal 8419
Purpose: To evaluate the anti-cancer activity of an aqueous extract of CRUEL (herbomineral formulation) capsules on renal cell carcinoma cell lines, and to explore the mechanisms of cell death. Materials and Methods: To detect the cytotoxic dose in renal cell carcinoma (RCC) cells, MTT assays were performed, and morphological changes after treatment were observed by inverted microscopy. Drug effects against RCC cell lines were assessed with reference to cell cycle distribution (flow cytometry), anti-metastatic potential (wound healing assay) and autophagy (RT-PCR). Results: CRUEL showed anti-proliferative effects against RCC tumor cell lines with an IC50 value of ≈4 mg/mL in vitro, while inducing cell cycle arrest at the S-phase and inhibiting wound healing. LC3 was found by RT-PCR to be up-regulated after drug treatment, indicating an autophagic mode of cell death. Conclusions: This study provides experimental validation of the antitumor activity of CRUEL.
Safety and Prognostic Impact of Prophylactic Level VII Lymph Node Dissection for Papillary Thyroid Carcinoma
Fayek, Ihab Samy;Kamel, Ahmed Ahmed;Sidhom, Nevine FH 8425
Purpose: To study the safety of prophylactic level VII nodal dissection regarding hypoparathyroidism (temporary and permanent) and vocal cord dysfunction (temporary and permanent) and its impact on disease-free survival (DFS). Materials and Methods: This prospective study concerned 63 patients with papillary thyroid carcinoma and N0 neck node status (clinically and radiologically) in the period from December 2009 to May 2013. All patients underwent total thyroidectomy with prophylactic central neck dissection, including levels VI and VII lymph nodes in group A (31 patients) and level VI only in group B (32 patients). The thyroid gland, level VI and level VII lymph nodes were each examined histopathologically separately for tumor size, multicentricity, bilaterality, extrathyroidal extension, number of dissected LNs and metastatic LNs. Follow-up of both groups, regarding hypoparathyroidism, vocal cord dysfunction and DFS, ranged from 6-61 months. Results: The mean ages were 34.8 and 34.3 years, with female predominance in both groups (F:M 24:7 and 27:5 in groups A and B, respectively). Mean tumor size was 12.6 and 14.7 mm. No statistical differences were found between the groups regarding age, sex, bilaterality, multicentricity or extrathyroidal extension. The mean number of dissected level VI LNs was 5.06 and 4.72 and the mean number of metastatic level VI LNs was 1 and 0.84 in groups A and B, respectively. The mean number of dissected level VII LNs was 2.16 and the mean number of metastatic LNs was 0.48. Postoperatively, temporary hypoparathyroidism was detected in 10 and 7 patients and permanent hypoparathyroidism in 2 and 3 patients; temporary vocal cord dysfunction was detected in 4 patients and one patient, and permanent vocal cord dysfunction in one and 2 patients in groups A and B, respectively. No significant statistical differences were noted between the 2 groups regarding hypoparathyroidism (P=0.535) or vocal cord dysfunction (P=0.956). The number of dissected LNs at level VI only significantly affected the occurrence of hypoparathyroidism (p<0.001) and vocal cord dysfunction (p<0.001). DFS was significantly affected by bilaterality, multicentricity and extrathyroidal extension. Conclusions: Level VII nodal dissection is a safe procedure complementary to level VI nodal dissection in prophylactic central neck dissection for papillary thyroid carcinoma.
Combined Treatment with 2-Deoxy-D-Glucose and Doxorubicin Enhances the in Vitro Efficiency of Breast Cancer Radiotherapy
Islamian, Jalil Pirayesh;Aghaee, Fahimeh;Farajollahi, Alireza;Baradaran, Behzad;Fazel, Mona 8431
Doxorubicin (DOX) was introduced as an effective chemotherapeutic for a wide range of cancers, but with some severe side effects, especially on the myocardium. 2-Deoxy-D-glucose (2DG) enhances the damage caused by chemotherapeutics and ionizing radiation (IR) selectively in cancer cells. We studied the effects of 1 μM DOX and 500 μM 2DG on radiation-induced cell death and apoptosis, and on the expression levels of the p53 and PTEN genes, in T47D and SKBR3 breast cancer cells irradiated with 100, 150 and 200 cGy x-rays. DOX and 2DG treatments altered the radiation-induced expression levels of p53 and PTEN in T47D as well as SKBR3 cells. In addition, the combination along with IR decreased the viability of both cell lines. The radiobiological parameter (D0) of T47D cells treated with 2DG/DOX and IR was 140 cGy, compared to 160 cGy obtained with IR alone. The same parameters for the SKBR3 cell line were 120 and 140 cGy, respectively. The sensitivity enhancement ratios (SERs) for the combined chemo-radiotherapy of the T47D and SKBR3 cell lines were 1.14 and 1.16, respectively. According to these results, the combination treatment may be used as an effective targeted treatment of breast cancer, potentially reducing the side effects of single-modality treatment.
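For context, D0 is the dose that reduces survival by a factor of e in the single-hit exponential survival model, and the SER follows directly from the two D0 values; a worked check with the numbers above:

$$S(D) = e^{-D/D_0}, \qquad \text{SER} = \frac{D_0^{\text{IR alone}}}{D_0^{\text{IR + 2DG/DOX}}} = \frac{160}{140} \approx 1.14 \ \text{(T47D)}, \qquad \frac{140}{120} \approx 1.17 \ \text{(SKBR3)}$$

The small difference from the reported 1.16 for SKBR3 presumably reflects rounding of the fitted D0 values.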
Incidence and Mortality of Breast Cancer and their Relationship with the Human Development Index (HDI) in the World in 2012
Ghoncheh, Mahshid;Mirzaei, Maryam;Salehiniya, Hamid 8439
Background: Breast cancer is the most common malignancy in women worldwide and its incidence is generally increasing. In 2012, it was the second most common cancer in the world. Information on incidence and mortality is necessary for health planning. This study aimed to investigate the relationship between the human development index (HDI) and the incidence and mortality rates of breast cancer in the world in 2012. Materials and Methods: This ecologic study used the incidence and standardized mortality rates of the cancer from GLOBOCAN 2012, with the HDI and its components extracted from the World Bank site. Data were analyzed using correlation tests and regression with SPSS software (version 15). Results: Among the six WHO regions, the highest breast cancer incidence rate (67.6) was observed in the PAHO region, and the lowest, 27.8, in SEARO. There was a direct, strong, and significant correlation between the standardized incidence rate and HDI (r=0.725, p≤0.001). The Pearson correlation test showed a significant correlation between the age-specific incidence rate (ASIR) and the components of the HDI (life expectancy at birth, mean years of schooling, and GNP). On the other hand, a non-significant relationship was observed between the age-specific mortality rate (ASMR) and HDI overall (r=0.091, p=0.241), and no significant relationship was found between ASMR and the components of HDI. Conclusions: Significant positive correlations exist between ASIR and components of the HDI. Socioeconomic status is directly related to cancer stage and patient survival. With an increasing incidence rate, mortality from the cancer does not necessarily increase. This may be due to earlier detection and better treatment in developed than in developing countries. It is necessary to increase awareness of risk factors and early detection in the latter.
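A minimal sketch of the kind of country-level correlation analysis described above, using scipy rather than SPSS; the HDI and ASIR values below are made-up placeholders, not the GLOBOCAN/World Bank data used in the study:

```python
from scipy.stats import pearsonr

# Hypothetical country-level data (illustrative placeholders only).
hdi  = [0.89, 0.76, 0.55, 0.93, 0.61, 0.70]   # human development index
asir = [74.0, 48.0, 28.0, 85.0, 31.0, 40.0]   # breast cancer ASIR per 100,000

# Pearson correlation between HDI and incidence across countries.
r, p = pearsonr(hdi, asir)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```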
Study of the Effect of Breast Cancer on TLR2 Expression in NB4 Cells
Amirfakhri, Siamak;Salimi, Arsalan;Fernandez, Nelson 8445
Background: Breast cancer is the most common neoplasm in women and the most frequent cause of death in those between 35 and 55 years of age. All multicellular organisms have an innate immune system, whereas the adaptive or 'acquired' immune system is restricted to vertebrates. This study focused on the effect of conditioned medium isolated from cultured breast cancer cells on NB4 neutrophil-like cells. Materials and Methods: In the current study, neutrophil-like NB4 cells were incubated with MCF-7 cell-conditioned medium. After 6 h of incubation, the intracellular receptor TLR2 was analyzed. Results: The results revealed that MCF-7 cell-conditioned medium elicited expression of TLR2 in NB4 cells. Conclusions: This treatment would result in the production of particular stimulants (i.e. soluble cytokines) eliciting the expression of immune system receptors. Furthermore, the flow cytometry results demonstrated that MCF-7 cell-conditioned medium elicited an effect on TLR2 intracellular receptors.
Epidemiological Characteristics of Gallbladder Cancer in Jeju Island: A Single-Center, Clinically Based, Age-Sex-Matched, Case-Control Study
Cha, Byung Hyo 8451
Background: Gallbladder cancer (GBC) is a rare but highly invasive malignancy characterized by poor survival. In a national cancer survey, the age-standardized incidence rate of GBC was highest in Jeju Island among the 15 provinces in South Korea. The aim of this descriptive epidemiological study was to suggest modifiable risk factors for this rare malignant disease in Jeju Island by performing an age-sex-matched case-control study. Materials and Methods: The case group included patients diagnosed with GBC at the Department of Internal Medicine of Cheju Halla General Hospital, Jeju, South Korea, within the 5-year study period. The control group consisted of age-sex-matched subjects selected from among the participants of the health promotion center at the same institute and in the same period. We compared 78 case-control pairs in terms of clinical variables such as histories of hypertension, diabetes, vascular occlusive disorders, alcohol consumption and smoking, obesity, and combined polypoid lesions of the gallbladder (PLG) or gallstone diseases (GSDs). Results: Among the relevant risk factors, alcohol consumption, parity ≥ 2, PLG, and GSDs were significant risk factors in the univariate analysis. PLG (p < 0.01; OR, 51.1; 95% confidence interval [CI], 2.98-875.3) and GSD (p < 0.01; OR, 54.9; 95% CI, 3.00-1001.8) were associated risk factors of GBC in the multivariate analysis with the conditional logistic regression model. However, we failed to find any correlation between obesity and GBC. We also found a negative correlation between alcohol consumption history and GBC in the multivariate analysis (p < 0.01; OR, 0.06; 95% CI, 0.01-0.31). Conclusions: These results suggest that combined PLG and GSDs are strongly associated with GBC in Jeju Island and that mild to moderate alcohol consumption may correlate negatively with GBC risk.
Improving Participation in Colorectal Cancer Screening: a Randomised Controlled Trial of Sequential Offers of Faecal then Blood Based Non-Invasive Tests
Symonds, Erin L;Pedersen, Susanne;Cole, Stephen R;Massolino, Joseph;Byrne, Daniel;Guy, John;Backhouse, Patricia;Fraser, Robert J;LaPointe, Lawrence;Young, Graeme P 8455
Background: Poor participation rates are often observed in colorectal cancer (CRC) screening programs utilising faecal occult blood tests. This may stem from dislike of faecal sampling, or from benign bleeding conditions that can interfere with test results. These barriers may be circumvented by offering a blood-based DNA test for screening. The aim was to determine if program participation could be increased by offering a blood test following faecal immunochemical test (FIT) non-participation. Materials and Methods: People were invited into a CRC screening study through their General Practice and randomised into control or intervention groups (n=600/group). Both groups were mailed a FIT (matching conventional screening programs). Participation was defined as FIT completion within 12 wk. Intervention group non-participants were offered a screening blood test (methylated BCAT1/IKZF1). Overall participation was compared between the groups. Results: After 12 wk, FIT participation was 82% and 81% in the control and intervention groups, respectively. In the intervention group, 96 FIT non-participants were offered the blood test; 22 completed this test and 19 completed the FIT instead. Total screening in the intervention group was greater than in the control (88% vs 82%, p<0.01). Of 12 invitees who indicated that FIT was inappropriate for them (mainly due to bleeding conditions), 10 completed the blood test (83%). Conclusions: Offering a blood test to FIT non-participants increased overall screening participation compared to a conventional FIT program. Blood test participation was particularly high in invitees who considered FIT to be inappropriate for them. A blood test may be a useful adjunct test within a FIT program.
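The headline comparison (88% vs 82% overall participation with n = 600 per arm) can be reproduced with a standard two-proportion test; the counts below are reconstructed from the reported percentages, which is an assumption rather than the authors' exact analysis:

```python
from statsmodels.stats.proportion import proportions_ztest

count = [528, 492]  # completed any screening test: intervention, control (88%, 82% of 600)
nobs = [600, 600]
stat, p = proportions_ztest(count, nobs)
print(f"z = {stat:.2f}, p = {p:.4f}")
```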
DPPA2 Protein Expression is Associated with Gastric Cancer Metastasis
Shabestarian, Hoda;Ghodsi, Mohammad;Mallak, Afsaneh Javdani;Jafarian, Amir Hossein;Montazer, Mehdi;Forghanifard, Mohammad Mahdi 8461
Gastric cancer (GC), as the fourth most common malignancy, shows a high rate of morbidity and is the second leading cause of cancer-related death worldwide. Developmental pluripotency associated-2 (DPPA2), a cancer-testis antigen (CT100), is commonly expressed only in the human germ line and pluripotent embryonic cells, but it is also present in a significant subset of malignant tumors. To investigate whether or not DPPA2 expression is recalled in GC, our aim in this study was to elucidate DPPA2 protein expression in gastric cancer. Fifty-five GC tumors and their related marginal normal tissues were recruited to evaluate DPPA2 protein expression and its probable associations with different clinicopathological features of the patients. DPPA2 was overexpressed in GC cases compared with normal tissues (p < 0.005). While DPPA2 expression was detected in all GC samples, high expression was found in 23 of 55 tumor tissues (41.8%). Interestingly, 50 of 55 normal samples (90.9%) were negative for DPPA2 protein expression and the remaining 5 samples showed very low expression of DPPA2. DPPA2 protein expression in GC was significantly correlated with lymph node metastasis (p = 0.012). The clinical relevance of DPPA2 in GC illustrated that high-level expression of this protein was associated with lymph node metastasis, supporting the hypothesis that alteration in DPPA2 is associated with aggressiveness of gastric cancer and may be an early event in progression of the disease. DPPA2 may be introduced as a new marker for invasive and metastatic GCs.
Genetic Variation in the ABCB1 Gene May Lead to mRNA Level Change: Application to Gastric Cancer Cases
Mansoori, Maryam;Golalipour, Masoud;Alizadeh, Shahriar;Jahangirerad, Ataollah;Khandozi, Seyed Reza;Fakharai, Habibollah;Shahbazi, Majid 8467
Background: One of the major mechanisms of drug resistance is altered anticancer drug transport, mediated by the human adenosine triphosphate-binding cassette (ABC) transporter superfamily proteins. The overexpression of adenosine triphosphate-binding cassette, sub-family B, member 1 (ABCB1) by multidrug-resistant cancer cells is a serious impediment to chemotherapy. In this study we examined the possibility that structural single-nucleotide polymorphisms (SNPs) are the mechanism of ABCB1 overexpression. Materials and Methods: A total of 101 multidrug-resistant gastric cancer cases and 100 controls were genotyped with sequence-specific primed PCR (SSP-PCR). Gene expression was evaluated for 70 multidrug-resistant cases and 54 controls by real-time PCR. The correlation between the two groups was based on secondary structures of RNA predicted by a bioinformatics tool. Results: The results of genotyping showed that among the 3 studied SNPs, rs28381943 and rs2032586 showed significant differences between patient and control groups, but there were no differences between the two groups for C3435T. The results of real-time PCR showed over-expression of ABCB1 when we compared our data with each of the genotypes in average mode. Prediction of secondary structures in the presence of the 2 related SNPs (rs28381943 and rs2032586) showed that the ΔG for the original mRNA is higher than the ΔG for the two mentioned SNPs. Conclusions: We have observed that 2 of our studied SNPs (rs28381943 and rs2032586) may elevate the expression of the ABCB1 gene through an increase in mRNA stability, while this was not the case for C3435T.
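The ΔG comparison described here is typically done with an RNA folding tool. Below is a hypothetical sketch using the ViennaRNA Python bindings, assuming they are installed; the sequences are arbitrary placeholders, not the actual ABCB1 mRNA windows. A more negative minimum free energy means a more stable predicted structure.

```python
import RNA  # ViennaRNA Python bindings (assumed available)

# Placeholder 60-nt windows around a hypothetical SNP site (not the real ABCB1 sequence)
wild_type = "GGCUAGCUAGGCUAUCGGAUCCGAUAGCUAGGCUAUGCGAUCGUAGCUAGGCUAUCGGAU"
variant   = "GGCUAGCUAGGCUAUCGGAUCCGAUAGCUAGGCUAUGCGAUCGUAGCUAGGCUAUCGGAC"

for name, seq in [("wild-type", wild_type), ("variant", variant)]:
    structure, mfe = RNA.fold(seq)  # minimum free energy structure and dG (kcal/mol)
    print(f"{name}: dG = {mfe:.2f} kcal/mol  {structure}")
```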
Knowledge, Attitude and Practices Regarding HPV Vaccination Among Medical and Paramedical Students in India: a Cross Sectional Study
Swarnapriya, K;Kavitha, D;Reddy, Gopireddy Murali Mohan 8473
Background: High risk human papilloma virus (HPV) types 16 and 18 have been proven to be central causes of cervical cancer, and the safety and immunogenicity of HPV vaccines are sufficiently established. Knowledge and practices of HPV vaccination among medical and paramedical students are vital as these may strongly determine their intention to recommend vaccination to others in the future. The present study was therefore undertaken to assess the knowledge, attitude and practices regarding cervical cancer screening and HPV vaccination among medical and paramedical students and to analyze factors influencing them. Materials and Methods: The present cross sectional study, conducted in a tertiary care teaching hospital in south India, included undergraduate students aged 18 years and above, belonging to medical, dental and nursing streams, after informed written consent. Results: Out of 957 participants, only 430 (44.9%) displayed good knowledge and only 65 (6.8%) had received HPV vaccination. Among the unvaccinated, 433 (48.54%) were not willing to take the vaccine. Concerns regarding the efficacy (30.5%), safety (26.1%) and cost of the vaccine (21.7%) were responsible for this. Age, gender, family history of malignancy and mother's education had no influence on knowledge. Compared to medical students, nursing students had better knowledge (OR 1.49, 95% CI 0.96 to 2.3, p = 0.072) and students of dentistry had poorer knowledge (OR 0.50, 95% CI 0.36 to 0.70, p<0.001). Conclusions: The knowledge and uptake of HPV vaccination among medical and paramedical students in India are poor. Targeted health education interventions may have a huge positive impact not only on the acceptance of vaccination among them, but also on their intention to recommend the vaccine in the future.
Knowledge and Perceptions about Colorectal Cancer in Jordan
Taha, Hana;Jaghbeer, Madi Al;Shteiwi, Musa;AlKhaldi, Sireen;Berggren, Vanja 8479
Background: Colorectal cancer (CRC) is the third most common cancer globally. In Jordan, it is the number one cancer among men and the second most common cancer among women, accounting for 15% and 9.4% respectively of all male and female diagnosed cancers. This study aimed to evaluate the knowledge and perceptions about colorectal cancer risk factors, signs and symptoms in Jordan and to provide useful data about the best modes of disseminating preventive messages about the disease. Materials and Methods: A stratified clustered random sampling technique was used to recruit 300 males and 300 females aged 30 to 65 years without a previous history of CRC from four governorates in Jordan. A semi-structured questionnaire and face to face interviews were employed. Descriptive and multivariate analyses were applied to assess knowledge and perceptions about CRC. Results: Both males and females perceived their CRC risk to be low. They had low knowledge scores about CRC with no significant gender association (p=0.47). From a maximum knowledge score of 18 points, the median score was 4 points for both males (SD=2.346, range 0-13) and females (SD=2.329, range 0-11). Better knowledge scores were associated with governorate, higher educational level, older age, higher income, having a chronic disease, having a family history of CRC, previously knowing someone who had CRC and their doctor's knowledge about their family history of CRC. Conclusions: There is a low level of knowledge about CRC and underestimation of risk among the study participants. This underlines the need for public health interventions to create awareness about the illness. It also calls for further research to assess knowledge and perceptions about CRC early detection examinations in Jordan.
Improved Detection of Helicobacter pylori Infection and Premalignant Gastric Mucosa Using "Site Specific Biopsy": a Randomized Controlled Clinical Trial
Tongtawee, Taweesak;Dechsukhum, Chavaboon;Leeanansaksiri, Wilairat;Kaewpitoon, Soraya;Kaewpitoon, Natthawut;Loyd, Ryan A;Matrakool, Likit;Panpimanmas, Sukij 8487
Background: Helicobacter pylori infection and premalignant gastric mucosa can be reliably identified using conventional narrow band imaging (C-NBI) gastroscopy. The aim of our study was to compare standard biopsy with site specific biopsy for diagnosis of H. pylori infection and premalignant gastric mucosa in daily clinical practice. Materials and Methods: Of a total of 500 patients who underwent gastroscopy for investigation of dyspeptic symptoms, 250 underwent site specific biopsy using C-NBI (Group 1) and 250 standard biopsy (Group 2). Sensitivity, specificity, and positive and negative predictive values were assessed. The efficacy of detecting H. pylori associated gastritis and premalignant gastric mucosa according to the updated Sydney classification was also compared. Results: In Group 1 the sensitivity, specificity, and positive and negative predictive values for predicting H. pylori positivity were 95.4%, 97.3%, 98.8% and 90.0% respectively, compared to 92.9%, 88.6%, 83.2% and 76.1% in Group 2. Site specific biopsy was more effective than standard biopsy in terms of both H. pylori infection status and premalignant gastric mucosa detection (p<0.01). Conclusions: Site specific biopsy using C-NBI can improve detection of H. pylori infection and premalignant gastric mucosa in daily clinical practice.
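The four diagnostic indices reported for each group follow directly from a 2x2 table against the gold standard; here is a small self-contained helper, with counts chosen only to land near Group 1's figures, not taken from the paper:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2 diagnostic indices against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts for a 250-patient arm (not the study's raw table)
print(diagnostic_accuracy(tp=145, fp=2, fn=7, tn=96))
```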
Comparison of Unsatisfactory Rates and Detection of Abnormal Cervical Cytology Between Conventional Papanicolaou Smear and Liquid-Based Cytology (Sure Path®)
Kituncharoen, Saroot;Tantbirojn, Patou;Niruthisard, Somchai 8491
Purpose: To compare unsatisfactory rates and detection of abnormal cervical cytology between conventional cytology or Papanicolaou smear (CC) and liquid-based cytology (LBC). Materials and Methods: A total of 23,030 cases of cervical cytology performed at King Chulalongkorn Memorial Hospital during 2012-2013 were reviewed. The percentages of unsatisfactory smears and detection rates of abnormal cytology were compared between the CC and LBC methods. Results: There was no difference in unsatisfactory rates between the CC and LBC methods (0.1% vs. 0.1%, p = 0.84). The detection rate for squamous cell abnormalities was significantly higher with the LBC method (7.7% vs. 11.5%, p < 0.001), but rates for overall abnormal glandular epithelium were similar (0.4% vs. 0.6%, p = 0.13). Low grade squamous lesions (ASC-US and LSIL) were more frequently detected by the LBC method (6.1% vs. 9.5%, p < 0.001). However, there was no difference in high grade squamous lesions (1.1% vs. 1.1%, p = 0.95). When comparing between types of glandular abnormality, there was no significant difference between the groups. Conclusions: There was no difference in unsatisfactory rates between the conventional smear and LBC. However, LBC could detect low grade squamous cell abnormalities more often than CC, while there were similar rates of detection of high grade squamous cell lesions and glandular cell abnormalities.
Comparative Assessment of a Self-sampling Device and Gynecologist Sampling for Cytology and HPV DNA Detection in a Rural and Low Resource Setting: Malaysian Experience
Latiff, Latiffah A;Ibrahim, Zaidah;Pei, Chong Pei;Rahman, Sabariah Abdul;Akhtari-Zavare, Mehrnoosh 8495
Purpose: This study was conducted to assess the agreement and differences between cervical self-sampling with a Kato self-sampling device (KSSD) and gynecologist sampling for Pap cytology and human papillomavirus DNA (HPV DNA) detection. Materials and Methods: Women underwent self-sampling followed by gynecologist sampling during screening at two primary health clinics. Pap cytology of cervical specimens was evaluated for specimen adequacy, presence of endocervical cells or transformation zone cells, and cytological interpretation of cell abnormalities. Cervical specimens were also extracted and tested for HPV DNA detection. Positive HPV smears underwent gene sequencing and HPV genotyping by referring to the online NCBI gene bank. Results were compared between samplings by Kappa agreement and the McNemar test. Results: For Pap specimen adequacy, KSSD showed 100% agreement with gynecologist sampling but had only 32.3% agreement for presence of endocervical cells. Both samplings showed 100% agreement for cytology results, with only 1 case of detected HSIL favouring CIN2. HPV DNA detection showed 86.2% agreement (K=0.64, 95% CI 0.524-0.756, p=0.001) between samplings. KSSD and gynaecologist sampling identified high risk HPV in 17.3% and 23.9% respectively (p=0.014). Conclusion: Self-sampling using the Kato device can serve as a tool for Pap cytology and HPV DNA detection in low resource settings in Malaysia. Self-sampling devices such as the KSSD can be used as an alternative technique to gynaecologist sampling for cervical cancer screening among rural populations in Malaysia.
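Here is a sketch of the two statistical comparisons named in the methods, Cohen's kappa for agreement and McNemar's test for paired differences, run on made-up paired HPV results rather than the study's data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

# Illustrative paired HPV DNA results (1 = positive): self- vs gynecologist sampling
self_smpl = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
gyn_smpl  = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0])

kappa = cohen_kappa_score(self_smpl, gyn_smpl)

# McNemar's test uses the 2x2 table of paired agreements/disagreements
table = np.array([
    [np.sum((self_smpl == 1) & (gyn_smpl == 1)), np.sum((self_smpl == 1) & (gyn_smpl == 0))],
    [np.sum((self_smpl == 0) & (gyn_smpl == 1)), np.sum((self_smpl == 0) & (gyn_smpl == 0))],
])
result = mcnemar(table, exact=True)
print(f"kappa = {kappa:.2f}, McNemar p = {result.pvalue:.3f}")
```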
Early Activation of Apoptosis and Caspase-independent Cell Death Plays an Important Role in Mediating the Cytotoxic and Genotoxic Effects of WP 631 in Ovarian Cancer Cells
Gajek, Arkadiusz;Denel-Bobrowska, Marta;Rogalska, Aneta;Bukowska, Barbara;Maszewski, Janusz;Marczak, Agnieszka 8503
The purpose of this study was to provide a detailed explanation of the mechanism of the bisanthracycline WP 631, in comparison to doxorubicin (DOX), a first generation anthracycline and currently the most widely used pharmaceutical in clinical oncology. Experiments were performed in SKOV-3 ovarian cancer cells, which are otherwise resistant to standard drugs such as cis-platinum and adriamycin. As attention was focused on the ability of WP 631 to induce apoptosis, this was examined using a double staining method with Annexin V and propidium iodide probes, with measurement of the level of intracellular calcium ions and cytosolic cytochrome c. Western blotting was performed to confirm PARP cleavage. We also investigated the involvement of caspase activation and DNA degradation (comet assay and immunocytochemical detection of phosphorylated H2AX histones) in the development of apoptotic events. WP 631 demonstrated significantly higher effectiveness as a pro-apoptotic drug than DOX. This was evident in the higher levels of markers of apoptosis, such as the externalization of phosphatidylserine and the elevated level of cytochrome c. An extension of incubation time led to an increase in intracellular calcium levels after treatment with DOX. Smaller changes in calcium content were associated with the influence of WP 631. DOX led to the activation of all tested caspases, 8, 9 and 3, whereas WP 631 only induced an increase in caspase 8 activity after 24 h of treatment and consequently led to the cleavage of PARP. The lack of active caspase 3 had no effect on the single- and double-stranded DNA breaks. The obtained results show that WP 631 was considerably more genotoxic towards the investigated cell line than DOX. This effect was especially visible after longer incubation times. These detailed studies indicate that WP 631 generates early apoptosis and cell death independent of caspase-3, detected at relatively late time points. The observed differences in the mechanisms of action of WP 631 and DOX suggest that this bisanthracycline can be an effective alternative in ovarian cancer treatment.
Breast Cancer Survival at a Leading Cancer Centre in Malaysia
Abdullah, Matin Mellor;Mohamed, Ahmad Kamal;Foo, Yoke Ching;Lee, Catherine May Ling;Chua, Chin Teong;Wu, Chin Huei;Hoo, LP;Lim, Teck Onn;Yen, Sze Whey 8513
Background: GLOBOCAN 2012 recently reported high cancer mortality in Malaysia, suggesting its cancer health services are under-performing. Cancer survival is a key index of the overall effectiveness of health services in the management of patients. This report focuses on Subang Jaya Medical Centre (SJMC) care performance as measured by patient survival outcome for up to 5 years. Materials and Methods: All women with breast cancer treated at SJMC between 2008 and 2012 were enrolled for this observational cohort study. Mortality outcome was ascertained through record linkage with the national death register, linkage with the hospital registration system and finally through direct contact by phone or home visits. Results: A total of 675 patients treated between 2008 and 2012 were included in the present survival analysis, 65% with early breast cancer, 20% with locally advanced breast cancer (LABC) and 4% with metastatic breast cancer (MBC). The overall relative survival (RS) at 5 years was 88%. RS for stage I was 100% and for stage II, III and IV disease was 95%, 69% and 36% respectively. Conclusions: SJMC is among the first hospitals in Malaysia to embark on routine measurement of the performance of its cancer care services and its results are comparable to those of leading centers in developed countries.
Religion as an Alleviating Factor in Iranian Cancer Patients: a Qualitative Study
Rahnama, Mozhgan;Khoshknab, Masoud Fallahi;Maddah, Sadat Seyed Bagher;Ahmadi, Fazlollah;Arbabisarjou, Azizollah 8519
After a diagnosis of cancer, many patients show more inclination towards religion and religious activities. This qualitative study, using semi-structured interviews, explored the perspectives and experiences of 17 Iranian cancer patients and their families regarding the role of religion in their adaptation to cancer, in one of the hospitals in Tehran and a charity institute. The content analysis identified two themes: "religious beliefs" (illness as God's will, being cured by God's will, belief in God's supportiveness, having faith in God as a relieving factor, and hope in divine healing) and "relationship with God during the illness." In general, relationship with God and religious beliefs had a positive effect on the patients' adaptation to their condition, without negative consequences such as stopping the treatment process and just waiting to be cured by God. Thus a strengthening of such beliefs, as a coping factor, could be recommended through religious counseling.
Patterns of Cancer in Kurdistan - Results of Eight Years Cancer Registration in Sulaymaniyah Province-Kurdistan-Iraq
Khoshnaw, Najmaddin;Mohammed, Hazha A;Abdullah, Dana A 8525
Background: Cancer has become a major health problem associated with high mortality worldwide, especially in developing countries. The aim of our study was to evaluate the incidence rates of different types of cancer in Sulaymaniyah from January 2006 to January 2014, and to compare the data with those reported for other Middle East countries. Materials and Methods: This retrospective study depended on data collected from the Hiwa hospital cancer registry unit, death records and histopathology reports in all Sulaymaniyah teaching hospitals, using the international classification of diseases. Results: A total of 8,031 cases were registered during the eight year period; the annual incidence rate in all age groups rose from 38 to 61.7 cases/100,000 population/year, with averages of 50 for males and 50.7 for females. The male to female ratio in all age groups was 0.98, while in the pediatric age group it was 1.33. Hematological malignancies accounted for 20% of cases in all age groups, but around half of all cancer cases in the pediatric group. Pediatric cancers made up 7% of total cancers, with rates of 10.3 in boys and 8.7 in girls. The commonest malignancies by primary site were leukemia, lymphoma, brain, kidney and bone. In males in all age groups they were lung, leukaemia, lymphoma, colorectal, prostate, bladder, brain, stomach, carcinoma of unknown primary (CUP) and skin, while in females they were breast, leukaemia, lymphoma, colorectal, ovary, lung, brain, CUP, and stomach. Most cancers increased with increasing age, except breast cancer, where a decrease was noted in older ages. High mortality rates were found with leukemia, lung, lymphoma, colorectal, breast and stomach cancers. Conclusions: We found an increase in annual cancer incidence rates across the period of study, because of the increase of cancer with age and higher rates of hematological malignancies. Our study is valuable for Kurdistan and Iraq because it provides more accurate data about the exact patterns of cancer and mortality in our region.
Role of Tumor Necrosis Factor-Producing Mesenchymal Stem Cells on Apoptosis of Chronic B-lymphocytic Tumor Cells Resistant to Fludarabine-based Chemotherapy
Valizadeh, Armita;Ahmadzadeh, Ahmad;Saki, Ghasem;Khodadadi, Ali;Teimoori, Ali 8533
Background: B-cell chronic lymphocytic leukemia (B-CLL), the most common type of leukemia, may be caused by apoptosis deficiency in the body. Adipose tissue-derived mesenchymal stem cells (AD-MSCs), as providers of pro-apoptotic molecules such as tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), can be considered an effective anti-cancer therapy candidate. Therefore, in this study we assessed the role of tumor necrosis factor-producing mesenchymal stem cells in apoptosis of B-CLL cells resistant to fludarabine-based chemotherapy. Materials and Methods: In this study, after isolation and culture of AD-MSCs, a lentiviral LeGO-iG2-TRAIL-GFP vector containing the gene encoding the pro-apoptotic ligand, together with the packaging plasmids PsPAX2 and PMDG2, was transfected into HEK293T cells to produce virus. The HEK293T cell supernatant containing the virus produced after 48 and 72 hours was collected, and these viruses were transduced to reprogram the AD-MSCs. Apoptosis rates were studied separately in four groups: group 1, AD-MSCs-TRAIL; group 2, AD-MSCs-GFP; group 3, AD-MSCs; and group 4, CLL. Results: Observed apoptosis rates were: group 1, 42 ± 1.04%; group 2, 21 ± 0.57%; group 3, 19 ± 2.6%; and group 4, 0.01 ± 0.01%. The highest rate of apoptosis thus occurred in group 1 (transduced with the TRAIL-encoding vector). In this group, the average soluble TRAIL in the medium was 72.7 pg/ml, and flow cytometry analysis showed a pro-apoptosis rate of 63 ± 1.6%, which was again higher than in the other groups. Conclusions: In this study we have shown that tumor necrosis factor (TNF) secreted by AD-MSCs may play an effective role in inducing B-CLL cell apoptosis.
Low Coverage and Disparities of Breast and Cervical Cancer Screening in Thai Women: Analysis of National Representative Household Surveys
Mukem, Suwanna;Meng, Qingyue;Sriplung, Hutcha;Tangcharoensathien, Viroj 8541
Background: The coverage of breast and cervical cancer screening has only slightly increased in the past decade in Thailand, and these cancers remain leading causes of death among women. This study identified socioeconomic and contextual factors contributing to the variation in screening uptake and coverage. Materials and Methods: Secondary data from two nationally representative household surveys, the Health and Welfare Survey (HWS) 2007 and the Reproductive Health Survey (RHS) 2009, conducted by the National Statistical Office, were used. The study samples comprised 26,951 women aged 30-59 from the 2009 RHS and 14,619 women aged 35 years and older from the 2007 HWS. Households of women were grouped into wealth quintiles by an asset index derived from principal components analysis. Descriptive and logistic regression analyses were performed. Results: Screening rates for cervical and breast cancers increased between 2007 and 2009. Education and health insurance coverage, as well as wealth, were factors contributing to screening uptake. Less educated or non-educated and poor women had lower uptake of screening, as did young, unmarried, and non-Buddhist women. Coverage by the Civil Servant Medical Benefit Scheme increased the propensity of having both screenings, while the universal coverage scheme increased the probability of cervical screening among the poor. Lack of awareness and knowledge contributed to non-use of both screenings. Women were put off from screening, especially Muslim women regarding cervical screening, because of embarrassment, fear of pain and other reasons. Conclusions: Although cervical screening is covered by the benefit packages of the three main public health insurance schemes, free of charge to all eligible women, the low coverage of cervical screening should be addressed by increasing awareness and strengthening the supply side. As mammography is not cost effective and not covered by any scheme, awareness and practice of breast self examination and effective clinical breast examination are recommended. Removal of cultural barriers is essential.
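Here is a minimal sketch of the kind of logistic regression described in the methods, run on a made-up survey extract; the variable coding and values are assumptions, and the real analysis used the HWS/RHS microdata with many more covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative extract: screening uptake vs education level and asset-index wealth quintile
df = pd.DataFrame({
    "screened":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "education": [3, 1, 3, 2, 1, 3, 2, 1, 1, 2, 2, 3, 1, 2, 3, 1],  # 1 = none/primary ... 3 = higher
    "wealth_q":  [4, 1, 5, 3, 2, 4, 4, 2, 2, 3, 3, 5, 2, 2, 1, 1],  # quintile from a PCA asset index
})

model = smf.logit("screened ~ education + wealth_q", data=df).fit(disp=0)
print(np.exp(model.params))  # odds ratios per unit of each covariate
```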
Gene Expression Biodosimetry: Quantitative Assessment of Radiation Dose with Total Body Exposure of Rats
Saberi, Alihossein;Khodamoradi, Ehsan;Birgani, Mohammad Javad Tahmasebi;Makvandi, Manoochehr 8553
Background: Accurate dose assessment and correct identification of irradiated versus non-irradiated people are goals of biological dosimetry in radiation accidents. Objectives: Changes in FDXR and RAD51 gene expression (GE) levels were analyzed here in response to total body exposure (TBE) to a 6 MV x-ray beam in rats. We determined the accuracy of absolute quantification of GE for predicting the dose at 24 hours. Materials and Methods: For this in vivo experimental study, using simple randomized sampling, peripheral blood samples were collected from a total of 20 Wistar rats at 24 hours following TBE to a 6 MV X-ray beam at doses of 0.2, 0.5, 2 and 4 Gy, delivered with a Linac Varian 2100C/D (Varian, USA) in Golestan Hospital, Ahvaz, Iran. In addition, 9 rats were irradiated with the 6 MV X-ray beam at doses of 1, 2 and 3 Gy as a validation group. A sham group was also included. After RNA extraction and cDNA synthesis, GE changes were measured in peripheral blood by the QRT-PCR technique with an absolute quantification strategy using taqman methodology. ROC analysis was used to distinguish irradiated from non-irradiated samples (qualitative dose assessment) at a dose of 2 Gy. Results: The best fits for the mean responses were polynomial equations with R2 values of 0.98 and 0.90 (for the FDXR and RAD51 dose response curves, respectively). The dose response of the FDXR gene produced a better mean dose estimation for the irradiated validation samples than the RAD51 gene at doses of 1, 2 and 3 Gy. FDXR gene expression separated the irradiated rats from controls with a sensitivity, specificity and accuracy of 87.5%, 83.5% and 81.3%, respectively, 24 hours after a dose of 2 Gy. These values were significantly (p<0.05) higher than the 75%, 75% and 75%, respectively, obtained using gene expression analysis of RAD51 at a dose of 2 Gy. Conclusions: Collectively, these data suggest that absolute quantification by gel purified quantitative RT-PCR can be used to measure mRNA copies for GE biodosimetry studies at an accuracy comparable to similar methods. In the case of TBE with 6 MV energy, FDXR gene expression analysis is more precise than that of RAD51 for quantitative and qualitative dose assessment.
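The qualitative step (separating irradiated from non-irradiated animals) is a standard ROC cutoff problem; here is a sketch on invented copy-number values, using the Youden index to pick the cutoff, which the abstract does not confirm was the authors' criterion:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative FDXR mRNA copy numbers (not the study's measurements)
exposed = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])  # 1 = irradiated (2 Gy)
copies  = np.array([9.1, 7.8, 8.4, 6.9, 9.6, 7.2, 5.9, 8.8,
                    4.1, 5.2, 3.8, 6.1, 4.5, 3.2, 5.0, 4.8])

fpr, tpr, thresholds = roc_curve(exposed, copies)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(exposed, copies):.2f}")
print(f"Cutoff = {thresholds[best]:.1f} copies; "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```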
Radiofrequency Ablation in Treating Colorectal Cancer Patients with Liver Metastases
Xu, Chuan;Huang, Xin-En;Lv, Peng-Hua;Wang, Shu-Xiang;Sun, Ling;Wang, Fu-An 8559
Purpose: To evaluate the efficacy of radiofrequency ablation (RFA) in treating colorectal cancer patients with liver metastases. Methods: From January 2010 to April 2012, 56 colorectal cancer patients with liver metastases underwent RFA. CT scans were obtained one month after RFA for all patients to evaluate tumor response. (CR+PR+SD)/n was used to calculate the disease control rate (DCR). Survival data at 1, 2 and 3 years were obtained from follow up. Results: Patients were followed for 10 to 40 months after RFA (mean, 25 ± 10 months). Median survival time was 27 months. The 1, 2 and 3 year survival rates were 80.4%, 71.4% and 41.1%, respectively. The 3-year survival rates for patients with CR and PR after RFA were 68.8% and 4.3%, respectively; the difference was statistically significant. The numbers of CR, PR, SD and PD cases in our study were 13, 23, 11 and 9, respectively. Conclusions: RFA could be an effective method for treating colorectal cancer patients with liver metastases and prolonging survival time, especially for metastatic lesions less than or equal to 3 cm. However, this result should be confirmed by randomized controlled studies.
Effect of Root Extracts of Medicinal Herb Glycyrrhiza glabra on HSP90 Gene Expression and Apoptosis in the HT-29 Colon Cancer Cell Line
Nourazarian, Seyed Manuchehr;Nourazarian, Alireza;Majidinia, Maryam;Roshaniasl, Elmira 8563
Colorectal cancer is one of the most common lethal cancer types worldwide. In recent years, widespread and large-scale studies have been conducted on medicinal plants for anti-cancer effects, including Glycyrrhiza glabra. The aim of this study was to evaluate the effects of an ethanol extract of Glycyrrhiza glabra on the expression of HSP90, growth and apoptosis in the HT-29 colon cancer cell line. HT-29 cells were treated with different concentrations of extract (50, 100, 150, and 200 μg/ml). For evaluation of cell proliferation and apoptosis, we used the MTT assay and flow cytometry, respectively. RT-PCR was also carried out to evaluate the expression levels of HSP90 genes. Results showed that Glycyrrhiza glabra inhibited proliferation of the HT-29 cell line at a concentration of 200 μg/ml, and this was confirmed by the highest rate of cell death as measured by trypan blue and MTT assays. RT-PCR results showed down-regulation of HSP90 gene expression, which implied an ability of Glycyrrhiza glabra to induce apoptosis in HT-29 cells and confirmed its anticancer properties. Further studies are required to evaluate effects of the extract on other genes; it will also be necessary to carry out extensive in vivo biological evaluation and subsequently proceed with clinical evaluations.
Survival Analysis of Patients with Breast Cancer using Weibull Parametric Model
Baghestani, Ahmad Reza;Moghaddam, Sahar Saeedi;Majd, Hamid Alavi;Akbari, Mohammad Esmaeil;Nafissi, Nahid;Gohari, Kimiya 8567
Background: The Cox model is known as one of the most frequently used methods for analyzing survival data. However, in some situations parametric methods may provide better estimates. In this study, a Weibull parametric model was employed to assess possible prognostic factors that may affect the survival of patients with breast cancer. Materials and Methods: We studied 438 patients with breast cancer who visited and were treated at the Cancer Research Center of Shahid Beheshti University of Medical Sciences from 1992 to 2012; the patients were followed up until October 2014. Patients or family members were contacted via telephone calls to confirm whether they were still alive. Clinical, pathological, and biological variables were entered as potential prognostic factors in univariate and multivariate analyses. The log-rank test and the Weibull parametric model with a forward approach, respectively, were used for univariate and multivariate analyses. All analyses were performed using STATA version 11. A p-value lower than 0.05 was defined as significant. Results: On univariate analysis, age at diagnosis, level of education, type of surgery, lymph node status, tumor size, stage, histologic grade, estrogen receptor, progesterone receptor, and lymphovascular invasion had a statistically significant effect on survival time. On multivariate analysis, lymph node status, stage, histologic grade, and lymphovascular invasion were statistically significant. The one-year overall survival rate was 98%. Conclusions: Based on these data and using the Weibull parametric model with a forward approach, we found that patients with lymphovascular invasion were at 2.13 times greater risk of death due to breast cancer.
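For orientation, here is a minimal Weibull survival fit using Python's lifelines package; the durations and censoring flags are illustrative, not the 438-patient cohort (the study itself used STATA, and a covariate model along these lines would use lifelines' WeibullAFTFitter):

```python
import numpy as np
from lifelines import WeibullFitter

# Illustrative follow-up times (months) and event flags (1 = death observed, 0 = censored)
durations = np.array([12, 34, 60, 45, 18, 72, 55, 29, 66, 40])
observed  = np.array([ 1,  0,  0,  1,  1,  0,  0,  1,  0,  0])

wf = WeibullFitter().fit(durations, event_observed=observed)
print(wf.summary)                               # scale (lambda_) and shape (rho_) estimates
print(wf.survival_function_at_times([12, 60]))  # e.g. 1-year and 5-year survival
```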
Development and Evaluation of a Patient-Reported Outcome (PRO) Scale for Breast Cancer
Zhang, Jun;Yao, Yu-Feng;Zha, Xiao-Ming;Pan, Li-Qun;Bian, Wei-He;Tang, Jin Hai 8573
Background: This study was guided by principles of the theoretical system of evidence-based medicine. When searching for evidence on breast cancer, a measuring scale is an instrument for evaluating curative effects in accordance with the laws and characteristics of medicine, and for exploring the establishment of a system for medically assessing curative effects. At present, few tools for evaluating curative effects exist. Patient-reported outcomes (PROs) refer to outcomes directly reported by patients (without input or explanations from doctors or other intermediaries) with respect to all aspects of their health. Data obtained from PROs provide evidence of treatment effects. Materials and Methods: In accordance with the tenets of theoretical medicine and ancient medical theory regarding breast cancer, principles for developing a PRO scale were established, a theoretical model was developed and a literature review was performed; items from this pool were combined and split, and an initial scale was constructed. After a pilot survey and additional modifications, a pre-questionnaire scale was formed and used in a field investigation. After the application of statistical methods, the item pool was used to create a formal scale. The reliability, validity and feasibility of this formal scale were then assessed. Results: In a clinical investigation, 479 responses were recovered, with an acceptance rate of 95%. A combination of various item-screening methods was employed, and items selected by all methods, or by more than half of the methods in combination with certain features of the item, were retained in the questionnaire. A total of four domains and 38 items were reserved. The reliability analysis indicated that the PRO scale was relatively reliable. Conclusions: Scientific assessment proved that the proposed scale exhibited good reliability and validity. This scale was readily accepted and could be used to assess the curative effects of medical therapy. However, given the limited scope of this investigation, the capacity for adapting this scale to incorporate other theories could not be determined.
Significance of Tissue Expression and Serum Levels of Angiopoietin-like Protein 4 in Breast Cancer Progression: Link to NF-κB/p65 Activity and Pro-Inflammatory Cytokines
Shafik, Noha M;Mohamed, Dareen A;Bedder, Asmaa E;El-Gendy, Ahmed M 8579
Background: The molecular mechanisms linking breast cancer progression and inflammation remain obscure. The aim of the present study was to investigate the possible association of angiopoietin-like protein 4 (ANGPTL4) and its regulatory factor, hypoxia-inducible factor-1α (HIF-1α), with the inflammatory markers nuclear factor kappa B/p65 (NF-κB/p65) and interleukin-1 beta (IL-1β), in order to evaluate their role in inflammation-associated breast cancer progression. Materials and Methods: ANGPTL4 mRNA expression was evaluated using quantitative real time PCR and its protein expression by immunohistochemistry. DNA binding activity of NF-κB/p65 was evaluated by transcription factor binding immunoassay. Serum levels of ANGPTL4, HIF-1α and IL-1β were immunoassayed. Tumor clinico-pathological features were investigated. Results: ANGPTL4 mRNA expression and serum levels were significantly higher in high grade breast carcinoma (1.47 ± 0.31 and 184.98 ± 18.18, respectively) compared to low grade carcinoma (1.21 ± 0.32 and 171.76 ± 7.58, respectively) and controls (0.70 ± 0.02 and 65.34 ± 6.41, respectively) (p<0.05). Also, high/moderate ANGPTL4 protein expression was positively correlated with tumor clinico-pathological features. In addition, serum levels of HIF-1α and IL-1β as well as NF-κB/p65 DNA binding activity were significantly higher in high grade breast carcinoma (148.54 ± 14.20, 0.79 ± 0.03 and 247.13 ± 44.35, respectively) than in low grade carcinoma (139.14 ± 5.83, 0.34 ± 0.02 and 184.23 ± 37.75, respectively) and controls (33.95 ± 3.11, 0.11 ± 0.02 and 7.83 ± 0.92, respectively) (p<0.001). Conclusion: High ANGPTL4 serum levels and tissue expression in advanced grade breast cancer, together with the positive correlation with tumor clinico-pathological features and HIF-1α, highlight its possible role as one of the signaling factors involved in breast cancer progression. Moreover, novel correlations were found between ANGPTL4 and the inflammatory markers IL-1β and NF-κB/p65 in breast cancer, which may emphasize the utility of these markers as potential tools for understanding the interacting axes of carcinogenesis and inflammation that contribute to cancer progression. It is thus hoped that the findings reported here will assist in the development of new breast cancer management strategies that promote patients' quality of life and ultimately improve clinical outcomes. However, large-scale studies are needed to verify these results.
Age-Period-Cohort Analysis of Liver Cancer Mortality in Korea
Park, Jihwan;Jee, Yon Ho 8589
Background: Liver cancer is one of the most common causes of death in the world. In Korea, hepatitis B virus (HBV) is a major risk factor for liver cancer, but infection rates have been declining since the implementation of the national vaccination program. In this study, we examined the secular trends in liver cancer mortality to distinguish the effects of age, time period, and birth cohort. Materials and Methods: Data on the annual number of liver cancer deaths in Korean adults (30 years and older) were obtained from the Korean Statistical Information Service for the period 1984-2013. Joinpoint regression analysis was used to study the shapes of, and to detect changes in, mortality trends. An age-period-cohort model was also fitted to study the effect of age, period, and birth cohort on liver cancer mortality. Results: For both men and women, the age-standardized mortality rate for liver cancer increased from 1984 to 1993 and decreased thereafter. The highest liver cancer mortality rate has shifted to an older age group in recent years. Within the same birth cohort group, the mortality rate of older age groups has been higher than in younger age groups. Age-period-cohort analysis showed an association with a high mortality rate in the older age groups and in recent periods, whereas a decreasing mortality rate was observed in the younger birth cohorts. Conclusions: This study confirmed a decreasing trend in liver cancer mortality among Korean men and women after 1993. The trends in mortality rate may be mainly attributed to cohort effects.
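Age-period-cohort analyses of this kind are commonly fitted as Poisson rate models with a person-years offset; here is a sketch on invented aggregated counts. A full APC model adds a cohort term, which needs extra identifiability constraints because cohort = period - age.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative aggregated mortality data (not the Korean registry figures)
df = pd.DataFrame({
    "age":    ["30-39", "40-49", "50-59"] * 2,
    "period": ["1984-1993"] * 3 + ["2004-2013"] * 3,
    "deaths": [120, 540, 980, 60, 300, 720],
    "pyears": [4.1e6, 3.5e6, 2.6e6, 4.6e6, 4.2e6, 3.9e6],
})

model = smf.glm("deaths ~ C(age) + C(period)", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["pyears"])).fit()
print(np.exp(model.params))  # mortality rate ratios by age group and period
```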
Electronic Risk Assessment System as an Appropriate Tool for the Prevention of Cancer: a Qualitative Study
Amoli, Amir hossein Javan;Maserat, Elham;Safdari, Reza;Zali, Mohammad Reza 8595
Background: Decision making modalities for screening for many cancer conditions and different stages have become increasingly complex. Computer-based risk assessment systems facilitate scheduling and decision making and support the delivery of cancer screening services. The aim of this article was to survey an electronic risk assessment system as an appropriate tool for the prevention of cancer. Materials and Methods: A qualitative design was used involving 21 face-to-face interviews. Interviewing involved asking questions of, and getting answers from, managers exclusively responsible for cancer screening. Of the participants, 6 were female and 15 were male, and ages ranged from 32 to 78 years. The study was based on a grounded theory approach and the tool was a semi-structured interview. Results: The researchers studied 5 dimensions, comprising electronic guideline standards for colorectal cancer screening, workflow of clinical and genetic activities, pathways of colorectal cancer screening, functionality of computer-based guidelines, and barriers. Electronic guideline standards for colorectal cancer screening were described in 3 categories: content standards, telecommunications and technical standards, and nomenclature and classification standards. According to the participants' views, workflow and genetic pathways of colorectal cancer screening were identified. Conclusions: The study demonstrated an effective role of computer-guided consultation in screening management. Electronic based systems facilitate real-time decision making during a clinical interaction. Electronic pathways have been applied for clinical and genetic decision support, workflow management, update recommendations and resource estimates. A suitable technical and clinical infrastructure is an integral part of a clinical practice guideline for screening. In conclusion, it is recommended to consider the necessity of architecture assessment and also integration standards.
Methylation Status and Expression of BRCA2 in Epithelial Ovarian Cancers in Indonesia
Pradjatmo, Heru 8599
Ovarian cancer is the main cause of mortality in gynecological malignancy, and extensive studies have been conducted to elucidate the underlying molecular mechanisms. The BRCA2 gene is known to be an important tumor suppressor in ovarian cancer, and thereby BRCA2 alterations may lead to cancer progression. However, the BRCA2 gene is rarely mutated, and loss of function is suspected to be mediated by epigenetic regulation. In this study we investigated the methylation status and gene expression of BRCA2 in ovarian cancer patients. Ovarian cancer patients (n=69) were recruited and monitored for 54 months in this prospective cohort study. Clinical specimens were used to study the in situ expression of aberrant BRCA2 proteins and the methylation status of BRCA2. These parameters were then compared with clinical parameters and the overall survival rate. We found that BRCA2 methylation was present in the majority of cases (98.7%). However, the methylation status was not associated with protein level expression of BRCA2 (49.3%). Therefore, in addition to DNA methylation, other epigenetic mechanisms may regulate BRCA2 expression. Our findings may provide evidence of a BRCA2 inactivation mechanism through DNA methylation in the Indonesian population. More importantly, on multivariate analysis, BRCA2 expression was correlated with better overall survival (HR 0.32; p=0.05). The high percentage of BRCA2 methylation and the correlation of BRCA2 expression with overall survival in epithelial ovarian cancer cases may lead to the development of treatment modalities specifically targeting methylation of BRCA genes.
Assessment of Reliability when Using Diagnostic Binary Ratios of Polycyclic Aromatic Hydrocarbons in Ambient Air PM10
Pongpiachan, Siwatt 8605
The reliability of using diagnostic binary ratios of particulate carcinogenic polycyclic aromatic hydrocarbons (PAHs) as chemical tracers for source characterisation was assessed by collecting PM10 samples from various air quality observatory sites in Thailand. The major objectives of this research were to evaluate the effects of day and night on the alterations of six different PAH diagnostic binary ratios: An/(An + Phe), Fluo/(Fluo + Pyr), B[a]A/(B[a]A + Chry), B[a]P/(B[a]P + B[e]P), Ind/(Ind + B[g,h,i]P), and B[k]F/Ind, and to investigate the impacts of site-specific conditions on the alterations of PAH diagnostic binary ratios by applying the concept of the coefficient of divergence (COD). No significant differences between day and night were found for any of the diagnostic binary ratios of PAHs, which indicates that the photodecomposition process is of minor importance in terms of PAH reduction. Interestingly, comparatively high values of COD for An/(An + Phe) in PM10 collected from sites with heavy traffic and in residential zones underline the influence of heterogeneous reactions triggered by oxidising gaseous species from vehicular exhausts. Therefore, special attention must be paid when interpreting the data of these diagnostic binary ratios, particularly for cases of low-molecular-weight PAHs.
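The COD used here has a standard closed form; below is a small helper implementing it on two invented site profiles (placeholders, not the measured PM10 ratios):

```python
import numpy as np

def coefficient_of_divergence(x, y):
    """COD = sqrt( (1/p) * sum_i ((x_i - y_i) / (x_i + y_i))^2 ).
    Near 0: similar profiles at the two sites; near 1: divergent profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.mean(((x - y) / (x + y)) ** 2))

# Illustrative An/(An + Phe)-style ratios observed at two sites (placeholder values)
site_traffic     = [0.21, 0.18, 0.25, 0.19, 0.23]
site_residential = [0.08, 0.11, 0.09, 0.12, 0.10]
print(f"COD = {coefficient_of_divergence(site_traffic, site_residential):.2f}")
```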
Clinical, Radiologic, and Endoscopic Manifestations of Small Bowel Malignancies: a First Report from Thailand
Tangkittikasem, Natthakan;Boonyaarunnate, Thiraphon;Aswakul, Pitulak;Kachintorn, Udom;Prachayakul, Varayu 8613
Background: The symptoms of small bowel malignancies are mild and frequently nonspecific, thus patients are often not diagnosed until the disease is at an advanced stage. Moreover, the lack of sufficient studies and available data on small bowel cancer makes diagnosis difficult, further delaying proper treatment for these patients. In fact, only a small number of published studies exist, and there are no studies specific to Thailand. Radiologic and endoscopic studies and findings may allow physicians to better understand the disease, leading to earlier diagnosis and improved patient outcomes. Objective: To retrospectively analyze the clinical, radiologic, and endoscopic characteristics of small bowel cancer patients in Thailand's Siriraj Hospital. Materials and Methods: This retrospective analysis included 185 adult patients (97 men, 88 women; mean age = 57.6 ± 14.9) with pathologically confirmed small bowel cancer diagnosed between January 2006 and December 2013. Clinical, radiologic, and endoscopic findings were collected and compared between each subtype of small bowel cancer. Results: Of the 185 patients analyzed, gastrointestinal stromal tumor (GIST) was the most common diagnosis (39.5%, n=73). Adenocarcinoma was the second most common (25.9%, n = 48), while lymphoma and all other types were identified in 24.3% (n = 45) and 10.3% (n = 19) of cases, respectively. The most common symptoms were weight loss (43.2%), abdominal pain (38.4%), and upper gastrointestinal bleeding (23.8%). Conclusions: Based on radiology and endoscopy, this study revealed upper gastrointestinal bleeding, an intra-abdominal mass, and a sub-epithelial mass as common findings in GIST. Obstruction and ulcerating/circumferential masses were indicative of adenocarcinoma, as revealed by radiology and endoscopy, respectively. Finally, no specific findings were related to lymphoma.
Assessing the Potential of Thermal Imaging in Recognition of Breast Cancer
Zadeh, Hossein Ghayoumi;Haddadnia, Javad;Ahmadinejad, Nasrin;Baghdadi, Mohammad Reza 8619
Background: Breast cancer is a common disorder in women, constituting one of the main causes of death all over the world. The purpose of this study was to determine the diagnostic value of thermography for breast tissue diseases. Materials and Methods: In this paper, we applied a non-contact infrared camera, the INFREC R500, for evaluating the capabilities of thermography. The study was conducted on 60 patients suspected of breast disease who were referred to the Imam Khomeini Imaging Center. Information obtained from questionnaires and clinical examinations, along with the diagnostic results obtained from ultrasound images, biopsies and thermography, was analyzed. The results indicated that the use of thermography with the asymmetry technique is useful in identifying hypoechoic as well as cystic masses, provided that the patient does not suffer from breast discharge. Results: The accuracy of identification with the asymmetry technique was 91.89% and 92.30%, respectively, while the accuracy of identifying the exact location was 61.53% and 75%. The approach also proved effective in identifying heterogeneous lesions, fibroadenomas, and intraductal masses, but not iso-echoic and calcified masses. Conclusions: According to the results of the investigation, thermography may be useful in initial screening and as a supplement to diagnostic procedures due to its safety (its non-radiation properties), low cost and good recognition of breast tissue disease.
Could Tumor Size Be A Predictor for Papillary Thyroid Microcarcinoma: a Retrospective Cohort Study
Wang, Min;Wu, Wei-Dong;Chen, Gui-Ming;Chou, Sheng-Long;Dai, Xue-Ming;Xu, Jun-Ming;Peng, Zhi-Hai 8625
Background: Central lymph node metastasis (CLNM) is common in papillary thyroid microcarcinoma (PTMC). The aim of this study was to define pathohistologic risk grading based on surgical outcomes. Materials and Methods: Statistical analysis was performed to determine the optimal cut-off value of tumor size in preoperative ultrasound images for defining the risk of CLNM in papillary thyroid microcarcinoma. Receiver operating characteristic (ROC) curve studies were carried out to determine the cutoff value(s) for the predictor(s). All patients were divided into two groups according to this size, and clinico-pathological and immunohistochemical parameters were compared to determine the significance of findings. Results: The optimal cut-off value of tumor size to predict the risk of CLNM in papillary thyroid microcarcinoma was 0.575 cm (area under the curve 0.721) according to the ROC curves. Significant differences were observed in multifocality, extrathyroidal extension and central lymph node metastasis between the two groups divided according to tumor size by the cutoff value. Patients in the two groups showed different positive rates and intensity of Ki67 staining. Conclusions: The size of PTMC in ultrasound images is helpful to predict the aggressiveness of the tumor; it could be an easy predictor for PTMC prognosis and assist in choosing treatment.
Primary Idiopathic Myelofibrosis: Clinico-Epidemiological Profile and Risk Stratification in Pakistani Patients
Sultan, Sadia;Irfan, Syed Mohammed 8629
Background: Primary idiopathic myelofibrosis (PMF) is a clonal Philadelphia chromosome-negative myeloproliferative neoplasm characterized by extramedullary hematopoiesis and marrow fibrosis. It is an uncommon hematopoietic malignancy which primarily affects elderly individuals. The rationale of this study was to determine its clinico-epidemiological profile along with risk stratification in Pakistani patients. Materials and Methods: In this retrospective cross sectional study, 20 patients with idiopathic myelofibrosis were enrolled from January 2011 to December 2014. Data were analyzed with SPSS version 22. Results: The mean age was 57.9 ± 16.5 years, with 70% of patients aged above 50. The male to female ratio was 3:1. Overall, only 10% of patients were asymptomatic and the remainder presented with constitutional symptoms. In symptomatic patients, the major complaints were weakness (80%), weight loss (75%), abdominal discomfort (60%), night sweats (13%), pruritus (5%) and cardiovascular accidents (5%). Physical examination revealed splenomegaly as the predominant finding, detected in 17 patients (85%), with a mean splenic span of 22.2 ± 2.04 cm. The mean hemoglobin was 9.16 ± 2.52 g/dl with a mean MCV of 88.2 ± 19.7 fl. The mean total leukocyte count was 17.6 ± 19.2 × 10^9/l and the mean platelet count 346.5 ± 321.9 × 10^9/l. Serum lactate dehydrogenase, serum creatinine and uric acid were 731.0 ± 154.1, 0.82 ± 0.22 and 4.76 ± 1.33, respectively. According to risk stratification, 35% were in the high risk, 40% in the intermediate risk and 25% in the low risk group. Conclusions: The majority of PMF patients were male and presented with constitutional symptoms in our setting. Risk stratification revealed a predominance of advanced disease in our series.
Expression and Clinical Significance of Sushi Domain-Containing Protein 3 (SUSD3) and Insulin-like Growth Factor-I Receptor (IGF-IR) in Breast Cancer
Zhao, Shuang;Chen, Shuang-Shuang;Gu, Yuan;Jiang, En-Ze;Yu, Zheng-Hong 8633
Background: To investigate the expression of insulin-like growth factor-I receptor (IGF-IR) and sushi domain-containing protein 3 (SUSD3) in breast cancer tissue, and to analyze their relationship with clinical parameters and the correlation between the two proteins. Materials and Methods: The expression of IGF-IR and SUSD3 in 100 cases of breast cancer tissue and adjacent normal breast tissue after surgery was detected by the immunohistochemical technique MaxVision, and the relationship with clinical pathological features was further analyzed. Results: The positive rate of IGF-IR protein was 86.0% in breast cancer, higher than the 3.0% in adjacent normal breast tissue (P<0.05). The positive expression rate of SUSD3 protein was 78.0% in breast cancer, higher than the 2.0% in adjacent normal breast tissue (P<0.05). The expression of IGF-IR and SUSD3 was related to estrogen receptor status and pathological type (P<0.05), but not to age, stage, or the expression of HER-2 and Ki-67 (P>0.05). The expression of IGF-IR and SUSD3 in breast cancer tissue was positively correlated (r=0.553, P<0.01). Conclusions: The expression of IGF-IR and SUSD3 may be correlated with the occurrence and development of breast cancer. The combined detection of IGF-IR, SUSD3 and ER may play an important role in judging prognosis and guiding adjuvant therapy after surgery for breast cancer.
Intraperitoneal Perfusion Therapy of Endostar Combined with Platinum Chemotherapy for Malignant Serous Effusions: A Meta-analysis
Liang, Rong; Xie, Hai-Ying; Lin, Yan; Li, Qian; Yuan, Chun-Ling; Liu, Zhi-Hui; Li, Yong-Qiang
Background: Malignant serous effusions (MSE) are a complication in patients with advanced cancer. Endostar is a new anti-tumor drug targeting vessels which potently inhibits neovascularization. This study aimed to systematically evaluate the efficacy and safety of intraperitoneal perfusion therapy with Endostar combined with platinum chemotherapy for MSE. Materials and Methods: Randomized controlled trials (RCTs) on intraperitoneal perfusion therapy with Endostar combined with platinum chemotherapy for malignant serous effusions were searched in the electronic databases PubMed, EMBASE, Web of Science, CNKI, VIP, CBM and WanFang. The quality of the RCTs was evaluated by two independent researchers, and a meta-analysis was performed using RevMan 5.3 software. Results: A total of 25 RCTs covering 1,253 patients were included in the meta-analysis, and all were evaluated as "B" grade quality. The meta-analysis showed that Endostar combined with platinum had an advantage over platinum alone in terms of response rate of effusions (76% vs 48%, RR=1.63, 95%CI: 1.50-1.78, P<0.00001) and improvement rate in quality of life (69% vs 44%, RR=1.57, 95%CI: 1.42-1.74, P<0.00001). As for safety, there was no significant difference between the two groups in the incidences of nausea and vomiting (35% vs 34%, RR=1.01, 95%CI: 0.87-1.18, P=0.88), leucopenia (38% vs 38%, RR=1.00, 95%CI: 0.87-1.15, P=0.99), or renal impairment (18% vs 20%, RR=0.86, 95%CI: 0.43-1.74, P=0.68). Conclusions: Endostar combined with platinum by intraperitoneal perfusion is effective for malignant serous effusions, and patient quality of life is significantly improved without an obvious increase in the incidence of adverse reactions.
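Pooled risk ratios of the kind reported above are conventionally obtained by inverse-variance weighting of log risk ratios across trials, which is how a fixed-effect model in RevMan works. A minimal sketch, assuming per-trial 2x2 event counts are available; the three trials shown are hypothetical, not those of the review.

```python
import numpy as np

# Hypothetical 2x2 counts per trial: (events_trt, n_trt, events_ctl, n_ctl)
trials = [(30, 40, 19, 40), (25, 35, 16, 35), (41, 50, 27, 52)]

log_rr, weights = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)                  # risk ratio for this trial
    se = np.sqrt(1/a - 1/n1 + 1/c - 1/n2)     # standard error of log(RR)
    log_rr.append(np.log(rr))
    weights.append(1 / se**2)                 # inverse-variance weight

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / np.sum(weights)
se_pooled = 1 / np.sqrt(np.sum(weights))

lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```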
Turkish Adolescent Perceptions about the Effects of Water Pipe Smoking on their Health
Cakmak, Vahide; Cinar, Nursan
Background: Consumption of tobacco in the form of a water pipe has recently increased, especially among young people. This study aimed to develop a scale to detect perceptions about the effects of water pipe smoking on health and to test its validity and reliability. Our scale, named "a scale of perception about the effects of water pipe smoking on health", was developed in order to detect factors affecting adolescents' perceptions of the effects of water pipe smoking on health. Materials and Methods: The sample consisted of 150 voluntary students for scale development and 750 voluntary students in the study group. Data were collected via a questionnaire prepared by the researchers themselves and a 5-point Likert scale for "a scale of perception about the effects of water pipe smoking on health", which was prepared from the literature. Data evaluation was carried out on a computer with SPSS. Results: The findings of the study showed that "a scale of perception about the effects of water pipe smoking on health" was valid and reliable. The mean total score of the adolescents who participated in the study was 58.5 ± 1.25. The mean score of those who did not smoke a water pipe (60.1 ± 11.7) was higher than that of those who did (51.6 ± 13.8), the difference being statistically significant. Conclusions: It was established that "a scale of perception about the effects of water pipe smoking on health" is a reliable and valid measurement tool. It was also found that individuals who smoked a water pipe had a lower level of perception of the effects of water pipe smoking on health than their counterparts who did not.
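Scale reliability of the kind tested here is commonly summarized by Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal sketch, assuming a respondents-by-items matrix of Likert scores; the data shown are hypothetical, and the study's own reliability procedure is not detailed in the abstract.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```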
Heparanase mRNA and Protein Expression Correlates with Clinicopathologic Features of Gastric Cancer Patients: a Meta-analysis
Li, Hai-Long; Gu, Jing; Wu, Jian-Jun; Ma, Chun-Lin; Yang, Ya-Li; Wang, Hu-Ping; Wang, Jing; Wang, Yong; Chen, Che; Wu, Hong-Yan
Background: Heparanase is believed to be involved in gastric carcinogenesis. However, the clinicopathologic features of gastric cancer with high heparanase expression remain unclear. Aim: The purpose of this study was to comprehensively and quantitatively summarize the available evidence for the use of heparanase mRNA and protein expression to evaluate clinicopathological associations in gastric cancer in Asian patients by meta-analysis. Materials and Methods: Relevant articles listed in the MEDLINE, CNKI and Cochrane Library databases up to March 2015 were searched using several keywords. A meta-analysis was performed to clarify the impact of heparanase mRNA and protein on clinicopathological parameters in gastric cancer. Combined ORs with 95% CIs were calculated with RevMan 5.0, and publication bias testing was performed with Stata 12.0. Results: A total of 27 studies including 3,891 gastric cancer patients were combined in the final analysis. When stratifying the studies by the pathological variables of heparanase mRNA expression, depth of invasion (633 patients) (OR=4.96; 95% CI=2.38-1.37; P<0.0001), lymph node metastasis (639 patients) (OR=6.22; 95% CI=2.70-14.34; P<0.0001), and TNM stage (383 patients) (OR=6.85; 95% CI=2.04-23.04; P=0.002) were all significant. When stratifying the studies by the pathological variables of heparanase protein expression, this was the case for depth of invasion (1,250 patients) (OR=2.76; 95% CI=1.52-5.03; P=0.0009), lymph node metastasis (1,178 patients) (OR=4.79; 95% CI=3.37-6.80; P<0.00001), tumor size (727 patients) (OR=2.06; 95% CI=1.31-3.23; P=0.002) (OR=2.61; 95% CI=2.09-3.27; P=0.000), and TNM stage (1,233 patients) (OR=6.85; 95% CI=2.04-23.04; P=0.002). Egger's tests suggested publication bias for depth of invasion, lymph node metastasis and tumor size for heparanase mRNA and protein expression. Conclusions: This meta-analysis suggests that higher heparanase expression in gastric cancer is associated with the clinicopathologic features of depth of invasion, lymph node metastasis and TNM stage at both the mRNA and protein levels, and with tumor size only at the protein level. Egger's tests suggested publication bias for these clinicopathologic features of heparanase mRNA and protein expression, which may be caused by a shortage of relevant studies. As a result, although numerous reports show that heparanase may be associated with clinicopathologic features in gastric cancer, this meta-analysis indicates that more rigorous studies are needed to evaluate its clinicopathologic significance.
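Egger's test, used above for publication bias, regresses each study's standardized effect (log OR divided by its standard error) on its precision (1/SE); an intercept significantly different from zero indicates funnel-plot asymmetry. A minimal sketch, assuming per-study log ORs and standard errors are available; the example values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effects: log odds ratios and their standard errors
log_or = np.array([1.60, 1.83, 1.41, 2.10, 0.95, 1.72])
se     = np.array([0.45, 0.60, 0.30, 0.80, 0.25, 0.55])

snd = log_or / se          # standardized normal deviate per study
precision = 1 / se

# Egger regression: SND = b0 + b1 * precision; test H0: intercept b0 = 0
X = sm.add_constant(precision)
fit = sm.OLS(snd, X).fit()

print(f"intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")
```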
Malignant Neoplasm Burden in Nepal - Data from the Seven Major Cancer Service Hospitals for 2012
Pun, Chin Bahadur; Pradhananga, Kishore K; Siwakoti, Bhola; Subedi, Krishna; Moore, Malcolm A
In Nepal, no population-based cancer registry program exists to assess the incidence, prevalence, morbidity and mortality of cancer, but at the national level a number of hospital-based cancer registries cooperate to provide relevant data. Seven major cancer diagnosis and treatment hospitals are involved, including the BP Koirala Memorial Cancer Hospital, supported by WHO-Nepal since 2003. The present retrospective analysis of cancer patients of all age groups was conducted to assess the frequencies of different types of cancer presenting from January 1st to December 31st, 2012. A total of 7,212 cancer cases were registered, the mean age of the patients being 51.9 years. The most prevalent age group in males was 60-64 years (13.6%), while in females it was 50-54 years (12.8%). The commonest forms of cancer in males were bronchus and lung (17.6%), followed by stomach (7.3%), larynx (5.2%) and non-Hodgkin lymphoma (4.5%). In females, cervix uteri (19.1%) and breast (16.3%) were the top-ranking cancer sites, followed by bronchus and lung (10.2%), ovary (6.1%) and stomach (3.8%). The present data provide an update of the cancer burden in Nepal and highlight the relatively young age of breast and cervical cancer patients.
Efficacy of Prophylactic Entecavir for Hepatitis B Virus-Related Hepatocellular Carcinoma Receiving Transcatheter Arterial Chemoembolization
Li, Xing; Zhong, Xiang; Chen, Zhan-Hong; Wang, Tian-Tian; Ma, Xiao-Kun; Xing, Yan-Fang; Wu, Dong-Hao; Dong, Min; Chen, Jie; Ruan, Dan-Yun; Lin, Ze-Xiao; Wen, Jing-Yun; Wei, Li; Wu, Xiang-Yuan; Lin, Qu
Background and Aims: Hepatitis B virus (HBV) reactivation has been reported to be induced by transcatheter arterial chemoembolization (TACE) in HBV-related hepatocellular carcinoma (HCC) patients with a high incidence. Effective strategies to reduce hepatitis flares due to HBV reactivation in this specific group of patients have been limited to lamivudine. This retrospective study aimed to investigate the efficacy of prophylactic entecavir in HCC patients receiving TACE. Methods: A consecutive series of 191 HBV-related HCC patients receiving TACE was analyzed, including 44 patients who received prophylactic entecavir. Virologic events, defined as an increase in serum HBV DNA level to more than 1 log10 copies/ml higher than the nadir level, and hepatitis flares due to HBV reactivation were the main endpoints. Results: Patients with or without prophylaxis were similar in host factors and in the majority of characteristics regarding tumor factors, HBV status, liver function and LMR. Notably, cycles of TACE were parallel between the groups. Ten (22.7%) patients receiving prophylactic entecavir reached virologic response. Patients receiving prophylactic entecavir presented significantly reduced virologic events (6.8% vs 54.4%, p=0.000) and hepatitis flares due to HBV reactivation (0.0% vs 11.6%, p=0.039) compared with patients without prophylaxis. Kaplan-Meier analysis illustrated that patients in the entecavir group presented significantly improved virologic event-free survival (p=0.000) and hepatitis flare-free survival (p=0.017). Female gender and Eastern Cooperative Oncology Group (ECOG) performance status 2 were the only significant predictors of virologic events in patients without antiviral prophylaxis. Rescue antiviral therapy did not reduce the incidence of hepatitis flares due to HBV reactivation. Conclusion: Prophylactic entecavir showed promising efficacy in HBV-related cancer patients receiving TACE. Lower performance status and female gender might be predictors of HBV reactivation in these patients.
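The Kaplan-Meier analysis mentioned above estimates event-free survival as a product over event times, S(t) = Π(1 − d_i/n_i), where d_i is the number of events and n_i the number at risk at time t_i. A minimal sketch of the estimator with hypothetical follow-up data, not the study's:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier estimate: S(t) = product over event times of (1 - d_i/n_i)."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    n_at_risk = len(time)
    surv, s = [], 1.0
    for t in np.unique(time):
        at_t = time == t
        d = event[at_t].sum()            # events (e.g., hepatitis flares) at time t
        if d > 0:
            s *= 1 - d / n_at_risk
        surv.append((t, s))
        n_at_risk -= at_t.sum()          # events and censored cases leave the risk set
    return surv

# Hypothetical follow-up times (months) and flare indicator (1 = flare, 0 = censored)
months = [3, 5, 5, 8, 12, 12, 15, 20]
flare  = [1, 0, 1, 1,  0,  1,  0,  0]
for t, s in kaplan_meier(months, flare):
    print(f"t = {t:>2} months, S(t) = {s:.3f}")
```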
Deactivation of Telomerase Enzyme and Telomere Destabilization by Natural Products: a Potential Target for Cancer Green Therapy
Sasidharan, Sreenivasan; Jothy, Subramanion L; Kavitha, Nowroji; Chen, Yeng; Kanwar, Jagat R
Clues to Identifying Risk Factors for Nasopharyngeal Carcinoma
Wang, Chuqiong; He, Jiman
Effects of 10-week walking and walking with home-based resistance training on muscle quality, muscle size, and physical functional tests in healthy older individuals
Akito Yoshiko (ORCID: orcid.org/0000-0001-8929-9202)^1,6,
Aya Tomita^2,
Ryosuke Ando^3,4,
Madoka Ogawa^2,3,
Shohei Kondo^2,
Akira Saito^5,
Noriko I. Tanaka^2,4,
Teruhiko Koike^1,4,
Yoshiharu Oshida^1,4 &
Hiroshi Akima^2,4
European Review of Aging and Physical Activity volume 15, Article number: 13 (2018)
Abstract
Background
Older individuals have been shown to present muscle atrophy in conjunction with an increased fat fraction in some muscles. The proportion of fat and connective tissue within skeletal muscle can be estimated from axial B-mode ultrasound images using echo intensity (EI), which was used here as an index of muscle quality. Walking, home-based weight-bearing resistance training, and their combination are considered simple, easy, and practical exercise interventions for older adults. The purpose of this study was to quantify the effects of walking and of walking with home-based resistance training on the muscle quality of older individuals.
Methods
Thirty-one participants performed walking training only (W-group; 72 ± 5 years) and 33 participants performed walking and home-based resistance training (WR-group; 73 ± 6 years). This study was a non-randomized controlled trial without a control group. All participants were instructed to walk two or three sets per week for 10 weeks (one set: 30 min of continuous walking). In addition, the WR-group performed home-based weight-bearing resistance training. EI was measured as a muscle quality index using axial B-mode ultrasound images of the rectus femoris and vastus lateralis of the mid-thigh; these values were averaged to obtain the EI of the quadriceps femoris (QF). Participants further performed five functional tests: sit-ups, supine up, sit-to-stand, 5-m maximal walk, and 6-min walk.
Results
QF EI was significantly decreased in both groups after training (W-group: 69.9 ± 7.4 a.u. to 61.7 ± 7.0 a.u.; WR-group: 64.0 ± 9.5 a.u. to 51.1 ± 10.0 a.u.; P < 0.05), suggesting improved muscle quality. QF EI was further decreased in the WR-group compared with the W-group. The sit-up test in both groups and the sit-to-stand and 5-m maximal walk tests in the W-group were significantly improved after training.
Conclusions
These results suggest that training-induced stimulation is associated with a decrease in EI in some thigh regions. Furthermore, the addition of home-based resistance training to walking would be effective for a greater reduction of EI.
Background
Skeletal muscle mass and function decline with age, and this age-related deterioration of skeletal muscle is known as sarcopenia [1]. As a result of aging and the progression of sarcopenia, adipose tissue infiltrates the skeletal muscle. Increased fat infiltration within the muscle, i.e., increased intramuscular fat (IMF) content, which can be assessed by computed tomography (CT) and magnetic resonance imaging (MRI), is considered to reflect worse muscle quality [2,3,4]. Aging-related physical and metabolic impairments have been commonly investigated in many previous studies [5, 6]; however, little attention has been paid to the role that IMF may play in these processes. Further, excessive IMF was found to be related to lower maximum strength, lower gait ability, and insulin resistance [7,8,9]. These findings imply that worse muscle quality may cause difficulty in living independently as well as metabolic syndrome. Thus, practical and effective methods of improving muscle quality as well as decreasing IMF in older individuals are needed.
Exercise interventions utilizing endurance and resistance training protocols based on maximal oxygen consumption and one-repetition maximum, respectively, have been reported to induce significant muscle hypertrophy and improvements in cardiovascular function that enhance overall physical function in older adults [10, 11]. Beyond these interventions, there have been few attempts to determine whether physical training reduces IMF content in middle-aged and older individuals [12]. Walking and home-based weight-bearing resistance training have been proposed as simple, easy, and practical exercise interventions for older adults. Walking in particular has been shown to effectively improve physical function [10, 13] and insulin responsiveness [14] and to reduce abdominal fat [15]. Previously, Ryan et al. [16] used CT to evaluate the effects of walking combined with diet restriction on IMF cross-sectional area in older obese women and showed a decrease in IMF cross-sectional area after 6 months of the intervention. Thus, in this study, we hypothesized that an increase in physical activity (i.e., walking) would also improve muscle quality; however, the effects of walking training on the muscle quality of older individuals are not well understood.
A combination of traditional endurance and resistance training has also been shown to improve muscle quality in older individuals. Wilhelm et al. [17] showed that concurrent strength and endurance training increased muscle thickness and decreased ultrasound echo intensity (EI), which reflects IMF and/or connective tissue content. Higher skeletal muscle EI is associated with lower muscle quality [18, 19]. Previous studies have shown a statistically significant correlation between muscle EI and the adipose tissue level in muscle biopsy samples [20]. Furthermore, connective and fibrous tissue are also reflected in muscle EI [21, 22]. Akima et al. [23] showed significant correlations between EI and extramyocellular lipid levels determined by 1H magnetic resonance spectroscopy (MRS) and IMF content measured using MRI. Therefore, EI can potentially reflect the lipid content around muscle cells, although some caution is needed because EI also reflects connective and fibrous tissue. These observations suggest that combined training can improve muscle quality by reducing the IMF content. Additionally, in a cross-sectional study, Akima et al. [3] found that IMF content measured by MRI was related to muscle size (r = −0.67 to −0.59, P < 0.05), suggesting that individuals with larger muscles have less IMF. Considering the results of these two studies, it appears that an increase in muscle size is key to IMF reduction and that a combination of walking and resistance training would effectively reduce IMF; however, this has not been proven. Home-based weight-bearing resistance training is currently recommended for sarcopenic individuals because this type of training has been shown to improve not only strength and functional ability but also muscle size [13, 24, 25]. Accordingly, we hypothesized that the combination of walking and home-based weight-bearing resistance training would decrease IMF, and eventually improve muscle quality, more than walking training alone. The purpose of this study was to compare the effects of walking alone and walking combined with home-based weight-bearing resistance training on the muscle quality of the thigh muscles of older individuals.
Methods
Experimental design and procedure
This study was carried out as part of health promotion classes for volunteers in Nagoya City from 2014 to 2015. All participants learned about the class through a public relations magazine or website and applied to participate of their own accord. The class consisted of an introductory session (1st week), measurement sessions (2nd and 12th week), lectures on health promotion (4th, 6th, 8th, and 10th week), and presentation of the measurement results (13th week). Therefore, we met the participants at least once every 2 weeks during the class. At the first visit, we introduced the purpose and significance of the study to the participants and explained the entire experimental protocol, the specific training procedures, and the measurement techniques.
All participants gave their written informed consent before study participation. Participants were assigned non-randomly to two exercise groups (i.e., walking alone or walking combined with resistance training) and performed the prescribed exercises at home for 10 weeks; they recorded their training on customized recording sheets. This study was designed as a non-randomized controlled trial to unify the training condition within each class. Practical considerations required that all participants in each class perform the same exercise; therefore, participants were assigned to the walking group (W-group) during the first year (2014) and to the walking combined with resistance training group (WR-group) during the second year (2015). No participant attended the class in both years. During the study period, the participants were instructed to avoid changes in diet and in their recreational physical activities (e.g., walking, jogging, and stretching). Before and after the training intervention, the participants underwent skeletal muscle ultrasonography and physical function testing in our laboratory.
The participant inclusion criteria were as follows: 1) resided in Nagoya City, 2) aged 65 years or older, 3) did not have any conditions requiring exercise restriction (e.g., cardiac disease, respiratory disease, hypertension, orthopedic conditions), 4) were able to perform activities of daily living (ADL) independently, and 5) were not currently involved in exercise training. Questionnaires and interviews during the first classroom session were used to determine pre-existing conditions and the ability to perform ADL, and no participants were excluded based on these issues. Seventy-nine healthy older men and women were eligible to participate in this study. Fifteen participants failed to complete either the 10-week intervention or the follow-up examination; however, none of these participants dropped out because of injury or illness. Of the 64 participants who completed the study, 31 were assigned to the W-group (16 men and 15 women; age 72 ± 5 years, height 159 ± 8 cm, weight 56 ± 10 kg, BMI 22 ± 3 kg/m2) and 33 were assigned to the WR-group (12 men and 21 women, age 73 ± 6 years, height 156 ± 7 cm, weight 53 ± 7 kg, BMI 22 ± 2 kg/m2). Before the experiment, the purpose, procedures, and risks associated with this study were explained to each participant, and written informed consent was obtained from all participants. All examination protocols were approved by the Institutional Review Board of the Research Center for Health, Physical Fitness and Sports at Nagoya University (approval numbers: 26–13, 27–9) and conducted in accordance with the ethical principles stated in the Declaration of Helsinki.
The walking program performed by W- and WR-group participants consisted of more than two sets per week, with one set comprising at least 30 min of continuous walking without rest. Furthermore, participants were asked to try to achieve an average of 10,000 steps per day (i.e., a total of 70,000 steps per week). Participants whose daily step counts already exceeded 10,000 steps without the walking intervention were instructed simply to add two or more walking sets per week and, in that case, not to worry about the step-count target. Each participant was instructed to walk at his or her usual speed. The participants wore a pedometer (PD-635, TANITA, Tokyo, Japan) attached to the anterior midline of the waist while performing their ADL every day during the 10-week training period, from the time they got up in the morning until they went to bed at night. The reliability of this type of pedometer has been well established, and these pedometers are widely used to measure step counts [26, 27].
The WR-group performed resistance training at least three times per week during the 10-week training period. We used an original home-based resistance training program developed by the Japan Health Promotion & Fitness Foundation. This program can be performed at home and does not require any specialized exercise equipment. The participants were instructed in the correct exercise techniques during the first classroom session and were then able to perform the training at home while watching a DVD, which included a performance model with music and an explanation of the key points of each exercise. The training regimen consisted of five exercises: chair stands, hip flexions, calf raises, lateral leg raises, and sit-ups. Chair stands consisted of repeatedly standing up from and sitting down on a chair. Hip flexions consisted of alternate elevations of each knee until the hip joint was flexed at a 90° angle. Calf raises consisted of alternating plantar flexion and dorsiflexion while standing. Lateral leg raises consisted of abduction of each leg to approximately 30° from the vertical while standing. Sit-ups were performed from a supine position, with the knees bent at approximately 80° and arms crossed in front of the chest. Hip flexions, calf raises, and lateral leg raises were performed in a standing position while holding a chair to support the body. Participants performed 45 repetitions of each exercise to the music on the DVD and were told to sing along with the lyrics to avoid holding their breath. It was estimated that the participants would complete one series in approximately 30 min. Between exercises, the participants were required to take a 30-s break. We instructed the participants to keep training logs using the customized recording sheets that we provided and to record their physical condition, number of steps per day, type and frequency of physical activities, and special events in their daily life. We checked that the participants were properly completing the training tasks by interview once every 2 weeks.
Ultrasound measurements
Subcutaneous fat thickness, muscle thickness, and EI of the mid-thigh were measured by ultrasonography, as in our previous studies [19, 28, 29]. Ultrasonography was performed after 15 min of rest to avoid the influence of body fluid shifts induced by muscle contraction [30]. Participants lay on an examination bed in a supine position with their knee joints fully extended. We measured the anterior and lateral parts of the right thigh at the midpoint between the greater trochanter and the lateral condyle. A real-time B-mode ultrasonography device (LOGIQ e, GE Healthcare, Duluth, GA, USA) with a 3.8-cm, 8-10 MHz linear array probe was used to obtain images (Fig. 1) with the following acquisition parameters: frequency 10 MHz, gain 70 dB, depth 4.0 to 6.0 cm, focus point 1 (top of the image). The depth was determined for each participant, generally up to 6.0 cm, and was set at the same level before and after the training period. A water-soluble gel was applied to the scanning head of the probe to achieve acoustic coupling, and extra care was taken to avoid deformation of the muscle architecture. Three frozen axial images of each section were stored in DICOM format and transferred to a personal computer. ImageJ software (National Institutes of Health, Bethesda, MD, USA, version 1.46) was used for analysis. The thickness of the subcutaneous fat was identified as the distance between the dermis and the upper boundary of the ventral fascia. Muscle thickness (MT) of the rectus femoris (RF) and vastus lateralis (VL) was defined as the distance between the superior border of the subcutaneous fascia and the deep aponeurosis. For the vastus intermedius (VI), muscle thickness was defined as the distance between the inferior border of the superficial aponeurosis and the superior border of the femur. The subcutaneous fat thickness (SFT QF) and muscle thickness (MT QF) of the quadriceps femoris (QF) were calculated using the following equations:
SFT QF = (anterior subcutaneous fat thickness + lateral subcutaneous fat thickness) / 2
MT QF = (thickness of RF + thickness of anterior VI + thickness of VL + thickness of lateral VI) / 4
Fig. 1 Representative ultrasound images of the anterior (a) and lateral (b) thighs. SF, subcutaneous fat; RF, rectus femoris; VI, vastus intermedius; VL, vastus lateralis; F, femur. Black double-headed arrows show subcutaneous fat thickness. White double-headed arrows show muscle thickness. Scale is 1 cm
EI was assessed at the gray scale level, which was expressed in arbitrary units (a.u.), using ImageJ software. A rectangular region of interest as large as possible was established, excluding the visible fascia and bone in RF from the anterior image and VL from the lateral image. The mean EI inside the region of interest in RF (EI RF) and VL (EI VL) was calculated for each image, and the mean EI from three images for each muscle was used for future analyses. We calculated EI QF using the following equation:
EI QF = (EI RF + EI VL) / 2
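As a concrete illustration of the EI computation described above, the mean gray-scale value inside a rectangular ROI can be reproduced outside ImageJ with a few lines of array code. This is a minimal sketch assuming 8-bit grayscale ultrasound images saved to disk; the file names and ROI coordinates are hypothetical and would in practice be chosen per participant to exclude visible fascia and bone.

```python
import numpy as np
from PIL import Image

def roi_mean_ei(path, top, left, height, width):
    """Mean echo intensity (0-255 gray level) inside a rectangular ROI."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    roi = img[top:top + height, left:left + width]
    return roi.mean()

# Hypothetical image files and ROI placements for one participant;
# three images per muscle, as in the protocol above
ei_rf = np.mean([roi_mean_ei(f"rf_{i}.png", 60, 80, 120, 200) for i in range(3)])
ei_vl = np.mean([roi_mean_ei(f"vl_{i}.png", 70, 90, 110, 180) for i in range(3)])

ei_qf = (ei_rf + ei_vl) / 2    # EI QF as defined in the text
print(f"EI RF = {ei_rf:.1f} a.u., EI VL = {ei_vl:.1f} a.u., EI QF = {ei_qf:.1f} a.u.")
```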
The reliability of this methodology was established by Caresio et al. [31], supporting our approach. We further calculated the intraclass correlation coefficient (ICC, 2.1), the standard error of the measurement (SEM), and the minimal detectable change (MDC) of EI for 20 randomly selected participants. The ICC was 0.99 for RF and 0.96 for VL (all P < 0.01), SEM was 0.73 for RF and 1.86 for VL, and MDC was 7.77 for RF and 8.34 for VL.
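For reference, these reliability statistics follow standard definitions: ICC(2,1) from a two-way random-effects ANOVA decomposition (Shrout and Fleiss), SEM = SD × √(1 − ICC), and MDC95 = 1.96 × √2 × SEM. A minimal sketch with a hypothetical test-retest matrix (rows = participants, columns = repeated measurements), not the study's data:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, single measurement (Shrout & Fleiss)."""
    n, k = x.shape                       # n subjects, k repeated trials
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-trials mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical test-retest EI values (a.u.) for 5 participants x 2 trials
x = np.array([[70.2, 69.8], [61.5, 62.0], [75.1, 74.4], [66.0, 66.9], [58.3, 58.0]])

icc = icc_2_1(x)
sem = x.std(ddof=1) * np.sqrt(1 - icc)   # SEM from the pooled SD of all measurements
mdc = 1.96 * np.sqrt(2) * sem            # MDC at the 95% confidence level
print(f"ICC(2,1) = {icc:.3f}, SEM = {sem:.2f}, MDC95 = {mdc:.2f}")
```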
Physical functional tests
The participants performed five functional tests (i.e., sit-ups, supine up, sit-to-stand, 5-m maximal walk and 6-min walk) in a gymnasium. These functional tests were chosen because they have been used in many previous studies as indices of lower limb strength, and the findings of several studies have shown that EI is associated with basic functional capacity and agility [28, 29, 32]. For the sit-up test, the participants lay in a supine position with their knees bent at approximately 80° and their feet flat on the floor. The participants performed as many sit-ups as possible for 30 s with their arms crossed in front of their chest. The examiner held the ankle joints of the participants during the test. The supine up test consisted of measuring the time it took for the participant to go from the supine to the standing position as fast as possible, using whatever form they preferred. The sit-to-stand test measured the time taken to sit in and stand up from a chair 10 times as quickly as possible, with the participant's arms crossed in front of their chest. The height of the seat was 40 cm from the floor. For the 5-m maximal walk test, four parallel lines were taped on the floor at 1 m, 6 m, and 7 m (finish line) from the start line (0 m). The participants walked with maximal effort from the start line toward the finish line. The examiner timed the interval between the 1-m and 6-m lines while walking alongside the participant. The 6-min walk test consisted of measuring the distance achieved by walking for 6 min on a 108-m circular course at maximal effort. Markers were placed along the course every 6 m as landmarks, and the examiners counted the laps completed. We verbally encouraged the participants to give their maximal effort. The sit-up, sit-to-stand, and 6-min walk tests were conducted once. The supine up and 5-m maximal walk tests were conducted twice, and the best results were used in the analyses. The ICC values (ICC, 2.1) for the physical function tests indicate that their reliabilities range from "moderate" to "almost perfect" (supine up, 0.85; sit-to-stand, 0.74; 5-m maximal walk, 0.65; 6-min walk, 0.77; P < 0.05). The MDC was 1.15 for the supine up, 0.34 for the sit-to-stand, 0.21 for the 5-m maximal walk, and 25.92 for the 6-min walk. ICC and MDC were measured in 20 older adults recruited from the same community, who matched our participants in age and BMI and performed the five functional tests twice, following the same procedure, to confirm test-retest reliability.
Statistical analysis
All values are reported as mean ± standard deviation. Two-way (time × group) analysis of variance with repeated measures over time was used to compare subcutaneous fat thickness, muscle thickness, EI, and physical function parameters. When a significant interaction or main effect was found, the Bonferroni post-hoc test was used to identify significant differences. An unpaired Student's t-test was used to compare the percent changes in subcutaneous fat thickness, muscle thickness, and EI between groups. Pearson's product-moment correlation coefficients were used to determine the associations between the percent changes. The level of significance was set at P < 0.05. All statistical analyses were performed using IBM SPSS Statistics (version 22.0 J; IBM Japan, Tokyo, Japan).
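The between-group comparison of percent changes and the correlation analysis described above reduce to a handful of standard calls. A minimal sketch with simulated data; the arrays below are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post EI QF values (a.u.) for each group
pre_w,  post_w  = rng.normal(70, 7, 31), rng.normal(62, 7, 31)
pre_wr, post_wr = rng.normal(64, 9, 33), rng.normal(51, 10, 33)

pct_w  = 100 * (post_w - pre_w) / pre_w       # percent change, W-group
pct_wr = 100 * (post_wr - pre_wr) / pre_wr    # percent change, WR-group

# Unpaired t-test on percent changes between groups
t, p = stats.ttest_ind(pct_w, pct_wr)
print(f"t = {t:.2f}, p = {p:.3f}")

# Pearson correlation between percent changes of EI QF and MT QF
pct_mt = rng.normal(2, 4, 31)                 # hypothetical MT QF percent changes
r, p_r = stats.pearsonr(pct_w, pct_mt)
print(f"r = {r:.2f}, p = {p_r:.3f}")
```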
Results
Both groups achieved the walking frequency target (W-group: 2.8 ± 1.6 times per week; WR-group: 3.0 ± 2.0 times per week). The WR-group participants performed their home-based resistance training series an average of 5.1 ± 2.8 times per week. Participants walked approximately 11,000 steps on their walking training days (W-group: 11,473 ± 2,683 steps; WR-group: 11,035 ± 2,324 steps). The number of steps taken on non-walking days was significantly lower than that on training days (W-group: 7,969 ± 2,034 steps; WR-group: 7,498 ± 2,180 steps). The average numbers of steps taken per day during the 10-week training period were 9,117 ± 2,360 for the W-group and 9,306 ± 2,417 for the WR-group; these values were not significantly different.
There were significant time-by-group interactions for anterior subcutaneous fat thickness, SFT QF, RF thickness, lateral VI thickness, EI VL, and EI QF (Table 1). The EIs of RF, VL, and QF were significantly decreased relative to baseline in both groups after the training intervention (P < 0.05). Compared with baseline values, RF thickness significantly increased in the WR-group, whereas the thicknesses of RF and lateral VI significantly decreased in the W-group after training. Lateral SFT and SFT QF were significantly decreased in the WR-group after training (P < 0.05). The percent changes from baseline in muscle thickness and EI are also shown in Table 1. There were significant between-group differences in the percent changes in RF thickness, anterior VI thickness, and MT QF, as well as in the percent changes in EI VL and EI QF.
Table 1 Echo intensity, subcutaneous fat thickness and muscle thickness for the walking training group (W-group) and the walking and resistance training group (WR-group) before and after the 10-week training
After the training intervention, participants in both the W- and WR-groups showed improvement in the sit-up test. The W-group also showed improvements in the sit-to-stand and 5-m maximal walk tests after the intervention (Table 2).
Table 2 Functional performance for the walking training group (W-group) and the walking and resistance training group (WR-group) before and after the 10-week training
The correlations of the percent changes in EI QF with SFT QF, MT QF, and physical function are shown in Table 3. We calculated the percent change in the sit-up test using a limited number of participants (W-group, n = 22; WR-group, n = 26) because some participants scored zero at their baseline assessment. The percent change in EI QF was associated with the percent change in MT QF in both groups. The percent change in EI QF was associated with the percent change in the supine up test in the W-group.
Table 3 Correlation coefficients between the percent change of quadriceps femoris echo intensity (EI QF) and those of subcutaneous fat thickness, muscle thickness and physical functions
Discussion
The main findings of this study were as follows: 1) the EI of the thigh muscles significantly decreased in both the W- and WR-groups over the study period, and 2) the changes in EI VL and EI QF in the WR-group were significantly greater than those in the W-group, resulting in significantly lower post-intervention values, which suggests a greater improvement of muscle quality in the WR-group than in the W-group.
According to the American College of Sports Medicine, walking is the recommended endurance exercise for older adults [33]. Therefore, we selected walking as the endurance exercise used in this study and monitored the time the participants spent walking and the number of steps taken. We allowed participants to walk at their own, self-selected pace for several reasons. First, the positive effects of walking have been previously established: Rooks et al. [13] reported that self-paced walking improved physical function parameters, such as balance and stair climbing, and Ryan et al. [16] confirmed that walking reduces IMF. Second, allowing participants to self-select their walking pace reduces the risk of falls and stress fractures. A recent cross-sectional study showed that the risk of falls was greater during both slow and fast walking than during walking at a normal speed [34]. Furthermore, walking at a non-usual speed (slow or fast) can induce excessive fatigue, which increases fall risk in older adults [35]. Therefore, we decided that self-paced walking exercise provided the best balance between stimulating muscle quality improvements and the need for participant safety. The participants took approximately 7,500 to 8,000 steps per day on non-walking days and 11,000 steps per day on walking days, with an overall average of approximately 9,000 steps per day for both groups. Previously reported step counts of healthy and unhealthy older individuals with obesity, peripheral arterial occlusive disease, claudication or stroke ranged from 3,500 to 7,000 steps per day [26, 36]. Compared with the subjects of those studies, some of our participants appear to have been active and healthy; in fact, they had no serious health problems such as obesity, endocrine disorders, sarcopenia, or frailty. Through the walking intervention used in our study, our participants increased their number of steps per day by approximately 2,500 to 3,000; ultimately, their overall daily step count was approximately 1.2- to 1.3-fold that of untrained healthy older individuals [37]. This might have contributed to the significant improvements in their functional test results after the intervention, a result supported by previous reports suggesting that walking contributes to improvements in mobility and the ability to perform ADL [13]. We measured muscle EI to estimate the infiltration level of adipose and/or connective tissue [21, 22, 38]. Another striking result of this study was that the EI of the thigh muscles significantly decreased after the 10-week walking intervention. This result is similar to that of Ryan et al. [16], who found a reduction in IMF cross-sectional area (measured by CT) after a walking intervention in obese older women. However, given the different measurement methods (CT vs. ultrasonography) and the different physical characteristics of the participants in our study compared with those in the study by Ryan et al. [16], similarities in the results should be interpreted with care. The reduction in muscle use induced by lower limb unloading has been shown to increase IMF in the calf and thigh [39], and Goodpaster et al. [40] reported that increased physical activity prevented an increase in IMF. These results suggest that the amount of activity performed by the lower limb muscles greatly influences their IMF content.
We also found a significant decrease in the EI of RF, VL, and QF after the 10-week walking training intervention in our study. This response likely indicates a decrease in IMF content. Improvements in muscle quality may also reduce the risk of type 2 diabetes in older individuals because IMF content has a negative effect on insulin sensitivity [2, 8]. Therefore, walking may be an effective method of improving the quality of the thigh muscles in older individuals and may result in recovery and/or prevention of metabolic syndrome.
We observed a significant decrease in SFT in the WR-group accompanied by a decrease in muscle EI; accordingly, both SFT and EI decreased in QF (Table 1). In a cross-sectional study, Goodpaster et al. [2] found a significant correlation between subcutaneous fat cross-sectional area and muscle density (r = −0.35, P < 0.01), an index of the fat infiltration level, implying that IMF content is higher when the subcutaneous fat area is larger. Therefore, IMF accumulation may change together with subcutaneous fat; however, it is still unknown how regional fat depots throughout the body (subcutaneous, abdominal, intramuscular, intermuscular, hepatic, and so on) are related to each other. Interestingly, the change was shown only in the lateral region of the WR-group and not in the anterior region (Table 1). This result is inconsistent with that of our previous training experiment [29]. This spot-specific change could be explained by the findings of Akima et al. [28], who showed that SFT is related to EI in the lateral (r = −0.40, P < 0.05) but not in the anterior region (r = −0.29, P > 0.05). However, few studies have reported the relationship between longitudinal changes in subcutaneous fat and muscle quality, and the underlying physiological mechanism is still unclear.
The percent changes in the muscle thicknesses of the RF and QF were significantly higher in the WR-group than in the W-group (Table 1). This demonstrates that home-based weight-bearing resistance training effectively increased the muscle size of the participants in our study. This result was partly consistent with the results of previous studies [17, 41] and suggests that home-based resistance training may contribute to preventing sarcopenia and related mobility disorders, falls and fractures, disability, and loss of independence [13, 24, 25]. However, in both groups, some of the muscles examined showed no significant change or even decrease in thickness after the training intervention (Table 1). This was likely because the MT parameter included both skeletal muscle tissue and IMF tissue, even though this parameter was called "muscle thickness" [17, 41]. Thus, if the thickness of the skeletal muscle tissue and/or IMF tissue decreased as a result of the intervention, the measured "muscle thickness" would decrease. EI changes in the W and WR groups suggest that IMF decreased after the 10-week training intervention. However, using ultrasound imaging, it is difficult to determine whether the change in thickness that we observed was due to IMF loss alone or to muscle loss. MRI is considered the gold standard medical imaging modality for analysis of lean and non-lean tissues [3, 4]; however, the problems of cost and accessibility of MRI have been discussed frequently in the literature. For practical reasons, we used ultrasonography in this study. The validity of assessing muscle size on the basis of thickness measurements made using ultrasonography has been shown previously [42], and the validity and reliability of muscle quality measurements made using ultrasonography have also been discussed [20, 23, 31]. Therefore, ultrasonography has been shown to be a suitable imaging technique for studies such as ours and has the advantage of lower cost and greater accessibility than those of MRI.
Our participants took approximately 7,500 to 8,000 steps per day on non-walking days, and their physical functional performance was better than that in our previous reports [19, 29]. These characteristics imply that the participants were very active and faced no obstacles to exercise, which made it possible to complete our home-based training protocol, since participants were required to manage and fulfill their own training quota around their lifestyle; they did in fact achieve the target walking and resistance training frequencies. These characteristics would also affect the results: greater decreases in EI, indicating greater improvements in muscle quality, were observed after home-based weight-bearing resistance training in conjunction with walking than after walking alone (Table 1). Concurrent endurance and resistance training effectively improves both muscle strength and cardiovascular function and has previously been shown to reduce EI by 5% in older individuals [17]. However, this reduction is small compared with the effect on EI of strength training alone shown by Radaelli et al. [41], who found a 12 to 20% reduction in EI after resistance training of the same duration. Similarly, Akima et al. [3] reported that muscle size inversely determined IMF content. Thus, we hypothesized that walking combined with resistance training that induced muscle hypertrophy would have greater effects on IMF than walking alone. Consistent with our hypothesis, the percent changes in EI VL and EI QF of the WR-group were significantly greater than those in the W-group (Table 1). However, this result was inconsistent with the results reported by Marcus et al. [43], who confirmed a decrease in IMF cross-sectional area after exercise but failed to find a specific effect of combined training. One reason for this discrepancy may be the characteristics of the participants: IMF content has been reported to be affected by many factors, including age, disease status, injury, inactivity, and obesity [44], and race is also one of the factors that determine individual differences in IMF content [7]. The different magnitudes of response to the training interventions in our study might also be due to different metabolic responses in the W- and WR-groups. Perhaps the reductions in IMF that we observed were the result of changes in energy expenditure, fat oxidation, and/or improvements in mitochondrial function [45, 46]; however, we have not determined to what extent these changes occurred in the muscles of our participants.
We found significant correlations between the percent changes in EI QF and the percent changes in MT QF in both groups (Table 3); this result supports the findings of a previous study by Akima et al. [3]. Similarly, Gorgey et al. [47] reported that neuromuscular electrical stimulation of patients with spinal cord injury decreased IMF in conjunction with muscle hypertrophy. Manini et al. [39] showed a significant increase in IMF with muscle loss as a result of lower limb unloading and found that the increased IMF could be statistically explained by muscle loss. Therefore, interventions that increase muscle size, such as resistance training, may more effectively reduce IMF than endurance training interventions. In our participants, not all muscles increased in thickness (Table 1); this was inconsistent with our hypothesis and may suggest that the intensity of the weight-bearing resistance training was not sufficient to induce muscle hypertrophy in all muscles. Many previous studies have used resistance training protocols with increasing loads that require resistance machines or dumbbells [17, 41, 43]; however, lack of access to transportation, unavailability of training programs, labor considerations, and costs are major limitations of these types of interventions. Furthermore, the risk of injury is also greater with this type of training. Considering the balance between risk, simplicity, and versatility, we concluded that home-based weight-bearing resistance training was suitable for our study, and our results suggest that it was effective for improving muscle quality even without causing hypertrophy of all muscles. The lack of evident hypertrophy in some muscles may also be because of the inclusion of IMF in the MT measurement, as discussed previously. A limitation of ultrasound imaging analysis is that it is difficult to precisely differentiate between muscle tissue and IMF.
Our study had several limitations. First, we assigned participants to the W- and WR-groups using a non-randomized procedure. However, we minimized bias as much as possible through the following measures: 1) participants were recruited from the same city using the same methods (public invitation from Nagoya City via a public relations magazine and website), 2) participants in each year met the same inclusion criteria (aged over 65 years, living independently, without serious disease, capable of exercise, and not currently involved in exercise training), and 3) the examiners analyzing the data were blinded to the participants' group. Thus, there were no significant between-group differences in basic parameters such as age, height, weight, BMI, and the number of steps taken on non-walking days. Given these conditions, our participants showed high levels of activity and physical function; because of these participant characteristics, it would be difficult to generalize these training effects to older adults with injury, sarcopenia, frailty, or nursing-care needs. Second, we did not have a control group. Previous studies that examined training effects on EI reported before-and-after EI results over a control period and found that EI did not change over 6 or 12 weeks [17, 48]. Data from a control period might emphasize the effects of the training intervention, but we observed changes in EI that were clearly enhanced by physical activity. Third, we did not measure baseline physical activity level because of participants' restricted schedules. However, we analyzed their training logs and calculated the number of steps on non-training days as the baseline step count. We instructed participants to describe in detail their physical condition, daily steps, and events of daily living related to daily activity, and we checked the logs once every 2 weeks through consultation. These measures helped us understand participants' daily activity on both training and non-training days.
Conclusions
Knee extensor muscle EI was significantly reduced after a 10-week walking intervention in older adults and was reduced even further by combining walking with home-based resistance training. Changes in EI were negatively correlated with changes in MT in both groups, suggesting that the mechanical and metabolic stimulation of the trained muscles resulted in the EI changes. These results indicate that walking training alone may be useful for improving the muscle quality of older individuals but that it has a smaller overall training effect than walking combined with home-based resistance training. Muscle size and muscle quality both improved, with concurrent improvements in functional abilities, as a result of the 10-week combined walking and home-based resistance training intervention, without the use of conventional resistance training machines.
Abbreviations
CT: Computed tomography
EI: Echo intensity
IMF: Intramuscular fat
MRI: Magnetic resonance imaging
MT: Muscle thickness
QF: Quadriceps femoris
RF: Rectus femoris
SEM: Standard error of the measurement
SFT: Subcutaneous fat thickness
VL: Vastus lateralis
W-group: Walking training group
WR-group: Walking and resistance training group
References
Evans WJ, Campbell WW. Sarcopenia and age-related changes in body composition and functional capacity. J Nutr. 1993;123:465–8.
Goodpaster BH, Thaete FL, Kelley DE. Thigh adipose tissue distribution is associated with insulin resistance in obesity and in type 2 diabetes mellitus. Am J Clin Nutr. 2000;71(4):885–92.
Akima H, Yoshiko A, Hioki M, Kanehira N, Shimaoka K, Koike T, Sakakibara H, Oshida Y. Skeletal muscle size is a major predictor of intramuscular fat content regardless of age. Eur J Appl Physiol. 2015;115(8):1627–35.
Yoshiko A, Hioki M, Kanehira N, Shimaoka K, Koike T, Sakakibara H, Oshida Y, Akima H. Three-dimensional comparison of intramuscular fat content between young and old adults. BMC Med Imaging. 2017;17(1):12.
Fink RI, Kolterman OG, Griffin J, Olefsky JM. Mechanisms of insulin resistance in aging. J Clin Invest. 1983;71(6):1523–35.
Jubrias SA, Odderson IR, Esselman PC, Conley KE. Decline in isokinetic force with age: muscle cross-sectional area and specific force. Pflugers Arch. 1997;434(3):246–53.
Goodpaster BH, Carlson CL, Visser M, Kelley DE, Scherzinger A, Harris TB, Stamm E, Newman AB. Attenuation of skeletal muscle and strength in the elderly: the health ABC study. J Appl Physiol. 2001;90(6):2157–65.
Goodpaster BH, Thaete FL, Simoneau JA, Kelley DE. Subcutaneous abdominal fat and thigh muscle composition predict insulin sensitivity independently of visceral fat. Diabetes. 1997;46(10):1579–85.
Marcus RL, Addison O, Dibble LE, Foreman KB, Morrell G, Lastayo P. Intramuscular adipose tissue, sarcopenia, and mobility function in older individuals. J Aging Res. 2012;2012:629637.
Sipila S, Suominen H. Effects of strength and endurance training on thigh and leg muscle mass and composition in elderly women. J Appl Physiol. 1995;78(1):334–40.
Cadore EL, Pinto RS, Bottaro M, Izquierdo M. Strength and endurance training prescription in healthy and frail elderly. Aging Dis. 2014;5(3):183–95.
Jacobs JL, Marcus RL, Morrell G, LaStayo P. Resistance exercise with older fallers: its impact on intermuscular adipose tissue. Biomed Res Int. 2014;2014:398960.
Rooks DS, Kiel DP, Parsons C, Hayes WC. Self-paced resistance training and walking exercise in community-dwelling older adults: effects on neuromotor performance. J Gerontol A Biol Sci Med Sci. 1997;52(3):161–8.
Hersey WC 3rd, Graves JE, Pollock ML, Gingerich R, Shireman RB, Heath GW, Spierto F, McCole SD, Hagberg JM. Endurance exercise training improves body composition and plasma insulin responses in 70- to 79-year-old men and women. Metabolism. 1994;43(7):847–54.
Hong HR, Jeong JO, Kong JY, Lee SH, Yang SH, Ha CD, Kang HS. Effect of walking exercise on abdominal fat, insulin resistance and serum cytokines in obese women. J Exerc Nutrition Biochem. 2014;18(3):277–85.
Ryan AS, Nicklas BJ, Berman DM, Dennis KE. Dietary restriction and walking reduce fat deposition in the midthigh in obese older women. Am J Clin Nutr. 2000;72(3):708–13.
Wilhelm EN, Rech A, Minozzo F, Botton CE, Radaelli R, Teixeira BC, Reischak-Oliveira A, Pinto RS. Concurrent strength and endurance training exercise sequence does not affect neuromuscular adaptations in older men. Exp Gerontol. 2014;60:207–14.
Fukumoto Y, Ikezoe T, Yamada Y, Tsukagoshi R, Nakamura M, Takagi Y, Kimura M, Ichihashi N. Age-related ultrasound changes in muscle quantity and quality in women. Ultrasound Med Biol. 2015;41(11):3013–7.
Yoshiko A, Kaji T, Sugiyama H, Koike T, Oshida Y, Akima H. Muscle quality characteristics of muscles in the thigh, upper arm and lower back in elderly men and women. Eur J Appl Physiol. 2018;118(7):1385–95.
Reimers K, Reimers CD, Wagner S, Paetzke I, Pongratz DE. Skeletal muscle sonography: a correlative study of echogenicity and morphology. J Ultrasound Med. 1993;12(2):73–7.
Pillen S, Tak RO, Zwarts MJ, Lammens MM, Verrijp KN, Arts IM, van der Laak JA, Hoogerbrugge PM, van Engelen BG, Verrips A. Skeletal muscle ultrasound: correlation between fibrous tissue and echo intensity. Ultrasound Med Biol. 2009;35(3):443–6.
Arts IM, Pillen S, Schelhaas HJ, Overeem S, Zwarts MJ. Normal values for quantitative muscle ultrasonography in adults. Muscle Nerve. 2010;41(1):32–41.
Akima H, Hioki M, Yoshiko A, Koike T, Sakakibara H, Takahashi H, Oshida Y. Intramuscular adipose tissue determined by T1-weighted MRI at 3 T primarily reflects extramyocellular lipids. Magn Reson Imaging. 2016;34(4):397–403.
Nelson ME, Layne JE, Bernstein MJ, Nuernberger A, Castaneda C, Kaliton D, Hausdorff J, Judge JO, Buchner DM, Roubenoff R, et al. The effects of multidimensional home-based exercise on functional performance in elderly people. J Gerontol A Biol Sci Med Sci. 2004;59(2):154–60.
Bruce-Brand RA, Walls RJ, Ong JC, Emerson BS, O'Byrne JM, Moyna NM. Effects of home-based resistance training and neuromuscular electrical stimulation in knee osteoarthritis: a randomized controlled trial. BMC Musculoskelet Disord. 2012;13:118.
Tudor-Locke CE, Myers AM. Methodological considerations for researchers and practitioners using pedometers to measure physical (ambulatory) activity. Res Q Exerc Sport. 2001;72(1):1–12.
Schneider PL, Crouter S, Bassett DR. Pedometer measures of free-living physical activity: comparison of 13 models. Med Sci Sports Exerc. 2004;36(2):331–5.
Akima H, Yoshiko A, Tomita A, Ando R, Saito A, Ogawa M, Kondo S, Tanaka NI. Relationship between quadriceps echo intensity and functional and morphological characteristics in older men and women. Arch Gerontol Geriatr. 2017;70:105–11.
Yoshiko A, Kaji T, Sugiyama H, Koike T, Oshida Y, Akima H. Effect of 12-month resistance and endurance training on quality, quantity, and function of skeletal muscle in older adults requiring long-term care. Exp Gerontol. 2017;98:230–7.
Berg HE, Tedner B, Tesch PA. Changes in lower limb muscle cross-sectional area and tissue fluid volume after transition from standing to supine. Acta Physiol Scand. 1993;148(4):379–85.
Caresio C, Molinari F, Emanuel G, Minetto MA. Muscle echo intensity: reliability and conditioning factors. Clin Physiol Funct Imaging. 2015;35(5):393–403.
McCarthy EK, Horvat MA, Holtsberg PA, Wisenbaker JM. Repeated chair stands as a measure of lower limb strength in sexagenarian women. J Gerontol A Biol Sci Med Sci. 2004;59(11):1207–12.
Chodzko-Zajko WJ, Proctor DN, Fiatarone Singh MA, Minson CT, Nigg CR, Salem GJ, Skinner JS. American College of Sports Medicine position stand. Exercise and physical activity for older adults. Med Sci Sports Exerc. 2009;41(7):1510–30.
Quach L, Galica AM, Jones RN, Procter-Gray E, Manor B, Hannan MT, Lipsitz LA. The nonlinear relationship between gait speed and falls: the maintenance of balance, independent living, intellect, and zest in the elderly of Boston study. J Am Geriatr Soc. 2011;59(6):1069–73.
Morrison S, Colberg SR, Parson HK, Neumann S, Handel R, Vinik EJ, Paulson J, Vinik AI. Walking-induced fatigue leads to increased falls risk in older adults. J Am Med Dir Assoc. 2016;17(5):402–9.
Tudor-Locke C, Bassett DR Jr. How many steps/day are enough? Preliminary pedometer indices for public health. Sports Med. 2004;34(1):1–8.
Sequeira MM, Rickenbach M, Wietlisbach V, Tullen B, Schutz Y. Physical activity assessment using a pedometer and its comparison with a questionnaire in a large population survey. Am J Epidemiol. 1995;142(9):989–99.
Reimers CD, Fleckenstein JL, Witt TN, Muller-Felber W, Pongratz DE. Muscular ultrasound in idiopathic inflammatory myopathies of adults. J Neurol Sci. 1993;116(1):82–92.
Manini TM, Clark BC, Nalls MA, Goodpaster BH, Ploutz-Snyder LL, Harris TB. Reduced physical activity increases intermuscular adipose tissue in healthy young adults. Am J Clin Nutr. 2007;85(2):377–84.
Goodpaster BH, Chomentowski P, Ward BK, Rossi A, Glynn NW, Delmonico MJ, Kritchevsky SB, Pahor M, Newman AB. Effects of physical activity on strength and skeletal muscle fat infiltration in older adults: a randomized controlled trial. J Appl Physiol. 2008;105(5):1498–503.
Radaelli R, Botton CE, Wilhelm EN, Bottaro M, Brown LE, Lacerda F, Gaya A, Moraes K, Peruzzolo A, Pinto RS. Time course of low- and high-volume strength training on neuromuscular adaptations and muscle quality in older women. Age (Dordr). 2014;36(2):881–92.
Miyatani M, Kanehisa H, Ito M, Kawakami Y, Fukunaga T. The accuracy of volume estimates using ultrasound muscle thickness measurements in different muscle groups. Eur J Appl Physiol. 2004;91(2–3):264–72.
Marcus RL, Smith S, Morrell G, Addison O, Dibble LE, Wahoff-Stice D, Lastayo PC. Comparison of combined aerobic and high-force eccentric resistance exercise with aerobic exercise only for people with type 2 diabetes mellitus. Phys Ther. 2008;88(11):1345–54.
Addison O, Marcus RL, Lastayo PC, Ryan AS. Intermuscular fat: a review of the consequences and causes. Int J Endocrinol. 2014;2014:309570.
Crane JD, Devries MC, Safdar A, Hamadeh MJ, Tarnopolsky MA. The effect of aging on human skeletal muscle mitochondrial and intramyocellular lipid ultrastructure. J Gerontol A Biol Sci Med Sci. 2010;65(2):119–28.
Bajpeyi S, Reed MA, Molskness S, Newton C, Tanner CJ, McCartney JS, Houmard JA. Effect of short-term exercise training on intramyocellular lipid content. Appl Physiol Nutr Metab. 2012;37(5):822–8.
Gorgey AS, Shepherd C. Skeletal muscle hypertrophy and decreased intramuscular fat after unilateral resistance training in spinal cord injury: case report. J Spinal Cord Med. 2010;33(1):90–5.
Cadore EL, Gonzalez-Izal M, Pallares JG, Rodriguez-Falces J, Hakkinen K, Kraemer WJ, Pinto RS, Izquierdo M. Muscle conduction velocity, strength, neural activity, and morphological changes after eccentric and concentric training. Scand J Med Sci Sports. 2014;24(5):343–52.
This study was promoted by the City of Nagoya's Health and Welfare Bureau. The experiment was carried out as a part of Health Promotion Services called "Nagoya Health College" program.
The authors have received no funding for conducting this study.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Graduate School of Medicine, Nagoya University, Nagoya, Japan
Akito Yoshiko, Teruhiko Koike & Yoshiharu Oshida
Graduate School of Education and Human Development, Nagoya University, Nagoya, Japan
Aya Tomita, Madoka Ogawa, Shohei Kondo, Noriko I. Tanaka & Hiroshi Akima
Japan Society for the Promotion of Science, Tokyo, Japan
Ryosuke Ando & Madoka Ogawa
Research Center of Health, Physical Fitness and Sports, Nagoya University, Nagoya, Japan
Ryosuke Ando, Noriko I. Tanaka, Teruhiko Koike, Yoshiharu Oshida & Hiroshi Akima
Faculty of Sports Sciences, Waseda University, Saitama, Japan
School of International Liberal Studies, Chukyo University, Toyota, Japan
AY: guarantor of integrity of the entire study, study concepts and design, literature research, experimental study, data analysis, and manuscript preparation and editing. AT, RA, MO, SK, AS and NT: experimental study, data sampling and analysis, and manuscript editing. TK: manuscript editing. YO: guarantor of integrity of the entire study and manuscript editing. HA: guarantor of integrity of the entire study, study concepts and design, experimental study, data sampling and manuscript editing. All authors read and approved the final manuscript.
Correspondence to Akito Yoshiko.
All examination protocols were approved by the Institutional Review Board of the Research Center for Health, Physical Fitness and Sports at Nagoya University (approval numbers: 26–13, 27–9), and were conducted in accordance with the ethical principles stated in the Declaration of Helsinki. The subjects gave written informed consent for the study after receiving a detailed explanation of the purposes, potential benefits, and risks associated with participation.
All participants completed consent forms.
Yoshiko, A., Tomita, A., Ando, R. et al. Effects of 10-week walking and walking with home-based resistance training on muscle quality, muscle size, and physical functional tests in healthy older individuals. Eur Rev Aging Phys Act 15, 13 (2018). https://doi.org/10.1186/s11556-018-0201-2
Home-based resistance training
Muscle quality
Robin's inequality and the Riemann hypothesis
Marek Wójtowicz
Proc. Japan Acad. Ser. A Math. Sci. 83(4): 47-49 (April 2007). DOI: 10.3792/pjaa.83.47
Let $f(n)=\sigma(n)/(e^\gamma n\log\log n)$, $n=3,4,\ldots$ , where $\sigma$ denotes the sum of divisors function. In 1984 Robin proved that the inequality $f(n)<1$, for all $n\ge 5041$, is equivalent to the Riemann hypothesis. Here we show that the values of $f$ are close to $0$ on a set of asymptotic density $1$. Similarly, an inequality by Rosser and Schoenfeld of 1962, dealing with the Euler totient function $\varphi$, is essential only on "thin" subsets of $\mathbf{N}$.
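Since $f$ is elementary, the abstract's setup is easy to probe numerically. Below is a minimal sketch (my own, not from the paper) using SymPy's divisor_sigma; $n = 5040$ is the last known violator of Robin's inequality.

```python
# Numerical look at Robin's ratio f(n) = sigma(n) / (e^gamma * n * log log n).
# A minimal sketch, not from the paper; requires sympy for divisor_sigma.
from math import exp, log
from sympy import EulerGamma, divisor_sigma

E_GAMMA = exp(float(EulerGamma))  # e^gamma ~ 1.781

def f(n: int) -> float:
    """Robin's ratio; RH is equivalent to f(n) < 1 for all n >= 5041."""
    return int(divisor_sigma(n)) / (E_GAMMA * n * log(log(n)))

for n in [5040, 5041, 10_000, 720_720]:
    print(n, round(f(n), 4))
# 5040 is the last known n with f(n) > 1; the colossally abundant 720720
# comes close to 1 from below, while most n give much smaller values.
```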
Marek Wójtowicz. "Robin's inequality and the Riemann hypothesis." Proc. Japan Acad. Ser. A Math. Sci. 83 (4) 47 - 49, April 2007. https://doi.org/10.3792/pjaa.83.47
Digital Object Identifier: 10.3792/pjaa.83.47
Primary: 11M06, 11N37
Keywords: Asymptotic density, Riemann hypothesis, Robin's inequality
The Phillips Curve: An Overview
Noah Smith has an article in Bloomberg today about the Phillips curve — the relationship between employment and inflation where "employment" and "inflation" can mean a couple of different things. Phillips' original paper talked about wage inflation (wage growth) and unemployment, but sometimes these can refer to inflation of the price level in general (e.g. CPI inflation) or even expected inflation (in the New Keynesian Phillips Curves [NKPC] in DSGE models). I realized I don't have a good one-stop post for discussion of the Phillips curve, so this is going to be that post.
Noah's frame is the recent congressional hearings with Fed Chair Powell, and in particular the pointed questioning from Alexandria Ocasio-Cortez about whether the Phillips curve is "no longer describing what is happening in today's economy." He continues to discuss the research finding a 'fading Phillips curve' and mentioned Adam Ozimek's claim that the Phillips curve is alive and well — all things I have discussed on this blog in the context of the Dynamic Information Equilibrium Model [DIEM]. Let's begin!
1. There is a direct relationship between wage growth and the unemployment rate
The structure of wage growth and the unemployment rate over the past few decades shows a remarkable similarity (as always, click to enlarge):
The wage growth model has continued to forecast well for a year and a half so far, while the unemployment rate model not only has done well for over two years now (I started it earlier) but has outperformed forecasts from the Fed as well as Ray Fair's model. Regardless of whether the models are correct (but seriously, that forecasting performance should weigh pretty heavily), they are still excellent fits to the prior data and describe the time series' structure accurately. There's actually another series with this exact shock pattern ('economic seismogram') match — JOLTS hires. The hires measure confirms the 2014 mini-boom appearing in wage growth and unemployment so we're not just matching negative recession shocks, but positive booms. We can put the models together on the same graph to highlight the similarity ... and we can basically transform them to fall on top of each other by simply scaling and lagging:
We find that shocks to JOLTS hires lead shocks to unemployment by about 5 months, and shocks to wages by about 11 months — with the first two leading NBER recessions and the last one happening after a recession is over. We can be pretty confident that changes in hires cause changes in unemployment which in turn cause changes in wage growth. Between shocks, the normal pattern is that unemployment falls and wage growth rises (accelerates). The rate of the latter is slow, but consistent (and forecast correctly by the DIEM):
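As a concrete illustration of how such a lead can be read off two series, here is a toy lag estimate via cross-correlation. This is a sketch of the idea only, not the actual DIEM shock-matching procedure, and the data is synthetic:

```python
# Toy lead/lag estimate via cross-correlation of two monthly series.
# A sketch of the idea only, not the DIEM estimation procedure.
import numpy as np

def best_lag(x: np.ndarray, y: np.ndarray, max_lag: int = 24) -> int:
    """Return the lag (in samples) at which y best matches x shifted forward.
    A positive result means x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(x[max(0, -k):len(x) - max(0, k)],
                         y[max(0, k):len(y) - max(0, -k)])[0, 1] for k in lags]
    return lags[int(np.argmax(corrs))]

# Synthetic example: y is x shifted 5 months later plus noise.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=300))
y = np.roll(x, 5) + 0.1 * rng.normal(size=300)
print(best_lag(x, y))  # ~ 5
```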
2. Adam Ozimek's graph is more like a Beveridge curve and isn't quite as clean as presented
I used the wage growth model above and a similar model of prime age employment to reproduce a version of Ozimek's graph in an earlier post. Ozimek uses the Employment Cost Index (ECI), but I use the Atlanta Fed wage growth tracker data because it is monthly and goes back a bit farther in time [3]. However, this pretty much produces an identical graph to Ozimek's when we plot the same time period:
The DIEMs for wage growth and prime age employment population ratio [EPOP] also have some similar structure — however the 2014 mini-boom is not as obvious in EPOP if it appears at all ...
This indicates that these two series might have a more complex relationship than unemployment and wage growth. In fact, if you plot them on Ozimek's axes highlighting the temporal path through the data points in green (and yellow) as well as some additional earlier data in yellow (and highlighting the recent data in black) you see how the nice straight line above is somewhat spurious and the real slope is actually a bit lower:
The green dashed line shows where the data is headed (in the absence of a recession), and the light gray lines show the "dynamic equilibria" — the periods between shocks when wage growth and employment steadily grow. When a recession shock hits, we move from one "equilibrium" to another, much like the Beveridge curve (as I discuss in this blog post and in my paper).
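Since "dynamic equilibrium" and "shock" do a lot of work in these posts, here is a minimal sketch of the DIEM functional form as I read it from the paper: a constant log growth rate plus logistic steps in the log. All parameter values below are invented for illustration:

```python
# Sketch of the dynamic information equilibrium model (DIEM) functional form:
# log X(t) = alpha * t + c + sum of logistic "shock" transitions.
# Parameter values are invented for illustration, not fitted to real data.
import numpy as np

def diem(t, alpha, c, shocks):
    """shocks: list of (size a, center t0, width b) logistic steps in log X."""
    log_x = alpha * t + c
    for a, t0, b in shocks:
        log_x += a / (1.0 + np.exp(-(t - t0) / b))
    return np.exp(log_x)

t = np.linspace(2000, 2020, 241)  # monthly grid
# An unemployment-rate-like path: steady log decline plus one recession shock.
u = diem(t, alpha=-0.08, c=161.6, shocks=[(0.6, 2008.9, 0.35)])
```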
3. The macro-relevant Phillips curve has faded away
The Phillips curves above talk about wage inflation, but in macro models the relationship is between unemployment and the price level (e.g. CPI or PCE inflation) — the NKPC. Now it's true that wages are a "price" and a lot of macro models don't distinguish between the price of labor and the price of goods. But it appears empirically we cannot just ignore this distinction because there does not appear to be any signal in price level data today ... but there used to be!
Much like in the first part of this post, we can look at DIEMs for (in this case) core PCE inflation and unemployment, and note that they really do seem to be related in the 60s through the 80s:
We see spikes of inflation cut off by spikes in unemployment, which fade out in the 90s. This is where a visualization of these "shocks" I've called "economic seismograms" is helpful — the following is a chart in a presentation from last year (this time it's the GDP deflator):
Spikes in inflation are "cut-off" by recessions during the 60s and 70s, but that effect begins to fade out over time. What's interesting is that the period of a "strong Phillips curve" pretty much matches up with the long demographic shift of women entering the workforce in the 60s, 70s, and 80s. The Phillips curve vanishes when women's labor force participation becomes highly correlated with men's (i.e. only really showing signs of recession shocks). This is among several things that seem to change after the 1990s.
Why does this happen? I have some speculation (a metaphor I use is that mass labor force entry is like a "gravity wave" for macro) that I most concisely wrote up in a comment about my new book:
My thinking behind it is that high rates of labor force expansion (high compared to population growth) are more susceptible to the business cycle. Unlike adding people at the population growth rate, adding people at an accelerated rate because of something else happening — women entering the workforce — is more easily affected by macro conditions. Population grows and people have to find jobs, but women don't have to go against existing social norms and enter the workforce in a downturn, but are more likely to do so during an upturn (i.e. breaking social norms gets easier if it pays better than if it doesn't).
This would cause the business cycle to pro-cyclically amplify and modulate the rate of women entering the workforce, which gives rise to bigger cyclical fluctuations and also the Phillips curve.
As a side note: I think a similar mechanism played out during industrialization, when people were being drawn from rural agriculture into urban industry. And also a similar mechanism plays out when soldiers return from war (post-war inflation and recession cycles).
That new book's first chapter is largely about how this effect is generally behind the "Great Inflation" — and that it has nothing to do with monetary policy. Which brings us back to the beginning of this post: the Fed can't produce inflation because it never really could [1].
Update 13 July 2019: I wanted to add that this relationship between inflation and unemployment and the fading of it isn't about "expected" inflation (the expectations augmented Phillips curve), but observed inflation. It remains entirely possible that the "Lucas critique" is behind the fading — that agents learned how the Fed exploits the Phillips curve and so the relationship began to break down. Of course, the direct consequence is that apparently the Fed became a master of shaping expectations ... only to result in sub-target inflation after the Great Recession. It would also mean that the apparent match between rising labor force participation and the magnitude of the Phillips curve is purely a coincidence. I personally would go with Occam's razor here [2] — generally expectations-based theories verge on the unfalsifiable.
So 1. yes, wage growth and unemployment appear to be directly causally related; 2. wage growth and EPOP are not as closely or causally related; and 3. yes, the Phillips curve relationship between unemployment and the macro-relevant price level inflation has faded away as the surge of women entering the workforce ended.
[1] This is not to say a central bank can never create inflation — it could easily create hyperinflation, which is more a political problem than a macroeconomic mechanism. The cut-off between the "hyperinflation" effective theory and the "monetary policy is irrelevant" effective theory seems to be on the order of sustained 10% inflation. (In a side note mentioned at that link, that might also be where MMT — or really any one-dimensional theory of how an economy works — is a good effective theory. Your economy simplifies to a single dimension when money printing, inflation and government spending all far outpace population and labor force growth.)
[2] Is granting the Fed and monetary policy control of inflation so important that we must come up with whatever theory allows it no matter how contrived?
[3] Update 14 July 2019: Here's the ECI version alongside the Atlanta Fed wage growth tracker data — graph originally from here. ECI's a bit too uncertain to see the positive shock in the 2014 mini-boom.
Wage growth, inflation, interest rates, and employment
With the Fed hearings in Congress this week and some new data releases this week, I thought it'd be good to get a dynamic information equilibrium model (DIEM) snapshot just before the end of the month and what many people are thinking is going to be the first Fed rate cut since the Great Recession. The Atlanta Fed's Wage growth tracker was updated today and the latest results are in line with the DIEM forecast from a year and a half ago:
We're pretty much at the point where wage growth has reached the NGDP growth dynamic equilibrium, which I've speculated is the point where a recession is triggered (by e.g. wages eating into profits, resulting in falling investment). Of course, the NGDP series is noisy, but this is what the "limits to wage growth" picture looks like with an average-sized shock (in the wage growth time series):
Inflation (CPI all items, seasonally adjusted) came in today lower than the 2.5% dynamic equilibrium this month but well within the error bands. This is year-over-year and continuously compounded annual rate of change (i.e. log derivative):
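For concreteness, here is how the two measures mentioned (year-over-year and the continuously compounded "log derivative" annual rate) are computed from a monthly index; the index values below are made up:

```python
# Two common inflation measures from a monthly price index `p`:
# year-over-year and the continuously compounded annual rate (log derivative).
import numpy as np

p = np.array([100.0, 100.2, 100.4, 100.7, 100.9, 101.1, 101.3,
              101.5, 101.8, 102.0, 102.2, 102.4, 102.6])  # 13 months, made up

yoy = p[12:] / p[:-12] - 1.0        # year-over-year fraction
ccar = 12.0 * np.diff(np.log(p))    # continuously compounded annual rate
print(yoy * 100)                    # ~2.6% over the year
print(ccar.mean() * 100)            # ~2.6% on average, month by month
```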
But inflation doesn't give us much of a sign of a recession (it can react after the fact, but isn't a leading indicator).
A metric many people look at is the yield curve — I've been tracking the median of a collection of rate spreads (which basically matches the principal component). This is only loosely based on dynamic information equilibrium (i.e. there's a long-term tendency for interest rates to decline), but is really more a linear model of the interest rate data before the last three recessions (so caveat emptor) coupled with an AR process forecast:
That linear model gives us an estimate of when the yield curve should invert as an indication of a recession. One thing to note is that with the Fed potentially lowering interest rates at the end of the month, the path of the interest rate spread will likely "turn around" and start climbing — it's done so in the past three recessions. That turnaround point has been between one and five quarters before the recession onset, but then the turnaround has also usually been at about -50 bp — these are indicated with the gray box on the next graph:
As a side note: when people say AR processes outperform DSGE models, this is an example of one of those AR processes.
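Since the AR process does the heavy lifting in that forecast, here is a minimal sketch of fitting and forecasting one with statsmodels. The spread series below is a synthetic stand-in, not the actual median spread data:

```python
# Sketch: fit an AR(1) to an interest-rate-spread-like series and forecast.
# `spread` is synthetic stand-in data, not the real median spread series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(42)
spread = np.empty(120)
spread[0] = 1.5
for i in range(1, 120):                  # AR(1) toy data, phi = 0.95
    spread[i] = 0.95 * spread[i - 1] + rng.normal(scale=0.05)

model = AutoReg(spread, lags=1).fit()
print(model.params)                      # intercept and AR coefficient
print(model.forecast(steps=12))          # 12-period-ahead forecast
```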
If the Fed lowers rates this month, then the turnaround will be 20-30 bp higher than the past three recessions — is this an indication of looser policy than in the past? Political pressure? This is not necessarily to say the Fed's rate decisions will have an impact. It's just a representation of how the Fed changes policy in the face of economic weakness. Much like how a person who sees themselves about to get in a car accident might tense up, tensing up does not do anything to mitigate or prevent the accident.
Earlier this week, JOLTS data came out. I've speculated that these measures are leading indicators, and it appears that shocks to JOLTS hires appear at around 5 months before shocks to the unemployment rate and around 11 months before shocks to wage growth (the model above) — the latter coming after the recession has begun. In any case, JOLTS quits appears to be showing a flattening indicating a turnaround:
I talked about this on Twitter a bit. In the last recession, hires led the pack but that might have been a result of the housing bubble where construction hires started falling nearly 2 years before the recession onset. If that was a one-off, then quits and openings look like the better indicators. Here's openings:
As a side note, I talk about that atypical early lead for hires in my book as an indication that potentially the big xenophobic outbreak around the 2006 election might have had an impact on the housing bubble (an earlier draft version appears here as a blog post).
Again, a lot of this is speculative — I'm trying to put out clear tests of the usefulness of the dynamic information equilibrium model for forecasting and understanding data. But the series that seem to lag recessions (wage growth, inflation) are right in line with the DIEMs, while the series that seem to lead recessions (JOLTS) are showing the signs of deviations.
Update 4:30pm PDT
Here's the 10-year-rate forecast from 2015 still doing much better than the BCEI forecast of roughly the same vintage ...
Labor market update: external validity edition
There were several threads on twitter (e.g. here, here, here) the past couple days that tie up under the theme of "external validity" versus "internal validity". It's a distinction that appears to mean something different in macroeconomics than it does in other sciences, but I can't quite put my finger on it. Operationally, its definition appears to imply you can derive some kind of "understanding" from a model that doesn't fit out-of-sample data.
Let's say I observe some humans running around, jumping over things at a track and field event. I go back to my computer and code up an idealized model of a human reproducing the appearance of a representative human and giving it some of the behaviors I saw. Now I want to use this model to derive some understanding when I experiment with some policy changes ... say, watching the interaction between the human and angry mushroom people ...
A lot of macro models are basically like this — neither internal validity nor external validity. It's just kind of a simulacrum — sure, Mario looks a bit like a person, and people can move around. But no one can jump that high or change direction of their jump 180° in mid-air. A more precise analogy would be the invented virtual economies of video games like Civilization or Eve Online, but they're still not real because there is no connection with macro data.
In science, a conclusion about e.g. effects of some treatment on mice may be internally valid (i.e. it was done correctly and shows a real and reproducible effect, per a snarky twitter account, in mice), but not externally valid (i.e. the effect does not occur in humans). There's even a joke version of the linked "in mice" twitter account for DSGE models, but that's really not even remotely the same thing at all. DSGE models do not have internal validity in the scientific sense — they are not valid representations of even the subset of data they are estimated for. Or a better way to put it: we don't know if they are valid representations of the data they are estimated for.
We can know if the test on mice is internally valid — someone else can reproduce the results, or you can continue to run the experiment with more mice. Usually something like this is done in the paper itself. There's been a crisis in psychology recently due to failing to meet this standard, but it's knowable through doing the experiments again.
We cannot know if a macro model is internally valid in this sense. Why? Because macro models are estimated using time series for individual countries. If I estimate a regression over a set of data from 1980-1990 for the US, there is no way to get more data from 1980-1990 for the US in the same way we can get more mice — I effectively used all the mice already. Having someone else estimate the model or run the codes isn't testing internal validity because it's basically just re-doing some math (though some models fail even this).
The macro model might be an incredibly precise representation of the US economy between 1980 and 1990 in the same way old quantum theory and the Bohr model was an incredibly precise representation of the energy levels of Hydrogen. But old quantum theory was wrong.
Macro models are sometimes described as having "internal consistency" which is sometimes confused for "internal validity" [1]. Super Mario Brothers is internally consistent, but it's not internally valid.
So if internal validity is "unknowable" for a macro model, we can look at external validity — out-of-sample data from other countries or other times, i.e. forecasting. It is through external validity that macro models gain internal validity — we can only know if a macro model is a valid description of the data it was tested on (instead of being a simulacrum) if it works for other data.
Which brings me to today's data release from BLS — and an unemployment rate forecast I've been tracking for over two years (click to enlarge):
Note that the model works not only for other countries (like Australia), but also different time series such as the prime age labor force participation rate also released today:
That is to say the dynamic information equilibrium model (DIEM) has demonstrated some degree of external validity. This basically obviates any talk about whether DSGE models, ABMs, or other macro models can be useful for "understanding" if they do not accurately forecast. There are models that accurately forecast — that is now the standard. If the model does not accurately forecast, then it lacks external validity which means it cannot have internal validity — we can ignore it [1].
That said, the DSGE model from FRB NY has been doing fairly well with inflation for a bit over a year ... so even discussion of whether a DSGE model has to forecast accurately is obviated even if you are only considering DSGE models. They have to now — at least a year. This one has.
[1] Often people bring up microfoundations as a kind of logical consistency. A DSGE model has microfoundations, so even if it doesn't forecast exactly right the fact that we can fit macro data with a microfounded DSGE model provides some kind of understanding.
The reasoning is that we're extrapolating from the micro scale (agents, microfoundations) to the macro scale. It's similar to "external validity" except instead of moving to a different time (i.e. forecasting) or a different space (i.e. other countries), we are moving to a different scale. In physics, there's an excellent example of doing this correctly — in fact, it's related to my thesis. The quark model (QCD) is kind of like a set of microfoundations for nuclear physics. It's especially weird because we cannot really test the model very well at the micro scale (though recent lattice calculations have been getting better and better). The original tests of QCD came from extrapolating from the different energy scales (in the diagram below, Q²) using evolution equations. QCD was quite excellent at describing the data (click to enlarge):
Measurement of the structure function of a nucleon at one scale allows QCD to tell us what it looks like at another scale. We didn't prove QCD to be a valid description of reality at the scale it was formulated at in terms of quarks and gluons ("microfoundations"), but rather we extrapolated to different scales — external validity. Other experiments confirmed various properties of the quark microfoundations, but this experiment was one that confirmed the whole structure of QCD.
But we can in fact measure various aspects of the microfoundations of economics — humans, unlike quarks, are readily accessible without building huge accelerators. These often turn out to be wrong. But more importantly, the DSGE models extrapolated from these microfoundations do not have external validity — they don't forecast and economists don't use them to predict things at other scales (AFAICT) like, say, predicting state by state GDP.
What's weird is that the inability to forecast is downplayed, and the macro models are instead seen as providing some kind of "understanding" because they incorporate microfoundations, when in actuality the proper interpretation of the evidence and the DSGE construction is that either the microfoundations or the aggregation process are wrong. The only wisdom you should gain is that you should try something else.
Median sales price of new houses
Data for the median sales price (MSP) of new houses was released this past week on FRED, and the data is showing a distinct correlated negative deviation which is generally evidence that a non-equilibrium shock is underway in the dynamic information equilibrium model (DIEM).
I added a counterfactual shock (in gray). This early on, there is a tendency for the parameter fit to underestimate the size of the shock (for an explicit example, see this version for the unemployment rate in the Great Recession). The model overall shows the housing bubble alongside the two shocks (one negative and one positive) to the level paralleling the ones seen in the Case Shiller index and housing starts.
This seems like a good time to look at the interest rate model and the yield curve / interest rate spreads. First, the interest rate model is doing extraordinarily well for having started in 2015:
I show the Blue Chip Economic Indicators forecast from 2015 as well as a recent forecast from the Wall Street Journal (click to embiggen):
And here's the median (~ principal component) interest rate spread we've been tracking for the past year (almost exactly — June 25, 2018):
If -28 bp was the lowest point (at the beginning of June), it's higher than the previous three lowest points (-40 to -70 bp). Also, if it is in fact the lowest point, the previous three cycles achieved their lowest points between 1 and 5 quarters before the NBER recession onset.
PCE inflation
The DIEM for PCE inflation continues to perform fairly well ... though it's not the most interesting model in the current regime (the lowflation period has ended).
Here's the same chart with other forecasts on it:
The new gray dot with a black outline shows the estimated annual PCE inflation for 2019 assuming the previous data is a good sample (this is not the best assumption, but it gives an idea where inflation might end up given what we know today). The purple dots with the error bars are Fed projections, and the other purple dotted line is the forecast from Jan Hatzius of Goldman Sachs.
Mostly just to troll the DSGE haters, here's the FRB NY DSGE model forecast compared to the latest data — it's doing great!
But then the DIEM is right on as well with smaller error bands ...
A Workers' History of the United States 1948-2020
Available now! Click here!
After seven years of economic research and developing forecasting models that have outperformed the experts, author, blogger, and physicist Dr. Jason Smith offers his controversial insights about the major driving factors behind the economy derived from the data and it's not economics — it's social changes. These social changes are behind the questions of who gets to work, how those workers organize, and how workers identify politically — and it is through labor markets that these social changes manifest in economic effects. What would otherwise be a disjoint and nonsensical postwar economic history of the United States is made into a cohesive workers' history driven by women entering the workforce and the backlash to the Civil Rights movement — plainly: sexism and racism. This new understanding of historical economic data offers lessons for understanding the political economy of today and insights for policies that might actually work.
Dr. Smith is a physicist who began with quarks and nuclei before moving into research and development in signal processing and machine learning in the aerospace industry. During a government fellowship from 2011 to 2012 — and in the aftermath of the global financial crisis — he learned about the potential use of prediction markets in the intelligence community and began to assess their validity using information theoretic approaches. From this spark, Dr. Smith developed the more general information equilibrium approach to economics which has shown to have broader applications to neuroscience and online search trends. He wrote A Random Physicist Takes on Economics in 2017 documenting this intellectual journey and the change in perspective towards economic theory and macroeconomics that comes with this framework. This change in perspective to economic theory came with new interpretations of economic data over time that finally came together in this book.
The book I've been working on for the past year and a half — A Workers' History of the United States 1948-2020 — is now available on Amazon as a Kindle e-book or a paperback. Get your copy today! Head over to the book website for an open thread for your first impressions and comments. And pick up a copy of A Random Physicist Takes on Economics if you haven't already ...
Update 7am PDT 24 June 2019
The paperback edition still says "publishing" on KDP, but it should be ready in the next 24-48 hours. However, I did manage to catch what is probably a fleeting moment where the book is #1 in Macroeconomics:
Update 2pm PDT 24 June 2019
Paperback is live!
Sometimes I feel like I don't see the data
Sometimes I feel like my only friend.
I've seen links to this nymag article floating around the interwebs that purports to examine labor market data for evidence that the Fed rate hike of 2015 was some sort of ominous thing:
But refrain they did not.
Instead, the Federal Reserve began raising interest rates in 2015 ...
Scott Lemieux (a poli sci lecturer at the local university) puts it this way:
But the 2015 Fed Rate hike was based on false premises and had disastrous consequences, not only because of the direct infliction of unnecessary misery on many Americans, but because it may well have been responsible for both President Trump and the Republican takeover of the Senate, with a large amount of resultant damage that will be difficult or impossible to reverse.
Are we looking at the same data? Literally nothing happened in major labor market measures in December of 2015 (here: prime age labor force participation, JOLTS hires, unemployment rate, wage growth from ATL Fed):
There were literally no consequences from the Fed rate hike in terms of labor markets. All of these time series continued along their merry log-linear equilibrium paths. It didn't even end the 2014 mini-boom (possibly triggered by Obamacare going into effect) which was already ending.
But it's a good opportunity to plug my book which says that the Fed is largely irrelevant (although it can make a recession worse). The current political situation is about changing alliances and identity politics amid the backdrop of institutions that under-weight urban voters.
Update + 30 minutes
Before someone mentions something about the way the BLS and CPS count unemployment, let me add that nothing happened in long term unemployment either:
The mini-boom was already fading. Long term unemployment has changed, but the change (like the changes in many measures) came in the 90s.
Resolving the Cambridge capital controversy with logic
So I wrote a somewhat tongue-in-cheek blog post a few years ago titled "Resolving the Cambridge capital controversy with abstract algebra" [RCCC I] that called the Cambridge Capital Controversy [CCC] for Cambridge, UK in terms of the original debate they were having — summarized by Joan Robinson's claim that you can't really add apples and oranges (or in this case printing presses and drill presses) to form a sensible definition of capital. I used a bit of group theory and the information equilibrium framework to show that you can't simply add up factors of production. I mentioned at the bottom of that post that there are really easy ways around it — including a partition function approach in my paper — but Cambridge, MA (Solow and Samuelson) never made those arguments.
On the Cambridge, MA side no one seemed to care because the theory seemed to "work" (debatable). A few years passed and eventually Samuelson conceded Robinson and Sraffa were in fact right about their re-switching arguments. A short summary is available in an NBER paper from Baqaee and Farhi, but what interested me about that paper was that the particular way they illustrated it made it clear to me that the partition function approach also gets around the re-switching arguments. So I wrote that up in a blog post with another snarky title "Resolving the Cambridge capital controversy with MaxEnt" [RCCC II] (a partition function is a maximum entropy distribution, or MaxEnt).
This of course opened a can of worms on Twitter when I tweeted out the link to my post. The first volley was several people saying Cobb-Douglas functions were just a consequence of accounting identities or that they fit any data — a lot of which was based on papers by Anwar Shaikh (in particular the "humbug" production function). I added an update to my post saying these arguments were disingenuous — and in my view academic fraud because they rely on a visual misrepresentation of data as well as an elision of the direction of mathematical implication. Solow pointed out the former in his 1974 response to Shaikh's "humbug" paper (as well as the fact that Shaikh's data shows labor output is independent of capital which would render the entire discussion moot if true), but Shaikh has continued to misrepresent "humbug" until at least 2017 in an INET interview on YouTube.
The funny thing is that I never really cared about the CCC — my interest on this blog is research into economic theory based on information theory. RCCC I and RCCC II were both primarily about how you would go about addressing the underlying questions in the information equilibrium framework. However, the subsequent volleys have brought up even more illogical or plainly false arguments against aggregate production functions that seem to have sprouted in the Post-Keynesian walled garden. I believe it's because "mainstream" academic econ has long since abandoned arguing about it, and like my neglected back yard a large number of weeds have grown up. This post is going to do a bit of weeding.
Constant factor shares!
Several comments brought up that Cobb-Douglas production functions can fit any data assuming (empirically observed) constant factor shares. However, this is just a claim that the gradient
$$\nabla = \left( \frac{\partial}{\partial \log L} , \frac{\partial}{\partial \log K} \right)$$
is constant, which a fortiori implies a Cobb-Douglas production function
$$\log Y = a \log L + b \log K + c$$
A backtrack is that it's only constant factor shares in the neighborhood of observed values, but that just means Cobb-Douglas functions are a local approximation (i.e. the tangent plane in log-linear space) to the observed region. Either way, saying "with constant factor shares, Cobb Douglas can fit any data" is saying vacuously "data that fits a Cobb-Douglas function can be fit with a Cobb-Douglas function". Leontief production functions also have constant factor shares locally, but in fact have two tangent planes, which just retreats to the local description (data that is locally Cobb-Douglas can be fit with a local Cobb-Douglas function).
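To make "tangent plane in log-linear space" concrete: fitting a Cobb-Douglas function is just least squares on logs, and it will happily produce a good local fit to data generated by something that isn't Cobb-Douglas at all. A sketch with invented CES data:

```python
# Fit a Cobb-Douglas (log-linear) surface to data generated by a CES
# production function: the fit is a local tangent-plane approximation,
# not evidence the data "is" Cobb-Douglas. All parameters invented.
import numpy as np

rng = np.random.default_rng(1)
L = rng.uniform(80, 120, 500)
K = rng.uniform(80, 120, 500)
rho = -0.5
Y = (0.4 * K**rho + 0.6 * L**rho) ** (1.0 / rho)   # CES, not Cobb-Douglas

A = np.column_stack([np.log(L), np.log(K), np.ones_like(L)])
coef, *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)
print(coef[:2])  # local "factor shares" a, b, good only near the sample
```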
Aggregate production functions don't exist!
The denial that the functions even exist is by far the most interesting argument, but it's still not logically sound. At least it's not disingenuous — it could just use a bit of interdisciplinary insight. Jo Michell linked me to a paper by Jonathan Temple with the nonthreatening title "Aggregate production functions and growth economics" (although the filename is "Aggreg Prod Functions Dont Exist.Temple.pdf" and the first line of the abstract is "Rigorous approaches to aggregation indicate that aggregate production functions do not exist except in unlikely special cases.")
However, not too far in (Section 2, second paragraph) it makes a logical error of extrapolating from $N = 2$ to $N \gg 1$:
It is easy to show that if the two sectors each have Cobb-Douglas production technologies, and if the exponents on inputs differ across sectors, there cannot be a Cobb-Douglas aggregate production function.
It's explained how the argument proceeds in a footnote:
The way to see this is to write down the aggregate labour share as a weighted average of labour shares in the two sectors. If the structure of output changes, the weights and the aggregate labour share will also change, and hence there cannot be an aggregate Cobb-Douglas production function (which would imply a constant labour share at the aggregate level).
This is true for $N = 2$, because the change of one "labor share state" (specified by $\alpha_{i}$ for an individual sector $y_{i} \sim k^{\alpha_{i}}$) implies an overall change in the ensemble average labor share state $\langle \alpha \rangle$. However, this is a bit like saying if you have a two-atom ideal gas, the kinetic energy of one of the atoms can change and so the average kinetic energy of the two-atom gas doesn't exist therefore (rigorously!) there is no such thing as temperature (i.e. a well defined kinetic energy $\sim k T$) for an ideal gas in general with more than two atoms ($N \gg 1$).
I was quite surprised that econ has disproved the existence of thermodynamics!
Joking aside, if you have more than two sectors, it is possible you could have an empirically stable distribution over labor share states $\alpha_{i}$ and a partition function (details of the approach appear in my paper):
$$Z(\kappa) = \sum_{i} e^{- \kappa \alpha_{i}}$$
take $\kappa \equiv \log (1+ (k-k_{0})/k_{0})$ which means
$$\langle y \rangle \sim k^{\langle \alpha \rangle}$$
where the ensemble average is
$$\langle X \rangle \equiv \frac{1}{Z} \sum_{i} \hat{X} e^{- \kappa \alpha_{i}}$$
There are likely more ways than this partition function approach based on information equilibrium to get around the $N = 2$ case, but we only need to construct one example to disprove nonexistence. Basically this means that unless the output structure of a single firm affects the whole economy, it is entirely possible that the output structure of an ensemble of firms could have a stable distribution of labor share states. You cannot logically rule it out.
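Here is a quick numerical version of that counterexample (my construction, using the formulas above): let an individual labor share state change freely and watch the ensemble average. The effect that drives the $N = 2$ argument is $O(1/N)$ and vanishes for a large economy:

```python
# Counterexample sketch: individual labor-share states alpha_i change, but
# for N >> 1 the ensemble (partition-function) average <alpha> is stable.
import numpy as np

rng = np.random.default_rng(7)

def ensemble_alpha(alphas, kappa=1.0):
    """<alpha> with partition function Z(kappa) = sum_i exp(-kappa*alpha_i)."""
    w = np.exp(-kappa * alphas)
    return np.sum(alphas * w) / np.sum(w)

for n in [2, 100, 10_000]:
    alphas = rng.beta(2, 2, size=n)     # stable distribution of states
    before = ensemble_alpha(alphas)
    alphas[0] = rng.beta(2, 2)          # one sector changes its state
    print(n, abs(ensemble_alpha(alphas) - before))
# The change is O(1/N): large for N = 2, negligible for N = 10,000.
```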
What's interesting to me is that in a whole host of situations, the distributions of these economic states appear to be stable (and in some cases in an unfortunate pun, stable distributions). For some specific examples, we can look at profit rate states and stock growth rate states.
Now you might not believe these empirical results. Regardless, the logical argument is not valid unless your model of the economy is unrealistically simplistic (like modeling a gas with a single atom — not too unlike the unrealistic representative agent picture). There is of course the possibility that empirically this doesn't work (much like it doesn't work for a whole host of non-equilibrium thermodynamics processes). But Jonathan Temple's paper is a bunch of wordy prose with the odd equation — it does not address the empirical question. In fact, Temple re-iterates one of the defenses of the aggregate production function approaches that has vexed these theoretical attempts to knock them down (section 4, first paragraph):
One of the traditional defenses of aggregate production functions is a pragmatic one: they may not exist, but empirically they 'seem to work'.
They of course would seem to work if economies are made up of more than two firms (or sectors) and have relatively stable distributions of labor share states.
To put it yet another way, Temple's argument relies on a host of unrealistic assumptions about an economy — that we know the distribution isn't stable, and that there are only a few sectors, and that the output structure of these few firms changes regularly enough to require a new estimate of the exponent $\alpha$ but not regularly enough that the changes create a temporal distribution of states.
Fisher! Aggregate production functions are highly constrained!
There's a lot of references that trace all the way back to Fisher (1969) "The existence of aggregate production functions" and several people who mentioned Fisher or work derived from his papers. The paper is itself a survey of restrictions believed to constrain aggregate production functions, but it seems to have been written from the perspective that an economy is a highly mathematical construct that can either only be described by $C^{2}$ functions or not at all. In a later section (Sec. 6) talking about whether maybe aggregate production functions can be good approximations, Fisher says:
approximations could only result if [the approximation] ... exhibited very large rates of change ... In less technical language, the derivatives would have to wiggle violently up and down all the time.
Heaven forbid were that the case!
He cites in a footnote the rather ridiculous example of $\lambda \sin (x/\lambda)$ (locally $C^{2}$!) — I get the feeling he was completely unaware of stochastic calculus or quantum mechanics and therefore could not imagine a smooth macroeconomy made up of noisy components, only a few pathological examples from his real analysis course in college. Again, a nice case for some interdisciplinary exchange! I wrote a post some years ago about the $C^{2}$ view economists seem to take versus a far more realistic noisy approach in the context of the Ramsey-Cass-Koopmans model. In any case, why exactly should we expect firm level production functions to be $C^{2}$ functions that add to a $C^{2}$ function?
One of the constraints Fisher notes is that individual firm production functions (for the $i^{th}$ firm) must take a specific additive form:
$$f_{i}(K_{i}, L_{i}) = \phi_{i}(K_{i}) + \psi_{i}(L_{i})$$
This is probably true if you think of an economy as one large $C^{2}$ function that has to factor (mathematically, like, say, a polynomial) into individual firms. But like Temple's argument, it denies the possibility that there can be stable distributions of states $(\alpha_{i}, \beta_{i})$ for individual firm production functions (that even might change over time!) such that
$$Y_{i} = f_{i}(K_{i}, L_{i}) = K_{i}^{\alpha_{i}} L_{i}^{\beta_{i}}$$

while at the aggregate level

$$\langle Y \rangle \sim K^{\langle \alpha \rangle} L^{\langle \beta \rangle}$$
The left/first picture is a bunch of random production functions with beta distributed exponents. The right/second picture is an average of 10 of them. In the limit of an infinite number of firms, constant returns to scale hold (i.e. $\langle \alpha \rangle + \langle \beta \rangle \simeq 0.35 + 0.65 = 1$) at the macro level — however individual firms aren't required to have constant returns to scale (many don't in this example). In fact, none of the individual firms have to have any of the properties of the aggregate production function. (You don't really have to impose that constraint at either scale — and in fact, the whole Solow model works much better empirically in terms of nominal quantities and without constant returns to scale.) Since these are simple functions, they don't have that many properties but we can include things like constant factor shares or constant returns to scale.
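Here is a sketch of that experiment. It is my own reconstruction, since the exact parameters behind the figures aren't given: average many firm-level Cobb-Douglas functions with beta distributed exponents, then fit the aggregate in log-log space:

```python
# Average many firm-level production functions Y_i = K^a_i * L^b_i with
# beta-distributed exponents, then fit the aggregate in log-log space.
# My reconstruction of the figures; the parameter choices are guesses.
import numpy as np

rng = np.random.default_rng(3)
a = rng.beta(2.0, 3.7, 1000)           # <a> ~ 0.35
b = rng.beta(3.7, 2.0, 1000)           # <b> ~ 0.65; note a_i + b_i != 1

K = rng.uniform(0.5, 2.0, 500)         # inputs normalized around 1
L = rng.uniform(0.5, 2.0, 500)
Y = np.mean(K[None, :] ** a[:, None] * L[None, :] ** b[:, None], axis=0)

X = np.column_stack([np.log(K), np.log(L), np.ones_like(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(coef[:2], coef[0] + coef[1])     # ~ (0.35, 0.65), summing to ~ 1
```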
The information-theoretic partition function approach actually has a remarkable self-similarity between macro (i.e. aggregate level) and micro (i.e. individual or individual firm level) — this self-similarity is behind the reason why Cobb-Douglas or diagrammatic ("crossing curve") models at the macro scale aren't obviously implausible.
Both the arguments of Temple and Fisher seem to rest on strong assumptions about economies constructed from clean, noiseless, abstract functions — and either a paucity or surfeit of imagination (I'm not sure). It's a kind of love-hate relationship with neoclassical economics — working within its confines to try to show that it's flawed. A lot of these results are cases of what I personally would call mathiness. I'm sure Paul Romer might think they're fine, but to me they sound like an all-too-earnest undergraduate math major fresh out of real analysis trying to tell us what's what. Sure, man, individual firms production functions are continuous and differentiable additive functions. So what exactly have you been smoking?
These constraints on production functions from Fisher and Temple actually remind me a lot of Steve Keen's definition of an equilibrium that isn't attainable — it's mathematically forbidden! It's probably not a good definition of equilibrium if you can't even come up with a theoretical case that satisfies it. Fisher and Temple can't really come up with a theoretical production function that meets all their constraints besides the trivial "all firms are the same" function. It's funny that Fisher actually touches on that in one of his footnotes (#31):
Honesty requires me to state that I have no clear idea what technical differences actually look like. Capital augmentation seems unduly restrictive, however. If it held, all firms would produce the same market basket of outputs and hire the same relative collection of labors.
But the bottom line is that these claims to have exhausted all possibilities are just not true! I get the feeling that people have already made up their minds which side of the CCC they stand on, and it doesn't take much to confirm their biases so they don't ask questions after e.g. Temple's two sector economy. That settles it then! Well, no ... as there might be more than two sectors. Maybe even three!
Resolving the Cambridge capital controversy with MaxEnt
I came across this 2018 NBER working paper from Baqaee and Farhi again today (on Twitter) after seeing it around the time it came out. The abstract spells it out:
Aggregate production functions are reduced-form relationships that emerge endogenously from input-output interactions between heterogeneous producers and factors in general equilibrium. We provide a general methodology for analyzing such aggregate production functions by deriving their first- and second-order properties. Our aggregation formulas provide non-parametric characterizations of the macro elasticities of substitution between factors and of the macro bias of technical change in terms of micro sufficient statistics. They allow us to generalize existing aggregation theorems and to derive new ones. We relate our results to the famous Cambridge-Cambridge controversy.
One thing that they do in their paper is reference Samuelson's (version of Robinson's and Sraffa's) re-switching arguments. I'll quote liberally from the paper (this is actually the introduction and Section 5) because it sets up the problem we're going to look at:
Eventually, the English Cambridge prevailed against the American Cambridge, decisively showing that aggregate production functions with an aggregate capital stock do not always exist. They did this through a series of ingenious, though perhaps exotic looking, "re-switching" examples. These examples demonstrated that at the macro level, "fundamental laws" such as diminishing returns may not hold for the aggregate capital stock, even if, at the micro level, there are diminishing returns for every capital good. This means that a neoclassical aggregate production function could not be used to study the distribution of income in such economies.
... In his famous "Summing Up" QJE paper (Samuelson, 1966), Samuelson, speaking for the Cambridge US camp, finally conceded to the Cambridge UK camp and admitted that indeed, capital could not be aggregated. He produced an example of an economy with "re-switching": an economy where, as the interest rate decreases, the economy switches from one technique to the other and then back to the original technique. This results in a non-monotonic relationship between the capital-labor ratio as a function of the rate of interest r.
... [In] the post-Keynesian reswitching example in Samuelson (1966). ... [o]utput is used for consumption, labor can be used to produce output using two different production functions (called "techniques"). ... the economy features reswitching: as the interest rate is increased, it switches from the second to the first technique and then switches back to the second technique.
I wrote a blog post four years ago titled "Resolving the Cambridge capital controversy with abstract algebra" which was in part tongue-in-cheek, but also showed how Cambridge, UK (Robinson and Sraffa) had the more reasonable argument. With Samuelson's surrender summarized above, it's sort of a closed case. I'd like to re-open it, and show how a resolution in my blog post renders the post-Keynesian re-switching arguments as describing pathological cases unlikely to be realized in a real system — therefore calling the argument in favor of the existence of aggregate production functions, and for Solow and Samuelson.
To some extent, this whole controversy is due to economists seeing economics as a logical discipline — more akin to mathematics — instead of an empirical one — more akin to the natural sciences. The pathological case of re-switching does in fact invalidate a general rigorous mathematical proof of the existence of aggregate production functions in all cases. But it is just that — a pathological case. It's the kind of situation where you should have to show some sort of empirical evidence that it exists before taking the impasse it presents to mathematical existence seriously.
If you follow through the NBER paper, they show a basic example of re-switching from Samuelson's 1966 paper. As the interest rate increases, one of the "techniques" becomes optimal over the other and we get a shift in capital to output and capital to labor:
Effectively, this is a shift in $\alpha$ in a production function
$$Y \sim K^{\alpha} L^{1-\alpha}$$
or more simply in terms of the neoclassical model in per-labor terms ($x \equiv X/L$)
$$y \sim k^{\alpha}$$
That is to say in one case we have $y \sim k^{\alpha_{1}}$ and $y \sim k^{\alpha_{2}}$ in the other. As the authors of the paper put it:
The question we now ask is whether we could represent the disaggregated post-Keynesian example as a version of the simple neoclassical model with an aggregate capital stock given by the sum of the values of the heterogeneous capital stocks in the disaggregated post-Keynesian example. The non-monotonicity of the capital-labor and capital-output ratios as a function of the interest rate shows that this is not possible. The simple neoclassical model could match the investment share, the capital share, the value of capital, and the value of the capital-output and capital-labor ratios of the original steady state of the disaggregated model, but not across steady states associated with different values of the interest rate. In other words, aggregation via financial valuation fails.
But we must stress that this is essentially one (i.e. representative) firm with this structure, and that across a real economy, individual firms would have multiple "techniques" that change in myriad ways — and there would be many firms.
The ensemble approach to information equilibrium (where we have a large number of production functions $y_{i} \sim k^{\alpha_{i}}$) recovers the traditional aggregate production function (see my paper here), but with ensemble average variables (angle brackets) evaluated with a partition function — the same $Z(\kappa)$ and ensemble average defined above:

$$\langle y \rangle \sim k^{\langle \alpha \rangle} \;\;\; \text{with} \;\;\; Z(\kappa) = \sum_{i} e^{- \kappa \alpha_{i}}$$

(see the paper for the details). This formulation does not depend on any given firm staying in a particular "production state" $\alpha_{i}$, and it is free to change from any one state to another in a different time period or at a different interest rate. The key point is that we do not know which set of $\alpha_{i}$ states describes every firm for every interest rate. With constant returns to scale, we are restricted to $\alpha$ states between zero and one, but we have no other knowledge available without a detailed examination of every firm in the economy. We'd be left with a uniform distribution over [0,1] if that is all we had, but we could (in principle) average the $\alpha$'s we observe and constrain our distribution to respect $\langle \alpha \rangle$ being some (unknown) real value in [0, 1]. That defines a beta distribution:
Getting back to the Samuelson example, I've reproduced the capital to labor ratio:
Of course, our model has no compunctions against drawing a new $\alpha$ from a beta distribution for any value of the interest rate ...
That's a lot of re-switching. If we have a large number of firms, we'll have a large number of re-switching (micro) production functions — Samuelson's post-Keynesian example is but one of many paths:
The ensemble average (over that beta-distribution above) produces the bolder blue line:
This returns a function that approximates a constant $\alpha$ as a function of the interest rate — an approximation which only gets better as more firms are added and more re-switching is allowed:
This represents an emergent aggregate production function that is smooth in the interest rate even though each individual production function is non-monotonic. The aggregate production function of the Solow model is in fact well-defined and does not suffer from the issues of re-switching unless the draw from the distribution is pathological (for example, all firms being the same — or, equivalently, a representative firm assumption).
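As a rough illustration of the aggregation argument (a sketch of mine, not the code behind the figures above), the following draws each firm's $\alpha$ from a beta distribution with a fixed mean at every interest-rate point and computes the implied aggregate exponent; the firm count, mean, concentration, and capital level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_alpha(n_firms=1000, mean_alpha=0.4, concentration=4.0,
                    n_rates=50, k=2.0):
    """Effective aggregate exponent when every firm re-switches freely.

    alpha_i ~ Beta(a, b) with a = m*c, b = (1 - m)*c, so E[alpha_i] = m.
    A fresh draw at every interest-rate point is maximal re-switching.
    """
    a, b = mean_alpha * concentration, (1.0 - mean_alpha) * concentration
    eff = np.empty(n_rates)
    for r in range(n_rates):
        alphas = rng.beta(a, b, size=n_firms)  # micro "production states"
        y = np.mean(k ** alphas)               # ensemble-average output
        eff[r] = np.log(y) / np.log(k)         # implied aggregate exponent
    return eff
```

The spread of the returned exponent across interest-rate points shrinks roughly like $1/\sqrt{n}$ in the number of firms: the aggregate flattens out even though every micro path re-switches.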
This puts the onus on the Cambridge, UK side to show that empirically such cases exist and are common enough to survive aggregation. However, if we do not know about the production structure of a sizable fraction of firms with respect to a broad swath of interest rates, we must plead ignorance and go with maximum entropy. As the complexity of an economy increases, we become less and less likely to see a scenario that cannot be aggregated.
Again, I mentioned this back four years ago in my blog post. The ensemble approach offers a simple workaround to the inability to simply add apples and oranges (or more accurately printing presses and drill presses). However, the re-switching example is a good one to show how a real economy — with heterogeneous firms and heterogeneous techniques — can aggregate into a sensible macroeconomic production function.
I am well aware of the Cobb-Douglas derangement syndrome associated with the Cambridge capital controversy that exists on Econ Twitter and the econoblogosphere (which is in part why I put that gif with the muppet in front of a conflagration on the tweets about this blog post ... three times). People — in particular post-Keynesian acolytes — hate Cobb-Douglas production functions. One of the weirder strains of thought out there is that a Cobb-Douglas function can fit any data arbitrarily well. This is plainly false, as
$$a \log X + b \log Y + c$$
is but a small subset of all possible functions $f(X, Y)$. Basically, this strain of thought is equivalent to saying a line $y = m x + b$ can fit any data.
A subset of this mindset appears to be a case of a logical error based on accounting identities. There have been a couple of papers out there (not linking) that suggest that Cobb-Douglas functions are just accounting identities. The source of this might be that you can locally approximate any accounting identity by a Cobb-Douglas form. If we define $X \equiv \delta X + X_{0}$ (and similarly for $Y$), then
$$X_{0} \left( \log (\delta X + X_{0}) + 1\right) + Y_{0} \left( \log (\delta Y + Y_{0}) + 1\right) + C$$
is approximately equal to $X + Y$ for $\delta X / X_{0} \ll 1$ and $\delta Y / Y_{0} \ll 1$ if
$$C \equiv - X_{0} \log X_{0} - Y_{0} \log Y_{0}$$
That is to say you can locally approximate an accounting identity by taking into account that log linear is approximately linear for small deviations.
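A quick numerical check of this approximation, with base values and deviations chosen arbitrarily for illustration:

```python
import numpy as np

X0, Y0, dX, dY = 100.0, 50.0, 1.0, 0.5  # deviations with dX/X0, dY/Y0 << 1
C = -X0 * np.log(X0) - Y0 * np.log(Y0)
approx = X0 * (np.log(dX + X0) + 1) + Y0 * (np.log(dY + Y0) + 1) + C
exact = (dX + X0) + (dY + Y0)           # the accounting identity X + Y
print(approx, exact)                     # ~151.49 vs 151.5
```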
It appears that some people have taken this $p \rightarrow q$ to mean $q \rightarrow p$ — that any Cobb-Douglas form $f(X, Y)$ can be represented as an accounting identity $X+Y$. That is false in general. Only the form above, under the conditions above, can be so transformed; a different Cobb-Douglas function cannot.
Another version of this thinking (from Anwar Shaikh) was brought up on Twitter. Shaikh has a well-known paper where he created the "Humbug" production function. I've reproduced it here:
I was originally going to write about something else here, but in working through the paper and reproducing the result for the production function ...
... I found out this paper is a fraud. Because of the way the values were chosen, the resulting production function has no dependence on the variation in $q$ aside from an overall scale factor. Here's what happens if you set $q$ to be a constant (0.8) — first "HUMBUG" turns into a line:
And the resulting production function? It lies almost exactly on top of the original:
It's not too hard to pick a set of $q$ and $k$ data that gives a production function that looks nothing like a Cobb-Douglas function by just adding some noise:
The reason can be seen in the table and relies mostly on Shaikh's choice of the variance in the $k$ values (click to enlarge):
But also, if we just plot the $k$-values and the $q$-values versus time, we have log-linear functions:
Is it any surprise that a Cobb-Douglas production function fits this data? Sure, it seems weird if we look at the "HUMBUG" parametric graph of $q$ versus $k$, but $k(t)$ and $q(t)$ are lines. The production function is smooth because the variance in $A(t)$ depends almost entirely on the variance in $q(t)$ so that taking $q(t)/A(t)$ leaves approximately a constant. The bit of variation left is the integrated $\dot{k}/k$, which is derived from a log-linear function — so it's going to have a great log-linear fit. It's log-linear!
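For anyone who wants to replicate this, here is a minimal sketch of the Solow (1957) growth-accounting procedure the humbug exercise is built on — recover $A(t)$ from the residual, then fit $\log(q/A)$ against $\log k$. The constant capital share and the variable names are simplifying assumptions of mine, not Shaikh's exact setup.

```python
import numpy as np

def humbug_fit(q, k, capital_share=0.2):
    """Solow-residual decomposition and Cobb-Douglas fit.

    q, k: time series of output per worker and capital per worker.
    capital_share: 1 - wage share, taken constant here for simplicity.
    """
    dlq, dlk = np.diff(np.log(q)), np.diff(np.log(k))
    dlA = dlq - capital_share * dlk                  # residual growth rate
    logA = np.concatenate([[0.0], np.cumsum(dlA)])   # A(0) normalized to 1
    logf = np.log(q) - logA                          # log f(k) = log(q/A)
    slope, intercept = np.polyfit(np.log(k), logf, 1)
    return slope, intercept
```

With a constant share $s$, $\log(q/A) = \log q(0) + s\,(\log k - \log k(0))$ exactly — the variation in $q$ cancels out of the fit, which is why replacing $q$ with a constant barely moves the "production function" above.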
Basically, Shaikh misrepresented the "HUMBUG" data as having a lot of variation — obviously nonsense by inspection, right?! But it's really just two lines with a bit of noise.
I was unable to see the article earlier, but apparently this is exactly what Solow said. Solow was actually much nicer (click to enlarge):
Solow:
The cute HUMBUG numerical example tends to bowl you over at first, but when you think about it for a minute it turns out to be quite straightforward in terms of what I have just said. The made-up data tell a story, clearer in the table than in the diagram. Output per worker is essentially constant in time. There are some fluctuations but they are relatively small, with a coefficient of variation about 1/7. The fact that the fluctuations are made to spell HUMBUG is either distraction or humbug. The series for capital per worker is essentially a linear function of time. The wage share has small fluctuations which appear not to be related to capital per worker. If you ask any systematic method or educated mind to interpret those data using a production function and the marginal productivity relations, the answer will be that they are exactly what would be produced by technical regress with a production function that must be very close to Cobb-Douglas.
Emphasis in the original. That's exactly what the graph above (and reproduced below) shows. Shaikh not only does not address this comment in his follow-up — he quotes only the last sentence of this paragraph and then doubles down on presenting the HUMBUG data as representative of "any data":
Yet confronted with the humbug data, Solow says: "If you ask any systematic method or any educated mind to interpret those data using a production function and the marginal productivity relations, the answer will be that they are exactly what would be produced by technical regress with a production function that must be very close to Cobb-Douglas" (Solow, 1957 [sic], p. 121). What kind of "systematic method" or "educated mind" is it that can interpret almost any data, even the humbug data, as arising from a neoclassical production function?
This is further evidence that Shaikh is not practicing academic integrity. Even after Solow points out that "Output per worker is essentially constant in time ... The series for capital per worker is essentially a linear function of time", he continues to suggest that "even the humbug data" is somehow representative of the universe of "any data" when it is in fact a line.
The fact that Shaikh chose to graph "HUMBUG" rather than this time series is obfuscation and in my view academic fraud. As of 2017, he continues to misrepresent this paper in an Institute for New Economic Thinking (INET) video on YouTube saying "... this is essentially an accounting identity and I illustrated that putting the word humbug and putting points on the word humbug and showing that I could fit a perfect Cobb-Douglas production function to that ..."
I did want to add a bit about how the claims about the relationship between Cobb-Douglas production functions and accounting identities elide the direction of implication. Cobb-Douglas implies an accounting identity holds, but the logical content of the accounting identity on its own is pretty much vacuous without something like Cobb-Douglas. In his 2005 paper, Shaikh elides the point (and also re-asserts his disingenuous claim about the humbug production function above).
The AMP-Foot 3, new generation propulsive prosthetic feet with explosive motion characteristics: design and validation
Towards Active Lower Limb Prosthetic Systems: Design Issues and Solutions
Pierre Cherelle1,
Victor Grosu1,
Manuel Cestari2,
Bram Vanderborght1 &
Dirk Lefeber1
In recent decades, rehabilitation has become a challenging context for mechatronic engineering. The state of the art shows that the field of prosthetics offers very promising perspectives to roboticists. Today's prosthetic feet tend to improve the amputee's walking experience by delivering the necessary push-off forces during walking. To this end, several new types of (compliant) actuators have been developed to fulfill the torque and power requirements of a sound ankle-foot complex with minimized power consumption. At the Vrije Universiteit Brussel, the Robotics and Multibody Mechanics research group puts a lot of effort into the design and development of new bionic feet. In 2013, the Ankle Mimicking Prosthetic (AMP-) Foot 2, as a proof of concept, showed the advantage of using the explosive elastic actuator, capable of delivering the full ankle torque (\(\pm 120\) Nm) and power (\(\pm 250\) W) with only a 60 W motor. In this article, the authors present the AMP-Foot 3, which uses an improved actuation method and two locking mechanisms for improved energy storage during walking. The article focuses on the mechanical design of the device and the validation of its working principle.
Over the past decades, researchers have studied pathological and non-pathological gait to understand the human ankle-foot function during walking. These efforts resulted in the development of new lower limb prosthetic devices aiming to raise the 3C-level (control, comfort and cosmetics) of amputees, each with slightly different characteristics. Thanks to technological advances in computer aided design (CAD) and mechatronics, challenges in this field have become an important source of interest for roboticists. Today's state of the art in propulsive transtibial prostheses consists of no more than 23 devices, which can be categorized based on their actuation principle as presented in [1]. Of these, 16 prototypes have been developed in the USA [2–5], 5 in Belgium [6–8] and 1 in China [9]. Pioneers in the field are undoubtedly the research teams of Herr et al. (MIT—USA) [10–12], Sugar et al. (ASU—USA) [13–15] and Goldfarb et al. (Vanderbilt) [16, 17]. Two companies have emerged from these research centers, namely iWalk and SpringActive, bringing their know-how to the American market. Currently, most bionic feet are still at a research level, but they show promising results and offer a preview of tomorrow's commercial prosthetic devices.
At the Vrije Universiteit Brussel, a new type of actuation system has been developed for use in ankle-foot prostheses, named the explosive elastic actuator (EEA) [1]. The EEA consists of a spring behind a locking mechanism placed in series with a series elastic actuator (SEA). This catapult-like mechanism is based on the use of stored energy to hurl a payload, without the use of an explosive. The EEA therefore has the advantage of storing energy and releasing it when needed. This type of explosive motion is widely used in e.g. jumping [18], kicking [19], throwing [20] and hammering robots [21]. The torque requirements of the EEA are similar to those of the SEA in prosthetic feet. But by using a locking mechanism, the motor can provide its work over a longer period of time (typically 2–3 times longer for a prosthetic ankle), reducing the actuator's speed and power by the same amount. This new type of actuation has proven its effectiveness with the Ankle Mimicking Prosthetic (AMP-) Foot 2 [6, 22, 23].
a Picture of the AMP-Foot 3. b The AMP-Foot 3 essential parts
In this article, the authors present their latest research prototype, the AMP-Foot 3, shown in Fig. 1. The prosthesis' design is described and results of experiments with an amputee are presented. The novelty of this work lies in the use of two locking mechanisms to improve the energy storage of the device compared to its predecessor, the AMP-Foot 2. Also, unlike in the previous prototype, no cables have been used. Instead, a compliant crank-slider mechanism has been chosen to transmit the propulsion forces and torques to the ankle of the device. First, the concept behind the AMP-Foot 3, its working principle, mechanical design and electronics design are described in depth. Next, the experimental validation of the prosthesis is presented by means of treadmill experiments with an amputee. Conclusions and future work close the article.
The AMP-Foot 3—development
In this section the development and working principle of the AMP-Foot 3 is presented.
An energy efficient concept
The main objective of this research is the implementation of the 'principle of optimal power distribution' [6] in a prosthetic foot, i.e., to retrieve as much energy as possible from the gait and to incorporate an electric actuator with minimized power consumption. As shown in [6] and [23], the required output power can be decreased significantly by using the explosive elastic actuation principle. Unlike a regular SEA, the torque output can be provided over a longer lapse of time, thereby decreasing the electric drive's speed and thus its power requirements.
Obviously, the AMP-Foot 3's predecessor is the AMP-Foot 2. But the new prototype is not just a redesign. The authors have improved its mechanics and functionality and decreased its power requirements by adding an extra, new locking mechanism to the system. For more information about the mechanical design and working principle of the AMP-Foot 2, the authors refer to [6] and [23].
a, b The AMP-Foot 3 prototype schematics
In Figs. 1 and 2, the essential parts of the AMP-Foot 3 are represented. The device consists of four bodies pivoting around a common axis (the ankle axis, point C): the leg, the foot and two lever arms (depicted as lever arm 1 and 2). The motor, gearbox and ballscrew assembly are fixed to the leg. The system also comprises two spring sets: a plantarflexion (PF) and a push-off (PO) spring set. The PF spring set is placed between the foot and the slider of a crank-slider mechanism (point \(A^{\prime}\)) and is used to store and release motion energy. Lever arm 1 represents the crank of the latter, while the connection rod is placed between the lever (point \(B^{\prime}\)) and the slider (point \(A^{\prime}\)). It is through this compliant crank-slider mechanism that forces from the leg and motor are transmitted to the foot. The reason for choosing a linkage mechanism over a cable-and-pulley system (as used in the AMP-Foot 2) is to improve the reliability of the system. The push-off spring, on the other hand, is placed in a tube between the motor-ballscrew assembly and a fixed point (D) on lever arm 2. The main idea behind the AMP-Foot 3 is to store motion energy in the PF springs while a low power actuator compresses the PO springs without affecting the ankle joint. When push-off is needed, the energy stored in the PO spring is released and added to the energy stored in the PF spring assembly. This sudden addition of energy is fed to the ankle joint and thus provides the propulsive forces and torques desired during walking. As mentioned, the AMP-Foot 3 makes use of two locking mechanisms. Locking mechanism 1 is a resettable overrunning system providing a one-way clutch connection between the two lever arms. This locking mechanism is used to maximize the stored motion energy during midstance compared with the AMP-Foot 2. A second advantage of this locking mechanism is a better mimicking of the human gait characteristics by allowing a change in the PF spring rest position after the foot is stabilized and the ankle enters its dorsiflexion phase. Locking mechanism 2 provides a rigid connection between the leg and the lever arm when energy is injected into the system by the electric drive. Comparable to the one used in the AMP-Foot 2, its role is to disengage the electric actuator from the ankle joint while loading the PO spring. More information on the locking mechanisms' working principles is given further in the text. To maintain a consistent notation throughout the article, the symbols and names used in Figs. 1 and 2 are defined as:
$$\begin{aligned} L_1 &= \text{distance between ankle axis (C) and point } B^{\prime}. \nonumber \\ L_2 &= \text{distance between point } A^{\prime} \text{ and point } B^{\prime}. \nonumber \\ L_3 &= \sqrt{(L_1+L_2)^2-h^2} \end{aligned}$$
$$\begin{aligned} h &= \text{distance between ankle axis (C) and the origin O.}\nonumber \\ L_4 &= \text{distance between ankle axis (C) and point D.}\nonumber \\ \theta &= \text{angle between foot and leg.}\nonumber \\ \xi &= \text{angle between lever arm 1 and 2.}\nonumber \\ \alpha_0 &= \text{angle between lever arm 1 and foot when the crank-slider is not loaded.}\nonumber \\ \alpha &= \theta + \xi = \text{angle of rotation of lever arm 1.} \end{aligned}$$
$$\begin{aligned} \psi &= \text {angle between lever arm 1} \,\, \text {and connection rod.}\\ \phi &= \text {angle between connection rod and slider.}\\ k_{PF} &= \text {plantarflexion spring assembly stiffness.} \\ k_{PO} &= \text {push-off spring stiffness.} \\ \vec {F_v}&= \text {force exerted by the plantarflexion spring.}\\ \vec {T_A}&= \text {torque applied to the ankle joint.} \end{aligned}$$
The AMP-Foot 3 prototype simulated angle-torque characteristic compared to the reference data [24] and the AMP-Foot 2 simulated data. The shaded area represents the extra energy that can be stored thanks to the use of locking mechanism 1 compared to the AMP-Foot 2 prototype. This area represents approximately \(5\,J\)
A detailed description of the behavior of the AMP-Foot 3 using the principle of optimal power distribution is given by illustrating one complete gait cycle. To do this, one gait cycle is divided into its 5 main phases (shown in Fig. 3):
$$\begin{aligned} \begin{array}{ll} \text{Phase 1:} & \text{From initial contact (IC) to foot flat (FF).}\\ \text{Phase 2:} & \text{From FF to heel off (HO).}\\ \text{Phase 3:} & \text{At heel off (HO).}\\ \text{Phase 4:} & \text{From HO to toe off (TO).}\\ \text{Phase 5:} & \text{Swing phase.}\\ \end{array} \end{aligned}$$
The gait cycle starts with a controlled plantarflexion from initial contact (IC) to foot flat (FF), produced by muscles such as the Tibialis Anterior. This is followed by a controlled dorsiflexion phase ending in push-off at heel off (HO), during which propulsive forces are generated by the calf muscles. During late stance, the torque produced by the ankle decreases until the leg enters the swing phase at toe off (TO). Once the leg is engaged in the swing phase, the foot resets to prepare for a new step. The working principle of the prosthetic device during each phase is explained below.
From IC to FF
A step is initiated by touching the ground with the heel. During this phase the foot rotates with respect to the leg until \(\theta\) reaches approximately \(-5\)°. During this phase lever arm 2 is fixed to the leg. The resettable one-way clutch placed between lever arm 1 (noted as \(L_1\)) and 2 (depicted as \(L_4\)) allows the leg to move backwards (up to a maximum of \(12\)°) without moving lever arm 1. Therefore \(\xi\), the angle between lever arm 1 and 2, increases. The small negative torque required in this phase is provided by two small tension springs attached between the leg and the foot (not shown in the figure). Because the range of motion is small (a few degrees) and the pretension of these small tension springs is high, their torque characteristic is highly linear and they can therefore be modeled as a torsional spring with stiffness \(k_T = \pm 50\) Nm/rad. The torque is then calculated as:
$$\begin{aligned} T_A = k_T \theta \end{aligned}$$
During this period the electrical drive starts loading the PO spring. Since the motor is attached to the leg and the lever arm is locked to the leg, the PO spring is loaded without delivering torque to the ankle joint. Therefore the prosthesis is not affected by the forces generated by the actuator.
From FF to heel off (HO)
When the foot stabilizes at FF, the leg moves from approximately \(\theta = -5\)° to +10°. Once the leg starts moving in this direction, the resettable overrunning mechanism is engaged instantaneously, thereby fixing lever arm 1 to lever arm 2 (which itself is fixed to the leg by the second locking mechanism). Because of this, the two tension springs elongated previously in phase 1 are fixed and therefore no longer provide any torque to the ankle joint. One can say that their action is removed from the system (while they are still elongated). These springs remain in this state until the overrunning mechanism is disengaged at the beginning of the swing phase. The energy stored in these springs then serves to reset the ankle-foot prosthesis. The lever follows the movement of the leg and torque is generated at the ankle joint by actuating the compliant crank-slider mechanism. Moving the leg forward elongates the plantarflexion (PF) springs. Thanks to the use of locking mechanism 1, motion energy is stored in the PF springs as soon as the ankle goes into dorsiflexion at approximately \(-5\)° (depending on the walking pattern of the user). This corresponds on average to an additional energy storage of between 5 and 10 J compared to the AMP-Foot 2, in which the motion energy of the mid-stance phase could only be stored from 0°. During this phase, based on Fig. 2, the torque at the ankle is given by Eq. (4).
$$\begin{aligned} T_A = L_1 |\vec {AA{^{\prime}}}| k_{pf} \cos {\phi } \sin {\psi } \end{aligned}$$
in which:
$$\begin{aligned} \sin {\psi }&= \sqrt{1-\left(\frac{\vec {CB^{\prime}} \cdot \vec {B^{\prime}A^{\prime}}}{L_1L_2}\right)^2 } \end{aligned}$$
$$\begin{aligned} \vec {OA}&= (-L_3 , 0) \end{aligned}$$
$$\begin{aligned} \vec {OB}&= (-L_1 \sin {\alpha _0} ,\quad h - L_1 \cos {\alpha _0})\end{aligned}$$
$$\begin{aligned} \vec {OC}&= (0 , h)\end{aligned}$$
$$\begin{aligned} \vec {OA^{\prime}}&= (-L_1 \sin {(\alpha _0-\alpha )} - \sqrt{ L_2^2 - (h - L_1 \cos {(\alpha _0-\alpha )})^2} , 0) \end{aligned}$$
$$\begin{aligned} \vec {OB^{\prime}}& = (-L_1 \sin {(\alpha _0-\alpha )} ,\,\, h - L_1 \cos {(\alpha _0-\alpha )})\end{aligned}$$
$$\begin{aligned} |\vec {AA^{\prime}}|&= L_3-L_1 \sin {(\alpha _0-\alpha )} - L_2 \cos {\phi } \end{aligned}$$
During this phase the motor is still injecting energy into the system by loading the PO spring without affecting the behavior of the device.
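To make the crank-slider geometry concrete, the following sketch evaluates Eqs. (4)–(11) numerically. It is a sketch of mine: angles are in radians, lengths and stiffness are in consistent units, and the parameter values would come from Table 1.

```python
import numpy as np

def ankle_torque(alpha, alpha0, L1, L2, L3, h, k_pf):
    """Ankle torque T_A from the compliant crank-slider (Eqs. 4-11)."""
    a = alpha0 - alpha
    proj = np.sqrt(L2**2 - (h - L1*np.cos(a))**2)     # rod projection on slider axis
    Ap = np.array([-L1*np.sin(a) - proj, 0.0])        # Eq. (9): slider point A'
    Bp = np.array([-L1*np.sin(a), h - L1*np.cos(a)])  # Eq. (10): crank tip B'
    C  = np.array([0.0, h])                           # Eq. (8): ankle axis
    cos_phi = proj / L2                               # rod vs. slider axis
    CBp, BpAp = Bp - C, Ap - Bp
    sin_psi = np.sqrt(1.0 - (np.dot(CBp, BpAp) / (L1*L2))**2)  # Eq. (5)
    AAp = L3 - L1*np.sin(a) - L2*cos_phi              # Eq. (11): PF elongation |AA'|
    return L1 * AAp * k_pf * cos_phi * sin_psi        # Eq. (4)
```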
At heel off (HO)
Because the angle between the PO spring and the lever arm is fixed at \(90\)°, the torque exerted by the PO spring (no pretension) on the lever arm is given by
$$\begin{aligned} T_{EEA} = k_{PO}l_2L_4 \end{aligned}$$
with \(T_{EEA}\) representing the torque applied to lever arm 2 by the EEA and \(l_2\) the compression of the PO spring.
The torque \(T_A\) provided by the plantarflexion spring on lever arm 1 is given by Eq. (4). At HO, locking mechanism 2 is forced to unlock and all the energy stored in the PO spring is fed to the system. Since \(T_A \le T_{EEA}\), both PF and PO springs tend to rotate the lever arm by an angle \(\chi\) to a new equilibrium position. \(T_A\) and \(T_{EEA}\) evolve to new values \(T^{\prime}_A\) and \(T^{\prime}_{EEA}\) such that \(T^{\prime}_A = T^{\prime}_{EEA} = T^{\prime}\) with \(T^{\prime} \ge T_A\) and \(T^{\prime} \le T_{EEA}\). The torque at the ankle is then calculated with Eq. (4), taking into account the extra angle \(\chi\). In other words, \((\alpha _0-\alpha )\) becomes \((\alpha _0-\alpha - \chi )\).
The effect of this is a virtually instantaneous increase in torque and decrease in stiffness of the ankle joint. This is shown in Fig. 3 which represents the torque-angle characteristic of an intact ankle according to gait analysis conducted by Winter [24] and of the simulated AMP-Foot 3 behavior.
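The new equilibrium angle \(\chi\) can be found numerically. A minimal sketch, assuming the PO spring compression relaxes roughly linearly with the lever rotation (\(l_2(\chi) \approx l_2 - L_4 \chi\), a small-angle assumption of mine) and reusing a function that evaluates Eq. (4):

```python
def equilibrium_chi(T_ankle, alpha, l2, k_po, L4, chi_max=0.35, iters=60):
    """Bisect for chi where T'_A(chi) = T'_EEA(chi) after unlocking.

    T_ankle(alpha_eff) returns the PF-side torque of Eq. (4); passing
    alpha + chi puts (alpha0 - alpha - chi) inside the equation. Assumes
    T_EEA >= T_A at chi = 0 (as in the text) and the reverse at chi_max.
    """
    def residual(chi):
        t_eea = k_po * max(l2 - L4 * chi, 0.0) * L4  # Eq. (12), relaxed spring
        return t_eea - T_ankle(alpha + chi)
    lo, hi = 0.0, chi_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```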
From HO to toe off (TO)
In the last phase of stance, the torque decreases until toe off (TO) occurs at \(\theta = -20\)°. Since the plantarflexion and push-off springs are now connected in series, the rest position of the system has changed according to the elongation and rest length of the PO spring. As a result, a new equilibrium position is set at approximately \(\theta = -20\)°. The actuator is still working during this phase.
Swing phase
After TO, the leg enters the so-called swing phase in which the whole system is reset, including locking mechanism 1. How this is achieved is explained further in the text. While the motor turns in the opposite direction to bring the ballscrew mechanism back to its initial position, the two tension springs used in phase 1 are reactivated and their stored energy is used to set \(\theta\) back to 0° and to close the four bar linkage locking mechanism (locking mechanism 2). At this moment, the device is ready to undertake a new step.
In this section a detailed description of the mechanical design of the AMP-Foot 3 is given. First, the design criteria and general parameters are given. Then the EEA is presented, followed by an explanation of the two locking mechanism designs.
Design criteria and general parameters
Table 1 Lever arm and springs
The AMP-Foot 3 prototype
A 75 kg subject walking at normal cadence on level ground produces a maximum joint torque at the ankle of approximately 120 Nm [24]. This has been taken as a criterion. Moreover, the ankle articulation has a moving range from approximately \(+10\)° at maximal dorsiflexion to \(-20\)° at maximal plantarflexion. Therefore a moving range of \(-30\)° to \(+20\)° has been chosen for the system to fulfil the requirements of the ankle anatomy. The foot is made to match a European size between 41 and 45 with an ankle height of approximately 80 mm. In Fig. 4 the dimensions of the AMP-Foot 3 are depicted. With this design, the prosthesis fits in a shoe, which is significantly more comfortable for the amputee. The connection with the socket of the subject is provided by an Otto-Bock pyramid adaptor. The device has a weight of approximately 3 kg (not including batteries, which are currently worn at the hip), which is still acceptable according to the subject of the clinical trials. The lengths of the lever arms and the spring stiffnesses used in Fig. 2 are given in Table 1.
The explosive elastic actuator (EEA)
Table 2 Motor and transmissions
To fulfil the requirements of a sound ankle, a motor with a high power-and-strength-to-weight ratio and high mechanical efficiency is needed. Based on peak torque and power estimation, a Maxon ECi-40 motor (50 W) was chosen with its corresponding gearbox and ballscrew assembly, as described in Table 2. The placement of the motor and its necessary electronics has been chosen to optimise the compactness of the system.
Locking mechanisms
As mentioned before, the system comprises two locking mechanisms: a resettable one-way clutch and a four bar linkage locking mechanism. Both are critical to the proper functioning of the device.
Locking mechanism 1 The novelty of the AMP-Foot 3 prototype lies in the design and use of this locking mechanism. To enable a change in the rest position of the plantarflexion spring during the first phase of gait (from IC to FF), a resettable continuous one-way clutch has been developed to decouple the two lever arms. The locking mechanism is based on the well-known freewheel principle, consisting of spring-loaded steel rollers inside a hardened cylinder. Rotating in one direction, the rollers lock with the outer race, making it rotate in unison. When rotating in the other direction, the steel rollers slip inside the cylinder without transmitting torque. In addition, a lever is placed next to the clutch offering the possibility to push the rollers against the springs, disengaging the clutch and allowing it to rotate freely in both directions. However, it should be noted that an energy efficient disengagement is only possible when the rollers are not wedged in the cylinder. As such, the presented resettable clutch mechanism is a rotary, continuous, one-way lock without backlash, with the possibility to be disengaged (and reset) when unloaded (at the very beginning of the swing phase). These features completely fit the requirements of the AMP-Foot 3 prototype. To ensure proper unlocking, a servomotor in series with a compression spring is attached to the reset lever of the clutch. During the gait (when the locking mechanism is loaded) the spring is compressed until the servomotor reaches a singular position. The principle is actually a small-scale EEA. Once the load is removed from the clutch, and because the spring is compressed, the lock is disengaged instantaneously. This overrunning clutch is designed to withstand up to 160 Nm of torque. Advantages of using this mechanism are that more energy can be stored in the PF spring assembly during mid-stance and that it can adapt naturally to different walking speeds and slopes. Disadvantages are the extra weight and volume.
Locking mechanism 2 The second locking mechanism uses the same principle as the one used in the AMP-Foot 2. This mechanism is placed between the leg and the second lever arm in order to decouple the series elastic actuator (SEA) from the ankle joint. Because of this, it must be able to withstand high forces while being as compact and lightweight as possible. The crucial and challenging part is that the mechanism must be unlocked while bearing its maximum load and, last but not least, this unlocking must require a minimum of energy. Fortunately, the lever arm has to be locked to the leg at a fixed angle. Given these requirements, it was chosen to work with a four bar linkage moving in and out of a singular position. This principle has already proved its effectiveness in [6]. However, unlike in the AMP-Foot 2, the unlocking of the four bar linkage is not triggered by a servo motor. This time unlocking happens by moving the leg forward against a mechanical stop. This mechanical stop can be positioned such that the unlocking angle can be adapted. This way, the authors have shown that unlocking, even under maximum load, can be achieved from the motion of the user.
The AMP-Foot 3 is equipped, at the foot, with a custom-made load cell which allows a force measurement with a resolution of \(\pm 5\) N, and the elongation of the PO springs is measured with a linear potentiometer. To measure the position of the lever arm and the leg with respect to the foot, two absolute magnetic encoders (Austria Micro Systems AS5055) are used with a resolution of \(\pm 0.08\)°. While the magnets of the encoders are glued to the ankle axis (which is fixed to lever arm 2) and the leg, the two Hall sensors are fixed on the foot. As a result, the resulting torque at the ankle can be calculated using the mathematical model of the mechanical system discussed before. To detect the important triggers during the stance phase (IC, FF, HO, TO), two force sensing resistors (FSR) are placed on the foot sole: one at the heel and one at the toes. These triggers are used to control the motor and to unlock locking mechanism 1, as sketched below. A current sensor is also used to measure the current sent to the motor. This information serves essentially in the low level control of the device. In addition, a 6 DOF IMU has been incorporated in the foot for future control perspectives.
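As an illustration of how the two FSR signals could yield these triggers, here is a hypothetical threshold-based detector; the thresholds, normalization, and lack of debouncing are illustrative simplifications of mine, not the actual firmware.

```python
def gait_events(heel, toe, thresh=0.5):
    """Yield (sample index, event) for IC, FF, HO and TO transitions.

    heel, toe: iterables of normalized FSR readings in [0, 1].
    """
    prev_h = prev_t = False
    for i, (h_raw, t_raw) in enumerate(zip(heel, toe)):
        h, t = h_raw > thresh, t_raw > thresh
        if h and not prev_h:
            yield i, "IC"   # heel strikes the ground: initial contact
        if h and t and not (prev_h and prev_t):
            yield i, "FF"   # heel and toe both loaded: foot flat
        if prev_h and not h and t:
            yield i, "HO"   # heel lifts while toe is loaded: heel off
        if prev_t and not t and not h:
            yield i, "TO"   # toe lifts last: toe off, swing begins
        prev_h, prev_t = h, t
```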
Custom made microcontroller board
The electronics of the prosthesis consist of a Maxon Escon controller, that handles the low level control of the motor, and a custom made microcontroller board (shown in Fig. 5) based on the Atmel SAM3X8E ARM Cortex-M3 CPU managing the high level control and gait detection. All the data from the sensory network are recorded on an SD card.
The 'optimal power distribution' principle has already shown its impact on the simplicity of the control of the prosthesis with the AMP-Foot 2 prototype [6, 23]. Since the output axis of the actuator does not directly control the ankle axis, a very simple control strategy can be used. Currently, the Maxon ESCON controller only uses a PID current loop. In addition, the high level control detects walking patterns of the subject and, based on these, sends the appropriate information to the ESCON controller. For the experiments conducted in this article, the current value sent to the motor controller is fixed and corresponds to the approximate requirements of walking on level ground at the subject's self-selected speed. Future control perspectives are mentioned in the concluding section.
Table 3 Battery specifications
As a power source, an oversized battery has been used during the experiments to avoid any risk of power failure. The battery specifications are listed in Table 3.
The AMP-Foot 3—validation
In this section, the authors present the captured data of an amputee walking with the AMP-Foot 3 prosthesis.
The AMP-Foot 3 prototype was tested with Mr. A. As the subject is a transfemoral amputee, he used his own knee prosthesis (Össur Mauch Knee) together with the AMP-Foot 3. For the validation of the device, three experiments were conducted. In the first two experiments, Mr. A. was asked to walk on a treadmill at self-selected speed with his own prosthesis (Össur Modular III) and with the AMP-Foot 3 in passive mode (without actuation). The third experiment was identical but this time with actuation of the prosthesis, and thus push-off generation.
During the first experiment, Mr. A was asked to walk at self-selected speed with his own prosthesis in order to compare with his self-selected speed wearing the AMP-Foot 3. The subject appeared to feel most comfortable at a speed of about 3.5 km/h. Then the same experiment was repeated with the AMP-Foot 3 in its passive mode (meaning the electric motor was not used) and showed an improvement of 0.5 km/h, resulting in a self-selected speed of 4.0 km/h. According to Mr. A., he felt more comfortable while walking thanks to the change in rest position of the PF spring in the first phases of gait (after FF—due to locking mechanism 1) and the fact that the AMP-Foot 3 is an efficient energy storing and returning (ESR) foot when used in passive mode, compared to his own Modular III prosthesis. Indeed, this locking mechanism presents interesting assets such as passive self-adaptation to different walking speeds and slopes, which our subject noticed rapidly. However, this article only focuses on the validation of the AMP-Foot 3 concept prototype. The fact that the AMP-Foot 3 can be used in passive mode remains a very interesting asset should the battery be discharged. In such a situation the prosthesis can still be used in a safe way, but without producing extra propulsive forces for the wearer.
Time-based data of level ground walking at 4.7 km/h with the AMP-Foot 3. a Ankle and lever angle vs. time. b Ankle torque vs. time
Torque-angle characteristic of the AMP-Foot 3 while walking at self selected speed (4.7 km/h)
Again the same experiment was repeated, but this time with actuation, which revealed a comfortable self-selected speed of 4.7 km/h. In Fig. 6, the time-based data of level ground walking at 4.7 km/h is shown. Fig. 6a represents the ankle and lever angle of the AMP-Foot 3 and Fig. 6b the deployed ankle torque while walking. Mr. A had a step length of approximately 1.5 m while walking on a treadmill. It can be noticed that the subject has a wide plantarflexion angle (on average \(-10\)°) during the 'HS to FF' phase compared to the reference data [24] (approximately \(-5\)°). This explains why Mr. A. particularly appreciates the change in rest position of the PF spring in this first phase through the action of locking mechanism 1. One can also notice that while loading the PF spring in midstance, the lever arm and ankle angle differ slightly. This is due to play in the four bar linkage which locks both moving parts. However, it can be seen that the lever arm angle is slightly larger than the ankle angle, which means that the PO spring assembly produces more torque on the lever than the PF spring. This is a necessary condition to provide push-off to the amputee. At the end of mid-stance, the energy stored in the PO spring is released by unlocking the four bar locking mechanism. The lever therefore finds a new equilibrium position. In Fig. 7 the corresponding torque characteristic of the AMP-Foot is shown. During the experiments it was noted that during some steps no extra power was provided. This is due to the fact that the four bar locking mechanism did not unlock itself. Unlike in the AMP-Foot 2, the unlocking is done in a passive way in the AMP-Foot 3. However, after using the prosthesis for approximately 30 min, Mr. A. understood its way of working better and started to adapt himself to the proper use of the AMP-Foot prototype. As a matter of fact, changing from a passive, non-articulated carbon prosthesis to an articulated, powered system requires serious adaptation by the wearer.
Time-based data of level ground walking at self selected speed (4.7 km/h) with the AMP-Foot 3 during one step. a Ankle, lever angle and PF spring force vs. time. b Motor ball nut displacement and current consumption vs. time. c Ankle torque vs. ankle angle. d Electrical and mechanical power vs. time
In Fig. 8, a representative single step of level ground walking at self-selected speed (4.7 km/h) with the AMP-Foot 3 is shown. Figure 8a represents the ankle angle, lever arm angle and the PF force during one stride. Because of the mechanical design of locking mechanism 1 (acting between the two lever arms), it is seen that the lever does not follow the ankle angle at the very beginning of the gait cycle. This explains the difference between the lever angle and ankle angle during the dorsiflexion phase. They follow each other until the PO springs are tensioned and released. At push-off the two angles show major differences until the system is reset during the swing phase, bringing both the foot and the lever to approximately the same angle value. In Fig. 8b the motor displacement and current consumption are shown. It can be seen that the motor compresses the PO spring to approximately 11 mm while the motor consumes up to approximately 6 A. When the four bar linkage is unlocked, the motor's ballnut moves rapidly to 15 mm while the motor current decreases. Figure 8c shows the torque characteristic of the corresponding step. As noticed before, it can be seen that Mr. A. has a wide plantarflexion angle before FF occurs. Furthermore, it is clear that the torque-angle characteristic traces a loop followed anticlockwise, which indicates energy production. The maximum plantarflexion angle at the end of stance goes to approximately \(-17\)° before the toes are lifted from the ground and the AMP-Foot enters the swing phase. During swing, the complete system undergoes a hardware reset to prepare for the next step. To close the validation of the AMP-Foot 3, the electrical and mechanical power of the device is shown in Fig. 8d. From the mechanical point of view it is clear that the AMP-Foot 3 meets the needs of an amputee when considering Winter's gait analysis as reference data [24]. From the electrical point of view it can be seen that the electric power increases while compressing the PO spring. At maximum compression a peak power of slightly less than 100 W is reached. It should be noted, however, that the RMS power is about 55.5 W. As explained before, the main idea is to provide the power during the complete stance phase, which is not exactly followed here. The reason for this is limitations imposed by the manufacturer of the Maxon ESCON controllers. Better tuning of these low level controllers may improve the power consumption of the device. During the one-step example shown in Fig. 8, integration of the mechanical power curve shows that approximately 13 J was stored in the PF spring assembly during early stance and that about 26 J of energy was delivered at push-off, which corresponds to the requirements of a sound ankle.
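The quoted energy figures follow from integrating the mechanical power curve, split by sign. A minimal sketch (the array names are assumptions):

```python
import numpy as np

def stride_energies(t, p_mech):
    """Absorbed vs. delivered mechanical energy over one stride.

    Negative ankle power ~ energy stored in the PF springs during stance;
    positive power ~ energy returned at push-off.
    """
    dt = np.diff(t)
    p_mid = 0.5 * (p_mech[1:] + p_mech[:-1])         # trapezoidal rule
    absorbed = -np.sum(np.minimum(p_mid, 0.0) * dt)  # ~13 J in this example
    delivered = np.sum(np.maximum(p_mid, 0.0) * dt)  # ~26 J at push-off
    return absorbed, delivered
```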
Conclusions and future work
Picture of Mr. A. wearing the AMP-Foot 3
Walking sequence with the AMP-Foot 3
In this article, the authors have proposed a new design of an energy efficient powered transtibial prosthesis mimicking able-bodied ankle behavior, the AMP-Foot 3, combining explosive elastic actuation with an extra locking mechanism. The innovation of this study is to gather energy from motion during controlled dorsiflexion with a PF spring while storing energy produced by a low power electric motor in a PO spring. This energy is then released at a favorable time for push-off thanks to the use of a locking system. The AMP-Foot 3 mechanical design is presented and the prototype is validated by means of experiments with an amputee (Figs. 9, 10). It can be concluded that the AMP-Foot 3 is capable of providing a 75 kg amputee with the propulsive forces and torques of a sound ankle thanks to the use of the EEA. Although its mechanical properties showed positive results, its control (low and high level) needs to be improved to decrease the overall power consumption and to accommodate different functions. It is noted that the average power produced by the AMP-Foot 3 is only 55.5 W. A drawback of the system is its weight of approximately 3 kg, which is nevertheless still acceptable for a prosthetic foot. Future work will consist of improving its low level control and adding a multi-functional high level control and gait detection system. The potential benefits of using the extra locking system to provide automatic adaptation to different walking speeds and slopes will also be further analyzed.
AMP-Foot:
Ankle Mimicking Prosthetic Foot
3C:
control, comfort and cosmetics
EEA:
explosive elastic actuator
SEA:
series elastic actuator
PF:
plantarflexion
PO:
push-off
FF:
foot flat
HO:
heel off
TO:
toe off
ESR:
energy storing and returning
RMS:
root mean square
Cherelle P, Mathijssen G, Wang Q, Vanderborght B, Lefeber D. Advances in propulsive bionic feet and their actuation principles—a review study. Adv Mech Eng. 2014;6:984046.
Caputo JM, Collins SH. A universal ankle-foot prosthesis emulator for experiments during human locomotion. J Biomech Eng. 2014;136(3):1–28.
Klute GK, Czerniecki JM, Hannaford B. Artificial muscles: actuators for biorobotic systems. Int J Robot Res. 2002;21(4):295–309.
Chen B, Zheng E, Fan W, Liang T, Wang Q, Wei K, Wang L. Locomotion mode classification using a wearable capacitive sensing system. IEEE Trans Neural Syst Rehabil Eng. 2013;21(5):744–55.
Bergelin BJ, Voglewede PA. Design of an active ankle-foot prosthesis utilizing a four-bar mechanism. J Mech Design. 2012;134:061004.
Cherelle P, Grosu V, Matthys A, Vanderborght B, Lefeber D. Design and validation of the ankle mimicking prosthetic (amp-) foot 2.0. IEEE Trans Neural Syst Rehabil Eng. 2014;22(1):138–48.
Geeroms J, Flynn L, Jimenez-Fabian R, Vanderborght B, Lefeber D. Ankle-knee prosthesis with powered ankle and energy transfer for cyberlegs \(\alpha\)-prototype. IEEE Int Conf Rehabil Robot. 2013:6650352. doi:10.1109/ICORR.2013.6650352.
Versluys R, Desomer A, Lenaerts G, Pareit O, Vanderborght B, der Perre GV, Peeraer L, Lefeber D. A biomechatronical transtibial prosthesis powered by pleated pneumatic artificial muscles. Int J Modell Identif Control. 2008;4(4):1–12.
Zhu J, Wang Q, Wang L. On the design of a powered transtibial prosthesis with stiffness adaptable ankle and toe joints. IEEE Trans Ind Electron. 2013;61(9):4797–807.
Au S, Berniker M, Herr H. Powered ankle-foot prosthesis to assist level-ground and stair-descent gaits. Neural Networks. 2008;21:654–66.
Au SK, Weber J, Herr H. Powered ankle-foot prosthesis improves walking metabolic economy. IEEE Trans Robot. 2009;25(1):1–16.
Au SK, Herr H. Powered ankle-foot prosthesis. IEEE Robot Autom Mag. 2008;15:52–9.
Hitt JK, Sugar TG, Holgate M, Bellman R. An active foot-ankle prosthesis with biomechanical energy regeneration. J Med Devices. 2010;4:011003.
Hollander KW, Ilg R, Sugar TG. Design of the robotic tendon. In: Design of medical devices conference. 2005. p. 1–6.
Hitt J, Sugar T, Holgate M, Bellman R, Hollander K. Robotic transtibial prosthesis with biomechanical energy regeneration. Ind Robot Int J. 2009;36(5):441–7.
Sup F, Bohara A, Goldfarb M. Design and control of a powered transfemoral prosthesis. Int J Robot Res. 2008;27(2):263–73.
Goldfarb M, Lawson BE, Shultz AH. Realizing the promise of robotic leg prostheses. Robot Neuroprosthetics. 2013;5(210):1–4.
Vanderborght B, Tsagarakis NG, Van Ham R, Ivar T, Caldwell D. Maccepa 2.0: Compliant actuator used for energy efficient hopping robot chobino1d. Auton Robots. 2011;31(1):55–65.
Haddadin S, Laue T, Frese U, Wolf S, Albu-Schaffer A, Hirzinger G. Kick it with elasticity: safety and performance in human-robot soccer. Robot Auton Syst. 2009;57:761–75.
Braun D, Howard M, Vijayakumar S. Optimal variable stiffness control: formulation and application to explosive movement tasks. Auton Robots. 2012;33:237.
Garabini M, Passaglia A, Belo F, Salaris P, Bicchi A. Optimality principles in variable stiffness control: the vsa hammer. In: 2011 IEEE/RSJ international conference on intelligent Robots and systems. IEEE; 2011. p. 3770–5.
Cherelle P, Grosu V, Van Damme M, Vanderborght B, Lefeber D. Use of compliant actuators in prosthetic feet and the design of the amp-foot 2.0. In: Springer, modeling, simulation and optimization of bipedal walking cognitive systems monographs 18. 2013. p. 17–30.
Cherelle P, Junius K, Grosu V, Cuypers H, Vanderborght B, Lefeber D. The amp-foot 2.1: actuator design, control and experiments with an amputee. Robotica. 2014;32(8):1347–61.
Winter DA. The biomechanics and motor control of human gait: normal, elderly and pathological, 2nd edn. Ontario: Waterloo Biomechanics; 1991.
This article has been published as part of BioMedical Engineering OnLine Vol 15 Suppl 3, 2016: Towards Active Lower Limb Prosthetic Systems: Design Issues and Solutions. The full contents of the supplement are available online at http://biomedical-engineering-online.biomedcentral.com/articles/supplements/volume-15-supplement-3.
PC is the main researcher behind the development of the AMP-Foot 3. He carried out the simulations, CAD designs, electronics, control and experiments with the device. VG and MC contributed to the elaboration of the electronics and control of the prosthesis. BV and DL participated in all previously mentioned parts of the development, coordinated the project and helped to draft the manuscript. All authors read and approved the final manuscript.
The datasets generated and/or analysed during the current study are not publicly available due to intellectual property rights but are available from the corresponding author on reasonable request.
These experiments were approved by the VUB Commissie Medische Ethiek (O.G. 016).
This work and the publication costs of this article have been funded by the European Commission's 7th Framework Programme as part of the project Cyberlegs under grant no. 287894 and by the European Commission ERC Starting Grant SPEAR under grant no. 337596.
Department of Mechanical Engineering, VUB, Pleinlaan 2, 1050, Brussels, Belgium
Pierre Cherelle, Victor Grosu, Bram Vanderborght & Dirk Lefeber
Center of Automation and Robotics (UPM-CSIC), Arganda del Rey, 28500, Madrid, Spain
Manuel Cestari
Pierre Cherelle
Victor Grosu
Bram Vanderborght
Dirk Lefeber
Correspondence to Pierre Cherelle.
Cherelle, P., Grosu, V., Cestari, M. et al. The AMP-Foot 3, new generation propulsive prosthetic feet with explosive motion characteristics: design and validation. BioMed Eng OnLine 15 (Suppl 3), 145 (2016). https://doi.org/10.1186/s12938-016-0285-8
Bionic feet
Compliant actuation
Outdoor location tracking of mobile devices in cellular networks
Jens Trogh ORCID: orcid.org/0000-0003-0185-54091,
David Plets1,
Erik Surewaard2,
Mathias Spiessens2,
Mathias Versichele3,
Luc Martens1 &
Wout Joseph1
This paper presents a technique, with experimental validation, for anonymous outdoor location tracking of all users residing on a mobile cellular network. The proposed technique does not require any intervention or cooperation on the mobile side but runs completely on the network side, which is useful to automatically monitor traffic, estimate population movements, or detect criminal activity. The proposed technique exploits the topology of a mobile cellular network, enriched open map data, mode of transportation, and advanced route filtering. Current tracking algorithms for cellular networks are validated in optimal or controlled environments on small datasets or are merely validated by simulations. In this work, validation data consisting of millions of parallel location estimations from over a million users were collected and processed in real time, in cooperation with a major network operator in Belgium. Experiments were conducted in urban and rural environments near Ghent and Antwerp, with trajectories on foot, by bike, and by car, in May and September 2017. It is shown that the mode of transportation, smartphone usage, and environment impact the accuracy and that the proposed AMT location tracking algorithm is more robust and outperforms existing techniques, with relative improvements up to 88%. The best performances were obtained in urban environments, with median accuracies down to 112 m.
Network-based positioning algorithms locate a mobile user based on measured radio signals from base stations in its vicinity. The growing amount of available positioning data has led to many location-based services (LBS). These are a collection of applications that use geographical location data of mobile devices provided by Wi-Fi, Bluetooth Low Energy (BLE), Global Positioning System (GPS), or cellular networks [1]. They provide services for end users, e.g., wayfinding in large shopping centers or hospitals, personal navigation, and location-based gaming. This is also important for businesses and government, e.g., asset tracking, fleet management, optimizing productivity in manufacturing or distribution, analyzing traffic patterns, transportation planning, security, and surveillance [2, 3]. A more specific example is the estimation of population movements during disasters or outbreaks. These require timely and accurate location data which large-scale surveys cannot provide, whereas network operators manage data which can potentially be used to calculate location data in real time [4].
The main contribution of this paper is the novel positioning algorithm: AMT (antenna, map, and timing information-based tracking) to accurately locate all mobile users in a cellular network without any required modifications at the mobile side (client) or network side (server). The latter is useful for applications where there is typically no cooperation at the mobile side, e.g., traffic monitoring, population movement estimation, or criminal activity detection. The proposed location tracking algorithm exploits enriched open map data [5], a mode of transportation estimator, and advanced route filtering on top of the mobile cellular topology and measurements to track the movement and locations of mobile devices. Furthermore, it does not depend on additional or custom software, forced messages, dedicated infrastructures, direct communication between mobile users, or prior training data.
An extensive experimental validation was conducted that included trajectories on foot, by bike, and by car, in urban and rural environments, while a person was actively using his or her smartphone, but also in standby mode. In this mode, all applications that use the mobile network are blocked (e.g., email and messaging services); as such, standby mode represents a worst case scenario in terms of the number of location updates. The latter shows measurement gaps of up to 6 min while a user was on the move, i.e., time periods where no measurement data is available, which mainly occur in rural areas. Existing location tracking algorithms for mobile cellular networks are not able to cope with large measurement gaps but instead are deployed in optimal or controlled environments with a high base station density, regularly available measurement updates, or large training sets, or are merely validated by simulations with a fixed location update rate. The novel contributions of this paper are:
An immediately applicable location tracking algorithm that does not require any modifications to the client or network side
The algorithm does not depend upon any prior training via, e.g., offline fingerprinting, drive-testing, or crowd-sourced measurement campaigns
Confirmed to work for a large set of users, nationwide, and in real time based on an experimental validation instead of merely relying on simulations
The paper is structured as follows. Section 2 describes the related work. Section 3 outlines the mobile network and grid configuration, type of measurements, and trajectories for the experimental validation. Section 4 discusses the proposed location tracking algorithm in detail, and Section 5 presents the results. Finally, in Section 6, conclusions are provided.
GPS enabled
The Global Positioning System (GPS) is a satellite-based navigation technique that is ubiquitous due to its widespread use and worldwide coverage. It can be used to track mobile devices but only if the GPS receiver is enabled, the location data is transmitted to a central server, and there are no GPS outages. The latter refers to the unavailability of GPS signals from sufficient satellites due to, e.g., mountains, tall buildings, or multi-level overpasses. Possible solutions are geometry-based location techniques [6]. A system that utilizes mathematical geometry to estimate vehicle location focusing on road trajectory and vehicle dynamics is presented in [7].
Infrastructure enabled
The most widely used approach to locate a mobile device with telecommunication data from a network infrastructure is cell-ID based [1]. The mobile user is mapped to the location of its serving base station, i.e., the cell to which a mobile device is currently connected. It has a low cost and a short response time and is easy to implement and applicable in all places with cellular coverage but has a low accuracy for high cell ranges.
The most common signal parameters used for network-based location tracking are angle of arrival (AoA) [8], time of arrival (ToA) [9], time difference of arrival (TDoA) [10], and amplitude (signal strength) [11]. AoA techniques determine the direction of propagation of a radio frequency wave and require an antenna array at the side of the incoming wave (network side). This technique performs especially well in line-of-sight (LoS) conditions. ToA techniques measure the time the radio signals travel between a single transmitter (mobile user) and multiple receivers (base stations), and require two-way ranging or synchronization between transmitter and receiver [12]. In TDoA techniques, differences between the times of flight of multiple radio signals are measured at the receiving base stations; this is used in, e.g., LoRa [13]. Amplitude-based techniques convert the received signal strength to a distance based on a path loss (PL) model; this requires that an accurate PL model is known for the considered environment. With knowledge of the network topology, estimating the distances between a mobile user and a set of base stations reduces positioning, for all of these signal parameters, to a triangulation or multilateration problem [14], as sketched below. Sensor fusion techniques combine two or more of these signal parameters to estimate the location [15].
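As a concrete example of the multilateration step, the following sketch linearizes the range equations by subtracting the first one and solves the result by least squares; this is a textbook formulation, not any operator's implementation.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Least-squares position fix from ranges to n >= 3 base stations.

    anchors: (n, 2) base-station coordinates; dists: (n,) range estimates,
    e.g., from ToA or an RSSI path-loss inversion. Subtracting the first
    circle equation from the others yields the linear system A x = b.
    """
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    x0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```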
Alternatively, for the amplitude-based technique, the location can be estimated by searching for the closest match in a fingerprint database or coverage map. This look-up table maps possible positions to a vector of associated signal strength values or cell-IDs from a set of base stations [16] (a minimal look-up sketch is given below). The signal strength values are collected in an offline phase and can be measurement-based, by test-driving the area of interest [17–19], simulation-based, by using a propagation model [20–22] or ray tracing [23], or a hybrid approach [24]. Drive-testing is labor intensive and needs to be redone each time the mobile network or even the environment undergoes changes. Also, possible locations for the mobile user are limited to places where a car can pass, meaning no indoor, pedestrian, or off-road locations will be estimated. The simulation-based approach is much faster but will generally lead to less accurate location estimations. Alternatively, a crowd-sourced measurement campaign can be used instead of drive-testing.
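A minimal sketch of the online look-up against such a database, using a weighted k-nearest-neighbour match in signal space (the parameters and weighting are illustrative choices):

```python
import numpy as np

def fingerprint_locate(db_positions, db_rss, observed_rss, k=3):
    """Estimate a position from the k closest fingerprints.

    db_positions: (m, 2) grid positions; db_rss: (m, n) stored RSS vectors,
    one value per base station (drive-tested or simulated); observed_rss:
    (n,) live measurement. Missing base stations would be masked in practice.
    """
    diff = np.asarray(db_rss, float) - np.asarray(observed_rss, float)
    d = np.linalg.norm(diff, axis=1)              # distance in signal space
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)                 # inverse-distance weights
    return np.average(np.asarray(db_positions, float)[nearest],
                      axis=0, weights=w)
```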
Network-based location tracking poses several problems due to multipath and non-line-of-sight (NLoS) conditions, small-scale and large-scale fading, low signal-to-noise ratios, and interference from other mobile users. These affect the radio signal parameters used as input data to location tracking algorithms. To process the noisy signal parameters and improve the accuracy, location tracking algorithms use additional intelligence and information. NLoS mitigation techniques use more robust estimators or simply discard the NLoS components [9]. Map-based algorithms use information about the environment to limit possible locations and transitions between two location updates; this can be done in combination with Kalman filters [25], particle filters [18], hidden Markov models (HMM) [26], data fusion [27], or least squares estimators [28]. A database correlation technique over Received Signal Strength Indication (RSSI) data that is based on advanced map- and mobility-based filtering is presented in [29]. The algorithm is validated in a field environment with trips by car, a location update rate forced to 2 Hz, and an electromagnetic field simulator. A cooperative positioning technique for cellular systems using RF pattern matching is presented in [30]. Simulations show that leveraging the device-to-device (D2D) communication protocol can improve positioning performance if insufficient base stations are visible to a user entity. A crowd-sourced measurement campaign to develop radio frequency (RF) coverage maps and a similarity-based location algorithm is presented in [31]. A proprietary application, installed on the smartphones of a sample set of users in the network, periodically reports the RF channel measurements along with a GPS tag to a central server; these reports are then processed into the RF coverage map. This resulted in accuracies up to 50 m and 300 m, depending on the cell's coverage range. A semi- and unsupervised learning technique that minimizes the effort to label signal strength measurements for the network-side cellular localization problem is presented in [32]. This technique uses Gaussian mixture models to model the signal strength vectors and an expectation-maximization approach to learn the distributions. Accuracies up to 30 m are reported as long as enough training data is available and the base station density is high. A machine learning technique for indoor-outdoor classification and a particle filter with HMM for cellular localization is presented in [18]. The trajectory of a moving user was synthesized and reconstructed based on a training set of around 129,000 drive-test data points and a fixed location update interval of 10 s, which led to accuracies up to 20 m in urban environments. Note that the latter accuracies are only achieved with large (crowd-sourced) training sets, synthesized data, and high location update rates, which our approach does not require. Furthermore, the proposed location tracking algorithm is confirmed to execute in real time for more than a million users in parallel and outperforms state-of-the-art particle filters [18].
A topic which currently attracts a lot of attention is user anonymity. Mobile network operators ensure anonymity between their mobile users by providing a temporary identifier (TMSI) instead of constantly using the long-term unique identifier (IMSI). Lately, anonymized location data has also become a subject of concern [33, 34]. Countermeasures to tackle these exposed vulnerabilities are proposed in [35, 36]. In [37], simulations are used to calculate the number of devices necessary to locate non-participant individuals in urban environments. The authors show that it is possible to track the movement of a significant portion of the population with high granularity over long periods of time when a small part of the population is part of a (malicious) sensor network.
The mobile network, which is used in the experimental validation, consists of more than 2500 NodeBs (September 2017), distributed over Belgium's territory (30,528 km²); in a 3G network, the base stations are referred to as NodeBs. Figure 1 shows the NodeB locations in a representative urban and rural environment on the same scale (i.e., Ghent and Melsele, respectively). It is clear that the environment will have a major influence on the positioning accuracy, because of the difference in NodeB density on the one hand and in urban planning on the other: a sparser road network can limit plausible locations, and the type and height of buildings can affect the signal parameters used as input to location tracking algorithms (e.g., apartments vs. stand-alone houses vs. office buildings). There are more than 50 NodeBs in an area of approximately 45 km² for the experiments in an urban environment, whereas for the rural environment there are roughly 10 NodeBs in an area of the same size. The comparison and influence on the performance are discussed in Section 5.
NodeB density in (a) urban and (b) rural environments (NodeBs are indicated by blue triangles)
A NodeB has multiple antennas with unique cell-IDs, oriented towards different directions (Fig. 2). Antenna configurations with one up to six distinct orientations occur in the mobile network used in the experimental validation; the most common configurations have three (92%), one (4%), and two (3%) different antenna directions. Usually, a mobile user will connect to the NodeB antenna that is directed towards it. Likewise, a user for which measurements are available from antennas with different orientations but from the same NodeB is likely located in the overlap between the corresponding zones. These observations provide information that is exploited in the proposed location tracking algorithm (Section 4).
Antenna configuration. (a) Three directions. (b) Four directions
The grid represents a collection of points in the area of interest where a mobile user can be located. In a regular Cartesian grid, all elements are unit squares. It is a simplistic approach where all areas are equally important and take the same resources in both database size and processing time. Alternatively, a map-based grid can be used to limit possible points along the major (motorway, freeway, primary, secondary, and tertiary) and minor (local and residential) roads from the area of interest. The grid size determines the number and density of these points. Our grid is based on OpenStreetMap data, which consists of straight line segments enriched with metadata about the type of road, information about one-way traffic, relative layering, street name, and maximum allowed speed. Every start point and endpoint of a straight road segment is automatically included in the grid, and road segments are further divided into pieces equal to the grid size. The dots in Fig. 3 represent such a grid with grid size 50 m. For Belgium, this results in 3.2 million grid points for the map-based technique instead of 12.2 million for a Cartesian grid.
Grid based on OpenStreetMap data with grid size 50 m
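A minimal sketch of this segment-splitting step, assuming planar coordinates in meters; the helper below is ours, not the paper's implementation, and the real grid additionally carries the OpenStreetMap metadata along:

```python
import math

def grid_points(segment_start, segment_end, grid_size=50.0):
    """Split one straight road segment into grid points spaced <= grid_size.

    Start and end points are always included, mirroring the construction
    described above.
    """
    (x1, y1), (x2, y2) = segment_start, segment_end
    length = math.hypot(x2 - x1, y2 - y1)
    n = max(1, math.ceil(length / grid_size))     # number of pieces
    return [(x1 + (x2 - x1) * k / n, y1 + (y2 - y1) * k / n)
            for k in range(n + 1)]

print(grid_points((0, 0), (0, 130)))  # four points: 0, 43.3, 86.7, 130 m
```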
Experimental data is collected in cooperation with a major network operator in Belgium. The experiments are conducted in and around the city center of Ghent and in a smaller town near Antwerp (Melsele), to represent urban and rural environments, and on the highway between both cities. The mobile network collects 3G data for more than a million mobile subscribers, but to quantify the location errors (accuracy), the real position or ground truth must be known, which requires the permission and cooperation of a mobile user. The experimental validation encompasses trajectories on foot, by bike, and by car. A smartphone with a GPS logging application is carried in all scenarios by a mobile user; it was put in the dashboard holder for the car rides and carried in the pocket for the trajectories on foot and by bike. The smartphone was forced onto 3G to make the experiments independent of 4G coverage and to ensure a fair comparison between urban and rural environments. Figures 4 and 5 show the GPS trajectories as black lines; the sample rate of the GPS logging application was set to 1 location per second. The NodeB locations are indicated with gray triangles. The GPS trajectories are post-processed with a map matching algorithm [38] to increase the accuracy; this is especially useful in urban areas near tall buildings (urban canyoning). Section 4 describes the location tracking algorithm, and Section 5 discusses the performance and accuracy for all trajectories in detail. The total distance, duration, and average speed for all trajectories are summarized in Table 1.
GPS trajectories in and around the city center of Ghent (black lines), estimated positions (blue dots), error between estimation and ground truth (blue lines), and NodeBs (gray triangles). (a) Trajectory on foot (urban + standby). (b) Trajectory on foot (urban + streaming). (c) Trajectory by bike (urban + standby). (d) Trajectory by bike (urban + streaming). (e) Trajectory by car (urban + standby). (f) Trajectory by car (urban + streaming)
GPS trajectories in a rural area and on the highway between Antwerp and Ghent (black lines), estimated positions (blue dots), error between estimation and ground truth (blue lines), and NodeBs (gray triangles). (a) Trajectory on foot (rural + standby). (b) Trajectory by bike (rural + standby). (c) Trajectory by car (rural + standby). (d) Trajectory by car (highway + standby)
Table 1 Trajectory details
Measurement data format
3G measurements are made by the mobile network, i.e., by the radio network controller that controls the NodeBs. The input data for our location tracking algorithm are timing information and received signal strength values from a set of NodeBs. Both are reported at regular time intervals but independently of each other. The timing information comes in the form of a propagation delay and is reported only by the serving NodeB. The signal strength values originate from the measurement reports and are reported for all NodeBs that a mobile device currently sees (i.e., from which it receives a broadcast message). Timing information to these other NodeBs would require network changes and would increase the load on the mobile network and, hence, is not used in our approach.
Propagation delay

The propagation delay parameter can be used to estimate the distance between a mobile device and its serving cell. This delay is used by the radio network controller to make communication possible: it checks and adjusts the delay to allow transmission and reception synchronization. The propagation delay has a time granularity of 780 ns, which corresponds to 234 m [39]. A value of 1 means the mobile user is located in the interval between 234 and 468 m from the NodeB, from which we derive the following formula to convert propagation delays to distances:
$$ \text{distance} = 234 \cdot (\text{propagation\_delay} + 0.5) $$
Figure 6 shows a plot of the real distance (between mobile user and NodeBs) as a function of the observed propagation delay parameter, during a walk of 8 km in the city center of Ghent, Belgium (Fig. 3b). For this test, a radio application was installed on the mobile device and was permanently streaming audio to ensure regular network updates and measurement data. The walk took 84 min during which 234 propagation delay measurements with 49 different cell-IDs from 15 NodeBs were recorded (one physical NodeB can have multiple cell-IDs depending on the number of supported frequencies and different orientations of its antennas). The maximum measured propagation delay during this walk in the city center of Ghent was 6, which corresponds to 1521 m. In rural areas, propagation delays up to 22 (≈ 5 km) were recorded with the same mobile device, which is to be expected due to the sparser base station density. The measured propagation delays fall in the correct interval in 69% of the observations. They are one, two, and three units apart in 27%, 3%, and 0.4% of the cases, respectively. The mean and standard deviation of the absolute differences between the real and calculated distance are 94 m and 82 m. These values are to be expected with a distance granularity of 234 m (i.e., the calculated distances, based on the 3G propagation delays, are in steps of 234 m). Note that the proposed technique can also be applied on 4G and 5G measurements, which have a higher base station density and more accurate timing information, and therefore, will yield a higher location precision (e.g., 4G has a time granularity of 260 ns, corresponding to 78 m).
Propagation delay granularity and accuracy
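This conversion, and the distance interval it implies, are simple enough to encode directly; a sketch with our own function names:

```python
def pd_to_distance(pd):
    """Midpoint distance estimate for a 3G propagation delay (234 m steps)."""
    return 234.0 * (pd + 0.5)

def pd_to_annulus(pd):
    """Lower and upper bound of the distance interval for a delay value."""
    return 234.0 * pd, 234.0 * (pd + 1)

print(pd_to_distance(4))  # 1053.0 m, as in the worked example of Section 4
print(pd_to_annulus(4))   # (936.0, 1170.0) m
```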
Measurement report
Measurement reports contain information about channel quality and are reported by a user entity (mobile device) to a NodeB. They assist the network in making handover and power control decisions. The received signal code power (RSCP) denotes the power measured by a mobile user on a particular physical communication channel, known as the common pilot channel. This channel continuously broadcasts a NodeB's scrambling code and carries no other information. These broadcast messages are transmitted with a constant transmit power and gain, which can differ per NodeB (information available in the network topology). The measurement reports contain measured signal strength values from all NodeBs the mobile user currently sees. As such, the RSCP values can be converted to a path loss value:
$$ PL = P_{TX} + G_{TX} - RSCP $$
where $PL$ [dB] denotes the total path loss, $P_{TX}$ [dB] and $G_{TX}$ [dB] are the transmit power and gain of a NodeB, respectively, and $RSCP$ is the received signal code power measured by a mobile device.
Figure 7 shows these path loss values on the y-axis and associated distances between mobile user and NodeBs on the x-axis (the measurement reports are collected in the same experiment as the propagation delays from Section 3.4.1). During the experiment, 578 measurement reports were collected with 4106 RSCP values to 136 different cell-IDs from 32 NodeBs. The fitted one-slope path loss model (red line) has the following form:
$$ PL = PL_{0} + 10\gamma\log_{10}\bigg(\frac{d}{d_{0}}\bigg) + X_{\sigma} $$
Measured path loss as a function of distance between mobile user and NodeBs (blue dots). A fitted path loss model is plotted as a red line
where $PL$ [dB] denotes the total path loss, $PL_{0}$ [dB] is the path loss at a reference distance $d_{0}$ [m], $\gamma$ [-] is the path loss exponent, $d$ [m] is the distance along the path between transmitter and receiver, and $X_{\sigma}$ [dB] is a log-normally distributed variable with zero mean and standard deviation $\sigma$, corresponding to the large-scale shadow fading. The measurement data from this experiment yields a $PL_{0}$ of 118 dB at a reference distance of 10 m with a $\gamma$ of 1.40, resulting in an R-squared of 23% and a standard deviation of 9.8 dB. Such a low R-squared value indicates that the data is not close to the fitted line, which results in poor estimations. Also, deviations in measured path loss result in larger errors at greater distances to the NodeBs, e.g., for a deviation of 5 dB: a value of 135 dB (164 m) instead of 140 dB (372 m) results in a location error of 209 m, and a value of 145 dB (848 m) instead of 150 dB (1931 m) results in an error of 1082 m. These larger errors occur rather often, as 26% of the measurements have a user-to-NodeB distance greater than 1 km. As such, the mean and median absolute errors for all 4106 measured values are 1143 m and 473 m, respectively. These location errors are much higher compared to those derived from the propagation delay, suggesting that many received path loss measurements contain no additional information and can worsen the accuracy when used together with the timing information as input to a location tracking algorithm. Note that these path loss measurements can be useful in combination with fingerprint maps based on test-driving or crowd-sourced measurement campaigns, but these are labor intensive or require modifications on the client side [16, 18, 19]. Also, such crowd-sourced measurement campaigns will be heavily influenced by, e.g., passing cars, new buildings, or other infrastructure changes.
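The error amplification at larger distances follows directly from inverting the fitted model; the short sketch below reproduces the numbers quoted above (rounding aside), using the fitted values from this experiment:

```python
PL0, D0, GAMMA = 118.0, 10.0, 1.40   # fitted one-slope model reported above

def pl_to_distance(pl_db):
    """Invert the one-slope model: d = d0 * 10^((PL - PL0) / (10 * gamma))."""
    return D0 * 10 ** ((pl_db - PL0) / (10 * GAMMA))

# A measurement that is 5 dB too low underestimates the distance, and the
# absolute error grows with distance:
for pl_true in (140.0, 150.0):
    d_true = pl_to_distance(pl_true)        # 372 m and 1931 m
    d_est = pl_to_distance(pl_true - 5.0)   # 164 m and 848 m
    print(f"{pl_true:.0f} dB: error = {d_true - d_est:.0f} m")  # 209 m, 1083 m
```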
Cellular network data
The problem with cellular network data is the limited amount of available data, which determines the number of possible updates. Mobile devices can support a range of different wireless technologies, e.g., infrared, Bluetooth, Wi-Fi, GPS, Universal Mobile Telecommunications System (UMTS) in 3G networks, and Long-Term Evolution (LTE) in 4G systems, but not all of these data are available to the network operator, and availability also depends on the usage pattern of a mobile user.
Figure 8 shows the average number of measurement reports and propagation delays, per user, per hour, during one week, measured on a 3G mobile network in Belgium for more than a million distinct active users. Every day exhibits a similar pattern for both the measurement reports and propagation delays, with the difference that there are about twice as many measurement reports. The least and most active hours are 3 a.m. and 6 p.m., respectively (x-axis ticks are set every 12 h and the labels are set at 12 p.m.). Saturdays and Sundays show a flatter and lower curve than weekdays because more people are staying at home, which translates into fewer measurements per user on the mobile network during the day. On Friday and Saturday between 11 p.m. and 5 a.m., there is an average increase of 20% in the number of measurements for a similar number of people compared to weekdays, indicating that there is more movement or usage of mobile devices (whether or not outdoors). There are more than a million distinct active users during the whole week, but the maximum number of active users in 1 h is only 700k. This is because users do not send updates to the mobile network when they are not moving, have Wi-Fi coverage, or are on a different mobile network (2G or 4G). Current time-series or map-based tracking algorithms assume regular measurement updates to filter outliers and improve the accuracy [25, 29]. This assumption does not hold for many mobile users, making the aforementioned algorithms not generally applicable. The proposed location tracking algorithm can cope with this and consists of multiple phases, depending on the amount of available measurements. Also, it is successfully validated, in cooperation with a major network operator in Belgium, to work in real time on more than a million subscribers with an Apache Spark implementation to support fast cluster computing. The used cluster consists of nine nodes with a total memory of 1.58 TB and 408 physical cores.
Average number of measurement reports (solid blue line) and propagation delays (dotted red line) per user, per hour for more than a million distinct active users during 1 week in Belgium
Location tracking algorithms
The performance of the proposed location tracking algorithm will be compared with two reference algorithms: cell-ID (Section 4.1) and centroid based (Section 4.2). The new tracking algorithm is presented in Section 4.3.
Cell-ID
The first reference algorithm is the most simplistic, where a mobile user is mapped to the NodeB to which it is currently connected (also known as serving NodeB or serving cell-ID). This approach is easy to implement and has a low cost and short response time but usually has the lowest accuracy [40].
Centroid

The second reference algorithm takes all different NodeBs from the measurement reports into account and calculates the centroid. In case there is only one NodeB with measurements, this approach results in the same location as the cell-ID technique. Alternatively, a weighted centroid algorithm can be used, where NodeBs get a weight assigned based on their measurement frequency or received signal strength information [41].
AMT: antenna, map, and timing information-based tracking
Figure 9 shows a flow graph of our proposed location tracking algorithm, which uses the orientation of NodeB antennas, map, and timing information as input (AMT). Phase I processes the data measured by the radio network controllers and calculates the temporary estimations (TEs). Phase II further refines these estimated locations with a route mapping filter that uses OpenStreetMap (meta)data, measurements from a recent past (user history), and an estimated mode of transportation as input.
Flow graph of the proposed location tracking algorithm: phase I (red) and phase II (blue)
Phase I: temporary estimation
The pseudo-code to calculate the temporary estimation of a user, residing on the mobile cellular network, is shown in Algorithm 1 and the variables and steps are discussed with an example in the text below.
Consider the example in Fig. 10: a mobile user is located in the center (yellow square), its serving NodeB ($cell_{sc}$) is indicated with a green star ($loc_{sc}$), and there are three other NodeBs ($cell_{nb}$) for which there are signal strength measurements ($loc_{nb}$, indicated with red triangles). The antenna orientations ($\alpha_{sc}$ and $\alpha_{nb}$) of cell-IDs with measurements are indicated with a red line. The other NodeBs in this area (without measurements at this time instance) are shown as gray triangles, and the grid points are shown as regular dots on top of the road network. The radio network controller reports a propagation delay of 4 from the serving NodeB, which triggers a new location update. This propagation delay corresponds to 1053 m, which limits the possible locations to an area ($CA_{sc}$) bounded by two circular arcs with an opening angle ($\beta_{sc}$) of 120° and radii of 936 m and 1170 m (indicated in transparent green on the left). The distance between both arcs is based on the time granularity of 3G (780 ns corresponds to 234 m). A window of 5 s is used to link measurement reports with propagation delays since they are not reported at the exact same time instances. Because the calculated distances based on the reported signal strengths from the measurement reports are not reliable (Section 3.4.2), only the orientation ($\alpha_{nb}$) and opening angle ($\beta_{nb}$) of the antennas corresponding to these measurements are used. These are retrieved by looking up the reported cell-ID in the network topology, resulting in three additional circle sectors ($CS_{nb}$), indicated in transparent blue.
Working principle of the proposed location tracking algorithm: phase I. (a) Overview. (b) Detail
The opening angle of the sectors depends on the number of antennas and different orientations a NodeB has and is equally divided between all orientations. The most common case of three distinct and equally spread antenna orientations corresponds to an opening angle of 120° (similar to the different gray zones in Fig. 2a). If there are multiple measurements to one NodeB and the reported cell-IDs correspond to antennas with different orientations, then both measurements are merged (JoinedMeasReport) and a new circle sector is used instead, i.e., the smallest area between both orientations. For example, if there are measurements received on the antennas with directions 0° and 90°, then the new circle sector would be the first quadrant (0 to 90°) instead of the area from −45 to 135° (see Fig. 2b). Because users that are located just outside a circle sector could be picked up by the antenna, as is visible in Fig. 10 for the antenna on the bottom center, a margin of 10° is added to the left and right side of a sector.
The coloring of the grid points corresponds to the number of NodeBs (cell-IDs) that are visible from each grid point (a cell-ID counts as visible if the grid point falls within the corresponding sector area defined above). In this case, there are only 6 locations that satisfy all measurements, i.e., inside the propagation delay area and in all three circle sectors (green and blue areas). The median location of this set (GPMO) is the temporary estimation, indicated with a black plus sign (+) in Fig. 10.
If there is no overlap between the propagation delay area and the circle sectors (green and blue areas respectively), then the median location of the propagation delay area is used as temporary estimation. The latter happens in only 2% of all location updates in our experimental validation (Section 5). Using this approach results, for the depicted example, in an error of 132 m, whereas the cell-ID approach would map the mobile user to the serving NodeB (indicated with a green star), resulting in an error of 1103 m, and the centroid approach results in an error of 490 m (indicated with a black cross ×).
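A compact sketch of this temporary-estimation step, under simplifying assumptions (planar coordinates in meters, sector membership tested via compass bearings, and our own function names rather than the paper's pseudo-code):

```python
import math
import numpy as np

def in_annulus(p, center, pd, granularity=234.0):
    r = math.dist(p, center)
    return granularity * pd <= r <= granularity * (pd + 1)

def in_sector(p, center, azimuth_deg, opening_deg, margin_deg=10.0):
    dx, dy = p[0] - center[0], p[1] - center[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 deg = north
    diff = abs((bearing - azimuth_deg + 180.0) % 360.0 - 180.0)
    return diff <= opening_deg / 2.0 + margin_deg

def temporary_estimation(grid, serving, pd, neighbors):
    """Median of all grid points inside the serving annulus-sector and
    inside every reported neighbor sector; `serving` and each neighbor
    are (location, azimuth_deg, opening_deg) tuples."""
    loc, az, op = serving
    annulus = [p for p in grid
               if in_annulus(p, loc, pd) and in_sector(p, loc, az, op)]
    cand = [p for p in annulus
            if all(in_sector(p, c, a, o) for c, a, o in neighbors)]
    if not cand:          # no overlap: fall back to the delay area alone
        cand = [p for p in grid if in_annulus(p, loc, pd)]
    return tuple(np.median(np.asarray(cand), axis=0)) if cand else None
```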
Phase II: route mapping filter
These temporary estimations can be improved with a route mapping filter if there are location updates available from a recent past (user history). For example, a user on foot will travel far less than a user by bike or by car, given a certain time interval. Furthermore, the most likely trajectory over a certain time period can be reconstructed by making use of OpenStreetMap (meta)data: road infrastructure (ways); maximum speed limits; one-way street information; type of road, e.g., sidewalk, bike path, or highway; and the user's measurement history. To take into account cars that are speeding and to avoid that location estimations are lagging behind, the allowed speed limit (for the reconstructed trajectory) can be increased by, e.g., 10% for each road segment. The proposed route mapping filter is based on the Viterbi path, a technique related to hidden Markov models [21, 22]. By processing all available data at once, previous estimated locations can be corrected by future measurements (similar to backward belief propagation). Naturally, this is only possible if the intended application tolerates a certain delay. A differentiation between real time and non-time critical will be made in the route mapping filter's output. Figure 11 shows a flow graph of our proposed route mapping filter which ensures realistic and physically possible paths.
Flow graph of our route mapping filter
The pseudo-code of the route mapping filter is shown in Algorithm 2, and the variables and steps are discussed in the text below. For the first positioning update, or if there is no location history available from a recent past, the temporary estimation is taken as the current position ($TE_{0}$). Then, a predefined number of other locations are selected around this position and their cost is initialized to 0, e.g., the 1000 closest grid points to the current position (MP). This ensures that the route mapping filter can recover from faulty first estimations, i.e., 1000 grid points and a grid size of 50 m result in a covered surface of roughly 2.5 km² (the exact area depends on the road density). The initialization forms the starting point of all possible paths that are kept in the memory of the location tracking algorithm (pathsInMem). Next, when the mobile network reports new measurements, a new TE is calculated as described in Section 4.3.1. After that, for all paths in memory, all reachable positions (RGP) starting from the path's current last grid point (PGP: parent grid point) are determined by making use of the surrounding road network, the time elapsed since the last location update (Δt), the estimated mode of transportation (MoT), and OpenStreetMap metadata (maximum speed, type of road, and one-way information). These reachable positions, which are also grid points, are the candidate positions for the next location update. Each candidate position (CP) retains a link to the parent grid point (PGP) and a cost that represents this new branch along the road network (pathsTemp). For time-critical applications, the path which currently has the lowest cost is used as the real-time location estimation (AMT-RT). In this case, previously estimated locations will not be corrected by future measurements; only the user's current history is taken into account. Lastly, the MP paths with the lowest cost are retained to serve as input for the next iteration when the mobile network reports new measurements. At the end of an experiment or measurement interval, all parent grid points from the path with the lowest cost are visited in backward order; this results in the final estimated trajectory: AMT-NTC (non-time critical). Figure 12 shows a detail of the locations before and after the route mapping filter for the trajectory on foot in Ghent (Fig. 3b). The temporary estimations are indicated with green crosses, and the final estimated trajectory with blue dots.
Detail of the estimated locations before and after the route mapping filter: temporary estimations (green crosses), final estimated trajectory (blue dots), and GPS trajectory (black line)
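A sketch of the Viterbi-style bookkeeping in one filter iteration; `reachable` stands in for the OpenStreetMap-based expansion (road network, elapsed time, speed limits, mode of transportation), and the branch cost here is simply the distance to the new temporary estimation:

```python
import heapq
import math

def filter_step(paths_in_mem, te, reachable, keep=1000):
    """One iteration: expand every stored (cost, path) pair to all
    reachable grid points, score the branches against the new TE, and
    keep the `keep` cheapest paths. Returns the new path memory and
    the real-time estimate (AMT-RT)."""
    candidates = {}
    for cost, path in paths_in_mem:
        for cp in reachable(path[-1]):
            new_cost = cost + math.dist(cp, te)
            if cp not in candidates or new_cost < candidates[cp][0]:
                candidates[cp] = (new_cost, path + [cp])
    new_mem = heapq.nsmallest(keep, candidates.values(), key=lambda cv: cv[0])
    amt_rt = new_mem[0][1][-1]   # end point of the currently cheapest path
    return new_mem, amt_rt

# AMT-NTC: after the last measurement, the full cheapest path in memory is
# the final estimated trajectory (earlier points corrected by later data).
```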
The maximum allowed speed used by the route mapping filter can be refined if the mode of transportation is correctly estimated, e.g., pedestrians and cyclists will usually not move faster than 6 km/h and 30 km/h, respectively. In our approach, the mode of transportation is estimated based on the rate of and distance between serving cell handovers, i.e., when a new NodeB becomes the serving cell. When a handover takes place, the midpoint between both NodeBs (estimated handover location) is saved together with the timestamp of the handover. The average speed between all estimated handover locations that took place during a certain moving window is used to label the mode of transportation. A moving window of 10 min (5 min before and after the location update) could be used for the non-time-critical route mapping filter, but this is not possible for real-time applications (as no future measurements are available). For this reason, only the last 5 min (counting backwards from the location update that is being calculated) is considered to estimate the average speed. It is labeled as walking if the speed is below 10 km/h, as cycling if it is between 10 and 25 km/h, and otherwise as driving a motorized vehicle. In the latter case, the route mapping filter will continue to use the maximum allowed road speed for each segment. Although the location updates (TEs) are more frequent and accurate than the estimated handover locations, they show more fluctuations, which results in an overestimation of the average speed (see Fig. 12). For example, during the walk in the city center of Ghent (Fig. 3b), there are 232 location updates whereas there are only 48 handovers, which result in an average estimated speed of 25 km/h based on the location updates and 7 km/h based on the estimated handover locations with a moving window of 5 min.
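The speed-based labeling can be sketched in a few lines, using the thresholds quoted above; the log format and helper name are ours:

```python
import math

def estimate_mot(handover_log, now, window_s=300.0):
    """Label the mode of transportation from the average speed between
    estimated handover locations in the last 5 min (backward-looking).
    `handover_log` holds time-ordered (timestamp_s, (x, y)) midpoints
    between the old and new serving NodeB."""
    recent = [(t, p) for t, p in handover_log if now - window_s <= t <= now]
    if len(recent) < 2:
        return "unknown"
    dist = sum(math.dist(recent[i][1], recent[i + 1][1])
               for i in range(len(recent) - 1))
    speed_kmh = dist / (recent[-1][0] - recent[0][0]) * 3.6
    if speed_kmh < 10.0:
        return "walking"
    if speed_kmh < 25.0:
        return "cycling"
    return "driving"   # filter keeps using per-road speed limits
```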
Results

Figures 4 and 5 show the estimated positions with the proposed location tracking algorithm as blue dots. The errors between the GPS ground truth and estimated positions are indicated with a blue line. The ground truth is defined as the GPS position which is closest in time to the timestamp at which the network received the measurements that initiated the location update. The GPS logging application takes 1 sample per second and is mapped to the road network (which includes footpaths, paths for cycling, and service roads), ensuring a sufficient time synchronization and accuracy between the estimated positions and their ground truth.
Table 2 summarizes the mean, standard deviation, median, and 95th percentile value of the accuracy for all scenarios (walking, cycling, and driving in urban and rural environments with a user's smartphone in standby and streaming mode).
Table 2 Accuracy, number of positioning updates, and average time and distance between two consecutive location updates, per scenario and algorithm
The two basic algorithms are referred to as cell-ID (Section 4.1) and centroid (Section 4.2). The first phase of the proposed location tracking algorithm (without the route mapping filter) is referred to as TE (temporary estimation). The location tracking algorithm with route mapping filter, road speed limits, and mode of transportation estimation is referred to as AMT, named after the used inputs: antenna orientation, map, and timing information (phase II). To differentiate between the estimated locations that are available in real time and those that are corrected by future measurements, AMT-RT (real time) and AMT-NTC (non-time critical) are used. An existing location tracking algorithm [18], based on a particle filter and map information, was implemented to validate our proposed route mapping filter. These results are included in Table 2 and referred to as PF. The authors of [18] used regression on drive-test data to estimate the probability distribution of an observation. Since drive-test data is generally not available for a nationwide mobile network, the likelihood function for the particles is modified to work with the temporary estimations as input (similar to the proposed route mapping filter, ensuring a fair comparison). This particle filter is configured with 2000 particles, and the mean $\mu$ and variance $\sigma^{2}$ of the initial speed distribution are based on the mode of transportation and the maximum allowed speed of the road segments under consideration. Likewise, at each time step with measurements, our proposed route mapping filter retains the 1000 paths with the lowest associated costs in memory (MP in Algorithm 2). The latitude and longitude coordinates of all NodeBs in the mobile network, the data from the GPS logging application, and the OpenStreetMap data are projected to the Belgian Lambert 72 coordinate system. Hence, the grid points and estimated locations are in the same plane coordinate reference system. This enables the use of the Euclidean distance between the estimated and actual position to define the accuracy. The total number of location updates and the average time and distance between two consecutive location updates are also included in Table 2.
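This projection step can be reproduced with standard tooling; a sketch assuming the pyproj package is available (EPSG:31370 is the Belgian Lambert 72 system, and the sample coordinates are approximate for Ghent):

```python
from pyproj import Transformer

# WGS84 (lon/lat) -> Belgian Lambert 72 planar coordinates in meters
to_lambert72 = Transformer.from_crs("EPSG:4326", "EPSG:31370", always_xy=True)
x, y = to_lambert72.transform(3.7174, 51.0543)
print(x, y)   # Euclidean distances are meaningful in this plane
```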
Figure 13 shows the median accuracy per scenario with the TE, PF, AMT-RT, and AMT-NTC techniques. The cell-ID and centroid approach are omitted to enhance clarity.
Median accuracy per scenario with the TE, PF, AMT-RT, and AMT-NTC techniques
Comparison with other algorithms
It is immediately clear that the proposed location tracking algorithms outperform the classic cell-ID and centroid approaches in all ten scenarios. The particle filter [18] performs slightly worse than our proposed route mapping filter (real-time and non-time-critical versions) in scenarios 1–4 and is outperformed in scenarios 5–10. The main reason for this is that the time between two location updates is variable and can be rather large (it ranges from 5 s to 6 min). In the update step of the particle filter, a new state is sampled for all particles, based on the previous state, current time, and a new random sample, and is then mapped on the road network. This can cause large deviations if the user's real speed or direction changes in this time period, which can happen multiple times during a sizeable measurement gap. The trajectories done by car and the ones in rural areas are most affected by this. In our approach, all possible locations that can be reached along the road network in this time period are considered as candidate positions for the next location update (given the previous states, i.e., user measurement history and paths in memory, estimated mode of transportation, maximum speed limits, type of roads, and one-way street information). The median TE accuracy varies between 150 and 433 m and has an average improvement, over all scenarios, of 68% and 55% compared to the cell-ID and centroid approach, respectively. The median PF accuracy varies between 131 and 389 m and has an average improvement, over all scenarios, of 69%, 56%, and 2% compared to the cell-ID, centroid, and TE approach, respectively. The median AMT-RT accuracy varies between 125 and 311 m and has an average improvement, over all scenarios, of 74%, 64%, 20%, and 18% compared to the cell-ID, centroid, TE, and PF approach, respectively. The median AMT-NTC accuracy varies between 112 and 275 m and has an average improvement, over all scenarios, of 78%, 69%, 33%, 31%, and 16% compared to the cell-ID, centroid, TE, PF, and AMT-RT approach, respectively. The mean accuracies, standard deviations, and 95th percentile values show similar improvements.
The largest relative improvements compared to the reference algorithms are achieved with the trajectory on the highway (scenario 10). The median accuracy improves by 88% (from 1021 to 122 m) compared to the cell-ID approach, by 85% (from 790 to 122 m) compared to the centroid approach, and by 57% (from 283 to 122 m) compared to the PF approach.
The most accurate results reported by the state-of-the-art processing techniques from Section 2 are better than ours (accuracies up to 20 m [18], 30 m [32], and 50 m [31]), but these are achieved with synthesized data, large training sets, optimal environments, crowd-sourced measurement campaigns, and forced location update rates. Applying the same processing technique [18] on our validation data resulted in worse accuracies but gives a realistic idea of the achievable performance without crowd-sourcing or modifications on the network or mobile side (PF in Table 2).
Non-time critical vs. real time
The non-time-critical version of the route mapping filter (AMT-NTC), which takes into account all measurements at once, can also work with a smaller delay (instead of at the end of a trajectory). Previously predicted locations can be corrected by multiple future measurements, but the impact tends to decrease as more time has passed between the previous update and those future measurements. For our experimental validation, this time period is 8 min; taking into account additional future measurements does not further improve the overall accuracy. Even with only 2 min of future measurement data, the mean and median overall accuracy are already 200 m and 174 m (compared to 192 m and 165 m if all future measurements are taken into account). This means that if a time delay of 2 min is allowed for the intended application, the overall mean accuracy can already be improved by 19% compared to the real-time algorithm (AMT-RT).
Impact of environment
The highest accuracies are achieved for the scenarios in an urban environment with trajectories on foot or by bike (scenarios 1–4). For example, the trajectory by bike in the city center of Ghent with a smartphone in streaming mode (scenario 4) has a mean, standard deviation, median, and 95th percentile value of 122 m, 80 m, 112 m, and 301 m, respectively. These higher accuracies are mainly due to the higher base station density, which is typical in urban environments. This ensures that the serving base stations have smaller separations, and hence, this limits the possible grid points because of the lower propagation delays, i.e., the green sector in Fig. 10 will cover a smaller area. When driving a car, the absolute accuracy in urban environments is worse than that in rural scenarios. For example, the improvements between two trajectories by car in an urban (scenario 5) and rural environment (scenario 9) are 63% (306 to 188 m), 56% (291 to 186 m), 71% (220 to 129 m), and 72% (1012 to 589 m), for the mean, standard deviation, median, and 95th percentile value, respectively. This is due to the sparser road network in rural areas, which increases the chance that the route mapping filter selects the correct road segments as most likely. The trajectory on the highway (scenario 10) is accurately reconstructed because the roads surrounding the highway have lower speed limits which causes these (incorrect) candidate paths to lag behind and eventually be discarded in the route mapping algorithm. Note that this is only true if there is no traffic congestion.
Impact of smartphone usage
The shortest location update time, or highest update rate, occurs when a user is walking in an urban environment while actively using his or her smartphone, i.e., through an application that sends or receives data over the mobile network on a regular basis (scenario 2). In this case, there are 234 updates during the entire trajectory, which corresponds to a location update every 21 s or every 32 m on average. Note that the update rate for this best-case scenario is still lower than the rates most localization algorithms for cellular networks are validated with. Location update rates of 0.5 s [29] and 10 s [18] are reported in related work, obtained by using forced messages or synthesized validation data. Three trajectories are done in both smartphone usage modes (scenarios 1–6). The trajectories in an urban environment on foot and by bike are identical and yield similar performances for the streaming and standby modes (scenarios 1–4). The higher location update rate has a negligible impact due to the limited speed of these modes of transportation. The trajectory by car shows a significant improvement for higher location update rates (standby vs. streaming). The accuracy improves by 41% (306 to 217 m), 106% (291 to 141 m), 10% (220 to 200 m), and 117% (1012 to 467 m), for the mean, standard deviation, median, and 95th percentile value, respectively.
Impact of MoT
The trajectories done on foot and by bike yield similar accuracies as long as the environment is the same. The trajectories done by car perform worse in urban environments but better in rural environments, as discussed in Section 5.4. Note that the proposed MoT estimator achieved an accuracy of 78% when a moving window of the last 5 min was used. Although this accuracy could be improved on our validation data by using a longer window, this will not always be the case, e.g., if the MoT changes during a scenario from walking to biking, a shorter window is recommended to detect the change more quickly. Furthermore, the overall mean and median accuracy remained similar (192 m and 165 m vs. 183 m and 164 m) when the route mapping filter was provided with the correct MoT at each location update. This is because a wrong MoT estimation for a location update does not automatically result in a worse accuracy, e.g., when it is erroneously labeled as cycling while the user was actually driving at a slow speed due to traffic congestion.
Conclusions

In this paper, a technique for outdoor location tracking of all users residing on a mobile cellular network is presented. The proposed approach does not depend on prior training data and does not require any cooperation on the mobile side or changes on the network side. The topology and available measurements of a mobile cellular network are used as input for the proposed AMT algorithm (named after antenna, map, and timing information). An additional route mapping filter is applied to ensure realistic, physically possible trajectories. The inputs for this route mapping filter are the user's measurement history, enriched open map data (road infrastructure, maximum speed limits, type of road, and one-way street information), and a mode of transportation estimator to refine the corresponding maximum speed. The novel AMT location tracking algorithm is implemented in Apache Spark to support fast cluster computing, runs completely on the network side, is confirmed to execute in real time for more than a million users in parallel, and outperforms state-of-the-art particle filters. The experimental validation is done in urban and rural environments, near Ghent and Antwerp, with experiments on foot, by bike, and by car, while a user's smartphone was used in standby and streaming mode. Improvements of up to 88%, 85%, and 57% were achieved compared to a cell-ID, a centroid, and a particle filter with map information-based location tracking technique, respectively. Future work will adapt and apply the proposed algorithm to a 4G LTE mobile network, where further improvements are expected thanks to the more accurate timing information and the higher eNodeB density. Furthermore, the proposed algorithm will be validated on a larger test set with multiple users, different mobile devices, changes in mode of transportation, and indoor usage.
Abbreviations

AMT: Antenna, map, and timing information-based tracking
AoA: Angle of arrival
BLE: Bluetooth Low Energy
IMSI: International mobile subscriber identity
LBS: Location-based services
LoS: Line-of-sight
LTE: Long-Term Evolution
MoT: Mode of transportation
NLoS: Non-line-of-sight
PL: Path loss
RF: Radio frequency
RSCP: Received signal code power
RSSI: Received signal strength indication
TDoA: Time difference of arrival
TE: Temporary estimation
TMSI: Temporary mobile subscriber identity
ToA: Time of arrival
UMTS: Universal Mobile Telecommunications System
References

F. Gustafsson, F. Gunnarsson, Mobile positioning using wireless networks: possibilities and fundamental limitations based on available wireless network measurements. IEEE Signal Proc. Mag. 22(4), 41–53 (2005).
R. Becker, R. Cáceres, K. Hanson, S. Isaacman, J. M. Loh, M. Martonosi, J. Rowland, S. Urbanek, A. Varshavsky, C. Volinsky, Human mobility characterization from cellular network data. Commun. ACM 56(1), 74–82 (2013).
S. Çolak, L. P. Alexander, B. G. Alvim, S. R. Mehndiratta, M. C. González, Analyzing cell phone location data for urban travel: current methods, limitations, and opportunities. Transp. Res. Rec. J. Transp. Res. Board, 126–135 (2015).
L. Bengtsson, X. Lu, A. Thorson, R. Garfield, J. Von Schreeb, Improved response to disasters and outbreaks by tracking population movements with mobile phone network data: a post-earthquake geospatial study in Haiti. PLoS Med. 8(8), 1001083 (2011).
OpenStreetMap contributors, Planet dump retrieved from https://planet.osm.org, (2017). https://www.openstreetmap.org.
A. N. Hassan, O. Kaiwartya, A. H. Abdullah, D. K. Sheet, S. Prakash, in Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. Geometry based inter vehicle distance estimation for instantaneous GPS failure in VANETS (ACM, 2016), p. 72.
O. Kaiwartya, Y. Cao, J. Lloret, S. Kumar, N. Aslam, R. Kharel, A. H. Abdullah, R. R. Shah, Geometry-based localization for GPS outage in vehicular cyber physical systems. IEEE Trans. Veh. Technol. 67(5), 3800–3812 (2018).
L. Gazzah, L. Najjar, H. Besbes, in 2014 IEEE Wireless Communications and Networking Conference (WCNC). Selective hybrid RSS/AOA weighting algorithm for NLOS intra cell localization (IEEE, 2014), pp. 2546–2551.
I. Guvenc, C.-C. Chong, A survey on TOA based wireless localization and NLOS mitigation techniques. IEEE Commun. Surv. Tutor. 11(3), 107–124 (2009).
Y. M. Chen, C.-L. Tsai, R.-W. Fang, in 2017 International Conference on Control, Artificial Intelligence, Robotics & Optimization (ICCAIRO). TDOA/FDOA mobile target localization and tracking with adaptive extended Kalman filter (IEEE, 2017), pp. 202–206.
A. H. Sayed, A. Tarighat, N. Khajehnouri, Network-based wireless location: challenges faced in developing techniques for accurate wireless location information. IEEE Signal Proc. Mag. 22(4), 24–40 (2005).
E. Xu, Z. Ding, S. Dasgupta, Source localization in wireless sensor networks from signal time-of-arrival measurements. IEEE Trans. Signal Proc. 59(6), 2887–2897 (2011).
F. Adelantado, X. Vilajosana, P. Tuset-Peiro, B. Martinez, J. Melia-Segui, T. Watteyne, Understanding the limits of LoRaWAN. IEEE Commun. Mag. 55(9), 34–40 (2017).
V. Osa, J. Matamales, J. F. Monserrat, J. López, Localization in wireless networks: the potential of triangulation techniques. Wirel. Pers. Commun. 68(4), 1–14 (2013).
J. Borkowski, J. Lempiäinen, Practical network-based techniques for mobile positioning in UMTS. EURASIP J. Appl. Signal Proc. 2006, 149–149 (2006).
T. Wigren, Adaptive enhanced cell-ID fingerprinting localization by clustering of precise position measurements. IEEE Trans. Veh. Technol. 56(5), 3199–3209 (2007).
M. Chen, T. Sohn, D. Chmelev, D. Haehnel, J. Hightower, J. Hughes, A. LaMarca, F. Potter, I. Smith, A. Varshavsky, Practical metropolitan-scale positioning for GSM phones. UbiComp 2006: Ubiquitous Computing. UbiComp 2006. Lecture Notes in Computer Science, vol. 4206 (Springer, Berlin, Heidelberg, 2006).
A. Ray, S. Deb, P. Monogioudis, in Computer Communications, IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference On. Localization of LTE measurement records with missing information (IEEE, 2016), pp. 1–9.
M. Ibrahim, M. Youssef, CellSense: an accurate energy-efficient GSM positioning system. IEEE Trans. Veh. Technol. 61(1), 286–296 (2012).
D. Plets, W. Joseph, K. Vanhecke, E. Tanghe, L. Martens, Coverage prediction and optimization algorithms for indoor environments. EURASIP J. Wirel. Commun. Netw. 2012(1), 123 (2012).
J. Trogh, D. Plets, L. Martens, W. Joseph, Advanced real-time indoor tracking based on the Viterbi algorithm and semantic data. Int. J. Distrib. Sens. Netw. 11(10), 271818 (2015).
J. Trogh, D. Plets, A. Thielens, L. Martens, W. Joseph, Enhanced indoor location tracking through body shadowing compensation. IEEE Sens. J. 16(7), 2105–2114 (2016).
V. Savic, H. Wymeersch, E. G. Larsson, Target tracking in confined environments with uncertain sensor positions. IEEE Trans. Veh. Technol. 65(2), 870–882 (2016).
A. Hatami, K. Pahlavan, in Consumer Communications and Networking Conference, 2006. CCNC 2006. 3rd IEEE, 2. Comparative statistical analysis of indoor positioning using empirical data and indoor radio channel models (IEEE, 2006), pp. 1018–1022.
P.-H. Tseng, K.-T. Feng, Y.-C. Lin, C.-L. Chen, Wireless location tracking algorithms for environments with insufficient signal sources. IEEE Trans. Mob. Comput. 8(12), 1676–1689 (2009).
M. Bshara, U. Orguner, F. Gustafsson, L. Van Biesen, Robust tracking in cellular networks using HMM filters and cell-ID measurements. IEEE Trans. Veh. Technol. 60(3), 1016–1024 (2011).
M. McGuire, K. N. Plataniotis, A. N. Venetsanopoulos, Data fusion of power and time measurements for mobile terminal location. IEEE Trans. Mob. Comput. 4(2), 142–153 (2005).
Y. Feng, Y. Liu, M. Batty, Modeling urban growth with GIS based cellular automata and least squares SVM rules: a case study in Qingpu-Songjiang area of Shanghai, China. Stoch. Env. Res. Risk A. 30(5), 1387–1400 (2016).
M. Anisetti, C. A. Ardagna, V. Bellandi, E. Damiani, S. Reale, Map-based location and tracking in multipath outdoor mobile networks. IEEE Trans. Wirel. Commun. 10(3), 814–824 (2011).
R. M. Vaghefi, R. M. Buehrer, in Personal, Indoor, and Mobile Radio Communication (PIMRC), 2014 IEEE 25th Annual International Symposium On. Cooperative RF pattern matching positioning for LTE cellular systems (IEEE, 2014), pp. 264–269.
R. Margolies, R. Becker, S. Byers, S. Deb, R. Jana, S. Urbanek, C. Volinsky, in INFOCOM 2017 - IEEE Conference on Computer Communications, IEEE. Can you find me now? Evaluation of network-based localization in a 4G LTE network (IEEE, 2017), pp. 1–9.
A. Chakraborty, L. E. Ortiz, S. R. Das, in Computer Communications (INFOCOM), 2015 IEEE Conference On. Network-side positioning of cellular-band devices with minimal effort (IEEE, 2015), pp. 2767–2775.
H. Zang, J. Bolot, in Proceedings of the 17th Annual International Conference on Mobile Computing and Networking. Anonymization of location data does not work: a large-scale measurement study (ACM, 2011), pp. 145–156.
M. Arapinis, L. Mancini, E. Ritter, M. Ryan, N. Golde, K. Redon, R. Borgaonkar, in Proceedings of the 2012 ACM Conference on Computer and Communications Security. New privacy issues in mobile telephony: fix and verification (ACM, 2012), pp. 205–216.
M. Arapinis, L. I. Mancini, E. Ritter, M. Ryan, in NDSS. Privacy through pseudonymity in mobile telephony systems (2014).
A. Shaik, R. Borgaonkar, N. Asokan, V. Niemi, J.-P. Seifert, Practical attacks against privacy and availability in 4G/LTE mobile communication systems (2015). arXiv preprint arXiv:1510.07563.
N. Husted, S. Myers, in Proceedings of the 17th ACM Conference on Computer and Communications Security. Mobile location tracking in metro areas: malnets and others (ACM, 2010), pp. 85–96.
P. Newson, J. Krumm, in Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. Hidden Markov map matching through noise and sparseness (ACM, 2009), pp. 336–343.
Propagation Delay. http://www.telecomhall.com/analyzing-coverage-with-propagation-delay-pd-and-timing-advance-ta-gsm-wcdma-lte.aspx. Accessed 9 Aug 2010.
E. Trevisani, A. Vitaletti, in Mobile Computing Systems and Applications, 2004. WMCSA 2004. Sixth IEEE Workshop On. Cell-ID location technique, limits and benefits: an experimental study (IEEE, 2004), pp. 51–60.
J. Wang, P. Urriza, Y. Han, D. Cabric, Weighted centroid localization algorithm: theoretical analysis and distributed implementation. IEEE Trans. Wirel. Commun. 10(10), 3403–3413 (2011).
This research was supported by the VLAIO project ADORABLE: Anonymous Displacement and residence behaviOR based on Accurate moBile Location data from tElco.
Data sharing is not possible for this article due to company policy of the mobile network operator.
Department of Information Technology, IMEC - Ghent University, Ghent, Belgium
Jens Trogh, David Plets, Luc Martens & Wout Joseph

Telenet Group, Brussels, Belgium
Erik Surewaard & Mathias Spiessens

RetailSonar, Ghent, Belgium
Mathias Versichele
JT and DP developed the novel algorithms and conducted the data analysis and interpretation. ES, MS, and MV participated in the mobile cellular data processing for the experimental validation. LM and WJ reviewed and edited the manuscript. All authors read and approved the final manuscript.
Correspondence to Jens Trogh.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Trogh, J., Plets, D., Surewaard, E. et al. Outdoor location tracking of mobile devices in cellular networks. J Wireless Com Network 2019, 115 (2019) doi:10.1186/s13638-019-1459-4
Keywords: Mobile cellular network
Surprising identities / equations
What are some surprising equations/identities that you have seen, which you would not have expected?
This could be complex numbers, trigonometric identities, combinatorial results, algebraic results, etc.
I'd request to avoid 'standard' / well-known results like $ e^{i \pi} + 1 = 0$.
Please write a single identity (or group of identities) in each answer.
I found this list of Funny identities, in which there is some overlap.
soft-question big-list
Calvin Lin
$\begingroup$ I really can't believe no one has posted this yet: xkcd.com/687 $\endgroup$ – MikeTheLiar Sep 26 '13 at 14:48
$\begingroup$ This is not in line with what you are looking for, but as a child I discovered that 10million pi is the number of seconds in a year to 1/2% accuracy. This is useful for quick back of envelope calculations, where seconds are involved. $\endgroup$ – JoeTaxpayer Sep 26 '13 at 19:48
$\begingroup$ Pi seconds is a nanocentury! $\endgroup$ – Oscar Cunningham Oct 1 '13 at 21:59
$\begingroup$ @CalvinLin Is that until you get the most rare badge? I got the 81st favorite too! $\endgroup$ – zerosofthezeta Oct 2 '13 at 4:32
$\begingroup$ The three trigonometric identities in the following exercises of my Wikibook: en.wikibooks.org/wiki/On_2D_Inverse_Problems/… $\endgroup$ – DVD Oct 12 '15 at 2:51
I know it's incredibly simple, but I'm always awed by $$ 2+2 = 2 \cdot 2 = 2^2 = \;^2 2. $$ Two is where addition, multiplication and exponentiation meet. And: tetration.
Kieren MacMillan
$\begingroup$ That extends beyond exponentiation, as well, in the sense that 2 op 2 has the same value for every binary operation op in Goodstein's infinite sequence of hyperoperations (+, *, ↑, ↑↑, ↑↑↑, ...). $\endgroup$ – r.e.s. Sep 27 '13 at 4:23
$\begingroup$ @r.e.s. Can you explain that? Isn't 2 ↑↑ 2 = 16 ? $\endgroup$ – MrZander Sep 28 '13 at 0:04
$\begingroup$ @MrZander For any op beyond addition, in the expression x op y the y specifies how many x's are to be "combined" using the hyperoperator at the next lower level. E.g., 2↑↑4 = 2↑2↑2↑2 = 2↑2↑4 = 2↑16 = 65536, 2↑↑3 = 2↑2↑2 = 2↑4 = 16, 2↑↑2 = 2↑2 = 4. $\endgroup$ – r.e.s. Sep 28 '13 at 3:57
$\begingroup$ If you look at "generalized commutative hyperoperations" one can find an approach by A. Bennet in the 1910'th to define operations on an fractional index between "+" (=index 0) and "*" (=index 1). In this spirit using base $b=\sqrt 2$ instead of Bennet's $\exp()$ and $\log()$ to base $e$ all fractional indexed commutative operations between "+", "*", "^" and so on have the property that $x+y $ which is also $x \circ_0 y$ and all $x \circ_k y$ equal $x+y = x \circ_k y = x \cdot y = ... $ See the example "multiplication-table" at math.stackexchange.com/a/1272791/1714 $\endgroup$ – Gottfried Helms Jan 21 at 0:31
Given a polynomial $p(x)$ of degree $n$, let $a$ be the leading coefficient. Then:
$$\sum_{k=0}^n (-1)^k{n\choose k}p(x-k)=an!$$
This happens to be equivalent to:
$$p^{(n)}(x)=an!$$
where $p^{(k)}$ is the $k$th derivative of $p(x)$.
The surprising part is that the sum recovers the leading coefficient with no remaining reference to the polynomial aside from the factorial of its degree.
abiessu
An instance of the binomial transform which, together with its inversion, forms a nice couple of formulas. – Jean-Claude Arbaut Oct 26 '15 at 17:39
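A quick numerical sanity check in Python, using only the standard library; the polynomial $p(x)=3x^4-2x^2+7$ and the evaluation point are arbitrary choices for illustration:

```python
from math import comb, factorial

# p(x) = 3x^4 - 2x^2 + 7: degree n = 4, leading coefficient a = 3.
def p(x):
    return 3 * x**4 - 2 * x**2 + 7

n, a = 4, 3
x = 11  # the alternating sum is independent of the evaluation point
s = sum((-1)**k * comb(n, k) * p(x - k) for k in range(n + 1))
print(s == a * factorial(n))  # True (both equal 72)
```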
$$ 2^{67}-1 = 193,707,721 × 761,838,257,287 $$ This identity was found by Cole in the early 20th century. He later said that the result had taken him "three years of Sundays" to find.
There's also the fact that:
$2^{127} -1 $ is indeed prime, as Mersenne claimed. This was the largest known prime number for 75 years, and the largest ever calculated by hand. Édouard Lucas proved its primality in 1876.
Rustyn
Let $p_n$ be the probability that a random permutation in the symmetric group $S_n$ doesn't have fixed points. Then $\lim_{n\to\infty}p_n=\frac{1}{e}$.
I was amazed the first time I saw this exercise!
rfauffar
No big surprise. This is just an application of the inclusion-exclusion formula and $\displaystyle{e^x=\sum_{n\geq0}\frac{x^n}{n!}}$ – Taladris Oct 19 '13 at 12:44
Of course that's how it's proven, but it's still surprising when you first see it! Many of the facts in this post are easy and not surprising once you know what's going on behind the scenes! – rfauffar Oct 19 '13 at 18:14
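A short Python check of the convergence, using the inclusion-exclusion formula $p_n=\sum_{k=0}^n (-1)^k/k!$ for the derangement probability (standard library only):

```python
from math import factorial, e

# Probability that a random permutation of n elements is a derangement.
def p(n):
    return sum((-1)**k / factorial(k) for k in range(n + 1))

for n in (5, 10, 20):
    print(n, p(n), 1 / e)  # p(n) converges to 1/e = 0.367879... rapidly
```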
Plot the graphs of the functions $$f(x)=\dfrac{2(x^2+|x|-6)}{3(x^2+|x|+2)}+\sqrt{16-x^2}$$ and $$g(x)=\dfrac{2(x^2+|x|-6)}{3(x^2+|x|+2)}-\sqrt{16-x^2}$$ in $x\in[-4,4]$ on the same plane.
I made this one on Valentine's Day: desmos.com/calculator/xzwffxyucr – Simply Beautiful Art Mar 20 '17 at 23:47
Very nice picture. There are another two functions that make a similar picture: $$y=|x|^{3/2}\pm\sqrt{1-x^2}.$$ Maybe you can add more components to make this even nicer :) – Bumblebee Mar 22 '17 at 5:19
The series $$\sum_{n=1}^{\infty} \frac{n^{13}}{e^{2\pi n} - 1} = \frac{1}{24}$$ is not entirely obvious. (At this time WolframAlpha is unable to find its closed form.)
Hmm, using $m=5,9,13$ in the exponent gives reciprocals of multiples of $24$. Stepping $m$ further, the $24$ seems to occur as a factor in the numerator (or denominator, I don't have it at hand at the moment); the limits seem to be rational numbers. Something behind this? – Gottfried Helms Jan 20 at 23:50
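A numerical check of the series above with mpmath (assuming the library is available; truncating at 60 terms is more than enough, since the terms decay like $n^{13}e^{-2\pi n}$):

```python
from mpmath import mp, mpf, exp, pi

mp.dps = 30  # 30 decimal digits of working precision
s = sum(mpf(n)**13 / (exp(2 * pi * n) - 1) for n in range(1, 60))
print(s)            # 0.0416666666666666666666666666666...
print(mpf(1) / 24)  # agrees to all displayed digits
```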
$y(x)=\left \{ \frac{x- \frac {\left \lceil \frac {\sqrt {1+8x}-1} {2} \right \rceil \left ( 1-\left \lceil \frac {\sqrt {1+8x}-1} {2} \right \rceil \mod2 \right)} {2}} {\left \lceil \frac {\sqrt {1+8x}-1} {2} \right \rceil}\right \}$
This function has slope 1 on [0,1), slope 2 on [1,3), and so on.
Тимофей Ломоносов
Can you tell me where you found this? – GA316 Nov 19 '13 at 10:27
Something I recently saw on Abstruse Goose (although I don't recall the exact link).
$$10^2+11^2+12^2=13^2+14^2$$
Moreover, one can easily prove that this is the only sequence of five consecutive positive integers with this property!
Nice! How would you prove your last claim? – dreamer Sep 29 '13 at 9:02
@rbm: Simplify and solve the polynomial equation: $$(x-1)^2+x^2+(x+1)^2=(x+2)^2+(x+3)^2$$ – Asaf Karagila♦ Sep 29 '13 at 9:04
Ah, that's clever. Thanks – dreamer Sep 29 '13 at 9:06
The continued fraction of the golden ratio:
$\frac{1+\sqrt5}{2}=[1;1,1,1,\dots]=[1;\bar1]$
Also: $\frac{1+\sqrt5}{2}=\sqrt{1+\sqrt{1+\sqrt{1+\dots}}}$.
It's funny no one has mentioned the hockey-stick identity, the partial sum of a column in Pascal's triangle:
$$ \sum_{k=0}^{m}\binom{n+k}{k}=\binom{n+m+1}{m} $$
http://www.artofproblemsolving.com/Wiki/index.php/Combinatorial_identity
$$ \sum_{n=1}^{+\infty}\frac{\mu(n)}{n}=1-\frac12-\frac13-\frac15+\frac16-\frac17+\frac1{10}-\frac1{11}-\frac1{13}+\frac1{14}+\frac1{15}-\cdots=0 $$ This relation was discovered by Euler in 1748 (before Riemann's studies of the $\zeta$ function as a function of a complex variable, in light of which this relation becomes much easier!).
Another notable relation is the following, on the partition function, due to Ramanujan: $$ p(n)=\frac1{\pi\sqrt2}\sum_{k=1}^{N}\sqrt k\left(\sum_{h\mod k}\omega_{h,k}e^{-2\pi i\frac{hn}{k}}\right)\frac d{dn}\left(\frac{\cosh\left(\frac{\pi\sqrt{n-\frac1{24}}}{k}\sqrt{\frac23}\right)-1}{\sqrt{n-\frac1{24}}}\right)+O\left(n^{-\frac14}\right)\;. $$ Then one of the most impressive formulas is the functional equation for the $\zeta$ function, in its asymmetric form: it highlights a very deep connection between the $\Gamma$ and the $\zeta$: $$ \pi^{\frac s2}\Gamma\left(\frac s2\right)\zeta(s)= \pi^{\frac{1-s}2}\Gamma\left(\frac{1-s}2\right)\zeta(1-s)\;\;\;\forall s\in\mathbb C\;. $$
One more with continued fractions. In 2003 there was a discussion in sci.math about the continued fractions of powers of $e$ - if I recall correctly, those of even powers are somewhat folklore. By examining the pattern in depth, we arrived at the following infinite continued fraction with a variable parameter: $$ \operatorname{cfe}(x)= [1,\tfrac1x-1,1, \quad 1,\tfrac3x-1,1, \quad 1,\tfrac5x-1,1, \quad \ldots ]$$ where the pattern is easily recognizable.
Then "generalize" the continued fraction and allow irrational values for $x$. Then
$$ x = \operatorname{cfe}( \ln(x) ) \qquad \qquad x \ne 1$$ or $$ x= 1+\cfrac{1} {(\tfrac1{\ln x}-1) + \cfrac{1} {1+\cfrac{1} {1+\cfrac{1} {(\tfrac3{\ln x}-1) + \cfrac{1} {1+\cfrac{1} {1+\cfrac{1} {(\tfrac5{\ln x}-1) + \cfrac{1} {1+\cfrac{1} {...}}}} }}}}}$$
Gottfried Helms
I'm a fan of approximations, and I ran into this one the other day:
$$ \Gamma^{(k)}(1) \sim (-1)^k\, \Gamma(k+1) \quad \text{as } k \to \infty. $$
The form is interesting in that it relates the $k^\text{th}$ derivative of the function at $1$ to the value of the function at $k+1$.
The approximation isn't too bad either; the relative error is on the order of $2^{-k}$, i.e.
$$ \Gamma^{(k)}(1) = (-1)^k\, \Gamma(k+1) \left[1 + O\!\left(2^{-k}\right)\right] \quad \text{as } k \to \infty. $$
Antonio Vargas
I was really amazed by discovering that the squared arcsine function has a pretty nice Taylor series at the origin: $$\arcsin^2(z)=\frac{1}{2}\sum_{n\geq 1}\frac{(2z)^{2n}}{n^2\binom{2n}{n}}$$ Even more amazing is the variety of techniques one may employ to prove such an identity: combinatorial convolutions, hypergeometric transformations, (poly)logarithmic integrals, the Lagrange inversion theorem, the residue theorem, Legendre polynomials, Euler's Beta function, creative telescoping... They're all pretty interesting.
Jack D'Aurizio
$$\frac{11}{10}\cdot\frac{1111}{1110}\cdot\frac{111111}{111110}\cdot\frac{11111111}{11111110}\cdots =1.101001000100001000001\cdots$$
Kemono Chen
$$ {\large\sqrt{\vphantom{\Large A}\,\color{#ff0000}{20}\color{#0000ff}{25}\,}\, = 45 = \color{#ff0000}{20} + \color{#0000ff}{25}} $$
Felix Marin
Oh come on, we can do better than that: $$\sqrt{3025}=30+25$$ $$\sqrt{99801}=998+1$$ $$\sqrt{4941729}=494+1729$$ $$\sqrt{7441984}=744+1984$$ $$\sqrt{52881984}=5288+1984$$ $$\sqrt{60481729}=6048+1729$$ – Frpzzd Dec 1 '18 at 17:53
@Frpzzd It's quite fine that there are so many cases. However, your second example is wrong because $\displaystyle 999^2 = 998001 \not= 99801$. My ONLY example appears in the book "The Man Who Counted". – Felix Marin Dec 3 '18 at 20:40
A surprising family of series for $e$ can be derived by algebraically combining the terms in Newton's series expansion:
\begin{equation} e=\sum_{k=0}^{\infty } \dfrac{1}{k!}=\dfrac{1}{0!}+\dfrac{1}{1!}+\dfrac{1}{2!}+\dfrac{1}{3!}+\dfrac{1}{4!}+\dfrac{1}{5!}+\ldots. \end{equation}
\begin{equation} e=\sum _{k=0}^{\infty } \frac{2k+1}{(2k)!}=\frac{1}{0!}+\frac{3}{2!}+\frac{5}{4!}+\frac{7}{6!}+\frac{9}{8!}+\frac{11}{10!}+\ldots \end{equation}
\begin{equation} 2e=\sum _{k=0}^{\infty } \frac{k+1}{k!}=\frac{1}{0!}+\frac{2}{1!}+\frac{3}{2!}+\frac{4}{3!}+\frac{5}{4!}+\frac{6}{5!}+\ldots \end{equation}
\begin{equation} 1/e=\sum _{k=0}^{\infty } \frac{1-2k}{(2k)!}=\frac{1}{0!}-\frac{1}{2!}-\frac{3}{4!}-\frac{5}{6!}-\frac{7}{8!}-\frac{9}{10!}-\ldots~. \end{equation}
Beyond being pretty, these series converge substantially faster than Newton's series. For more formulas and details on derivation see: http://www.brotherstechnology.com/math/cmj-supp.html
$$ 1=\sum_{k=0}^{\infty } \frac{1}{k!(k+2)}=\frac{1}{0!2}+\frac{1}{1!3}+\frac{1}{2!4}+\frac{1}{3!5}+\frac{1}{4!6}+\frac{1}{5!7}+\ldots $$ – Fred Kline Jan 30 '14 at 21:56
Nice, @FredKline. The above link leads to a list of formulas at: brotherstechnology.com/math/e-formulas.html. This is a variation of Formula (26) with n=1: \begin{equation} 1=\sum _{k=1}^{\infty } \frac{k}{(k+1)!}=\frac{1}{2!}+\frac{2}{3!}+\frac{3}{4!}+\frac{4}{5!}+\frac{5}{6!}+\frac{6}{7!}+\ldots~. \end{equation} – Harlan Feb 1 '14 at 0:23
$$\int_0^{\frac{\pi}{2}} x\ln(\tan(x))\ dx=\frac{7}{8}\zeta(3)$$
i. m. soloveichik
This one really surprised me: $$\int_0^{\pi/2}\frac{dx}{1+\tan^n(x)}=\frac{\pi}{4}$$
Frpzzd
I apologize if this is already here. I thought it was but I can't find it, so I must have seen it somewhere else on the site.
$$\begin{matrix} f(x) & \displaystyle\int f(x)dx \\[6pt] \hline x^2 & \dfrac{x^3}{3} \\[6pt] x & \dfrac{x^2}{2} \\[6pt] 1 & x \\[6pt] \dfrac{1}{x} & \color{red}{\log(x)} \\[6pt] \dfrac{1}{x^2} & -\dfrac{1}{x} \\[6pt] \dfrac{1}{x^3} & -\dfrac{1}{2x^2} \\[6pt] \dfrac{1}{x^4} & -\dfrac{1}{3x^3} \end{matrix}$$
$\displaystyle{\int x^n dx} = \frac{x^{n+1}}{n+1}$
$\displaystyle{\lim_{n \rightarrow -1} \left(\frac{x^{n+1}}{n+1}\right)} = \log(x)$?
No. Let $g(x,n)=\frac{x^{n+1}}{n+1}$.
For $x\in(0,\infty)$, you have:
$g(x,n)>0$ as $n\rightarrow -1$ from above, and
$g(x,n)<0$ as $n\rightarrow -1$ from below, but
$\log(x)<0$ on $x\in(0,1)$ and $\log(x)>0$ on $x\in(1,\infty)$.
I wish I understood this better. (The resolution: antiderivatives are only defined up to a constant, and with the constant chosen as $-\frac{1}{n+1}$ one indeed gets $\lim_{n \rightarrow -1} \frac{x^{n+1}-1}{n+1} = \log(x)$.)
Travis Bemrose
$$\sin \pi x=\pi x\prod_{n=1}^{\infty}\left(1-\frac {x^2}{n^2}\right).\quad \text {(L. Euler).}$$ Obviously the LHS and RHS have the same set of zeroes, but that alone does not imply equality. And putting $x=1/2$ into it, we derive the Wallis product for $\pi$, which itself is remarkable, especially as Wallis, working a generation before Newton, obtained his product without the full generality of the methods of calculus later developed by Newton.
DanielWainfleet
This formula thrills me and stirs my mind as nothing could for years... $$\int^\infty_{0}\!\!e^{-3\pi x^2}\frac{\sinh(\pi x)}{\sinh(3 \pi x)}\,dx=\frac{1}{e^{2\pi/3}\cdot \sqrt3} \sum^\infty_{n=0}\frac{e^{-2n(n+1)\pi}}{(1+e^{-\pi})^{2}(1+e^{-3\pi})^{2}\cdots(1+e^{-(2n+1)\pi})^{2}}$$
Shivam Patel
Can I ask why this one in particular? – Bennett Gardiner Sep 28 '13 at 8:05
@BennettGardiner This formula consists of a combination of irrational constants and trigonometric functions, and moreover a connection between an infinite sum and an infinite integral. – Shivam Patel Sep 28 '13 at 12:30
Note that $\sinh(x) = \frac 12 (\mathrm e^x - \mathrm e^{-x})$, so in fact you have "only" a combination of exponential functions under the integral. And it would be more impressive if the RHS was missing the $\pi$. – filmor Sep 30 '13 at 13:15
I find the Young-Frobenius identity, found on p. 8 here, surprising:
A partition $\lambda\vdash n $ of an integer $n\geq0$, i.e. a sequence $(\lambda_{1}, \cdots,\lambda_{k})$ with $\lambda_{1}\geq \cdots \geq \lambda_{k}>0$ and $\lambda_{1} + \cdots + \lambda_{k}=n$, can be identified with a diagram consisting of $k$ left-justified rows of boxes, where row $i$ (starting from the top) has $\lambda_{i}$ boxes, called a Young diagram of size $n$. For some Young diagram $\lambda$ of size $n$, a Young tableau of shape $\lambda$ is an assignment of the integers $1$ through $n$ to the boxes of $\lambda$; a Young tableau is standard if these numbers increase in each row and column. (An example tableau of shape $(5,4,1)$ was shown as an image in the original post.)
Now let $f^{\lambda}$ denote the number of standard Young tableaux of shape $\lambda$. Then the numbers of standard Young tableaux of each shape of size $n$ satisfy this identity:
$$\sum_{\lambda\vdash n}^{}(f^{\lambda})^{2}=n!$$
Scott Mutchnik
I presume there's some relatively clean bijection based around a canonical cycle structure of a permutation that explains this? – Steven Stadnicki Nov 22 '14 at 16:29
One of the most beautiful formulas in combinatorics:
Cayley's formula: the number of labelled trees on $n$ vertices $=n^{n-2}$.
And here are some interesting, yet not-so-popular combinatorial identities:
Notations:
$F_n$ = the $n$th Fibonacci number
$H_n$ = the $n$th harmonic number; $H_n = 1+ \frac{1}{2}+\frac{1}{3} +...+\frac{1}{n}$
$$ \sum_{n \ge 1} {\frac{F_n}{2^n}} =2 $$
$$ \sum_{n \ge 1} {n \frac{F_n}{2^n}} =10 $$
$$ \sum_{n \ge 1} {n^2\frac{F_n}{2^n}} =94 $$
For $0 \le m \le n$, $$\sum_{k=m}^{n-1} {\binom{k}{m} \frac {1}{n-k}}= \binom{n}{m}(H_n -H_m)$$
I have found the identity below, from which infinitely many others of increasing degree can be deduced by the elementary theory of elliptic curves.
The 6-tuples indicate the coefficients of, respectively, $n^5$, $n^4$, $n^3$, $n^2$, $n$ and $1$:
$$ A = (1, 10, -8, 16, 64, -32) \\ B = (1, -10, -8, -16, 64, 32) \\ C = (-1, 8, 8, -16, 80, 32) \\ D = (-1, -8, 8, 16, 80, -32)$$
Then the following identity holds: $$n(6n^4 +24n^2 + 96)^3 = A^3 + B^3 + C^3 + D^3$$
The factor $6n^4 +24n^2 + 96$ on the left is never zero for any integer $n$.
Luis Gomez Sanchez
I wanted to write $6n^4 + 24n^2 + 96$ at the left but I could not do it. – Piquito Apr 22 '15 at 0:14
It has surprised me that
$$F_n^\star=\frac{\phi^n-(-\phi)^{-n}}{\sqrt5}$$
where $\phi$ is the golden ratio and $F_n^\star$ is the nth Fibonacci number. Then, one stumbles upon characteristic equations:
$$a_kF_{n+k}=a_{k-1}F_{n+k-1}+a_{k-2}F_{n+k-2}+\dots+a_0F_n$$
$$\implies F_n=b_kr_k^n+b_{k-1}r_{k-1}^n+\dots+b_1r_1^n$$
where $r$ is a solution of the equation
$$a_kr^k=a_{k-1}r^{k-1}+a_{k-2}r^{k-2}+\dots+a_0$$
assuming the roots do not repeat. The coefficients are determined based on initial values.
It then surprises me further that this extends to differential equations:
$$a_ky^{(k)}+a_{k-1}y^{(k-1)}+\dots+a_0y=0$$
$$\implies y=b_ke^{r_kx}+b_{k-1}e^{r_{k-1}x}+\dots+b_1e^{r_1x}$$
where $r$ is a root of the equation
$$a_kr^k+a_{k-1}r^{k-1}+\dots+a_0=0$$
again assuming roots do not repeat.
Simply Beautiful Art
The most interesting identity I have come across so far is the Mellin transform of Gauss' hypergeometric function with a negative $x$-argument, which can be expressed as a combination of Beta and Gamma functions
$$\mathcal{M}[_2F_1(a,b;c;-x)](s)=B(s,a-s)\frac{\Gamma(b-s)\Gamma(c)}{\Gamma(b)\Gamma(c-s)}$$
which again points out the extremely close relation between all these special functions in general.
mrtaurho
For the orthogonal ($\vec{v}_\perp$) and parallel component ($\vec{v}_\parallel$) of an $\Bbb R^3$ vector $\vec{v}$ along another vector $\vec{k}$ with length $k$ we have $$ k^2\,\vec{v} = \underbrace{\vec{k}\,(\vec{v}\cdot\vec{k})}_{k^2 \vec{v}_\parallel} + \underbrace{\vec{k}\times(\vec{v}\times\vec{k})}_{k^2 \vec{v}_\perp} $$
or a bit less showy
$$\vec{v}= \frac{\vec{k}\,(\vec{v}\cdot\vec{k}) + \vec{k}\times(\vec{v}\times\vec{k})}{k^2},$$
which for unit length vectors $\vec{k}$ obviously gives
$$ \vec{v}= \underbrace{\vec{k}\,(\vec{v}\cdot\vec{k})}_{\vec{v}_\parallel} + \underbrace{\vec{k}\times(\vec{v}\times\vec{k})}_{\vec{v}_\perp}. $$
To me this looked at first quite surprising, since the scalar product, the cross product, and the product of a scalar and a vector are combined in a rather symmetric and harmonic fashion here. This is kind of unexpected since, as we know, the one yields a vector and the other a scalar. It is of course easy to show that this is correct, and also that the one term contains the $\sin$ and the other the $\cos$ of the angle between $\vec{k}$ and $\vec{v}$. But it is more the vector algebra which amazed me.
Rudi_Birnbaum
I got this integral,
$$\int_{-\infty}^{+\infty}\frac{\mathrm dt}{(\phi^n t)^2+\pi^2(F_{2n+1}-\phi F_{2n})(e^{\gamma}t^2+t-1)^2}=1$$
where $F_{n}$ is the $n$th Fibonacci number, $\phi$ is the golden ratio and $\gamma$ is Euler's constant.
coffeee
Something very exotic... and I do not know, whether this example fits the bill for this question here. But let's see.
In some sense it seems to be possible to assign the equality $$ e = \tfrac 1{e^1} \tfrac 1{e^2} \tfrac 1{e^4}\tfrac 1{e^8}...$$
Originally I thought I had a mathematical contradiction when I wrote: $$ \text{ How can } \qquad e^{-1-2-4-8-16-...} = e^{-1/(1-2)}= e^{+1} = e \qquad \text{?}$$
Initially I thought that this was an example where the closed form of the geometric series might break down. The equality seems impossible because the product consists of only decreasing factors, and so should be convergent, and moreover converge to zero: $$e^{-(1+2+4+8+16+...)} = \tfrac 1{e^1} \tfrac 1{e^2} \tfrac 1{e^4}... $$ so $$ e \overset{???}{=} \tfrac 1{e^1} \tfrac 1{e^2} \tfrac 1{e^4}\tfrac 1{e^8}...$$ Well, in some circumstances in the context of divergent series we observe strange things - but here the factors are all nicely decreasing and no unexpected effect should occur.
But that this actually holds in some sense was mentioned by Robert Israel here on MSE.
This follows from the fallacious argument that $1 +2 + 4 + 8 + \ldots = \frac{1}{1-2} $, which doesn't hold because to apply the GP sum to infinity, we need $|x|<1$. – Calvin Lin Sep 27 '13 at 13:00
After looking over Robert's answer, he said that "I would avoid writing it as $e$". And in fact, you have not shown that it is $e$, because you do not know what the analytic continuation is. I do not think you can extend it beyond the unit circle. – Calvin Lin Sep 27 '13 at 13:10
@Calvin: Here is another link where I stumbled into this problem earlier and discussed it with some arguments and half-baked counterarguments - maybe somehow instructive, too (not only for historical reasons): math.eretrandre.org/tetrationforum/showthread.php?tid=420 – Gottfried Helms Sep 27 '13 at 13:13
@Calvin: Well, he wrote "... it is indeed true that ... has an analytical continuation of.. with value $e$ at $z=2$ ...". But true, he also said he would avoid writing the equality in the product-notation. That is why I added the phrase "in some sense" to the equation. All in all - maybe the given example is too complicated here for that list of curious examples... on the other hand, the OP didn't want the standard ones... Hmm, I don't know whether I should delete it? (Oops - the OP: that was you ;-) - sorry) – Gottfried Helms Sep 27 '13 at 13:33
I think you should justify the "in some sense". I'm fine with leaving it up and letting the community vote, and you can always delete it later. E.g. if you simply wrote $1+2 + 4 + \ldots = -\frac{1}{2}$, I think that will result in a lot of down votes immediately. My main concern is that the GP doesn't converge on the unit circle, and so I don't understand the analytic continuation argument that allows you to push through this boundary. – Calvin Lin Sep 27 '13 at 13:38
SweepCluster: A SNP clustering tool for detecting gene-specific sweeps in prokaryotes
Junhui Qiu1,
Qi Zhou1,
Weicai Ye2,
Qianjun Chen1 &
Yun-Juan Bao ORCID: orcid.org/0000-0002-2654-23211
The gene-specific sweep is a selection process where an advantageous mutation along with the nearby neutral sites in a gene region increases the frequency in the population. It has been demonstrated to play important roles in ecological differentiation or phenotypic divergence in microbial populations. Therefore, identifying gene-specific sweeps in microorganisms will not only provide insights into the evolutionary mechanisms, but also unravel potential genetic markers associated with biological phenotypes. However, current methods were mainly developed for detecting selective sweeps in eukaryotic data of sparse genotypes and are not readily applicable to prokaryotic data. Furthermore, some challenges have not been sufficiently addressed by the methods, such as the low spatial resolution of sweep regions and lack of consideration of the spatial distribution of mutations.
We proposed a novel gene-centric and spatial-aware approach for identifying gene-specific sweeps in prokaryotes and implemented it in a Python tool, SweepCluster. Our method searches for gene regions with a high level of spatial clustering of pre-selected polymorphisms in genotype datasets, assuming a null distribution model of neutral selection. The pre-selection of polymorphisms is based on their genetic signatures, such as elevated population subdivision, excessive linkage disequilibrium, or significant phenotype association. Performance evaluation using simulation data showed that the sensitivity and specificity of the clustering algorithm in SweepCluster are above 90%. The application of SweepCluster to two real datasets from the bacteria Streptococcus pyogenes and Streptococcus suis showed that the impact of pre-selection was dramatic and significantly reduced the uninformative signals. We validated our method using the genotype data from Vibrio cyclitrophicus, the only available dataset of gene-specific sweeps in bacteria, and obtained a concordance rate of 78%. We noted that the concordance rate could be underestimated due to distinct reference genomes and clustering strategies. The application to the human genotype datasets showed that SweepCluster is also applicable to eukaryotic data and is able to recover 80% of a catalog of known sweep regions.
SweepCluster is applicable to a broad category of datasets. It will be valuable for detecting gene-specific sweeps in diverse genotypic data and provide novel insights on adaptive evolution.
A selective sweep is a process where a beneficial allelic change sweeps through the population and becomes fixed in a specific population, and the nearby sites in linkage disequilibrium will hitchhike together and also become fixed [1, 2]. Those sweep regions containing beneficial alleles could possibly be introduced by recombination and rise to high frequency rapidly in the population under positive selection. If the increase in frequency is recent or fast relative to other recombination events, the mutation profile in the sweep regions across the population will be maintained without being interrupted. Finally, the process will imprint genetic signatures in the population genomes, leading to lowered within-population genetic diversity, increased between-population differentiation, and/or high linkage disequilibrium [3,4,5]. When such selective sweeps only occur at specific gene regions under selection without affecting the genome-wide diversity, they are described as gene-specific sweep [6].
Recently, the gene-specific sweep has been demonstrated to play important roles in adaptive evolution in microbial populations, such as ecological differentiation in Prochlorococcus [7] and Synechococcus [8], speciation in marine bacterium Vibrio cyclitrophicus (V. cyclitrophicus) [3, 9], and phenotypic divergence in human adapted pathogen Streptococcus pyogenes (S. pyogenes) [10]. The observation of the gene-specific sweeps in those scenarios in both environmental organisms and host pathogens suggests that the gene-specific sweep may represent one of the general mechanisms underlying adaptive evolution of microorganisms. Therefore, identifying the gene-specific sweep on the genome-wide scale will not only provide insights into the evolutionary mechanisms shaping the genetic diversity, but also help to unravel potential genetic markers associated with ecological adaptation or phenotypic differentiation.
An array of methods have been proposed to identify gene-specific sweeps; they generally fall into three categories based on the genetic signatures being captured: (1) composite likelihood ratio (CLR) tests of the marginal likelihood of the allele frequency spectrum under a model with selective sweeps in comparison with that under a model of selective neutrality [11,12,13]; (2) comparison of the distribution of population subdivision or linkage disequilibrium in a region under positive selection with that of a neutral background [14, 15]; (3) haplotype-based approaches for detecting elevated haplotype homozygosity in a locus around the selected site in comparison with that under a neutral model [16,17,18,19,20]. Those methods have demonstrated the power for detecting genetic signatures of selective sweep in numerous cases.
However, those methods were mainly developed for detecting selective sweeps in eukaryotic data and are not readily applicable to prokaryotic data; this holds in particular for the haplotype-based approaches [21]. In addition, some challenges have not yet been sufficiently addressed by the currently available methods. For example, the gene-centric concept of the gene-specific sweep has not been taken into account, leading to a low spatial resolution of sweep regions, and the spatial distribution properties of the mutated sites within the sweep regions have not been fully considered.
In this study, we propose a new gene-centric approach specifically for identifying gene-specific sweeps in prokaryotes, which searches for regions with a higher level of spatial clustering of single nucleotide polymorphisms (SNPs), assuming a null distribution model of SNPs under neutral selection. The clustering applies to SNP subsets of specific interest, which can be selected based on the genetic signatures of sweep regions, such as elevated population subdivision, reduced within-population diversity, excessive linkage disequilibrium, or significant phenotype association. Our approach is the first of its kind specifically for identifying gene-specific sweeps in prokaryotes and differs from the previous methods for eukaryotes in that: (1) it applies the gene-centric concept by considering the gene-specific location of SNPs; (2) it takes advantage of the spatial distribution properties of SNPs in the sweep region; (3) the clustering is performed on pre-selected target SNPs with specific genetic properties, thus minimizing the influence from uninformative SNPs. We offer it as an open-source tool "SweepCluster", freely accessible on GitHub: https://github.com/BaoCodeLab/SweepCluster.
Pre-selection of SNPs
The pre-selection of SNPs can be based on elevated population differentiation Fst, extended linkage disequilibrium (LD), or phenotypic association. The choice also depends on the data properties and the study purpose. For instance, if the positive selection acting on disease markers is of interest, screening SNPs for significant association with disease phenotypes using robust genome-wide association analysis is preferred. In the real and simulated datasets in this study, we selected the SNPs associated with phenotypic divergence or population differentiation.
Overview of the clustering approach
The SNP clustering algorithm employs a gene-centric concept to mimic the biological process introducing gene-specific sweeps. In the gene-specific sweep model, non-synonymous SNPs (SNPs causing amino acid alterations) or upstream regulatory SNPs (SNPs in regulatory regions) are more likely to be under positive selection than synonymous SNPs (SNPs not causing amino acid alterations) or inter-genic SNPs, and the selected non-synonymous SNPs along with the nearby synonymous/inter-genic SNPs are introduced simultaneously in a single event. For a recent sweep event, the selected SNPs and the hitchhiking SNPs are tightly clustered in specific gene regions without being severely disrupted by other recombination events. Based on the gene-specific sweep model, our clustering strategy is illustrated in Fig. 1 and was described previously [10]. Briefly, a non-synonymous or upstream regulatory SNP is randomly chosen in a specific gene/operon and serves as an anchor for an initial cluster. The initial cluster is then extended progressively by scanning and merging the neighboring SNPs or clusters. If the total span is shorter than the specified sweep length, then the surrounding SNPs or clusters are merged. Otherwise, the initial cluster is extended by merging the neighboring SNPs or clusters which minimize the normalized root-mean-square of inter-SNP distances (NRMSD):
$$\mathrm{NRMSD} = \frac{1}{l}\sqrt{\frac{\sum\limits_{i=1}^{n-1} d_{i}^{2}}{n-1}}$$
Outline of the clustering procedures of SweepCluster. A non-synonymous or upstream regulatory SNP is randomly chosen in each gene/operon and serves as an anchoring SNP for an initial cluster. The initial cluster is then extended by scanning and merging the neighboring SNPs or clusters. If the total span is shorter than the specified sweep length, then the surrounding SNPs or clusters are merged. Otherwise, the initial cluster is extended by merging the neighboring SNPs or clusters such that the normalized root-mean-square of inter-SNP distances (NRMSD) is minimized. All clusters after merging are re-examined and split if any inter-SNP distance within the cluster is longer than a given inter-SNP distance threshold (max_dist)
where $d_i$ is the $i$th inter-SNP distance, $n$ is the total number of SNPs in the target cluster, and $l$ is the maximum spanning range of the SNPs in the target cluster. The merging process iterates until no neighboring SNP or cluster satisfies the merging criteria. The majority of CPU time is spent on the decision-making and merging procedures, with a time complexity of O(n + nb · k), where nb is the number of boundary SNPs and k is the number of initial clusters. Meanwhile, the required memory is on the scale of O(n) due to the one-dimensional nature of the data.
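To make the merging criterion concrete, the following is a minimal Python sketch of the NRMSD computation and the merge decision; the positions are made up for illustration and this is not the internal code of the tool:

```python
import numpy as np

def nrmsd(positions):
    """Normalized root-mean-square of inter-SNP distances of one
    candidate cluster, following the formula above."""
    pos = np.sort(np.asarray(positions))
    d = np.diff(pos)                  # the n-1 inter-SNP distances
    span = pos[-1] - pos[0]           # maximum spanning range l
    return np.sqrt(np.sum(d ** 2) / len(d)) / span

# Among candidate neighbors, merge the one minimizing the NRMSD of
# the merged cluster.
cluster = [100, 180, 260]
candidates = [310, 900]
best = min(candidates, key=lambda c: nrmsd(cluster + [c]))
print(best)  # 310: the tight neighbor is preferred over the distant one
```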
Following merging, all clusters are re-examined and split if any inter-SNP distance within the cluster is longer than a given distance threshold. The distance threshold can be determined based on the genome-wide average inter-SNP distance. Under the null neutral model, the SNPs are independently and randomly distributed across the genome, and the significance of a cluster with m distinct SNPs spanning a length of l can be evaluated as the cumulative probability of observing m SNPs in a cluster spanning a distance ≤ the observed length l; the probability density function can be represented as the gamma distribution [22]:
$$p\left( m,\beta \right) = \int_{0}^{l} g\left( x; m,\beta \right)dx = \int_{0}^{l} \frac{\beta^{m}}{\Gamma\left( m \right)}\,x^{m - 1} e^{ - \beta x}\, dx$$
Here, the rate parameter \(\beta\) is equal to the average mutation rate μ across the genome: μ = n/s, where n is the total number of SNPs in the genome and s is the length of the genome.
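As an illustration, the p-value of a candidate cluster can be computed from this gamma distribution with a few lines of Python using SciPy. This is a sketch mirroring the formula above, not the tool's implementation; the cluster size m = 10 and span l = 50 bp are made-up values, while the rate is the S. pyogenes value used later in this study:

```python
from scipy.stats import gamma

def cluster_pval(m, l, mu):
    # Gamma with shape m and rate mu (scale = 1/mu): cumulative
    # probability that m SNPs span a distance <= l under the null.
    return gamma.cdf(l, a=m, scale=1.0 / mu)

mu = 0.0362                      # genome-wide SNP rate ("-rate 0.0362")
print(cluster_pval(10, 50, mu))  # small p-value: unusually tight cluster
```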
Evaluation of the performance of the clustering method
We evaluated the performance of the clustering using five metrics: CPU time, memory usage, accuracy, sensitivity, and specificity. The evaluation of CPU time and memory usage was performed using real datasets with varying data size, while the assessment of accuracy, sensitivity, and specificity was conducted on simulation datasets (see below). The accuracy is defined as the proportion of correctly assigned SNPs among the total SNPs. The sensitivity is defined as the proportion of detected clusters containing at least 90% of the SNPs correctly assigned. The mapping between detected clusters and expected clusters was determined based on reciprocal maximum overlap between the two sets of clusters. The specificity is defined as the proportion of SNPs assigned outside of clusters among the SNPs expected to be outside of clusters. A toy computation of these metrics is sketched below.
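The sketch assumes cluster labels have already been mapped between detected and expected clusters; label 0 marks SNPs outside any cluster, and the arrays are invented for illustration:

```python
import numpy as np

truth    = np.array([1, 1, 1, 2, 2, 0, 0])  # expected assignment
assigned = np.array([1, 1, 2, 2, 2, 0, 1])  # detected assignment

accuracy = np.mean(truth == assigned)            # correctly assigned SNPs
outside = truth == 0
specificity = np.mean(assigned[outside] == 0)    # SNPs kept out of clusters
print(accuracy, specificity)                     # 0.714..., 0.5
```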
Simulation datasets
Due to the lack of a reference dataset with well-defined SNP clustering profiles, we created simulation datasets for assessing the accuracy, sensitivity, and specificity of the clustering algorithm. The simulation datasets were generated based on the gene region annotation of the bacterial strain S. pyogenes AP53, which was annotated and studied by us previously [23]. The purpose is to take advantage of the natural gene regions in the genome for producing well-defined artificial SNP clusters under gene-specific sweeps. The SNPs were first generated independently and randomly on the genome based on a Poisson process with a given mutation rate (the average mutation rate of S. pyogenes); a sketch of this step is shown below. The SNPs were then processed to form the expected clusters by taking the following procedures to satisfy the pre-defined parameter conditions of sweep length (sweep_lg) and maximum inter-SNP distance (max_dist): (1) roughly half of the SNPs in each gene region were assigned non-synonymous such that each gene region contains non-synonymous SNPs; the purpose is to make the cluster detection by DBSCAN and SweepCluster independent of the biological functions of SNPs, such that their comparison is robust to biological factors, given that the design of SweepCluster favors gene regions containing non-synonymous SNPs; (2) the SNPs in gene regions longer than sweep_lg + 50 were removed, to create SNP clusters satisfying a specific condition of the parameter sweep_lg; (3) if the spanning length of two neighboring genes is longer than sweep_lg + 50 and the inter-genic distance is smaller than max_dist, the downstream gene was removed to create a larger inter-genic distance and thereby SNP clusters satisfying a specific condition of the threshold max_dist.
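A sketch of the first step, generating SNP positions as a Poisson process; the rate and genome length below are illustrative values in the range of S. pyogenes AP53:

```python
import numpy as np

rng = np.random.default_rng(42)

# Poisson process: i.i.d. exponential gaps with mean 1/mu, accumulated
# into genomic coordinates and truncated to the genome length.
mu, genome_len = 0.0362, 1_900_000
gaps = rng.exponential(scale=1.0 / mu, size=int(mu * genome_len * 1.2))
pos = np.cumsum(gaps)
pos = pos[pos < genome_len].astype(int)
print(len(pos))  # close to mu * genome_len = 68,780 expected sites
```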
Real dataset of S. pyogenes genotypes
We used the real genomic datasets from two bacterial species S. pyogenes and S. suis to assess the effects of the procedure of SNP pre-selection. The reason to choose the two species is that they are known to have a high level of genomic variability and a high density of genotypes, which facilitate the manifestation of influences of SNP pre-selection [24, 25]. S. pyogenes is a common human pathogenic bacterium causing diverse disease phenotypes, such as pharyngitis, skin infection, necrotizing fasciitis, and acute rheumatic fever. Previous studies have shown that the alleles in the gene regions of S. pyogenes exhibit phenotype-dependent changes, thus providing an excellent dataset for selecting SNPs associated with phenotype differentiation [10, 26].
The genomic sequences of S. pyogenes were downloaded from the NCBI GenBank database (ftp://ftp.ncbi.nlm.nih.gov). A total of 46 genomes were chosen for this study with a balanced distribution of phenotypes based on the known phenotypic information [10]. The core genome is defined as the regions encoded in all studied genomes and was determined by aligning the shredded genomes against the reference strain AP53 (CP013672). The resulting core genome contains 69,171 segregating sites mutated in at least one of the genomes, which were concatenated for downstream analysis. Both the whole set of SNPs at all segregating loci and a subset of selected SNPs associated with the phenotype of acute rheumatic fever were used for inferring sweep regions using SweepCluster. The SNPs associated with the disease phenotype were identified using the Chi-squared test (a sketch of this test is given below). The parameters used for SweepCluster are "-sweep_lg 1781 -max_dist 1100 -min_num 2", and the clustering significance was evaluated using the function "Pval" with the parameter of mutation rate "-rate 0.0362". The linkage disequilibrium analysis of the SNPs was performed using Haploview [27].
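The per-SNP association test can be sketched as follows with SciPy; the 2 × 2 counts are invented, while in the actual analysis each SNP's allele counts are cross-tabulated against the acute rheumatic fever phenotype:

```python
from scipy.stats import chi2_contingency

# Rows: reference/alternative allele; columns: non-ARF/ARF strains.
table = [[30, 2],
         [4, 10]]
chi2, pval, dof, expected = chi2_contingency(table)
print(pval)  # SNPs with pval <= 0.05 are retained for clustering
```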
Real dataset of Streptococcus suis (S. suis) genotypes
S. suis is a swine pathogen that colonizes pigs asymptomatically but can also cause severe clinical diseases in pigs such as respiratory infection, septicemia, and meningitis. S. suis can be classified into 29 distinct serotypes forming complex population structures [28]. A previous phylogenetic study showed that many serotypes exist in multiple subpopulations and each subpopulation may contain multiple serotypes [25]. This complexity has been associated with extensive genetic recombination and genomic shuffling among and between populations. Therefore, it is interesting to investigate the occurrence of selective sweeps among subpopulations in the highly recombining genome of S. suis.
A total of 1,197 genomic sequences of S. suis strains were downloaded from the NCBI GenBank database (ftp://ftp.ncbi.nlm.nih.gov). We removed the redundancy among the genomes to reduce the data size by grouping them based on the submitting institutions and selecting the most distant genomes within each group based on the phylogenetic structures built by SplitsTree [29] (Additional file 9: Figure S1A). The selected genomes were further filtered based on their phylogenetic distance. The final dataset comprises 209 non-redundant genomes (Additional file 9: Figure S1B) and gives rise to a total of 236,860 segregating mutation sites with BM407 as the reference (FM252033). The core genome was identified using the same procedures as for S. pyogenes. The inference of sweep regions using SweepCluster was performed separately for all segregating SNPs and for those associated with the differentiation of two subpopulations (branch-1 and branch-2 in Additional file 9: Figure S1C). The SNPs associated with population differentiation were identified using the Chi-squared test. The parameters used for SweepCluster and the significance evaluation are "-sweep_lg 2000 -max_dist 2000 -min_num 2" and "-rate 0.1077", respectively. Here, "sweep_lg" stands for the sweep length, "max_dist" for the maximum inter-SNP distance, "min_num" for the minimum number of SNPs in a cluster, and "rate" for the average mutation rate across the genome.
Real dataset of V. cyclitrophicus genotypes
V. cyclitrophicus is a gram-negative bacterium inhabiting seawater. Previous studies reported ecological differentiation of the V. cyclitrophicus population associated with gene-specific sweeps [9]. The authors sequenced 20 strains of V. cyclitrophicus, which are divided into two phenotypic groups (S strains and L strains) according to their ecological partition. They showed that the partition is associated with the ecoSNPs, i.e., the dimorphic nucleotide positions with one allele present in all S strains and the other allele in all L strains. The authors then classified the ecoSNPs into 11 clusters and demonstrated evidence of gene-specific sweeps causing the ecoSNPs. This is the only available study of SNP clusters under gene-specific sweeps in bacteria, and we used this dataset for benchmarking our clustering method.
We downloaded the genomic sequences of the 20 strains from the NCBI GenBank database (ftp://ftp.ncbi.nlm.nih.gov) and aligned them to a reference strain with a complete genome assembly (ECSMB14105) to derive 139,066 segregating SNPs and the phylogenetic structure (Additional file 10: Figure S2). The ecoSNPs were obtained using the same definition as that in the reference [9], and were then subject to cluster detection using SweepCluster with the parameters "-sweep_lg 8000 -max_dist 5000 -min_num 2" and "-rate 0.000111".
Empirical datasets of human genotypes
We employed the genotype datasets from the human 1000 Genomes project [30] to evaluate the ability of SweepCluster of identifying selective sweeps in eukaryotic data. We chose the 1000 Genomes datasets because they have been extensively used in previous studies of selective sweeps and a handful of gene loci have been well-characterized to be under selective sweep in specific subpopulations. We extracted the genotype data from three subpopulations, i.e., EUR (Europeans), AFR (Africans) and EAS (East Asians), and selected the mutation sites associated with pairwise population differentiation Fst. The calculation of Fst was based on Hudson's estimator in the transformed formula [31]:
$$F_{st} = \frac{\left( p_{1} - p_{2} \right)^{2} - p_{1} \left( 1 - p_{1} \right)/\left( n_{1} - 1 \right) - p_{2} \left( 1 - p_{2} \right)/\left( n_{2} - 1 \right)}{p_{1} \left( 1 - p_{2} \right) + p_{2} \left( 1 - p_{1} \right)}$$
where $n_1$/$n_2$ are the subpopulation sizes and $p_1$/$p_2$ are the minor allele frequencies of the two paired populations. Distinct subsets of SNPs were selected using a series of Fst thresholds (0.7, 0.65, 0.60, 0.55, 0.50, 0.45, 0.43, and 0.4) for inferring sweep regions, to evaluate the robustness of SweepCluster on eukaryotic data. The parameters used for SweepCluster are: "-sweep_lg 200,000 -max_dist 40,000 -min_num 2". The sweep regions and SNPs were annotated based on the genome build hg19 using ANNOVAR [32].
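A direct transcription of this estimator into Python; the allele frequencies below are invented, and the sample sizes are in the ballpark of the 1000 Genomes subpopulations:

```python
def hudson_fst(p1, p2, n1, n2):
    """Hudson's Fst estimator in the transformed form given above."""
    num = ((p1 - p2) ** 2
           - p1 * (1 - p1) / (n1 - 1)
           - p2 * (1 - p2) / (n2 - 1))
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

# A strongly differentiated site between two populations:
print(hudson_fst(0.9, 0.1, n1=503, n2=661))  # ~0.78, above the 0.7 cutoff
```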
Optimization of the parameters
We carried out the parameter simulation of sweep lengths by calculating the number of sweep regions inferred by SweepCluster for varying values of the sweep length in the range 300–10,000 bp. The relationship between the number of sweep regions and the sweep length was approximated using non-linear fitting implemented in the generalized additive models of the R package "mgcv". The optimal estimate of the sweep length is taken at the point of maximum curvature of the fitted curves, with the curvature calculated with the following formula:
$$c = \left| \frac{f''\left( x \right)}{\left( 1 + f'\left( x \right)^{2} \right)^{3/2}} \right|$$
where \(f^{\prime}(x)\) and \(f^{\prime\prime}(x)\) are the first-order and second-order derivatives of the fitted curves, respectively. We have provided in the package a shell script "sweep_lg_simulation.sh" for automatic optimization of the sweep length for any particular genotype dataset. Parallel acceleration is implemented in the script for fine-grained parameter searching.
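The maximum-curvature criterion can be sketched numerically as follows; a toy decreasing curve stands in for the GAM fit, whereas the actual procedure in "sweep_lg_simulation.sh" operates on simulated cluster counts:

```python
import numpy as np

def max_curvature_point(x, y):
    dy = np.gradient(y, x)                         # f'(x)
    d2y = np.gradient(dy, x)                       # f''(x)
    curvature = np.abs(d2y) / (1 + dy ** 2) ** 1.5
    return x[np.argmax(curvature)]

# Toy elbow-shaped relationship between sweep length and cluster count:
x = np.linspace(300, 10_000, 500)
y = 1e6 / x
print(max_curvature_point(x, y))  # ~1000, the elbow of the toy curve
```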
Overview of SweepCluster
The package SweepCluster provides four major functions: (1) Density, which calculates the SNP density using a window-scanning method in a specific genomic region or on the genome-wide scale; (2) Cluster, which executes the core functionality of the package, i.e., gene-centric SNP clustering; (3) Pval, which estimates the statistical significance of each SNP cluster based on a null gamma distribution of SNPs; and (4) the driver script "sweep_lg_simulation.sh" for parameter optimization.
Computing performance
The computing performance of SweepCluster was evaluated using multiple real datasets with varying numbers of SNPs (designated as n). The memory usage of SweepCluster increases linearly with n, consistent with the expected memory usage scale O(n). The memory used is fairly low even for the largest dataset of 200,000 SNPs, at about 260 megabytes (MB) (Fig. 2A). The CPU time consumption of SweepCluster is on the scale O(n^2) at the initial stage and then becomes nearly linear, O(n), when n > 140,000 (Fig. 2B). This complies with the expected time complexity O(n + nb · k), whereby the CPU time is governed by optimizing the boundary SNPs when n is small, but by clustering the inner SNPs when n is large, as the ratio of boundary SNPs rapidly declines. Considering the linear growth of memory usage and CPU time, and the downsized genotype datasets upon pre-selection, we anticipate that computing resources will not be a limiting factor for larger datasets. Meanwhile, it should also be noted that the computing performance depends on the applied parameters (such as the sweep length) and the genotype data properties (such as the proportion of boundary SNPs).
Memory usage (A) and CPU time (B) of SweepCluster for varying numbers of SNPs. The datasets for evaluation were obtained by subsetting the genotype dataset of S. suis
Performance of accuracy and sensitivity
We evaluated the performance of the clustering algorithm in SweepCluster in terms of accuracy, sensitivity and specificity using artificially generated simulation datasets. Due to the lack of such a clustering method for prokaryotes, we compared the performance of SweepCluster with that of DBSCAN, a general-purpose density-based spatial clustering algorithm that does not consider any trait information of the data [33]. DBSCAN has been commonly used in diverse scenarios for spatial clustering, and a variety of extensions have been proposed to address specific challenges: for instance, the hierarchical clustering by HDBSCAN and OPTICS appropriate for variable density distributions [34, 35]; GDBSCAN with the ability to automatically predict optimal parameters [36]; Fuzzy DBSCAN for dealing with datasets with partially overlapping borders [37]; and MR-DBSCAN and DENCAST with distributed implementations for handling large-scale and high-dimensional datasets [38, 39]. In the current study, we chose DBSCAN for comparison due to its implementation in Python and its efficiency matching our dataset size.
The simulation datasets were created with the SNP distributions satisfying specific combinations of sweep lengths (sweep_lg) and maximum inter-SNP distances (max_dist). The clusters in the simulation datasets were made such that the clustering results are insensitive to the biological composition of the clusters (such as the synonymous and non-synonymous SNPs, see the Methods & Materials) and that the comparison with the general-purpose DBSCAN is meaningful. The comparison showed that the performance of both algorithms as a function of maximum inter-SNP distances is highly similar, and quickly approaches optimum when the maximum inter-SNP distance increases to roughly 200 bp, close to the average inter-SNP distance in the simulation datasets (Fig. 3A, C). Interestingly, the performance of SweepCluster and DBSCAN as a function of sweep lengths differs (Fig. 3B, D). DBSCAN is not influenced significantly by the sweep length and performs nearly equally well for a broad range of sweep lengths. However, the performance of SweepCluster is dependent on the sweep length. It gradually improves with increasing sweep lengths and achieves optimal results at around 800–1000 bp, coincident with the average gene length of our simulation datasets. The dependence of the performance of SweepCluster on the sweep length is a manifestation of the gene-aware concept of the design of the clustering method in SweepCluster. When the parameter sweep_lg approaches the true value, the clustering results become close to the true clustering profile.
The accuracy, sensitivity and specificity of the clustering algorithm in SweepCluster in comparison with DBSCAN. The accuracy, sensitivity, and specificity were calculated for a series of values of sweep lengths or maximum inter-SNP distances using SweepCluster. Only accuracy and sensitivity were calculated using DBSCAN, due to the fact that DBSCAN classifies all SNPs into clusters, leaving no out-of-cluster SNPs. The accuracy and sensitivity were calculated for a series of values of eps (the maximum distance between two samples) and min_samples (the minimum number of samples in a neighborhood) using DBSCAN. Other parameters for DBSCAN were set as default, including metric (default = "euclidean"), algorithm (default = "auto"), leaf_size (default = 30), and n_jobs (default = -1)
Efficacy of SNP pre-selection in real datasets of S. pyogenes and S. suis
We test the efficacy of the procedure of SNP pre-selection prior to clustering by employing real datasets from two bacterial species, S. pyogenes and S. suis of dense genotypes.
For the datasets of S. pyogenes, a total of 69,171 core SNPs were obtained across 46 representative strains and selection of SNPs based on phenotypic association with the disease acute rheumatic fever reduced the number of SNPs to 1,631 (Additional file 1: Table S1, S2 and S3). SweepCluster was subsequently applied to the two SNP datasets and identified 215 and 131 significant clusters (p value ≤ 0.05), respectively (Additional file 11: Figure S3, Additional file 1: Table S4 and S5). The relevance of the identified clusters to gene-specific sweeps is confirmed by the significant difference of population differentiation Fst between the SNPs within the clusters and those outside clusters (p value < 2.0 × 10–16) (Additional file 11: Figure S3C). We then used linkage disequilibrium (LD) between SNPs within the clusters as a proxy to examine the effect of pre-selection. A snapshot of the comparison of the LD patterns before and after pre-selection is shown in Fig. 4A, B. The average LD within clusters was significantly increased after performing SNP pre-selection (p value < 2.2 × 10–16), indicating the significant effect of pre-selection on diminishing the spurious signals in inferring sweep regions (Fig. 4C).
Comparison of the LD patterns for the SNPs before and after pre-selection for the genotype datasets of S. pyogenes (A–C) and S. suis (D–F). A, D The LD pattern of SNPs in the most significant cluster for all segregating SNPs from S. pyogenes and S. suis, respectively. B, E The LD pattern of the selected SNPs with phenotypic association in S. pyogenes and population differentiation in S. suis. C, F Distribution of the average level of inter-SNP LD in the clusters for all segregating SNPs and the selected subset of SNPs from S. pyogenes and S. suis, respectively. The LD pattern in (A) involves 1,014 SNPs located in the genomic region 1,273,267–1,286,739 of S. pyogenes AP53. The pattern in (B) involves the same set of SNPs as those used in Fig. 5E of Ref.10 and includes 1631 SNPs associated with acute rheumatic fever. The LD pattern in (D) involves 1787 SNPs located in the genomic region 2,012,889–2,018,654 of S. suis BM407. The pattern in (E) includes 2,205 SNPs associated with population differentiation of S. suis. The LD patterns were generated by Haploview based on the pair-wise measure of the linkage disequilibrium D' and log likelihood of odds ratio LOD. The different LD levels are indicated in color with red for the strongest LD (D' = 1 and LOD > 2), pink for the intermediate LD (D' < 1 and LOD > 2) in pink, white for the weak LD (D' < 1 and LOD < 2) in white, and purple for uninformative (D' = 1 and LOD < 2). The average inter-SNP LD (measured as correlation coefficient r2) was significantly increased for SNPs subject to pre-selection. The between-group difference was evaluated using Wilcoxon rank-sum test
We carried out a similar analysis for the genomic data of S. suis as that for S. pyogenes. A total of 236,860 core SNPs were obtained across 209 non-redundant strains of S. suis, and 349 clusters were identified using SweepCluster (p value ≤ 0.05) (Additional file 12: Figure S4A, Additional file 2: Table S6, S7 and S8). Without pre-selection of SNPs, we found that the clusters are densely distributed on the genome, implying that many of the clusters may contain false positive signals of selective sweep. Therefore, we selected the SNPs associated with the differentiation of two subpopulations using the Chi-squared test (Additional file 9: Figure S1C). A total of 2,205 SNPs satisfy the significance threshold (p-value ≤ 0.05) and were subject to cluster detection using SweepCluster (Additional file 2: Table S9). A total of 111 clusters were identified with significance (p-value ≤ 0.05) (Additional file 12: Figure S4B and Additional file 2: Table S10). Similarly, the relevance of the identified clusters to gene-specific sweeps is confirmed by the difference in population differentiation Fst between the SNPs within the clusters and those outside clusters (p-value < 2.0 × 10⁻¹⁶) (Additional file 12: Figure S4C). We examined the effect of SNP pre-selection by calculating the average inter-SNP LD within the clusters (Fig. 4D–F). The results reveal a higher level of average LD in the clusters from the selected SNPs than that from the whole set of SNPs (p-value < 4.0 × 10⁻⁷), reiterating the efficiency of our strategy for identification of signals of sweep regions.
Influence of SNP pre-selection methods on SNP clustering
In order to examine the influence of different pre-selection methods on SNP clustering, we further used population differentiation Fst for SNP pre-selection on the S. pyogenes dataset and compared the clustering results with those from the SNPs pre-selected by phenotypic association above. We obtained 2,729 selected SNPs with significant population differentiation (Fst ≥ 0.6), covering a large proportion (1,277 SNPs, 78.3%) of the SNPs selected using phenotypic association (Additional file 13: Figure S5A). We then performed clustering with SweepCluster for the 1,277 overlapping SNPs and the 2,729 SNPs, generating 114 and 158 significant clusters, respectively (Additional file 13: Figure S5C and S5D). The two sets of clusters cover 77% and 89% of the 131 clusters (with at least 90% common SNPs) detected from the SNPs with phenotypic association. This indicates that the SNP clustering results are robust to the method of pre-selection.
Application in empirical datasets of V. cyclitrophicus
We benchmarked our method using the dataset in Ref. [9], the only currently available study of SNP clusters under gene-specific sweep in bacteria. We processed the genomic data from the 20 strains of V. cyclitrophicus (13 L strains and 7 S strains) to obtain the ecoSNPs associated with ecological differentiation between the L and S populations (Additional file 3: Table S11 and S12). Cluster detection was subsequently performed on the ecoSNPs using SweepCluster, and 11 significant clusters were identified (Fig. 5, Table 1 and Additional file 3: Table S13). We validated our results by comparing with the eleven clusters reported in Ref. [9], excluding two of them: cluster2, annotated as "Conserved protein", whose equivalent gene in our reference cannot be precisely located, and cluster4, which contains flexible genes that do not fall into the core genome. Among the remaining nine clusters, seven were recovered by our method, corresponding to a concordance rate of 78%. It is noted that cluster5 was not recovered because it does not contain non-synonymous or upstream regulatory mutations, reflecting the different clustering strategies of the two studies. Notably, we also identified with high significance two novel clusters, cluster12 and cluster13, containing 6 and 36 SNPs, respectively (Table 1 and Additional file 3: Table S13).
SNP clusters with signatures of selective sweep identified by SweepCluster for ecoSNPs of V. cyclitrophicus. The clusters are represented as colored bars with the bar height indicating the number of ecoSNPs in the clusters. Previously reported clusters in Ref. [9] recovered by SweepCluster are indicated in black numbering from 1 to 11 and those new clusters identified by SweepCluster indicated in red from 12 to 13
Table 1 List of gene clusters identified by SweepCluster for ecoSNPs in V. cyclitrophicus
In summary, the cluster comparison shows that the differences in the identified clusters between our results and those in Ref. [9] are mainly due to the distinct clustering methods and reference genomes used in the two studies. The current study used the strain V. cyclitrophicus ECSMB14105, the only strain of this bacterium with a complete genome assembly, while the study of [9] took an alternative but closely related species, V. splendidus (12B01), as the reference. Therefore, the concordance rate between the two studies is likely underestimated.
Application in empirical human genotype datasets
Although SweepCluster was developed specifically for prokaryotic data of dense genotypes, it is helpful to test whether it is also applicable to eukaryotic data. We examined three well-characterized gene regions (LCT, EDAR, and PCDH15) under selective sweep in pairwise populations of EUR, AFR, and EAS from the human 1000 Genomes Project genotype datasets [30]. We first performed SNP pre-selection based on the population differentiation Fst at a series of cutoff values, and then applied SweepCluster to each dataset of selected SNPs to search for gene regions under potential selective sweep (Additional files 4–7). At the threshold of Fst = 0.4, all three gene loci were recovered as significant regions under selective sweep (Fig. 6). The LCT gene, encoding lactase, was previously shown to be associated with lactase persistence in European populations, and the region around it has been acknowledged as the target of a strong selective sweep [19, 20, 40]. In our cluster detection, the LCT locus along with the flanking gene regions (R3HDM1, UBXN4, and MCM6) forms a significant cluster of 57 variants spanning 235.6 kb (p-value = 5.7 × 10⁻⁶), consistent with strong positive selection. The gene EDAR is involved in ectodermal development, and the missense mutation V370A showed evidence of association with hair thickness in East Asians [41, 42]. The region around EDAR has been identified as a locus undergoing a strong selective sweep [19, 42, 43]. We localized the EDAR-centered region (GCC2, LIMS1 and EDAR) of 132 variants (including V370A) spanning 145.8 kb with significance (p-value < 10⁻⁸), implying strong selection signals. The gene PCDH15 encodes protocadherin 15, and previous studies showed evidence of positive selection in East Asian populations [43, 44]. We recovered the PCDH15 locus as a highly significant sweep region consisting of more than 300 variants spanning 369.5 kb (p-value < 10⁻⁸), indicating a strong signature of selective sweep.
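As a sketch of how such a per-SNP Fst pre-selection might be computed, the snippet below uses the two-population Hudson estimator discussed by Bhatia et al. (cited in the reference list); the input arrays are hypothetical placeholders, and this is not the paper's actual pipeline.

import numpy as np

def hudson_fst(p1, p2, n1, n2):
    """Per-SNP Hudson Fst; p1, p2: allele frequencies; n1, n2: haploid sample sizes."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return np.where(den > 0, num / den, np.nan)

# e.g. keep variants at the Fst >= 0.4 cutoff used above (input arrays are hypothetical):
# fst = hudson_fst(p_eur, p_eas, n_eur, n_eas)
# keep = np.flatnonzero(fst >= 0.4)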
Sweep regions recovered by SweepCluster at three gene loci in the human 1000 Genomes Project genotype datasets. A LCT, B EDAR, and C, D PCDH15. A series of SNP selection criteria of Fst were used to obtain distinct genotype datasets. The sweep regions are represented as colored bars, with the bar height indicating the number of SNPs in the region (the cluster size) and the bar width indicating the spanning length. The significance, shown as -log10(p-value), is indicated in gradient colors
Our results show that the size and significance of the sweep regions depend on the SNP selection threshold of Fst, but the detection efficiency is robust over a wide range of Fst. The signals of selective sweep emerge in all three gene regions at the threshold of Fst = 0.4 and persist up to Fst = 0.7. Above this threshold, the sweep signals in all three genes disappear. This is because few mutations remain at such high levels of Fst, and those that do are sparsely distributed across the chromosome, making spatial clustering of the mutations infeasible.
In order to assess the overall performance of SweepCluster in detecting sweep regions for eukaryotic genotype data, we collected a catalog of 20 representative gene loci known to be under selective sweep or previously identified as being under positive selection by multiple studies [17, 19, 20, 40, 43], and examined whether they could be recovered by SweepCluster (Additional file 8). Notably, 16 of them (80%) were recovered by SweepCluster and 14 (70%) reached high statistical significance (p-value < 0.006). The high recovery rate reiterates the efficiency and robustness of SweepCluster in detecting sweep regions in eukaryotic data.
Optimization of parameters
The performance evaluation based on simulated data showed that the performance of the clustering algorithm in SweepCluster is closely related to the sweep length. Therefore, proper estimation of sweep lengths is critical for confident inference of selective sweep regions. Unfortunately, in many cases it is not straightforward to derive the sweep length from genotype data. We therefore provide in the package a simulation script "sweep_lg_simulation.sh" to search for an optimal estimate of the sweep length for a specific genotype dataset. It is particularly suitable for prokaryotic data, because prokaryotes use gene conversion as the main vehicle for introducing selective sweeps and the sweeps are generally uniform in size [21].
We performed the simulation by calculating the number of sweep regions inferred by SweepCluster at a series of sweep-length values and then fitting a non-linear model for the relationship between the number of sweep regions and the sweep length. The optimal estimate of the sweep length is determined by the point of maximum curvature of the fitted model. Here we present the simulation results for the three real datasets of S. pyogenes, S. suis, and V. cyclitrophicus (Fig. 7). All three datasets reach maximum curvature at a sweep length of ~2,000 bp (1,638 bp for S. pyogenes, 1,500 bp for S. suis, and 2,157/1,989 bp for the two chromosomes of V. cyclitrophicus). This is consistent with our previous estimate of 1,789 bp for S. pyogenes using the alternative tool ClonalFrame [10, 45].
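A minimal sketch of this optimization, with a cubic smoothing spline standing in for the paper's generalized additive model (so the fitting details differ from the published workflow); the curvature is κ = |y''| / (1 + y'²)^(3/2):

import numpy as np
from scipy.interpolate import UnivariateSpline

def optimal_sweep_length(lengths, n_regions):
    lengths = np.asarray(lengths, dtype=float)
    n_regions = np.asarray(n_regions, dtype=float)
    spline = UnivariateSpline(lengths, n_regions, k=3)   # smooth fit to the curve
    x = np.linspace(lengths.min(), lengths.max(), 1000)
    d1 = spline.derivative(1)(x)
    d2 = spline.derivative(2)(x)
    curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    return x[np.argmax(curvature)]   # sweep length at the point of maximum curvature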
Parameter optimization of sweep lengths based on non-linear fitting and maximum curvature in three prokaryotic datasets. A, B S. pyogenes. C, D S. suis. E, F V. cyclitrophicus. The relationship between the number of sweep regions and the sweep length was fitted using generalized additive models (red lines), and the curvature of each fitted curve was calculated using formula (4)
We have proposed a gene-centric spatial clustering approach to identify gene-specific sweeps in bacterial polymorphism data. It targets mutation sites that comply with specific genetic properties of selective sweeps and captures regions whose clustering patterns of those mutations differ from the neutral expectation. Based on the known genetic properties of gene-specific sweeps, the target mutations are first obtained by selecting those with elevated population differentiation, reduced within-population diversity, heightened linkage disequilibrium, or significant phenotype association. The selected subsets of mutations are then subjected to clustering. Our approach for inferring sweep regions therefore employs two layers of information, i.e., the genetic signatures and the spatial distribution patterns of mutations under gene-specific sweeps, in contrast with current methods focusing on only one layer of information in the genotype data [11,12,13,14, 16, 19, 20].
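To make the clustering layer concrete, the following illustrative sketch groups sorted SNP positions whose neighbouring gaps do not exceed the expected sweep length; the actual SweepCluster implementation, including its significance test against the neutral spacing model, is more involved, so treat this only as a sketch of the idea.

import numpy as np

def cluster_positions(positions, sweep_length=2000):
    positions = np.sort(np.asarray(positions))
    clusters, current = [], [int(positions[0])]
    for pos in positions[1:]:
        if pos - current[-1] <= sweep_length:
            current.append(int(pos))    # extend the current cluster
        else:
            clusters.append(current)    # close it and start a new one
            current = [int(pos)]
    clusters.append(current)
    return clusters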
The purpose of selecting target mutations with particular genetic signatures prior to clustering is to remove spurious or uninformative signals and to perform spatial clustering only for the mutations related to selective sweep. The impact of mutation selection was dramatic in our two example datasets from the bacteria S. pyogenes and S. suis. The level of linkage disequilibrium between SNPs, a signature of selective sweep, is significantly increased by selecting the mutations associated with disease phenotypes or population differentiation. The resulting datasets are more sensitive to the statistical test under the neutral model of the spatial distribution of mutations, making it more efficient to identify gene regions under selective sweeps. Using the only available dataset of gene-specific sweeps in bacteria [9], we validated our method, obtaining a concordance rate of 78% for the detected clusters even though the two studies used distinct clustering strategies and reference genomes.
Our approach is specifically designed for prokaryotic data of dense genotypes, such that the mutations with particular genetic properties can be exhaustively obtained and the distribution of those mutations can be statistically distinguished from the null model. However, the tests showed that our method also performs well for eukaryotic data of sparse genotypes. We recovered the well-characterized gene regions (LCT, EDAR, and PCDH15) under selective sweeps in the 1000 Genomes Project genotype datasets. The signals of selective sweep in the three gene loci persist over a wide range of mutation selection criteria, suggesting the robustness of our method in identifying sweep regions in sparse genotype data. Moreover, the spatially aware clustering strategy narrows the resolution of detected sweep regions down to single nucleotides, facilitating the identification of relatively old sweeps with low numbers of selected sites.
Our approach has some limitations. It cannot explicitly distinguish between hard and soft sweeps, or between recent and older sweeps, because sites under varying strengths of selection are treated as a whole in the statistical tests. Our method also does not deal with the confounding effects of background selection: its signatures closely resemble those of true selective sweeps, and confidently separating background selection has been a challenge for many alternative approaches as well.
We have proposed a novel gene-centric approach for identifying gene-specific sweeps, implemented in the Python tool SweepCluster. It performs spatial clustering of polymorphisms to infer regions with signatures of gene-specific sweeps by employing two layers of information, i.e., the genetic properties and the spatial distribution models of the polymorphisms. It is specifically developed for prokaryotic data of dense genotypes and exhibits efficiency and robustness in detecting sweep regions in the validation datasets. It also performs well for eukaryotic data over a wide range of genetic-property parameters. We expect that our new method will be valuable for detecting gene-specific sweeps in diverse genotype data and will provide novel insights into evolutionary selection.
Project name: SweepCluster (a Python tool).
Project home page: https://github.com/BaoCodeLab/SweepCluster
Operating system: Linux.
Programming language: Python.
Other requirements: Python 3.7 or higher, scipy, numpy, pandas, scikit-learn, multiprocessing, R 3.5 or higher.
License: GPL3.0
Any restriction to use by non-academics: Not for non-academics.
All data generated during this study are included in this article and its supplementary files. The genomic sequences of S. pyogenes were downloaded from the NCBI GenBank database (ftp://ftp.ncbi.nlm.nih.gov) and can be browsed at https://www.ncbi.nlm.nih.gov/genome/browse/#!/prokaryotes/175/. The genomic sequences of S. suis strains were downloaded from the NCBI GenBank database (ftp://ftp.ncbi.nlm.nih.gov) and can be browsed at https://www.ncbi.nlm.nih.gov/genome/browse/#!/prokaryotes/199/.
CLR:
Composite likelihood ratio
SNPs:
Single nucleotide polymorphisms
LD:
Linkage disequilibrium
NRMSD:
Normalized root-mean-square of inter-SNP distances
LCT:
Lactase
EDAR:
Ectodysplasin A receptor
PCDH15:
Protocadherin 15
R3HDM1:
R3H domain containing 1
UBXN4:
UBX domain protein 4
MCM6:
Minichromosome maintenance complex component 6
GCC2:
GRIP and coiled-coil domain containing 2
LIMS1:
LIM zinc finger domain containing 1
Stephan W. Selective sweeps. Genetics. 2019;211(1):5.
Cohan FM. Bacterial speciation: genetic sweeps in bacterial species. Curr Biol. 2016;26(3):R112–5.
Shapiro BJ, Polz MF. Microbial speciation. Cold Spring Harb Perspect Biol. 2015;7(10):a018143.
Polz MF, Alm EJ, Hanage WP. Horizontal gene transfer and the evolution of bacterial and archaeal population structure. Trends Genet. 2013;29(3):170–5.
Shapiro BJ, Polz MF. Ordering microbial diversity into ecologically and genetically cohesive units. Trends Microbiol. 2014;22(5):235–47.
Bendall ML, Stevens SLR, Chan L-K, Malfatti S, Schwientek P, Tremblay J, et al. Genome-wide selective sweeps and gene-specific sweeps in natural bacterial populations. ISME J. 2016;10(7):1589–601.
Kashtan N, Roggensack SE, Rodrigue S, Thompson JW, Biller SJ, Coe A, et al. Single-cell genomics reveals hundreds of coexisting subpopulations in wild Prochlorococcus. Science. 2014;344(6182):416–20.
Rosen MJ, Davison M, Bhaya D, Fisher DS. Fine-scale diversity and extensive recombination in a quasisexual bacterial population occupying a broad niche. Science. 2015;348(6238):1019–23.
Shapiro BJ, Friedman J, Cordero OX, Preheim SP, Timberlake SC, Szabó G, et al. Population genomics of early events in the ecological differentiation of bacteria. Science. 2012;336(6077):48–51.
Bao Y-J, Shapiro BJ, Lee SW, Ploplis VA, Castellino FJ. Phenotypic differentiation of Streptococcus pyogenes populations is induced by recombination-driven gene-specific sweeps. Sci Rep. 2016;6:36644.
Kim Y, Stephan W. Detecting a local signature of genetic hitchhiking along a recombining chromosome. Genetics. 2002;160(2):765.
Nielsen R, Williamson S, Kim Y, Hubisz MJ, Clark AG, Bustamante C. Genomic scans for selective sweeps using SNP data. Genome Res. 2005;15(11):1566–75.
Huber CD, DeGiorgio M, Hellmann I, Nielsen R. Detecting recent selective sweeps while controlling for mutation rate and background selection. Mol Ecol. 2016;25(1):142–56.
Akey JM, Zhang G, Zhang K, Jin L, Shriver MD. Interrogating a high-density SNP map for signatures of natural selection. Genome Res. 2002;12(12):1805–14.
Kim Y, Nielsen R. Linkage disequilibrium as a signature of selective sweeps. Genetics. 2004;167(3):1513–24.
Sabeti PC, Reich DE, Higgins JM, Levine HZ, Richter DJ, Schaffner SF, et al. Detecting recent positive selection in the human genome from haplotype structure. Nature. 2002;419(6909):832–7.
Voight BF, Kudaravalli S, Wen X, Pritchard JK. A map of recent positive selection in the human genome. PLOS Biol. 2006;4(3):e72.
Ferrer-Admetlla A, Liang M, Korneliussen T, Nielsen R. On detecting incomplete soft or hard selective sweeps using haplotype structure. Mol Biol Evol. 2014;31(5):1275–91.
Harris AM, Garud NR, DeGiorgio M. Detection and classification of hard and soft sweeps from unphased genotypes by multilocus genotype identity. Genetics. 2018;210(4):1429–52.
Harris AM, DeGiorgio M. A likelihood approach for uncovering selective sweep signatures from haplotype data. Mol Biol Evol. 2020;37(10):3023–46.
Shapiro BJ. Signatures of natural selection and ecological differentiation in microbial genomes. In: Landry CR, Aubin-Horth N, editors. Ecological genomics: ecology and the evolution of genes and genomes, advances in experimental medicine and biology, vol. 781. Dordrecht: Springer; 2013.
Sun YV, Levin AM, Boerwinkle E, Robertson H, Kardia SL. A scan statistic for identifying chromosomal patterns of SNP association. Genet Epidemiol. 2006;30(7):627–35.
Bao Y-J, Liang Z, Mayfield JA, Donahue DL, Carothers KE, Lee SW, et al. Genomic characterization of a pattern D Streptococcus pyogenes emm53 isolate reveals a genetic rationale for invasive skin tropicity. J Bacteriol. 2016;198:1712–24.
Davies MR, Holden MT, Coupland P, Chen JH. Emergence of scarlet fever Streptococcus pyogenes emm12 clones in Hong Kong is associated with toxin acquisition and multidrug resistance. Nat Genet. 2015;47(1):84–7.
Weinert LA, Chaudhuri RR, Wang J, Peters SE, Corander J, Jombart T, et al. Genomic signatures of human and animal disease in the zoonotic pathogen Streptococcus suis. Nat Commun. 2015;6:6740.
Bessen DE, Lizano S. Tissue tropisms in group A streptococcal infections. Future Microbiol. 2010;5(4):623–38.
Barrett JC, Fry B, Maller J, Daly MJ. Haploview: analysis and visualization of LD and haplotype maps. Bioinformatics. 2005;21(2):263–5.
Estrada AA, Gottschalk M, Rossow S, Rendahl A, Gebhart C, Marthaler DG. Serotype and genotype (multilocus sequence type) of Streptococcus suis isolates from the United States serve as predictors of pathotype. J Clin Microbiol. 2019;57(9):e00377-e419.
Huson DH, Bryant D. Application of phylogenetic networks in evolutionary studies. Mol Biol Evol. 2006;23(2):254–67.
Auton A, Brooks LD, Durbin RM, Garrison EP, Kang HM, Korbel JO, et al. A global reference for human genetic variation. Nature. 2015;526(7571):68–74.
Bhatia G, Patterson N, Sankararaman S, Price AL. Estimating and interpreting FST: the impact of rare variants. Genome Res. 2013;23(9):1514–21.
Wang K, Li M, Hakonarson H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 2010;38(16):e164.
Ester M, Kriegel H-P, Sander J, Xu X. A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the second international conference on knowledge discovery and data mining; Portland, Oregon: AAAI Press; 1996. p. 226–31.
Campello RJGB, Moulavi D, Zimek A, Sander J. Hierarchical density estimates for data clustering, visualization, and outlier detection. ACM Trans Knowl Discov Data. 2015;10(1):1–51.
Ankerst M, Breunig MM, Kriegel H-P, Sander J. OPTICS: ordering points to identify the clustering structure. In: Proceedings of the 1999 ACM SIGMOD international conference on management of data; Philadelphia, Pennsylvania, USA: Association for Computing Machinery; 1999. p. 49–60.
Sander J, Ester M, Kriegel H-P, Xu X. Density-Based Clustering in Spatial Databases: The Algorithm GDBSCAN and Its Applications. Data Min Knowl Discov. 1998;2(2):169–94.
Ienco D, Bordogna G. Fuzzy extensions of the DBScan clustering algorithm. Soft Comput. 2018;22(5):1719–30.
He Y, Tan H, Luo W, Mao H, Ma D, Feng S, et al. MR-DBSCAN: An efficient parallel density-based clustering algorithm using MapReduce. In: Proceedings of the 2011 IEEE 17th international conference on parallel and distributed systems: IEEE Computer Society; 2011. p. 473–80.
Corizzo R, Pio G, Ceci M, Malerba D. DENCAST: distributed density-based clustering for multi-target regression. J of Big Data. 2019;6(1):43.
Bersaglieri T, Sabeti PC, Patterson N, Vanderploeg T, Schaffner SF, Drake JA, et al. Genetic signatures of strong recent positive selection at the lactase gene. Am J Hum Genet. 2004;74(6):1111–20.
Fujimoto A, Kimura R, Ohashi J, Omi K, Yuliwulandari R, Batubara L, et al. A scan for genetic determinants of human hair morphology: EDAR is associated with Asian hair thickness. Hum Mol Genet. 2008;17(6):835–43.
Sabeti PC, Varilly P, Fry B, Lohmueller J, Hostetter E, Cotsapas C, et al. Genome-wide detection and characterization of positive selection in human populations. Nature. 2007;449(7164):913–8.
Grossman SR, Shlyakhter I, Karlsson EK, Byrne EH, Morales S, Frieden G, et al. A composite of multiple signals distinguishes causal variants in regions of positive selection. Science. 2010;327(5967):883–6.
Ołdak M. Chapter 8 - Next generation sequencing in vision and hearing impairment. In: Demkow U, Płoski R, editors. Clinical applications for next-generation sequencing. Boston: Academic Press; 2016. p. 153–70.
Didelot X, Falush D. Inference of bacterial microevolution using multilocus sequence data. Genetics. 2007;175(3):1251–66.
The work was partially supported by The Science and Technology Program of Guangzhou, China (201804020053) and Guangdong Province Key Laboratory of Computational Science at the Sun Yat-sen University (2020B1212060032). The funding bodies did not play any roles in the design of the study, in the collection, analysis, or interpretation of data, or in writing the manuscript.
State Key Laboratory of Biocatalysis and Enzyme Engineering, Hubei Collaborative Innovation Center for Green Transformation of Bio-Resources, Hubei Key Laboratory of Industrial Biotechnology, School of Life Sciences, Hubei University, Wuhan, 430062, China
Junhui Qiu, Qi Zhou, Qianjun Chen & Yun-Juan Bao
School of Computer Science and Engineering, Guangdong Province Key Laboratory of Computational Science, and National Engineering Laboratory for Big Data Analysis and Application, Sun Yat-Sen University, Guangzhou, 510275, China
Weicai Ye
Junhui Qiu
Qi Zhou
Qianjun Chen
Yun-Juan Bao
Y-JB and WY conceived the idea. Y-JB and QC supervised the study. JQ and Y-JB developed the software. JQ, QZ, WY, QC, and Y-JB analyzed and interpreted the data. JQ drafted the manuscript. Y-JB, WY, and QC revised the manuscript critically. All authors read and approved the manuscript.
Correspondence to Qianjun Chen or Yun-Juan Bao.
Additional file 1: Table S1.
The list of all segregating SNPs in the core genome of S. pyogenes. Table S2. The list of 46 strains of S. pyogenes used for variant detection. Table S3. SNP clusters identified by SweepCluster from all segregating SNPs of S. pyogenes. Table S4. The list of SNPs from S. pyogenes associated with the disease phenotype acute rheumatic fever (p-value ≤ 0.05). Table S5. SNP clusters identified by SweepCluster from the SNPs associated with the disease phenotype acute rheumatic fever for S. pyogenes.
Additional file 2: Table S6.
The list of all segregating SNPs in the core genome of S. suis. Table S7. The list of 209 strains of S. suis used for variant detection. Table S8. SNP clusters identified by SweepCluster from all segregating SNPs of S. suis. Table S9. The list of SNPs from S. suis associated with the subpopulation differentiation (p-value ≤ 0.05). Table S10. SNP clusters identified by SweepCluster from the SNPs associated with population differentiation for S. suis.
Additional file 3: Table S11.
The list of 20 strains of V. cyclitrophicus used for variant detection. Table S12. The ecoSNPs from V. cyclitrophicus associated with the ecological differentiation. Table S13. The list of SNP clusters identified by SweepCluster for ecoSNPs in V. cyclitrophicus.
Additional file 4. Clusters detected by SweepCluster for chromosome 2 variants between EUR-AFR selected at a series of Fst cutoffs (0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.65, and 0.7).
Additional file 5. Clusters detected by SweepCluster for chromosome 2 variants between EUR-EAS selected at a series of Fst cutoffs (0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.65, and 0.7).
Additional file 6. Clusters detected by SweepCluster for chromosome 10 variants between EUR-EAS selected at a series of Fst cutoffs (0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.65, and 0.7).
Additional file 7. Clusters detected by SweepCluster for chromosome 10 variants between AFR-EAS selected at a series of Fst cutoffs (0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.65, and 0.7).
Additional file 8. Cluster detection at the known gene loci under positive selection from the human 1000 Genomes Project genotype datasets.
Additional file 9: Fig. S1 The phylogenetic trees for selection of non-redundant strains. (A) The trees for each of the seven groups of strains. The grouping was based on the submitting institutions. The strains selected for downstream analysis in each group are indicated by red squares. (B) The final phylogenetic tree of the 209 selected non-redundant genomes. (C) The two subpopulations used for identification of SNPs associated with population differentiation are indicated with numbers.
Additional file 10: Fig. S2 Phylogenetic tree of the 20 strains of V. cyclitrophicus. The ecological partition of the strains is indicated in color for the thirteen L strains and seven S strains.
Additional file 11: Fig. S3 SNP clusters with signatures of selective sweep identified by SweepCluster for the S. pyogenes genotype datasets. (A) The clusters detected from all segregating SNPs in the core genome. (B) The clusters detected from 1,631 selected SNPs with phenotypic association (Chi-squared test p-value ≤ 0.001). The clusters are represented as colored bars, with the bar height indicating the cluster size (the number of SNPs in the cluster) and the bar width indicating the spanning length. The significance of the clustering, evaluated as -log10(p-value), is indicated in gradient colors. The gene loci in the top clusters are shown. (C) Distribution comparison of population differentiation Fst between two groups of SNPs: (i) those falling into the 131 significant clusters involving 1,201 SNPs and (ii) those outside of clusters among the whole-genome core SNPs. The distribution of Fst for the latter group was constructed using a random sampling of a quarter of the 67,540 core SNPs outside clusters, and the calculation of Fst was based on a window size of 10.
Additional file 12: Fig. S4 SNP clusters with signatures of selective sweep identified by SweepCluster for the S. suis genotype datasets. (A) The clusters detected from all segregating SNPs in the core genome of S. suis. (B) The clusters detected from 2,205 selected SNPs associated with population differentiation (Chi-squared test p-value ≤ 0.05). The clusters are represented as colored bars, with the bar height indicating the cluster size (the number of SNPs) and the bar width indicating the spanning length. The significance of the clustering, evaluated as -log10(p-value), is indicated in gradient colors. The gene loci in the top clusters are shown. (C) Distribution comparison of population differentiation Fst between two groups of SNPs: (i) those falling into the 111 significant clusters involving 2,049 SNPs and (ii) those outside of clusters among the whole-genome core SNPs. The distribution of Fst for the latter group was constructed using a random sampling of a ninth of the 234,655 core SNPs outside clusters, and the calculation of Fst was based on a window size of 10.
Additional file 13: Fig. S5 SNP clusters with signatures of selective sweep identified by SweepCluster for S. pyogenes genotypes pre-selected using population differentiation Fst. (A) Comparison of the number of SNPs selected by population differentiation (G1, Fst ≥ 0.6) with that selected by phenotypic association (G2, Chi-squared test p-value ≤ 0.001). (B) The clusters detected from 1,631 selected SNPs with phenotypic association (the same as Fig. S3B, reproduced here for convenient comparison). (C) The clusters detected from 1,277 SNPs selected by both methods. (D) The clusters detected from 2,729 SNPs selected by population differentiation. The clusters are represented as colored bars, with the bar height indicating the cluster size (the number of SNPs in the cluster) and the bar width indicating the spanning length. The significance of the clustering, evaluated as -log10(p-value), is indicated in gradient colors.
Qiu, J., Zhou, Q., Ye, W. et al. SweepCluster: A SNP clustering tool for detecting gene-specific sweeps in prokaryotes. BMC Bioinformatics 23, 19 (2022). https://doi.org/10.1186/s12859-021-04533-6
SweepCluster
SNP clustering
Gene-specific sweep
May 2012, 11(3): 1231-1252. doi: 10.3934/cpaa.2012.11.1231
Dynamics of non-autonomous nonclassical diffusion equations on $R^n$
Cung The Anh 1 and Tang Quoc Bao 2
Department of Mathematics, Hanoi National University of Education, 36 Xuan Thuy, Cau Giay, Hanoi, Vietnam
Faculty of Applied Mathematics and Informatics, Hanoi University of Science and Technology, No 1 Dai Co Viet, Hai Ba Trung, Hanoi, Vietnam
Received December 2010 Revised May 2011 Published December 2011
We consider the Cauchy problem for a non-autonomous nonclassical diffusion equation of the form $u_t-\varepsilon\Delta u_t - \Delta u+f(u)+\lambda u=g(t)$ on $R^n$. Under an arbitrary polynomial growth order of the nonlinearity $f$ and a suitable exponential growth of the external force $g$, using the method of tail-estimates and the asymptotic a priori estimate method, we prove the existence of an $(H^{1}(R^n)\cap L^{p}(R^n), H^{1}(R^n)\cap L^{p}(R^n))$-pullback attractor $\hat{A}_{\varepsilon}$ for the process associated to the problem. We also prove the upper semicontinuity of $\{\hat{A}_{\varepsilon}: \varepsilon\in [0,1]\}$ at $\varepsilon = 0$.
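For readability, the Cauchy problem summarized in the abstract can be displayed in full (writing $u_\tau$ for the initial datum at initial time $\tau$ is our notational choice, not taken from the abstract):
$$ u_t - \varepsilon \Delta u_t - \Delta u + f(u) + \lambda u = g(t), \quad x \in R^n, \; t > \tau, \qquad u(x,\tau) = u_\tau(x), \qquad \varepsilon \in [0,1]. $$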
Keywords: non-autonomous nonclassical diffusion equation, weak solution, pullback attractor, upper semicontinuity, method of tail estimates, asymptotic a priori estimate method, unbounded domain.
Mathematics Subject Classification: 35B41, 35K57, 35D05, 35B30.
Citation: Cung The Anh, Tang Quoc Bao. Dynamics of non-autonomous nonclassical diffusion equations on $R^n$. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1231-1252. doi: 10.3934/cpaa.2012.11.1231
E. C. Aifantis, On the problem of diffusion in solids, Acta Mech., 37 (1980), 265. doi: 10.1007/BF01202949.
C. T. Anh and T. Q. Bao, Pullback attractors for a class of non-autonomous nonclassical diffusion equations, Nonlinear Anal., 73 (2010), 399. doi: 10.1016/j.na.2010.03.031.
A. N. Carvalho, J. A. Langa and J. C. Robinson, On the continuity of pullback attractors for evolution processes, Nonlinear Anal., 71 (2009), 1812. doi: 10.1016/j.na.2009.01.016.
Y. Li and C. K. Zhong, Pullback attractors for the norm-to-weak continuous process and application to the nonautonomous reaction-diffusion equations, Appl. Math. Comp., 190 (2007), 1020. doi: 10.1016/j.amc.2006.11.187.
Y. Li, S. Wang and H. Wu, Pullback attractors for non-autonomous reaction-diffusion equations in $L^p$, Appl. Math. Comp., 207 (2009), 373. doi: 10.1016/j.amc.2008.10.065.
J. L. Lions, "Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires," Dunod, 1969.
Q. F. Ma, S. H. Wang and C. K. Zhong, Necessary and sufficient conditions for the existence of global attractors for semigroups and applications, Indiana Univ. Math. J., 51 (2002), 1541. doi: 10.1512/iumj.2002.51.2255.
P. J. Chen and M. E. Gurtin, On a theory of heat conduction involving two temperatures, Z. Angew. Math. Phys., 19 (1968), 614. doi: 10.1007/BF01594969.
C. Sun, S. Wang and C. Zhong, Global attractors for a nonclassical diffusion equation, Acta Math. Sin. (Engl. Ser.), 23 (2007), 1271. doi: 10.1007/s10114-005-0909-6.
C. Sun and M. Yang, Dynamics of the nonclassical diffusion equations, Asymptot. Anal., 59 (2008), 51. doi: 10.3233/ASY-2008-0886.
C. Truesdell and W. Noll, "The Nonlinear Field Theories of Mechanics," Encyclopedia of Physics, Springer, 1995.
B. Wang, Attractors for reaction-diffusion equations in unbounded domains, Physica D, 128 (1999), 41. doi: 10.1016/S0167-2789(98)00304-2.
B. Wang, Pullback attractors for non-autonomous reaction-diffusion equations on $\mathbb R^n$, Front. Math. China, 4 (2009), 563. doi: 10.1007/s11464-009-0033-5.
S. Wang, D. Li and C. Zhong, On the dynamics of a class of nonclassical parabolic equations, J. Math. Anal. Appl., 317 (2006), 565. doi: 10.1016/j.jmaa.2005.06.094.
Y. Xiao, Attractors for a nonclassical diffusion equation, Acta Math. Appl. Sin., 18 (2002), 273. doi: 10.1007/s102550200026.
C. K. Zhong, M. H. Yang and C. Y. Sun, The existence of global attractors for the norm-to-weak continuous semigroup and application to the nonlinear reaction-diffusion equations, J. Differential Equations, 223 (2006), 367. doi: 10.1016/j.jde.2005.06.008.
Kihoon Seong. Low regularity a priori estimates for the fourth order cubic nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5437-5473. doi: 10.3934/cpaa.2020247
Leilei Wei, Yinnian He. A fully discrete local discontinuous Galerkin method with the generalized numerical flux to solve the tempered fractional reaction-diffusion equation. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020319
Xiu Ye, Shangyou Zhang, Peng Zhu. A weak Galerkin finite element method for nonlinear conservation laws. Electronic Research Archive, 2021, 29 (1) : 1897-1923. doi: 10.3934/era.2020097
Gabrielle Nornberg, Delia Schiera, Boyan Sirakov. A priori estimates and multiplicity for systems of elliptic PDE with natural gradient growth. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3857-3881. doi: 10.3934/dcds.2020128
Nguyen Huy Tuan. On an initial and final value problem for fractional nonclassical diffusion equations of Kirchhoff type. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020354
Jianhua Huang, Yanbin Tang, Ming Wang. Singular support of the global attractor for a damped BBM equation. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020345
Biyue Chen, Chunxiang Zhao, Chengkui Zhong. The global attractor for the wave equation with nonlocal strong damping. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021015
Shuxing Chen, Jianzhong Min, Yongqian Zhang. Weak shock solution in supersonic flow past a wedge. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 115-132. doi: 10.3934/dcds.2009.23.115
Abdollah Borhanifar, Maria Alessandra Ragusa, Sohrab Valizadeh. High-order numerical method for two-dimensional Riesz space fractional advection-dispersion equation. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020355
Maika Goto, Kazunori Kuwana, Yasuhide Uegata, Shigetoshi Yazaki. A method how to determine parameters arising in a smoldering evolution equation by image segmentation for experiment's movies. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 881-891. doi: 10.3934/dcdss.2020233
Cung The Anh, Tang Quoc Bao
Search Results: 1 - 10 of 538687 matches for " Milovanov A. V. "
Page 1 /538687
Localization-delocalization transition on a separatrix system of nonlinear Schrodinger equation with disorder
A. V. Milovanov, A. Iomin
Physics, 2012,
Abstract: Localization-delocalization transition in a discrete Anderson nonlinear Schrödinger equation with disorder is shown to be a critical phenomenon, similar to a percolation transition on a disordered lattice, with the nonlinearity parameter thought of as the control parameter. In the vicinity of the critical point the spreading of the wave field is subdiffusive in the limit $t\rightarrow+\infty$. The second moment grows with time as a power law $\propto t^\alpha$, with $\alpha$ exactly 1/3. This critical spreading finds its significance in connection with the general problem of transport along separatrices of dynamical systems with many degrees of freedom and is mathematically related to a description in terms of fractional derivative equations. Above the delocalization point, with the criticality effects stepping aside, we find that the transport is subdiffusive with $\alpha = 2/5$, consistently with the results from previous investigations. A threshold for unlimited spreading is calculated exactly by mapping the transport problem on a Cayley tree.
Topology of delocalization in the nonlinear Anderson model and anomalous diffusion on finite clusters
Abstract: This study is concerned with the destruction of Anderson localization by a nonlinearity of the power-law type. We suggest, using a nonlinear Schrödinger model with random potential on a lattice, that quadratic nonlinearity plays a dynamically very distinguished role in that it is the only type of power nonlinearity permitting an abrupt localization-delocalization transition with unlimited spreading already at the delocalization border. For super-quadratic nonlinearity the borderline spreading corresponds to diffusion processes on finite clusters. We have proposed an analytical method to predict and explain such transport processes. Our method uses a topological approximation of the nonlinear Anderson model and, if the exponent of the power nonlinearity is either an integer or a half-integer, will yield the wanted value of the transport exponent via a triangulation procedure in a Euclidean mapping space. A kinetic picture of the transport arising from these investigations uses a fractional extension of the diffusion equation to fractional derivatives over time, signifying non-Markovian dynamics with algebraically decaying time correlations.
EXTRACTING DNA USING PEQGOLD PLANT DNA MINI KIT
Milovanov A. V., Troshin L. P.
Polythematic Online Scientific Journal of Kuban State Agrarian University, 2013,
Abstract: In the present article, a new method of DNA extraction from fresh and herbarium grape leaves for subsequent sequencing is described.
Functional background of the Tsallis entropy: 'coarse-grained' systems and 'kappa' distribution functions
A. V. Milovanov, L. M. Zelenyi
Nonlinear Processes in Geophysics (NPG), 2000,
Abstract: The concept of the generalized entropy is analyzed, with particular attention to the definition postulated by Tsallis [J. Stat. Phys. 52, 479 (1988)]. We show that the Tsallis entropy can be rigorously obtained as the solution of a nonlinear functional equation; this equation represents the entropy of a complex system via the partial entropies of the subsystems involved, and includes two principal parts. The first part is linear (additive) and leads to the conventional, Boltzmann, definition of entropy as the logarithm of the statistical weight of the system. The second part is multiplicative and contains all sorts of multilinear products of the partial entropies; inclusion of the multiplicative terms is shown to reproduce the generalized entropy exactly in the Tsallis sense. We speculate that the physical background for considering the multiplicative terms is the role of the long-range correlations supporting the "macroscopic" ordering phenomena (e.g., formation of the "coarse-grained" correlated patterns). We prove that the canonical distribution corresponding to the Tsallis definition of entropy coincides with the so-called "kappa" distribution which appears in many physical realizations. This has led us to associate the origin of the "kappa" distributions with the "macroscopic" ordering ("coarse-graining") of the system. Our results indicate that an application of the formalism based on the Tsallis notion of entropy might actually have sense only for the systems whose statistical weights, Ω, are relatively small. (For the "coarse-grained" systems, the weight Ω could be interpreted as the number of the "grains".) For large Ω (i.e., Ω → ∞), the standard statistical mechanical formalism is advocated, which implies the conventional, Boltzmann definition of entropy as ln Ω.
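For reference, the standard formulas behind this discussion (textbook material on Tsallis statistics, not quoted from the paper itself) are the Tsallis entropy of a distribution $\{p_i\}$ and its pseudo-additive composition law for independent subsystems $A$ and $B$:
$$ S_q = \frac{1 - \sum_i p_i^{q}}{q - 1}, \qquad S_q(A+B) = S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B), $$
with the additive Boltzmann form $S = -\sum_i p_i \ln p_i$ recovered as $q \to 1$; the multiplicative cross-term is precisely the kind of contribution the functional-equation construction described above must reproduce.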
STUDY OF INDIGENOUS RUSSIAN GRAPE VARIETIES USING MICROSATELLITE MARKERS
Zvyagin A. S., Milovanov A. V., Troshin L. P.
Abstract: The analysis of genetic polymorphisms of 12 autochthonous grape varieties grown in the National Ampelographic Collection of Russia (Anapa district of the Krasnodar region) was performed through the study of allelic diversity at six microsatellite loci: VRZAG79, VVMD5, VVMD7, VVMD27, VRZAG62 and VVS2. We found that all native varieties have a unique set of alleles. The assessment of genetic relationships among the varieties was performed using cluster analysis. Data for DNA certification of the investigated grape genotypes have also been obtained.
THREE SIBLINGS OF MODERN PRIVATE VITICULTURE OF RUSSIA AND UKRAINE
Troshin L. P., Milovanov A. V., Mahovickiy B. A.
Abstract: In the present article, we describe comparative ampelographic data from the biometric evaluation of leaf parameters of three table grape varieties: Preobragenie, Victor and Jubiley Novocherkasska, widespread in the amateur and farming areas of Russia and Ukraine.
PROGRESS IN GERMPLASM IDENTIFICATION AND GENOTYPING METHODS IN THE STUDY OF THREE TABLE GRAPE VARIETIES
Abstract: In the present article, we describe comparative ampelographic data from the biometric evaluation of leaf parameters of three table grape varieties: Preobragenie, Victor and Jubiley Novocherkasska, widespread in the amateur and farming areas of Russia and Ukraine. We also present results of molecular genetic analysis of DNA from these table grape varieties.
E-pile model of self-organized criticality
A. V. Milovanov, K. Rypdal, J. J. Rasmussen
Abstract: The concept of percolation is combined with a self-consistent treatment of the interaction between the dynamics on a lattice and the external drive. Such a treatment can provide a mechanism by which the system evolves to criticality without fine tuning, thus offering a route to self-organized criticality (SOC) which in many cases is more natural than the weak random drive combined with boundary loss/dissipation as used in standard sand-pile formulations. We introduce a new metaphor, the e-pile model, and a formalism for electric conduction in random media to compute critical exponents for such a system. Variations of the model apply to a number of other physical problems, such as electric plasma discharges, dielectric relaxation, and the dynamics of the Earth's magnetotail.
Pseudochaos and low-frequency percolation scaling for turbulent diffusion in magnetized plasma
Alexander V. Milovanov
Physics, 2009, DOI: 10.1103/PhysRevE.79.046403
Abstract: The basic physics properties and simplified model descriptions of the paradigmatic "percolation" transport in low-frequency, electrostatic (anisotropic magnetic) turbulence are theoretically analyzed. The key problem being addressed is the scaling of the turbulent diffusion coefficient with the fluctuation strength in the limit of slow fluctuation frequencies (large Kubo numbers). In this limit, the transport is found to exhibit pseudochaotic, rather than simply chaotic, properties associated with the vanishing Kolmogorov-Sinai entropy and anomalously slow mixing of phase space trajectories. Based on a simple random walk model, we find the low-frequency, percolation scaling of the turbulent diffusion coefficient to be given by $D/\omega\propto Q^{2/3}$ (here $Q\gg 1$ is the Kubo number and $\omega$ is the characteristic fluctuation frequency). When the pseudochaotic property is relaxed the percolation scaling is shown to cross over to Bohm scaling. The features of turbulent transport in the pseudochaotic regime are described statistically in terms of a time fractional diffusion equation with the fractional derivative in the Caputo sense. Additional physics effects associated with finite particle inertia are considered.
Percolation Models of Self-Organized Critical Phenomena
Abstract: In this chapter of the e-book "Self-Organized Criticality Systems" we summarize some theoretical approaches to self-organized criticality (SOC) phenomena that involve percolation as an essential key ingredient. Scaling arguments, random walk models, linear-response theory, and fractional kinetic equations of the diffusion and relaxation type are presented on an equal footing with theoretical approaches of greater sophistication, such as the formalism of the discrete Anderson nonlinear Schrödinger equation, Hamiltonian pseudochaos, conformal maps, and fractional derivative equations of the nonlinear Schrödinger and Ginzburg-Landau type. Several physical consequences are described which are relevant to transport processes in complex systems. It is shown that a state of self-organized criticality may be unstable against a bursting ("fishbone") mode when certain conditions are met. Finally we discuss SOC-associated phenomena, such as: self-organized turbulence in the Earth's magnetotail (in terms of the "Sakura" model), phase transitions in SOC systems, mixed SOC-coherent behavior, and periodic and auto-oscillatory patterns of behavior. Applications of the above pertain to the phenomena of magnetospheric substorms, market crashes, and global climate change, and are discussed in some detail. Finally we address the frontiers in the field in association with the emerging projects in fusion research and space exploration.
A tag is a keyword or label that categorizes your question with other, similar questions. Using the right tags makes it easier for others to find and answer your question.
arduino× 6619
Be sure to use the Arduino Stack Exchange for questions that are more Arduino and less electronics.
20 asked this week, 62 this month
microcontroller× 6575
A device which includes a central processing unit (CPU), memory, and (generally) an assortment of I/O peripherals (UART, ADC, DAC, general-purpose I/O, I2C, etc.) in a tightly-coupled standalone packa…
power-supply× 6022
An electronic device which supplies electrical energy to a load. Can be AC or DC input. Typically DC output.
voltage× 5196
Voltage, otherwise known as electrical potential difference (denoted ∆V and measured in volts) is the difference in electric potential between two points (adapted from Wikipedia). Voltage can be const…
operational-amplifier× 4875
Questions relating to the construction and applications of operational amplifiers.
power× 4589
a primary concern for the design under discussion. Use the "low-power" tag when that applies.
transistors× 4535
a semiconductor device that can amplify signals and switch power. Most used types are bipolar (BJT, for Bipolar Junction Transistor), UJT (Unijunction transistor) and MOSFET (FET, fo…
led× 4369
a light-emitting diode. Lighting an LED is considered the "Hello world" of a circuit design, and it can be as simple as putting a series resistor or can get more complicated, involving PWM a…
batteries× 4030
a device consisting of one or more electrochemical cells that convert stored chemical energy into electrical energy
capacitor× 3942
A fundamental electronic component that stores energy in an electric field, commonly used in filtering applications.
mosfet× 3908
A transconductance (using voltage to control current) electronic component used for switching and amplification. Acronym for Metal-Oxide-Semiconductor Field-Effect Transistor. (from http://en.wikiped…
circuit-analysis× 3646
the process of finding the voltages across, and the currents through, every component in the network.
pcb× 3626
the acronym to Printed Circuit Board. A PCB is a carrier for the circuit's components and their electrical connections.
9 asked this week, 46 this month
current× 3537
Flow of electric charge - typically movement of charge carriers, such as electrons. Measured in amperes (A).
amplifier× 3145
adapt the range of the signal to a requirement, to make it more robust for transmission, or to satisfy interface requirement (like input/output impedance)
digital-logic× 3079
Digital electronics treats discrete signals, unlike analog electronics that treat continuous signals. Digital logic is used to perform arithmetic operations with electric signals, and constitutes the …
22 asked this month, 489 this year
resistors× 2890
A resistor obeys Ohm's law (V=IR); the current through it is equal to the voltage across it divided by the resistance (equivalently $I=\frac{V}{R}$)
rf× 2481
Short for Radio-Frequency. Frequencies at which radiation (intentional or not) plays a role. Typically associated with wireless communications, but also relevant for high-speed PCB design.
pic× 2402
a brand of 8, 16, and 32 bit RISC microcontrollers manufactured by Microchip. "PIC" originally was an acronym for "Peripheral Interface Controller".
motor× 2364
An electrical actuator that converts electrical energy into rotational motion or sustained linear motion (linear motor). There are many types of electric motors. If the specific type of motor is known…
usb× 2358
Universal Serial Bus. If your question relates to a specific chip, please mention it in your question.
pcb-design× 2331
About designing the boards which carry the components of an electronic circuit. For questions about getting them built instead use PCB-fabrication. If your question is specific to a certain CAD tool, …
switches× 2277
Devices to interrupt or route a signal or power one of several ways.
voltage-regulator× 2272
an analog circuit that produces a stable output voltage that doesn't vary with input voltage or load changes. Switching regulators are much more efficient than linear ones.
sensor× 2265
Sensors convert a physical quantity (e.g. temperature, pressure) into an electrical signal.
diodes× 2264
semiconductor components made from a P-type and N-type silicon material, that allows current to only flow in one direction.
transformer× 2230
A transformer couples two or more AC signals through a magnetic field. Often used as galvanic isolation and to transform one AC voltage to another.
audio× 2225
Questions about designing electronics for measuring, processing, and amplifying audio signals.
ac× 2179
refers to alternating current mains power. It usually applies to voltages with >100V RMS, but can also be used for ex. 24V industrial AC power.
fpga× 2140
a logic chip that is configured by the customer after manufacturing—hence "field-programmable".
analog× 2104
Analog circuits have a range of voltages, rather than just two as in digital logic.
relay× 2036
an electrically controlled switch. Electromechanical relays use an electromagnet to activate mechanical contacts, solid-state relays use semiconductor switches.
integrated-circuit× 1975
an electronic circuit built onto a single plate of a semiconductor material, normally silicon. Modern ICs may contain billions of transistors and they have played a major…
adc× 1947
an Analog to Digital Converter. This device converts analog signals into digital form. It is mainly used by the digital circuitry to take analog measurements.
battery-charging× 1918
Please specify the battery type in your question. Include chemistry (e.g. lead-acid), voltage, number of cells and how they are connected (series or parallel), capacity (in A·h or W·h).
dc× 1915
DC stands for Direct Current, which means the flow of electric charge in a single direction. Examples of a DC source are batteries, solar panels, dynamos.
AI News, Neural Networks for Beginners: Popular Types and Applications
On Sunday, June 3, 2018
Neural Networks for Beginners: Popular Types and Applications
Recently there has been a great buzz around the words "neural network" in the field of computer science, and the topic has attracted a great deal of attention from many people.
Each neuron multiplies an initial value by some weight, sums results with other values coming into the same neuron, adjusts the resulting number by the neuron's bias, and then normalizes the output with an activation function.
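In code, that single-neuron computation might look like the following sketch (sigmoid chosen as an example of a normalizing activation; the numbers are arbitrary):

import numpy as np

def neuron(inputs, weights, bias):
    z = np.dot(weights, inputs) + bias     # weighted sum of inputs, adjusted by the bias
    return 1.0 / (1.0 + np.exp(-z))        # normalize the output with an activation

print(neuron(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, -0.2]), bias=0.05))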
A key feature of neural networks is an iterative learning process in which records (rows) are presented to the network one at a time, and the weights associated with the input values are adjusted each time.
The network processes the records in the "training set" one at a time, using the weights and functions in the hidden layers, then compares the resulting outputs against the desired outputs.
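A toy version of that record-at-a-time loop, for a single sigmoid unit under squared-error loss (the learning rate and epoch count are arbitrary illustration choices):

import numpy as np

def train(records, targets, epochs=100, lr=0.5):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=records.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(records, targets):    # one training record at a time
            y = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
            delta = (y - t) * y * (1.0 - y)   # error gradient at the output
            w -= lr * delta * x               # adjust the weights...
            b -= lr * delta                   # ...and the bias
    return w, b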
We know that after training, each layer extracts higher and higher-level features of the dataset (input), until the final layer essentially makes a decision on what the input features refer to.
This approach is based on the observation that random initialization is a bad idea and that pre-training each layer with an unsupervised learning algorithm can allow for better initial weights.
A stochastic corruption process randomly sets some of the inputs to zero, forcing the denoising autoencoder to predict missing (corrupted) values for randomly selected subsets of missing patterns.
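The corruption step itself is simple; a sketch (the 30% corruption level is only an example):

import numpy as np

def corrupt(x, zero_fraction=0.3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= zero_fraction   # True for the entries we keep
    return x * mask                               # randomly zeroed copy of the input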
Also, MLP neural network prediction accuracy depended greatly on neural network architecture, pre-processing of data, and the type of problem for which the network was developed.
The detector evaluates the input image at low resolution to quickly reject non-face regions and carefully process the challenging regions at higher resolution for accurate detection.
In the last chapter we learned that deep neural networks are often much harder to train than shallow neural networks.
We'll also look at the broader picture, briefly reviewing recent progress on using deep nets for image recognition, speech recognition, and other applications.
We'll work through a detailed example - code and all - of using convolutional nets to solve the problem of classifying handwritten digits from the MNIST data set.
As we go we'll explore many powerful techniques: convolutions, pooling, the use of GPUs to do far more training than we did with our shallow networks, the algorithmic expansion of our training data (to reduce overfitting), the use of the dropout technique (also to reduce overfitting), the use of ensembles of networks, and others.
We conclude our discussion of image recognition with a survey of some of the spectacular recent progress using networks (particularly convolutional nets) to do image recognition.
We'll briefly survey other models of neural networks, such as recurrent neural nets and long short-term memory units, and how such models can be applied to problems in speech recognition, natural language processing, and other areas.
And we'll speculate about the future of neural networks and deep learning, ranging from ideas like intention-driven user interfaces, to the role of deep learning in artificial intelligence.
For the $28 \times 28$ pixel images we've been using, this means our network has $784$ ($= 28 \times 28$) input neurons.
Our earlier networks work pretty well: we've obtained a classification accuracy better than 98 percent, using training and test data from the MNIST handwritten digit data set.
But the seminal paper establishing the modern subject of convolutional networks was a 1998 paper, 'Gradient-based learning applied to document recognition', by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner.
LeCun has since made an interesting remark on the terminology for convolutional nets: 'The [biological] neural inspiration in models like convolutional nets is very tenuous.
That's why I call them 'convolutional nets' not 'convolutional neural nets', and why we call the nodes 'units' and not 'neurons' '.
Despite this remark, convolutional nets use many of the same ideas as the neural networks we've studied up to now: ideas such as backpropagation, gradient descent, regularization, non-linear activation functions, and so on.
In a convolutional net, it'll help to think instead of the inputs as a $28 \times 28$ square of neurons, whose values correspond to the $28 \times 28$ pixel intensities we're using as inputs.
To be more precise, each neuron in the first hidden layer will be connected to a small region of the input neurons, say, for example, a $5 \times 5$ region, corresponding to $25$ input pixels.
So, for a particular hidden neuron, we might have connections from a small $5 \times 5$ patch of the input (figure omitted). That region in the input image is called the local receptive field for the hidden neuron.
To illustrate this concretely, let's start with a local receptive field in the top-left corner. Then we slide the local receptive field over by one pixel to the right (i.e., by one neuron), to connect to a second hidden neuron.
Note that if we have a $28 \times 28$ input image, and $5 \times 5$ local receptive fields, then there will be $24 \times 24$ neurons in the hidden layer.
This is because we can only move the local receptive field $23$ neurons across (or $23$ neurons down), before colliding with the right-hand side (or bottom) of the input image.
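The arithmetic behind these layer sizes is just the valid-convolution output formula:

def conv_output_size(n, k, stride=1):
    return (n - k) // stride + 1       # number of positions the receptive field can take

print(conv_output_size(28, 5))         # -> 24, hence a 24 x 24 hidden layer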
In this chapter we'll mostly stick with stride length $1$, but it's worth knowing that people sometimes experiment with different stride lengths. (As was done in earlier chapters, if we're interested in trying different stride lengths then we can use validation data to pick out the stride length which gives the best performance. The same approach may also be used to choose the size of the local receptive field - there is, of course, nothing special about using a $5 \times 5$ local receptive field. In general, larger local receptive fields tend to be helpful when the input images are significantly larger than the $28 \times 28$ pixel MNIST images.)
In other words, for the $j, k$th hidden neuron, the output is: \begin{eqnarray} \sigma\left(b + \sum_{l=0}^4 \sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \right). \tag{125} \end{eqnarray} Here, $\sigma$ is the neural activation function, $b$ is the shared value for the bias, $w_{l,m}$ is a $5 \times 5$ array of shared weights, and $a_{x, y}$ denotes the input activation at position $x, y$.
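A direct, if deliberately naive, numpy transcription of this equation (dimensions chosen to match the text; real libraries use much faster implementations):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_feature_map(a0, w, b):
    """a0: 28x28 input activations; w: 5x5 shared weights; b: shared bias."""
    k = w.shape[0]
    size = a0.shape[0] - k + 1                   # 24 for a 28x28 input
    out = np.empty((size, size))
    for j in range(size):
        for m in range(size):
            # same weights and bias applied to every local receptive field
            out[j, m] = sigmoid(b + np.sum(w * a0[j:j + k, m:m + k]))
    return out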
Informally, think of the feature detected by a hidden neuron as the kind of input pattern that will cause the neuron to activate: it might be an edge in the image, for instance, or maybe some other type of shape.
To see why this makes sense, suppose the weights and bias are such that the hidden neuron can pick out, say, a vertical edge in a particular local receptive field.
To put it in slightly more abstract terms, convolutional networks are well adapted to the translation invariance of images: move a picture of a cat (say) a little ways, and it's still an image of a cat. (In fact, for the MNIST digit classification problem we've been studying, the images are centered and size-normalized.)
One of the early convolutional networks, LeNet-5, used $6$ feature maps, each associated to a $5 \times 5$ local receptive field, to recognize MNIST digits.
Let's take a quick peek at some of the features which are learned. (The feature maps illustrated come from the final convolutional network we train.)
Each map is represented as a $5 \times 5$ block image, corresponding to the $5 \times 5$ weights in the local receptive field.
By comparison, suppose we had a fully connected first layer, with $784 = 28 \times 28$ input neurons, and a relatively modest $30$ hidden neurons, as we used in many of the examples earlier in the book.
That, in turn, will result in faster training for the convolutional model, and, ultimately, will help us build deep networks using convolutional layers.
Incidentally, the name convolutional comes from the fact that the operation in Equation (125), \begin{eqnarray} \sigma\left(b + \sum_{l=0}^4 \sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \right), \nonumber\end{eqnarray} is sometimes known as a convolution.
A little more precisely, people sometimes write that equation as $a^1 = \sigma(b + w * a^0)$, where $a^1$ denotes the set of output activations from one feature map, $a^0$ is the set of input activations, and $*$ is called a convolution operation.
In particular, I'm using 'feature map' to mean not the function computed by the convolutional layer, but rather the activation of the hidden neurons output from the layer.
In max-pooling, a pooling unit simply outputs the maximum activation in its $2 \times 2$ input region.
Note that since we have $24 \times 24$ neurons output from the convolutional layer, after pooling we have $12 \times 12$ neurons.
So if there were three feature maps, the convolutional and max-pooling structure just described would simply be repeated for each map, giving three $24 \times 24$ convolutional outputs and three $12 \times 12$ pooled outputs.
Here, instead of taking the maximum activation of a $2 \times 2$ region of neurons, we take the square root of the sum of the squares of the activations in the $2 \times 2$ region.
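As a concrete illustration, here is a small numpy sketch of both pooling variants over non-overlapping $2 \times 2$ regions; the function names are mine, not from network3.py.

import numpy as np

def max_pool_2x2(a):
    # max-pooling over non-overlapping 2x2 regions: (24, 24) -> (12, 12)
    h, w = a.shape
    blocks = a.reshape(h//2, 2, w//2, 2)
    return blocks.max(axis=(1, 3))

def l2_pool_2x2(a):
    # L2 pooling: square root of the sum of squares in each 2x2 region
    h, w = a.shape
    blocks = a.reshape(h//2, 2, w//2, 2)
    return np.sqrt((blocks**2).sum(axis=(1, 3)))

a = np.random.rand(24, 24)      # one feature map's activations
print(max_pool_2x2(a).shape)    # (12, 12)
print(l2_pool_2x2(a).shape)     # (12, 12)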
It's similar to the architecture we were just looking at, but has the addition of a layer of $10$ output neurons, corresponding to the $10$ possible values for MNIST digits ('0', '1', '2', etc).
Problem: Backpropagation in a convolutional network. The core equations of backpropagation in a network with fully-connected layers are (BP1) \begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j) \nonumber\end{eqnarray} through (BP4) \begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j \nonumber\end{eqnarray}.
Suppose we have a network containing a convolutional layer, a max-pooling layer, and a fully-connected output layer, as in the network discussed above.
The program we'll use to do this is called network3.py, and it's an improved version of the programs network.py and network2.py developed in earlier chapters* *Note also that network3.py incorporates ideas from the Theano library's documentation on convolutional neural nets (notably the implementation of LeNet-5), from Misha Denil's implementation of dropout, and from Chris Olah..
But now that we understand those details, for network3.py we're going to use a machine learning library known as Theano* *See Theano: A CPU and GPU Math Expression Compiler in Python, by James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio (2010).
The examples which follow were run using Theano 0.6* *As I release this chapter, the current version of Theano has changed to version 0.7.
Note that the code in the script simply duplicates and parallels the discussion in this section. Note also that throughout the section I've explicitly specified the number of training epochs.
In practice, it's worth using early stopping, that is, tracking accuracy on the validation set, and stopping training when we are confident the validation accuracy has stopped improving.
Using the validation data to decide when to evaluate the test accuracy helps avoid overfitting to the test data (see this earlier discussion of the use of validation data).
Your results may vary slightly, since the network's weights and biases are randomly initialized* *In fact, in this experiment I actually did three separate runs training a network with this architecture.
This $97.80$ percent accuracy is close to the $98.04$ percent accuracy obtained back in Chapter 3, using a similar network architecture and learning hyper-parameters.
Second, while the final layer in the earlier network used sigmoid activations and the cross-entropy cost function, the current network uses a softmax final layer, and the log-likelihood cost function.
I haven't made this switch for any particularly deep reason - mostly, I've done it because softmax plus log-likelihood cost is more common in modern image classification networks.
In this architecture, we can think of the convolutional and pooling layers as learning about local spatial structure in the input training image, while the later, fully-connected layer learns at a more abstract level, integrating global information from across the entire image.
In network3.py, this architecture is set up and trained as follows:
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5), poolsize=(2, 2)),
    FullyConnectedLayer(n_in=20*12*12, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)
Can we improve on the $98.78$ percent classification accuracy?
To do that, we insert a second convolutional-pooling layer between the existing convolutional-pooling layer and the fully-connected hidden layer:
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5), poolsize=(2, 2)),
In fact, you can think of the second convolutional-pooling layer as having as input $12 \times 12$ 'images', whose 'pixels' represent the presence (or absence) of particular localized features in the original input image.
The output from the previous layer involves $20$ separate feature maps, and so there are $20 \times 12 \times 12$ inputs to the second convolutional-pooling layer.
In fact, we'll allow each neuron in this layer to learn from all $20 \times 5 \times 5$ input neurons in its local receptive field.
More informally: the feature detectors in the second convolutional-pooling layer have access to all the features from the previous layer, but only within their particular local receptive field* *This issue would have arisen in the first layer if the input images were in color.
In that case we'd have 3 input features for each pixel, corresponding to red, green and blue channels in the input image.
So we'd allow the feature detectors to have access to all color information, but only within a given local receptive field..
Problem: Using the tanh activation function. Several times earlier in the book I've mentioned arguments that the tanh function may be a better activation function than the sigmoid function.
Try training the network with tanh activations in the convolutional and fully-connected layers* *Note that you can pass activation_fn=tanh as a parameter to the ConvPoolLayer and FullyConnectedLayer classes..
Try plotting the per-epoch validation accuracies for both tanh- and sigmoid-based networks, all the way out to $60$ epochs.
If your results are similar to mine, you'll find the tanh networks train a little faster, but the final accuracies are very similar.
Can you get a similar training speed with the sigmoid, perhaps by changing the learning rate, or doing some rescaling?* *You may perhaps find inspiration in recalling that $\sigma(z) = (1+\tanh(z/2))/2$.
Try a half-dozen iterations on the learning hyper-parameters or network architecture, searching for ways that tanh may be superior to the sigmoid.
Personally, I did not find much advantage in switching to tanh, although I haven't experimented exhaustively, and perhaps you may find a way.
In any case, in a moment we will find an advantage in switching to the rectified linear activation function, and so we won't go any deeper into the use of tanh.
Using rectified linear units: The network we've developed at this point is actually a variant of one of the networks used in the seminal 1998 paper* *'Gradient-based learning applied to document recognition', by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner (1998).
To use rectified linear units we pass activation_fn=ReLU when constructing each layer, for example:
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5), poolsize=(2, 2),
                  activation_fn=ReLU),
However, across all my experiments I found that networks based on rectified linear units consistently outperformed networks based on sigmoid activation functions.
The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments* *A common justification is that $\max(0, z)$ doesn't saturate in the limit of large $z$, unlike sigmoid neurons, and this helps rectified linear units continue learning.
A simple way of expanding the training data is to displace each training image by a single pixel, either up one pixel, down one pixel, left one pixel, or right one pixel.
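A minimal sketch of that displacement trick, assuming the images are stored as flattened 784-entry vectors (the function name and stand-in data are illustrative):

import numpy as np

def expand_with_shifts(images):
    # For each flattened 28x28 image, add four copies shifted by one pixel
    # (up, down, left, right), zero-filling the vacated row or column.
    expanded = []
    for img in images:
        square = img.reshape(28, 28)
        expanded.append(img)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            shifted = np.roll(square, shift, axis=axis)
            if axis == 0:
                shifted[0 if shift == 1 else -1, :] = 0.0
            else:
                shifted[:, 0 if shift == 1 else -1] = 0.0
            expanded.append(shifted.reshape(784))
    return np.array(expanded)

images = np.random.rand(10, 784)          # stand-in for MNIST training images
print(expand_with_shifts(images).shape)   # (50, 784)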
Just to remind you of the flavour of some of the results in that earlier discussion: in 2003 Simard, Steinkraus and Platt* *Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis, by Patrice Simard, Dave Steinkraus, and John Platt (2003).
improved their MNIST performance to $99.6$ percent using a neural network otherwise very similar to ours, using two convolutional-pooling layers, followed by a hidden fully-connected layer with $100$ neurons.
There were a few differences of detail in their architecture - they didn't have the advantage of using rectified linear units, for instance - but the key to their improved performance was expanding the training data.
Using this, we obtain an accuracy of $99.60$ percent, which is a substantial improvement over our earlier results, especially our main benchmark, the network with $100$ hidden neurons, where we achieved $99.37$ percent.
In fact, I tried experiments with both $300$ and $1,000$ hidden neurons, and obtained (very slightly) better validation performance with $1,000$ hidden neurons.
Why we only applied dropout to the fully-connected layers: If you look carefully at the code above, you'll notice that we applied dropout only to the fully-connected section of the network, not to the convolutional layers.
But apart from that, they used few other tricks, including no convolutional layers: it was a plain, vanilla network, of the kind that, with enough patience, could have been trained in the 1980s (if the MNIST data set had existed), given enough computing power.
In particular, we saw that the gradient tends to be quite unstable: as we move from the output layer to earlier layers the gradient tends to either vanish (the vanishing gradient problem) or explode (the exploding gradient problem).
In particular, in our final experiments we trained for $40$ epochs using a data set $5$ times larger than the raw MNIST training data.
I've occasionally heard people adopt a deeper-than-thou attitude, holding that if you're not keeping-up-with-the-Joneses in terms of number of hidden layers, then you're not really doing deep learning.
To speed that process up you may find it helpful to revisit Chapter 3's discussion of how to choose a neural network's hyper-parameters, and perhaps also to look at some of the further reading suggested in that section.
Here's the code (discussion below)* *Note added November 2016: several readers have noted that in the line initializing self.w, I set scale=np.sqrt(1.0/n_out), when the arguments of Chapter 3 suggest a better initialization may be scale=np.sqrt(1.0/n_in).
The weights are initialized like this:
self.w = theano.shared(
    np.asarray(
        np.random.normal(
            loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)),
        dtype=theano.config.floatX),
    name='w', borrow=True)
I use the name inpt rather than input because input is a built-in function in Python, and messing with built-ins tends to cause unpredictable behavior and difficult-to-diagnose bugs.
So self.inpt_dropout and self.output_dropout are used during training, while self.inpt and self.output are used for all other purposes, e.g., evaluating accuracy on the validation and test data.
The layers are chained together in Network.__init__:
for j in xrange(1, len(self.layers)):
    prev_layer, layer = self.layers[j-1], self.layers[j]
    layer.set_inpt(
        prev_layer.output, prev_layer.output_dropout, self.mini_batch_size)
Now, this isn't a Theano tutorial, and so we won't get too deeply into what it means that these are symbolic variables* *The Theano documentation provides a good introduction to Theano.
Inside the SGD method, the symbolic computation is set up as follows:
# define the (regularized) cost function, symbolic gradients, and updates
l2_norm_squared = sum([(layer.w**2).sum() for layer in self.layers])
cost = self.layers[-1].cost(self) + \
       0.5*lmbda*l2_norm_squared/num_training_batches
grads = T.grad(cost, self.params)
updates = [(param, param-eta*grad) for param, grad in zip(self.params, grads)]
i = T.lscalar()  # mini-batch index
train_mb = theano.function(
    [i], cost, updates=updates,
    givens={self.x: training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
            self.y: training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
validate_mb_accuracy = theano.function(
    [i], self.layers[-1].accuracy(self.y),
    givens={self.x: validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
            self.y: validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
test_mb_accuracy = theano.function(
    [i], self.layers[-1].accuracy(self.y),
    givens={self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
            self.y: test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
self.test_mb_predictions = theano.function(
    [i], self.layers[-1].y_out,
    givens={self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
The main training loop then iterates over epochs and mini-batches:
best_validation_accuracy = 0.0
for epoch in xrange(epochs):
    for minibatch_index in xrange(num_training_batches):
        iteration = num_training_batches*epoch+minibatch_index
        if iteration % 1000 == 0:
            print("Training mini-batch number {0}".format(iteration))
        cost_ij = train_mb(minibatch_index)
        if (iteration+1) % num_training_batches == 0:
            validation_accuracy = np.mean(
                [validate_mb_accuracy(j) for j in xrange(num_validation_batches)])
            print("Epoch {0}: validation accuracy {1:.2%}".format(
                epoch, validation_accuracy))
            if validation_accuracy >= best_validation_accuracy:
                print("This is the best validation accuracy to date.")
                best_validation_accuracy = validation_accuracy
                best_iteration = iteration
                if test_data:
                    test_accuracy = np.mean(
                        [test_mb_accuracy(j) for j in xrange(num_test_batches)])
                    print('The corresponding test accuracy is {0:.2%}'.format(
                        test_accuracy))
In these lines we symbolically set up the regularized log-likelihood cost function, compute the corresponding derivatives in the gradient function, as well as the corresponding parameter updates.
With all these things defined, the stage is set to define the train_mb function, a Theano symbolic function which uses the updates to update the Network parameters, given a mini-batch index.
The remainder of the SGD method is self-explanatory - we simply iterate over the epochs, repeatedly training the network on mini-batches of training data, and computing the validation and test accuracies.
The ConvPoolLayer class begins as follows (excerpted; n_out and pooled_out are computed earlier in the full listing):
class ConvPoolLayer(object):
    def __init__(self, filter_shape, image_shape, poolsize=(2, 2),
                 activation_fn=sigmoid):
        # `filter_shape`: number of filters, number of input feature maps,
        # filter height, and filter width.  `poolsize` is a tuple of length 2,
        # whose entries are the y and x pooling sizes.
        self.w = theano.shared(np.asarray(
            np.random.normal(loc=0, scale=np.sqrt(1.0/n_out), size=filter_shape),
            dtype=theano.config.floatX), borrow=True)
        self.b = theano.shared(np.asarray(
            np.random.normal(loc=0, scale=1.0, size=(filter_shape[0],)),
            dtype=theano.config.floatX), borrow=True)
        self.output = activation_fn(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
Earlier in the book we discussed an automated way of selecting the number of epochs to train for, known as early stopping.
Hint: After working on this problem for a while, you may find it useful to see the discussion at this link.
Earlier in the chapter I described a technique for expanding the training data by applying (small) rotations, skewing, and translation.
Note: Unless you have a tremendous amount of memory, it is not practical to explicitly generate the entire expanded data set.
Show that rescaling all the weights in the network by a constant factor $c > 0$ simply rescales the outputs by a factor $c^{L-1}$, where $L$ is the number of layers.
Still, considering the problem will help you better understand networks containing rectified linear units.
Note: The word good in the second part of this makes the problem a research problem.
In 1998, the year MNIST was introduced, it took weeks to train a state-of-the-art workstation to achieve accuracies substantially worse than those we can achieve using a GPU and less than an hour of training.
With that said, the past few years have seen extraordinary improvements using deep nets to attack extremely difficult image recognition tasks.
They will identify the years 2011 to 2015 (and probably a few years beyond) as a time of huge breakthroughs, driven by deep convolutional nets.
The 2012 LRMD paper: Let me start with a 2012 paper* *Building high-level features using large scale unsupervised learning, by Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, and Andrew Ng (2012).
Note that the detailed architecture of the network used in the paper differed in many details from the deep convolutional networks we've been studying.
Details about ImageNet are available in the original ImageNet paper, ImageNet: a large-scale hierarchical image database, by Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei (2009).:
If you're looking for a challenge, I encourage you to visit ImageNet's list of hand tools, which distinguishes between beading planes, block planes, chamfer planes, and about a dozen other types of plane, amongst other categories.
The 2012 KSH paper: The work of LRMD was followed by a 2012 paper of Krizhevsky, Sutskever and Hinton (KSH)* *ImageNet classification with deep convolutional neural networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton (2012).
By this top-$5$ criterion, KSH's deep convolutional network achieved an accuracy of $84.7$ percent, vastly better than the next-best contest entry, which achieved an accuracy of $73.8$ percent.
The input layer contains $3 \times 224 \times 224$ neurons, representing the RGB values for a $224 \times 224$ image.
The feature maps are split into two groups of $48$ each, with the first $48$ feature maps residing on one GPU, and the second $48$ feature maps residing on the other GPU.
Their respective parameters are: (3) $384$ feature maps, with $3 \times 3$ local receptive fields, and $256$ input channels;
A Theano-based implementation has also been developed* *Theano-based large-scale visual recognition with multiple GPUs, by Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor (2014)., with the code available here.
As in 2012, it involved a training set of $1.2$ million images, in $1,000$ categories, and the figure of merit was whether the top $5$ predictions included the correct category.
The winning team, based primarily at Google* *Going deeper with convolutions, by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich (2014)., used a deep convolutional network with $22$ layers of neurons.
GoogLeNet achieved a top-5 accuracy of $93.33$ percent, a giant improvement over the 2013 winner (Clarifai, with $88.3$ percent), and the 2012 winner (KSH, with $84.7$ percent).
In 2014 a team of researchers wrote a survey paper about the ILSVRC competition* *ImageNet large scale visual recognition challenge, by Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei (2014).
...the task of labeling images with 5 out of 1000 categories quickly turned out to be extremely challenging, even for some friends in the lab who have been working on ILSVRC and its classes for a while.
In the end I realized that to get anywhere competitively close to GoogLeNet, it was most efficient if I sat down and went through the painfully long training process and the subsequent careful annotation process myself...
Some images are easily recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) can require multiple minutes of concentrated effort.
In other words, an expert human, working painstakingly, was with great effort able to narrowly beat the deep neural network.
In fact, Karpathy reports that a second human expert, trained on a smaller sample of images, was only able to attain a $12.0$ percent top-5 error rate, significantly below GoogLeNet's performance.
One encouraging practical set of results comes from a team at Google, who applied deep convolutional networks to the problem of recognizing street numbers in Google's Street View imagery* *Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks, by Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet (2013).
And they go on to make the broader claim: 'We believe with this model we have solved [optical character recognition] for short sequences [of characters] for many applications.'
For instance, a 2013 paper* *Intriguing properties of neural networks, by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus (2013) showed that deep networks may suffer from what are effectively blind spots.
The existence of the adversarial negatives appears to be in contradiction with the network's ability to achieve high generalization performance.
The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case.
For example, one recent paper* *Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, by Anh Nguyen, Jason Yosinski, and Jeff Clune (2014).
shows that given a trained network it's possible to generate images which look to a human like white noise, but which the network classifies as being in a known category with a very high degree of confidence.
If you read the neural networks literature, you'll run into many ideas we haven't discussed: recurrent neural networks, Boltzmann machines, generative models, transfer learning, reinforcement learning, and so on, on and on $\ldots$ and on!
One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages.
A 2014 paper developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output.
For example, an approach based on deep nets has achieved outstanding results on large vocabulary continuous speech recognition.
And another system based on deep nets has been deployed in Google's Android operating system (for related technical work, see Vincent Vanhoucke's 2012-2015 papers).
Many other ideas used in feedforward nets, ranging from regularization techniques to convolutions to the activation and cost functions used, are also useful in recurrent nets.
Deep belief nets, generative models, and Boltzmann machines: Modern interest in deep learning began in 2006, with papers explaining how to train a type of neural network known as a deep belief network (DBN)* *See A fast learning algorithm for deep belief nets, by Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh (2006), as well as the related work in Reducing the dimensionality of data with neural networks, by Geoffrey Hinton and Ruslan Salakhutdinov (2006)..
A generative model like a DBN can be used in a similar way, but it's also possible to specify the values of some of the feature neurons and then 'run the network backward', generating values for the input activations.
And the ability to do unsupervised learning is extremely interesting both for fundamental scientific reasons, and - if it can be made to work well enough - for practical applications.
Active areas of research include using neural networks to do natural language processing (see also this informative review paper), machine translation, as well as perhaps more surprising applications such as music informatics.
In many cases, having read this book you should be able to begin following recent work, although (of course) you'll need to fill in gaps in presumed background knowledge.
It combines deep convolutional networks with a technique known as reinforcement learning in order to learn to play video games well (see also this followup).
The idea is to use the convolutional network to simplify the pixel data from the game screen, turning it into a simpler set of features, which can be used to decide which action to take: 'go left', 'go down', 'fire', and so on.
What is particularly interesting is that a single network learned to play seven different classic video games pretty well, outperforming human experts on three of the games.
But looking past the surface gloss, consider that this system is taking raw pixel data - it doesn't even know the game rules!
Google CEO Larry Page once described the perfect search engine as "understanding exactly what [your queries] mean and giving you back exactly what you want."
In this vision, instead of responding to users' literal queries, search will use machine learning to take vague user input, discern precisely what was meant, and take action on the basis of those insights.
Over the next few decades, thousands of companies will build products which use machine learning to make user interfaces that can tolerate imprecision, while discerning and acting on the user's true intent.
Inspired user interface design is hard, and I expect many companies will take powerful machine learning technology and use it to build insipid user interfaces.
Machine learning, data science, and the virtuous circle of innovation: Of course, machine learning isn't just being used to build intention-driven interfaces.
But I do want to mention one consequence of this fashion that is not so often remarked: over the long run it's possible the biggest breakthrough in machine learning won't be any single conceptual breakthrough.
If a company can invest 1 dollar in machine learning research and get 1 dollar and 10 cents back reasonably rapidly, then a lot of money will end up in machine learning research.
So, for example, Conway's law suggests that the design of a Boeing 747 aircraft will mirror the extended organizational structure of Boeing and its contractors at the time the 747 was designed.
If the application's dashboard is supposed to be integrated with some machine learning algorithm, the person building the dashboard better be talking to the company's machine learning expert.
I won't define 'deep ideas' precisely, but loosely I mean the kind of idea which is the basis for a rich field of enquiry.
The backpropagation algorithm and the germ theory of disease are both good examples. Think of things like the germ theory of disease, for instance, or the understanding of how antibodies work, or the understanding that the heart, lungs, veins and arteries form a complete cardiovascular system.
Instead of a monolith, we have fields within fields within fields, a complex, recursive, self-referential social structure, whose organization mirrors the connections between our deepest insights.
Deep learning is the latest super-special weapon I've heard used in such arguments* *Interestingly, often not by leading experts in deep learning, who have been quite restrained.
And there is paper after paper leveraging the same basic set of ideas: using stochastic gradient descent (or a close variation) to optimize a cost function.
Training an Artificial Neural Network - Intro
Artificial neural networks are relatively crude electronic networks of 'neurons' based on the neural structure of the brain. They process records one at a time, and 'learn' by comparing their classification of the record (which, at the outset, is largely arbitrary) with the known actual classification of the record. The errors from the initial classification of the first record are fed back into the network and used to modify the network's algorithm the second time around, and so on for many iterations. Roughly speaking, a neuron in an artificial neural network is a set of input values and associated weights, together with a function that combines the weighted inputs into an output value. Neurons are organized into layers: the first layer is the input layer, and between it and the output layer there may be several hidden layers. The final layer is the output layer, where there is one node for each class. A single sweep forward through the network results in the assignment of a value to each output node, and the record is assigned to whichever class's node had the highest value.
Training an Artificial Neural Network
In the training phase, the correct class for each record is known (this is termed supervised training), and the output nodes can therefore be assigned 'correct' values -- '1' for the node corresponding to the correct class, and '0' for the others. (In practice it has been found better to use values of 0.9 and 0.1, respectively.) It is thus possible to compare the network's calculated values for the output nodes to these 'correct' values, and calculate an error term for each node (the 'Delta' rule). These error terms are then used to adjust the weights in the hidden layers so that, hopefully, the next time around the output values will be closer to the 'correct' values.
The Iterative Learning Process
A key feature of neural networks is an iterative learning process in which data cases (rows) are presented to the network one at a time, and the weights associated with the input values are adjusted each time. After all cases are presented, the process often starts over again.
Errors are then propagated back through the system, causing the system to adjust the weights for application to the next record to be processed. This process occurs over and over as the weights are continually tweaked. During the training of a network the same set of data is processed many times as the connection weights are continually refined. Note that some networks never learn.
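To make the weight-adjustment step concrete, here is a minimal sketch of a delta-rule style update for a single sigmoid output node; the function name, learning rate, and data are illustrative, not taken from any particular package.

import numpy as np

def delta_rule_update(w, x, target, learning_rate=0.1):
    # One training step: compute the node's output, form the error term
    # (target minus output), and nudge the weights to reduce that error.
    output = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    error = target - output
    return w + learning_rate * error * output * (1.0 - output) * x

w = np.zeros(3)                      # initial weights
x = np.array([1.0, 0.5, -0.2])       # one input record
for _ in range(100):                 # the same data is processed many times
    w = delta_rule_update(w, x, target=0.9)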
Multi-Layer Neural Networks with Sigmoid Function— Deep Learning for Rookies (2)
Welcome back to my second post of the series Deep Learning for Rookies (DLFR), by yours truly, a rookie ;) Feel free to refer back to my first post here or my blog if you find it hard to follow.
You'll be able to brag about your understanding soon ;) Last time, we introduced the field of Deep Learning and examined a simple neural network — perceptron……or a dinosaur……ok, seriously, a single-layer perceptron.
After all, most problems in the real world are non-linear, and as individual humans, you and I are pretty darn good at the decision-making of linear or binary problems like should I study Deep Learning or not without needing a perceptron.
Fast forward almost two decades to 1986, when Geoffrey Hinton, David Rumelhart, and Ronald Williams published a paper "Learning representations by back-propagating errors", which introduced the backpropagation algorithm. If you are completely new to DL, you should remember Geoffrey Hinton, who plays a pivotal role in the progress of DL.
Remember that we reiterated the importance of designing a neural network so that the network can learn from the difference between the desired output (what the fact is) and actual output (what the network returns) and then send a signal back to the weights and ask the weights to adjust themselves?
Secondly, when we multiply each of the m features with a weight (w1, w2, …, wm) and sum them all together, this is a dot product. So here are the takeaways for now: the procedure of how input values are forward propagated into the hidden layer, and then from the hidden layer to the output, is the same as in Graph 1.
One thing to remember is: If the activation function is linear, then you can stack as many hidden layers in the neural network as you wish, and the final output is still a linear combination of the original input data.
So basically, a small change in any weight in the input layer of our perceptron network could possibly lead to one neuron to suddenly flip from 0 to 1, which could again affect the hidden layer's behavior, and then affect the final outcome.
Non-linear just means that the output we get from the neuron, which is the dot product of some inputs x (x1, x2, …, xm) and weights w (w1, w2, …,wm) plus bias and then put into a sigmoid function, cannot be represented by a linear combination of the input x (x1, x2, …,xm).
This non-linear activation function, when used by each neuron in a multi-layer neural network, produces a new "representation" of the original data, and ultimately allows for non-linear decision boundary, such as XOR.
If our output value is on the lower flat area on the two corners, then it's false or 0, since it's not right to say the weather is both hot and cold or neither hot nor cold (ok, I guess the weather could be neither hot nor cold…you get what I mean though…right?).
You can memorize these takeaways since they're facts, but I encourage you to google a bit on the internet and see if you can understand the concept better (it is natural that we take some time to understand these concepts).
From the XOR example above, you've seen that adding two hidden neurons in 1 hidden layer could reshape our problem into a different space, which magically created a way for us to classify XOR with a ridge.
Now, the computer can't really "see" a digit like we humans do, but if we dissect the image into an array of 784 numbers like [0, 0, 180, 16, 230, …, 4, 77, 0, 0, 0], then we can feed this array into our neural network.
So if the neural network thinks the handwritten digit is a zero, then we should get an output array of [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], the first output in this array that senses the digit to be a zero is "fired" to be 1 by our neural network, and the rest are 0.
If the neural network thinks the handwritten digit is a 5, then we should get [0, 0, 0, 0, 0, 1, 0, 0, 0, 0].
Remember we mentioned that neural networks become better by repetitively training themselves on data so that they can adjust the weights in each layer of the network to get the final results/actual output closer to the desired output?
For the sake of argument, let's imagine the following case in Graph 14, which I borrow from Michael Nielsen's online book. After training the neural network with rounds and rounds of labeled data in supervised learning, assume the first 4 hidden neurons learned to recognize the patterns above in the left side of Graph 14.
Then, if we feed the neural network an array of a handwritten digit zero, the network should correctly trigger the top 4 hidden neurons in the hidden layer while the other hidden neurons are silent, and then again trigger the first output neuron while the rest are silent.
If you train the neural network with a new set of randomized weights, it might produce the following network instead (compare Graph 15 with Graph 14), since the weights are randomized and we never know which one will learn which or what pattern.
It involves subtracting the mean across every individual feature in the data, and has the geometric interpretation of centering the cloud of data around the origin along every dimension.
It only makes sense to apply this preprocessing if you have a reason to believe that different input features have different scales (or units), but they should be of approximately equal importance to the learning algorithm. In the case of images, the relative scales of pixels are already approximately equal (and in the range from 0 to 255), so it is not strictly necessary to perform this additional preprocessing step.
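A minimal numpy sketch of these two preprocessing steps on a data matrix X of shape [N x D] (one example per row; the data here is a stand-in):

import numpy as np

X = np.random.rand(100, 50) * 255.0   # stand-in data matrix, one example per row

X -= np.mean(X, axis=0)   # zero-center: subtract the per-feature mean
X /= np.std(X, axis=0)    # normalize: scale each dimension to unit standard deviation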
Then, we can compute the covariance matrix that tells us about the correlation structure in the data. The (i,j) element of the data covariance matrix contains the covariance between the i-th and j-th dimension of the data.
To decorrelate the data, we project the original (but zero-centered) data into the eigenbasis. Notice that the columns of U are a set of orthonormal vectors (norm of 1, and orthogonal to each other), so they can be regarded as basis vectors.
This is also sometimes referred to as Principal Component Analysis (PCA) dimensionality reduction. After this operation, we would have reduced the original dataset of size [N x D] to one of size [N x 100], keeping the 100 dimensions of the data that contain the most variance.
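In code, the covariance, eigenbasis projection, and reduction steps just described might look like this (a sketch; X is assumed zero-centered, as above, and the data is a stand-in):

import numpy as np

X = np.random.randn(1000, 300)        # zero-centered data matrix, [N x D]
cov = np.dot(X.T, X) / X.shape[0]     # [D x D] covariance matrix
U, S, V = np.linalg.svd(cov)          # columns of U are the (orthonormal) eigenbasis
Xrot = np.dot(X, U)                   # decorrelate the data
Xrot_reduced = np.dot(X, U[:, :100])  # PCA: keep the top 100 components, [N x 100]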
The geometric interpretation of this transformation is that if the input data is a multivariable gaussian, then the whitened data will be a gaussian with zero mean and identity covariance matrix.
One weakness of this transformation is that it can greatly exaggerate the noise in the data, since it stretches all dimensions (including the irrelevant dimensions of tiny variance that are mostly noise) to be of equal size in the input.
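Continuing the same sketch, the whitening step divides every direction by its scale; the small constant is a common guard against division by zero and damps the amplification of the near-zero-variance noise directions just mentioned:

import numpy as np

X = np.random.randn(1000, 300)                        # zero-centered data, as above
U, S, V = np.linalg.svd(np.dot(X.T, X) / X.shape[0])  # eigenbasis and eigenvalues
Xrot = np.dot(X, U)                                   # decorrelated data
Xwhite = Xrot / np.sqrt(S + 1e-5)                     # whiten: unit scale in every direction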
Note that we do not know what the final value of every weight should be in the trained network, but with proper data normalization it is reasonable to assume that approximately half of the weights will be positive and half of them will be negative.
The idea is that the neurons are all random and unique in the beginning, so they will compute distinct updates and integrate themselves as diverse parts of the full network.
The implementation for one weight matrix might look like W = 0.01* np.random.randn(D,H), where randn samples from a zero mean, unit standard deviation gaussian.
With this formulation, every neuron's weight vector is initialized as a random vector sampled from a multi-dimensional gaussian, so the neurons point in random direction in the input space.
That is, the recommended heuristic is to initialize each neuron's weight vector as: w = np.random.randn(n) / sqrt(n), where n is the number of its inputs.
The sketch of the derivation is as follows: Consider the inner product \(s = \sum_i^n w_i x_i\) between the weights \(w\) and input \(x\), which gives the raw activation of a neuron before the non-linearity.
And since \(\text{Var}(aX) = a^2\text{Var}(X)\) for a random variable \(X\) and a scalar \(a\), this implies that we should draw from unit gaussian and then scale it by \(a = \sqrt{1/n}\), to make its variance \(1/n\).
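Writing out the middle of that derivation (a standard identity-chasing argument, assuming the \(w_i\) and \(x_i\) are independent and have zero mean):
\begin{align}
\text{Var}(s) &= \text{Var}\Big(\sum_i^n w_i x_i\Big) = \sum_i^n \text{Var}(w_i x_i) \\
&= \sum_i^n [E(w_i)]^2 \text{Var}(x_i) + [E(x_i)]^2 \text{Var}(w_i) + \text{Var}(x_i)\,\text{Var}(w_i) \\
&= \sum_i^n \text{Var}(x_i)\,\text{Var}(w_i) = \big(n\,\text{Var}(w)\big)\,\text{Var}(x),
\end{align}
so for \(s\) to have the same variance as the inputs we need \(\text{Var}(w) = 1/n\), which is exactly the \(1/\sqrt{n}\) scaling above.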
In Understanding the difficulty of training deep feedforward neural networks, by Glorot and Bengio (2010), the authors end up recommending an initialization of the form \( \text{Var}(w) = 2/(n_{in} + n_{out}) \) where \(n_{in}, n_{out}\) are the number of units in the previous layer and the next layer.
A more recent paper on this topic, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification by He et al., derives an initialization specifically for ReLU neurons, reaching the conclusion that the variance of neurons in the network should be \(2.0/n\).
This gives the initialization w = np.random.randn(n) * sqrt(2.0/n), and is the current recommendation for use in practice in the specific case of neural networks with ReLU neurons.
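Side by side, the two heuristics just discussed look like this in numpy (n is the fan-in; the value 784 is an arbitrary example):

import numpy as np

n = 784  # fan-in: the number of inputs to each neuron

w_calibrated = np.random.randn(n) / np.sqrt(n)   # Var(w) = 1/n, the general heuristic
w_relu = np.random.randn(n) * np.sqrt(2.0 / n)   # Var(w) = 2/n, for ReLU neurons (He et al.)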
Another way to address the uncalibrated variances problem is to set all weight matrices to zero, but to break symmetry every neuron is randomly connected (with weights sampled from a small gaussian as above) to a fixed number of neurons below it.
For ReLU non-linearities, some people like to use small constant value such as 0.01 for all biases because this ensures that all ReLU units fire in the beginning and therefore obtain and propagate some gradient.
However, it is not clear if this provides a consistent improvement (in fact some results seem to indicate that this performs worse) and it is more common to simply use 0 bias initialization.
A recently developed technique by Ioffe and Szegedy called Batch Normalization alleviates a lot of headaches with properly initializing neural networks by explicitly forcing the activations throughout a network to take on a unit gaussian distribution at the beginning of the training.
In the implementation, applying this technique usually amounts to insert the BatchNorm layer immediately after fully connected layers (or convolutional layers, as we'll soon see), and before non-linearities.
It is common to see the factor of \(\frac{1}{2}\) in front because then the gradient of this term with respect to the parameter \(w\) is simply \(\lambda w\) instead of \(2 \lambda w\).
Lastly, notice that during gradient descent parameter update, using the L2 regularization ultimately means that every weight is decayed linearly: W += -lambda * W towards zero.
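As a sketch, one plain gradient-descent step with L2 regularization then decomposes into the data-gradient step plus exactly that linear decay term (all names and values here are illustrative):

import numpy as np

W = 0.01 * np.random.randn(100, 10)   # weights
lam, eta = 1e-4, 0.1                  # regularization strength and learning rate

dW_data = np.random.randn(*W.shape)   # stand-in for the backpropagated data gradient
W += -eta * (dW_data + lam * W)       # the lam*W term decays W linearly toward zero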
L1 regularization is another relatively common form of regularization, where for each weight \(w\) we add the term \(\lambda \mid w \mid\) to the objective.
Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint.
In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector \(\vec{w}\) of every neuron to satisfy \(\Vert \vec{w} \Vert_2 < c\). Typical values of \(c\) are on the order of 3 or 4.
Vanilla dropout in an example 3-layer Neural Network would be implemented as in the sketch below. Inside the train_step function we perform dropout twice: on the first hidden layer and on the second hidden layer.
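The listing this passage refers to did not survive extraction; the following is a reconstruction in its spirit, with weights W1, W2, W3 and biases b1, b2, b3 assumed defined elsewhere:

import numpy as np

p = 0.5  # probability of keeping a unit active; higher = less dropout

def train_step(X):
    # forward pass for an example 3-layer network, dropping units as we go
    H1 = np.maximum(0, np.dot(W1, X) + b1)
    U1 = np.random.rand(*H1.shape) < p   # first dropout mask
    H1 *= U1                             # drop!
    H2 = np.maximum(0, np.dot(W2, H1) + b2)
    U2 = np.random.rand(*H2.shape) < p   # second dropout mask
    H2 *= U2                             # drop!
    out = np.dot(W3, H2) + b3
    # backward pass and parameter update omitted

def predict(X):
    # ensembled forward pass: scale the activations by p at test time
    H1 = np.maximum(0, np.dot(W1, X) + b1) * p
    H2 = np.maximum(0, np.dot(W2, H1) + b2) * p
    out = np.dot(W3, H2) + b3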
It can also be shown that performing this attenuation at test time can be related to the process of iterating over all the possible binary masks (and therefore all the exponentially many sub-networks) and computing their ensemble prediction.
Since test-time performance is so critical, it is always preferable to use inverted dropout, which performs the scaling at train time, leaving the forward pass at test time untouched.
Inverted dropout is sketched below. There has been a large amount of research after the first introduction of dropout that tries to understand the source of its power in practice, and its relation to the other regularization techniques.
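A reconstruction of the inverted-dropout version of the same sketch; note that the masks are divided by p at train time, so predict is an ordinary forward pass:

import numpy as np

p = 0.5  # probability of keeping a unit active

def train_step(X):
    H1 = np.maximum(0, np.dot(W1, X) + b1)
    U1 = (np.random.rand(*H1.shape) < p) / p   # first dropout mask -- notice the /p!
    H1 *= U1
    H2 = np.maximum(0, np.dot(W2, H1) + b2)
    U2 = (np.random.rand(*H2.shape) < p) / p   # second dropout mask -- notice the /p!
    H2 *= U2
    out = np.dot(W3, H2) + b3
    # backward pass and parameter update omitted

def predict(X):
    # test-time forward pass is untouched: no scaling necessary
    H1 = np.maximum(0, np.dot(W1, X) + b1)
    H2 = np.maximum(0, np.dot(W2, H1) + b2)
    out = np.dot(W3, H2) + b3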
As we already mentioned in the Linear Classification section, it is not common to regularize the bias parameters because they do not interact with the data through multiplicative interactions, and therefore do not have the interpretation of controlling the influence of a data dimension on the final objective.
For example, a binary classifier for each category independently would take the form \(L_i = \sum_j \max(0, 1 - y_{ij} f_j)\), where the sum is over all categories \(j\), and \(y_{ij}\) is either +1 or -1 depending on whether the i-th example is labeled with the j-th attribute, and the score vector \(f_j\) will be positive when the class is predicted to be present and negative otherwise.
A binary logistic regression classifier has only two classes (0,1), and calculates the probability of class 1 as \(P(y = 1 \mid x; w, b) = \sigma(w^T x + b)\). Since the probabilities of class 1 and 0 sum to one, the probability for class 0 is \(P(y = 0 \mid x; w, b) = 1 - P(y = 1 \mid x; w, b)\).
The expression above can look scary but the gradient on \(f\) is in fact extremely simple and intuitive: \(\partial{L_i} / \partial{f_j} = y_{ij} - \sigma(f_j)\) (as you can double check yourself by taking the derivatives).
The L2 norm squared would compute the loss for a single example of the form \(L_i = \Vert f - y_i \Vert_2^2\). The reason the L2 norm is squared in the objective is that the gradient becomes much simpler, without changing the optimal parameters since squaring is a monotonic operation.
For example, if you are predicting star rating for a product, it might work much better to use 5 independent classifiers for ratings of 1-5 stars instead of a regression loss.
If you're certain that classification is not appropriate, use the L2 but be careful: For example, the L2 is more fragile and applying dropout in the network (especially in the layer right before the L2 loss) is not a great idea.
March 2020, 28(1): 91-102. doi: 10.3934/era.2020006
Finite time blow-up for a wave equation with dynamic boundary condition at critical and high energy levels in control systems
Xiaoqiang Dai 1, Chao Yang 2, Shaobin Huang 2, Tao Yu 3, and Yuanran Zhu 3
Department of Electronic Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China
College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China
College of Mathematical Sciences, Harbin Engineering University, Harbin 150001, China
* Corresponding author: Chao Yang
Received November 2019; published March 2020.
Fund Project: The first author is supported by the Natural Science Foundation of Jiangsu Province (BK20160564) and the Jiangsu key R & D plan (BE2018007).
We study the initial boundary value problem of the linear homogeneous wave equation with dynamic boundary condition. We aim to prove the finite time blow-up of the solution at the critical energy level or high energy level, with a nonlinear damping term on the boundary, in control systems.
Keywords: High energy level, critical energy level, dynamical boundary condition, wave equation.
Mathematics Subject Classification: Primary: 35L05.
Citation: Xiaoqiang Dai, Chao Yang, Shaobin Huang, Tao Yu, Yuanran Zhu. Finite time blow-up for a wave equation with dynamic boundary condition at critical and high energy levels in control systems. Electronic Research Archive, 2020, 28 (1) : 91-102. doi: 10.3934/era.2020006
Chaperone Hsp27, a Novel Subunit of AUF1 Protein Complexes, Functions in AU-Rich Element-Mediated mRNA Decay
Kristina S. Sinsimer, Frances M. Gratacós, Anna M. Knapinska, Jiebo Lu, Christopher D. Krause, Alexandria V. Wierzbowski, Lauren R. Maher, Shannon Scrudato, Yonaira M. Rivera, Swati Gupta, Danielle K. Turrin, Mary Pauline De La Cruz, Sidney Pestka, Gary Brewer
Department of Molecular Genetics, Microbiology and Immunology, UMDNJ-Robert Wood Johnson Medical School, Piscataway, New Jersey 08854-5635
For correspondence: [email protected]
Controlled, transient cytokine production by monocytes depends heavily upon rapid mRNA degradation, conferred by 3′ untranslated region-localized AU-rich elements (AREs) that associate with RNA-binding proteins. The ARE-binding protein AUF1 forms a complex with cap-dependent translation initiation factors and heat shock proteins to attract the mRNA degradation machinery. We refer to this protein assembly as the AUF1- and signal transduction-regulated complex, ASTRC. Rapid degradation of ARE-bearing mRNAs (ARE-mRNAs) requires ubiquitination of AUF1 and its destruction by proteasomes. Activation of monocytes by adhesion to capillary endothelium at sites of tissue damage and subsequent proinflammatory cytokine induction are prominent features of inflammation, and ARE-mRNA stabilization plays a critical role in the induction process. Here, we demonstrate activation-induced subunit rearrangements within ASTRC and identify chaperone Hsp27 as a novel subunit that is itself an ARE-binding protein essential for rapid ARE-mRNA degradation. As Hsp27 has well-characterized roles in protein ubiquitination as well as in adhesion-induced cytoskeletal remodeling and cell motility, its association with ASTRC may provide a sensing mechanism to couple proinflammatory cytokine induction with monocyte adhesion and motility.
Many mRNAs encoding proteins transiently required for inflammatory responses, cell proliferation, and intracellular signaling are labile due to AU-rich elements (AREs) in their 3′ untranslated regions (UTRs) (14, 21, 57). ARE association by ELAV-like (embryonic lethal, abnormal vision) proteins, such as HuR, blocks ARE-mediated mRNA decay (AMD) (31). By contrast, association of proteins such as AUF1, tristetraprolin (TTP), BRF1 (butyrate-responsive factor-1), K-homology splicing regulatory protein (KSRP), ring finger K-homology domain 1 (RKHD1), polymyositis-scleroderma 75-kDa antigen (PM-Scl75), or microRNA miR16 or miR289 with an ARE promotes AMD (6, 8, 12, 18, 24, 34, 43). The phosphorylation state of TTP, BRF1, and AUF1 affects AMD efficiency (3, 37, 51, 56), indicating that signal transduction networks regulate this pathway.
AUF1 has four protein isoforms—p37, p40, p42, and p45—generated by alternative pre-mRNA splicing (50). Based upon extensive biochemical studies of AUF1, we proposed an integrated, three-step model for induction of AMD by AUF1 via assembly of a trans-acting complex that targets the mRNA for degradation (52). The first step is dynamic AUF1 dimer binding to an ARE and formation of an oligomeric AUF1 complex (7, 52). Stabilizing ARE-binding proteins (AUBPs) may compete with AUF1 for binding to the ARE during this step, thus preventing AUF1 oligomerization and subsequent factor recruitment (25). Binding of AUF1 to an ARE then permits the second step involving recruitment of additional trans-acting factors including eukaryotic translation initiation factor eIF4G, poly(A)-binding protein, dual-functional heat shock/AUBPs Hsp/Hsc70 (27), and additional unknown proteins, forming a multisubunit AUF1- and signal transduction-regulated complex (ASTRC) on ARE-bearing mRNAs (ARE-mRNAs). The third step, mRNA degradation, involves two linked catabolic steps—ubiquitin-dependent degradation of AUF1 by proteasomes and mRNA destruction by mRNA degradation enzymes (27, 28). Most observations indicate that 3′-5′ exoribonucleolytic cleavage of the poly(A) tract is the initial catabolic step during AMD (4). Decapping and 5′-3′ and additional 3′-5′ degradation follow (11, 34, 44).
In circulating monocytes, ARE-bearing mRNAs (ARE-mRNAs) encoding proinflammatory cytokines and chemokines are maintained at very low, basal levels. This is due in large part to their rapid degradation. Upon monocyte adhesion to extracellular matrix components at sites of tissue damage, these mRNAs undergo rapid stabilization, which increases their levels 50- to 100-fold within 1 to 2 h (40). In vitro ARE-binding experiments with extracts of nonadherent monocytes showed that they support assembly of RNP complexes containing AUF1. Parallel experiments with extracts of adherent monocytes demonstrated both qualitative and quantitative differences in the assembly of AUF1-RNP complexes, indicative of protein-ARE remodeling. We hypothesized that these RNP remodeling events contribute to stabilization of cytokine ARE-mRNAs in adherent monocytes. In addition, inhibition of a number of signal transduction pathways blocked both adhesion-induced mRNA stabilization and protein-ARE remodeling (40). In subsequent work, we utilized the human promonocyte cell line THP-1. Activation of THP-1 cells by acute treatment with phorbol ester 12-O-tetradecanoylphorbol-13-acetate (TPA) mimics cytokine ARE-mRNA stabilization of adherent monocytes (41), activates protein kinase C, and promotes adhesion to extracellular matrix components (38). In nonactivated THP-1 cells, p40AUF1 is phosphorylated on Ser83 and Ser87, and ARE-mRNAs encoding interleukin-1β and tumor necrosis factor alpha (TNF-α) are unstable. Activation with TPA leads to robust transcript stabilization coincident with dephosphorylation of p40AUF1 on both serines (56). Fluorescence resonance energy transfer (FRET) experiments revealed that binding of nonphosphorylated AUF1 induces transition of an ARE-RNA from a flexible, open conformation to a spatially condensed structure that exhibits restricted backbone flexibility (55). By contrast, p40AUF1 phosphorylated on Ser83 and Ser87 does not induce this structural transition. Thus, the AUF1 phosphorylation state influences local ARE-RNA structure (51). Within the context of the three-step model of AUF1 function noted above, these studies led us to hypothesize that activation of THP-1 cells may induce subunit rearrangements within ASTRC concomitant with cytokine mRNA stabilization.
To better understand the role of ASTRC in control of proinflammatory cytokine AMD in monocytes, we examined AUF1-containing complexes in nonactivated and activated THP-1 cells. We first found that cell activation results in ASTRC subunit reorganization and ARE-mRNA stabilization. Secondly, we identified chaperone Hsp27 as a subunit of ASTRC and found it to possess high-affinity ARE-binding activity. Knockdown of Hsp27 expression led to dramatic stabilization of a cytokine ARE-mRNA. Taken together, these studies indicate that Hsp27 functions as a novel trans-acting modulator of AMD.
Materials. THP-1, a human promonocytic leukemia cell line, was provided by Charles McCall (Wake Forest University School of Medicine). K562, a human chronic myelogenous leukemia cell line, was from the American Type Culture Collection. Anti-eIF4G-I was a kind gift from Nahum Sonenberg. Hsp27 and Hsp70 antibodies were from Stressgen (SPA-803 and SPA-812). Anti-Hsc70 was from Santa Cruz (sc-7298). Horseradish peroxidase-conjugated secondary antibodies were from Promega Corporation (Madison, WI). All primer oligonucleotides were synthesized by Integrated DNA Technologies (Coralville, IA).
Cell culture. THP-1 and K562 cell lines were maintained in RPMI 1640 medium (Cellgro Mediatech, Herndon, VA) supplemented with 10% defined, endotoxin-free, fetal bovine serum (HyClone, Logan, UT), and 1× penicillin-streptomycin-glutamine (Gibco) at 37°C in 5% CO2. In some experiments noted below, cells were cultured in the absence of antibiotics.
THP-1 cell fractionation. The pellet fraction from centrifugation of cytoplasm at 130,000 × g (P130) was prepared from THP-1 control (treated with dimethyl sulfoxide [DMSO] vehicle) or TPA-treated (10 nM for 1 h) cells by lysis in buffer A (10 mM Tris [pH 7.4], 1 mM potassium acetate, 1.5 mM magnesium acetate, 2 mM dithiothreitol and protease inhibitors [10 μg/ml leupeptin, 10 μg/ml pepstatin A, 1 mM phenylmethylsulfonyl fluoride]) as described previously (2).
Immunopurifications. Affinity-purified AUF1 antibody was isolated from crude serum with immobilized His6-p37AUF1. Eighty million cell equivalents of P130 fraction was treated with RNase A (Qiagen, Hilden, Germany) at a concentration of 4 mg/ml at 30°C for 15 min, and 4 μg of affinity-purified antibody was used for immunoprecipitation with a Catch and Release Immunoprecipitation System (Upstate, Charlottesville, VA). Two purifications were combined, fractionated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), and stained with Sypro Ruby (Molecular Probes, Eugene, OR). Alternatively, proteins were transferred to nitrocellulose membranes and detected by Western blotting with chemiluminescence reagent (Pierce, Rockford, IL). Protein band intensities from films were quantified with the Kodak EDAS 120 gel documentation system and software (Eastman Kodak Co.).
Protein identification by mass spectrometry. A ∼26-kDa band was excised from a Sypro Ruby-stained gel containing immunoprecipitated proteins. The gel slice was subjected to in-gel tryptic digestion, and peptide fragments in the mass range of 810 to 2,000 Da were analyzed by matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) with a PerkinElmer Biosystems DE-PRO mass spectrometer (PerSeptive Biosystems, Framingham, MA) linked to a Voyager-DE PRO work station as described previously (56). Peptide mass/charge (m/z) ratios were used for their identification utilizing MS-Fit and MS-Digest programs (P. R. Baker and K. R. Clauser, Mass Spectrometry Facility, University of California, San Francisco, CA [http://prospector.ucsf.edu]).
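To make the fingerprinting logic concrete, here is a minimal Python sketch of an in silico tryptic digest of the kind MS-Fit/MS-Digest performs: cleave C-terminal to K or R (but not before P), sum monoisotopic residue masses, and keep singly protonated peptides within the 810- to 2,000-Da window scanned above. The mass table holds standard monoisotopic values; the test sequence is an arbitrary placeholder, not the Hsp27 sequence, and none of this is the authors' pipeline.

```python
# In silico tryptic digest sketch (illustrative; not the MS-Fit/MS-Digest code).
MONO = {  # standard monoisotopic residue masses, Da
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
H2O, PROTON = 18.01056, 1.00728

def tryptic_peptides(seq):
    """Cleave C-terminal to K/R, except when the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in 'KR' and not (i + 1 < len(seq) and seq[i + 1] == 'P'):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def fingerprint(seq, lo=810.0, hi=2000.0):
    """Return (peptide, [M+H]+ m/z) pairs inside the scanned mass window."""
    pairs = [(p, sum(MONO[a] for a in p) + H2O + PROTON)
             for p in tryptic_peptides(seq)]
    return [(p, round(mz, 3)) for p, mz in pairs if lo <= mz <= hi]

print(fingerprint("MTERRVPFSLLRGPSWDPFR"))  # placeholder test sequence
# -> [('VPFSLLR', 831.509), ('GPSWDPFR', 961.453)]
```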
Plasmid constructs, transfections, and spectrum deconvolution for live-cell FRET analyses. The cDNAs encoding enhanced cyan fluorescent protein (ECFP) and enhanced yellow fluorescent protein (EYFP), each carrying an A206K mutation, were obtained from plasmids pcDNA3-FL-IFN-γR2/[A206K]ECFP and pcDNA3-FL-IFN-γR2/[A206K]EYFP (23) (where IFN-γR2 is gamma interferon receptor chain 2). The A206K modification prevents autodimerization of the fluorescent proteins (58). For the experiments described in this report, the ECFP and EYFP cDNAs were fused to the N terminus of Hsp27 and to the C terminus of p37AUF1. To fuse ECFP and EYFP to Hsp27, the coding regions of ECFP and EYFP were amplified by PCR with the following primers: 5′-GCACGGTACCGCCACCATGGTGAGCAAGGGCGAGG-3′ (C/YFP forward) and 5′-GCACGGATCCGGAAACTTGTACAGCTCGTCCATGCC-3′ (reverse). The resulting PCR products contain a KpnI restriction site (underlined) and a BamHI site (underlined). PCR fragments were digested with KpnI and BamHI and ligated into expression vector pEF3 (22, 23), also digested with KpnI and BamHI. A SacI-truncated human EF1A promoter drives transgene expression in pEF3. The coding region of Hsp27 was amplified by PCR with the following primers: 5′-GCACGGATCCCACCGAGCGCCGCGTCCC-3′ (forward) and 5′-GCACGAATTCTTACTTGGCGGCAGTCTCATC-3′ (reverse). The forward primer contains a BamHI site (underlined), and the reverse primer contains an EcoRI site (underlined). Amplified Hsp27 cDNA was digested with BamHI and EcoRI and ligated into plasmids pEF3-ECFP and pEF3-EYFP digested with the same enzymes.
To fuse ECFP and EYFP to p37AUF1, the coding regions of ECFP and EYFP were amplified by PCR with the following primers: 5′-GCACGGATCCGTGAGCAAGGGCGAGGAG-3′ (forward) and 5′-GCACGATATCTTACTTGTACAGCTCGTCCATG-3′ (C/YFP reverse). The resulting PCR products contain a BamHI site (underlined) and an EcoRV site (underlined). PCR fragments were digested with BamHI and EcoRV and ligated into expression vector pEF3, also digested with BamHI and EcoRV. The coding region of p37AUF1 was amplified with the following primers: 5′-GCACGGTACCGCCACCATGTCGGAGGAGCAGTTCGG-3′ (forward) and 5′-GCACGGATCCGTATGGTTTGTAGCTATTTTGATG-3′ (reverse). The forward primer contains a KpnI restriction site (underlined) at the 5′ end, and the reverse primer contains a BamHI site at the 3′ end. The amplified p37AUF1 was digested with KpnI and BamHI and ligated into the similarly digested pEF3-ECFP/EYFP vector.
To synthesize pEF3-ECFP (i.e., ECFP alone, not as a fusion protein), the coding region of ECFP was amplified with the C/YFP forward and C/YFP reverse primers listed above. Amplified ECFP cDNA was digested with KpnI and EcoRV and ligated into the similarly digested pEF3 vector.
Hsp90 cDNA was amplified by reverse transcription-PCR (RT-PCR) from HeLa cell total RNA and modified for placement of EYFP on its C terminus. The following primers were used to amplify Hsp90: 5′-GCACGTTTAAACATGCCTGAGGAAGTGCACC-3′ (forward) and 5′-GCACACTAGTATCGACTTCTTCCATGCGAG-3′ (reverse). The forward primer contains a PmeI site (underlined) and the reverse primer contains a SpeI site (underlined). Plasmid pEF3/Hsp27-EYFP was digested with Acc65I, blunted, and then digested with SpeI to release the Hsp27 cDNA. Amplified Hsp90 cDNA was digested with PmeI and SpeI and then ligated with the digested vector, yielding plasmid pEF3/Hsp90-EYFP.
One million THP-1 cells were transiently cotransfected with the ECFP and EYFP plasmid pairs with Effectene reagent (Qiagen) according to the manufacturer's protocol. Twenty-four hours after transfection, cells were harvested and prepared as described below for live-cell FRET assays. Twenty million K562 cells were electroporated with 10 μg each of the ECFP and EYFP plasmid pairs in RPMI 1640 medium supplemented with 10% fetal calf serum in the absence of antibiotics. After 48 to 72 h, whole-cell lysates of K562 cells were prepared in 1× SDS loading buffer for Western blot analyses of fusion protein expression.
Transiently transfected THP-1 cells were washed with phosphate-buffered saline and water mounted onto coverslips at 24 h posttransfection. Protein-protein interactions between p37AUF1-p37AUF1 and p37AUF1-Hsp27 were determined by confocal fluorescence spectroscopy (22, 23). To demonstrate EYFP/ECFP fluorescence and FRET, a custom-built confocal microscope adapted to include a monochromator interfaced with a cooled charge-coupled-device camera was used so that both confocal fluorescence images and fluorescence emission spectra could be obtained from optical sections. EYFP was directly excited with a 488-nm argon laser rather than a 514-nm laser to permit separation of the laser spectrum from the EYFP emission spectrum (i.e., 527 nm) and to minimize photobleaching of EYFP. ECFP was excited with a 442-nm helium-cadmium laser. Images of ECFP emission were obtained with a 480- ± 10-nm band-pass filter and Olympus FluoView software. EYFP emission images were obtained with a 520- ± 10-nm band-pass filter whether excited by 442 or 488 nm light so that either direct EYFP emission or EYFP emission indicative of FRET, respectively, could be measured.
Accurate calculation of FRET efficiency depends on reliable estimation of the amount of donor and acceptor fluorescence present in a cell. Total fluorescence emission is the combined contributions of ECFP, EYFP, and endogenous cellular fluorescent components (e.g., nicotinamide and riboflavin derivatives). Thus, a recently developed algorithm for spectral deconvolution was employed to separate an emission spectrum into contributions by these three components (22) for subtraction of endogenous fluorescence. This procedure permits a more accurate estimate of donor fluorescence in the presence of acceptor (FDA) and acceptor fluorescence in the presence of donor (FAD). The efficiency of energy transfer (EFRET) was then calculated from a deconvoluted spectrum by the following equation (22): $E_{\mathrm{FRET}} = F_{\mathrm{AD}}/(F_{\mathrm{AD}} + F_{\mathrm{DA}}\Phi_A)$ (1), where ΦA is the quantum yield of acceptor fluorescence emission; ΦA is 0.61 for EYFP. The distance between the two proteins (R) was estimated by the following equation (22): $R = R_o[(1 - E_{\mathrm{FRET}})/E_{\mathrm{FRET}}]^{1/6}$ (2), where Ro is the Förster distance, defined as the radius between freely rotating donor and acceptor molecules that yields an EFRET value of 0.5; Ro is 49.2 Å for the ECFP-EYFP pair. Spectra were obtained with Andor charge-coupled device control software by analyzing a specific area within the cytoplasm chosen with emission images.
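As a compact illustration, equations 1 and 2 translate directly into the Python sketch below, using the constants quoted above (ΦA = 0.61, Ro = 49.2 Å); FAD and FDA are assumed to be the background-corrected intensities from the deconvolution step.

```python
# Equations 1 and 2 in code form; constants are those quoted in the text.
PHI_A = 0.61  # EYFP acceptor quantum yield
R_O = 49.2    # Förster distance (Å) for the ECFP-EYFP pair

def fret_efficiency(f_ad, f_da, phi_a=PHI_A):
    """Equation 1: E_FRET = F_AD / (F_AD + F_DA * Phi_A)."""
    return f_ad / (f_ad + f_da * phi_a)

def donor_acceptor_distance(e_fret, r_o=R_O):
    """Equation 2: R = Ro * ((1 - E_FRET) / E_FRET) ** (1/6)."""
    return r_o * ((1.0 - e_fret) / e_fret) ** (1.0 / 6.0)

# Worked check against the text: E_FRET = 0.31 should give ~56 Å.
print(round(donor_acceptor_distance(0.31), 1))  # 56.2
```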
FRET efficiencies vary at different points within a cell and from cell to cell. Some of the cell-to-cell variation can be attributed to different relative expression levels of ECFP- and EYFP-tagged fusion proteins in each cell within a transfected population. Thus, each cell has a different acceptor/donor fluorophore ratio. Consequently, FRET variability can be attributed to underrepresentation of acceptor fluorescence protein in some cells, resulting in lower EFRET. As a result of this observation, deconvoluted EYFP fluorescence intensities resulting from 488-nm excitation of at least 40 cells were plotted versus their respective FRET efficiencies obtained upon 442-nm excitation. Data were fit to a hyperbola, and only those cells that exhibited EYFP fluorescence higher than 10,000 fluorescence intensity units (at which EFRET was asymptotic and no longer increased with increasing EYFP fluorescence) were analyzed statistically for comparison among plasmid pairs by a Student's t test. P values of <0.05 were considered significantly different.
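A sketch of this screen is given below: fit EFRET against deconvoluted EYFP fluorescence to a hyperbola, keep only cells past the 10,000-unit plateau, and compare protein pairs by t test. All per-cell numbers here are synthetic placeholders, not the study's measurements.

```python
# Acceptor-level screen sketch with synthetic per-cell data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
eyfp = rng.uniform(2_000, 40_000, 42)                           # EYFP per cell
efret = 0.27 * eyfp / (6_000 + eyfp) + rng.normal(0, 0.01, 42)  # saturating E_FRET

def hyperbola(f, e_max, k):
    return e_max * f / (k + f)

(e_max, k), _ = curve_fit(hyperbola, eyfp, efret, p0=(0.3, 5_000))

keep = eyfp > 10_000                  # cells on the asymptotic plateau
pair_a = efret[keep]                  # e.g., p37AUF1-ECFP / p37AUF1-EYFP cells
pair_b = rng.normal(0.0, 0.01, 8)     # e.g., ECFP-only control cells
t_stat, p_val = ttest_ind(pair_a, pair_b)
print(f"Emax = {e_max:.2f}, cells kept = {keep.sum()}, P = {p_val:.2g}")
```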
To verify that FRET signatures corresponded to bona fide protein-protein interactions, control photobleaching experiments were performed. Specific photobleaching of the acceptor (EYFP) was performed by continuously scanning a region of a fluorescent cell with 514-nm light set at the highest intensity to photobleach a minimum of ∼85% of the EYFP fluorescence. Spectra were obtained before and after photobleaching of the acceptor EYFP. The resulting spectra were deconvoluted, and fluorescence intensities of the donor were obtained from the deconvoluted spectra before and after photobleaching of the acceptor. For both p37AUF1-p37AUF1 and p37AUF1-Hsp27 protein pairs, three independent cells were analyzed by this method.
Preparation of recombinant Hsp27. PCR was utilized with plasmid pCMVtag2B-FLAG-Hsp27 (29) to prepare Hsp27 devoid of the FLAG tag. This fragment was subcloned into pBAD/HisB (Invitrogen, Carlsbad, CA) to generate plasmid pBAD/HisB-Hsp27. Recombinant His6-Hsp27 was purified from Escherichia coli TOP10 cells transformed with pBAD/HisB-Hsp27 as described previously (54). When necessary, cleavage of the His6 tag from Hsp27 was achieved with an Enterokinase Cleavage Capture Kit (Novagen-EMD Biosciences).
RNA oligoribonucleotides. RNAs containing the 38-nucleotide (nt) core ARE from TNF-α mRNA or a fragment of similar size from the rabbit β-globin (Rβ) coding region were synthesized by Dharmacon (Lafayette, CO) as described previously (53). Fl-TNF-α ARE and Fl-Rβ substrates contain 5′-fluorescein (Fl) and were tested and quantified spectrophotometrically as described previously (54). For electrophoretic mobility shift assays (EMSAs), TNF-α ARE or Rβ RNAs were labeled at their 5′ termini with [γ-32P]ATP and T4 polynucleotide kinase (52).
EMSAs. EMSAs with His6-Hsp27 and 5′-32P-TNF-α or -Rβ RNA substrates were performed as described previously (52). Reaction products were visualized and analyzed with a Typhoon 9410 PhosphorImager (Molecular Dynamics, Amersham Biosciences).
Analysis of RNA-protein binding by fluorescence polarization. Equilibrium His6-Hsp27-RNA binding activity was assessed by monitoring interactions between a constant amount of fluorescein-conjugated RNA (0.15 nM) and a titration of His6-Hsp27 protein by fluorescence polarization with a Beacon 2000 Variable Temperature Fluorescence Polarization System (Panvera, Madison, WI) as described previously (52, 54). Where indicated on the figures, reaction mixtures included 5 mM MgCl2; all mixtures without Mg2+ contained 0.5 mM EDTA. Fluorescence polarimetry data were resolved by a variant of the Hill equation (42): $A_t = (A_R + A_{\mathrm{PR}}K[P]^x)/(1 + K[P]^x)$ (3), where [P] is the Hsp27 protein concentration. Total measured anisotropy (At) and intrinsic anisotropy of free RNA (AR) were determined experimentally. The Hill coefficient (x), intrinsic anisotropy of the protein-associated RNA (APR), and equilibrium constant (K) were solved by nonlinear least squares regression with the PRISM program, version 3.03 (GraphPad, San Diego, CA). Data are presented as means ± standard deviations. Student's t test was performed, and a P value of <0.05 was considered statistically significant (GraphPad InStat, version 3.05, San Diego, CA).
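The sketch below illustrates the equation 3 fit, with SciPy standing in for the PRISM regression; the titration data are synthetic, built around the K value reported in Results, and AR is fixed at an arbitrary plausible value.

```python
# Hypothetical equation 3 fit (SciPy in place of the PRISM regression).
import numpy as np
from scipy.optimize import curve_fit

A_R = 0.06  # intrinsic anisotropy of free RNA (illustrative value)

def hill(conc, k_assoc, x, a_pr):
    """Equation 3: A_t = (A_R + A_PR * K * [P]**x) / (1 + K * [P]**x)."""
    return (A_R + a_pr * k_assoc * conc**x) / (1 + k_assoc * conc**x)

conc = np.logspace(-10, -7, 12)  # 0.1 to 100 nM Hsp27, in M
rng = np.random.default_rng(1)
a_t = hill(conc, 6.8e8, 1.0, 0.20) + rng.normal(0, 0.002, conc.size)

(k_assoc, x, a_pr), _ = curve_fit(hill, conc, a_t, p0=(1e8, 1.0, 0.2))
print(f"K = {k_assoc:.2e} M^-1 -> Kd = 1/K = {1e9 / k_assoc:.2f} nM, Hill x = {x:.2f}")
```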
Plasmids for shRNA expression and cell transfections.pSilencer 2.1-U6-hygro vector (Ambion, Austin TX) was modified by replacing the U6 promoter with a U6 promoter/tetracycline (Tet)-operator combination (32). Oligonucleotides encoding short hairpin RNAs (shRNAs) were annealed, phosphorylated with T4 polynucleotide kinase, and ligated into the BamHI-HindIII sites, creating pSilencer/U6/tetO/shHsp27 (5′-GATCCCGCTAGCCACGCAGTCCAACTTCAAGAGAGTTGGACTGCGTGGCTAGCTTTTTTGGAAA-3′ and 5′-AGCTTTTCCAAAAAAGCTAGCCACGCAGTCCAACTCTCTTGAAGTTGGACTGCGTGGCTAGCGG-3′), pSilencer/U6/tetO/shAUF1 (5′-GATCCCGTTGTAGACTGCACTCTGATTCAAGAGATCAGAGTGCAGTCTACAACTTTTTTGGAAA-3′ and 5′-AGCTTTTCCAAAAAAGTTGTAGACTGCACTCTGATCTCTTGAATCAGAGTGCAGTCTACAACGG-3′), or a negative control not found in the human genome which was supplied with the Ambion kit. Vectors expressing shRNAs were linearized with XmnI and transfected into THP-1 cells with Effectene reagent (Qiagen, Hilden, Germany). Stably transfected cells were selected with 250 units/ml hygromycin B (Calbiochem). To assess knockdown, proteins were visualized and quantified as described above, with α-tubulin as a loading control.
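As a sanity check on the hairpin design above, the sketch below splits the shHsp27 sense-strand oligonucleotide into segments (BamHI-compatible end, 19-nt sense stem, TTCAAGAGA loop, antisense stem, poly-T terminator) and verifies that the two stems are reverse complements. The segment boundaries are our own reading of the sequence, not annotation supplied by the authors.

```python
# Parse the shHsp27 oligo (from the text) and check stem complementarity.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

oligo = ("GATCCC" "GCTAGCCACGCAGTCCAAC" "TTCAAGAGA"
         "GTTGGACTGCGTGGCTAGC" "TTTTTT" "GGAAA")
sense = "GCTAGCCACGCAGTCCAAC"  # presumed 19-nt stem targeting Hsp27 mRNA
loop = "TTCAAGAGA"

antisense = oligo[oligo.index(loop) + len(loop):][:len(sense)]
assert antisense == revcomp(sense)  # stems can base-pair into the hairpin
assert oligo.startswith("GATCC")    # BamHI-compatible 5' end
print("stem pairs:", sense, "<->", antisense)
```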
Determination of TNF-α mRNA half-life.THP-1 cells were treated with vehicle (DMSO; J. T. Baker) or 10 nM TPA (Sigma) for 1 h, and then actinomycin D (Calbiochem, La Jolla, CA) at a final concentration of 5 μg/ml was added to inhibit transcription. Time courses were limited to 3 h to avoid affecting cellular mRNA decay pathways by actinomycin D-enhanced apoptosis (46). Cells were harvested at each time point, lysed with QiaShredders (Qiagen, MD), and purified with an RNeasy kit (Qiagen, MD). Molecular beacons for β-actin [5′-6-FAM-d(CGCGATCATGGAGTCCTGTGGCATCCACGAAGATCGCG)-DABCYL-3′, where FAM is carboxyfluorescein and DABCYL is 4-(4′-dimethylaminophenylazo)benzoic acid] and TNF-α [5′-QUASAR 670-d(CGCGATCACTCCCAGGTCCTCTTCAAGGGCGATCGCG)-BHQ-2-3′, where BHQ is Black Hole quencher] and primers (for β-actin, 5′-TTGGCAATGAGCGGTTCC-3′ and 5′-AGCACTGTGTTGGCGTAC-3′; for TNF-α, 5′-ATGGCGTGGAGCTGAGAG-3′ and 5′-GATGCGGCTGATGGTGTG-3′) were designed with Premier Biosoft Beacon Designer Software (Stratagene, Cedar Creek, TX) purchased from Biosearch Technologies (Novato, CA). Melting temperatures were determined as described previously (47). Total RNA (1.25 μg) was reverse transcribed with an Access RT-PCR kit (Promega, Madison WI) and primers specific to β-actin and TNF-α. Real-time quantitative PCR (qPCR) was performed with a Stratagene MX3005P qPCR System with a Stratagene Brilliant qPCR MasterMix kit. Reaction mixtures were assembled in triplicate with 0.5 mM primer, 100 ng of molecular beacon, and 0.25 ng of the RNA equivalent of cDNA for β-actin or 25 ng of the RNA equivalent of cDNA for TNF-α. Relative mRNA levels were calculated from a standard curve. TNF-α levels were normalized with β-actin and plotted as a percentage of the time zero value. Data were analyzed by nonlinear regression, and half-life was calculated from the first-order decay constant (k) obtained with PRISM software, version 3.03 (GraphPad, San Diego, CA). Standard error about the regression solution was calculated by the software using n − 2 degrees of freedom and is linear about k (and therefore hyperbolic about the mRNA half-life, ln2/k). Suitability of each regression solution was evaluated with the runs test for random distribution of residuals (P < 0.05 cutoff). Assignment of explicit half-lives to very stable mRNAs is rarely informative, since the hyperbolic nature of half-life relative to the first-order mRNA decay constant inflates errors in the half-life value when k approaches zero. Accordingly, half-life values are given as >5 h for mRNAs where k is at least two standard errors below 0.139 h−1 (equal to ln2/5 h). Comparisons between mRNA decay constants were performed with an unpaired two-tailed t test, with differences yielding a P value of <0.05 considered significant.
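A minimal sketch of this decay analysis follows: fit the normalized time course to first-order decay, convert k to t1/2 = ln2/k, and report >5 h when k sits at least two standard errors below 0.139 h⁻¹, as described above. The time-course values are hypothetical.

```python
# First-order decay fit for an actinomycin D time course (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])      # h after actinomycin D
y = np.array([100.0, 26.0, 7.0, 1.9, 0.5, 0.04])  # % mRNA remaining, normalized

def decay(t, k):
    return 100.0 * np.exp(-k * t)

(k,), cov = curve_fit(decay, t, y, p0=(1.0,))
se = np.sqrt(cov[0, 0])

if k + 2 * se < np.log(2) / 5:  # k at least two standard errors below 0.139 h^-1
    print("t1/2 > 5 h")
else:
    print(f"k = {k:.2f} ± {se:.2f} h^-1, t1/2 = ln2/k = {np.log(2) / k:.2f} h")
```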
Reporter mRNA half-life determinations.Plasmid pTRE (Clontech) expressing the 1.7-kb Rβ gene, pTRE/Rβ-wt (where wt is wild type), was used as a stable control mRNA for reporter assays. The 38-bp core ARE from the human TNF-α gene (51) was inserted into the 3′ UTR of the Rβ gene at the unique BglII site to create plasmid pTRE/Rβ-ARE. pTRE/Rβ-wt or pTRE/Rβ-ARE reporter plasmids, pTet-Off (encoding Tet-responsive transcriptional activator tTA), and pEGFP-C2 (encoding internal control mRNA) were cotransfected into THP-1 cell lines with Effectene reagent (Qiagen). Two days posttransfection, doxycycline (Sigma) was added to culture medium to a final concentration of 2 μg/ml. Cells were harvested at each time point and lysed with Qiagen QiaShredder cartridges, and RNA was purified with a Qiagen RNeasy kit. Multiplex reactions were assembled with the SuperScriptIII Platinum One-Step qRT-PCR kit (Invitrogen, Carlsbad, CA) with 15 picomoles of each primer (for Rβ, 5′-GTGAACTGCACTGTGACAAGC-3′ and 5′-ATGATGAGACAGCACAATAACCAG-3′; for EGFP, 5′-GCGACACCCTGGTGAACC-3′ and 5′-GATGTTGTGGCGGATCTTGAAG-3′), 5 picomoles of each probe [for Rβ, 5′-(56-FAM)-CGTTGCCCAGGAGCCTGAAGTTCTCA(3BHQ_1)-3′; for EGFP, 5′-(5Cal610)-CACCTTGATGCCGTTCTTCTGCTTGTCG-(3BHQ_2)-3′], and 1 μg of total RNA. Reactions were run with the Stratagene MX3005P thermocycler. Relative mRNA levels were calculated based upon standard curves. Reporter mRNA levels were normalized with EGFP mRNA and plotted as a percentage of the levels at time zero. Data were analyzed as described above.
Hsp27 is an ASTRC-associated protein. The balance of stabilizing and destabilizing trans-acting factors within ASTRC, their posttranslational modifications, and overall subunit composition and stoichiometry might collectively dictate the rate of AMD in response to extracellular signals (56). A P130 fraction of cytoplasmic extracts contains ribosome/polyribosome-associated mRNAs, mRNA degradation enzymes, and AUF1 (35, 59). The P130 fraction from activated versus nonactivated control THP-1 cells (Fig. 1A) was thus used for immunopurification of AUF1-containing complexes with affinity-purified anti-AUF1 immunoglobulin G. Fractionation by SDS-PAGE and staining revealed an additional 26-kDa polypeptide in complexes from activated cells compared to control cells (Fig. 1A). MALDI-TOF analysis of this polypeptide (Fig. 1B) indicated the presence of several fragments with masses predicted from an in silico trypsin digest of Hsp27 within the mass range examined (Fig. 1C, underlined). Six of the predicted 13 trypsin fragments in the 810 to 2,000 m/z range were resolved, representing 25% of the Hsp27 polypeptide (Fig. 1D). Western blot analysis confirmed the 26-kDa polypeptide to be Hsp27 (Fig. 1E, top) (see below).
Hsp27 is an ASTRC-associated protein. (A) Cytoplasmic P130 fractions from nonactivated (Con) and activated (TPA) THP-1 cells were immunoprecipitated with anti-AUF1. Proteins were resolved by SDS-PAGE and detected with Sypro Ruby. The arrow denotes the band from the TPA sample excised and analyzed by MALDI-TOF mass spectrometry. (B) MALDI-TOF analysis of the 26-kDa polypeptide. (C) Predicted tryptic peptides for Hsp27. One-letter codes for amino acids are shown, and letters in parentheses are amino acids at the trypsin cleavage site. An asterisk denotes a fragment containing a tryptic site not cleaved by trypsin. Underlined peptides were detected experimentally in panel B, where peptides with mass/charge ratios (m/z) corresponding to Hsp27 are indicated by amino acid numbers within the Hsp27 sequence. mi, monoisotopic mass; av, average mass. Monoisotopic mass is calculated with the lowest common isotope for each element (e.g., 12C, 1H, 14N, 16O, 32S, and 31P). Average mass is calculated with isotopes for each element with abundances reflecting their normal proportion in the biosphere (http://prospector.ucsf.edu). (D) Amino acid sequence of Hsp27. Underlined residues represent peptides from panels B and C that were resolved by MALDI-TOF mass spectrometry. Some regions were contained in up to three fragments. (E) Western blot analyses of ASTRC subunits. AUF1-associated proteins were immunopurified from equal cell equivalents of cytoplasmic P130 fraction from untreated (CTRL) and activated (TPA) (10 nM TPA for 1 h) THP-1 cells. The indicated proteins were detected.
Analyses of ASTRC subunits. Western blot analyses of immunopurified ASTRC from control and activated THP-1 cells revealed Hsp27 and, as expected, Hsp70, Hsc70, eIF4G, and AUF1 (Fig. 1E). Additionally, Hsp27 antibody was immunoreactive with several polypeptides ranging from 26 to 33 kDa, and their association with ASTRC increased upon activation (Fig. 1E, top). With the AUF1 signal for normalization, the 26-kDa polypeptide increased ∼6.5-fold upon activation, and polypeptides in the 29- to 33-kDa range increased ∼2.5-fold (MALDI-TOF mass spectrometry was performed with the 26-kDa polypeptide [Fig. 1A]). We speculate that the higher-molecular-weight polypeptides may be posttranslationally modified Hsp27. Hsp27 can be phosphorylated on Ser15, Ser78, and/or Ser82 (10, 26). Indeed, MALDI-TOF analysis identified a fragment with an m/z consistent with phospho-Ser82 (Fig. 1B). However, detailed analysis of Hsp27 phosphorylation is beyond the scope of this work. Activation also increased association of eIF4G with ASTRC by approximately fivefold, but the levels of Hsp70, Hsc70, and AUF1 varied less than 50% following activation from values for control cells (Fig. 1E). These results resemble the increases in eIF4G association with ASTRC that occur upon heat shock-induced ARE-mRNA stabilization (27). In contrast to heat shock, however, activation of THP-1 cells did not significantly alter Hsp70 association with ASTRC. We conclude that Hsp27 resides in one or more complexes with AUF1, Hsc/Hsp70, and eIF4G in the cytoplasm of THP-1 cells. These results also suggest that ASTRC subunit levels are dynamic in response to specific cellular stimuli that alter mRNA degradation dynamics.
Analyses of AUF1-AUF1 and AUF1-Hsp27 interactions by live-cell FRET. To complement the biochemical approach employed to identify association of Hsp27 with ASTRC, we utilized a combination of confocal fluorescence spectroscopy and FRET to examine protein-protein interactions in live cells expressing p37AUF1 and Hsp27 fused to fluorescent proteins. FRET is a noninvasive, spectroscopic technique that allows real-time analysis of protein-protein interactions in live cells. FRET is a nonradiative transfer of energy from a fluorescent donor protein to a fluorescent acceptor protein, the efficiency of which is steeply dependent on the intermolecular distance between the two proteins. As p37AUF1-p37AUF1 interactions are known to occur in vitro (7) and in cells (5, 27), this interaction pair was examined in live cells prior to experiments with the p37AUF1-Hsp27 interaction pair.
Plasmids expressing full-length p37AUF1 fused at its N or C terminus with ECFP (the donor) or EYFP (the acceptor) and Hsp27 fused at its N or C terminus with ECFP or EYFP were used for experiments. Unlike endogenous p37AUF1, which is localized in the nucleus and cytoplasm (59), p37AUF1 with N-terminal ECFP or EYFP was restricted to the nucleus (data not shown). By contrast, p37AUF1 with C-terminal ECFP or EYFP (p37AUF1-ECFP and p37AUF1-EYFP, respectively) localized in the nucleus and cytoplasm (see below). Thus, FRET experiments were performed only with p37AUF1-ECFP and p37AUF1-EYFP. Western blot analyses of lysates from transfected cells verified that all fluorescently tagged proteins had the expected apparent molecular weights (see Fig. S1 in the supplemental material).
To examine interactions between p37AUF1-ECFP and p37AUF1-EYFP, plasmids encoding these proteins were transiently cotransfected into THP-1 cells. Both p37AUF1-ECFP (Fig. 2A, left panel) and p37AUF1-EYFP (Fig. 2A, middle panel) were located in the nucleus and cytoplasm, just as endogenous p37AUF1 is (5, 36, 59). Excitation of ECFP at 442 nm also led to emission of EYFP at 527 nm (Fig. 2A, right panel). EYFP emission, observed primarily in the cytoplasm, is indicative of a FRET signature and implied p37AUF1-ECFP-p37AUF1-EYFP interactions.
Analysis of p37AUF1-p37AUF1 interactions in live cells by FRET. (A) Confocal fluorescence imaging of THP-1 cells coexpressing p37AUF1-ECFP and p37 AUF1-EYFP. Emission of p37AUF1-ECFP upon excitation at 442 nm (left), emission of p37AUF1-EYFP upon excitation at 488 nm (middle), and emission of p37AUF1-EYFP upon excitation of p37AUF1-ECFP at 442 nm (right), indicative of a FRET signature. Images are shown as thermal gradients, where white indicates the strongest signal. Scale bar, 10 μm. (B) Deconvolutions of fluorescence emission spectra of the cell shown in panel A. The cell was irradiated with 442 nm light, and the spectrum was deconvoluted into its major components (left). After the raw spectrum (black line) was transformed so that it possessed even wavelength intervals (red line), it was separated into the following components: ECFP (i.e., p37AUF1-ECFP expression, green line), EYFP (FRET signature from p37AUF1-EYFP, yellow line and arrow), nicotinamide background at 500 nm (navy blue line), and riboflavin background at 550 nm (pink line). Addition of the four components (light blue line) closely approximated the observed spectrum, as it should. For this cell, EFRET = 0.31, with a calculated distance of 56 Å between the two fluorescent-tagged p37AUF1 proteins. At right is shown the deconvoluted spectrum of the same cell upon direct EYFP excitation at 488 nm. The yellow line represents p37AUF1-EYFP expression. For both panels and all deconvolutions shown in subsequent figures, fluorescence units are listed for ECFP, EYFP, and the cell at 550 nm (cell550); these were determined by integration of the curves at the respective emission wavelength maxima ± 10 nm (the band-pass width of the filters). (C) Statistical analysis of p37AUF1-EYFP/p37AUF1-ECFP interactions and the control ECFP/p37AUF1-EYFP interaction. Scatter plots of deconvoluted spectra of cells coexpressing p37AUF1-EYFP with p37AUF1-ECFP or ECFP with p37AUF1-EYFP were derived from FRET analyses of 42 and 8 cells, respectively. Cells that exhibited EYFP fluorescence of 10,000 units or higher (n = 14 and 8, respectively) (see Materials and Methods) were used in subsequent statistical analyses. Each point represents EFRET calculated from one deconvoluted spectrum. P < 0.0001 for this analysis.
To accurately quantify FRET efficiencies for comparisons between protein pairs, we employed a twofold strategy (described in detail in Materials and Methods). First, spectral scans of a selected region of cytoplasm were performed from 450 to 650 nm, and fluorescence data were deconvoluted to determine the contributions of ECFP and EYFP to total fluorescence; these corrected fluorescence values were used for calculation of FRET efficiency (EFRET) by equation 1 (see Materials and Methods). Second, EFRET was determined for multiple cells within a transfected population to permit statistical comparisons of differences between two protein pairs (interactions of p37AUF1-ECFP and p37AUF1-EYFP versus interactions of control ECFP and p37AUF1-EYFP). Figure 2B shows a spectral deconvolution analysis of a section of cytoplasm in the THP-1 cell shown in Fig. 2A. Excitation of ECFP at 442 nm produced a peak at 475 nm indicative of p37AUF1-ECFP expression (Fig. 2B, left panel); excitation of EYFP at 488 nm produced a peak at 527 nm indicative of p37AUF1-EYFP expression (Fig. 2B, right panel). These data indicate that p37AUF1-ECFP and p37AUF1-EYFP were indeed coexpressed in the cell. Excitation of ECFP at 442 nm in this cell also produced a peak of yellow emission at 527 nm, consistent with a FRET signature (Fig. 2B, left panel). EFRET was calculated from deconvoluted spectra to be 0.31 for this cell. Analyses of multiple cells provided an average EFRET of 0.27 ± 0.05 for the transfected cell population (Fig. 2C) with a mean distance of 58 Å between the proteins (calculated by equation 2) (see Materials and Methods).
To confirm bona fide p37AUF1-p37AUF1 interactions, two control experiments were performed. First, EFRET was determined for cells cotransfected with plasmids expressing ECFP alone (i.e., not fused to p37AUF1) and p37AUF1-EYFP. Second, photobleaching of p37AUF1-EYFP was performed to examine the effect on fluorescence intensity of p37AUF1-ECFP. For a true protein-protein interaction, photobleaching p37AUF1-EYFP should increase p37AUF1-ECFP fluorescence as photobleached p37AUF1-EYFP is less capable of acting as an energy acceptor. THP-1 cells were cotransfected with plasmids expressing ECFP (not fused to p37AUF1) and p37AUF1-EYFP. Fluorescent images were obtained, and spectral scans were performed. Excitation at 442 nm revealed ECFP expression (Fig. 3B, left panel) and homogenous distribution of ECFP across the cell (Fig. 3A, left panel). Likewise, excitation at 488 nm revealed p37AUF1-EYFP expression (Fig. 3B, right panel) and both nuclear and cytoplasmic p37AUF1-EYFP localization (Fig. 3A, middle panel). However, excitation of ECFP at 442 nm did not produce significant yellow fluorescence at 527 nm (Fig. 3A, compare right panel to middle panel); a deconvoluted spectrum had no detectable fluorescence component originating from EYFP (Fig. 3B, left panel). Thus, EFRET was essentially zero. Analyses of deconvoluted data from cells coexpressing p37AUF1-ECFP and p37AUF1-EYFP (Fig. 2) and cells coexpressing ECFP and p37AUF1-EYFP (Fig. 3) revealed a highly significant difference in mean EFRET between the two populations (Fig. 2C) (P < 0.0001). For a second control, selective photobleaching of p37AUF1-EYFP was performed in cells coexpressing p37AUF1-ECFP. Fluorescence spectra from cytoplasmic regions that exhibited FRET were obtained prior to and after photobleaching. The photobleached region had higher ECFP fluorescence than it did before photobleaching (∼10,000 versus 5,776 fluorescence units, respectively) (Fig. 3C, compare right and left panels), demonstrating that EYFP emission from 442-nm (ECFP) excitation arose from FRET and not from direct excitation of EYFP. Three independent cells showed similar results. Taken together, the data in Fig. 2 and 3 indicate that p37AUF1-p37AUF1 interactions occur in live THP-1 cells. We should note, however, that since p37AUF1-p37AUF1 interactions occur both in solution and when bound to an ARE, the FRET experiments cannot distinguish between these two states. Indeed, it is highly likely that within the cell, p37AUF1-p37AUF1 dimers are present in RNA-bound and unbound populations.
Control FRET experiments for p37AUF1-p37AUF1 interactions. (A) THP-1 cells were cotransfected with plasmids expressing ECFP alone (not linked to p37AUF1) and p37AUF1-EYFP. Emission of ECFP upon excitation at 442 nm (left), emission of p37AUF1-EYFP upon excitation at 488 nm (middle), and emission of p37AUF1-EYFP upon excitation of ECFP at 442 nm (right), indicative of no FRET signature. Images are shown as thermal gradients. Scale bar, 10 μm. (B) Deconvolutions of fluorescence emission spectra from the cell shown in panel A. The cell was irradiated with 442 nm light, and the spectrum was deconvoluted into its major components (left) as described in the legend of Fig. 2B. For this cell, EFRET was not detectable. Figure 2C contains a scatter plot analysis of eight control cells analyzed. At right is shown the deconvoluted spectrum of the same cell upon direct EYFP excitation at 488 nm. The yellow line represents p37AUF1-EYFP expression. (C) Photobleaching control. Deconvolution analyses of a THP-1 cell coexpressing p37AUF1-ECFP and p37AUF1-EYFP showed a value of 5,779 fluorescence units for ECFP (left panel) and an increase to 10,398 fluorescence units after photobleaching of EYFP (right panel), indicative of a true FRET signature. Three cells were analyzed with comparable results. cell550, fluorescence of the cell at 550 nm.
Similar experiments were performed to identify p37AUF1-Hsp27 interactions in live THP-1 cells. Cells were cotransfected with plasmids encoding p37AUF1-ECFP and EYFP-Hsp27. Coexpression of p37AUF1-ECFP (Fig. 4A, left panel) and EYFP-Hsp27 (Fig. 4A, middle panel) produced a FRET signature (Fig. 4A, right panel) indicative of interactions between these proteins. Spectral scans from 450 to 650 nm of a selected region of cytoplasm in multiple cells and data deconvolution permitted determinations of the contributions of ECFP and EYFP to total fluorescence and calculations of EFRET as described above for AUF1-AUF1 interactions. For example, excitation of ECFP at 442 nm produced a peak at 475 nm indicative of p37AUF1-ECFP expression (Fig. 4B, left panel); excitation of EYFP at 488 nm produced a peak at 527 nm, indicative of EYFP-Hsp27 expression (Fig. 4B, right panel). Excitation of ECFP at 442 nm in this cell also produced a peak of yellow emission at 527 nm, consistent with a FRET signature (Fig. 4B, left panel). EFRET was calculated from deconvoluted spectra to be 0.62 in this cell. Analyses of multiple cells provided a mean EFRET of 0.52 ± 0.06 for the transfected cell population (Fig. 4C) with an average distance of 49 Å between the proteins (calculated by equation 2; see Materials and Methods). Similar results were obtained with p37 AUF1-EYFP and ECFP-Hsp27 pairs (data not shown).
Analysis of p37AUF1-Hsp27 interactions in live cells by FRET. (A) Confocal fluorescence imaging of THP-1 cells coexpressing p37AUF1-ECFP and EYFP-Hsp27. Emission of p37AUF1-ECFP upon excitation at 442 nm (left), emission of EYFP-Hsp27 upon excitation at 488 nm (middle), and emission of EYFP-Hsp27 upon excitation of p37AUF1-ECFP at 442 nm (right), indicative of a FRET signature. Images are shown as thermal gradients. Scale bar, 5 μm. (B) Deconvolutions of fluorescence emission spectra from the cell shown in panel A. The cell was irradiated with 442 nm light, and the spectrum was deconvoluted into its major components (left) as described in the legend for Fig. 2B. For this cell, EFRET = 0.62, with a distance of 45 Å between the two fluorescently tagged proteins. At right is shown the deconvoluted spectrum of the same cell upon direct EYFP excitation at 488 nm. The yellow line represents p37AUF1-EYFP expression. (C) Statistical analysis of interactions of p37AUF1-ECFP with EYFP-Hsp27 and the interaction of the negative control p37AUF1-ECFP with Hsp90-EYFP. Scatter plots of deconvoluted spectra of cells coexpressing p37AUF1-ECFP and EYFP-Hsp27 or p37AUF1-ECFP and Hsp90-EYFP were derived from FRET analyses of >30 cells for each protein pair, as described in the legend to Fig. 2C. cell550, fluorescence of the cell at 550 nm.
As a control, EFRET was determined for cells cotransfected with plasmids expressing p37AUF1-ECFP and another chaperone, Hsp90-EYFP. Excitation at 442 nm indicated p37AUF1-ECFP expression (Fig. 5A, left panel) and excitation at 488 nm indicated Hsp90-EYFP expression (Fig. 5A, right panel). However, the deconvoluted spectrum had no significant EYFP fluorescence upon excitation of ECFP at 442 nm (Fig. 5A, left panel), indicating near-zero EFRET. Analyses of deconvoluted data from cells coexpressing p37AUF1-ECFP and EYFP-Hsp27 (Fig. 4) and cells coexpressing p37AUF1-ECFP and Hsp90-EYFP (Fig. 5) demonstrated a highly significant difference in mean EFRET between the two populations (Fig. 4C) (P < 0.0001). This result demonstrates that AUF1 does not indiscriminately associate with heat shock/chaperone proteins, consistent with previous biochemical experiments (27).
Control FRET experiments for p37AUF1-Hsp27 interactions. (A) THP-1 cells were cotransfected with plasmids expressing p37AUF1-ECFP and Hsp90-EYFP (a negative control). Deconvolutions of fluorescence emission spectra from at least 30 cells were performed as described in the legend to Fig. 2B. The cell was irradiated with 442-nm light, and the spectrum was deconvoluted into its major components to permit calculation of EFRET, which was not detectable (left). Figure 4C contains a scatter plot analysis of 30 cells analyzed for each transfection pair. At right is shown the deconvoluted spectrum of the same cell upon direct EYFP excitation at 488 nm. The yellow line represents Hsp90-EYFP expression. (B) Photobleaching control. Deconvolution analyses of a THP-1 cell coexpressing p37AUF1-ECFP and EYFP-Hsp27 showed a value of 3,466 fluorescence units for ECFP before photobleaching and an increase to 9,423 fluorescence units after photobleaching of EYFP, indicative of a true FRET signature. Three cells were analyzed with comparable results.
For a second control, selective photobleaching of EYFP-Hsp27 was performed in cells coexpressing p37AUF1-ECFP. As expected, photobleached regions exhibited more ECFP fluorescence than they did prior to photobleaching (9,243 versus 3,466 fluorescence units, respectively) (Fig. 5B, compare right and left panels). This demonstrates that EYFP emission upon 442-nm (ECFP) excitation was due to FRET and not to direct excitation of EYFP. Three independent cells showed similar results. In conclusion, the data in Fig. 4 and 5 indicate that p37AUF1-Hsp27 interactions occur in live THP-1 cells. This likely has implications for AMD.
Binding of Hsp27 to the TNF-α ARE. Since Hsp70, Hsc70, and AUF1 all display high-affinity ARE-binding activity (6, 16, 53), we hypothesized that Hsp27 might also bind AREs with high affinity. RNA EMSAs were performed with purified recombinant His6-Hsp27 and a 38-nt RNA containing the core ARE from TNF-α mRNA. A major RNA-protein complex was observed with increasing amounts of recombinant Hsp27 (Fig. 6A). Hsp27 did not bind a control 31-nt RNA derived from the Rβ coding region (Fig. 6A, lanes 10 to 11). Thus, Hsp27 displays high ARE-binding affinity. UV cross-linking confirmed direct ARE contact by Hsp27 (Fig. 6B, lanes 3 to 5). Lack of UV cross-linking to the Rβ coding region again confirmed specificity for ARE binding (Fig. 6B, lanes 6 to 9).
Hsp27 is an AUBP. (A) EMSA for Hsp27-ARE interaction. 32P-labeled TNF-α ARE (lanes 1 to 9) or a fragment of the Rβ coding region (lanes 10 and 11) was incubated with the indicated concentrations of His6-Hsp27 and fractionated by native gel electrophoresis. Free RNA is indicated with a bracket, and the protein-RNA complex is indicated by the arrow. NP, no protein added. (B) UV cross-linking analysis of Hsp27-ARE interaction. Binding reactions contained increasing concentrations of His6-Hsp27 and 0.2 nM 5′-32P-labeled TNF-α ARE (lanes 1 to 5) or Rβ (lanes 6 to 9). Reactions were irradiated with UV and fractionated by SDS-PAGE. The arrowhead indicates the protein-RNA complex. The migration positions of markers are indicated on the left of the gel. (C) Evaluation of Hsp27-RNA equilibrium binding by fluorescence polarization. Individual binding reactions containing fluorescein-labeled RNA substrates Fl-TNF-α ARE (filled circles) or Fl-Rβ (open circles) were assembled across a titration of His6-Hsp27 concentrations in the absence (left) or the presence of 5 mM Mg2+ (right). The anisotropy value for each binding reaction is plotted versus protein concentration. Data were described by nonlinear regression by equation 3. Residuals plots of these solutions depict the calculated anisotropy (Acalc) subtracted from the observed anisotropy value (Aobs) and demonstrate that solutions are not biased (lower panels). The same 38-nt core ARE-RNA was used for all three binding assays.
To quantitatively evaluate equilibrium association between Hsp27 and RNA, fluorescence polarization assays were performed with 5′ fluorescein-conjugated RNA substrates. A fluorescent RNA alone yields a low anisotropy value since its small molecular volume allows rapid tumbling and, hence, significant light depolarization. By contrast, protein-RNA association increases molecular volume, slowing RNA mobility, resulting in a higher anisotropy value. Consequently, increasing concentrations of His6-Hsp27 under conditions of limiting Fl-TNF-α RNA increased total measured anisotropy (At) (Fig. 6C, top panel). These data were well described by the Hill equation, equation 3 (see Materials and Methods) (Fig. 6C, top left panel), as data points were randomly distributed around this solution, indicated by the residuals plot (Fig. 6C, bottom left panel). The Hill coefficient (x) resolved to x = 1.15 ± 0.08 (n = 3), indicating that association of Hsp27 with the TNF-α ARE did not significantly deviate from a single-site binding model. The association binding constant (K) resolved to (6.8 ± 0.9) × 10⁸ M⁻¹, corresponding to a dissociation binding constant of 1.5 nM (Kd = 1/K), which is comparable in magnitude to those of the AUBPs AUF1 (Kd = 0.8 nM) and HuR (Kd = 0.5 nM) (9, 55). Similar results were obtained with Hsp27 in which the His6 tag was enzymatically removed prior to binding assays (data not shown), indicating no contribution of the His6 tag. Addition of increasing amounts of His6-Hsp27 to the Fl-Rβ RNA substrate had no effect upon fluorescence anisotropy, indicative of no binding (Fig. 6C, top left panel). We conclude that Hsp27 binds specifically to the TNF-α ARE-RNA with high affinity.
Mg2+ or other multivalent cations stabilize the TNF-α ARE-RNA in a folded, condensed structure that significantly inhibits association with AUF1 (54). To a lesser extent Mg2+ impedes ARE binding by Hsp70 but has no influence on HuR binding to the ARE (55). These observations imply that ARE presentation within mRNAs may favor binding of some AUBPs. To determine whether RNA folding affects ARE binding by Hsp27, fluorescence polarimetry experiments were repeated with 5 mM Mg2+. Consistent with earlier experiments, Mg2+ increased the intrinsic anisotropy of free RNA substrates (AR; equation 3) due to cation-induced or cation-stabilized RNA structures (54). His6-Hsp27 binding to the TNF-α ARE was again well described by a single-site binding model (Fig. 6C, top right panel), as indicated by the residuals plot (Fig. 6C, bottom right panel). The Hill coefficient (x = 1.17 ± 0.05) and association constant [K = (6.2 ± 0.1) × 10⁸ M⁻¹, corresponding to a Kd of 1.6 nM] resolved for these data did not differ significantly from values obtained in the absence of Mg2+. His6-Hsp27 did not bind to Rβ in the presence of 5 mM Mg2+ (Fig. 6C, top right panel). Based upon these results, we conclude that Hsp27 binds specifically to the TNF-α ARE with high affinity, and unlike AUF1, binding appears independent of RNA structural influences. Multiple AUBPs possessing different binding preferences may permit ASTRC to associate with ARE-mRNAs that present a broad array of secondary structures and perhaps dictate competitive binding equilibria between stabilizing and destabilizing trans-acting factors in response to extracellular stimuli.
Degradation of TNF-α mRNA requires both Hsp27 and AUF1. To examine the biological significance of Hsp27-ARE interactions, we established THP-1 cells stably expressing either a scrambled control shRNA (shCTRL) or shRNA directed against Hsp27 (shHsp27) and measured effects upon TNF-α mRNA degradation. The Hsp27 level was reduced 63% compared to cells expressing control shRNA (Fig. 7A, compare lane 4 to lane 1). Likewise, we established THP-1 cells expressing shRNA against all four AUF1 isoforms (shAUF1); knockdown was greater than 90% (Fig. 7B). The half-life of TNF-α mRNA was determined from an actinomycin D time course, with TNF-α mRNA normalized to β-actin mRNA levels at each time point, following acute treatment with vehicle (DMSO) or TPA (see Materials and Methods for details). Consistent with earlier observations (56), activation with TPA of cells expressing shCTRL stabilized TNF-α mRNA approximately sixfold (Fig. 7C, shCTRL versus shCTRL+TPA) (0.26 h versus 1.5 h, respectively; P = 0.0007). Thus, shRNA expression per se does not affect stabilization of TNF-α mRNA following activation with TPA. Knockdown of Hsp27 stabilized TNF-α mRNA more than 10-fold to a half-life of >5 h, compared to cells expressing shCTRL (i.e., >5 h versus 0.26 h, respectively; P = 0.0003) (Fig. 7C). The TNF-α mRNA half-life was approximately 4 h upon activation of cells expressing shHsp27 with TPA (Fig. 7C), indicating little effect of activation on stabilization of TNF-α mRNA in cells with reduced Hsp27 expression. We conclude that (i) proper Hsp27 expression is essential for rapid degradation of TNF-α mRNA in nonactivated cells and (ii) TPA-mediated activation may reduce Hsp27 activity to stabilize TNF-α mRNA.
Knockdown of Hsp27 or AUF1 stabilizes TNF-α mRNA. THP-1 cells were stably transfected with plasmids expressing shCTRL, shHsp27, or shAUF1 to elicit RNA interference. Knockdown of Hsp27 (A) and AUF1 (B) was assessed by Western blotting with various amounts of cell extracts from cells expressing the indicated shRNAs and antibodies to Hsp27 and AUF1. Hsc70 or α-tubulin served as internal controls. Analyses of multiple exposures indicated 63% knockdown of Hsp27 and >90% knockdown of AUF1. *, nonspecific band in panel A. (C and D) Cells expressing the indicated shRNAs were treated with vehicle or 10 nM TPA for 1 h, and then 5 μg/ml actinomycin D was added to culture medium to block transcription. RNA was purified at the indicated time points, quantified by quantitative RT-PCR, and analyzed with nonlinear regression to determine mRNA half-life. (C) Analysis of shHsp27-expressing cells. (D) Analysis of shAUF1-expressing cells. t1/2, half-life.
As in AUF1−/− mice (30), knockdown of AUF1 in THP-1 cells stabilized TNF-α mRNA, in this case approximately sevenfold (0.26 h versus 1.8 h, respectively; P = 0.0007) (Fig. 7D, shCTRL versus shAUF1). However, activation of cells expressing shAUF1 did not lead to further statistically significant mRNA stabilization compared to nonactivated cells (Fig. 7D, shAUF1 versus shAUF1+TPA) (P = 0.08). Taken together, the knockdown experiments indicated that (i) proper expression of both Hsp27 and AUF1 is necessary for rapid degradation of TNF-α mRNA in nonactivated cells and (ii) activation reduces AUF1 and/or Hsp27 activities to effect mRNA stabilization.
Hsp27 and AUF1 promote AMD via the TNF-α ARE. Next, we determined whether Hsp27 and AUF1 affect TNF-α mRNA, at least in part, through the ARE. Tet-responsive reporter plasmids containing either an unmodified rabbit β-globin gene (Rβ-wt) or β-globin linked to the core TNF-α ARE in the 3′ UTR (Rβ-ARE) were cotransfected into THP-1 cells expressing shCTRL, shHsp27, or shAUF1 together with a plasmid encoding a fusion protein of the Tet repressor DNA-binding domain and VP16 trans-activation domain (13). After 2 days, doxycycline was added to the culture medium to block reporter transcription, and RNA was analyzed at each time point. In shCTRL-expressing cells, Rβ mRNA was relatively stable with a half-life of >5 h; by contrast, the Rβ-ARE mRNA half-life was 0.4 h, indicating that the TNF-α ARE potently induces AMD (Fig. 8A) (P < 0.0001). However, the Rβ-ARE mRNA half-life was >5 h in shHsp27-expressing cells (Fig. 8A) (P = 0.0001 compared to shCTRL); Rβ mRNA was still relatively stable with a half-life of >5 h. Likewise, knockdown of AUF1 stabilized the Rβ-ARE reporter mRNA over twofold to 0.9 h (Fig. 8B) (P = 0.0024 compared to shCTRL). We note, however, that the time points appear to plateau between 2 and 6 h with <15% reporter mRNA remaining in cells with AUF1 knockdown. While this might suggest biphasic kinetics, single-population decay kinetics were employed to analyze these data, and as such, the 0.9-h half-life represents an average decay rate for the ARE-reporter mRNA upon knockdown of AUF1. This does not preclude that multiple subpopulations may exist with different decay kinetics. However, the data suggest that any such stable population, if it exists, must be <15% of the total reporter mRNA. In any event, the twofold stabilization observed here is consistent with observations in AUF1−/− mice and in AUF1 knockdown experiments examining other AREs (25, 30, 45).
Knockdown of Hsp27 or AUF1 reduces AMD efficiency. Transcription of the indicated Rβ reporter constructs was blocked by adding 2 μg/ml doxycycline (Dox) to the media of shCTRL-, shHsp27-, and shAUF1-expressing THP-1 cells. Reporter mRNA half-lives were determined as described in the legend of Fig. 7. (A) shHsp27-expressing cells. (B) shAUF1-expressing cells. t1/2, half-life.
Since wild-type β-globin mRNA is relatively stable in cells regardless of the shRNA expressed, we examined an additional control for specificity of mRNA stabilization in shHsp27- and shAUF1-expressing cells; the constitutive decay element, an AMD-independent destabilizing sequence in TNF-α mRNA (45), was inserted into the 3′ UTR of Rβ. This reporter mRNA was unstable in shCTRL-, shHsp27-, and shAUF1-expressing cells (data not shown). Taken together, these results indicate that Hsp27 and AUF1 can promote AMD via the TNF-α ARE.
Proper regulation of innate immune responses is essential, as prolonged production of proinflammatory cytokines is highly deleterious. AUBPs and microRNAs play indispensable roles in immune regulation. For example, the AUBPs HuR, TIA-1, and TTP act in concert to limit proinflammatory cytokine biosynthesis in macrophages at the levels of mRNA decay and translation (20). In addition, TTP collaborates with miR16 to promote degradation of a reporter mRNA containing the TNF-α ARE in both HeLa and Drosophila cells (18). Moreover, AUF1 knockout mice are highly susceptible to endotoxemia due to compromised degradation of ARE-mRNAs encoding TNF-α and interleukin-1β (30). Recent work has unveiled yet another layer of complexity in TNF-α regulation: cell cycle-dependent activation of TNF-α translation involving Ago, FXR1, and miR369-3, a microRNA that activates TNF-α translation in nonproliferating cells (48, 49). Clearly, to attain a comprehensive understanding of proinflammatory cytokine gene expression in innate immunity, it is important to identify all the effectors of AMD. Toward this goal, we focused on the ASTRC protein ensemble, as AUF1 knockout mice revealed its central requirement for AMD and proinflammatory gene regulation in vivo (30). In this work, our key finding was that chaperone Hsp27 is a novel ASTRC subunit critical for cytokine AMD.
Hsp27 has diverse cellular functions including, but not limited to, molecular chaperoning (17), actin polymerization (1), and protection from oxidative stress via modulation of glutathione levels (33). Our results demonstrate that Hsp27 is also a novel, high-affinity AUBP that associates with AUF1-containing protein complexes in vivo and is essential for AMD in monocytes. Three previous observations indicated a possible role for Hsp27 in mRNA stability. (i) Shchors and colleagues found that AUF1 and Hsp27 associate with cell death-inhibiting RNA, a U-rich transcript derived from the 3′ UTR of a gene with unknown function. Binding of the AUF1-Hsp27 complex was associated with reduced AMD and an antiapoptotic phenotype (39). (ii) Lasa and colleagues showed that overexpression of an Hsp27 phosphomimetic (glutamic acid substitutions at serines 15, 78, and 82), but not wild-type, protein resulted in stabilization of a β-globin/COX-2 ARE reporter transcript (29). (iii) Sommer and colleagues showed that overexpression of Hsp27 reduced levels of the ARE-mRNA encoding the c-Yes oncoprotein (42). However, in these studies, there was no clear mechanistic connection between Hsp27 and mRNA stability. Our data suggest that these previous results might be explained by the association of Hsp27 with ASTRC and the ability of Hsp27 to bind and modulate the stability of ARE-mRNAs.
What are potential roles for Hsp27 in AMD? Consideration of a previous observation may provide clues. During heat shock, increased association of Hsp70 and eIF4G with ASTRC occurs coincident with inactivation of AMD (27). Indeed, activation of THP-1 cells with TPA increased association of both eIF4G and Hsp27, but not Hsp70, with ASTRC (Fig. 1). These observations together suggest that activation-induced association of Hsp27 with ASTRC may permit coupling of monocyte activation and AMD. By contrast, increased association of Hsp70 with ASTRC following heat shock might be indicative of a heat shock-specific signal to the AMD machinery. Clearly, future work, including the identification of novel ASTRC components and additional AUBPs, will be required to adequately address the hypothesis that ASTRC is adaptive and that its individual subunits can serve as intermediaries between specific stimuli and AMD.
For reasons that are not yet clear, ASTRC contains at least four AUBPs: AUF1, Hsp70, Hsc70, and Hsp27. We offer two hypotheses. (i) HuR, Hsp70, and p37AUF1 display distinct binding affinities for stabilized ARE secondary structures (9). For example, secondary structure has a modest effect upon Hsp70 and no effect upon HuR binding but impairs ARE-binding affinity of p37AUF1 by >10-fold. By contrast, ARE binding by Hsp27 is unaffected by RNA structure (Fig. 6C). As such, it does not favor particular ARE-secondary structure presentations, distinguishing it from AUF1. Thus, multiple AUBPs possessing different binding preferences may permit ASTRC to associate with ARE-mRNAs that present broad arrays of secondary structures. (ii) Reporter assays utilizing either a control ARE or a folded ARE indicated that a strongly folded ARE slows AMD (9), likely due to the favored binding of stabilizing AUBPs. Therefore, competition between stabilizing and destabilizing trans-acting factors for ARE occupancy is likely regulated in part by ARE conformation. Thus, regulated mRNA stabilization may be mediated by factors that have the potential to modulate ARE presentation, such as flanking sequences, association of RNA-binding proteins at adjacent sequences, microRNAs, or local cation concentrations. Another observation worth noting is that despite the fact that many ASTRC subunits are RNA-binding proteins, their association with the complex may not be simply due to RNA bridging as proteins were coimmunoprecipitated following exhaustive RNase digestion (Fig. 1). Although this experiment does not rule out the possibility that complex assembly may be ARE dependent, this evidence, together with the ARE-binding activity of Hsp27, suggests that association of Hsp27 with ASTRC may occur via both protein-protein and protein-mRNA interactions. Nonetheless, a protein complex consisting of several AUBPs, such as ASTRC, that possesses both various preferences for ARE-structure and various contributions to ARE-mRNA stability would permit rapid alterations in AMD in response to extracellular stimuli.
In conclusion, our experiments have unveiled chaperone Hsp27 as both a new subunit of ASTRC and essential for cytokine AMD. We note that among its many functions, Hsp27 is also an actin-binding protein involved in cell motility (19). As monocyte activation by adhesion to endothelium and motility stimulate cytoskeletal reorganization and stabilize many cytokine ARE-mRNAs, it is tempting to speculate that Hsp27 association with ASTRC provides a new piece to an age-old puzzle as to how proinflammatory cytokine biosynthesis is coupled with cell adhesion/motility (15, 40, 41). As extracellular stimuli drive Hsp27 into numerous signaling complexes as well (60), future elucidation of their interactions with ASTRC will add additional details to our understanding of the multilayered control systems required to initiate, maintain, and limit the innate immune response.
We thank Andy Clark and Daiya Takai for plasmids, Nahum Sonenberg for eIF4G antibodies, Daniel Sinsimer for assistance with molecular beacon design and methods, and Gerald Wilson for reporter quantitative RT-PCR methods and assistance with statistical analyses. We thank Lori Covey for comments on the manuscript.
This work was supported by grants P01 AI057596 from the NIH to S.P. and G.B. and R01 AI059465 from the NIH to S.P. K.S. was supported by training grant T32 AI00743 from the NIH to S.P. F.M.G. and A.M.K. were supported by Integrative Graduate Education and Research Traineeship DGE0333196 from the NSF to Prabhas Moghe (Department of Biomedical Engineering, Rutgers University). F.M.G. was also supported by Initiative for Minority Students R25 GM058389 from the NIH to Michael Leibowitz (UMDNJ).
Returned for modification 19 May 2008.
Accepted 11 June 2008.
Published ahead of print on 23 June 2008.
† Supplemental material for this article may be found at http://mcb.asm.org/.
Benndorf, R., K. Hayess, S. Ryazantsev, M. Wieske, J. Behlke, and G. Lutsch. 1994. Phosphorylation and supramolecular organization of murine small heat shock protein HSP25 abolish its actin polymerization-inhibiting activity. J. Biol. Chem. 269:20780-20784.
Brewer, G., and J. Ross. 1990. Messenger RNA turnover in cell-free extracts. Methods Enzymol. 181:202-209.
Carballo, E., H. Cao, W. S. Lai, E. A. Kennington, D. Campbell, and P. J. Blackshear. 2001. Decreased sensitivity of tristetraprolin-deficient cells to p38 inhibitors suggests the involvement of tristetraprolin in the p38 signaling pathway. J. Biol. Chem. 276:42580-42587.
Chen, C. Y., R. Gherzi, S. E. Ong, E. L. Chan, R. Raijmakers, G. J. Pruijn, G. Stoecklin, C. Moroni, M. Mann, and M. Karin. 2001. AU binding proteins recruit the exosome to degrade ARE-containing mRNAs. Cell 107:451-464.
David, P. S., R. Tanveer, and J. D. Port. 2007. FRET-detectable interactions between the ARE binding proteins, HuR and p37AUF1. RNA 13:1453-1468.
DeMaria, C. T., and G. Brewer. 1996. AUF1 binding affinity to A+U-rich elements correlates with rapid mRNA degradation. J. Biol. Chem. 271:12179-12184.
DeMaria, C. T., Y. Sun, L. Long, B. J. Wagner, and G. Brewer. 1997. Structural determinants in AUF1 required for high affinity binding to A+U-rich elements. J. Biol. Chem. 272:27635-27643.
Donnini, M., A. Lapucci, L. Papucci, E. Witort, A. Jacquier, G. Brewer, A. Nicolin, S. Capaccioli, and N. Schiavone. 2004. Identification of TINO: a new evolutionarily conserved BCL-2 AU-rich element RNA-binding protein. J. Biol. Chem. 279:20154-20166.
Fialcowitz, E. J., B. Y. Brewer, B. P. Keenan, and G. M. Wilson. 2005. A hairpin-like structure within an AU-rich mRNA-destabilizing element regulates trans-factor binding selectivity and mRNA decay kinetics. J. Biol. Chem. 280:22406-22417.
Gaestel, M., W. Schroder, R. Benndorf, C. Lippmann, K. Buchner, F. Hucho, V. A. Erdmann, and H. Bielka. 1991. Identification of the phosphorylation sites of the murine small heat shock protein Hsp25. J. Biol. Chem. 266:14721-14724.
Gao, M., C. J. Wilusz, S. W. Peltz, and J. Wilusz. 2001. A novel mRNA-decapping activity in HeLa cytoplasmic extracts is regulated by AU-rich elements. EMBO J. 20:1134-1143.
Gherzi, R., K. Y. Lee, P. Briata, D. Wegmuller, C. Moroni, M. Karin, and C. Y. Chen. 2004. A KH domain RNA binding protein, KSRP, promotes ARE-directed mRNA turnover by recruiting the degradation machinery. Mol. Cell 14:571-583.
Gossen, M., and H. Bujard. 1992. Tight control of gene expression in mammalian cells by tetracycline-responsive promoters. Proc. Natl. Acad. Sci. USA 89:5547-5551.
Guhaniyogi, J., and G. Brewer. 2001. Regulation of mRNA stability in mammalian cells. Gene 265:11-23.
Haskill, S., C. Johnson, D. Eierman, S. Becker, and K. Warren. 1988. Adherence induces selective mRNA expression of monocyte mediators and proto-oncogenes. J. Immunol. 140:1690-1694.
Henics, T., E. Nagy, H. J. Oh, P. Csermely, A. von Gabain, and J. R. Subjeck. 1999. Mammalian Hsp70 and Hsp110 proteins bind to RNA motifs involved in mRNA stability. J. Biol. Chem. 274:17318-17324.
Jakob, U., M. Gaestel, K. Engel, and J. Buchner. 1993. Small heat shock proteins are molecular chaperones. J. Biol. Chem. 268:1517-1520.
Jing, Q., S. Huang, S. Guth, T. Zarubin, A. Motoyama, J. Chen, P. F. Di, S. C. Lin, H. Gram, and J. Han. 2005. Involvement of microRNA in AU-rich element-mediated mRNA instability. Cell 120:623-634.
Jog, N. R., V. R. Jala, R. A. Ward, M. J. Rane, B. Haribabu, and K. R. McLeish. 2007. Heat shock protein 27 regulates neutrophil chemotaxis and exocytosis through two independent mechanisms. J. Immunol. 178:2421-2428.
Katsanou, V., O. Papadaki, S. Milatos, P. J. Blackshear, P. Anderson, G. Kollias, and D. L. Kontoyiannis. 2005. HuR as a negative posttranscriptional modulator in inflammation. Mol. Cell 19:777-789.
Knapinska, A. M., P. Irizarry-Barreto, S. Adusumalli, I. Androulakis, and G. Brewer. 2005. Molecular mechanisms regulating mRNA stability: physiological and pathological significance. Curr. Genomics 6:471-486.
Krause, C. D., E. Mei, O. Mirochnitchenko, N. Lavnikova, J. Xie, Y. Jia, R. M. Hochstrasser, and S. Pestka. 2006. Interactions among the components of the interleukin-10 receptor complex. Biochem. Biophys. Res. Commun. 340:377-385.
Krause, C. D., E. Mei, J. Xie, Y. Jia, M. A. Bopp, R. M. Hochstrasser, and S. Pestka. 2002. Seeing the light: preassembly and ligand-induced changes of the interferon γ receptor complex in cells. Mol. Cell. Proteomics 1:805-815.
Lai, W. S., E. Carballo, J. R. Strum, E. A. Kennington, R. S. Phillips, and P. J. Blackshear. 1999. Evidence that tristetraprolin binds to AU-rich elements and promotes the deadenylation and destabilization of tumor necrosis factor alpha mRNA. Mol. Cell. Biol. 19:4311-4323.
Lal, A., K. Mazan-Mamczarz, T. Kawai, X. Yang, J. L. Martindale, and M. Gorospe. 2004. Concurrent versus individual binding of HuR and AUF1 to common labile target mRNAs. EMBO J. 23:3092-3102.
Landry, J., H. Lambert, M. Zhou, J. N. Lavoie, E. Hickey, L. A. Weber, and C. W. Anderson. 1992. Human HSP27 is phosphorylated at serines 78 and 82 by heat shock and mitogen-activated kinases that recognize the same amino acid motif as S6 kinase II. J. Biol. Chem. 267:794-803.
Laroia, G., R. Cuesta, G. Brewer, and R. J. Schneider. 1999. Control of mRNA decay by heat shock-ubiquitin-proteasome pathway. Science 284:499-502.
Laroia, G., B. Sarkar, and R. J. Schneider. 2002. Ubiquitin-dependent mechanism regulates rapid turnover of AU-rich cytokine mRNAs. Proc. Natl. Acad. Sci. USA 99:1842-1846.
Lasa, M., K. R. Mahtani, A. Finch, G. Brewer, J. Saklatvala, and A. R. Clark. 2000. Regulation of cyclooxygenase 2 mRNA stability by the mitogen-activated protein kinase p38 signaling cascade. Mol. Cell. Biol. 20:4265-4274.
Lu, J. Y., N. Sadri, and R. J. Schneider. 2006. Endotoxic shock in AUF1 knockout mice mediated by failure to degrade proinflammatory cytokine mRNAs. Genes Dev. 20:3174-3184.
Ma, W. J., S. Cheng, C. Campbell, A. Wright, and H. Furneaux. 1996. Cloning and characterization of HuR, a ubiquitously expressed Elav-like protein. J. Biol. Chem. 271:8144-8151.
Matsukura, S., P. A. Jones, and D. Takai. 2003. Establishment of conditional vectors for hairpin siRNA knockdowns. Nucleic Acids Res. 31:e77.
Mehlen, P., C. Kretz-Remy, X. Preville, and A. P. Arrigo. 1996. Human hsp27, Drosophila hsp27 and human αB-crystallin expression-mediated increase in glutathione is essential for the protective activity of these proteins against TNFα-induced cell death. EMBO J. 15:2695-2706.
Mukherjee, D., M. Gao, J. P. O'Connor, R. Raijmakers, G. Pruijn, C. S. Lutz, and J. Wilusz. 2002. The mammalian exosome mediates the efficient degradation of mRNAs that contain AU-rich elements. EMBO J. 21:165-174.
Ross, J., and G. Kobs. 1986. H4 histone messenger RNA decay in cell-free extracts initiates at or near the 3′ terminus and proceeds 3′ to 5′. J. Mol. Biol. 188:579-593.
Sarkar, B., J. Y. Lu, and R. J. Schneider. 2003. Nuclear import and export functions in the different isoforms of the AUF1/heterogeneous nuclear ribonucleoprotein protein family. J. Biol. Chem. 278:20700-20707.
Schmidlin, M., M. Lu, S. A. Leuenberger, G. Stoecklin, M. Mallaun, B. Gross, R. Gherzi, D. Hess, B. A. Hemmings, and C. Moroni. 2004. The ARE-dependent mRNA-destabilizing activity of BRF1 is regulated by protein kinase B. EMBO J. 23:4760-4769.
Schwende, H., E. Fitzke, P. Ambs, and P. Dieter. 1996. Differences in the state of differentiation of THP-1 cells induced by phorbol ester and 1,25-dihydroxyvitamin D3. J. Leukoc. Biol. 59:555-561.
Shchors, K., F. Yehiely, R. K. Kular, K. U. Kotlo, G. Brewer, and L. P. Deiss. 2002. Cell death inhibiting RNA (CDIR) derived from a 3′-untranslated region binds AUF1 and heat shock protein 27. J. Biol. Chem. 277:47061-47072.
Sirenko, O. I., A. K. Lofquist, C. T. DeMaria, J. S. Morris, G. Brewer, and J. S. Haskill. 1997. Adhesion-dependent regulation of an A+U-rich element-binding activity associated with AUF1. Mol. Cell. Biol. 17:3898-3906.
Sirenko, O., U. Bocker, J. S. Morris, J. S. Haskill, and J. M. Watson. 2002. IL-1β transcript stability in monocytes is linked to cytoskeletal reorganization and the availability of mRNA degradation factors. Immunol. Cell Biol. 80:328-339.
Sommer, S., Y. Cui, G. Brewer, and S. A. Fuqua. 2005. The c-Yes 3′-UTR contains adenine/uridine-rich elements that bind AUF1 and HuR involved in mRNA decay in breast cancer cells. J. Steroid Biochem. Mol. Biol. 97:219-229.
Stoecklin, G., M. Colombi, I. Raineri, S. Leuenberger, M. Mallaun, M. Schmidlin, B. Gross, M. Lu, T. Kitamura, and C. Moroni. 2002. Functional cloning of BRF1, a regulator of ARE-dependent mRNA turnover. EMBO J. 21:4709-4718.
Stoecklin, G., T. Mayo, and P. Anderson. 2006. ARE-mRNA degradation requires the 5′-3′ decay pathway. EMBO Rep. 7:72-77.
Stoecklin, G., M. Lu, B. Rattenbacher, and C. Moroni. 2003. A constitutive decay element promotes tumor necrosis factor alpha mRNA degradation via an AU-rich element-independent pathway. Mol. Cell. Biol. 23:3506-3515.
Suzuki, A., Y. Tsutomi, K. Akahane, T. Araki, and M. Miura. 1998. Resistance to Fas-mediated apoptosis: activation of caspase 3 is regulated by cell cycle regulator p21WAF1 and IAP gene family ILP. Oncogene 17:931-939.
Tyagi, S., and F. R. Kramer. 1996. Molecular beacons: probes that fluoresce upon hybridization. Nat. Biotechnol. 14:303-308.
Vasudevan, S., and J. A. Steitz. 2007. AU-rich-element-mediated upregulation of translation by FXR1 and Argonaute 2. Cell 128:1105-1118.
Vasudevan, S., Y. Tong, and J. A. Steitz. 2007. Switching from repression to activation: microRNAs can up-regulate translation. Science 318:1931-1934.
Wagner, B. J., C. T. DeMaria, Y. Sun, G. M. Wilson, and G. Brewer. 1998. Structure and genomic organization of the human AUF1 gene: alternative pre-mRNA splicing generates four protein isoforms. Genomics 48:195-202.
Wilson, G. M., J. Lu, K. Sutphen, Y. Suarez, S. Sinha, B. Brewer, E. Villanueva-Feliciano, R. M. Ysla, S. Charles, and G. Brewer. 2003. Phosphorylation of p40AUF1 regulates binding to A+U-rich mRNA-destabilizing elements and protein-induced changes in ribonucleoprotein structure. J. Biol. Chem. 278:33039-33048.
Wilson, G. M., Y. Sun, H. Lu, and G. Brewer. 1999. Assembly of AUF1 oligomers on U-rich RNA targets by sequential dimer association. J. Biol. Chem. 274:33374-33381.
Wilson, G. M., K. Sutphen, S. Bolikal, K. Y. Chuang, and G. Brewer. 2001. Thermodynamics and kinetics of Hsp70 association with A+U-rich mRNA-destabilizing sequences. J. Biol. Chem. 276:44450-44456.
Wilson, G. M., K. Sutphen, K. Chuang, and G. Brewer. 2001. Folding of A+U-rich RNA elements modulates AUF1 binding. Potential roles in regulation of mRNA turnover. J. Biol. Chem. 276:8695-8704.
Wilson, G. M., K. Sutphen, M. Moutafis, S. Sinha, and G. Brewer. 2001. Structural remodeling of an A+U-rich RNA element by cation or AUF1 binding. J. Biol. Chem. 276:38400-38409.
Wilson, G. M., J. Lu, K. Sutphen, Y. Sun, Y. Huynh, and G. Brewer. 2003. Regulation of A+U-rich Element-directed mRNA turnover involving reversible phosphorylation of AUF1. J. Biol. Chem. 278:33029-33038.
Wilusz, C. J., M. Wormington, and S. W. Peltz. 2001. The cap-to-tail guide to mRNA turnover. Nat. Rev. Mol. Cell Biol. 2:237-246.
Zacharias, D. A., J. D. Violin, A. C. Newton, and R. Y. Tsien. 2002. Partitioning of lipid-modified monomeric GFPs into membrane microdomains of live cells. Science 296:913-916.
Zhang, W., B. J. Wagner, K. Ehrenman, A. W. Schaefer, C. T. DeMaria, D. Crater, K. DeHaven, L. Long, and G. Brewer. 1993. Purification, characterization, and cDNA cloning of an AU-rich element RNA-binding protein, AUF1. Mol. Cell. Biol. 13:7652-7665.
Zheng, C., Z. Lin, Z. J. Zhao, Y. Yang, H. Niu, and X. Shen. 2006. MAPK-activated protein kinase-2 (MK2)-mediated formation and phosphorylation-regulated dissociation of the signal complex consisting of p38, MK2, Akt, and Hsp27. J. Biol. Chem. 281:37215-37226.
Molecular and Cellular Biology Aug 2008, 28 (17) 5223-5237; DOI: 10.1128/MCB.00431-08
Chaperone Hsp27, a Novel Subunit of AUF1 Protein Complexes, Functions in AU-Rich Element-Mediated mRNA Decay
Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression
Lucian Coroianu 1, Danilo Costarelli 2, Sorin G. Gal 1, and Gianluca Vinti 2
Department of Mathematics and Computer Science, University of Oradea, Oradea, Romania
Department of Mathematics and Computer Science, University of Perugia, Perugia, Italy
Received October 2019 Revised March 2020 Published May 2020
In a recent paper, for univariate max-product sampling operators based on general kernels with bounded generalized absolute moments, we have obtained several $ L^{p}_{\mu} $ convergence properties on bounded intervals or on the whole real axis. In this paper, firstly we obtain quantitative estimates with respect to a $ K $-functional, for the multivariate Kantorovich variant of these max-product sampling operators with the integrals written in terms of Borel probability measures. Applications of these approximation results to learning theory are obtained.
Keywords: Multivariate max-product sampling Kantorovich operators, Borel probability measures, multivariate generalized kernels, $ L^{p}_{\mu} $-norm, $ 1\le p <\infty $, $ K $-functional, learning theory, regularizing function, sample error, regularized error.
Mathematics Subject Classification: Primary: 41A35, 41A25, 41A63; Secondary: 62J02, 68T05.
Citation: Lucian Coroianu, Danilo Costarelli, Sorin G. Gal, Gianluca Vinti. Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression. Communications on Pure & Applied Analysis, 2020, 19 (8) : 4213-4225. doi: 10.3934/cpaa.2020189
SN Applied Sciences, January 2020, 2:111
Mathematics of uncertainty: an exploration on semi-elliptic fuzzy variable and its properties
Palash Dutta
Part of the following topical collections:
Engineering: Industrial Informatics: Data Analytics in Remote Sensing and Cyber-Physical Systems
Uncertainty is encountered in industrial as well as medical systems. Uncertainty theory, which comprises the possibility measure, necessity measure and credibility measure, plays a significant role in modelling such uncertainty. In connection with uncertainty modelling, a special and intricate fuzzy variable, viz. the semi-elliptic fuzzy variable (SEFV), is studied here. The possibility, necessity and credibility measures of the SEFV are derived first. Then, further properties such as the expected value, variance and rational upper bound of the variance are presented, and a ranking of SEFVs based on expected value and variance is proposed. Afterwards, reliability analysis and medical diagnosis case studies are carried out, which exhibit the efficiency and novelty of the derived SEFV. It is observed that the present approach has the capability to resolve problems in complex uncertain situations.
Uncertainty Fuzzy variable Semi-elliptic fuzzy variable Possibility measure Necessity measure Credibility distribution
The online version of this article ( https://doi.org/10.1007/s42452-019-1871-8) contains supplementary material, which is available to authorized users.
Uncertainty, which arises from lack of precision, deficiency in data, small sample sizes, foreseeable human errors, etc., is an unavoidable component of real-world problems. To deal with this type of uncertainty, fuzzy set theory (FST) [1] is explored. In power system planning, reliability investigation is an extremely significant feature: electricity production and consumption are essential operating characteristics of a power system that take place simultaneously, and consequently the reliability requirements of power systems are very high. Generally, probabilistic approaches to reliability investigation are explored. However, because of the uncertainty associated with such systems, classical probabilistic approaches appear inappropriate, and fuzzy reliability models are considered instead [2]. Some recent applications in reliability investigation can be found in [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. On the other hand, in medical diagnosis a disease is usually characterized by several directly observable symptoms, which persuade the patient to visit a consultant or practitioner; a set of clinical inspections is then undertaken to detect the presence of a disease. In the sphere of medical diagnosis, many variables influence the decision-making process and thereby differentiate the judgments of consultants or practitioners. Furthermore, the medical diagnosis problem mainly involves dealing with uncertainties, and so all available information needs to be integrated into the investigation. Therefore, fuzzy sets are explored to represent uncertainty and to perform medical diagnosis as well [16]. Some recent developments in medical diagnosis can be found in [17, 18, 19, 20, 21, 22, 23, 24]. Afterwards, Zadeh [25] himself developed possibility theory, which was considered better suited to treating uncertainty, and it was further studied by researchers such as Dubois and Prade [26], Klir [27] and Yager [28]. Furthermore, Dubois and Prade [29] studied the mean value of fuzzy numbers, Ban [30] discussed fuzzy-valued measures and conditional expectations of fuzzy numbers, Heilpern [31] studied the expected value of fuzzy numbers, Carlsson and Fullér [32] developed the possibilistic mean and variance of fuzzy numbers, and Chen and Tan [33] further developed the mean value and variance of products of fuzzy numbers.
Nevertheless, in the absence of a self-dual measure, the earlier approaches may exaggerate reality. Keeping this in mind, Liu and Liu [34] initiated the concept termed credibility theory. Li and Liu [35] presented a sufficient and necessary condition for credibility measures. Liu and Liu [36] then systematically studied and developed credibility theory. Further extended studies on credibility theory can be found in Liu [37], Zhou et al. [38], Yi et al. [39] and Garai et al. [40].
Although various types of fuzzy variables are encountered in the literature, the exceptional and intricate semi-elliptic fuzzy variable (SEFV) has not yet been studied in terms of credibility theory. This paper presents an approach to derive the possibility, necessity and credibility measures of the SEFV. Furthermore, the expected value, variance, rational upper bound of the variance, etc. of the SEFV are presented. Then, a ranking of SEFVs through expected value and variance is proposed. Finally, the novelty and applicability of the SEFV are exhibited through reliability analysis and medical diagnosis case studies.
2 Preliminaries
Uncertainty is an important as well as unavoidable ingredient of the decision-making process. Depending on the nature and accessibility of data and information, uncertainty is generally modelled using fuzzy sets, possibility theory and credibility theory.
Let \(\Theta\) be a non-empty set, P the power set of \(\Theta\), and Pos a possibility measure. Then, the triplet \((\Theta , P, Pos)\) is known as a possibility space. A fuzzy variable is a mapping from the possibility space \((\Theta , P, Pos)\) to the set of real numbers [37, 39].
Let \(\zeta\) be a fuzzy variable defined on the credibility space \((\Theta , P, Cr)\). Then its membership function (MF), derived from the credibility measure, is given by [37, 39]
$$\begin{aligned} \mu _\zeta (x)=\min \left( 2Cr\{\zeta =x\},1\right) ,\quad x\in {\mathbb {R}}. \end{aligned}$$
The \(\alpha\)-cut of a fuzzy variable A is defined as
$$\begin{aligned} ^{\alpha }A=\left\{ x \in X:\mu _A(x)\ge \alpha \right\} . \end{aligned}$$
Let A be a fuzzy variable, \(\mu\) be the MF of A, and r be any real number. Then, the possibility measure of A is defined as [25]
$$\begin{aligned} Pos\left\{ A\le r\right\} =\underset{x\le r}{sup}\,\mu _{A}(x) \end{aligned}$$
and the necessity measure of A is defined as
$$\begin{aligned} Nec\left\{ A\le r\right\} =1-\underset{x>r}{\sup }\,\mu _{A}(x) \end{aligned}$$
A credibility measure (Cr) is a non-negative set function satisfying the following [35]:
\(Cr(\Theta )=1\)
\(Cr(A)\le Cr(B)\) for whenever \(A\subset B\)
\(Cr(A) + Cr(A^c)=1\) for any A
\(Cr\left\{ \cup _i A_i\right\} =\underset{i}{\sup }\,Cr\{A_i\}\) for any events \(\{A_i\}\) with \(\underset{i}{\sup }\,Cr\{A_i\}<0.5\)
If the fuzzy variable A is given by its MF \(\mu\), then
$$\begin{aligned} Cr(A\le r)=\dfrac{1}{2}\left\{ \underset{x\le r}{\sup }\,\mu _A (x) +1 - \underset{x> r}{\sup }\,\mu _A (x) \right\} ,\quad r\in {\mathbb {R}} \end{aligned}$$
The credibility distribution \(\Phi _A:{\mathbb {R}}\rightarrow [0,1]\) of a fuzzy variable A is defined as [36]
$$\begin{aligned} \Phi _A(x)=Cr \{\theta \in \Theta :A(\theta )\le x\}. \end{aligned}$$
That is, the credibility that the fuzzy variable A takes a value less than or equal to x.
The credibility density function \(\phi _A:{\mathbb {R}} \rightarrow [0,\infty )\) of the credibility distribution of a fuzzy variable A is a function such that [37]
$$\begin{aligned} \Phi _A (x)= \int _{-\infty }^x \phi _A(y)\,dy,\quad \forall x\in {\mathbb {R}} \end{aligned}$$
[38] A credibility distribution \(\Phi _A\) of a fuzzy variable A is called regular if it is a continuous and strictly increasing function of x wherever \(0<\Phi _A(x)<1\), and if \(\lim _{x\rightarrow -\infty }\Phi _A(x)=0\) and \(\lim _{x\rightarrow \infty }\Phi _A(x)=1.\)
[38] Let A be a fuzzy variable with a regular credibility distribution \(\Phi _A\), then the inverse function \(\Phi _A^{-1}\) is called the inverse credibility distribution of A.
3 Construction of semi-elliptic fuzzy variable
Consider the general equation of a horizontal ellipse centred at (a, b):
$$\begin{aligned} \dfrac{(x-a)^2}{h^2}+\dfrac{(y-b)^2}{k^2}=1 \end{aligned}$$
To construct a normal semi-elliptic fuzzy variable (SEFV), set \(b=0\) and \(k=1\), i.e.,
$$\begin{aligned} \dfrac{(x-a)^2}{h^2}+y^2=1 \end{aligned}$$
Thus, the required membership function of the SEFV \(A=S_E(a,h)\) is
$$\begin{aligned} \mu _A(x)=\sqrt{1-\dfrac{(x-a)^2}{h^2}}, a-h\le x\le a+h \end{aligned}$$
where a indicates the mean/core of the fuzzy variable, while h determines the width of the fuzzy variable A.
The \(\alpha\)-cut of the SEFV \(A=S_E(a,h)\) is
$$\begin{aligned} ^\alpha A = \left[ a-h\sqrt{1-\alpha ^2},a+h\sqrt{1-\alpha ^2}\right] \end{aligned}$$
Suppose \(A=S_E(10,7)\) is an SEFV representing an uncertain quantity. The MF of the SEFV A is
$$\begin{aligned} \mu _A(x)=\sqrt{1-\dfrac{(x-10)^2}{49}}, 3\le x\le 17 \end{aligned}$$
The graphical representation of A is depicted in Fig. 1.
The SEFV \(A=S_E(10,7)\)
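To make the construction concrete, the following minimal Python sketch (assuming NumPy is available; the function names are ours, not from the paper) evaluates the membership function and \(\alpha\)-cuts of an SEFV:

```python
import numpy as np

def sefv_membership(x, a, h):
    """mu_A(x) = sqrt(1 - (x - a)^2 / h^2) on [a - h, a + h], 0 outside."""
    x = np.asarray(x, dtype=float)
    mu = np.zeros_like(x)
    inside = np.abs(x - a) <= h
    mu[inside] = np.sqrt(1.0 - ((x[inside] - a) / h) ** 2)
    return mu

def sefv_alpha_cut(alpha, a, h):
    """alpha-cut [a - h*sqrt(1 - alpha^2), a + h*sqrt(1 - alpha^2)]."""
    half_width = h * np.sqrt(1.0 - alpha ** 2)
    return a - half_width, a + half_width

# Example 3.1: A = S_E(10, 7)
print(sefv_membership([3.0, 10.0, 17.0], a=10, h=7))  # [0. 1. 0.]
print(sefv_alpha_cut(0.5, a=10, h=7))                 # (~3.938, ~16.062)
```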
4 Possibility, necessity and credibility measures of SEFV
In this section, the possibility, necessity and credibility measures of the SEFV are derived.
4.1 Possibility measures of SEFV
Suppose \(A=S_E(a,h)\) is a SEFV.
Then, the possibility measures of (\(A\le x\)) and (\(A\ge x\)) are respectively
$$\begin{aligned} Pos(A \le x)= & {} \left\{ \begin{array}{ll} \,\,\,1, &{}\,\,{\text{ if }} \,\,x \ge a, \\ \sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a-h\le x<a, \\ \,\,\,0, &{}\,\,{\text{ if }} \,\, x<a-h, \\ \end{array} \right. \\ Pos(A \ge x)= & {} \left\{ \begin{array}{ll} \,\,\,1, &{}\,\,{\text{ if }} \,\, x \le a, \\ \sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a<x\le a+h, \\ \,\,\,0, &{}\,\,{\text{ if }} \,\, x>a+h, \\ \end{array} \right. \end{aligned}$$
The possibility measures of the SEFV \(A=S_E(10,7)\) for (\(A\le x\)) and (\(A\ge x\)) are depicted in Figs. 2 and 3 respectively.
The possibility measure of SEFV \(A=S_E(10,7)\) for (\(A\le x\))
The possibility measure of SEFV \(A=S_E(10,7)\) for (\(A\ge x\))
4.2 Necessity measures of SEFV
Then, the necessity measures of \(A\le x\) and \(A\ge x\) are respectively
$$\begin{aligned} Nec(A \le x)= & {} \left\{ \begin{array}{ll} \,\,\,0, &{}\,\,{\text{ if }} \,\,x \le a, \\ 1-\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a<x\le a+h, \\ \,\,\,1, &{}\,\,{\text{ if }} \,\, x>a+h, \\ \end{array} \right. \\ Nec(A \ge x)= & {} \left\{ \begin{array}{ll} \,\,\,1, &{}\,\,{\text{ if }} \,\, x<a-h, \\ 1-\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a-h\le x<a, \\ \,\,\,0, &{}\,\,{\text{ if }} \,\, x \ge a, \\ \end{array} \right. \end{aligned}$$
The necessity measures of the SEFV \(A=S_E(10,7)\) for (\(A\le x\)) and (\(A\ge x\)) are depicted in Figs. 4 and 5 respectively.
The necessity measure of SEFV \(A=S_E(10,7)\) for (\(A\le x)\)
The necessity measure of SEFV \(A=S_E(10,7)\) for (\(A\ge x\))
4.3 Credibility measures of SEFV
Then, the credibility measures of \(A\le x\) and \(A\ge x\) are respectively
$$\begin{aligned} Cr(A \le x)= & {} \left\{ \begin{array}{ll} \,\,\,\dfrac{1}{2}\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\,a-h\le x \le a, \\ 1-\dfrac{1}{2}\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a\le x \le a+h, \\ \end{array} \right. \\ Cr(A \ge x)= & {} \left\{ \begin{array}{ll} 1-\dfrac{1}{2}\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a-h\le x \le a, \\ \,\,\,\dfrac{1}{2}\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a\le x \le a+h, \\ \end{array} \right. \end{aligned}$$
The credibility measures of the SEFV \(A=S_E(10,7)\) for (\(A\le x\)) and (\(A\ge x\)) are depicted in Figs. 6 and 7 respectively.
The credibility measure of SEFV \(A=S_E(10,7)\) for (\(A\le x\))
The credibility measure of SEFV \(A=S_E(10,7)\) for (\(A\ge x\))
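The three measures can be cross-checked numerically via the identity \(Cr=\tfrac{1}{2}(Pos+Nec)\). The following sketch (same assumptions as before; not part of the original derivation) evaluates \(Cr(A\le x)\) this way:

```python
import math

def mu(x, a, h):
    """Membership of S_E(a, h); zero outside the support [a - h, a + h]."""
    return math.sqrt(1 - ((x - a) / h) ** 2) if abs(x - a) <= h else 0.0

def pos_le(x, a, h):   # Pos(A <= x)
    return 1.0 if x >= a else mu(x, a, h)

def nec_le(x, a, h):   # Nec(A <= x)
    if x <= a:
        return 0.0
    return 1.0 if x > a + h else 1.0 - mu(x, a, h)

def cr_le(x, a, h):    # Cr(A <= x) = (Pos + Nec) / 2
    return 0.5 * (pos_le(x, a, h) + nec_le(x, a, h))

# For A = S_E(10, 7): Cr rises from 0 at x = 3 to 1 at x = 17, with Cr(10) = 0.5
for x in (3, 6, 10, 14, 17):
    print(x, round(cr_le(x, 10, 7), 4))   # 0.0, 0.4104, 0.5, 0.5896, 1.0
```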
4.4 Credibility distribution of SEFV
Recall that the credibility distribution \(\Phi _A:{\mathbb {R}}\rightarrow [0,1]\) of a fuzzy variable A is defined as \(\Phi _A(x)=Cr \{\theta \in \Theta :A(\theta )\le x\}\).
Hence, the credibility distribution of the SEFV \(A=S_E(a,h)\) is
$$\begin{aligned} \Phi _A(x)=\left\{ \begin{array}{ll} \,\,\,\dfrac{1}{2}\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\,a-h\le x \le a, \\ 1-\dfrac{1}{2}\sqrt{1-\dfrac{(x-a)^2}{h^2}}, &{}\,\,{\text{ if }} \,\, a\le x \le a+h, \\ \end{array} \right. \end{aligned}$$
The credibility distribution function of the SEFV \(A=S_E(10,7)\) is
$$\begin{aligned} \Phi _A(x)=\left\{ \begin{array}{ll} \,\,\,\dfrac{1}{2}\sqrt{1-\dfrac{(x-10)^2}{49}}, &{}\,\,{\text{ if }} \,\,3\le x \le 10, \\ 1-\dfrac{1}{2}\sqrt{1-\dfrac{(x-10)^2}{49}}, &{}\,\,{\text{ if }} \,\, 10\le x \le 17, \\ \end{array} \right. \end{aligned}$$
4.5 Inverse credibility distribution (ICD) of SEFV
The inverse credibility distribution of the SEFV \(A=S_E(a,h)\) is
$$\begin{aligned} \Phi ^{-1}(\alpha )=\left\{ \begin{array}{ll} a-h \sqrt{1-4\alpha ^2}, &{}\,\,{\text{ if }} \,\,0\le \alpha \le 0.5, \\ a+h \sqrt{1-4(1-\alpha )^2}, &{}\,\,{\text{ if }} \,\, 0.5\le \alpha \le 1. \\ \end{array} \right. \end{aligned}$$
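A small sketch of the credibility distribution and its inverse, with a round-trip check \(\Phi (\Phi ^{-1}(\alpha ))=\alpha\) (function names are ours):

```python
import math

def Phi(x, a, h):
    """Credibility distribution of S_E(a, h)."""
    if x <= a - h:
        return 0.0
    if x >= a + h:
        return 1.0
    s = 0.5 * math.sqrt(1 - ((x - a) / h) ** 2)
    return s if x <= a else 1.0 - s

def Phi_inv(alpha, a, h):
    """Inverse credibility distribution of S_E(a, h)."""
    if alpha <= 0.5:
        return a - h * math.sqrt(1 - 4 * alpha ** 2)
    return a + h * math.sqrt(1 - 4 * (1 - alpha) ** 2)

# Round trip for A = S_E(10, 7)
for alpha in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert abs(Phi(Phi_inv(alpha, 10, 7), 10, 7) - alpha) < 1e-9
```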
5 Expected value
Using the idea of credibility distribution, Liu and Liu [34] provided the expected value of fuzzy variables. Zhou et al. [38] presented expected value of fuzzy variables via ICD.
5.1 Expected value via credibility distribution
If A is a fuzzy variable then the expected value of A in terms of credibility distribution is defined as [34]
$$\begin{aligned}&E(A)=\int _0^{\infty }Cr\{A\ge r\}dr-\int _{-\infty }^0 Cr\{A\le r\}dr\\ \end{aligned}$$
Then, for \(a-h\ge 0\) (so that the second integral vanishes), the expected value of the SEFV \(A=S_E(a,h)\) is
$$\begin{aligned} E(A)&=\int _{0}^{a-h}{dr}+\int _{a-h}^{a}\left\{ 1-\dfrac{1}{2}\sqrt{1-\dfrac{(r-a)^2}{h^2}}\right\} dr\\&\quad + \, \int _{a}^{a+h}\dfrac{1}{2}\sqrt{1-\dfrac{(r-a)^2}{h^2}}\,dr\\&=(a-h)+h-\int _{a-h}^{a}\dfrac{1}{2}\sqrt{1-\dfrac{(r-a)^2}{h^2}}\,dr\\&\quad + \, \int _{a}^{a+h} \dfrac{1}{2}\sqrt{1-\dfrac{(r-a)^2}{h^2}}\,dr\\&=a-\dfrac{\pi h}{8} +\dfrac{\pi h}{8} \\&=a \end{aligned}$$
5.2 Expected value via ICD
If A is a SEFV then the expected value of A in terms of ICD is defined as [38]
$$\begin{aligned} E[A]=\int _0^1\Phi ^{-1}(\alpha )d\alpha \end{aligned}$$
$$\begin{aligned} E(A)&=\int _0^{0.5}\left\{ a-h\sqrt{1-4\alpha ^2}\right\} d\alpha \\&\quad +\, \int _{0.5}^{1}\left\{ a+h\sqrt{1-4(1-\alpha )^2}\right\} d\alpha \\&=a -\frac{h}{2}\int _{0}^{1}\sqrt{1-t^2}dt- \frac{h}{2}\int _{1}^{0}\sqrt{1-t^2}dt\\&=a \end{aligned}$$
Thus, in both the approaches, it is obtained that the expected value of the SEFV A is \(e=a\).
If \(A=S_E(a,h_1,h_2)\) is an asymmetric SEFV, where \(h_1\) and \(h_2\) are the left and right spreads of A, then the expected value of A is
$$\begin{aligned} E[A]=a+\frac{\pi }{8}(h_2-h_1). \end{aligned}$$
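The expected value can be verified by numerical quadrature of the inverse credibility distribution, covering the asymmetric case of the remark above as well (a sketch assuming SciPy is available):

```python
import math
from scipy.integrate import quad

def Phi_inv(alpha, a, h1, h2):
    """ICD of a (possibly asymmetric) SEFV with left spread h1, right spread h2."""
    if alpha <= 0.5:
        return a - h1 * math.sqrt(1 - 4 * alpha ** 2)
    return a + h2 * math.sqrt(1 - 4 * (1 - alpha) ** 2)

def expected_value(a, h1, h2):
    # E[A] = integral of Phi_inv over [0, 1]; split at the kink alpha = 0.5
    return quad(Phi_inv, 0, 1, args=(a, h1, h2), points=[0.5])[0]

print(expected_value(10, 7, 7))         # ~10.0  (symmetric: E[A] = a)
a, h1, h2 = 10, 3, 7
print(expected_value(a, h1, h2))        # ~11.5708
print(a + math.pi / 8 * (h2 - h1))      # 11.5708, i.e. a + (pi/8)(h2 - h1)
```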
6 Variance
In this section, variance of SEFV is calculated in terms of regular credibility distribution.
If A is a fuzzy variable with expected value e, then the variance of A is defined as [36]
$$\begin{aligned} V[A]=E[(A-e)^2] \end{aligned}$$
It should be noted that if the expected value e of the fuzzy variable is finite, then the variance satisfies
$$\begin{aligned} V[A]=E[(A-e)^2]=\int _0^{+\infty }Cr\left\{ (A-e)^2\ge r\right\} dr \end{aligned}$$
6.1 Variance of a SEFV
To evaluate the variance V[A] of an SEFV A, the MF of \((A-e)^2\) must be calculated first; the \(\alpha\)-cut technique is explored here for this purpose. It has already been obtained that the expected value of the SEFV is \(e=a\).
The \(\alpha\)-cut of the SEFV \(A=S_E(a,h)\) is \(^\alpha A = [a-h\sqrt{1-\alpha ^2},a+h\sqrt{1-\alpha ^2}]\).
The procedure is presented below.
Since \(a\in {}^{\alpha }A\), the image of \(^{\alpha }A\) under \(x\mapsto (x-a)^2\) is
$$\begin{aligned}&^{\alpha }\left( (A-a)^2\right) \\&\quad =\left[ 0,\max \left( (-h\sqrt{1-\alpha ^2})^2,(h\sqrt{1-\alpha ^2})^2\right) \right] \\&\quad =\left[ 0,h^2(1-\alpha ^2)\right] \end{aligned}$$
Now, taking the upper endpoint \(x=h^2(1-\alpha ^2)\) gives \(\alpha =\sqrt{1-\dfrac{x}{h^2}}\), \(0 \le x\le h^2\).
Thus, the MF of \((A-e)^2\) is
$$\begin{aligned} \mu _{(A-e)^2} (x)=\sqrt{1-\dfrac{x}{h^2}},0 \le x\le h^2. \end{aligned}$$
Since \(Cr\{(A-e)^2<r\}= \dfrac{1}{2}\left\{ \underset{x<r}{\sup }\,\mu _{(A-e)^2}(x)+1 -\underset{x\ge r}{\sup }\,\mu _{(A-e)^2}(x)\right\} \),
$$\begin{aligned} \therefore\, Cr\{(A-e)^2<r\}= \dfrac{1}{2}\left\{ 1+1-\sqrt{1-\dfrac{r}{h^2}}\right\} =1-\dfrac{1}{2}\sqrt{1-\dfrac{r}{h^2}},\quad 0 \le r\le h^2. \end{aligned}$$
Again, \(Cr\{(A-e)^2\ge r\}=1-Cr\{(A-e)^2<r\}\). Hence, \(Cr\{(A-e)^2\ge r\}={\left\{ \begin{array}{ll}\dfrac{1}{2}\sqrt{1-\dfrac{r}{h^2}},&{}\quad 0 \le r\le h^2,\\ 0, &{}\quad r>h^2. \end{array}\right. }\) Then, the variance of the SEFV \(A=S_E(a,h)\) is
$$\begin{aligned} V[A]&=\int _0^\infty {Cr\{(A-e)^2\ge r\}}\,dr\\&=\int _0^{h^2}\dfrac{1}{2}\sqrt{1-\dfrac{r}{h^2}}\,dr\\&=\dfrac{h^2}{3} \end{aligned}$$
For example, the variance of the SEFV \(A=S_E(10,7)\) is \(\dfrac{7^2}{3}=16.33\).
If the width h of an SEFV is unity, then the SEFV reduces to a semi-circular fuzzy variable (SCFV). It can then be derived that for every SCFV the variance is \(1/3\approx 0.33\).
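A numerical check of \(V[A]=h^{2}/3\), integrating \(Cr\{(A-e)^2\ge r\}\) as above (a sketch assuming SciPy):

```python
import math
from scipy.integrate import quad

def cr_sq_dev_ge(r, h):
    """Cr{(A - e)^2 >= r} = 0.5*sqrt(1 - r/h^2) for 0 <= r <= h^2, else 0."""
    return 0.5 * math.sqrt(1 - r / h**2) if r <= h**2 else 0.0

h = 7.0
V, _ = quad(cr_sq_dev_ge, 0, h**2, args=(h,))
print(V, h**2 / 3)   # both ~16.333, the variance of S_E(10, 7)
```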
7 Rational upper bound of the variance
Yi et al. [39] derived the concept of the rational upper bound of the variance (RUBV), along with some definitions and results in terms of the credibility distribution and the ICD.
Let A be a fuzzy variable with credibility distribution \(\Phi\) and finite expected value e. The RUBV is defined as [39]
$$\begin{aligned} \overline{V}[A]=\int _0^\infty \left( 1-\Phi (e+\sqrt{x})+\Phi (e-\sqrt{x})\right) dx \end{aligned}$$
Let A be a fuzzy variable with credibility distribution \(\Phi\). If the expected value is e, then RUBV is evaluated as [39]
$$\begin{aligned} \overline{V}[A]=\int _0^1(\Phi ^{-1}(\alpha )-e)^2d\alpha \end{aligned}$$
The RUBV of SEFV depends on the width h and it is \(\dfrac{2h^2}{3}.\)
Consider the SEFV \(A=S_E(a,h)\).
$$\begin{aligned} \overline{V}[A]&=\int _0^{0.5}h^2(1-4\alpha ^2)\,d\alpha +\int _{0.5}^1 h^2(1-4(1-\alpha )^2)\,d\alpha \\&= h^2 - 4h^2\int _0^{0.5}\alpha ^2\,d\alpha -4h^2\int _{0.5}^1(1-\alpha )^2\,d\alpha \\&= h^2-4h^2\cdot \dfrac{1}{24}-4h^2\cdot \dfrac{1}{24}\\&= h^2-\dfrac{h^2}{3}\\&=\dfrac{2h^2}{3} \end{aligned}$$
Let \(A=S_E(a,h)\) be SEFV. Then, \(\overline{V}[A]=2V[A]\).
The SCFV also satisfies the above corollary, and it is observed that for every SCFV the RUBV is \(2/3\approx 0.67\).
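Likewise, \(\overline{V}[A]=2h^{2}/3\) can be checked by quadrature of \((\Phi ^{-1}(\alpha )-e)^2\) (a sketch under the same assumptions as before):

```python
import math
from scipy.integrate import quad

def Phi_inv(alpha, a, h):
    """ICD of the symmetric SEFV S_E(a, h)."""
    if alpha <= 0.5:
        return a - h * math.sqrt(1 - 4 * alpha ** 2)
    return a + h * math.sqrt(1 - 4 * (1 - alpha) ** 2)

a, h = 10.0, 7.0
rubv, _ = quad(lambda t: (Phi_inv(t, a, h) - a) ** 2, 0, 1, points=[0.5])
print(rubv, 2 * h**2 / 3)   # both ~32.667, i.e. twice the variance h^2/3
```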
Suppose \(A=S_E(a_1,h_1)\) and \(B=S_E(a_2,h_2)\) are two SEFVs.
Then, \(\overline{V}[A+B]\le 2(\overline{V}[A]+\overline{V}[B])\).
For the two SEFVs A and B, the ICD of \(A+B\) is
$$\begin{aligned}&\Phi ^{-1}(\alpha )\\&\quad =\Phi _A^{-1}(\alpha )+\Phi _B^{-1}(\alpha )\\&\quad ={\left\{ \begin{array}{ll} (a_1+a_2)-(h_1+h_2)\sqrt{1-4\alpha ^2},&{}\quad \alpha \le 0.5,\\ (a_1+a_2)+(h_1+h_2)\sqrt{1-4(1-\alpha )^2},&{}\quad \alpha >0.5.\\ \end{array}\right. } \end{aligned}$$
$$\begin{aligned}&\overline{V}[A+B]\\&\quad =\int _0^{1}\left( \Phi ^{-1}(\alpha )-(a_1+a_2)\right) ^2d\alpha \\&\quad =\int _0^{0.5}(h_1+h_2)^2(1-4\alpha ^2)\,d\alpha +\int _{0.5}^1(h_1+h_2)^2(1-4(1-\alpha )^2)\,d\alpha \\&\quad =\dfrac{2(h_1+h_2)^2}{3}. \end{aligned}$$
Again, \(\overline{V}[A]=\dfrac{2(h_1)^2}{3}\) and \(\overline{V}[B]=\dfrac{2(h_2)^2}{3}\).
Since \((h_1+h_2)^2\le 2(h_1^2+h_2^2)\), it follows that \(\dfrac{2(h_1+h_2)^2}{3}\le 2\left( \dfrac{2h_1^2}{3}+ \dfrac{2h_2^2}{3}\right) .\)
Consequently, \(\overline{V}[A+B]\le 2(\overline{V}[A]+\overline{V}[B])\).
Suppose \(A=S_E(a_1,h_1)\) and \(B=S_E(a_2,h_2)\) are two SEFVs.
Then, \(\sqrt{\overline{V}[A+B]}= \sqrt{\overline{V}[A]}+\sqrt{\overline{V}[B]}\).
Since \(\sqrt{\overline{V}[A+B]}=\sqrt{\dfrac{2(h_1+h_2)^2}{3}}=(h_1+h_2)\sqrt{\dfrac{2}{3}}\) and \(\sqrt{\overline{V}[A]}=h_1\sqrt{\dfrac{2}{3}}\), \(\sqrt{\overline{V}[B]}=h_2\sqrt{\dfrac{2}{3}}\),
\(\sqrt{\overline{V}[A+B]}= \sqrt{\overline{V}[A]}+\sqrt{\overline{V}[B]}\).
8 Arithmetic on SEFVs
In this section, basic operations on SEFVs are reviewed and adopted from [41].
Suppose \(A=S_E(a,h)\) and \(B=S_E(b,k)\) are two SEFVs defined on a universe of discourse X.
8.1 Addition
The membership function of \(A+B\) is
$$\begin{aligned} \mu _{(A+B)}(x)&=\sqrt{1-{\Big \{\frac{x-(a+b)}{h+k}\Big \}}^{2}},\\&\quad x\in [(a-h)+(b-k), \,(a+h)+(b+k)] \end{aligned}$$
8.2 Subtraction
The membership function of \(A-B\) is
$$\begin{aligned} \mu _{(A-B)}(x)&=\sqrt{1-{\Big \{\dfrac{x-(a-b)}{h+k}\Big \}}^{2}},\\&\quad x\in [(a-h)-(b+k),\,\, (a+h)-(b-k)] \end{aligned}$$
8.3 Multiplication
The membership function of AB is
$$\begin{aligned} \mu _{AB}(x)&=\sqrt{1-\Big \{\frac{(ak+bh)-\sqrt{(ak+bh)^{2}-4hk(ab-x)}}{2hk}\Big \}^{2}},\\&\quad (a-h)(b-k)\le x \le (a+h)(b+k) \end{aligned}$$
8.4 Division
The membership function of A / B is
$$\begin{aligned} \mu _{\frac{A}{B}}(x)&=\sqrt{1-\Big \{\frac{a-xb}{h+xk}\Big \}^{2}},\quad \frac{a-h}{b+k}\le x \le \frac{a+h}{b-k} \end{aligned}$$
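For addition and subtraction the result is again an SEFV, with centre \(a\pm b\) and width \(h+k\), which the following minimal sketch encodes (class and method names are ours; multiplication and division yield the non-elliptic membership functions given above and are not reduced to SEFV form here):

```python
from dataclasses import dataclass

@dataclass
class SEFV:
    a: float  # center / core
    h: float  # width (spread)

    def __add__(self, other):    # S_E(a, h) + S_E(b, k) = S_E(a + b, h + k)
        return SEFV(self.a + other.a, self.h + other.h)

    def __sub__(self, other):    # S_E(a, h) - S_E(b, k) = S_E(a - b, h + k)
        return SEFV(self.a - other.a, self.h + other.h)

    def scale(self, c: float):   # product with a positive crisp constant
        return SEFV(c * self.a, c * self.h)

print(SEFV(10, 7) + SEFV(5, 2))   # SEFV(a=15, h=9)
print(SEFV(10, 7) - SEFV(5, 2))   # SEFV(a=5, h=9)
```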
9 Rank of SEFVs
In this section, the ranking of two SEFVs is defined based on the expected value and variance of the SEFVs. Suppose \(A=S_E(a_1,h_1)\) and \(B=S_E(a_2,h_2)\) are two SEFVs.
Then the rank of A and B is defined as follows (see the sketch after these rules):
\(A\le B\) if \(E[A]\le E[B]\).
If \(E[A]= E[B]\) then
\(A\le B\) if \(V[A]\ge V[B]\)
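These rules translate directly into a comparator, using the closed forms \(E[S_E(a,h)]=a\) and \(V[S_E(a,h)]=h^2/3\) derived earlier (a sketch; function names are ours):

```python
def expected(a, h):
    return a            # E[S_E(a, h)] = a

def variance(a, h):
    return h**2 / 3.0   # V[S_E(a, h)] = h^2 / 3

def rank_leq(A, B):
    """True if A <= B under the expected-value / variance ranking."""
    (a1, h1), (a2, h2) = A, B
    if expected(a1, h1) != expected(a2, h2):
        return expected(a1, h1) <= expected(a2, h2)
    # equal means: the wider (larger-variance) variable ranks lower
    return variance(a1, h1) >= variance(a2, h2)

# Example 9.1 below: A = S_E(0, 4), B = S_E(0, 2) -> equal means, V[A] > V[B]
print(rank_leq((0, 4), (0, 2)))   # True, i.e. A <= B
```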
Example 9.1
Suppose \(A = [-4, 0, 4]\) and \(B = [-2, 0, 2]\) are two fuzzy variables adopted from [42]. It is observed that the approaches [42, 43, 44, 45, 46, 47] fail to compare these fuzzy variables. Recasting them as the SEFVs \(A = S_E(0,4)\) and \(B = S_E(0,2)\) and applying the present approach, it is obtained that \(A\le B\), which is consistent with human intuition. Here, \(E[A]= E[B]= 0\), but \(V[A]= 5.33 \ge V[B]= 1.33\), and consequently it can be concluded that \(A\le B\). A detailed comparison is presented in Table 1.
Table 1 Ranking of fuzzy variables for Example 9.1

Abbasbandy et al. [42]: \(A\sim B\)
Wang [43]: \(A\sim B\)
Asady [44]: \(A\sim B\)
Abbasbandy and Hajjari [47]: \(A\succ B\)
Present approach: \(E[A]=E[B]=0\), \(V[A]=5.33\), \(V[B]=1.33\), hence \(A\le B\)
Example 9.2
Consider the fuzzy variables \(A=[0.2,0.5,0.8]\) and \(B=[0.4,0.5,0.6]\). The approaches [44, 47, 48, 49, 51] are not applicable to distinguish A and B, while [52, 53, 54] and [50] produce illogical output. Here also the general human intuition is that \(A\le B\). Recasting these fuzzy variables as the SEFVs \(A=S_E(0.5,0.3)\) and \(B=S_E(0.5,0.1)\) and applying the present approach, we obtain \(E[A]= E[B]= 0.5\), but \(V[A]= 0.03 \ge V[B]= 0.0033\), which gives \(A\le B\). A detailed comparison is presented in Table 2.
Table 2 Ranking of fuzzy variables for Example 9.2

Yager [48]: not applicable
Abbasbandy and Hajjari [47]: not applicable
Chen and Sanguansat [51]: not applicable
Chen et al. [49]: not applicable
Vincent et al. [52, 53] (\(\alpha =1\)): illogical result
Vincent and Dat [54]: illogical result
Rezvani [50]: illogical result
Present approach: \(E[A]=E[B]=0.5\), \(V[A]=0.03\), \(V[B]=0.0033\), hence \(A\le B\)
From the above analysis it can be opined that the present ranking approach has the capability to overcome the drawbacks of the existing approaches.
10 Application of SEFVs
In this section, applications of SEFVs to structural reliability analysis and medical diagnosis are presented. For structural reliability, the arithmetic of SEFVs is used, while medical diagnosis employs both the arithmetic and the ranking of SEFVs.
10.1 Application in structural reliability
In some circumstances the input quantities of a structural analysis are naturally described by SEFVs. In such circumstances structural failure can be evaluated using the arithmetic of SEFVs. Consider the following problem of structural failure adopted from Dutta [3].
Example 10.1
Suppose a beam has height \(h=9\) mm, length \(L=1250\) mm, and force density \(f=78.5\times 10^{-5}\) kN/mm\(^3\). The load w, the breadth b of the beam, and the ultimate bending moment \(M_o\) are uncertain input variables represented by the SEFVs \(w=S_E(400,15)\) kN, \(b=S_E(40,5)\) mm, and \(M_o=S_E(2.05\times 10^5,\, 0.05\times 10^5)\) kN-mm, as depicted in Fig. 8.
The limit state function is \(g(b,f,h,w,M_o,L)=M_o-\left( \dfrac{wL}{4}+\dfrac{fbhL^2}{8}\right)\).
It is required to evaluate the structural failure of the beam.
Fig. 8: Beam associated with its bending moment.
Applying the arithmetic of SEFVs to the problem, the limit state function g is obtained as \(S_E(0.248\times 10^5,\,0.1658\times 10^5)\), from which the structural reliability of the beam can be assessed.
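Because w, b, and \(M_o\) enter g linearly, the computation reduces to scalar multiples, one sum, and one difference of SEFVs. The Python sketch below uses the linear rules \(c\cdot S_E(a,h)=S_E(ca,|c|h)\) and \(S_E(a_1,h_1)\pm S_E(a_2,h_2)=S_E(a_1\pm a_2,\,h_1+h_2)\), which reproduce the quoted result; it illustrates the arithmetic and is not presented as the paper's exact procedure.

```python
h, L, f = 9.0, 1250.0, 78.5e-5          # mm, mm, kN/mm^3
w  = (400.0, 15.0)                       # kN
b  = (40.0, 5.0)                         # mm (unit assumed, consistent with f)
Mo = (2.05e5, 0.05e5)                    # kN-mm

def scale(c, X): return (c * X[0], abs(c) * X[1])   # c * S_E(a, h)
def add(X, Y):   return (X[0] + Y[0], X[1] + Y[1])  # S_E sum
def sub(X, Y):   return (X[0] - Y[0], X[1] + Y[1])  # S_E difference

bending = add(scale(L / 4, w), scale(f * h * L**2 / 8, b))
g = sub(Mo, bending)
print(g)   # (24804.7, 16586.9), i.e. S_E(0.248e5, 0.1658e5) as quoted
```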
10.2 Application in medical diagnosis
It is observed that patients' descriptions, medical information, and even the medical assessment process itself are tainted with imprecision, vagueness, and uncertainty. Moreover, the knowledge base correlating symptoms and diseases carries ambiguity and uncertainty into the assessment process. Accordingly, FST has been adopted to deal with such uncertainties and has become a highly active area in medical assessment. Here, SEFVs are used to represent the uncertain information.
Consider the Patient-symptom and Symptom-disease relations presented in Tables 3 and 4, respectively.
Table 3: Patient-symptom relation (SEFV entries such as \(S_E(3,1)\) relating patient \(P_1\) to symptom \(S_1\)).
Table 4: Symptom-disease relation (SEFV entries relating symptoms to diseases \(D_1, D_2, D_3\)).
Table 5: Patient-disease relation (resultant SEFVs).
Table 6: Crisp values of the patient-disease relation.
Using the multiplication and addition of SEFVs, the resultant patient-disease relation is evaluated and presented in Table 5. Then, the ranking of SEFVs is adopted to obtain crisp values of the resultant SEFVs, which are presented in Table 6. It should be noted that the maximum value in each row indicates the disease the patient is most likely to have. Here, \(\{P_1, P_2, P_3\}\), \(\{S_1, S_2, S_3\}\) and \(\{D_1, D_2, D_3\}\) are the sets of patients, symptoms and diseases, respectively. From Table 6, it is clear that the maximum value (the bold value) in the 1st row is 59.0686, which associates patient \(P_1\) with disease \(D_2\). That is, patient \(P_1\) is likely to have the disease \(D_2\). Similarly, from the 2nd and 3rd rows (bold values in Table 6) it follows that patient \(P_2\) is suffering from disease \(D_1\) and patient \(P_3\) from disease \(D_3\).
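The selection step itself is a row-wise argmax over the crisp values of Table 6. In the sketch below the score matrix is hypothetical (only the 59.0686 entry for \((P_1, D_2)\) is quoted in the text), but the outcome matches the diagnoses read off Table 6.

```python
import numpy as np

patients, diseases = ["P1", "P2", "P3"], ["D1", "D2", "D3"]
# Hypothetical crisp patient-disease scores standing in for Table 6;
# only the 59.0686 entry at (P1, D2) is taken from the text.
scores = np.array([[41.2, 59.0686, 33.5],
                   [57.1, 40.3,    38.9],
                   [30.8, 35.2,    48.6]])

for i, p in enumerate(patients):
    print(p, "->", diseases[int(np.argmax(scores[i]))])
# P1 -> D2, P2 -> D1, P3 -> D3, as concluded above
```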
11 Conclusions
Uncertainty is an integral part of real-world problems such as reliability assessment and medical diagnosis. To cope with uncertainty, a number of fuzzy variables have been proposed in the literature. However, a particular complicated fuzzy variable, the SEFV, had not been treated well, and in this regard the SEFV has been introduced here in terms of credibility theory. Then, some important properties such as the possibility measure, necessity measure, credibility measure, credibility distribution, and ICD were presented. Afterwards, the expected value of an SEFV was investigated using the credibility distribution and the ICD, along with the variance and RUVB of an SEFV, and the relationships between them were established. Another important concept, the ranking of SEFVs, was introduced based on the expected value of an SEFV; when expected values coincide, the variance of the SEFVs is used to decide the order. Comparative numerical illustrations have been presented in which the results of existing methods and of the present method were compared, showing that the present method overcomes the limitations of the earlier methods. Finally, a reliability analysis has been performed using the arithmetic of SEFVs, while a medical diagnosis has been performed using both the arithmetic and the ranking of SEFVs. The present model successfully solves both problems, which exhibits its novelty and applicability. A limitation of the present model is that it does not work properly when asymmetric SEFVs come into the picture; as an extension of this work, asymmetric SEFVs will therefore be investigated.
Compliance with ethical standards
This article does not contain any studies with human or animal subjects.
References
[1] Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–356
[2] Mohanta DK (2010) Fuzzy reliability evaluations in electric power systems. In: Panigrahi BK, Abraham A, Das S (eds) Computational intelligence in power engineering. Springer, Berlin, pp 103–130
[3] Dutta P (2019) Structural reliability analysis with inverse credibility distributions. New Math Nat Comput 15(01):47–63
[4] Rahimi T, Jahan H, Blaabjerg F, Bahman A, Hosseini S (2019) Fuzzy-logic-based mean time to failure (MTTF) analysis of interleaved DC–DC converters equipped with redundant-switch configuration. Appl Sci 9(1):88
[5] Abdolshah M, Samavi A, Khatibi SA, Mamoolraftar M (2019) A review of systems reliability analysis using fuzzy logic. In: Ram M (ed) Advanced fuzzy logic approaches in engineering science. IGI Global, pp 362–377
[6] Li H, Nie Z (2018) Structural reliability analysis with fuzzy random variables using error principle. Eng Appl Artif Intell 67:91–99. https://doi.org/10.1016/j.engappai.2017.08.015
[7] Gao P, Xie L, Hu W, Liu C, Feng J (2018) Dynamic fuzzy reliability analysis of multistate systems based on universal generating function. Math Probl Eng. https://doi.org/10.1109/ACCESS.2019.2941508
[8] Ebrahimnejad A, Jamkhaneh EB (2018) System reliability using generalized intuitionistic fuzzy Rayleigh lifetime distribution. Appl Appl Math 13(1):97–113
[9] Kumar A, Ram M (2018) System reliability analysis based on Weibull distribution and hesitant fuzzy set. Int J Math Eng Manag Sci 3(4):513–521
[10] Olawoyin R (2017) Risk and reliability evaluation of gas connector systems using fuzzy theory and expert elicitation. Cogent Eng 4(1):1372731
[11] Jamali S, Bani MJ (2017) Application of fuzzy assessing for reliability decision making. In: Proceedings of the world congress on engineering and computer science
[12] Lacas E, Santolay JL, Biedermann A (2016) Obtaining sustainable production from the product design analysis. J Clean Prod 139:706–716. https://doi.org/10.1016/j.jclepro.2016.08.078
[13] Zhi-gang L, Jun-gang Z, Bo-ying L (2016) Research on reliability evaluation method of complex multistate system based on fuzzy fault tree. Int Conf Fuzzy Theory Appl (iFUZZY). https://doi.org/10.1109/iFUZZY.2016.8004957
[14] Rachid B, Hafaifa A, Hadroug N, Boumehraz M (2016) Reliability evaluation based on a fuzzy expert system: centrifugal pump application. Stud Inf Control. https://doi.org/10.24846/v25i2y201605
[15] Rizvi S, Singh V, Khan A (2016) Fuzzy logic based software reliability quantification framework: early stage perspective (FLSRQF). Procedia Comput Sci 89:359–368. https://doi.org/10.1016/j.procs.2016.06.083
[16] Dutta P (2017) Decision making in medical diagnosis via distance measures on interval valued fuzzy sets. Int J Syst Dyn Appl 6(4):63–83
[17] Dutta P (2018) Medical diagnosis based on distance measures between picture fuzzy sets. Int J Fuzzy Syst Appl 7(4):15–36
[18] Dutta P (2018) Medical diagnosis via distance measures between credibility distributions. Int J Decis Support Syst Technol 10(4):1–16
[19] Talukdar P, Dutta P (2018) Disease diagnosis using an advanced distance measure for intuitionistic fuzzy sets. Int Res J Microbiol 7(2):029–042
[20] Dutta P, Dash SR (2018) Medical decision making via the arithmetic of generalized triangular fuzzy numbers. Open Cybern Syst. https://doi.org/10.2174/1874110X01812010001
[21] Dutta P, Limboo B (2017) Bell-shaped fuzzy soft sets and their application in medical diagnosis. Fuzzy Inf Eng 9(1):67–91
[22] Dutta P (2017) Medical diagnosis via distance measures on picture fuzzy sets. AMSE Journals, Series Advances A 54(2):657–672
[23] Dutta P, Doley D (2020) Medical diagnosis under uncertain environment through bipolar-valued fuzzy sets. In: Gupta M, Konar D, Bhattacharyya S, Biswas S (eds) Computer vision and machine intelligence in medical image analysis. Springer, Singapore, pp 127–135
[24] Dutta P (2020) A straightforward advanced ranking approach of fuzzy numbers. In: Satapathy SC, Bhateja V, Mohanty JR, Udgata SK (eds) Smart intelligent computing and applications. Springer, Singapore, pp 475–483
[25] Zadeh LA (1978) Fuzzy set as a basis for a theory of possibility. Fuzzy Sets Syst 1:3–28
[26] Dubois D, Prade H (1988) Possibility theory. Plenum Press, New York
[27] Klir GJ (1992) On fuzzy set interpretation of possibility theory. Fuzzy Sets Syst 108:263–273
[28] Yager RR (1992) On the specificity of a possibility distribution. Fuzzy Sets Syst 50:279–292
[29] Dubois D, Prade H (1987) The mean value of a fuzzy number. Fuzzy Sets Syst 24:279–300
[30] Ban J (1990) Radon–Nikodým theorem and conditional expectation of fuzzy-valued measures and variables. Fuzzy Sets Syst 34(3):383–392
[31] Heilpern S (1992) The expected value of a fuzzy number. Fuzzy Sets Syst 47:81–86
[32] Carlsson C, Fullér R (2001) On possibilistic mean value and variance of fuzzy numbers. Fuzzy Sets Syst 122:315–326
[33] Chen W, Tan S (2009) On the possibilistic mean value and variance of multiplication of fuzzy numbers. J Comput Appl Math 232(2):327–334
[34] Liu B, Liu YK (2002) Expected value of fuzzy variable and fuzzy expected value models. IEEE Trans Fuzzy Syst 10(4):445–450
[35] Li X, Liu B (2006) A sufficient and necessary condition for credibility measures. Int J Uncertain Fuzziness Knowl Based Syst 14(5):527–535
[36] Liu B (2006) A survey of credibility theory. Fuzzy Optim Decis Mak 5(4):387–408
[37] Liu B (2004) Uncertainty theory: a branch of mathematics for modeling human uncertainty. Springer, Berlin
[38] Zhou J, Yang F, Wang K (2015) Fuzzy arithmetic on LR fuzzy numbers with applications to fuzzy programming. J Intell Fuzzy Syst. https://doi.org/10.3233/IFS-151712
[39] Yi X, Miao Y, Zhou J, Wang Y (2016) Some novel inequalities for fuzzy variables on the variance and its rational upper bound. J Inequal Appl 2016(1):41
[40] Garai T, Chakraborty D, Roy TK (2017) Expected value of exponential fuzzy number and its application to multi-item deterministic inventory model for deteriorating items. J Uncertain Anal Appl 5(1):8
[41] Dutta P, Saikia B (2019) Arithmetic operations on normal semi elliptic intuitionistic fuzzy numbers and their application in decision-making. Granul Comput. https://doi.org/10.1007/s41066-019-00175-5
[42] Abbasbandy S, Nuraei R, Ghanbari M (2013) Revision of sign distance method for ranking of fuzzy numbers. Iran J Fuzzy Syst 10(4):101–117
[43] Wang ZX, Liu YJ, Fan ZP, Feng B (2009) Ranking L–R fuzzy number based on deviation degree. Inf Sci 179(13):2070–2077
[44] Asady B (2010) The revised method of ranking LR fuzzy number based on deviation degree. Expert Syst Appl 37(7):5056–5060
[45] Asady B, Zendehnam A (2007) Ranking fuzzy numbers by distance minimization. Appl Math Model 31(11):2589–2598
[46] Asady B (2011) Revision of distance minimization method for ranking of fuzzy numbers. Appl Math Model 35(3):1306–1313
[47] Abbasbandy S, Hajjari T (2009) A new approach for ranking of trapezoidal fuzzy numbers. Comput Math Appl 57(3):413–419
[48] Yager RR (1979) Ranking fuzzy subsets over the unit interval. In: 1978 IEEE conference on decision and control including the 17th symposium on adaptive processes. IEEE, pp 1435–1437
[49] Chen SM, Munif A, Chen GS, Liu HC, Kuo BC (2012) Fuzzy risk analysis based on ranking generalized fuzzy numbers with different left heights and right heights. Expert Syst Appl 39(7):6320–6334
[50] Rezvani S (2015) Ranking generalized exponential trapezoidal fuzzy numbers based on variance. Appl Math Comput 262:191–198
[51] Chen SM, Sanguansat K (2011) Analyzing fuzzy risk based on a new fuzzy ranking method between generalized fuzzy numbers. Expert Syst Appl 38(3):2163–2171
[52] Vincent FY, Chi HTX, Shen CW (2013) Ranking fuzzy numbers based on epsilon-deviation degree. Appl Soft Comput 13(8):3621–3627
[53] Vincent FY, Chi HTX, Dat LQ, Phuc PNK, Shen CW (2013) Ranking generalized fuzzy numbers in fuzzy decision making based on the left and right transfer coefficients and areas. Appl Math Model 37(16–17):8106–8117
[54] Vincent FY, Dat LQ (2014) An improved ranking method for fuzzy numbers with integral values. Appl Soft Comput 14:603–608
Department of Mathematics, Dibrugarh University, Dibrugarh, India
Dutta, P. SN Appl. Sci. (2020) 2: 111. https://doi.org/10.1007/s42452-019-1871-8
Received 16 September 2019
Accepted 09 December 2019
The Solar Wind as a Turbulence Laboratory
Roberto Bruno and Vincenzo Carbone
Living Reviews in Solar Physics volume 10, Article number: 2 (2013)
This article is a revised version of 10.12942/lrsp-2005-4.
In this review we will focus on a topic of fundamental importance for both astrophysics and plasma physics, namely the occurrence of large-amplitude low-frequency fluctuations of the fields that describe the plasma state. This subject will be treated within the context of the expanding solar wind, and the most meaningful advances in this research field will be reported, emphasizing the results obtained in the past decade or so. As a matter of fact, Helios observations in the inner heliosphere and Ulysses' high-latitude observations, recent multi-spacecraft measurements in the solar wind (the four Cluster satellites), and new numerical approaches to the problem, based on the dynamics of complex systems, brought new important insights which helped to better understand how turbulent fluctuations behave in the solar wind. In particular, numerical simulations within the realm of magnetohydrodynamic (MHD) turbulence theory unraveled what kind of physical mechanisms are at the basis of turbulence generation and energy transfer across the spectral domain of the fluctuations. In other words, the advances reached in these past years in the investigation of solar wind turbulence now offer a rather complete picture of the phenomenological aspects of the problem, which will be presented here in a rather organic way.
The whole heliosphere is permeated by the solar wind, a supersonic and super-Alfvénic plasma flow of solar origin which continuously expands into the heliosphere. This medium offers the best opportunity to study collisionless plasma phenomena directly, mainly at low frequencies where high-amplitude fluctuations have been observed. During its expansion, the solar wind develops a strong turbulent character, which evolves towards a state that resembles the well-known hydrodynamic turbulence described by Kolmogorov (1941, 1991). Because of the presence of a strong magnetic field carried by the wind, low-frequency fluctuations in the solar wind are usually described within a magnetohydrodynamic (MHD, hereafter) framework (Kraichnan, 1965; Biskamp, 1993; Tu and Marsch, 1995a; Biskamp, 2003; Petrosyan et al., 2010). However, due to some peculiar characteristics, solar wind turbulence contains features that are hard to classify within a general theoretical framework.
Turbulence in the solar heliosphere plays a relevant role in several aspects of plasma behavior in space, such as solar wind generation, high-energy particle acceleration, plasma heating, and cosmic ray propagation. In the 1970s and 80s, impressive advances were made in the knowledge of turbulent phenomena in the solar wind. However, at that time, spacecraft observations were limited to a small latitudinal excursion around the solar equator and, in practice, only a thin slice above and below the equatorial plane was accessible, i.e., a sort of 2D heliosphere. A rather exhaustive survey of the most important results based on in-situ observations in the ecliptic plane has been provided in an excellent review by Tu and Marsch (1995a), and we invite the reader to refer to that paper. It remains, to our knowledge, the last large review in the literature on turbulence observations in the ecliptic.
In the 1990s, with the launch of the Ulysses spacecraft, investigations have been extended to the high-latitude regions of the heliosphere, allowing us to characterize and study how turbulence evolves in the polar regions. An overview of Ulysses results about polar turbulence can also be found in Horbury and Tsurutani (2001). With this new laboratory, relevant advances have been made. One of the main goals of the present work will be that of reviewing observations and theoretical efforts made to understand the near-equatorial and polar turbulence in order to provide the reader with a rather complete view of the low-frequency turbulence phenomenon in the 3D heliosphere.
New interesting insights into the theory of turbulence derive from the point of view which considers a turbulent flow as a complex system, a sort of benchmark for the theory of dynamical systems. The theory of chaos received a fundamental impulse through the theory of turbulence developed by Ruelle and Takens (1971) who, criticizing the old theory of Landau and Lifshitz (1971), were able to put the numerical investigation by Lorenz (1963) in a mathematical framework. Gollub and Swinney (1975) set up accurate experiments on rotating fluids confirming the point of view of Ruelle and Takens (1971), who showed that a strange attractor in the phase space of the system is the best model for the birth of turbulence. This gave a strong impulse to the investigation of the phenomenology of turbulence from the point of view of dynamical systems (Bohr et al., 1998). For example, the criticism by Landau leading to the investigation of intermittency in fully developed turbulence was worked out through some phenomenological models for the energy cascade (cf. Frisch, 1995). Recently, turbulence in the solar wind has been used as a big wind tunnel to investigate scaling laws of turbulent fluctuations, multifractal models, etc. The review by Tu and Marsch (1995a) contains a brief introduction to this important argument, which was being developed at that time in relation to the solar wind (Burlaga, 1993; Carbone, 1993; Biskamp, 1993, 2003; Burlaga, 1995). The reader can convince himself that, because of the wide range of scales excited, space plasma can be seen as a very big laboratory where fully developed turbulence can be investigated not only per se, but also as far as basic theoretical aspects are concerned.
Turbulence is perhaps the most beautiful unsolved problem of classical physics, and the approaches used so far in understanding, describing, and modeling turbulence are very interesting even from a historical point of view, as clearly appears when reading, for example, the book by Frisch (1995). The history of turbulence in interplanetary space is, perhaps, even more interesting since its knowledge proceeds together with the human conquest of space. Thus, whenever appropriate, we will also introduce some historical references to show the way particular problems related to turbulence have been faced in time, both theoretically and technologically. Finally, since turbulence is a phenomenon visible everywhere in nature, it will be interesting to compare some experimental and theoretical aspects among different turbulent media in order to assess specific features which might be universal, not limited only to turbulence in space plasmas. In particular, we will compare results obtained in interplanetary space with results obtained from ordinary fluid flows on Earth, and from experiments on magnetic turbulence in laboratory plasmas designed for thermonuclear fusion.
What does turbulence stand for?
The word turbulent is used in everyday experience to indicate something which is not regular. In Latin the word turba means something confusing or something which does not follow an ordered plan. A turbulent boy, in all Italian schools, is a young fellow who rebels against ordered schemes. Following the same line, the behavior of a flow which rebels against the deterministic rules of classical dynamics is called turbulent. Even the opposite, namely a laminar motion, derives from the Latin word lámina, which means stream or sheet, and gives the idea of a regular streaming motion. Anyhow, even without the aid of a laboratory experiment and a Latin dictionary, we experience turbulence every day. It is relatively easy to observe turbulence and, in some sense, we generally do not pay much attention to it (apart from when, sitting in an airplane, a nice lady asks us to fasten our seat belts during the flight because we are approaching some turbulence!). Turbulence appears everywhere the velocity of the flow is high enough: for example, when a flow encounters an obstacle (cf., e.g., Figure 1), in the atmospheric flow, or during the circulation of blood, etc. Even charged fluids (plasma) can become turbulent. For example, laboratory plasmas are often in a turbulent state, as well as natural plasmas like the outer regions of stars. Living near a star, we have a big chance to directly investigate the turbulent motion inside the flow which originates from the Sun, namely the solar wind. This will be the main topic of the present review.
Turbulence as observed in a river. Here we can see different turbulent wakes due to different obstacles (simple stones) emerging naturally above the water level.
Turbulence that we observe in fluid flows appears as a very complicated state of motion, and at first sight it looks (apparently!) strongly irregular and chaotic, both in space and time. The only dynamical rule seems to be the impossibility to predict any future state of the motion. However, it is interesting to recognize the fact that, when we take a picture of a turbulent flow at a given time, we see the presence of a lot of different turbulent structures of all sizes which are actively present during the motion. The presence of these structures was well recognized a long time ago, as testified by the beautiful pictures of vortices observed and reproduced by the Italian genius Leonardo da Vinci, as reported in the textbook by Frisch (1995). Figure 2 shows, as an example, one picture from Leonardo which can be compared with Figure 3 taken from a typical experiment on a turbulent jet.
Three examples of vortices taken from the pictures by Leonardo da Vinci (cf. Frisch, 1995).
Turbulence as observed in a turbulent water jet (Van Dyke, 1982), reported in the book by Frisch (1995) (photograph by P. Dimotakis, R. Lye, and D. Papantoniu).
Turbulent features can be recognized even in natural turbulent systems like, for example, the atmosphere of Jupiter (see Figure 4). A different example of turbulence in plasmas is reported in Figure 5, where we show the result of a typical high resolution numerical simulation of 2D MHD turbulence. In this case the turbulent field shown is the current density. These basic features of mixing between order and chaos make the investigation of the properties of turbulence terribly complicated, although extraordinarily fascinating.
When we look at a flow at two different times, we can observe that the general aspect of the flow has not changed appreciably, say vortices are present all the time but the flow in each single point of the fluid looks different. We recognize that the gross features of the flow are reproducible but details are not predictable. We have to use a statistical approach to turbulence, just as it is done to describe stochastic processes, even if the problem is born within the strange dynamics of a deterministic system!
Turbulence in the atmosphere of Jupiter as observed by Voyager.
High resolution numerical simulations of 2D MHD turbulence at resolution 2048 × 2048 (courtesy by H. Politano). Here, the authors show the current density J(x, y), at a given time, on the plane (x, y).
Turbulence increases the transport properties of a flow. For example, urban pollution, without atmospheric turbulence, would not be spread (or eliminated) in a relatively short time. Results from numerical simulations of the concentration of a passive scalar transported by a turbulent flow are shown in Figure 6. On the other hand, in laboratory plasmas inside devices designed to achieve controlled thermonuclear fusion, anomalous transport driven by turbulent fluctuations is the main cause of the destruction of magnetic confinement. Actually, we are far from achieving controlled thermonuclear fusion. Turbulence, then, acquires the strange feature of something to be avoided in some cases, or to be invoked in other cases.
Turbulence became an experimental science thanks to Osborne Reynolds who, at the end of the 19th century, observed and investigated experimentally the transition from laminar to turbulent flow. He noticed that the flow inside a pipe becomes turbulent every time a single parameter, a combination of the viscosity coefficient η, a characteristic velocity U, and a characteristic length L, increases beyond a threshold. This parameter, Re = ULρ/η (ρ is the mass density of the fluid), is now called the Reynolds number. At lower Re, say Re ≤ 2300, the flow is regular (that is, the motion is laminar), but when Re increases beyond a certain threshold of the order of Re ≃ 4000, the flow becomes turbulent. As Re increases, the transition from a laminar to a turbulent state occurs over a range of values of Re with different characteristics, depending on the details of the experiment. In the limit Re → ∞ the turbulence is said to be in a fully developed turbulent state. The original pictures by Reynolds are shown in Figure 7.
Concentration field c(x, y), at a given time, on the plane (x, y). The field has been obtained by a numerical simulation at resolution 2048 × 2048. The concentration is treated as a passive scalar, transported by a turbulent field. Low concentrations are reported in blue while high concentrations are reported in yellow (courtesy by A. Noullez).
The original pictures by Reynolds which show the transition to a turbulent state of a flow in a pipe, as the Reynolds number increases from top to bottom (Reynolds, 1883).
Dynamics vs. statistics
In Figure 8 we report a typical sample of turbulence as observed in a fluid flow in the Earth's atmosphere. Time evolution of both the longitudinal velocity component and the temperature is shown. Measurements in the solar wind show the same typical behavior. A typical sample of turbulence as measured by Helios 2 spacecraft is shown in Figure 9. A further sample of turbulence, namely the radial component of the magnetic field measured at the external wall of an experiment in a plasma device realized for thermonuclear fusion, is shown in Figure 10.
As is well documented in these figures, the main feature of fully developed turbulence is the chaotic character of the time behavior. Said differently, this means that the behavior of the flow is unpredictable. While the details of fully developed turbulent motions are extremely sensitive to triggering disturbances, average properties are not. If this were not the case, there would be little significance in the averaging process. Predictability in turbulence can be recast at a statistical level. In other words, when we look at two different samples of turbulence, even collected within the same medium, we can see that details look very different. What is actually common is a generic stochastic behavior. This means that the global statistical behavior does not change going from one sample to the other. The idea that fully developed turbulent flows are extremely sensitive to small perturbations but have statistical properties that are insensitive to perturbations is of central importance throughout this review. Fluctuations of a certain stochastic variable ψ are defined here as the difference from the average value, δψ = ψ − ⟨ψ⟩, where brackets mean some averaging process. Actually, the method of taking averages in a turbulent flow requires some care. We would like to recall that there are, at least, three different kinds of averaging procedures that may be used to obtain statistically-averaged properties of turbulence. The space average is limited to flows that are statistically homogeneous or, at least, approximately homogeneous over scales larger than those of the fluctuations. The ensemble average is the most versatile, where the average is taken over an ensemble of turbulent flows prepared under nearly identical external conditions. Of course, these flows are not completely identical because of the large fluctuations present in turbulence. Each member of the ensemble is called a realization. The third kind of averaging procedure is the time average, which is useful only if the turbulence is statistically stationary over time scales much larger than the time scale of the fluctuations. In practice, because of the convenience offered by locating a probe at a fixed point in space and integrating in time, experimental results are usually obtained as time averages. The ergodic theorem (Halmos, 1956) assures that time averages coincide with ensemble averages under some standard conditions (see Appendix B).
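As a minimal numerical illustration of the time-averaging procedure, the sketch below builds a synthetic stationary record and computes the fluctuation δψ = ψ − ⟨ψ⟩ about the time average; for a stationary ergodic process this time average stands in for the ensemble average.

```python
import numpy as np

# Synthetic stationary record standing in for a turbulent time series.
rng = np.random.default_rng(0)
psi = 5.0 + rng.standard_normal(100_000)

psi_avg = psi.mean()        # time average <psi> over the whole record
dpsi = psi - psi_avg        # fluctuations: delta-psi = psi - <psi>
print(psi_avg, dpsi.mean(), dpsi.var())
# <psi> ~ 5, fluctuations average to zero by construction; for a stationary
# ergodic signal this time average converges to the ensemble average.
```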
Turbulence as measured in the atmospheric boundary layer. Time evolution of the longitudinal velocity and temperature are shown in the upper and lower panels, respectively. The turbulent samples have been collected above a grass-covered forest clearing at 5 m above the ground surface and at a sampling rate of 56 Hz (Katul et al., 1997).
A different property of turbulence is that all dynamically interesting scales are excited, that is, energy is spread over all scales. This can be seen in Figure 11 where we show the magnetic field intensity within a typical solar wind stream (see top panel). In the middle and bottom panels we show fluctuations at two different detailed scales. A kind of self-similarity (say a similarity at all scales) is observed.
Since fully developed turbulence involves a hierarchy of scales, a large number of interacting degrees of freedom are involved. Then, there should be an asymptotic statistical state of turbulence that is independent of the details of the flow. Hopefully, this asymptotic state depends, perhaps in a critical way, only on simple statistical properties like energy spectra, much as in statistical mechanics equilibrium, where the statistical state is determined by the energy spectrum (Huang, 1987). Of course, we cannot expect that the statistical state would determine the details of individual realizations, because realizations need not be given the same weight in different ensembles with the same low-order statistical properties.
A sample of fast solar wind at distance 0.9 AU measured by the Helios 2 spacecraft. From top to bottom: speed, number density, temperature, and magnetic field, as a function of time.
Turbulence as measured at the external wall of a device designed for thermonuclear fusion, namely the RFX in Padua (Italy). The radial component of the magnetic field as a function of time is shown in the figure (courtesy by V. Antoni).
Magnetic intensity fluctuations as observed by Helios 2 in the inner solar wind at 0.9 AU, for different blow-ups. Some self-similarity is evident here.
It should be emphasized that there are no firm mathematical arguments for the existence of an asymptotic statistical state. As we have just seen, reproducible statistical results are obtained from observations; that is, the existence of such a state is suggested experimentally and from physical plausibility. Apart from physical plausibility, it is embarrassing that such an important feature of fully developed turbulence, namely the existence of statistical stability, should remain unproven. However, such is the complex nature of turbulence.
Equations and Phenomenology
In this section, we present the basic equations that are used to describe charged fluid flows, and the basic phenomenology of low-frequency turbulence. Readers interested in examining this subject closely can refer to the very wide literature on turbulence in fluid flows, for example the books by Pope (2000), McComb (1990), Frisch (1995), or many others, and to the less known literature on MHD flows (Biskamp, 1993; Boyd and Sanderson, 2003; Biskamp, 2003). In order to describe a plasma as a continuous medium it will be assumed to be collisional and, as a consequence, all quantities will be functions of space r and time t. Apart from the required quasi-neutrality, the basic assumption of MHD is that fields fluctuate on the same time and length scales as the plasma variables, say ωτH ≃ 1 and kLH ≃ 1 (k and ω are, respectively, the wave number and the frequency of the fields, while τH and LH are the hydrodynamic time and length scales, respectively). Since the plasma is treated as a single fluid, we have to take the slow rates of the ions. A simple analysis shows also that the electrostatic force and the displacement current can be neglected in the non-relativistic approximation. Then, the MHD equations can be derived as shown in the following sections.
The Navier-Stokes equation and the Reynolds number
The equations which describe the dynamics of real incompressible fluid flows were introduced by Claude-Louis Navier in 1823 and improved by George G. Stokes. They are nothing but the momentum equation based on Newton's second law, which relates the acceleration of a fluid particle to the resulting volume and body forces acting on it. These equations had been introduced by Leonhard Euler; however, the main contribution by Navier was to add a friction forcing term due to the interactions between fluid layers which move with different speed. This term results to be proportional to the viscosity coefficients η and ξ and to the variation of speed. By defining the velocity field u(r, t), the kinetic pressure p, and the density ρ, the equations describing a fluid flow are the continuity equation describing the conservation of mass
$$\frac{{\partial \rho }} {{\partial t}} + (u \cdot \nabla )\rho = - \rho \nabla \cdot u,$$
((1))
the equation for the conservation of momentum
$$\rho \left[ {\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u} \right] = - \nabla p + \eta \nabla ^2 u + \left( {\xi + \frac{\eta } {3}} \right)\nabla (\nabla \cdot u),$$
and an equation for the conservation of energy
$$\rho T\left[ {\frac{{\partial s}} {{\partial t}} + (u \cdot \nabla )s} \right] = \nabla (\chi \nabla T) + \frac{\eta } {2}\left( {\frac{{\partial u_i }} {{\partial x_k }} + \frac{{\partial u_k }} {{\partial x_i }} - \frac{2} {3}\delta _{ik} \nabla \cdot u} \right)^2 + \xi (\nabla \cdot u)^2 ,$$
where s is the entropy per mass unit, T is the temperature, and χ is the coefficient of thermoconduction. An equation of state closes the system of fluid equations.
The above equations simplify considerably if we consider an incompressible fluid, where ρ = const., so that we obtain the Navier-Stokes (NS) equation
$$\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u = - \left( {\frac{{\nabla p}} {\rho }} \right) + \nu \nabla ^2 u,$$
where the coefficient ν = η/ρ is the kinematic viscosity. The incompressibility of the flow translates into a condition on the velocity field, namely the field is divergence-free, i.e., ∇·u = 0. This condition eliminates all high-frequency sound waves and is called the incompressible limit. The non-linear term in this equation represents the convective (or substantial) derivative. Of course, we can add to the right-hand side of this equation any external forces, which eventually act on the fluid parcel.
We use the velocity scale U and the length scale L to define dimensionless independent variables, namely r = r'L (from which ∇ = ∇'/L) and t = t'(L/U), and dependent variables u = u'U and p = p'U²ρ. Then, using these variables in Equation (4), we obtain
$$\frac{{\partial u'}} {{\partial t'}} + (u' \cdot \nabla ')u' = - \nabla 'p' + Re^{ - 1} \nabla '^2 u'.$$
The Reynolds number Re = UL/ν is evidently the only parameter of the fluid flow. This defines a Reynolds number similarity for fluid flows, namely fluids with the same value of the Reynolds number behave in the same way. Looking at Equation (5) it can be realized that the Reynolds number represents a measure of the relative strength between the non-linear convective term and the viscous term in Equation (4). The higher Re, the more important the non-linear term is in the dynamics of the flow. Turbulence is a genuine result of the non-linear dynamics of fluid flows.
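As a quick numerical illustration, the snippet below evaluates Re for water flowing in a small pipe, using round illustrative numbers, and compares it with the transition range quoted earlier.

```python
def reynolds(U, L, nu):
    """Re = U*L/nu: flow speed U (m/s), scale L (m), kinematic viscosity nu (m^2/s)."""
    return U * L / nu

# Water (nu ~ 1e-6 m^2/s) moving at 0.5 m/s in a 2 cm pipe:
print(reynolds(0.5, 0.02, 1e-6))   # 1e4, well beyond the ~2300-4000 transition range
```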
The coupling between a charged fluid and the magnetic field
Magnetic fields are ubiquitous in the Universe and are dynamically important. At high frequencies, kinetic effects are dominant, but at frequencies lower than the ion cyclotron frequency, the evolution of plasma can be modeled using the MHD approximation. Furthermore, dissipative phenomena can be neglected at large scales although their effects will be felt because of non-locality of non-linear interactions. In the presence of a magnetic field, the Lorentz force j × B, where j is the electric current density, must be added to the fluid equations, namely
$$\rho \left[ {\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u} \right] = - \nabla p + \eta \nabla ^2 u + \left( {\xi + \frac{\eta } {3}} \right)\nabla (\nabla \cdot u) - \frac{1} {{4\pi }}B \times (\nabla \times B),$$
and the Joule heat must be added to the equation for energy
$$\rho T\left[ {\frac{{\partial s}} {{\partial t}} + (u \cdot \nabla )s} \right] = \sigma _{ik} \frac{{\partial u_i }} {{\partial x_k }} + \chi \nabla ^2 T + \frac{{c^2 }} {{16\pi ^2 \sigma }}(\nabla \times B)^2 ,$$
where σ is the conductivity of the medium, and we introduced the viscous stress tensor
$$\sigma _{ik} = \eta \left( {\frac{{\partial u_i }} {{\partial x_k }} + \frac{{\partial u_k }} {{\partial x_i }} - \frac{2} {3}\delta _{ik} \nabla \cdot u} \right) + \xi \delta _{ik} \nabla \cdot u.$$
An equation for the magnetic field stems from the Maxwell equations in which the displacement current is neglected under the assumption that the velocity of the fluid under consideration is much smaller than the speed of light. Then, using
$$\nabla \times B = \mu _0 j$$
((8a))
and the Ohm's law for a conductor in motion with a speed u in a magnetic field
$$j = \sigma (E + u \times B),$$
((8b))
we obtain the induction equation which describes the time evolution of the magnetic field
$$\frac{{\partial B}} {{\partial t}} = \nabla \times (u \times B) + (1/\sigma \mu _0 )\nabla ^2 B,$$
together with the constraint ∇ · B = 0 (no magnetic monopoles in the classical case).
In the incompressible case, where ∇ · u = 0, MHD equations can be reduced to
$$\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u = - \nabla P_{tot} + \nu \nabla ^2 u + (b \cdot \nabla )b$$
$$\frac{{\partial b}} {{\partial t}} + (u \cdot \nabla )b = - (b \cdot \nabla )u + \eta \nabla ^2 b.$$
Here Ptot is the total kinetic pressure Pk = nkT plus the magnetic pressure Pm = B²/8π, divided by the constant mass density ρ. Moreover, we introduced the velocity variable b = B/√(4πρ) and the magnetic diffusivity η.
Similar to the usual Reynolds number, a magnetic Reynolds number Rm can be defined, namely
$$R_m = \frac{{c_A L_0 }} {\eta },$$
((11a))
where cA = B0/√(4πρ) is the Alfvén speed related to the large-scale magnetic field B0. This number is very large in most astrophysical circumstances, but the ratio of the two Reynolds numbers or, in other words, the magnetic Prandtl number Pm = ν/η can differ widely. In the absence of dissipative terms, for each volume V the MHD equations conserve the total energy E(t)
$$E(t) = \int_V {(v^2 + b^2 )d^3 r,}$$
the cross-helicity Hc(t), which represents a measure of the degree of correlations between velocity and magnetic fields
$$H_c (t) = \int_V {v \cdot b d^3 r,}$$
and the magnetic helicity H(t), which represents a measure of the degree of linkage among magnetic flux tubes
$$H(t) = \int_V {a \cdot b d^3 r,}$$
where b = ∇ × a.
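As an order-of-magnitude illustration of the Alfvén speed and the magnetic Reynolds number defined above (written here in SI units rather than the Gaussian units of the text), the snippet below uses representative solar wind values near 1 AU; the quoted field, density, outer scale, and diffusivity are only illustrative.

```python
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability (SI)
m_p = 1.67e-27              # proton mass, kg
B0  = 5e-9                  # ~5 nT, typical near 1 AU (illustrative)
n   = 5e6                   # ~5 protons per cm^3, expressed in m^-3 (illustrative)

rho = n * m_p
c_A = B0 / np.sqrt(mu0 * rho)
print(c_A / 1e3)            # ~50 km/s

L0, eta = 1e9, 1e3          # illustrative outer scale (m) and diffusivity (m^2/s)
print(c_A * L0 / eta)       # R_m ~ 5e10: enormous, as stated above
```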
The change of variables due to Elsässer (1950), say z± = u ± b', where we explicitly use the background uniform magnetic field b' = b + cA (at variance with the bulk velocity, the largest scale magnetic field cannot be eliminated through a Galilean transformation), leads to the more symmetrical form of the MHD equations in the incompressible case
$$\frac{{\partial z^ \pm }} {{\partial t}} \mp (c_A \cdot \nabla )z^ \pm + (z^ \mp \cdot \nabla )z^ \pm = - \nabla P_{tot} + \nu ^ \pm \nabla ^2 z^ \pm + \nu ^ \mp \nabla ^2 z^ \mp + F^ \pm ,$$
where 2ν± = ν±η are the dissipative coefficients, and F± are eventual external forcing terms. The relations ∇ · z± = 0 complete the set of equations. On linearizing Equation (15) and neglecting both the viscous and the external forcing terms, we have
$$\frac{{\partial z^ \pm }} {{\partial t}} \mp (c_A \cdot \nabla )z^ \pm \simeq 0,$$
which shows that z−(x − cAt) describes Alfvénic fluctuations propagating in the direction of B0, and z+(x + cAt) describes Alfvénic fluctuations propagating opposite to B0. Note that MHD Equations (15) have the same structure as the Navier-Stokes equation; the main difference stems from the fact that non-linear coupling happens only between fluctuations propagating in opposite directions. As we will see, this has a deep influence on turbulence described by the MHD equations.
It is worthwhile to remark that in classical hydrodynamics, dissipative processes are defined through three coefficients, namely two viscosities and one thermoconduction coefficient. In the hydromagnetic case the number of coefficients increases considerably. Apart from a few additional electrical coefficients, we have a large-scale (background) magnetic field B0. This makes the MHD equations intrinsically anisotropic. Furthermore, the stress tensor (8) is deeply modified by the presence of a magnetic field B0, in that the kinetic viscous coefficients must depend on the magnitude and direction of the magnetic field (Braginskii, 1965). This has a strong influence on the determination of the Reynolds number.
Scaling features of the equations
The scaled Euler equations are the same as Equations (4 and 5), but without the term proportional to Re^{-1}. The scaled variables obtained from the Euler equations are, then, the same. Thus, scaled variables exhibit scaling similarity, and the Euler equations are said to be invariant with respect to scale transformations. Said differently, this means that NS Equations (4) show scaling properties (Frisch, 1995), that is, there exists a class of solutions which are invariant under scaling transformations. Introducing a length scale ℓ, it is straightforward to verify that the scaling transformations ℓ → λℓ' and u → λ^h u' (λ is a scaling factor and h is a scaling index) leave the inviscid NS equation invariant for any scaling exponent h, providing P → λ^{2h} P'. When the dissipative term is taken into account, a characteristic length scale exists, say the dissipative scale ℓD. From a phenomenological point of view, this is the length scale where dissipative effects start to be experienced by the flow. Of course, since ν is in general very small, we expect ℓD to be very small as well. Actually, there exists a simple relationship for the scaling of ℓD with the Reynolds number, namely ℓD ~ L Re^{-3/4}. The larger the Reynolds number, the smaller the dissipative length scale.
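This scaling is easy to evaluate numerically; the snippet below shows how quickly ℓD shrinks relative to the outer scale as Re grows.

```python
def dissipative_scale(L, Re):
    """Kolmogorov estimate: l_D ~ L * Re**(-3/4)."""
    return L * Re ** -0.75

for Re in (1e4, 1e6, 1e8):
    print(f"Re = {Re:.0e}:  l_D / L ~ {dissipative_scale(1.0, Re):.1e}")
# l_D drops from ~1e-3 L at Re = 1e4 to ~1e-6 L at Re = 1e8.
```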
As is easily verified, the ideal MHD equations display similar scaling features. Say, the scaling transformations u → λ^h u' and B → λ^β B' (β here is a new scaling index different from h) leave the inviscid MHD equations unchanged, providing P → λ^{2β} P', T → λ^{2h} T', and ρ → λ^{2(β−h)} ρ'. This means that velocity and magnetic variables have different scalings, say h ≠ β, only when the scaling of the density is taken into account. In the incompressible case, we cannot distinguish between scaling laws for velocity and magnetic variables.
The non-linear energy cascade
The basic properties of turbulence, as derived both from the Navier-Stokes equation and from phenomenological considerations, are the legacy of A. N. Kolmogorov (Frisch, 1995). Phenomenology is based on the old picture by Richardson, who realized that turbulence is made by a collection of eddies at all scales. Energy, injected at a length scale L, is transferred by non-linear interactions to small scales where it is dissipated at a characteristic scale ℓD, the length scale where dissipation takes place. The main idea is that at very large Reynolds numbers, the injection scale L and the dissipative scale ℓD are completely separated. In a stationary situation, the energy injection rate must be balanced by the energy dissipation rate and must also be the same as the energy transfer rate ε measured at any scale ℓ within the inertial range ℓD ≪ ℓ ≪ L. From a phenomenological point of view, the energy injection rate at the scale L is given by ε_L ~ U²/τ_L, where τ_L is a characteristic time for the energy injection process, which results to be τ_L ~ L/U. At the same scale L the energy dissipation rate is ε_D ~ U²/τ_D, where τ_D is the characteristic dissipation time which, from Equation (4), can be estimated to be of the order of τ_D ~ L²/ν. As a result, the ratio between the energy injection rate and the dissipation rate is
$\frac{{\varepsilon _L }} {{\varepsilon _D }} \sim \frac{{\tau _D }} {{\tau _L }} \sim Re,$
that is, the energy injection rate at the largest scale L is Re-times the energy dissipation rate. In other words, in the case of large Reynolds numbers, the fluid system is unable to dissipate the whole energy injected at the scale L. The excess energy must be dissipated at small scales where the dissipation process is much more efficient. This is the physical reason for the energy cascade.
Fully developed turbulence involves a hierarchical process, in which many scales of motion are involved. To look at this phenomenon it is often useful to investigate the behavior of the Fourier coefficients of the fields. Assuming periodic boundary conditions the α-th component of velocity field can be Fourier decomposed as
$$u_\alpha (r,t) = \sum\limits_k {u_\alpha (k,t)\exp (ik \cdot r)},$$
where k = 2πn/L and n is a vector of integers. When used in the Navier-Stokes equation, it is a simple matter to show that the non-linear term becomes the convolution sum
$$\frac{{\partial u_\alpha (k,t)}} {{\partial t}} = M_{\alpha \beta \gamma } (k)\sum\limits_q {u_\gamma (k - q,t)u_\beta (q,t)},$$
where M_{αβγ}(k) = −ik_β(δ_{αγ} − k_α k_γ/k²) (for the moment we disregard the linear dissipative term).
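In practice this convolution is rarely evaluated term by term; pseudo-spectral codes compute the quadratic term as a product in real space and transform back. The 1D Python sketch below illustrates the idea for a scalar field u standing in for one velocity component; it is a schematic of the spectral bookkeeping, not a turbulence simulation, and the same logic applies component-wise in 3D.

```python
import numpy as np

# 1D periodic sketch: the x-space product u * du/dx equals, in k-space,
# the convolution over q of u(k-q) with (iq) u(q).
N, Lbox = 256, 2 * np.pi
x = np.arange(N) * Lbox / N
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)   # angular wavenumbers

u = np.sin(x) + 0.5 * np.cos(3 * x)             # toy field
uk = np.fft.fft(u)
dudx = np.fft.ifft(1j * k * uk).real            # derivative taken spectrally
nl_k = np.fft.fft(u * dudx)                     # Fourier coefficients of the
                                                # quadratic (convolution) term
```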
The MHD equations can be written in the same way, say by introducing the Fourier decomposition for the Elsässer variables
$$z_\alpha ^ \pm (r,t) = \sum\limits_k {z_\alpha ^ \pm (k,t)\exp (ik \cdot r)} $$
and using this expression in the MHD equations we obtain an equation which describes the time evolution of each Fourier mode. However, the divergence-less condition means that not all Fourier modes are independent, rather k · z±(k, t) = 0 means that we can project the Fourier coefficients on two directions which are mutually orthogonal and orthogonal to the direction of k, that is,
$$z^ \pm (k,t) = \sum\limits_{a = 1}^2 {z_a^ \pm (k,t)e^{(a)} (k)},$$
with the constraint that k · e(a)(k) = 0. In presence of a background magnetic field we can use the well defined direction B0, so that
$$\begin{array}{*{20}c} {e^{(1)} (k) = \frac{{ik \times B_0 }} {{\left| {k \times B_0 } \right|}};} & {e^{(2)} (k) = \frac{{ik}} {{\left| k \right|}} \times e^{(1)} (k)} \\ \end{array}.$$
Note that in the linear approximation, where the Elsässer variables represent the usual MHD modes, z_1^±(k, t) represents the amplitude of the Alfvén mode while z_2^±(k, t) represents the amplitude of the incompressible limit of the magnetosonic mode. From MHD Equations (15) we obtain the following set of equations:
$$\left[ {\frac{\partial } {{\partial t}} \mp i(k \cdot c_A )} \right]z_a^ \pm (k,t) = \left( {\frac{L} {{2\pi }}} \right)^3 \sum\limits_{p + q = k}^\delta {\sum\limits_{b,c = 1}^2 {A_{abc} ( - k,p,q)z_b^ \pm (p,t)z_c^ \mp (q,t)} }.$$
The coupling coefficients, which satisfy the symmetry condition Aabc (k, p, q) = −Abac(p, k, q), are defined as
$$A_{abc} ( - k,p,q) = \left[ {(ik)* \cdot e^{(c)} (q)} \right]\left[ {e^{(a)*} (k) \cdot e^{(b)} (p)} \right],$$
and the sum in Equation (19) is defined as
$$\sum\limits_{p + q = k}^\delta { \equiv \left( {\frac{{2\pi }} {L}} \right)^3 \sum\limits_p {\sum\limits_q {\delta _{k,p + q} } } },$$
((19b))
where δ_{k,p+q} is the Kronecker symbol. Quadratic non-linearities of the original equations correspond to a convolution term involving wave vectors k, p and q related by the triangular relation p = k − q. Fourier coefficients locally couple to generate an energy transfer from any pair of modes p and q to a mode k = p + q.
The pseudo-energies E±(t) are defined as
$$E^ \pm (t) = \frac{1} {2}\frac{1} {{L^3 }}\int_{L^3 } {\left| {z^ \pm (r,t)} \right|^2 d^3 r} = \frac{1} {2}\sum\limits_k {\sum\limits_{a = 1}^2 {\left| {z_a^ \pm (k,t)} \right|^2 } }$$
((19c))
and, after some algebra, it can be shown that the non-linear term of Equation (19) conserves separately E±(t). This means that both the total energy E(t) = E+ + E− and the cross-helicity Ec(t) = E+−E−, say the correlation between velocity and magnetic field, are conserved in absence of dissipation and external forcing terms.
In the idealized homogeneous and isotropic situation we can define the pseudo-energy tensor, which using the incompressibility condition can be written as
$$U_{ab}^ \pm (k,t) \equiv \left( {\frac{L} {{2\pi }}} \right)^3 \left\langle {z_a^ \pm (k,t)z_b^ \pm (k,t)} \right\rangle = \left( {\delta _{ab} - \frac{{k_a k_b }} {{k^2 }}} \right)q^ \pm (k),$$
((19d))
brackets being ensemble averages, where q±(k) is an arbitrary odd function of the wave vector k and represents the pseudo-energies spectral density. When integrated over all wave vectors under the assumption of isotropy
$$Tr\left[ {\int {d^3 k\, U_{ab}^ \pm (k,t)} } \right] = 2\int_0^\infty {E^ \pm (k,t)\,dk},$$
((19e))
where we introduce the spectral pseudo-energy E±(k, t) = 4πk2q±(k, t). This last quantity can be measured, and it is shown that it satisfies the equations
$$\frac{{\partial E^ \pm (k,t)}} {{\partial t}} = T^ \pm (k,t) - 2\nu k^2 E^ \pm (k,t) + F^ \pm (k,t).$$
We use ν = η in order not to worry about coupling between + and − modes in the dissipative range. Since the non-linear term conserves total pseudo-energies we have
$$\smallint _0^\infty dkT^ \pm (k,t) = 0,$$
so that, when integrated over all wave vectors, we obtain the energy balance equation for the total pseudo-energies
$$\frac{{dE^ \pm (t)}} {{dt}} = \int_0^\infty {dk F^ \pm (k,t) - 2\nu } \int_0^\infty {dk k^2 E^ \pm (k,t)}.$$
This last equation simply means that the time variations of pseudo-energies are due to the difference between the injected power and the dissipated power, so that in a stationary state
$$\int_0^\infty {dk F^ \pm (k,t) - 2\nu } \int_0^\infty {dk k^2 E^ \pm (k,t) = \varepsilon ^ \pm }.$$
Looking at Equation (20), we see that the role played by the non-linear term is that of a redistribution of energy among the various wave vectors. This is the physical meaning of the non-linear energy cascade of turbulence.
The inhomogeneous case
Equations (20) refer to standard homogeneous and incompressible MHD. Of course, the solar wind is inhomogeneous and compressible, and the energy transfer equations can be made as complicated as we want by modeling all possible physical effects like, for example, the wind expansion or the inhomogeneous large-scale magnetic field. Of course, simulation of all turbulent scales requires a computational effort which is beyond the actual possibilities. A way to overcome this limitation is to introduce some turbulence modeling of the various physical effects. For example, a set of equations for the cross-correlation functions of both Elsässer fluctuations has been developed independently by Marsch and Tu (1989), Zhou and Matthaeus (1990), Oughton and Matthaeus (1992), and Tu and Marsch (1990a), following Marsch and Mangeney (1987) (see review by Tu and Marsch, 1996), and is based on some rather strong assumptions: i) a two-scale separation, and ii) small-scale fluctuations are represented as a kind of stochastic process (Tu and Marsch, 1996). These equations look quite complicated, and just a comparison based on order-of-magnitude estimates can be made between them and solar wind observations (Tu and Marsch, 1996).
A different approach, introduced by Grappin et al. (1993), is based on the so-called "expanding-box model" (Grappin and Velli, 1996; Liewer et al., 2001; Hellinger et al., 2005). The model uses a transformation of variables to the moving solar wind frame that expands together with the size of the parcel of plasma as it propagates outward from the Sun. Although the model requires several simplifying assumptions to remain tractable, like for example lateral expansion only for the wave-packets and constant solar wind speed, as well as a second-order approximation for the coordinate transformation (Liewer et al., 2001), it provides a qualitatively good description of the solar wind expansion, thus connecting the disparate scales of the plasma in the various parts of the heliosphere.
Dynamical system approach to turbulence
In the limit of fully developed turbulence, when dissipation goes to zero, an infinite range of scales is excited, that is, energy lies over all available wave vectors. Dissipation takes place at a typical dissipation length scale which depends on the Reynolds number Re through ℓD ~ L Re^{-3/4} (for a Kolmogorov spectrum E(k) ~ k^{-5/3}). In 3D numerical simulations the minimum number of grid points necessary to obtain information on the fields at these scales is given by N ~ (L/ℓD)³ ~ Re^{9/4}. This rough estimate shows that a considerable amount of memory is required when we want to perform numerical simulations with high Re. At present, typical values of Reynolds numbers reached in 2D and 3D numerical simulations are of the order of 10⁴ and 10³, respectively. At these values the inertial range spans approximately one decade or a little more.
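The snippet below evaluates this resolution estimate for the Reynolds numbers quoted in the text.

```python
def grid_points_3d(Re):
    """Minimum 3D grid size: N ~ (L / l_D)**3 ~ Re**(9/4)."""
    return Re ** 2.25

for Re in (1e3, 1e4):
    print(f"Re = {Re:.0e}:  N ~ {grid_points_3d(Re):.1e}")
# ~5.6e6 points at Re = 1e3 and ~1e9 at Re = 1e4: memory grows steeply with Re.
```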
Given the situation described above, the question of the best description of dynamics which results from original equations, using only a small amount of degree of freedom, becomes a very important issu. This can be achieved by introducing turbulence models which are investigated using tools of dynamical system theory (Bohr et al., (1998). Dynamical systems, then, are solutions of minimal sets of ordinary differential equations that can mimic the gross features of energy cascade turbulence These studies are motivated by the famous Lorenz's model (Lorenz, (1963) which, containing only three degrees of freedom, simulates the complex chaotic behavior of turbulent atmospheric flows, becoming a paradigm for the study of chaotic systems.
The Lorenz's model has been used as a paradigm as far as the transition to turbulence is concerned. Actually, since the solar wind is in a state of fully developed turbulence, the topic of the transition to turbulence is not so close to the main goal of this review. However, since their importance in the theory of dynamical systems, we spend few sentences abut this central topic. Up to the Lorenz's chaotic model, studies on the birth of turbulence dealt with linear and, very rarely, with weak non-linear evolution of external disturbances. The first physical model of laminar-turbulent transition is due to Landau and it is reported in the fourth volume of the course on Theoretical Physics (Landau and Lifshitz, (1971). According to this model, as the Reynolds number is increased, the transition is due to a infinite series of Hopf bifurcations at fixed values of the Reynolds number. Each subsequent bifurcation adds a new incommensurate frequency to the flow whose dynamics become rapidly quasi-periodic. Due to the infinite number of degree of freedom involved, the quasi-periodic dynamics resembles that of a turbulent flow.
The Landau transition scenario is, however, untenable because incommensurate frequencies cannot exist without coupling between them. Ruelle and Takens (1971) proposed a new mathematical model, according to which after a few, usually three, Hopf bifurcations the flow becomes suddenly chaotic. In the phase space this state is characterized by a very intricate attracting subset, a strange attractor. The flow corresponding to this state is highly irregular and strongly dependent on initial conditions. This characteristic feature is now known as the butterfly effect and represents the true definition of deterministic chaos. As an example of the occurrence of a strange attractor, these authors pointed to the strange time behavior of the Lorenz model. The model, a paradigm for the occurrence of turbulence in a deterministic system, reads
$$\frac{dx}{dt} = P_r (y - x), \qquad \frac{dy}{dt} = Rx - y - xz, \qquad \frac{dz}{dt} = xy - bz,$$
where x(t), y(t), and z(t) represent the first three modes of a Fourier expansion of the fluid convective equations in the Boussinesq approximation, Pr is the Prandtl number, b is a geometrical parameter, and R is the ratio between the Rayleigh number and the critical Rayleigh number for convective motion. The time evolution of the variables x(t), y(t), and z(t) is reported in Figure 12. A reproduction of the Lorenz butterfly attractor, namely the projection of the variables on the plane (x, z), is shown in Figure 13. A few years later, Gollub and Swinney (1975) performed very sophisticated experiments, concluding that the transition to turbulence in a flow between co-rotating cylinders is described by the Ruelle and Takens (1971) model rather than by the Landau scenario.
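For the reader who wants to reproduce Figures 12 and 13, the Lorenz system is easily integrated numerically. The following minimal Python sketch (fixed-step RK4; the time step and initial condition are arbitrary choices, not taken from the original computation) uses the same parameters Pr = 10, b = 8/3, and R = 28:

```python
import numpy as np

# Minimal RK4 integration of the Lorenz model (Equation (22)) with the
# parameters used in Figures 12 and 13: Pr = 10, b = 8/3, R = 28.
Pr, b, R = 10.0, 8.0 / 3.0, 28.0

def lorenz(s):
    x, y, z = s
    return np.array([Pr * (y - x), R * x - y - x * z, x * y - b * z])

dt, nsteps = 1e-3, 50_000
traj = np.empty((nsteps, 3))
s = np.array([1.0, 1.0, 1.0])            # arbitrary initial condition
for i in range(nsteps):
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    traj[i] = s
# Plotting traj[:, 0] against traj[:, 2] reproduces the butterfly
# attractor of Figure 13.
```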
After this discovery, the strange attractor model gained a lot of popularity, thus stimulating a large number of further studies on the time evolution of non-linear dynamical systems. An enormous number of papers on chaos rapidly appeared in the literature, in nearly all fields of physics, and the transition to chaos became a new topic. Of course, further studies on chaos rapidly lost touch with turbulence studies, and turbulence, as reported by Feynman et al. (1977), still remains "... the last great unsolved problem of the classical physics". Furthermore, we like to cite theoretical efforts made by Chian and coworkers (Chian et al., 1998, 2003) related to the onset of Alfvénic turbulence. These authors numerically solved the derivative non-linear Schrödinger equation (Mjølhus, 1976; Ghosh and Papadopoulos, 1987), which governs the spatio-temporal dynamics of non-linear Alfvén waves, and found that Alfvénic intermittent turbulence is characterized by strange attractors. Note that the physics involved in the derivative non-linear Schrödinger equation, and in particular the spatio-temporal dynamics of non-linear Alfvén waves, cannot be described by the usual incompressible MHD equations; rather, dispersive effects are required. At variance with the usual MHD, this can be satisfied by requiring that the effect of ion inertia be taken into account. This results in a generalized Ohm's law which includes a (j × B)-term, representing the compressible Hall correction to MHD, i.e., the so-called compressible Hall-MHD model.
Time evolution of the variables x(t), y(t), and z(t) in the Lorenz model (see Equation (22)). This figure has been obtained using the parameters Pr = 10, b = 8/3, and R = 28.
The Lorenz butterfly attractor, namely the time behavior of the variable z(t) vs. x(t), as obtained from the Lorenz model (see Equation (22)). This figure has been obtained using the parameters Pr = 10, b = 8/3, and R = 28.
In this context turbulence can evolve via two distinct routes: Pomeau-Manneville intermittency (Pomeau and Manneville, 1980) and crisis-induced intermittency (Ott and Sommerer, 1994). Both types of chaotic transitions involve episodic switching between different temporal behaviors. In one case (Pomeau-Manneville) the behavior of the magnetic fluctuations evolves from nearly periodic to chaotic, while in the other case the behavior intermittently assumes weakly chaotic or strongly chaotic features.
Shell models for turbulence cascade
Since numerical simulations, in some cases, cannot be used, simple dynamical systems can be introduced to investigate, for example, statistical properties of turbulent flows which can be compared with observations. These models, which try to mimic the gross features of the time evolution of the spectral Navier-Stokes or MHD equations, are often called "shell models" or "discrete cascade models". Starting from the early papers by Siggia (1977), different shell models have been introduced in the literature for 3D fluid turbulence (Biferale, 2003). MHD shell models have been introduced to describe the MHD turbulent cascade (Plunian et al., 2012), starting from the paper by Gloaguen et al. (1985).
The most used shell model is usually quoted in the literature as the GOY model, introduced by Gledzer (1973) and by Ohkitani and Yamada (1989). Apart from the first MHD shell model (Gloaguen et al., 1985), further models, like those by Frick and Sokoloff (1998) and Giuliani and Carbone (1998), have been introduced and investigated in detail. In particular, the latter ones represent the counterpart of the hydrodynamic GOY model, that is, they coincide with the usual GOY model when the magnetic variables are set to zero.
In the following, we will refer to the MHD shell model as the FSGC model. The shell model can be built up through four different steps:
Introduce discrete wave vectors:
As a first step we divide the wave vector space into a discrete number of shells whose radii grow according to a power law k_n = k_0 λ^n, where λ > 1 is the inter-shell ratio, k_0 is the fundamental wave vector related to the largest available length scale L, and n = 1, 2, ..., N.
Assign to each shell discrete scalar variables:
Each shell is assigned two or more complex scalar variables u_n(t) and b_n(t), or Elsässer variables Z_n^±(t) = u_n(t) ± b_n(t). These variables describe the chaotic dynamics of modes in the shell of wave vectors between k_n and k_{n+1}. It is worth noting that the discrete variable, mimicking the average behavior of Fourier modes within each shell, represents characteristic fluctuations across eddies at the scale ℓ_n ~ k_n^{-1}. That is, the fields have the same scalings as field differences, for example Z_n^± ~ |Z^±(x + ℓ_n) − Z^±(x)| ~ ℓ_n^h in fully developed turbulence. In this way, the possibility to describe spatial behavior within the model is ruled out. We can only get, from a dynamical shell model, time series for shell variables at a given k_n, and we lose the fact that turbulence is a typical temporal and spatial complex phenomenon.
Introduce a dynamical model which describes non-linear evolution:
Looking at Equation (19), a model must have quadratic non-linearities among opposite variables Z_n^±(t) and Z_n^∓(t), and must couple different shells with free coupling coefficients.
Fix as much as possible the coupling coefficients:
This last step is not standard. Without it, a numerical investigation of the model would require scanning the properties of the system as all coefficients are varied. The coupling coefficients can instead be fixed by imposing the conservation laws of the original equations, namely the total pseudo-energies
$$E^ \pm (t) = \frac{1} {2}\sum\limits_n {\left| {Z_n^ \pm } \right|^2 },$$
that means the conservation of both the total energy and the cross-helicity:
$$\begin{array}{*{20}c} {E(t) = \frac{1} {2}\sum\limits_n {\left| {u_n } \right|^2 + \left| {b_n } \right|^2 ;} } & {H_c (t) = \sum\limits_n {2\Re e(u_n b_n^* )} } \\ \end{array},$$
where Re indicates the real part of the product u_n b_n^*. As we said before, shell models cannot describe the spatial geometry of non-linear interactions in turbulence, so that we lose the possibility of distinguishing between two-dimensional and three-dimensional turbulent behavior. The distinction is, however, of primary importance, for example as far as the dynamo effect is concerned in MHD. However, there is a third invariant which we can impose, namely
$$H(t) = \sum\limits_n (-1)^n \frac{\left| b_n \right|^2}{k_n^\alpha},$$
which can be dimensionally identified as the magnetic helicity when α = 1, so that the shell model so obtained is able to mimic a kind of 3D MHD turbulence (Giuliani and Carbone, 1998).
After some algebra, taking into account both the dissipative and forcing terms, the FSGC model can be written as
$$\frac{dZ_n^\pm}{dt} = i k_n \Phi_n^{\pm *} - \frac{\nu \pm \mu}{2} k_n^2 Z_n^+ - \frac{\nu \mp \mu}{2} k_n^2 Z_n^- + F_n^\pm,$$
$$\begin{array}{l} \Phi_n^\pm = \left( \dfrac{2-a-c}{2} \right) Z_{n+2}^\pm Z_{n+1}^\mp + \left( \dfrac{a+c}{2} \right) Z_{n+1}^\pm Z_{n+2}^\mp + \left( \dfrac{c-a}{2\lambda} \right) Z_{n-1}^\pm Z_{n+1}^\mp \\ \qquad - \left( \dfrac{a+c}{2\lambda} \right) Z_{n-1}^\mp Z_{n+1}^\pm - \left( \dfrac{c-a}{2\lambda^2} \right) Z_{n-2}^\mp Z_{n-1}^\pm - \left( \dfrac{2-a-c}{2\lambda^2} \right) Z_{n-1}^\mp Z_{n-2}^\pm , \end{array}$$
where λ = 2, a = 1/2, and c = 1/3. In the following, we will consider only the case where the dissipative coefficients are the same, i.e., ν = μ.
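A minimal numerical sketch of the model defined above is reported below, assuming λ = 2, a = 1/2, c = 1/3, equal dissipative coefficients ν = μ, a constant forcing on the largest shell, and zeroed boundary shells. The shell number, time step, and forcing amplitude are arbitrary illustrative choices, and the simple Euler stepping is used only for compactness; this is not the code used in the works cited above.

```python
import numpy as np

# Sketch of the FSGC shell model defined above, with lambda = 2, a = 1/2,
# c = 1/3, nu = mu, constant forcing on the largest shell, and zeroed
# boundary shells. Illustrative parameters throughout.
N, lam, a, c = 16, 2.0, 0.5, 1.0 / 3.0
k = lam ** np.arange(N)                     # k_n = k_0 lam^n with k_0 = 1
nu = 1e-7                                   # nu = mu

def Phi(Zs, Zo):
    """Non-linear term Phi_n^+ for (Zs, Zo) = (Z^+, Z^-); swap for Phi^-."""
    A = np.concatenate(([0, 0], Zs, [0, 0]))    # pad with boundary zeros
    B = np.concatenate(([0, 0], Zo, [0, 0]))
    n = np.arange(2, N + 2)                     # padded indices of shells
    return ((2 - a - c) / 2 * A[n + 2] * B[n + 1]
            + (a + c) / 2 * A[n + 1] * B[n + 2]
            + (c - a) / (2 * lam) * A[n - 1] * B[n + 1]
            - (a + c) / (2 * lam) * B[n - 1] * A[n + 1]
            - (c - a) / (2 * lam**2) * B[n - 2] * A[n - 1]
            - (2 - a - c) / (2 * lam**2) * B[n - 1] * A[n - 2])

def rhs(Zp, Zm):
    F = np.zeros(N, complex)
    F[0] = 1e-2 * (1 + 1j)                  # large-scale forcing (arbitrary)
    dZp = 1j * k * np.conj(Phi(Zp, Zm)) - nu * k**2 * Zp + F
    dZm = 1j * k * np.conj(Phi(Zm, Zp)) - nu * k**2 * Zm + F
    return dZp, dZm

rng = np.random.default_rng(1)
Zp = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Zm = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
dt = 1e-4
for _ in range(100_000):                    # plain Euler, illustration only
    dZp, dZm = rhs(Zp, Zm)
    Zp, Zm = Zp + dt * dZp, Zm + dt * dZm
```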
The phenomenology of fully developed turbulence: Fluid-like case
Here we present the phenomenology of fully developed turbulence as far as the scaling properties are concerned. In this way we are able to recover a universal form for the spectral pseudo-energy in the stationary case. In real space, a common tool to investigate statistical properties of turbulence is represented by field increments Δz_ℓ^±(r) = [z^±(r + ℓ) − z^±(r)] · e, where e is the longitudinal direction. These stochastic quantities represent fluctuations across eddies at the scale ℓ. The scaling invariance of the MHD equations (cf. Section 2.3), from a phenomenological point of view, implies that we expect solutions where Δz_ℓ^± ~ ℓ^h. All the statistical properties of the field depend only on the scale ℓ, on the mean pseudo-energy dissipation rates ε^±, and on the viscosity ν. Also, ε^± is supposed to be the common value of the injection, transfer, and dissipation rates. Moreover, the dependence on the viscosity only arises at small scales, near the bottom of the inertial range. Under these assumptions the typical pseudo-energy dissipation rate per unit mass scales as ε^± ~ (Δz_ℓ^±)²/t_ℓ^±. The time t_ℓ^± associated with the scale ℓ is the typical time needed for the energy to be transferred to a smaller scale, say the eddy turnover time t_ℓ^± ~ ℓ/Δz_ℓ^∓, so that
$\varepsilon^\pm \sim (\Delta z_\ell^\pm)^2\, \Delta z_\ell^\mp / \ell .$
When we conjecture that both Δz^± fluctuations have the same scaling laws, namely Δz^± ~ ℓ^h, we recover the Kolmogorov scaling for the field increments
$\Delta z_\ell ^ \pm \sim (\varepsilon ^ \pm )^{1/3} \ell ^{1/3} .$
Usually, we refer to this scaling as the K41 model (Kolmogorov, 1941, 1991; Frisch, 1995). Note that, since from dimensional considerations the scaling of the energy transfer rate should be ε^± ~ ℓ^{1−3h}, h = 1/3 is the choice that guarantees the absence of scaling for ε^±.
In real space, turbulence properties can be described using either the probability distribution functions (PDFs hereafter) of increments or the longitudinal structure functions, which represent nothing but the higher-order moments of the field. Disregarding the magnetic field, in purely fully developed fluid turbulence, these are defined as S_ℓ^(p) = 〈Δu_ℓ^p〉. These quantities, in the inertial range, behave as a power law S_ℓ^(p) ~ ℓ^{ξ_p}, so that it is interesting to compute the set of scaling exponents ξ_p. Using, from a phenomenological point of view, the scaling for field increments (see Equation (26)), it is straightforward to compute the scaling laws S_ℓ^(p) ~ ℓ^{p/3}. Then ξ_p = p/3 results to be a linear function of the order p.
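In practice, structure functions and the exponents ξ_p are straightforward to estimate from a one-dimensional record of the field. The sketch below uses a synthetic monofractal signal (a Brownian path, for which ξ_p = p/2) as a stand-in; a real turbulence record would simply replace the array u:

```python
import numpy as np

# Estimate S_p(l) = <|du_l|^p> and the exponents xi_p from a 1D signal.
# A Brownian path (scaling exponent h = 1/2, so xi_p = p/2) stands in for
# real data; for K41 turbulence one would expect xi_p = p/3.
def structure_functions(u, lags, orders):
    S = np.empty((len(orders), len(lags)))
    for j, lag in enumerate(lags):
        du = u[lag:] - u[:-lag]              # increments at scale `lag`
        for i, p in enumerate(orders):
            S[i, j] = np.mean(np.abs(du) ** p)
    return S

rng = np.random.default_rng(0)
u = np.cumsum(rng.standard_normal(2**16))    # synthetic monofractal signal
lags = np.unique(np.logspace(0, 3, 20).astype(int))
orders = [1, 2, 3, 4]
S = structure_functions(u, lags, orders)
xi = [np.polyfit(np.log(lags), np.log(S[i]), 1)[0] for i in range(len(orders))]
print(xi)                                    # close to [0.5, 1.0, 1.5, 2.0]
```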
When we assume the scaling law Δz_ℓ^± ~ ℓ^h, we can compute the high-order moments of the structure functions for increments of the Elsässer variables, namely 〈(Δz_ℓ^±)^p〉 ~ ℓ^{ξ_p}, thus obtaining a linear scaling ξ_p = p/3, similar to usual fluid flows. For Gaussian-distributed fields, a particular role is played by the second-order moment, because all moments can be computed from S_ℓ^(2). It is straightforward to translate the dimensional analysis results to Fourier spectra. The spectral property of the field can be recovered from S_ℓ^(2), namely, in the homogeneous and isotropic case,
$$S_\ell ^{(2)} = 4\int_0^\infty {E(k)\left( {1 - \frac{{\sin k\ell }} {{k\ell }}} \right)dk,}$$
where k ~ 1/ℓ is the wave vector, so that in the inertial range where Equation (42) is verified
$E(k) \sim \varepsilon ^{2/3} k^{ - 5/3} .$
The Kolmogorov spectrum (see Equation (27)) is largely observed in all experimental investigations of turbulence, and is considered as the main result of the K41 phenomenology of turbulence (Frisch, 1995). However, spectral analysis does not provide a complete description of the statistical properties of the field, unless this has Gaussian properties. The same considerations can be made for the spectral pseudo-energies E^±(k), which are related to the second-order structure functions 〈[Δz_ℓ^±]²〉.
The phenomenology of fully developed turbulence: Magnetically-dominated case
The phenomenology of the magnetically-dominated case has been investigated by Iroshnikov (1963) and Kraichnan (1965), then developed by Dobrowolny et al. (1980b) to tentatively explain the occurrence of the observed Alfvénic turbulence, and finally by Carbone (1993) and Biskamp (1993) to get scaling laws for structure functions. It is based on the Alfvén effect, that is, the decorrelation of interacting eddies, which can be explained phenomenologically as follows. Since non-linear interactions happen only between oppositely propagating fluctuations, they are slowed down (with respect to the fluid-like case) by the sweeping of the fluctuations across each other. This means that ε^± ~ (Δz_ℓ^±)²/T_ℓ^±, but the characteristic time T_ℓ^± required to efficiently transfer energy from an eddy to another eddy at smaller scales cannot be the eddy-turnover time; rather, it is increased by a factor t_ℓ^±/t_A (t_A ~ ℓ/c_A < t_ℓ^± being the Alfvén time), so that T_ℓ^± ~ (t_ℓ^±)²/t_A. Then, immediately
$\varepsilon ^ \pm \sim \frac{{[\Delta z_\ell ^ \pm ]^2 [\Delta z_\ell ^ \mp ]^2 }} {{\ell c_A }} .$
This means that both ± modes are transferred at the same rate to small scales, namely ε^+ ~ ε^− ~ ε, and this is the conclusion drawn by Dobrowolny et al. (1980b). In reality, this is not fully correct: the Alfvén effect leads to the fact that energy transfer rates have the same scaling laws for ± modes but says nothing about the amplitudes of ε^+ and ε^− (Carbone, 1993). Using the usual scaling law for fluctuations, it can be shown that under a scale transformation the transfer rate behaves as ε → λ^{1−4h}ε′. Then, when the energy transfer rate is constant, we find a scaling law different from that of Kolmogorov and, in particular,
$\Delta z_\ell ^ \pm \sim (\varepsilon c_A )^{1/4} \ell ^{1/4} .$
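For clarity, the dimensional steps can be collected in a single chain (a recap of the phenomenological argument above, under the same assumption that both Δz^± fluctuations scale as ℓ^h):

$$\varepsilon^\pm \sim \frac{[\Delta z_\ell^\pm]^2 [\Delta z_\ell^\mp]^2}{\ell\, c_A} \sim \frac{\ell^{4h}}{\ell\, c_A} = \frac{\ell^{4h-1}}{c_A} \quad \Longrightarrow \quad 4h - 1 = 0, \quad h = \frac{1}{4},$$

so that requiring a scale-independent transfer rate fixes h = 1/4 and hence the ℓ^{1/4} scaling of the increments.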
Using this phenomenology the high-order moments of fluctuations are given by S_ℓ^(p) ~ ℓ^{p/4}. Even in this case, ξ_p = p/4 results to be a linear function of the order p. The pseudo-energy spectrum can be easily found to be
$E^ \pm (k) \sim (\varepsilon c_A )^{1/2} k^{ - 3/2} .$
This is the Iroshnikov-Kraichnan spectrum. However, in a situation in which there is a balance between the linear Alfvén time scale, or wave period, and the non-linear time scale needed to transfer energy to smaller scales, the energy cascade is said to be critically balanced (Goldreich and Sridhar, 1995). In these conditions, it can be shown that the power spectrum P(k) would scale as f^{−5/3} when the angle θB between the mean field direction and the flow direction is 90°, while the spectrum would follow f^{−2} in case θB = 0° and would also have a smaller energy content than in the other case.
Some exact relationships
So far, we have been discussing the inertial range of turbulence. What this means from a heuristic point of view is somewhat clear, but when we try to identify the inertial range from the spectral properties of turbulence, in general the best we can do is to identify the inertial range with the intermediate range of scales where a Kolmogorov spectrum is observed. The often-used identity inertial range ≃ intermediate range is somewhat arbitrary. In this regard, a very important result on turbulence, due to Kolmogorov (1941, 1991), is the so-called "4/5-law" which, being obtained from the Navier-Stokes equation, is "... one of the most important results in fully developed turbulence because it is both exact and nontrivial" (cf. Frisch, 1995). As a matter of fact, Kolmogorov analytically derived the following exact relation for the third-order structure function of velocity fluctuations:
$$\left\langle {(\Delta v_\parallel (r,\ell ))^3 } \right\rangle = - \frac{4} {5}\varepsilon \ell ,$$
where r is the sampling direction, ℓ is the corresponding scale, and ε is the mean energy dissipation per unit mass, assumed to be finite and nonvanishing.
This important relation can be obtained in a more general framework from the MHD equations. A Yaglom relation for MHD can be obtained using the analogy of the MHD equations with a transport equation, so that we can obtain a relation similar to Yaglom's equation for the transport of a passive quantity (Monin and Yaglom, 1975). Using this analogy, the Yaglom relation was extended some time ago to MHD turbulence by Chandrasekhar (1967), and more recently it has been revisited by Politano et al. (1998) and Politano and Pouquet (1998) in the framework of solar wind turbulence. In the following section we report an alternative and more general derivation of the Yaglom law using structure functions (Sorriso-Valvo et al., 2007; Carbone et al., 2009c).
Yaglom's law for MHD turbulence
To obtain a general law we start from the incompressible MHD equations. If we write twice the MHD equations for two different and independent points x_i and x_i' = x_i + ℓ_i, by subtraction we obtain an equation for the vector differences Δz_i^± = (z_i^±)' − z_i^±. Using the hypothesis of independence of the points x_i' and x_i with respect to derivatives, namely ∂_i(z_j^±)' = ∂_i' z_j^± = 0 (where ∂_i' represents the derivative with respect to x_i'), we get
$$\partial_t \Delta z_i^\pm + \Delta z_\alpha^\mp \partial'_\alpha \Delta z_i^\pm + z_\alpha^\mp \left( \partial'_\alpha + \partial_\alpha \right) \Delta z_i^\pm = - \left( \partial'_i + \partial_i \right) \Delta P + \left( \partial'^2_\alpha + \partial^2_\alpha \right) \left[ \nu^\pm \Delta z_i^+ + \nu^\mp \Delta z_i^- \right]$$
(ΔP = P_tot' − P_tot). We look for an equation for the second-order correlation tensor 〈Δz_i^± Δz_j^±〉 related to the pseudo-energies. Actually, the most general approach would be to look for a mixed tensor, namely 〈Δz_i^± Δz_j^∓〉, taking into account not only both pseudo-energies but also the time evolution of the mixed correlations 〈z_i^+ z_j^−〉 and 〈z_i^− z_j^+〉. However, using the DIA closure by Kraichnan, it is possible to show that these elements are in general poorly correlated (Veltri, 1980). Since we are interested in the energy cascade, we limit ourselves to the most interesting equation that describes correlations of Alfvénic fluctuations of the same sign. To obtain the equations for the pseudo-energies we multiply Equations (31) by Δz_j^±; then by averaging we get
$$\partial_t \left\langle \Delta z_i^\pm \Delta z_j^\pm \right\rangle + \frac{\partial}{\partial \ell_\alpha} \left\langle \Delta z_\alpha^\mp \left( \Delta z_i^\pm \Delta z_j^\pm \right) \right\rangle = - \Lambda_{ij} - \Pi_{ij} + 2\nu \frac{\partial^2}{\partial \ell_\alpha^2} \left\langle \Delta z_i^\pm \Delta z_j^\pm \right\rangle - \frac{4}{3} \frac{\partial}{\partial \ell_\alpha} \left( \varepsilon_{ij}^\pm \ell_\alpha \right),$$
where we used the hypothesis of local homogeneity and incompressibility. In Equation (32) we defined the average dissipation tensor
$$\varepsilon _{ij}^ \pm = \nu \left\langle {\left( {\partial _\alpha Z_i^ \pm } \right)\left( {\partial _\alpha Z_j^ \pm } \right)} \right\rangle .$$
The first and second terms on the r.h.s. of Equation (32) represent, respectively, a tensor related to large-scale inhomogeneities
$$\Lambda _{ij} = \left\langle {z_\alpha ^ \mp \left( {\partial '_\alpha + \partial _\alpha } \right)\left( {\Delta z_i^ \pm \Delta z_j^ \pm } \right)} \right\rangle$$
and the tensor related to the pressure term
$$\Pi _{ij} = \left\langle {\Delta z_j^ \pm \left( {\partial '_i + \partial _i } \right)\Delta P + \Delta z_i^ \pm \left( {\partial '_j + \partial _j } \right)\Delta P} \right\rangle .$$
In order not to worry about couplings between Elsässer variables in the dissipative terms, we make the usual simplifying assumption that the kinematic viscosity equals the magnetic diffusivity, that is, ν^± = ν^∓ = ν. Equation (32) is an exact equation for anisotropic MHD that links the second-order complete tensor to the third-order mixed tensor via the average dissipation rate tensor. Using the hypothesis of global homogeneity the term Λ_ij = 0, while assuming local isotropy Π_ij = 0. The equation for the trace of the tensor can be written as
$$\partial_t \left\langle \left| \Delta z_i^\pm \right|^2 \right\rangle + \frac{\partial}{\partial \ell_\alpha} \left\langle \Delta z_\alpha^\mp \left| \Delta z_i^\pm \right|^2 \right\rangle = 2\nu \frac{\partial^2}{\partial \ell_\alpha^2} \left\langle \left| \Delta z_i^\pm \right|^2 \right\rangle - \frac{4}{3} \frac{\partial}{\partial \ell_\alpha} \left( \varepsilon_{ii}^\pm \ell_\alpha \right),$$
where the various quantities depend on the vector ℓ_α. Moreover, by considering only the trace we ruled out the possibility of investigating anisotropies related to different orientations of vectors within the second-order moment. It is worthwhile to remark here that only the diagonal elements of the dissipation rate tensor, namely ε_ii^±, are positive definite while, in general, the off-diagonal elements ε_ij^± are not. For a stationary state, Equation (36) can be written as the divergence-free condition of a quantity involving the third-order correlations and the dissipation rates,
$$\frac{\partial}{\partial \ell_\alpha} \left[ \left\langle \Delta z_\alpha^\mp \left| \Delta z_i^\pm \right|^2 \right\rangle - 2\nu \frac{\partial}{\partial \ell_\alpha} \left\langle \left| \Delta z_i^\pm \right|^2 \right\rangle + \frac{4}{3} \left( \varepsilon_{ii}^\pm \ell_\alpha \right) \right] = 0,$$
from which we can obtain the Yaglom relation by projecting Equation (37) along the longitudinal direction ℓ_α = ℓe_r. This operation involves the assumption that the flow is locally isotropic, that is, that fields depend locally only on the separation ℓ, so that
$$\left( \frac{2}{\ell} + \frac{\partial}{\partial \ell} \right) \left[ \left\langle \Delta z_\ell^\mp \left| \Delta z_i^\pm \right|^2 \right\rangle - 2\nu \frac{\partial}{\partial \ell} \left\langle \left| \Delta z_i^\pm \right|^2 \right\rangle + \frac{4}{3} \varepsilon_{ii}^\pm \ell \right] = 0.$$
The only solution that is compatible with the absence of singularity in the limit ℓ → 0 is
$$\left\langle \Delta z_\ell^\mp \left| \Delta z_i^\pm \right|^2 \right\rangle = 2\nu \frac{\partial}{\partial \ell} \left\langle \left| \Delta z_i^\pm \right|^2 \right\rangle - \frac{4}{3} \varepsilon_{ii}^\pm \ell,$$
which reduces to the Yaglom law for MHD turbulence, as obtained by Politano and Pouquet (1998), in the inertial range when ν → 0:
$$Y_\ell^\pm \equiv \left\langle \Delta z_\ell^\mp \left| \Delta z_i^\pm \right|^2 \right\rangle = - \frac{4}{3} \varepsilon_{ii}^\pm \ell .$$
Finally, in the fluid-like case where z_i^+ = z_i^− = u_i, we obtain the usual Yaglom law for fluid flows,
$$\left\langle {\Delta v_\ell \left| {\Delta v_\ell } \right|^2 } \right\rangle = - \frac{4} {3}(\varepsilon \ell ),$$
which in the isotropic case, where 〈Δu_ℓ³〉 = 3〈Δu_ℓ Δu_y²〉 = 3〈Δu_ℓ Δu_z²〉 (Monin and Yaglom, 1975), immediately reduces to the Kolmogorov law
$$\left\langle {\Delta v_\ell ^3 } \right\rangle = - \frac{4} {5}\varepsilon \ell$$
(the separation ℓ has been taken along the streamwise x-direction).
The relations obtained above can be used, or better, in a certain sense might be used, as a formal definition of the inertial range. Since they are exact relationships derived from the Navier-Stokes and MHD equations under the usual hypotheses, they represent a kind of "zeroth-order" condition for experimental and theoretical analyses of the inertial range properties of turbulence. It is worthwhile to remark on the two main properties of the Yaglom laws. The first is the fact that, as clearly appears from the Kolmogorov relation (Kolmogorov, 1941), the third-order moment of the velocity fluctuations is different from zero. This means that some non-Gaussian features must be at work, or, which is the same, some hidden phase correlations. Turbulence is something more complicated than random fluctuations with a certain slope for the spectral density. The second feature is the minus sign which appears in the various relations. This is essential when the sign of the energy cascade must be inferred from the Yaglom relations, the negative asymmetry being a signature of a direct cascade towards smaller scales. Note that Equation (40) has been obtained in the limit of zero viscosity, assuming that the pseudo-energy dissipation rates ε_ii^± remain finite in this limit. In usual fluid flows the analogous hypothesis, namely that ε remains finite in the limit ν → 0, is an experimental evidence, confirmed by experiments in different conditions (Frisch, 1995). In MHD turbulent flows this remains a conjecture, confirmed only by high-resolution numerical simulations (Mininni and Pouquet, 2009).
From Equation (37), by defining Δz_i^± = Δu_i ± Δb_i, we immediately obtain the two equations
$$\frac{\partial } {{\partial \ell _\alpha }}\left[ {\left\langle {\Delta v_\alpha \Delta E} \right\rangle - 2\left\langle {\Delta b_\alpha \Delta C} \right\rangle - 2\nu \frac{\partial } {{\partial \ell _\alpha }}\left\langle {\Delta E} \right\rangle - \frac{4} {3}(\varepsilon _E \ell _\alpha )} \right] = 0$$
$$\frac{\partial } {{\partial \ell _\alpha }}\left[ { - \left\langle {\Delta b_\alpha \Delta E} \right\rangle + 2\left\langle {\Delta v_\alpha \Delta C} \right\rangle - 4\nu \frac{\partial } {{\partial \ell _\alpha }}\left\langle {\Delta C} \right\rangle - \frac{4} {3}(\varepsilon _C \ell _\alpha )} \right] = 0,$$
where we defined the energy fluctuations ΔE = |Δu_i|² + |Δb_i|² and the correlation fluctuations ΔC = Δu_iΔb_i. In the same way, the quantities ε_E = (ε_ii^+ + ε_ii^−)/2 and ε_C = (ε_ii^+ − ε_ii^−)/2 represent the energy and correlation dissipation rates, respectively. By projecting once more on the longitudinal direction, and assuming vanishing viscosity, we obtain the Yaglom law written in terms of velocity and magnetic fluctuations:
$$\left\langle {\Delta v_\ell \Delta E} \right\rangle - 2\left\langle {\Delta b_\ell \Delta C} \right\rangle = - \frac{4} {3}\varepsilon _E \ell$$
$$- \left\langle {\Delta b_\ell \Delta E} \right\rangle + 2\left\langle {\Delta v_\ell \Delta C} \right\rangle = - \frac{4} {3}\varepsilon _C \ell .$$
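Given simultaneous velocity and magnetic field records (the latter expressed in Alfvén units), the mixed third-order moments entering these laws are simple to estimate. The sketch below uses synthetic three-component arrays v and b standing in for measurements, with component 0 taken as the longitudinal direction; the lag interval used for the linear fit is an arbitrary illustrative choice:

```python
import numpy as np

# Sketch: estimate Y_l^+ = <dz_l^- |dz^+|^2> from records of v and b
# (b in Alfven units); synthetic random-walk data stand in for measurements.
rng = np.random.default_rng(2)
v = np.cumsum(rng.standard_normal((3, 2**16)), axis=1)   # 3 components
b = np.cumsum(rng.standard_normal((3, 2**16)), axis=1)
zp, zm = v + b, v - b                    # Elsasser fields z^+, z^-

def yaglom(z_same, z_other, lags):
    """Y_l = <dz_other_long |dz_same|^2>; component 0 is longitudinal."""
    Y = []
    for lag in lags:
        dzs = z_same[:, lag:] - z_same[:, :-lag]
        dzo = z_other[0, lag:] - z_other[0, :-lag]
        Y.append(np.mean(dzo * np.sum(dzs**2, axis=0)))
    return np.array(Y)

lags = np.arange(1, 200)
Yp = yaglom(zp, zm, lags)                # Y_l^+
# In an inertial range Y_l^+ = -(4/3) eps^+ l, so a linear fit of Y_l
# versus lag yields the pseudo-energy transfer rate (per unit lag):
eps_plus = -3.0 / 4.0 * np.polyfit(lags, Yp, 1)[0]
print(eps_plus)
```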
Density-mediated Elsässer variables and Yaglom's law
Relation (40), which is of general validity within MHD turbulence, requires local characteristics of the turbulent fluid flow which are not always satisfied in the solar wind, namely large-scale homogeneity, isotropy, and incompressibility. Density fluctuations in the solar wind have a low amplitude, so that a nearly incompressible MHD framework is usually adopted (Montgomery et al., 1987; Matthaeus and Brown, 1988; Zank and Matthaeus, 1993; Matthaeus et al., 1991; Bavassano and Bruno, 1995). However, compressible fluctuations are observed, typically convected structures characterized by anticorrelation between kinetic pressure and magnetic pressure (Tu and Marsch, 1994). Properties and interaction of the basic MHD modes in the compressive case have also been considered (Goldreich and Sridhar, 1995; Cho and Lazarian, 2002).
A first attempt to include density fluctuations in the framework of fluid turbulence was due to Lighthill (1955). He pointed out that, in a compressible energy cascade, the mean energy transfer rate per unit volume ε_V ~ ρu³/ℓ should be constant in a statistical sense (u being the characteristic velocity fluctuation at the scale ℓ), thus obtaining the scaling relation u ~ (ℓ/ρ)^{1/3}. Fluctuations of the density-weighted velocity field u ≡ ρ^{1/3}v should thus follow the usual Kolmogorov scaling u³ ~ ℓ. The same phenomenological arguments have been introduced in MHD turbulence by Carbone et al. (2009a) by considering the pseudo-energy dissipation rates per unit volume ε_V^± = ρε_ii^± and introducing density-weighted Elsässer fields, defined as w^± ≡ ρ^{1/3}z^±. A relation equivalent to the Yaglom-type relation (40),
$$W_\ell ^ \pm \equiv \left\langle \rho \right\rangle ^{ - 1} \left\langle {\Delta w_\ell ^ \mp \left| {\Delta w_i^ \pm } \right|^2 } \right\rangle = - C\varepsilon _{ii}^ \pm \ell$$
(C is some constant assumed to be of the order of unity) should then hold for the density-weighted increments Δw^±. Relation W_ℓ^± reduces to Y_ℓ^± in the case of constant density, allowing for comparison between the Yaglom law for incompressible MHD flows and its compressible counterpart. Despite its simple phenomenological derivation, the introduction of density fluctuations in the Yaglom-type scaling (47) should describe the turbulent cascade for compressible fluid (or magnetofluid) turbulence. Even if the modified Yaglom law (47) is not an exact relation like (40), being obtained from phenomenological considerations, the corresponding law for the velocity field in a compressible fluid flow has been observed in numerical simulations, with the value of the constant C turning out negative and of the order of unity (Padoan et al., 2007; Kowal and Lazarian, 2007).
Yaglom's law in the shell model for MHD turbulence
As far as the shell model is concerned, the existence of a cascade towards small scales is expressed by an exact relation which is equivalent to Equation (41). Using Equations (24), the scale-by-scale pseudo-energy budget is given by
$$\frac{d}{dt} \sum\limits_n \left| Z_n^\pm \right|^2 = \sum\limits_n k_n \operatorname{Im} \left[ T_n^\pm \right] - \sum\limits_n 2\nu k_n^2 \left| Z_n^\pm \right|^2 + \sum\limits_n 2 \Re e \left[ Z_n^\pm F_n^{\pm *} \right].$$
The second and third terms on the right-hand side represent, respectively, the rate of pseudo-energy dissipation and the rate of pseudo-energy injection. The first term represents the flux of pseudo-energy along the wave vectors, responsible for the redistribution of pseudo-energies among the wave vectors, and is given by
$$T_n^\pm = (a+c)\, Z_n^\pm Z_{n+1}^\pm Z_{n+2}^\mp + \left( \frac{2-a-c}{\lambda} \right) Z_{n-1}^\pm Z_{n+1}^\pm Z_n^\mp + (2-a-c)\, Z_n^\pm Z_{n+2}^\pm Z_{n+1}^\mp + \left( \frac{c-a}{\lambda} \right) Z_n^\pm Z_{n+1}^\pm Z_{n-1}^\mp .$$
Using the same assumptions as before, namely: i) the forcing terms act only on the largest scales, ii) the system can reach a statistically stationary state, and iii) in the limit of fully developed turbulence, ν → 0, the mean pseudo-energy dissipation rates tend to finite positive limits ε^±, it can be found that
$$\left\langle {T_n^ \pm } \right\rangle = - \varepsilon ^ \pm k_n^{ - 1} .$$
This is an exact relation which is valid in the inertial range of turbulence. Even in this case it can be used as an operative definition of the inertial range in the shell model, that is, the inertial range of the energy cascade in the shell model is defined as the range of scales k_n where the law of Equation (49) is verified.
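Continuing the shell-model sketch given earlier, this operative definition can be checked numerically by time-averaging T_n^± in the statistically steady state. The snippet below assumes the expression for T_n^± quoted above and reuses the variables Zp, Zm and the parameters N, k, a, c, lam, dt, and rhs from the earlier code block:

```python
# Continuation of the earlier shell-model sketch: accumulate <T_n^+> in the
# steady state and look for the shells where Im<T_n^+> ~ -eps^+ / k_n, i.e.,
# where k_n * Im<T_n^+> is flat: that range of shells is the inertial range.
def T(Zs, Zo):
    A = np.concatenate(([0, 0], Zs, [0, 0]))
    B = np.concatenate(([0, 0], Zo, [0, 0]))
    n = np.arange(2, N + 2)
    return ((a + c) * A[n] * A[n + 1] * B[n + 2]
            + (2 - a - c) / lam * A[n - 1] * A[n + 1] * B[n]
            + (2 - a - c) * A[n] * A[n + 2] * B[n + 1]
            + (c - a) / lam * A[n] * A[n + 1] * B[n - 1])

T_avg = np.zeros(N, complex)
nsamp = 100_000
for _ in range(nsamp):
    dZp, dZm = rhs(Zp, Zm)
    Zp, Zm = Zp + dt * dZp, Zm + dt * dZm
    T_avg += T(Zp, Zm) / nsamp
print(k * np.imag(T_avg))     # ~ constant over the inertial-range shells
```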
Early Observations of MHD Turbulence in the Ecliptic
Here we briefly present the history, since the first Mariner missions during the 1960s, of the main steps towards the completion of an observational picture of turbulence in interplanetary space. This retrospective look at the advances made in this field shows that space flights gave us access to a very large laboratory in space. As a matter of fact, in a wind tunnel we deal with characteristic dimensions of the order of L ≤ 10 m and probes of the size of about d ≃ 1 cm. In space, L ≃ 108 m, while "probes" (i.e., spacecraft) are about d ≃ 5 m. Thus, space provides a much larger laboratory. Most measurements are single-point measurements, the ESA Cluster project providing multi-point measurements only recently.
Turbulence in the ecliptic
When dealing with laboratory turbulence it is important to know all the aspects of the experimental device where turbulent processes take place in order to estimate possible effects driven or influenced by the environment. In the solar wind the situation is, in some respects, similar, although the plasma does not experience any confinement due to the "experimental device", which would be represented by free interplanetary space. However, it is a matter of fact that the turbulent state of the wind fluctuations and their subsequent radial evolution during the wind expansion greatly differ from fast to slow wind, and it is now well accepted that the macrostructure convected by the wind itself plays some role (see reviews by Tu and Marsch, 1995a; Goldstein et al., 1995b).
Fast solar wind originates from the polar regions of the Sun, within the open magnetic field line regions identified by coronal holes. Beautiful observations by the SOHO spacecraft (see the animation of Figure 14) have localized the birthplace of the solar wind within the intergranular lane, generally where three or more granules get together. Clear outflow velocities of up to 10 km s−1 have been recorded by the SOHO/SUMER instrument (Hassler et al., 1999).
Movie: an animation built on SOHO/EIT and SOHO/SUMER observations of the solar-wind source regions and magnetic structure of the chromospheric network. Outflow velocities at the network cell boundaries and lane junctions below the polar coronal hole, reaching up to 10 km s−1, are represented by the blue colored areas (original figures from Hassler et al., 1999).
Slow wind, on the contrary, originates from the equatorial zone of the Sun. The slow wind plasma leaks from coronal features called "helmets", which can easily be seen protruding into the Sun's atmosphere during a solar eclipse (see Figure 15). Moreover, plasma emissions due to violent and abrupt phenomena also contribute to the solar wind in these regions of the Sun. An alternative view is that both high- and low-speed winds come from coronal holes (defined as open field regions) and that the wind speed at 1 AU is determined by the rate of flux-tube expansion near the Sun, as first suggested by Levine et al. (1977) (see also Wang and Sheeley Jr, 1990; Bravo and Stewart, 1997; Arge and Pizzo, 2000; Poduval and Zhao, 2004; Whang et al., 2005), and/or by the location and strength of the coronal heating (Leer and Holzer, 1980; Hammer, 1982; Hollweg, 1986; Withbroe, 1988; Wang, 1993, 1994; Sandbaek et al., 1994; Hansteen and Leer, 1995; Cranmer et al., 2007).
Helmet streamer during a solar eclipse. Slow wind leaks into the interplanetary space along the flanks of this coronal structure. Image reproduced from MSFC.
However, this situation greatly changes during different phases of the solar activity cycle. Polar coronal holes, which during the maximum of activity are limited to small and not well defined regions around the poles, considerably widen during solar minimum, reaching the equatorial regions (Forsyth et al., 1997; Forsyth and Breen, 2002; Balogh et al., 1999). This new configuration produces an alternation of fast and slow wind streams in the ecliptic plane, the plane where most spacecraft operate and record data. During the expansion, a dynamical interaction between fast and slow wind develops, generating the so-called "stream interface", a thin region ahead of the fast stream characterized by strong compressive phenomena.
Figure 16 shows a typical situation in the ecliptic where fast streams and slow wind were observed by the Helios 2 spacecraft during its primary mission to the Sun. At that time, the spacecraft moved from 1 AU (around day 17) to its closest approach to the Sun at 0.29 AU (around day 108). During this radial excursion, Helios 2 had a chance to observe the same co-rotating stream, that is, plasma coming from the same solar source, at different heliocentric distances. This fortuitous circumstance gave us the unique opportunity to study the radial evolution of turbulence under the reasonable hypothesis of time-stationarity of the source regions. Obviously, similar hypotheses break down during phases of higher solar activity since, as shown in Figure 17, the nice and regular alternation of fast co-rotating streams and slow wind is replaced by a much more irregular and spiky profile, also characterized by a lower average speed.
Figure 18 focuses on a region centered on day 75, recognizable in Figure 16, when the s/c was at approximately 0.7 AU from the Sun. Slow wind on the left-hand side of the plot, fast wind on the right-hand side, and the stream interface in between can be clearly seen. This is a sort of canonical situation often encountered in the ecliptic, within the inner heliosphere, during solar activity minimum. Typical solar wind parameters, like proton number density ρ_p, proton temperature T_p, magnetic field intensity |B|, azimuthal angle Φ, and elevation angle Θ are shown in the panels below the wind speed profile. A quick look at the data reveals that fast wind is less dense but hotter than slow wind. Moreover, both proton number density and magnetic field intensity are steadier and, in addition, the bottom two panels show that the magnetic field vector fluctuates in direction much more than in slow wind. This last aspect unravels the presence of strong Alfvénic fluctuations, which act mainly on the magnetic field and velocity vector directions and are typically found within fast wind (Belcher and Davis Jr, 1971; Belcher and Solodyna, 1975). The region just ahead of the fast wind, namely the stream interface, where the dynamical interaction between fast and slow wind develops, is characterized by compressive effects which enhance proton density, temperature, and field intensity. Within the slow wind, a further compressive region precedes the stream interface, but it is not due to dynamical effects: it identifies the heliospheric current sheet, the surface dividing the two opposite polarities of the interplanetary magnetic field. As a matter of fact, the change of polarity can be noted within the first half of day 73, when the azimuthal angle Φ rotates by about 180°. Detailed studies (Bavassano et al., 1997) based on interplanetary scintillations (IPS) and in-situ measurements have been able to find a clear correspondence between the profile of path-integrated density obtained from IPS measurements and in-situ measurements by Helios 2 when the s/c was around 0.3 AU from the Sun.
High velocity streams and slow wind as seen in the ecliptic during solar minimum, as a function of time [yyddd]. The streams identified by labels are the same co-rotating stream observed by Helios 2, during its primary mission to the Sun in 1976, at different heliocentric distances. These streams, named "The Bavassano-Villante streams" after Tu and Marsch (1995a), have been of fundamental importance in understanding the radial evolution of MHD turbulence in the solar wind.
High velocity streams and slow wind as seen in the ecliptic during solar maximum. Data refer to Helios 2 observations in 1979.
High velocity streams and slow wind as seen in the ecliptic during solar minimum.
Figure 19 shows measurements of several plasma and magnetic field parameters. The third panel from the top shows the proton number density, with an enhancement within the slow wind just preceding the fast stream, as can be seen in the top panel. In this case the increase in density is not due to the dynamical interaction between slow and fast wind but represents the profile of the heliospheric current sheet, as sketched in the left panel of Figure 19. As a matter of fact, at these short distances from the Sun, dynamical interactions are still rather weak and this kind of compressive effect can be neglected with respect to the larger density values proper of the current sheet.
Spectral properties
First evidence of the presence of turbulent fluctuations was shown by Coleman (1968), who, using Mariner 2 magnetic and plasma observations, investigated the statistics of interplanetary fluctuations during the period August 27 - October 31, 1962, when the spacecraft orbited from 1.0 to 0.87 AU. At variance with Coleman (1968), Barnes and Hollweg (1974) analyzed the properties of the observed low-frequency fluctuations in terms of simple waves, disregarding the presence of an energy spectrum. Here we review the gross features of turbulence as observed in space by the Mariner and Helios spacecraft. By analyzing spectral densities, Coleman (1968) concluded that the solar wind flow is often turbulent, energy being distributed over an extraordinarily wide frequency range, from one cycle per solar rotation to 0.1 Hz. The frequency spectrum, in a range of intermediate frequencies [2 × 10−5 − 2.3 × 10−3 Hz], was found to behave roughly as f−1.2; the difference from the expected Kraichnan f−1.5 spectral slope was tentatively attributed to the presence of high-frequency transverse fluctuations resulting from the plasma garden-hose instability (Scarf et al., 1967). Waves generated by this instability contribute to the spectrum only in the range of frequencies near the proton cyclotron frequency and would weaken the frequency dependence relative to the Kraichnan scaling. The magnetic spectrum obtained by Coleman (1968) is shown in Figure 20.
Left panel: a simple sketch showing the configuration of a helmet streamer and the density profile across this structure. Right panel: Helios 2 observations of magnetic field and plasma parameters across the heliospheric current sheet. From top to bottom: wind speed, magnetic field azimuthal angle, proton number density, density fluctuations and normalized density fluctuations, proton temperature, magnetic field magnitude, total pressure, and plasma beta, respectively. Image reproduced by permission from Bavassano et al. (1997), copyright by AGU.
The magnetic energy spectrum as obtained by Coleman (1968).
Spectral properties of the interplanetary medium have been summarized by Russell (1972), who published a composite spectrum of the radial component of magnetic fluctuations as observed by Mariner 2, Mariner 4, and OGO 5 (see Figure 21). The frequency spectrum so obtained was divided into three main ranges: i) up to about 10−4 Hz the spectral slope is about 1/f; ii) at intermediate frequencies 10−4 ≤ f ≤ 10−1 Hz a spectrum which roughly behaves as f−3/2 has been found; iii) the high-frequency part of the spectrum, up to 1 Hz, behaves as 1/f2. The intermediate range of frequencies shows the same spectral properties as those introduced by Kraichnan (1965) in the framework of MHD turbulence. It is worth reporting that scatter plots of the values of the spectral index of the intermediate region do not allow us to distinguish between a Kolmogorov spectrum f−5/3 and a Kraichnan spectrum f−3/2 (Veltri, 1980).
A composite figure of the magnetic spectrum obtained by Russell (1972).
Only later did Podesta et al. (2007) address again the problem of the spectral exponents of kinetic and magnetic energy spectra in the solar wind. Their results, instead of settling once and for all the ambiguity between the f−5/3 and f−3/2 scalings, raised new questions about this unsolved problem.
As a matter of fact, Podesta et al. (2007) chose different time intervals between 1995 and 2003, each lasting 2 or 3 solar rotations, during which the WIND spacecraft recorded solar wind velocity and magnetic field conditions. Figure 22 shows the results obtained for the time interval that lasted about 3 solar rotations between November 2000 and February 2001, representative also of the other analyzed time intervals. Quite unexpectedly, these authors found that the power-law exponents of velocity and magnetic field fluctuations often have values near 3/2 and 5/3, respectively. In addition, the kinetic energy spectrum is characterized by a power-law exponent slightly greater than or equal to 3/2 due to the effects of density fluctuations.
It is worth mentioning that this difference was first observed by Salem (2000) years before but, at that time, the accuracy of the data was questioned (Salem et al., 2009). Thus, to corroborate previous results, Salem et al. (2009) investigated anomalous scaling and intermittency effects of both magnetic field and solar wind velocity fluctuations in the inertial range using WIND data. These authors used a wavelet technique for a systematic elimination of intermittency effects on spectra and structure functions in order to recover the actual scaling properties in the inertial range. They found that magnetic field and velocity fluctuations exhibit a well-defined, although different, monofractal behavior, following a Kolmogorov −5/3 scaling and an Iroshnikov-Kraichnan −3/2 scaling, respectively. These results are clearly opposite to the expected scaling for kinetic and magnetic fluctuations, which should follow Kolmogorov and Kraichnan scaling, respectively (see Section 2.8). However, as remarked by Roberts (2007), Voyager observations of the velocity spectrum have demonstrated a likely asymptotic state in which the spectrum steepens towards a spectral index of −5/3, finally matching the magnetic spectrum and the theoretical expectation of Kolmogorov turbulence. Moreover, the same author examined Ulysses spectra to determine whether the Voyager result, based on very few sufficiently complete intervals, was correct. Preliminary results confirmed the −5/3 slope for velocity fluctuations at ~5 AU from the Sun in the ecliptic.
Figure 23, taken from Roberts (2007), shows the evolution of the spectral index during the radial excursion of Ulysses. Many intervals were examined in order to develop a more general picture of the spectral evolution in various conditions, and of how magnetic and velocity spectra differ in these cases. The general trend shown in Figure 23 is towards −5/3 as the distance increases. Lower values are due to the highly Alfvénic fast polar wind while higher values, around 2, are mainly due to the jumps at the stream fronts, as previously shown by Roberts (2007). Thus, the discrepancy between magnetic and velocity spectral slopes is only temporary and belongs to the evolutionary phase of the spectra towards a well-developed Kolmogorov-like turbulence spectrum.
Horbury et al. (2008) performed a study of the anisotropy of the energy spectrum of magnetohydrodynamic (MHD) turbulence with respect to the magnetic field orientation, to test the validity of the critical balance theory (Goldreich and Sridhar, 1995) in the space plasma environment. This theory predicts that the power spectrum P(k) would scale as f−5/3 when the angle θB between the mean field direction and the flow direction is 90°. On the other hand, in case θB = 0° the scaling would follow f−2. Moreover, the latter spectrum would also have a smaller energy content.
Horbury et al. (2008) used 30 days of Ulysses magnetic field observations (1995, days 100 – 130) with a resolution of 1 second. At that time, Ulysses was immersed in the steady high-speed solar wind coming from the Sun's northern polar coronal hole, at 1.4 AU from the Sun. These authors studied the anisotropies of the turbulence by measuring how the spacecraft-frame spectrum of magnetic fluctuations varies with θB. They adopted a method based on wavelet analysis which was sensitive to the frequent changes of the local magnetic field direction.
The lower panel of Figure 24 clearly shows that for angles larger than about 45° the spectral index smoothly fluctuates around −5/3 while, for smaller angles, it tends to a value of −2, as predicted by the critical-balance type of cascade. However, although the same authors recognize that a spectral index of −2 has not been routinely observed in the fast solar wind and that the range of θB over which the spectral index deviates from −5/3 is wider than expected, they consider these findings to be robust evidence of the validity of critical balance theory in the space plasma environment.
Experimental evaluation of Reynolds number in the solar wind
Properties of solar wind fluctuations have been widely studied in the past, relying on the "frozen-in approximation" (Taylor, 1938). The hypothesis at the basis of Taylor's approximation is that, since the large integral scales in turbulence contain most of the energy, the advection due to the smallest turbulent-scale fluctuations can be disregarded and, consequently, the advection of a turbulent field past an observer at a fixed location is considered solely due to the larger scales. In experimental physics, this hypothesis allows time series measured at a single point in space to be interpreted as spatial variations in the mean flow being swept past the observer. However, the canonical way to establish the presence of spatial structures relies on the computation of two-point, single-time measurements. Only recently has the simultaneous presence of several spacecraft sampling solar wind parameters allowed correlating simultaneous in-situ observations at two different locations in space. Matthaeus et al. (2005) and Weygand et al. (2007) first evaluated the two-point correlation function using simultaneous measurements of the interplanetary magnetic field from the Wind, ACE, and Cluster spacecraft. Their technique allowed computing for the first time fundamental turbulence parameters previously determined from single-spacecraft measurements. In particular, these authors evaluated the correlation scale λC and the Taylor microscale λT, which allow determining empirically the effective magnetic Reynolds number.
Top panel: trace of power in the magnetic field as a function of the angle between the local magnetic field and the sampling direction at a spacecraft frequency of 61 mHz. The larger scatter for θB > 90° is the result of fewer data points at these angles. Bottom panel: spectral index of the trace, fitted over spacecraft frequencies from 15 to 98 mHz. Image reproduced by permission from Horbury et al. (2008), copyright by APS.
As a matter of fact, there are three standard turbulence length scales which can be identified in a typical turbulence power spectrum, as shown in Figure 25: the correlation length λC, the Taylor scale λT, and the Kolmogorov scale λK. The correlation (or integral) length scale represents the largest separation distance over which eddies are still correlated, i.e., the largest turbulent eddy size. The Taylor scale is the scale size at which viscous dissipation begins to affect the eddies; it is several times larger than the Kolmogorov scale and marks the transition from the inertial range to the dissipation range. The Kolmogorov scale is the one that characterizes the smallest dissipation-scale eddies.
The Taylor scale λT and the correlation length λC, as indicated in Figure 26, can be obtained from the two-point correlation function, the former being the radius of curvature of the correlation function at the origin and the latter the scale at which turbulent fluctuations are no longer correlated. Thus, λT can be obtained from a Taylor expansion of the two-point correlation function for r → 0 (Tennekes and Lumley, 1972):
$$R(r) \approx 1 - \frac{{r^2 }} {{2\lambda _T^2 }} + \ldots$$
where r is the spacecraft separation and R(r) = 〈b(x) · b(x + r)〉 is the auto-correlation function computed along the x direction for the fluctuating field b(x). On the other hand, the correlation length λC can be obtained by integrating the normalized correlation function along a chosen direction of integration ξ:
$$\lambda _C \approx \int_0^\infty {\frac{{R(\xi )}} {{R(0)}}} d\xi .$$
Typical interplanetary magnetic field power spectrum at 1 AU. The low-frequency range refers to Helios 2 observations (adapted from Bruno et al., 2009) while the high-frequency range refers to WIND observations (adapted from Leamon et al., 1998). Vertical dashed lines indicate the correlative, Taylor, and Kolmogorov length scales.
Typical two-point correlation function. The Taylor scale λT and the correlation length λC are the radius of curvature of the correlation function at the origin (see inset graph) and the scale at which turbulent fluctuations are no longer correlated, respectively.
At this point, following Batchelor (1970) it is possible to obtain the effective magnetic Reynolds number:
$$R_m^{eff} = \left( {\frac{{\lambda _C }} {{\lambda _T }}} \right)^2 .$$
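In schematic form, the whole procedure amounts to two fits of the measured correlation function. The sketch below applies it to the autocorrelation of a synthetic signal with a k−5/3 spectrum and a dissipative cutoff, standing in for the multi-spacecraft data of Matthaeus et al. (2005); the fit intervals and the e-folding estimate of λC (used here as a proxy for the exponential fit) are illustrative choices:

```python
import numpy as np

# Two-fit estimate of Re_eff = (lambda_C / lambda_T)^2 from an
# autocorrelation function, computed here from a synthetic signal with a
# k^(-5/3) spectrum, a low-k flattening, and a dissipative cutoff.
rng = np.random.default_rng(3)
n = 2**18
k = np.fft.rfftfreq(n)
shape = np.maximum(k, 1e-3) ** (-5.0 / 6.0) * np.exp(-k / 0.05)
shape[0] = 0.0                                   # remove the mean
b = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * shape, n)

# autocorrelation R(r) via the Wiener-Khinchin theorem
acf = np.fft.irfft(np.abs(np.fft.rfft(b)) ** 2, n)[: n // 2]
R = acf / acf[0]
r = np.arange(n // 2)

# Taylor scale: parabolic fit R(r) ~ 1 - r^2 / (2 lambda_T^2) near r = 0
m = 5                                            # a few smallest lags
lam_T = np.sqrt(-0.5 / np.polyfit(r[:m], R[:m], 2)[0])

# correlation length: e-folding scale of R(r) (an exponential-fit proxy)
lam_C = r[np.argmax(R < np.exp(-1.0))]

print(lam_T, lam_C, (lam_C / lam_T) ** 2)        # last value ~ Re_eff
```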
Figure 27 shows estimates of the correlation function from ACE-Wind for separation distances 20 − 350 RE and two sets of Cluster data for separations 0.02 − 0.04 RE and 0.4 − 1.2 RE, respectively.
Estimates of the correlation function from ACE-Wind for separation distances 20 − 350 RE and two sets of Cluster data for separations 0.02 − 0.04 RE and 0.4 − 1.2 RE, respectively. Image adapted from Matthaeus et al. (2005).
Following the definitions of λC and λT given above, Matthaeus et al. (2005) were able to fit the first data set of Cluster, i.e., the one with shorter separations, with a parabolic fit, while they used an exponential fit for ACE-Wind and the second Cluster data set. These fits provided estimates for λC and λT from which these authors obtained the first empirical determination of R_m^eff, which turned out to be of the order of 2.3 × 10^5, as illustrated in Figure 28.
Evidence for non-linear interactions
As we said previously, the Helios 2 s/c gave us the unique opportunity to study the radial evolution of turbulent fluctuations in the solar wind within the inner heliosphere. Most of the theoretical studies which aim to understand the physical mechanisms at the base of this evolution originate from these observations (Bavassano et al., 1982b; Denskat and Neubauer, 1983).
In Figure 29 we consider again similar observations taken by Helios 2 during its primary mission to the Sun, together with observations taken by Ulysses in the ecliptic at 1.4 and 4.8 AU, in order to extend the total radial excursion.
Helios 2 power density spectra were obtained from the trace of the spectral matrix of magnetic field fluctuations, and belong to the same co-rotating stream observed on day 49, at a heliocentric distance of 0.9 AU, on day 75 at 0.7 AU and, finally, on day 104 at 0.3 AU. Ulysses spectra, constructed in the same way as those of Helios 2, were taken at 1.4 and 4.8 AU during the ecliptic phase of the orbit. Observations at 4.8 AU refer to the end of 1991 (fast wind period starting on day 320, slow wind period starting on day 338) while observations taken at 1.4 AU refer to fast wind observed at the end of August 2007, starting on day 241:12.
Left panel: parabolic fit at small scales to estimate λT. Right panel: exponential fit at intermediate and large scales to estimate λC. The square of the ratio of these two length scales gives an estimate of the effective magnetic Reynolds number. Image adapted from Matthaeus et al. (2005).
While the spectral index of slow wind does not show any radial dependence, being characterized by a single Kolmogorov-type spectral index, fast wind is characterized by two distinct spectral slopes: about −1 at low frequencies and a Kolmogorov-like spectrum at higher frequencies. These two regimes are clearly separated by a knee in the spectrum, often referred to as the "frequency break". As the wind expands, the frequency break moves to lower and lower frequencies so that larger and larger scales become part of the Kolmogorov-like turbulence spectrum, i.e., of what we will indicate as the "inertial range" (see the discussion at the end of the previous section). Thus, the power spectrum of solar wind fluctuations is not solely a function of frequency f, i.e., P(f), but also depends on heliocentric distance r, i.e., P(f) → P(f, r).
Figure 30 shows the frequency location of the spectral breaks observed in the left-hand-side panel of Figure 29 as a function of heliocentric distance. The radial distribution of these 5 points suggests that the frequency break moves to lower and lower frequencies during the wind expansion following a power-law of the order of R⁻¹·⁵. Previous results, obtained for long data sets spanning hundreds of days and inevitably mixing fast and slow wind, were obtained by Matthaeus and Goldstein (1986), who found the breakpoint around 10 h at 1 AU, and Klein et al. (1992), who found that the breakpoint was near 16 h at 4 AU. Obviously, the frequency location of the breakpoint provided by these early determinations is strongly affected by the fact that mixing fast and slow wind would shift the frequency break to lower frequencies with respect to solely fast wind. In any case, this frequency break is strictly related to the correlation length (Klein, 1987) and the shift to lower frequency, during the wind expansion, is consistent with the growth of the correlation length observed in the inner (Bruno and Dobrowolny, 1986) and outer heliosphere (Matthaeus and Goldstein, 1982a). Analogous behavior for the low frequency shift of the spectral break, similar to the one observed in the ecliptic, has been reported by Horbury et al. (1996a), studying the rate of turbulent evolution over the Sun's poles. These authors used Ulysses magnetic field observations between 1.5 and 4.5 AU, selecting mostly undisturbed, high speed polar flows. They found a radial gradient of the order of R⁻¹·¹, clearly slower than the one reported in Figure 30 or that can be inferred from results by Bavassano et al. (1982b), confirming that the turbulence evolution in the polar wind is slower than the one in the ecliptic, as qualitatively predicted by Bruno (1992), because of the lack of large scale stream shears. However, these results will be discussed more extensively in Section 4.1.
Left panel: power density spectra of magnetic field fluctuations observed by Helios 2 between 0.3 and 1 AU within the trailing edge of the same corotating stream shown in Figure 16, during the first mission to the Sun in 1976 and by Ulysses between 1.4 and 4.8 AU during the ecliptic phase. Ulysses observations at 4.8 AU refer to the end of 1991 while observations taken at 1.4 AU refer to the end of August of 2007. While the spectral index of slow wind does not show any radial dependence, the spectral break, clearly present in fast wind and marked by a blue dot, moves to lower and lower frequency as the heliocentric distance increases. Image adapted from Bruno et al. (2009).
However, the phenomenology described above only apparently resembles hydrodynamic turbulence, where the large eddies, below the frequency break, govern the whole process of energy cascade along the spectrum (Tu and Marsch, 1995b). As a matter of fact, when the relaxation time increases, the largest eddies provide the energy to be transferred along the spectrum and dissipated, with a decay rate approximately equal to the transfer rate and, finally, to the dissipation rate at the smallest wavelengths where viscosity dominates. Thus, we expect that the energy containing scales would lose energy during this process but would not become part of the turbulent cascade, say of the inertial range. Scales on both sides of the frequency break would remain separated. Accurate analyses performed in the solar wind (Bavassano et al., 1982b; Marsch and Tu, 1990b; Roberts, 1992) have shown that the low frequency range of the solar wind magnetic field spectrum radially evolves following the WKB model, or geometrical optics, which predicts a radial evolution of the power associated with the fluctuations ∼ r⁻³. Moreover, a steepening of the spectrum towards a Kolmogorov-like spectral index can be observed. On the contrary, the same in-situ observations established that the radial decay for the higher frequencies was faster than ∼ r⁻³ and the overall spectral slope remained unchanged. This means that the energy contained in the largest eddies does not decay as it would happen in hydrodynamic turbulence and, as a consequence, the largest eddies cannot be considered equivalent to the energy containing eddies identified in hydrodynamic turbulence. So, this low frequency range is not separated from the inertial range but becomes part of it as the turbulence ages. These observations cast some doubts on the applicability of the hydrodynamic turbulence paradigm to interplanetary MHD turbulence. A theoretical help came from adopting a local energy transfer function (Tu et al., 1984; Tu, 1987a,b, 1988), which would take into account the non-linear effects between eddies of slightly differing wave numbers, together with a WKB description which would mainly work for the large scale fluctuations. This model was able to reproduce the displacement of the frequency break with distance by combining the linear WKB law and a model of nonlinear coupling, besides most of the features observed in the magnetic power spectra P(f, r) observed by Bavassano et al. (1982b). In particular, the concept of the "frequency break", just mentioned, was pointed out for the first time by Tu et al. (1984) who, developing the analytic solution for the radially evolving power spectrum P(f, r) of fluctuations, obtained a critical frequency "fc" such that for frequencies f ≪ fc, P(f, r) ∝ f⁻¹ and for f ≫ fc, P(f, r) ∝ f⁻¹·⁵.
Radial dependence of the frequency break observed in the ecliptic within fast wind as shown in the previous Figure 29. The radial dependence seems to be governed by a power-law of the order of R⁻¹·⁵.
Fluctuations anisotropy
Interplanetary magnetic field (IMF) and velocity fluctuations are rather anisotropic, as observed for the first time by Belcher and Davis Jr (1971); Belcher and Solodyna (1975); Chang and Nishida (1973); Burlaga and Turner (1976); Solodyna and Belcher (1976); Parker (1980); Bavassano et al. (1982a); Tu et al. (1989a); and Marsch and Tu (1990a). This feature can be better observed if fluctuations are rotated into the minimum variance reference system (see Appendix D).
Sonnerup and Cahill (1967) introduced the minimum variance analysis which consists in determining the eigenvectors of the matrix
$$S_{ij} = \left\langle {B_i B_j } \right\rangle - \left\langle {B_i } \right\rangle \left\langle {B_j } \right\rangle ,$$
where i and j denote the components of magnetic field along the axes of a given reference system. The statistical properties of eigenvalues approximately satisfy the following statements:
One of the eigenvalues of the variance matrix is always much smaller than the others, say λ1 ≪ (λ2, λ3), and the corresponding eigenvector Ṽ1 is the minimum-variance direction (see Appendix D.1 for more details). This indicates that, at least locally, the magnetic fluctuations are confined in a plane perpendicular to the minimum-variance direction.
In the plane perpendicular to Ṽ1, fluctuations appear to be anisotropically distributed, say λ3 > λ2. Typical values for the eigenvalues are λ3 : λ2 : λ1 = 10 : 3.5 : 1.2 (Chang and Nishida, 1973; Bavassano et al., 1982a).
The direction Ṽ1 is nearly parallel to the average magnetic field B0, that is, the distribution of the angles between Ṽ1 and B0 is narrow with width of about 10° and centered around zero.
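The minimum variance procedure summarized in the three statements above reduces, in practice, to an eigen-decomposition of the variance matrix. The following is a minimal sketch, assuming the magnetic field time series is available as an (N, 3) array; the helper name and the example diagnostic are illustrative, not taken from the literature.

import numpy as np

def minimum_variance(B):
    # Variance matrix S_ij = <B_i B_j> - <B_i><B_j> of an (N, 3) series
    S = np.cov(B, rowvar=False, bias=True)
    # eigh returns eigenvalues in ascending order for symmetric matrices;
    # eigvecs[:, 0] is then the minimum-variance direction V1
    eigvals, eigvecs = np.linalg.eigh(S)
    return eigvals, eigvecs

# Angle between V1 and the mean field, expected to be small (~10 deg):
# b0 = B.mean(axis=0) / np.linalg.norm(B.mean(axis=0))
# theta = np.degrees(np.arccos(abs(eigvecs[:, 0] @ b0)))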
As shown in Figure 31, in this new reference system it is readily seen that the maximum and intermediate components have much more power compared with the minimum variance component. Generally, this kind of anisotropy characterizes Alfvénic intervals and, as such, it is more commonly found within high velocity streams (Marsch and Tu, 1990a).
A systematic analysis for both magnetic and velocity fluctuations was performed by Klein et al. (1991, 1993) between 0.3 and 10 AU. These studies showed that magnetic field and velocity minimum variance directions are close to each other within fast wind and mainly clustered around the local magnetic field direction. The effects of expansion are such as to separate field and velocity minimum variance directions. While magnetic field fluctuations keep their minimum variance direction loosely aligned with the mean field direction, velocity fluctuations tend to have their minimum variance direction oriented along the radial direction. The depleted alignment with the background magnetic field would suggest a smaller anisotropy of the fluctuations. As a matter of fact, Klein et al. (1991) found that the degree of anisotropy, which can be defined as the ratio between the power perpendicular to and that along the minimum variance direction, decreases with heliocentric distance in the outer heliosphere.
At odds with these conclusions were the results by Bavassano et al. (1982a), who showed that the ratio λ1/λ3, calculated in the inner heliosphere within a co-rotating high velocity stream, clearly decreased with distance, indicating that the degree of magnetic anisotropy increased with distance. Moreover, this radial evolution was more remarkable for fluctuations of the order of a few hours than for those around a few minutes. Results by Klein et al. (1991) in the outer heliosphere and by Bavassano et al. (1982a) in the inner heliosphere remained rather controversial until recent studies (see Section 10.2), performed by Bruno et al. (1999b), found a reason for this discrepancy.
A different approach to anisotropic fluctuations in solar wind turbulence has been taken by Bigazzi et al. (2006) and Sorriso-Valvo et al. (2006, 2010b). In these studies the full tensor of the mixed second-order structure functions has been used to quantitatively measure the degree of anisotropy and its effect on small-scale turbulence through a fit of the various elements of the tensor on a typical function (Sorriso-Valvo et al., 2006). Moreover, three different regions of the near-Earth space have been studied, namely the solar wind, the Earth's foreshock and the magnetosheath, showing that, while in the undisturbed solar wind the observed strong anisotropy is mainly due to the large-scale magnetic field, near the magnetosphere other sources of anisotropy influence the magnetic field fluctuations (Sorriso-Valvo et al., 2010b).
Power density spectra of the three components of IMF after rotation into the minimum variance reference system. The black curve corresponds to the minimum variance component, the blue curve to the maximum variance, and the red one to the intermediate component. This case refers to fast wind observed at 0.3 AU and the minimum variance direction forms an angle of ~ 8° with respect to the ambient magnetic field direction. Thus, most of the power is associated with the two components quasi-transverse to the ambient field.
Simulations of anisotropic MHD
In the presence of a DC background magnetic field B0 which, differently from the bulk velocity field, cannot be eliminated by a Galilean transformation, MHD incompressible turbulence becomes anisotropic (Shebalin et al., 1983; Montgomery, 1982; Zank and Matthaeus, 1992; Carbone and Veltri, 1990; Oughton, 1993). The main effect produced by the presence of the background field is to generate an anisotropic distribution of wave vectors as a consequence of the dependence of the characteristic time for the non-linear coupling on the angle between the wave vector and the background field. This effect can be easily understood if one considers the MHD equations. Due to the presence of a term (B0 · ∇)z±, which describes the convection of perturbations in the average magnetic field, the non-linear interactions between Alfvénic fluctuations are weakened, since convection decorrelates the interacting eddies on a time of the order (k · B0)⁻¹. Clearly, fluctuations with wave vectors almost perpendicular to B0 are affected by such an effect much less than fluctuations with k ∥ B0. As a consequence, the former are transferred along the spectrum much faster than the latter (Shebalin et al., 1983; Grappin, 1986; Carbone and Veltri, 1990).
To quantify anisotropy in the distribution of wave vectors k for a given dynamical variable Q(k, t) (namely the energy, cross-helicity, etc.), it is useful to introduce the parameter
$$\Omega _Q = \tan ^{ - 1} \sqrt {\frac{{\left\langle {k_ \bot ^2 } \right\rangle _Q }} {{\left\langle {k_\parallel ^2 } \right\rangle _Q }}}$$
(Shebalin et al., 1983; Carbone and Veltri, 1990), where the average of a given quantity g(k) is defined as
$$\left\langle {g(k)} \right\rangle _Q = \frac{{\int {d^3 k g(k)Q(k,t)} }} {{\int {d^3 k Q(k,t)} }}.$$
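As a concrete illustration, the following sketch evaluates Ω_Q on a discretized spectrum by replacing the integrals above with sums over the wave-vector grid; the function name and grid conventions are assumptions for the example.

import numpy as np

def spectral_anisotropy_angle(kpar, kperp, Q):
    # Q-weighted mean square parallel and perpendicular wave numbers
    mean_kperp2 = np.sum(kperp ** 2 * Q) / np.sum(Q)
    mean_kpar2 = np.sum(kpar ** 2 * Q) / np.sum(Q)
    # Omega_Q = arctan sqrt(<k_perp^2>_Q / <k_par^2>_Q), in degrees
    return np.degrees(np.arctan(np.sqrt(mean_kperp2 / mean_kpar2)))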
For a spectrum with wave vectors perpendicular to B0 we have a spectral anisotropy Ω = 90°, while for an isotropic spectrum Ω = 45°. Numerical simulations in 2D configuration by Shebalin et al. (1983) confirmed the occurrence of anisotropy, and found that anisotropy increases with the Reynolds number. Unfortunately, in these old simulations, the Reynolds numbers used were too small to achieve a well defined spectral anisotropy. Carbone and Veltri (1990) started from the spectral equations obtained through the Direct Interaction Approximation closure by Veltri et al. (1982), and derived a shell model analogue for anisotropic MHD turbulence. Of course, the anisotropy is over-simplified in the model, in particular the Alfvén time is assumed isotropic. However, the model was useful to investigate spectral anisotropy at very high Reynolds numbers. The phenomenological anisotropic spectrum obtained from the model, for both pseudo-energies obtained through polarizations a = 1, 2 defined through Equation (18), can be written as
$$E_a^ \pm (k,t) \sim C_a^ \pm \left[ {\ell _{||}^2 k_{||}^2 + \ell _ \bot ^2 k_ \bot ^2 } \right]^{ - \mu ^ \pm } .$$
The spectral anisotropy is different within the injection, inertial, and dissipative ranges of turbulence (Carbone and Veltri, 1990). Wave vectors perpendicular to B0 are present in the spectrum, but when the process of energy transfer generates a strong anisotropy (at small times), a competing process takes place which redistributes the energy over all wave vectors. The dynamical balance between these tendencies fixes the value of the spectral anisotropy Ω ≃ 55° in the inertial range. On the contrary, since the redistribution of energy cannot take place, in the dissipation domain the spectrum remains strongly anisotropic, with Ω ≃ 80°. When the Reynolds number increases, the contribution of the inertial range extends, and the increase of the total anisotropy tends to saturate at about Ω ≃ 60° at a Reynolds number of 10⁵. This value corresponds to a rather low value for the ratio between parallel and perpendicular correlation lengths ℓ∥/ℓ⊥ ≥ 2, too small with respect to the observed value ℓ∥/ℓ⊥ ≥ 10. This suggests that the non-linear dynamical evolution of an initially isotropic spectrum of turbulence is perhaps not sufficient to explain the observed anisotropy. These results have been confirmed numerically (Oughton et al., 1994).
Spectral anisotropy in the solar wind
The correlation time, as defined in Appendix A, estimates how much an element of our time series x(t) at time t1 depends on the value assumed by x(t) at time t0, with t1 = t0 + δt. This concept can be transferred from the time domain to the space domain if we adopt the Taylor hypothesis and, consequently, we can talk about spatial scales.
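In practice, the Taylor hypothesis amounts to the simple mapping r = V_sw δt, so a correlation time translates directly into a correlation length. A back-of-the-envelope example, with an assumed bulk speed, reads:

V_sw = 600e3    # assumed fast-wind bulk speed [m/s]
dt = 3600.0     # one-hour time lag [s]
r = V_sw * dt   # ~ 2.2e9 m, i.e., about 0.014 AU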
Correlation lengths in the solar wind generally increase with heliocentric distance (Matthaeus and Goldstein, 1982b; Bruno and Dobrowolny, 1986), suggesting that large scale correlations are built up during the wind expansion. This kind of evolution is common to both fast and slow wind, as shown in Figure 32, where we can observe the behavior of the Bz correlation function for fast and slow wind at 0.3 and 0.9 AU.
Correlation function of the Z component of the interplanetary magnetic field as observed by Helios 2 during its primary mission to the Sun. The blue color refers to data recorded at 0.9 AU while the red color refers to 0.3 AU. Solid lines refer to fast wind, dashed lines refer to slow wind.
Moreover, the fast wind correlation functions decrease much faster than those related to slow wind. This behavior also reflects the fact that the stochastic character of Alfvénic fluctuations in the fast wind is very efficient in decorrelating the fluctuations of each of the magnetic field components.
More detailed studies performed by Matthaeus et al. (1990) provided for the first time the two-dimensional correlation function of solar wind fluctuations at 1 AU. The original dataset comprised approximately 16 months of almost continuous magnetic field 5-min averages. These results, based on ISEE 3 magnetic field data, are shown in Figure 33, also called "The Maltese Cross".
This figure has been obtained under the hypothesis of cylindrical symmetry. An actual determination of the correlation function could be obtained only in the positive quadrant, and the whole plot was then made by mirroring these results on the remaining three quadrants. The iso-contour lines show contours mainly elongated along the ambient field direction or perpendicular to it. Alfvénic (slab) fluctuations with k ∥ B0 contribute to contours elongated parallel to r⊥. Fluctuations in the two-dimensional turbulence limit (Montgomery, 1982) contribute to contours elongated parallel to r∥. This two-dimensional turbulence is characterized by having both the wave vector k and the perturbing field δb perpendicular to the ambient field B0. Given the fact that the analysis did not select fast and slow wind separately, it is likely that most of the slab correlations came from the fast wind while the 2D correlations came from the slow wind. As a matter of fact, Dasso et al. (2005), using 5 years of spacecraft observations at roughly 1 AU, showed that fast streams are dominated by fluctuations with wavevectors quasi-parallel to the local magnetic field, while slow streams are dominated by quasi-perpendicular fluctuation wavevectors. Anisotropic turbulence has also been observed in laboratory plasmas and reverse pinch devices (Zweben et al., 1979).
Bieber et al. (1996) formulated an observational test to distinguish the slab (Alfvénic) from the 2D component within interplanetary turbulence. These authors assumed a mixture of transverse fluctuations, some of which have wave vectors perpendicular to the mean field, k ⊥ B0, and polarization of fluctuations δB(k⊥) perpendicular to both vectors (2D geometry with k∥ ≃ 0), and some parallel to the mean magnetic field, k ∥ B0, the polarization of fluctuations δB(k∥) being perpendicular to the direction of B0 (slab geometry with k⊥ ≃ 0). The magnetic field is then rotated into the same mean field coordinate system used by Belcher and Davis Jr (1971) and Belcher and Solodyna (1975), where the y-coordinate is perpendicular to both B0 and the radial direction, while the x-coordinate is perpendicular to B0 but with a component also in the radial direction. Using that geometry, and defining the power spectrum matrix as
$$P_{ij} (k) = \frac{1} {{(2\pi )^3 }}\int {d^3 r} \left\langle {B_i (x)B_j (x + r)} \right\rangle e^{ - ik \cdot r} ,$$
it can be found that, assuming axisymmetry, a two-component model can be written in the frequency domain
$$f P_{yy} (f) = rC_s \left( {\frac{{2\pi f}} {{U_w \cos \psi }}} \right)^{1 - q} + (1 - r)C_s \frac{{2q}} {{(1 + q)}}\left( {\frac{{2\pi f}} {{U_w \sin \psi }}} \right)^{1 - q} ,$$
$$f P_{xx} (f) = rC_s \left( {\frac{{2\pi f}} {{U_w \cos \psi }}} \right)^{1 - q} + (1 - r)C_s \frac{2} {{(1 + q)}}\left( {\frac{{2\pi f}} {{U_w \sin \psi }}} \right)^{1 - q} ,$$
where the anisotropic energy spectrum is the sum of both components:
$$fT(f) = 2rC_s \left( {\frac{{2\pi f}} {{U_w \cos \psi }}} \right)^{1 - q} + 2(1 - r)C_s \left( {\frac{{2\pi f}} {{U_w \sin \psi }}} \right)^{1 - q} .$$
Here f is the frequency, Cs is a constant defining the overall spectrum amplitude in wave vector space, Uw is the bulk solar wind speed and ψ is the angle between B0 and the wind direction. Finally, r is the fraction of slab components and (1 − r) is the fraction of 2D components.
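For illustration, the two-component model can be transcribed directly into code. The sketch below is a plain transcription of the expressions for fP_yy, fP_xx and fT(f) given above; the function name and the parameter values in the usage example are illustrative only.

import numpy as np

def two_component_spectra(f, r, Cs, q, Uw, psi):
    # Slab and 2D building blocks of the reduced spectra
    slab = Cs * (2 * np.pi * f / (Uw * np.cos(psi))) ** (1 - q)
    twoD = Cs * (2 * np.pi * f / (Uw * np.sin(psi))) ** (1 - q)
    fPyy = r * slab + (1 - r) * (2 * q / (1 + q)) * twoD
    fPxx = r * slab + (1 - r) * (2 / (1 + q)) * twoD
    fT = 2 * r * slab + 2 * (1 - r) * twoD   # = fPyy + fPxx
    return fPyy, fPxx, fT

# Example (illustrative values): 20% slab, q = 5/3, psi = 45 deg
# f = np.logspace(-4, -1, 100)
# fPyy, fPxx, fT = two_component_spectra(f, 0.2, 1.0, 5/3, 400e3, np.pi/4)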
Contour plot of the 2D correlation function of interplanetary magnetic field fluctuations as a function of parallel and perpendicular distance with respect to the mean magnetic field. The separation in r∥ and r⊥ is in units of 10¹⁰ cm. Image reproduced by permission from Matthaeus et al. (1990), copyright by AGU.
The ratio test adopted by these authors was based on the ratio between the reduced perpendicular spectrum (fluctuations ⊥ to the mean field and solar wind flow direction) and the reduced quasi-parallel spectrum (fluctuations ⊥ to the mean field and in the plane defined by the mean field and the flow direction). This ratio, expected to be 1 for slab turbulence, turned out to be ~ 1.4 for fluctuations within the inertial range, consistent with 74% of 2D turbulence and 26% of slab. A further test, the anisotropy test, evaluated how the spectrum should vary with the angle between the mean magnetic field and the flow direction of the wind. The measured slab spectrum should decrease with the field angle while the 2D spectrum should increase, depending on how these spectra project on the flow direction. The results from this test were consistent with 95% of 2D turbulence and 5% of slab. In other words, the slab turbulence due to Alfvénic fluctuations would be a minor component of interplanetary MHD turbulence. A third test, derived from Mach number scaling associated with the nearly incompressible theory (Zank and Matthaeus, 1992), assigned the same fraction ~ 80% to the 2D component. However, the data base for this analysis was derived from Helios magnetic measurements, and all data were recorded near times of solar energetic particle events. Moreover, almost all of the data belonged to slow solar wind (Wanner and Wibberenz, 1993) and, as such, this analysis cannot be representative of the whole phenomenon of turbulence in the solar wind. As a matter of fact, using Ulysses observations, Smith (2003) found that in the polar wind the percentage of slab and 2D components is about the same, that is, the high-latitude slab component turns out to be unusually high compared with ecliptic observations.
Successive theoretical works by Ghosh et al. (1998a,b), in which they used compressible models in a large variety of cases, were able to obtain, in some cases, parallel and perpendicular correlations similar to those obtained in the solar wind. However, they concluded that the "Maltese" cross does not come naturally from the turbulent evolution of the fluctuations but strongly depends on the initial conditions adopted when the simulation starts. It seems that the existence of these correlations in the initial data represents an unavoidable constraint. Moreover, they also stressed the importance of time-averaging, since the interaction between slab waves and transverse pressure-balanced magnetic structures causes the slab turbulence to evolve towards a state in which a two-component correlation function emerges during the process of time averaging.
The presence of two populations, i.e., a slab-like and a quasi-2D like one, was also inferred by Dasso et al. (2003). These authors computed the reduced spectra of the normalized cross-helicity and the Alfvén ratio from the ACE dataset. These parameters, calculated for different intervals of the angle θ between the flow direction and the orientation of the mean field B0, showed a remarkable dependence on θ.
The geometry used in these analyses assumes that the energy spectrum in the rest frame of the plasma is axisymmetric and invariant for rotations about the direction of B0. Even if these assumptions are good when we want to translate results coming from 2D numerical simulations to 3D geometry, they are quite in contrast with the observational fact that the eigenvalues of the variance matrix are different, namely λ3 ≠ λ2.
Going back from the correlation tensor to the power spectrum is a complicated technical problem. However, Carbone et al. (1995a) derived a description of the observed anisotropy in terms of a model for the three-dimensional energy spectra of magnetic fluctuations. The divergence-free nature of the magnetic field allows one to decompose the Fourier amplitudes of magnetic fluctuations into two independent polarizations: the first one, I[1](k), corresponds, in the weak turbulence theory, to the Alfvénic mode, while the second polarization, I[2](k), corresponds to the magnetosonic mode. By using only the hypothesis that the medium is statistically homogeneous, and some algebra, the authors found that the energy spectra of both polarizations can be related to the two-point correlation tensor and to the variance matrix. Through numerical simulations of the shell model (see later in the review) it has been shown that the anisotropic energy spectrum can be described in the inertial range by the phenomenological expression
$$I^{[s]} (k) = C_s \left[ {\left( {\ell _x^{[s]} k_x } \right)^2 + \left( {\ell _y^{[s]} k_y } \right)^2 + \left( {\ell _z^{[s]} k_z } \right)^2 } \right]^{ - 1 - \mu _s /2} ,$$
where ki are the Cartesian components of the wave vector k, and Cs, ℓi[s], and μs (s = 1, 2 indicates both polarizations; i = x, y, z) are free parameters. In particular, Cs gives information on the energy content of both polarizations, ℓi[s] represent the spectral extensions along the directions of a given system of coordinates, and μs are two spectral indices.
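The phenomenological spectrum above is straightforward to evaluate once the free parameters have been fixed; a minimal transcription reads as follows, with the parameter values to be supplied by a fit to the variance-matrix eigenvalues (none are built in here).

import numpy as np

def anisotropic_spectrum(kx, ky, kz, C, lx, ly, lz, mu):
    # I(k) = C [ (lx kx)^2 + (ly ky)^2 + (lz kz)^2 ]^(-1 - mu/2),
    # evaluated once per polarization s = 1, 2
    return C * ((lx * kx) ** 2 + (ly * ky) ** 2
                + (lz * kz) ** 2) ** (-1 - mu / 2)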
A fit to the eigenvalues of the variance matrix allowed Carbone et al. (1995a) to fix the free parameters of the spectrum for both polarizations. They used data from Bavassano et al. (1982a), who reported the values of λi at five wave vectors calculated at three heliocentric distances, selecting periods of high correlation (Alfvénic periods) using magnetic field measured by the Helios 2 spacecraft. They found that the spectral indices of both polarizations, in the ranges 1.1 ≤ μ1 ≤ 1.3 and 1.46 ≤ μ2 ≤ 1.8, increase systematically with increasing distance from the Sun; the polarization [2] spectra are always steeper than the corresponding polarization [1] spectra, while polarization [1] is always more energetic than polarization [2]. As far as the characteristic lengths are concerned, it can be found that ℓx[1] > ℓy[1] ≫ ℓz[1], indicating that wave vectors k ∥ B0 largely dominate. Concerning polarization [2], it can be found that ℓx[2] ≫ ℓy[2] ≃ ℓz[2], indicating that the spectrum I[2](k) is strongly flat on the plane defined by the directions of B0 and the radial direction. Within this plane, the energy distribution does not present any relevant anisotropy.
Let us compare these results with those by Matthaeus et al. (1990), the comparison being significant as far as the yz plane is taken into account. The decomposition of Carbone et al. (1995a) into two independent polarizations is similar to that of Matthaeus et al. (1990); a contour plot of the trace of the Fourier-transformed correlation tensor T(k) = I[1](k) + I[2](k) on the plane (ky, kz) shows two populations of fluctuations, with wave vectors nearly parallel and nearly perpendicular to B0, respectively. The first population is formed by all the polarization [1] fluctuations and by the fluctuations with k ∥ B0 belonging to polarization [2]. The latter fluctuations are physically indistinguishable from the former, in that when k is nearly parallel to B0, both polarization vectors are quasi-perpendicular to B0. On the contrary, the second population is almost entirely formed by fluctuations belonging to polarization [2]. While it is clear that fluctuations with k nearly parallel to B0 are mainly polarized in the plane perpendicular to B0 (a consequence of ∇ · B = 0), fluctuations with k nearly perpendicular to B0 are polarized nearly parallel to B0.
Although both models lead to the occurrence of two populations, Matthaeus et al. (1990) give an interpretation of their results which is in contrast with that of Carbone et al. (1995a). Namely, Matthaeus et al. (1990) suggest that a nearly 2D incompressible turbulence characterized by wave vectors and magnetic fluctuations, both perpendicular to B0, is present in the solar wind. However, this interpretation does not arise from data analysis, but rather from the 2D numerical simulations by Shebalin et al. (1983) and from analytical studies (Montgomery, 1982). Let us note, however, that in the former approach, which is strictly 2D, when k ⊥ B0 magnetic fluctuations are necessarily parallel to B0. In the latter one, along with incompressibility, it is assumed that the energy in the fluctuations is much less than in the DC magnetic field; both hypotheses do not apply to the solar wind case. On the contrary, results by Carbone et al. (1995a) can be directly related to the observational data. In any case, it is worth reporting that a model like that discussed here, that is a superposition of fluctuations with both slab and 2D components, has been used to describe turbulence also in the Jovian magnetosphere (Saur et al., 2002, 2003). In addition, several theoretical and observational works indicate that there is a competition between the radial axis and the mean field axis in shaping the polarization and spectral anisotropies in the solar wind.
In this respect, Grappin and Velli (1996) used numerical simulations of the MHD equations which included expansion effects (Expanding Box Model) to study the formation of anisotropy in the wind and the interaction of Alfvén waves within transverse magnetic structures. These authors found that a large-scale isotropic Alfvénic eddy stretched by expansion naturally mixes with smaller scale transverse Alfvén waves with a different anisotropy.
Saur and Bieber (1999), on the other hand, employed three different tests on about three decades of solar wind observations at 1 AU in order to better understand the anisotropic nature of solar wind fluctuations. Their data analysis strongly supported the composite model of a turbulence made of slab and 2-D fluctuations.
Narita et al. (2011b), using the four Cluster spacecraft, determined the three-dimensional wavevector spectra of fluctuating magnetic fields in the solar wind within the inertial range. These authors found that the spectra are anisotropic throughout the analyzed frequency range and the power is extended primarily in the directions perpendicular to the mean magnetic field, as might be expected of 2-D turbulence; however, the analyzed fluctuations cannot be considered axisymmetric.
Finally, Turner et al. (2011) suggested that the non-axisymmetric anisotropy of the frequency spectrum observed using in-situ observations may simply arise from a sampling effect related to the fact that the s/c samples three-dimensional fluctuations as a one-dimensional series and that the energy density is not equally distributed among the different scales (i.e., spectral index > 1).
Magnetic helicity
Magnetic helicity Hm, as defined in Appendix B.1, measures the "knottedness" of magnetic field lines (Moffatt, 1978). Moreover, Hm is a pseudo scalar and changes sign for coordinate inversion. The plus or minus sign, for circularly polarized magnetic fluctuations in a slab geometry, indicates right or left-hand polarization. Statistical information about the magnetic helicity is derived from the Fourier transform of the magnetic field auto-correlation matrix Rij(r) = 〈Bi(x) · Bj(x+r)〉, as shown by Matthaeus and Goldstein (1982b). While the trace of the symmetric part of the spectral matrix accounts for the magnetic energy, the imaginary part of the spectral matrix accounts for the magnetic helicity (Batchelor, 1970; Montgomery, 1982; Matthaeus and Goldstein, 1982b). However, what is really available from in-situ measurements in space experiments are data from a single spacecraft, and we can obtain values of R only for collinear sequences of r along the x direction, which corresponds to the radial direction from the Sun. In these conditions the Fourier transform of R allows us to obtain only a reduced spectral tensor along the radial direction, so that Hm(k) will depend only on the wave-number k in this direction. Although the reduced spectral tensor does not carry the complete spectral information of the fluctuations, for slab and isotropic symmetries it contains all the information of the full tensor. The expression used by Matthaeus and Goldstein (1982b) to compute the reduced Hm is given in Appendix B.2. In the following, we will drop the suffix r for the sake of simplicity.
The general features of the reduced magnetic helicity spectrum in the solar wind were described for the first time by Matthaeus and Goldstein (1982b) in the outer heliosphere, and by Bruno and Dobrowolny (1986) in the inner heliosphere. A useful dimensionless way to represent both the degree of and the sense of polarization is the normalized magnetic helicity σm (see Appendix B.2). This quantity can randomly vary between +1 and −1, as shown in Figure 34 from the work by Matthaeus and Goldstein (1982b) and relative to Voyager's data taken at 1 AU. However, net values of ±1 are reached only for pure circularly polarized waves.
Based on these results, Goldstein et al. (1991) were able to reproduce the distribution of the percentage of occurrence of values of σm(f) adopting a model where the magnitude of the magnetic field was allowed to vary in a random way and the tip of the vector moved near a sphere. In this way they showed that the interplanetary magnetic field helicity measurements were inconsistent with the previous idea that fluctuations were randomly circularly polarized at all scales and were also magnitude preserving.
σm vs. frequency and wave number relative to an interplanetary data sample recorded by Voyager 1 at approximately 1 AU. Image reproduced by permission from Matthaeus and Goldstein (1982b), copyright by AGU.
However, evidence for circular polarized MHD waves in the high frequency range was provided by Polygiannakis et al. (1994), who studied interplanetary magnetic field fluctuations from various datasets at various distances ranging from 1 to 20 AU. They also concluded that the difference between left- and right-hand polarizations is significant and continuously varying.
As already noticed by Smith et al. (1983, 1984), knowing the sign of σm and the sign of the normalized cross-helicity σc it is possible to infer the sense of polarization of the fluctuations. As a matter of fact, a positive cross-helicity indicates an Alfvén mode propagating outward, while a negative cross-helicity indicates a mode propagating inward. On the other hand, we know that a positive magnetic helicity indicates a right-hand polarized mode, while a negative magnetic helicity indicates a left-hand polarized mode. Thus, since the sense of polarization depends on the propagating direction with respect to the observer, σm(f)σc(f) < 0 will indicate right circular polarization while σm(f)σc(f) > 0 will indicate left circular polarization. Thus, each time magnetic helicity and cross-helicity are available from measurements in a super-Alfvénic flow, it is possible to infer the rest frame polarization of the fluctuations from single-point measurements, assuming the validity of the slab geometry.
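The sign rule just described condenses into a trivial decision function; the snippet below is only a restatement of the text, under the stated slab-geometry and super-Alfvénic assumptions.

def rest_frame_polarization(sigma_m, sigma_c):
    # Sign rule from the text, valid for slab geometry in a
    # super-Alfvenic flow
    if sigma_m * sigma_c < 0:
        return "right-hand circular"
    if sigma_m * sigma_c > 0:
        return "left-hand circular"
    return "undetermined"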
The high variability of σm, observable in Voyager's data (see Figure 34), was equally observed in Helios 2 data in the inner heliosphere (Bruno and Dobrowolny, 1986). The authors of this last work computed the difference (MH > 0) − |MH < 0| of magnetic helicity for different frequency bands and noticed that most of the resulting magnetic helicity was contained in the lowest frequency band. This result supported the theoretical prediction of an inverse cascade of magnetic helicity from the smallest to the largest scales during turbulence development (Pouquet et al., 1976).
Numerical simulations of the incompressible MHD equations by Mininni et al. (2003a), discussed in Section 3.1.9, clearly confirm the tendency of magnetic helicity to follow an inverse cascade. The generation of magnetic field in turbulent plasmas and the successive inverse cascade has strong implications for the emergence of large scale magnetic fields in stars, the interplanetary medium and planets (Brandenburg, 2001).
This phenomenon was first demonstrated in numerical simulations based on the eddy damped quasi normal Markovian (EDQNM) closure model of three-dimensional MHD turbulence by Pouquet et al. (1976). Successively, other investigators confirmed such a tendency for the magnetic helicity to develop an inverse cascade (Meneguzzi et al., 1981; Cattaneo and Hughes, 1996; Brandenburg, 2001).
Mininni et al. (2003a) performed the first direct numerical simulations of turbulent Hall dynamo. They showed that the Hall current can have strong effects on turbulent dynamo action, enhancing or even suppressing the generation of the large-scale magnetic energy. These authors injected a weak magnetic field at small scales in a system kept in a stationary regime of hydrodynamic turbulence and followed the exponential growth of magnetic energy due to the dynamo action. This evolution can be seen in Figure 35 in the same format described for Figure 40, shown in Section 3.1.9. Now, the forcing is applied at wave number kforce = 10 in order to give enough room for the inverse cascade to develop. The fluid is initially in a strongly turbulent regime as a result of the action of the external force at wave number kforce = 10. An initial magnetic fluctuation is introduced at t = 0 at kseed = 35. The magnetic energy starts growing exponentially fast and, when the saturation is reached, the magnetic energy is larger than the kinetic energy. Notably, it is much larger at the largest scales of the system (i.e., k = 1). At these large scales, the system is very close to a magnetostatic equilibrium characterized by a force-free configuration.
mpg-Movie (1752.1640625 KB) Still from a movie showing a numerical simulation of the incompressible MHD equations in three dimensions, assuming periodic boundary conditions (see details in Mininni et al., 2003a). The left panel shows the power spectra for kinetic energy (green), magnetic energy (red), and total energy (blue) vs. time. The right panel shows the spatially integrated kinetic, magnetic, and total energies vs. time. The vertical (orange) line indicates the current time. These results correspond to a 128³ simulation with an external force applied at wave number kforce = 10 (movie kindly provided by D. Gómez). (For video see appendix)
Alfvén correlations as incompressive turbulence
In a famous paper, Belcher and Davis Jr (1971) showed that a strong correlation exists between velocity and magnetic field fluctuations, in the form
$$\delta v \simeq \pm \frac{{\delta B}} {{\sqrt {4\pi \rho } }},$$
where the sign of the correlation is given by the sign[−k · B0], k being the wave vector and B0 the background magnetic field vector. These authors showed that in about 25 d of data from Mariner 5, out of the 160 d of the whole mission, fluctuations were described by Equation (59), and the sign of the correlation was such as to indicate always an outward sense of propagation with respect to the Sun. The authors also noted that these periods mainly occur within the trailing edges of high-speed streams. Moreover, in the regions where Equation (59) is verified to a high degree, the magnetic field magnitude is almost constant (B² ~ const.).
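For readers who want to test Equation (59) on data, the only non-trivial step is converting magnetic fluctuations to velocity units; a minimal sketch in SI units is given below (the expression in the text is in Gaussian units, hence the 4πρ). Variable names are our own.

import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [SI]

def alfven_units(dB, rho):
    # SI analogue of dB / sqrt(4 pi rho): converts magnetic
    # fluctuations [T] into velocity units [m/s] using the mass
    # density rho [kg/m^3]
    return dB / np.sqrt(MU0 * rho)

# Alfvenic correlation check on measured fluctuations dv, dB:
# c = np.corrcoef(dv, alfven_units(dB, rho))[0, 1]   # ~ +/-1 when
# Equation (59) holds, the sign being that of sign[-k . B0]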
Alfvénic correlation in fast solar wind. Left panel: large scale Alfvénic fluctuations found by Bruno et al. (1985). Right panel: small scale Alfvénic fluctuations for the first time found by Belcher and Solodyna (1975). Image reproduced by permission, copyright by AGU.
Today we know that Alfvénic correlations are ubiquitous in the solar wind and that these correlations are much stronger and are found at lower and lower frequencies as we look at shorter and shorter heliocentric distances. In the right panel of Figure 36 we show results from Belcher and Solodyna (1975) obtained on the basis of 5-min averages of velocity and magnetic field recorded by Mariner 5 in 1967, during its mission to Venus. In the left panel of Figure 36 we show results from a similar analysis performed by Bruno et al. (1985) obtained on the basis of 1 h averages of velocity and magnetic field recorded by Helios 2 in 1976, when the s/c was at 0.29 AU from the Sun. These last authors found that, in their case, Alfvénic correlations extended to time periods as long as 15 h in the s/c frame at 0.29 AU, and to periods a factor of two smaller near the Earth's orbit. Now, if we consider that this long period of the fluctuations at 0.29 AU was larger than the transit time from the Sun to the s/c, this result might be the first evidence for a possible solar origin for these fluctuations, probably caused by the shuffling of the foot-points of the solar surface magnetic field.
Alfvénic modes are not the only low frequency plasma fluctuations allowed by the MHD equations, but they certainly are the most frequent fluctuations observed in the solar wind. The reason why other possible propagating modes, like the slow sonic mode and the fast magnetosonic mode, cannot easily be found, besides the fact that the eigenvectors associated with these modes are not directly identifiable because they necessitate prior identification of wavevectors, contrary to the simple Alfvénic eigenvectors, depends also on the fact that these compressive modes are strongly damped in the solar wind shortly after they are generated (see Section 6). On the contrary, Alfvénic fluctuations, which are difficult to damp because of their incompressive nature, survive much longer and dominate solar wind turbulence. Nevertheless, there are regions where Alfvénic correlations are much stronger, like the trailing edge of fast streams, and regions where these correlations are weak, like intervals of slow wind (Belcher and Davis Jr, 1971; Belcher and Solodyna, 1975). However, the degree of Alfvénic correlation unavoidably fades away with increasing heliocentric distance, although it must be reported that there are cases when the absence of strong velocity shears and compressive phenomena favors a high Alfvénic correlation up to very large distances from the Sun (Roberts et al., 1987a; see Section 5.1).
Alfvénic correlation in fast and slow wind. Notice the different degree of correlation between these two types of wind.
Just to give a quick qualitative example of Alfvénic correlations in fast and slow wind, we show in Figure 37 the speed profile for about 100 d of 1976 as observed by Helios 2, and the traces of velocity and magnetic field Z components (see Appendix D for the orientation of the reference system) VZ and BZ (this last one expressed in Alfvén units, see Appendix B.1) for two different time intervals, which have been enlarged in the two inserted small panels. The high velocity interval shows a remarkable anti-correlation which, since the mean magnetic field B0 is oriented away from the Sun, suggests a clear presence of outward oriented Alfvénic fluctuations, given that the sign of the correlation is the sign[−k · B0]. At odds with the previous interval, the slow wind shows that the two traces are rather uncorrelated. For the sake of brevity, we omit showing the very similar behavior for the other two components, within both fast and slow wind.
The discovery of Alfvénic correlations in the solar wind stimulated fundamental remarks by Kraichnan (1974) who, following previous theoretical works by Kraichnan (1965) and Iroshnikov (1963), showed that the presence of a strong correlation between velocity and magnetic fluctuations renders non-linear transfer to small scales less efficient than for the Navier-Stokes equations, leading to a turbulent behavior which is different from that described by Kolmogorov (1941). In particular, when Equation (59) is exactly satisfied, non-linear interactions in MHD turbulent flows cannot exist. This fact introduces a problem in understanding the evolution of MHD turbulence as observed in interplanetary space. Both a strong correlation between velocity and magnetic fluctuations and a well defined turbulence spectrum (Figures 29, 37) are observed, and the existence of the correlations is in contrast with the existence of a spectrum which in turbulence is due to a non-linear energy cascade. Dobrowolny et al. (1980b) started to solve the puzzle on the existence of Alfvénic turbulence, namely the presence of predominately outward propagation and the fact that MHD turbulence with both Alfvén modes present will evolve towards a state where one of the modes disappears. However, a lengthy debate on whether the highly Alfvénic nature of fluctuations is what remains of the turbulence produced at the base of the corona, or whether the solar wind itself is an evolving turbulent magnetofluid, has been stimulating the scientific community for quite a long time.
Radial evolution of Alfvénic turbulence
The degree of correlation not only depends on the type of wind we look at, i.e., fast or slow, but also on the radial distance from the Sun and on the time scale of the fluctuations.
Figure 38 shows the radial evolution of σc (see Appendix B.1) as observed by Helios and Voyager s/c (Roberts et al., 1987b). It is clear enough that σc not only tends to values around 0 as the heliocentric distance increases, but larger and larger time scales are less and less Alfvénic. Values of σc ~ 0 suggest a comparable amount of "outward" and "inward" correlations.
The radial evolution also affects the Alfvén ratio rA (see Appendix B.3.1), as was found by Bruno et al. (1985). However, early analyses (Belcher and Davis Jr, 1971; Solodyna and Belcher, 1976; Matthaeus and Goldstein, 1982b) had already shown that this parameter is usually less than unity. Spectral studies by Marsch and Tu (1990a), reported in Figure 39, showed that within slow wind it is the lowest frequency range that experiences the strongest decrease with distance, while the highest frequency range remains almost unaffected. Moreover, the same study showed that, within fast wind, the whole frequency range experiences a general depletion. The evolution is such that close to 1 AU the value of rA in fast wind approaches that in slow wind.
Moreover, comparing these results with those by Matthaeus and Goldstein (1982b) obtained from Voyager at 2.8 AU, it seems that the evolution recorded within fast wind tends to a sort of limit value around 0.4 − 0.5.
Also Roberts et al. (1990), analyzing fluctuations between 9 h and 3 d, found a similar radial trend. These authors showed that rA dramatically decreases from values around unity at the Earth's orbit towards 0.4 − 0.5 at approximately 8 AU. For larger heliocentric distances, rA seems to stabilize around this last value.
The reason why rA tends to a value less than unity is still an open question, although MHD computer simulations (Matthaeus, 1986) showed that magnetic reconnection and high plasma viscosity can produce values of rA < 1 within the inertial range. Moreover, the magnetic energy excess can be explained as a competing action between the equipartition trend due to linear propagation (or Alfvén effect, Kraichnan, 1965), and a local dynamo effect due to non-linear terms (Grappin et al., 1991); see closure calculations by Grappin et al. (1983) and DNS by Müller and Grappin (2005).
However, this argument predicts an Alfvén ratio rA ≠ 1 but it does not say whether it would be larger or smaller than "1", i.e., we could also have a final excess of kinetic energy.
Histograms of normalized cross-helicity σc showing its evolution between 0.3 (circles), 2 (triangles), and 20 (squares) AU for different time scales: 3 h (top panel), 9 h (middle panel), and 81 h (bottom panel). Image reproduced by permission from Roberts et al. (1987b), copyright by AGU.
Values of the Alfvén ratio rA as a function of frequency and heliocentric distance, within slow (left column) and fast (right column) wind. Image reproduced by permission from Marsch and Tu (1990a), copyright by AGU.
A similar unbalance between magnetic and kinetic energy has recently been found in numerical simulations by Mininni et al. (2003a), already cited in Section 3.1.7. These authors studied the effect of a weak magnetic field at small scales in a system kept in a stationary regime of hydrodynamic turbulence. In these conditions, the dynamo action causes the initial magnetic energy to grow exponentially towards a state of quasi equipartition between kinetic and magnetic energy. This simulation was aiming to provide more insights on a microscopic theory of the alpha-effect, which is responsible for converting part of the toroidal magnetic field on the Sun back to poloidal field to sustain the cycle. However, when the simulation saturates, the unbalance between kinetic and magnetic energy is reminiscent of the conditions in which the Alfvén ratio is found in interplanetary space. Results from the above study can be viewed in the animation of Figure 40. At very early times the fluid is in a strongly turbulent regime as a result of the action of the external force at wave number kforce = 3. An initial magnetic fluctuation is introduced at t = 0 at kseed = 35. The magnetic energy starts growing exponentially fast and, when the simulation reaches the saturation stage, the magnetic power spectrum exceeds the kinetic power spectrum at large wave numbers (i.e., k > kforce), as also observed in Alfvénic fluctuations of the solar wind (Bruno et al., 1985; Tu and Marsch, 1990a) as an asymptotic state (Roberts et al., 1987a,b; Bavassano et al., 2000b) of turbulence.
mpg-Movie (1780.71484375 KB) Still from a movie showing a 128³ numerical simulation, as in Figure 35, but with an external force applied at wave number kforce = 3 (movie kindly provided by D. Gómez). (For video see appendix)
However, when two-fluid effects, such as the Hall current and the electron pressure (Mininni et al., 2003b), are included in the simulation, the dynamo can work more efficiently and the final stage of the simulation is towards equipartition between kinetic and magnetic energy.
On the other hand, Marsch and Tu (1993a) analyzed several intervals of interplanetary observations to look for a linear relationship between the mean electromotive force ε = δVδB, generated by the turbulent motions, and the mean magnetic field B0, as predicted by simple dynamo theory (Krause and Rädler, 1980). Although a sizable electromotive force was found in interplanetary fluctuations, these authors could not establish any simple linear relationship between B0 and ε.
More recently, Bavassano and Bruno (2000) performed a three-fluid analysis of solar wind Alfvénic fluctuations in the inner heliosphere, in order to evaluate the effect of disregarding the multi-fluid nature of the wind on the factor relating velocity and magnetic field fluctuations. It is well known that to convert magnetic field fluctuations into Alfvén units we divide by the factor Fp = (4πMpNp)^1/2. However, fluctuations in velocity tend to be smaller than fluctuations in Alfvén units. In Figure 41 we show scatter plots between the z-component of the Alfvén velocity and the proton velocity fluctuations. The z-direction has been chosen to be that of Vp × B, where Vp is the proton bulk flow velocity and B is the mean field direction. The reason for this choice is that this direction is the least affected by compressive phenomena deriving from the wind dynamics. These results show that although the correlation coefficient in both cases is around −0.95, the slope of the best fit straight line passes from 1 at 0.29 AU to a slope considerably different from 1 at 0.88 AU.
Scatter plot between the z-component of the Alfvén velocity and the proton velocity fluctuations at about 2 mHz. Data refer to Helios 2 observations at 0.29 AU (left panel) and 0.88 AU (right panel). Image adapted from Bavassano and Bruno (2000).
Belcher and Davis Jr (1971) suggested that this phenomenon had to be ascribed to the presence of α particles and to an anisotropy in the thermal pressure. Moreover, taking into account the multi-fluid nature of the solar wind, the dividing factor should become F = FpFiFa, where Fi would take into account the presence of other species besides protons, and Fa would take into account the presence of pressure anisotropy P∥ ≠ P⊥, where ∥ and ⊥ refer to the background field direction. In particular, following Bavassano and Bruno (2000), the complete expressions for Fi and Fa are
$$F_i = \left[ {1 + \sum\limits_s {(M_s N_s )/(M_p N_p )} } \right]^{1/2}$$
$$F_a = \left[ {1 - \frac{{4\pi }} {{B_0^2 }}\sum\limits_s {(P_{\parallel s} - P_{ \bot s} + M_s N_s U_s^2 )} } \right]^{ - 1/2} ,$$
where the letter "s" stands for the s-th species, Us = Vs − V being its velocity in the center of mass frame of reference. Vs is the velocity of the species "s" in the s/c frame, and V = (ΣsMsNsVs)/(ΣsMsNs) is the velocity of the center of mass.
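A direct transcription of these two expressions might read as follows, in Gaussian units and with the species arrays ordered protons first; the function name, the array layout, and the choice of summing Fi over the non-proton species are assumptions of this example.

import numpy as np

def multifluid_factors(M, N, P_par, P_perp, U, B0):
    # Species arrays ordered with protons first (index 0);
    # Gaussian units, as in the two expressions above
    F_i = np.sqrt(1.0 + np.sum(M[1:] * N[1:]) / (M[0] * N[0]))
    F_a = (1.0 - (4 * np.pi / B0 ** 2)
           * np.sum(P_par - P_perp + M * N * U ** 2)) ** -0.5
    return F_i, F_a

# Full conversion factor: F = F_p * F_i * F_a, with
# F_p = np.sqrt(4 * np.pi * M[0] * N[0])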
Bavassano and Bruno (2000) analyzed several time intervals within the same co-rotating high velocity stream observed at 0.3 and 0.9 AU and performed the analysis using the new factor "F" to express magnetic field fluctuations in Alfvén units, taking into account the presence of α particles and electrons, besides the protons. However, the correction turned out to be insufficient to bring back to "1" the slope of the δVPz − δVAz relationship shown in the right panel of Figure 41. In conclusion, the radial variation of the Alfvén ratio rA towards values less than 1 is not completely due to a missed inclusion of multi-fluid effects in the conversion from magnetic field to Alfvén units. Thus, we are left with the possibility that the observed depletion of rA is due to a natural evolution of turbulence towards a state in which magnetic energy becomes dominant (Grappin et al., 1991; Roberts et al., 1992; Roberts, 1992), as observed in the animation of Figure 40 taken from numerical simulations by Mininni et al. (2003a), or that it is due to the increased presence of magnetic structures like MFDTs (Tu and Marsch, 1993).
Turbulence studied via Elsässer variables
The Alfvénic character of solar wind fluctuations, especially within co-rotating high velocity streams, suggests using the Elsässer variables (Appendix B.3) to separate the "outward" from the "inward" contribution to turbulence. These variables, used in theoretical studies by Dobrowolny et al. (1980a,b); Veltri et al. (1982); Marsch and Mangeney (1987); and Zhou and Matthaeus (1989), were used for the first time in interplanetary data analysis by Grappin et al. (1990) and Tu et al. (1989b). In the following, we will describe and discuss several differences between "outward" and "inward" modes, but the most important one is about their origin. As a matter of fact, the existence of the Alfvénic critical point implies that only "outward" propagating waves of solar origin will be able to escape from the Sun. "Inward" waves, being faster than the wind bulk speed, will precipitate back to the Sun if they are generated before this point. The most important implication of this scenario is that "inward" modes observed beyond the Alfvénic point cannot have a solar origin but must have been created locally by some physical process. Obviously, for the other Alfvénic component, both solar and local origins are still possible.
Ecliptic scenario
Early studies by Belcher and Davis Jr (1971), performed on magnetic field and velocity fluctuations recorded by Mariner 5 during its trip to Venus in 1967, already suggested that the majority of the Alfvénic fluctuations are characterized by an "outward" sense of propagation, and that the best regions where to observe these fluctuations are the trailing edges of high velocity streams. Moreover, the Helios spacecraft, repeatedly orbiting around the Sun between 0.3 and 1 AU, gave the first and unique opportunity to study the radial evolution of turbulence (Bavassano et al., 1982b; Denskat and Neubauer, 1983). Subsequently, when Elsässer variables were introduced in the analysis (Grappin et al., 1989), it finally became possible not only to evaluate the "inward" and "outward" Alfvénic contributions to turbulence but also to study the behavior of these modes as a function of the wind speed and radial distance from the Sun.
Figure 42 (Tu et al., 1990) clearly shows the behavior of e± (see Appendix B.3) across a high speed stream observed at 0.3 AU. Within fast wind e+ is much higher than e− and its spectral slope shows a break. Lower frequencies have a flatter slope while the slope of higher frequencies is closer to a Kolmogorov-like one. e− has a similar break but the slope of lower frequencies follows the Kolmogorov slope, while higher frequencies form a sort of plateau.
This configuration vanishes when we pass to the slow wind where both spectra have almost equivalent power density and follow the Kolmogorov slope. This behavior, for the first time reported by Grappin et al. (1990), is commonly found within co-rotating high velocity streams, although much more clearly expressed at shorter heliocentric distances, as shown below.
Spectral power associated with outward (right panel) and inward (left panel) Alfvénic fluctuations, based on Helios 2 observations in the inner heliosphere, are concisely reported in Figure 43. The e− spectrum, if we exclude the high frequency range of the spectrum relative to fast wind at 0.4 AU, shows an average power law profile with a slope of −1.64, consistent with Kolmogorov's scaling. The lack of radial evolution of the e− spectrum led Tu and Marsch (1990a) to name it "the background spectrum" of solar wind turbulence.
Power density spectra e± computed from δz± fluctuations for different time intervals indicated by the arrows. Image reproduced by permission from Tu et al. (1990), copyright by AGU.
Power density spectra e− and e+ computed from δz− and δz+ fluctuations. Spectra have been computed within fast (H) and slow (L) streams around 0.4 and 0.9 AU as indicated by different line styles. The thick line represents the average power spectrum obtained from the approximately 50 e− spectra, regardless of distance and wind speed. The shaded area is the 1σ width related to the average. Image reproduced by permission from Tu and Marsch (1990b), copyright by AGU.
Quite different is the behavior of the e+ spectrum. Close to the Sun and within fast wind, this spectrum appears flatter at low frequency and steeper at high frequency. The overall evolution is towards the "background spectrum" by the time the wind reaches 0.8 AU.
In particular, Figure 43 tells us that the radial evolution of the normalized cross-helicity has to be ascribed mainly to the radial evolution of e+ rather than to both Alfvénic fluctuations (Tu and Marsch, 1990a). In addition, Figure 44, relative to the Elsässer ratio rE, shows that the hourly frequency range, up to ~ 2 × 10−3 Hz, is the most affected by this radial evolution.
Ratio of e− over e+ within fast wind at 0.3 and 0.9 AU in the left and right panels, respectively. Image reproduced by permission from Marsch and Tu (1990a), copyright by AGU.
As a matter of fact, this radial evolution can be inferred from Figure 45, where values of e− and e+ together with solar wind speed, magnetic field intensity, and magnetic field and particle density compression are shown between 0.3 and 1 AU during the primary mission of Helios 2. It clearly appears that enhancements of e− and depletions of e+ are connected to compressive events, particularly within slow wind. Within fast wind the average level of e− is rather constant during the radial excursion, while the level of e+ dramatically decreases, with a consequent increase of the Elsässer ratio (see Appendix B.3.1).
Further ecliptic observations (see Figure 46) do not indicate any clear radial trend for the Elsässer ratio between 1 and 5 AU, and its value seems to fluctuate between 0.2 and 0.4.
However, low values of the normalized cross-helicity can also be associated with a particular type of incompressive events, which Tu and Marsch (1991) called Magnetic Field Directional Turnings or MFDT. These events, found within slow wind, were characterized by very low values of σc, close to zero, and low values of the Alfvén ratio, around 0.2. Moreover, the spectral slope of e+, e− and the power associated with the magnetic field fluctuations was close to the Kolmogorov slope. These intervals were only weakly compressive, and short period fluctuations, from a few minutes to about 40 min, were nearly pressure balanced. Thus, differently from what had previously been observed by Bruno et al. (1989), who found low values of cross-helicity often accompanied by compressive events, these MFDTs were mainly incompressive. In these structures most of the fluctuating energy resides in the magnetic field rather than in the velocity, as shown in Figure 47 taken from Tu and Marsch (1991). It follows that the amplitudes of the fluctuating Alfvénic fields δz± turn out to be comparable and, consequently, the derived parameter σc → 0. Moreover, the presence of these structures would also be able to explain the fact that rA < 1. Tu and Marsch (1991) suggested that these fluctuations might derive from a special kind of magnetic structures, which obey the MHD equations and for which (B · ∇)B = 0, while field magnitude, proton density, and temperature are all constant. The same authors suggested the possibility of an interplanetary turbulence mainly made of outwardly propagating Alfvén waves and convected structures represented by MFDTs. In other words, this model assumed that the spectrum of e− would be caused by MFDTs. The different radial evolution of the power associated with these two kinds of components would determine the radial evolution observed in both σc and rA. Although the results were not quantitatively satisfactory, they did show a qualitative agreement with the observations.
Upper panel: solar wind speed and solar wind speed multiplied by σc. In the lower panels the authors reported: σc, rE, e−, e+, magnetic compression, and number density compression, respectively. Image reproduced by permission from Bruno and Bavassano (1991), copyright by AGU.
Ratio of e− over e+ within fast wind between 1 and 5 AU as observed by Ulysses in the ecliptic. Image reproduced by permission from Bavassano et al. (2001), copyright by AGU.
Left column: e+ and e− spectra (top) and σc (bottom) during a slow wind interval at 0.9 AU. Right column: kinetic eu and magnetic eB energy spectra (top) computed from the trace of the relative spectral tensor, and spectrum of the Alfvén ratio rA (bottom). Image reproduced by permission from Tu and Marsch (1991).
These convected structures are an important ingredient of the turbulent evolution of the fluctuations and can be identified as the 2D incompressible turbulence suggested by Matthaeus et al. (1990) and Tu and Marsch (1991).
As a matter of fact, a statistical analysis by Bruno et al. (2007) showed that magnetically dominated structures represent an important component of the interplanetary fluctuations within the MHD range of scales. Indeed, these magnetic structures and Alfvénic fluctuations dominate at scales typical of MHD turbulence. For instance, this analysis suggested that more than 20% of all analyzed intervals at the 1 hr scale are magnetically dominated and only weakly Alfvénic. Observations in the ecliptic performed by the Helios and WIND s/c, and out of the ecliptic by Ulysses, showed that these advected, mostly incompressive structures are ubiquitous in the heliosphere and can be found in both fast and slow wind.
It is interesting to look at the radial evolution of interplanetary fluctuations in terms of the normalized cross-helicity σc and the normalized residual energy σr (see Appendix B.3).
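In the notation of Appendix B.3, with e± the energies associated with δz± and ev, eb the kinetic and magnetic energies of the fluctuations (the latter in Alfvén units), these two parameters read

$$\sigma _c = \frac{e^ + - e^ - }{e^ + + e^ - },\qquad \sigma _r = \frac{e^v - e^b }{e^v + e^b },\qquad \sigma _c^2 + \sigma _r^2 \leq 1.$$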
These results, shown in the left panels of Figure 48, highlight the presence of a radial evolution of the fluctuations towards a double-peaked distribution during the expansion of the solar wind. The analysis was performed on a co-rotating fast stream observed by Helios 2 at three different heliocentric distances over consecutive solar rotations (see Figure 16 and related text). Closer to the Sun, at 0.3 AU, the distribution is well centered around σr = 0 and σc = 1, suggesting that outwardly propagating Alfvénic fluctuations dominate the scenario. By the time the wind reaches 0.7 AU, the appearance of a tail towards negative values of σr and lower values of σc indicates a partial loss of the Alfvénic character in favor of fluctuations characterized by a stronger magnetic energy content. This clear tendency ends up with the appearance of a secondary peak by the time the wind reaches 0.88 AU. This new family of fluctuations forms around σr = −1 and σc = 0. The values of σr and σc which characterize this new population are typical of the MFDT structures described by Tu and Marsch (1991). Together with the appearance of these fluctuations, the main peak characterized by Alfvén-like fluctuations loses much of its original character shown at 0.3 AU. The yellow dashed line that can be seen in the left panels of Figure 48 would be the linear relation between σr and σc in case fluctuations were made solely of outwardly propagating Alfvén waves and advected MFDTs (Tu and Marsch, 1991), and it would replace the canonical, quadratic relation σr² + σc² ≤ 1 represented by the yellow circle drawn in each panel. However, this yellow dashed line does not seem to fit the observed distributions satisfactorily.
Left, from top to bottom: frequency histograms of σr vs. σc (here σC and σR) for fast wind observed by Helios 2 at 0.29, 0.65 and 0.88 AU, respectively. The color code, for each panel, is normalized to the maximum of the distribution. The yellow circle represents the limiting value given by σc² + σr² = 1 while the yellow dashed line represents the relation σr = σc − 1, see text for details. Right, from top to bottom: frequency histograms of σr vs. σc (here σC and σR) for slow wind observed by Helios 2 at 0.32, 0.69 and 0.90 AU, respectively. The color code, for each panel, is normalized to the maximum of the distribution. Image reproduced by permission from Bruno et al. (2007), copyright EGU.
Quite different is the situation within slow wind, as shown in the right panels of Figure 48. As a matter of fact, these histograms do not show any striking radial evolution like in the case of fast wind. High values of σc are statistically much less relevant than in fast wind, and a well defined population characterized by σr ≃ −1 and σc ≃ 0, already present at 0.3 AU, becomes one of the dominant peaks of the histogram as the wind expands. This last feature is really at odds with what happens in fast wind and highlights the different nature of the fluctuations which, in this case, are magnetically dominated. The same authors obtained very similar results for fast and slow wind also from the same type of analysis performed on WIND and Ulysses data which, in addition, confirmed the incompressive character of the Alfvénic fluctuations and highlighted a low compressive character also for the populations characterized by σr ~ −1 and σc ~ 0.
As for the origin of these structures, the same authors suggest that they might not only be created locally during the non-linear evolution of the fluctuations but might also have a solar origin. The reason why they are not seen close to the Sun, within fast wind, might be that these fluctuations, mainly non-compressive, change the direction of the magnetic field similarly to Alfvénic fluctuations but produce a much smaller effect, since the associated δb is smaller than the one corresponding to Alfvénic fluctuations. As the wind expands, the Alfvénic component undergoes non-linear interactions which produce a transfer of energy to smaller and smaller scales while these structures, being advected, have a much longer lifetime. As the expansion goes on, the relative weight of these fluctuations grows and they start to be detected.
On the nature of Alfvénic fluctuations
The Alfvénic nature of outward modes has been widely recognized through several frequency decades up to periods of the order of several hours in the s/c rest frame (Bruno et al., 1985). Conversely, the nature of those fluctuations identified by δz−, called "inward Alfvén modes", is still not completely clear. There are many clues which suggest that these fluctuations, especially in the hourly frequency range, have a non-Alfvénic nature. Several studies on this topic in the low frequency range have suggested that structures convected by the wind could well mimic non-existent inward propagating modes (see the review by Tu and Marsch, 1995a). However, other studies (Tu et al., 1989b) have also found, in the high frequency range and within fast streams, a certain anisotropy in the components which resembles the anisotropy found for outward modes. Thus, these observations suggest a close link between inward modes at high frequency and outward modes, possibly sharing the same nature.
Power density spectra for e+ and e− during a high velocity stream observed at 0.3 AU. Best fit lines for different frequency intervals and related spectral indices are also shown. Vertical lines fix the limits of five different frequency intervals analyzed by Bruno et al. (1996). Image reproduced by permission, copyright by AIP.
Figure 49 shows power density spectra for e+ and e− during a high velocity stream observed at 0.3 AU (similar spectra can also be found in Grappin et al., 1990 and Tu et al., 1989b). The observed spectral indices, reported on the plot, are typical of high velocity streams encountered at short heliocentric distances. Bruno et al. (1996) analyzed the power relative to e+ and e− modes within five frequency bands, ranging from roughly 12 h to 3 min, delimited by the vertical solid lines equally spaced in log-scale. The integrated power associated with e+ and e− within the selected frequency bands is shown in Figure 50. Passing from slow to fast wind, e+ grows much more within the highest frequency bands. Moreover, there is a good correlation between the profiles of e− and e+ within the first two highest frequency bands, as already noticed by Grappin et al. (1990), who looked at the correlation between daily averages of e− and e+ in several frequency bands, even widely separated in frequency. These results led the above authors to conclude that this behavior was reminiscent of the non-local coupling in k-space between opposite modes found by Grappin et al. (1982) in homogeneous MHD. Expansion effects were also taken into account by Velli et al. (1990), who modeled inward modes as that fraction of outward modes back-scattered by the inhomogeneities of the medium due to expansion effects (Velli et al., 1989). However, following this model we would often expect the two populations to be somehow related to each other, but in situ observations do not support this prediction (Bavassano and Bruno, 1992).
An alternative generation mechanism was proposed by Tu et al. (1989b), based on the parametric decay of e+ in the high frequency range (Galeev and Oraevskii, 1963). This mechanism is such that large amplitude Alfvénic waves, unstable to perturbations of random field intensity and density fluctuations, would decay into two secondary Alfvénic modes propagating in opposite directions and a sound-like wave propagating in the same direction as the pump wave. Most of the energy of the mother wave would go into the sound-like fluctuation and the backward propagating Alfvénic mode. On the other hand, the production of e− modes by parametric instability is not particularly fast if the plasma β ~ 1, as in the solar wind (Goldstein, 1978; Derby, 1978), since this condition slows down the growth rate of the instability. It is also true that numerical simulations by Malara et al. (2000, 2001a, 2002) and Primavera et al. (2003) have shown that parametric decay can still be thought of as a possible mechanism of local production of turbulence within the polar wind (see Section 4). However, the strong correlation between e+ and e− profiles found only within the highest frequency bands would support this mechanism and would suggest that e− modes within these frequency bands have an Alfvénic nature. Another feature shown in Figure 50 that favors these conclusions is the fact that both δz+ and δz− keep the direction of their minimum variance axis aligned with the background magnetic field only within the fast wind, and exclusively within the highest frequency bands. This would not contradict the view suggested by Barnes (1981). Following this model, the majority of Alfvénic fluctuations propagating in one direction have the tip of the magnetic field vector randomly wandering on the surface of half a sphere of constant radius, centered along the ambient field B∘. In this situation the minimum variance would be oriented along B∘, although this would not represent the propagation direction of each wave vector, which could propagate even at large angles from this direction. This situation can be seen in the right hand panel of Figure 98 of Section 10, which refers to a typical Alfvénic interval within fast wind. Moreover, δz+ fluctuations show a persistent anisotropy throughout the fast stream, since the minimum variance axis remains quite aligned to the background field direction. This situation degrades only at the very low frequencies, where θ+, the angle between the minimum variance direction of δz+ and the direction of the ambient magnetic field, starts wandering between 0° and 90°. On the contrary, in slow wind, since Alfvénic modes have a smaller amplitude, compressive structures due to the dynamic interaction between slow and fast wind or of solar origin push the minimum variance direction to larger angles with respect to B∘, independently of the frequency range.
Left panel: wind speed profile is shown in the top panel. Power density associated with e+ (thick line) and e− (thin line), within the five frequency bands chosen, is shown in the lower panels. Right panel: wind speed profile is shown in the top panel. Values of the angle θ± between the minimum variance direction of δz+ (thick line) and δz− (thin line) and the direction of the ambient magnetic field are shown in the lower panels, relatively to each frequency band. Image reproduced by permission from Bruno et al. (1996), copyright by AIP.
In a way, we can say that within the stream both θ+ and θ−, the latter being the angle between the minimum variance direction of δz− and the direction of the ambient magnetic field, show a similar behavior as we look at lower and lower frequencies. The only difference is that θ− reaches higher values at higher frequencies than θ+. This was interpreted (Bruno et al., 1996) as due to the fact that transverse fluctuations of δz− carry much less power than those of δz+ and, consequently, are more easily influenced by perturbations represented by the background, convected structure of the wind (e.g., TD's and PBS's). As a consequence, at low frequency δz− fluctuations may represent a signature of the compressive component of the turbulence while, at high frequency, they might reflect the presence of inward propagating Alfvén modes. Thus, while for periods of several hours δz+ fluctuations can still be considered as the product of Alfvén modes propagating outward (Bruno et al., 1985), δz− fluctuations are rather due to the underlying convected structure of the wind. In other words, high frequency turbulence can be looked at mainly as a mixture of inward and outward Alfvénic fluctuations plus, presumably, sound-like perturbations (Marsch and Tu, 1993a). On the other hand, low frequency turbulence would be made of outward Alfvénic fluctuations and static convected structures representing the inhomogeneities of the background medium.
Observations of MHD Turbulence in the Polar Wind
In 1994 – 1995, Ulysses gave us the opportunity to look at the solar wind out of the ecliptic, providing us with new exciting observations. For the first time heliospheric instruments were sampling pure, fast solar wind, free of any dynamical interaction with slow wind. There is one figure that within our scientific community has become as popular as "La Gioconda" by Leonardo da Vinci within the world of art. This figure, produced at LANL (McComas et al., 1998), is shown in the upper left panel of Figure 51, taken from a later paper by McComas et al. (2003), and summarizes the most important aspects of the large scale structure of the polar solar wind during the minimum of the solar activity phase, as indicated by the low value of the Wolf number reported in the lower panel. It shows the speed profile, proton number density profile and magnetic field polarity vs. heliographic latitude during the first complete Ulysses polar orbit. Fast wind fills the northern and southern hemispheres of the Sun almost completely, except for a narrow latitudinal belt around the equator, where the slow wind dominates. Flow velocity, which rapidly increases from the equator towards higher latitudes, quickly reaches a plateau and the wind escapes the polar regions with a rather uniform speed. Moreover, polar wind is characterized by a lower number density and shows rather uniform magnetic polarity of opposite sign, depending on the hemisphere. Thus, the main difference between ecliptic and polar wind is that the latter completely lacks dynamical interactions with slower plasma and freely flows into interplanetary space. The presence or absence of this phenomenon, as we will see in the following pages, plays a major role in the development of MHD turbulence during the wind expansion.
During solar maximum (see the upper right panel of Figure 51) the situation dramatically changes and the equatorial wind extends to higher latitudes, to the extent that there is no longer any difference between polar and equatorial wind.
Large scale solar wind profile as a function of latitude during minimum (left panel) and maximum (right panel) solar cycle phases. The sunspot number is also shown in the bottom panels. Image reproduced by permission from McComas et al. (2003), copyright by AGU.
Evolving turbulence in the polar wind
Ulysses observations gave us the possibility to test whether or not we could forecast the turbulent evolution in the polar regions on the basis of what we had learned in the ecliptic. We knew that, in the ecliptic, velocity shear, parametric decay, and interaction of Alfvénic modes with convected structures (see Sections 3.2.1, 5.1) all play some role in the turbulent evolution and, before Ulysses reached the polar regions of the Sun, three possibilities were given:
Alfvénic turbulence would not have relaxed towards standard turbulence, because the large scale velocity shears would have been much less relevant (Grappin et al., 1991);
since the magnetic field would be smaller far from the ecliptic, at large heliocentric distances even small shears would lead to an isotropization of the fluctuations and produce a turbulent cascade faster than the one observed at low latitudes, and the subsequent evolution would take less time (Roberts et al., 1990);
there would still be evolution due to interaction with convected plasma and field structures, but it would be slower than in the ecliptic since the power associated with Alfvénic fluctuations would largely dominate over the inhomogeneities of the medium. Thus, Alfvénic correlations should last longer than in the ecliptic plane, with a consequent slower evolution of the normalized cross-helicity (Bruno, 1992).
A fourth possibility was added by Tu and Marsch (1995a), based on their model (Tu and Marsch, 1993). Following this model, they assumed that polar fluctuations were composed of outward Alfvénic fluctuations and MFDT. The spectra of these components would decrease with radial distance because of a WKB evolution and the convective effects of the diverging flow. As the distance increases, the field becomes more transverse with respect to the radial direction, the s/c would sample more convective structures and, as a consequence, would observe a decrease of both σc and rA.
Today we know that polar Alfvénic turbulence evolves in the same way it does in the ecliptic plane, but much more slowly. Moreover, the absence of strong velocity shears and enhanced compressive phenomena suggests that some other mechanism, possibly based on the parametric decay instability, might also play a role in the local production of turbulence (Bavassano et al., 2000a; Malara et al., 2001a, 2002; Primavera et al., 2003).
The first results of Ulysses magnetic field and plasma measurements in the polar regions, i.e., above ±30° latitude (left panel of Figure 51), revealed the presence of Alfvénic correlations over a range of periods from less than 1 to more than 10 h (Balogh et al., 1995; Smith et al., 1995; Goldstein et al., 1995a), in very good agreement with ecliptic observations (Bruno et al., 1985). However, it is worth noticing that Helios observations referred to very short heliocentric distances around 0.3 AU, while the above Ulysses observations were taken up to 4 AU. As a matter of fact, these long period Alfvén waves observed in the ecliptic, in the inner solar wind, become less prominent as the wind expands due to stream-stream dynamical interaction effects (Bruno et al., 1985) and strong velocity shears (Roberts et al., 1987a). At high latitude, the relative absence of enhanced dynamical interaction between flows at different speed and, as a consequence, the absence of strong velocity shears favors the survival of these extremely low frequency Alfvénic fluctuations for larger heliocentric excursions.
Figure 52 shows the hourly correlation coefficient for the transverse components of magnetic and velocity fields as Ulysses climbs to the south pole and during the fast latitude scanning that brought the s/c from the south to the north pole of the Sun in just half a year. While the equatorial phase of the Ulysses journey is characterized by low values of the correlation coefficients, a gradual increase can be noticed starting around the middle of 1993, when the s/c starts to increase its heliographic latitude from the ecliptic plane up to 80.2° south at the end of 1994. Not only did the degree of δb − δv correlation resemble Helios observations, but the spectra of these fluctuations also showed characteristics very similar to those observed in the ecliptic within fast wind, like the spectral index of the components, which was found to be flat at low frequency and more Kolmogorov-like at higher frequencies (Smith et al., 1995). Balogh et al. (1995) and Forsyth et al. (1996) discussed magnetic fluctuations in terms of the latitudinal and radial dependence of their variances. Similarly to what had been found within fast wind in the ecliptic (Mariani et al., 1978; Bavassano et al., 1982b; Tu et al., 1989b; Roberts et al., 1992), the variance of the magnetic field magnitude was much smaller than the variance associated with the components. Moreover, transverse variances had consistently higher values than the one along the radial direction and were also much more sensitive to latitude excursion, as shown in Figure 53. In addition, the level of the normalized hourly variances of the transverse components observed during the ecliptic phase, right after the compressive region ahead of co-rotating interacting regions, was maintained at the same level once the s/c entered the pure polar wind. Again, these observations showed that the fast wind observed in the ecliptic was coming from the equatorward extension of polar coronal holes.
Magnetic field and velocity hourly correlation vs. heliographic latitude. Image reproduced by permission from Smith et al. (1995), copyright by AGU.
Horbury et al. (1995c) and Forsyth et al. (1996) showed that the interplanetary magnetic field fluctuations observed by Ulysses continuously evolve within the fast polar wind, at least out to 4 AU. Since this evolution was observed within the polar wind, rather free of co-rotating and transient events like those characterizing low latitudes, they concluded that some other mechanism was at work and this evolution was an intrinsic property of turbulence.
Results in Figure 54 show the evolution of the spectral slope computed across three different time scale intervals. The smallest time scales show a clear evolution that keeps going past the highest latitude on day 256, strongly suggesting that this evolution is a radial rather than a latitudinal effect. Horbury et al. (1996a) worked on determining the rate of turbulent evolution for the polar wind.
They calculated the spectral index at different frequencies from the scaling of the second order structure function (see Section 7 and papers by Burlaga, 1992a,b; Marsch and Tu, 1993a; Ruzmaikin et al., 1995; and Horbury et al., 1996b), since the spectral scaling α is related to the scaling s of the structure function by the relation α = s + 1 (Monin and Yaglom, 1975). Horbury et al. (1996a), studying variations of the spectral index with frequency for polar turbulence, found that there are two frequency ranges where the spectral index is rather steady. The first range is around 10−2 Hz with a spectral index around −5/3, while the second range is at very low frequencies with a spectral index around −1. This last range is the one where Goldstein et al. (1995a) found the best examples of Alfvénic fluctuations. Similarly, ecliptic studies found that the best Alfvénic correlations belonged to the hourly, low frequency regime (Bruno et al., 1985).
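In explicit form, denoting by S2(τ) the second order structure function of a given field component and writing the power density spectrum as W(f) ∝ f−α, the relation used above reads

$$S_2 (\tau ) = \left\langle {|B(t + \tau ) - B(t)|^2 } \right\rangle \propto \tau ^s \quad \Rightarrow \quad \alpha = s + 1,$$

so that, for instance, the Kolmogorov value s = 2/3 yields α = 5/3.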
Normalized magnetic field components and magnitude hourly variances plotted vs. heliographic latitude during a complete latitude survey by Ulysses. Image reproduced by permission from Forsyth et al. (1996), copyright by AGU.
Spectral indices of magnetic fluctuations within three different time scale intervals, as indicated in the plot. The bottom panel shows the heliographic latitude and heliocentric distance of Ulysses. Image reproduced by permission from Horbury et al. (1995c), copyright by AGU.
Horbury et al. (1995a) presented an analysis of the high latitude magnetic field using a fractal method. Within the solar wind context, this method was described for the first time by Burlaga and Klein (1986) and Ruzmaikin et al. (1993), and is based on the estimate of the scaling of the length function L(τ) with the scale τ. This function is closely related to the first order structure function and, if the signal is statistically self-similar, has scaling properties L(τ) ~ τℓ, where ℓ is the scaling exponent. It follows that L(τ) is an estimate of the amplitude of the fluctuations at scale τ, and the relation that binds L(τ) to the variance of the fluctuations (δB)² ~ τ^s(2) is:
$$L(\tau ) \sim N(\tau )\left[ {(\delta B)^2 } \right]^{1/2} \propto \tau ^{s(2)/2 - 1} ,$$
where N(τ) represents the number of points at scale τ and scales like τ−1. Since the power density spectrum W(f) is related to (δB)² through the relation fW(f) ~ (δB)², if W(f) ~ f−α, then s(2) = α − 1 and, as a consequence, α = 2ℓ + 3 (Marsch and Tu, 1996). Thus, it is straightforward to estimate the spectral index at a given scale or frequency without using spectral methods, but simply by computing the length function.
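As a rough illustration of this procedure, the following minimal Python sketch (our own construction, not the original authors' code; all variable names are hypothetical) estimates ℓ from a synthetic Brownian signal, for which W(f) ~ f−2 and hence α = 2 is expected:

    import numpy as np

    rng = np.random.default_rng(0)
    b = np.cumsum(rng.standard_normal(2 ** 16))  # Brownian test signal, W(f) ~ f^-2

    # scales tau in units of the sampling time
    taus = np.unique(np.logspace(0.5, 3.5, 25).astype(int))

    def length_function(b, taus):
        # L(tau) ~ N(tau) * <(dB)^2>^(1/2), with N(tau) ~ tau^-1
        out = []
        for tau in taus:
            db = b[tau:] - b[:-tau]  # increments at scale tau
            out.append((len(b) / tau) * np.sqrt(np.mean(db ** 2)))
        return np.array(out)

    # ell is the slope of log L(tau) vs. log tau; then alpha = 2*ell + 3
    ell = np.polyfit(np.log(taus), np.log(length_function(b, taus)), 1)[0]
    print(f"ell = {ell:.2f}, alpha = {2 * ell + 3:.2f}")  # ~ -0.5 and ~ 2.0

For real data one would simply replace b with the measured field component and restrict taus to the scales of interest.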
Spectral exponents for the Bz component estimated from the length function computed from Ulysses magnetic field data, when the s/c was at about 4 AU and ~ −50° latitude. Different symbols refer to different time intervals as reported in the graph. Image reproduced by permission from Horbury et al. (1995a).
Results in Figure 55 show the existence of two different regimes, one with a spectral index around the Kolmogorov scaling extending from 10^1.5 to 10^3 s and, separated by a clear breakpoint at scales of about 10^3 s, a flatter and flatter spectral exponent for larger and larger scales. These observations were quite similar to what had been observed by Helios 2 in the ecliptic, although the turbulence state recorded by Ulysses turned out to be more evolved than the situation seen at 0.3 AU and, perhaps, more similar to the turbulence state observed around 1 AU, as shown by Marsch and Tu (1996). These authors compared the spectral exponents, estimated using the same method as Horbury et al. (1995a), from Helios 2 magnetic field observations at two different heliocentric distances: 0.3 and 1.0 AU. The comparison with Ulysses results is shown in Figure 56, where it appears rather clear that the slope of the Bz spectrum experiences a remarkable evolution during the wind expansion between 0.3 and 4 AU. Obviously, this comparison is meaningful under the reasonable hypothesis that fluctuations observed by Helios 2 at 0.3 AU are representative of out-of-the-ecliptic solar wind (Marsch and Tu, 1996). This figure also shows that the degree of spectral evolution experienced by the fluctuations when observed at 4 AU at high latitude is comparable to Helios observations at 1 AU in the ecliptic. Thus, the spectral evolution at high latitude is present, although quite slower with respect to the ecliptic.
Spectral exponents for the Bz component estimated from the length function computed from Helios and Ulysses magnetic field data. Ulysses length function (dotted line) is the same shown in the paper by Horbury et al. (1995a) when the s/c was at about 4 AU and ~ −50° latitude. Image reproduced by permission from Marsch and Tu (1996), copyright by AGU.
Forsyth et al. (1996) studied the radial dependence of the normalized hourly variances of the components BR, BT and BN and of the magnitude |B| of the magnetic field (see Appendix D to learn about the RTN reference system). The variance along the radial direction was computed as σR² = 〈BR²〉 − 〈BR〉² and subsequently normalized to |B|² to remove the field strength dependence. Variances along the other two directions T and N were similarly defined. Fitting the radial dependence with a power law of the form r−α, but limiting the fit to the radial excursion between 1.5 and 3 AU, these authors obtained α = 3.39 ± 0.07 for σR², α = 3.45 ± 0.09 for σT², α = 3.37 ± 0.09 for σN², and α = 2.48 ± 0.14 for σB². Thus, for hourly variances, the power associated with the components showed a radial dependence stronger than the one predicted by the WKB approximation, which would give α = 3. These authors also showed that including data between 3 and 4 AU, corresponding to intervals characterized by compressional features mainly due to high latitude CMEs, they would obtain less steep radial gradients, much closer to the WKB type. These results suggested that compressive effects can feed energy at the smallest scales, counteracting dissipative phenomena and mimicking a WKB-like behavior of the fluctuations. However, they concluded that for lower frequencies, below the frequency break point, fluctuations do follow the WKB radial evolution.
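A minimal sketch of the fitting step involved in this kind of analysis might look as follows in Python (synthetic inputs and hypothetical names, just to illustrate the power law fit; the real analysis obviously starts from the Ulysses hourly averages):

    import numpy as np

    rng = np.random.default_rng(1)
    r = np.linspace(1.5, 3.0, 40)  # heliocentric distance [AU]
    # fake normalized hourly variances sigma_R^2/|B|^2 decaying as r^-3.39
    var_norm = 0.1 * r ** (-3.39) * rng.lognormal(0.0, 0.1, r.size)

    # power law fit var = c * r^-alpha, i.e., log var = log c - alpha * log r
    slope, _ = np.polyfit(np.log(r), np.log(var_norm), 1)
    print(f"alpha = {-slope:.2f}")  # ~ 3.39 by construction

The hourly variance itself is simply σR² = 〈BR²〉 − 〈BR〉², computed over each hour of data and then normalized to |B|².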
Horbury and Balogh (2001) presented a detailed comparison between Ulysses and Helios observations about the evolution of magnetic field fluctuations in high-speed solar wind. Ulysses results, between 1.4 and 4.1 AU, were presented as wave number dependence of radial and latitudinal power scaling. The first results of this analysis showed (Figure 3 of their work) a general decrease of the power levels with solar distance, in both magnetic field components and magnitude fluctuations. In addition, the power associated with the radial component was always less than that of the transverse components, as already found by Forsyth et al. (1996). However, Horbury and Balogh (2001), supposing a possible latitude dependence, performed a multiple linear regression of the type:
$$\log _{10} w = A_p + B_p \log _{10} r + C_p \sin \theta ,$$
where w is the power density integrated in a given spectral band, r is the radial distance and θ is the heliolatitude (0° at the equator). Moreover, the same procedure was applied to spectral index estimates α of the form α = Aα + Bα log10 r + Cα sin θ. Results obtained for Bp, Cp, Bα, Cα are shown in Figure 58.
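This multiple regression is an ordinary least-squares problem; a minimal Python sketch (our own illustration, with noiseless synthetic data built to obey the model exactly) could read:

    import numpy as np

    rng = np.random.default_rng(2)
    r = rng.uniform(1.4, 4.1, 200)  # heliocentric distance [AU]
    theta = np.radians(rng.uniform(0.0, 80.0, 200))  # heliolatitude
    # synthetic power obeying log10 w = 1.0 - 3.0*log10(r) + 0.3*sin(theta)
    w = 10 ** (1.0 - 3.0 * np.log10(r) + 0.3 * np.sin(theta))

    # least-squares fit of log10 w = A + B*log10(r) + C*sin(theta)
    M = np.column_stack([np.ones_like(r), np.log10(r), np.sin(theta)])
    (A, B, C), *_ = np.linalg.lstsq(M, np.log10(w), rcond=None)
    print(f"A = {A:.2f}, B = {B:.2f}, C = {C:.2f}")  # recovers 1.0, -3.0, 0.3

The coefficient B plays the role of the radial power index and C that of the latitudinal trend discussed in the text.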
Hourly variances of the components and the magnitude of the magnetic field vs. radial distance from the Sun. The meaning of the different symbols is also indicated in the upper right corner. Image reproduced by permission from Forsyth et al. (1996), copyright by AGU.
On the basis of variations of the spectral index and of the radial and latitudinal dependencies, these authors were able to identify four wave number ranges, as indicated by the circled numbers in the top panel of Figure 58. Range 1 was characterized by a radial power decrease weaker than WKB (−3), a positive latitudinal trend for the components (more power at higher latitude) and a negative one for the magnitude (less compressive events at higher latitudes). Range 2 showed a more rapid radial decrease of power for both magnitude and components and a negative latitudinal power trend, which implies less power at higher latitudes. Moreover, the spectral index of the components (bottom panel) is around −0.5 and tends to 0 at larger scales. Within range 3 the power of the components follows a WKB radial trend and the spectral index is around −1 for both magnitude and components. This hourly range has been identified as the most Alfvénic at low latitudes and its radial evolution has been recognized to be consistent with the WKB radial index (Roberts, 1989; Marsch and Tu, 1990a). Even within this range, and also within the next one, the latitude power trend is slightly negative for both components and magnitude. Finally, range 4 is clearly indicative of a turbulent cascade, with a radial power trend of the components much faster than the WKB expectation and becoming even stronger at higher wave numbers. Moreover, the radial spectral index reveals that steepening is at work only for the previous wave number ranges, as expected, since the breakpoint moves to smaller wave numbers during spectrum evolution. The spectral index of the components tends to −5/3 with increasing wave number while that of the magnitude is constantly flatter. The same authors gave an estimate of the radial scale-shift of the breakpoint during the wind expansion, around k ∝ r−1.1, in agreement with earlier estimates (Horbury et al., 1996a).
Although most of these results support previous conclusions obtained for ecliptic turbulence, the negative value of the latitudinal power trend that starts within the second range is unexpected. As a matter of fact, moving towards more Alfvénic regions like the polar regions, one would perhaps expect a positive latitudinal trend, similar to what happens in the ecliptic when moving from slow to fast wind.
(a) Scale dependence of radial power, (b) latitudinal power, (c) radial spectral index, (d) latitudinal spectral index, and (e) spectral index computed at 2.5 AU. Solid circles refer to the trace of the spectral matrix of the components, open squares refer to field magnitude. Correspondence between wave number scale and time scale is based on a wind velocity of 750 km s−1. Image reproduced by permission from Horbury and Balogh (2001), copyright by AGU.
Horbury and Balogh (2001) and Horbury and Tsurutani (2001) estimated that the power observed at 80° is about 30% less than that observed at 30°. These authors proposed a possible effect due to the over-expansion of the polar coronal hole at higher latitudes. In addition, within the fourth range, field magnitude fluctuations radially decrease less rapidly than the fluctuations of the components, but do not show significant latitudinal variations. Finally, the smaller spectral index reveals that the high frequency range of the field magnitude spectrum shows a flattening.
The same authors investigated the anisotropy of these fluctuations as a function of radial and latitudinal excursion. Their results, reported in Figure 59, show that, at 2.5 AU, the lowest compressibility is recorded within the hourly frequency band (third and part of the fourth band), which has been recognized as the most Alfvénic frequency range. The anisotropy of the components confirms that the power associated with the transverse components is larger than that associated with the radial one, and this difference slightly tends to decrease at higher wave numbers.
(a) Scale dependence of power anisotropy at 2.5 AU plotted as the log10 of the ratio of BR (solid circles), BT (triangles), BN (diamonds), and |B| (squares) to the trace of the spectral matrix; (b) the radial, and (c) latitudinal behavior of the same values, respectively. Image reproduced by permission from Horbury and Balogh (2001), copyright by AGU.
As already shown by Horbury et al. (1995b), around the 5 min range magnetic field fluctuations are transverse to the mean field direction the majority of the time. The minimum variance direction lies mainly within an angle of about 26° from the average background field direction and the fluctuations are highly anisotropic, such that the ratio between perpendicular and parallel power is about 30. Since during the observations reported in Horbury and Balogh (2001) and Horbury and Tsurutani (2001) the mean field was radially oriented most of the time, the radial minimum variance direction at short time scales is an effect induced by larger-scale behavior.
Anyhow, radial and latitudinal anisotropy trends tend to disappear at higher frequencies. At the same time, interestingly enough, there is a strong radial increase of the magnetic field compression (top panel of Figure 59), defined as the ratio between the power density associated with magnetic field intensity fluctuations and that associated with the fluctuations of the three components (Bavassano et al., 1982a; Bruno and Bavassano, 1991). Attempts to attribute this phenomenon to the parametric decay of large amplitude Alfvén waves, to dynamical interactions between adjacent flux tubes, or to interstellar pick-up ions were not satisfactory in all cases.
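In formulas (the symbol CB is ours, introduced only for compactness), this magnetic compression can be written as

$$C_B (f) = \frac{W_{|B|} (f)}{W_{B_R } (f) + W_{B_T } (f) + W_{B_N } (f)},$$

where W denotes the power density spectrum of the indicated quantity.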
Comparing high latitude with low latitude results for high speed streams, Horbury and Balogh (2001) found remarkably good agreement between observations by Ulysses at 2.5 AU and by Helios at 0.7 AU. In particular, Figure 60 shows Ulysses and Helios 1 spectra projected to 1 AU for comparison.
It is interesting to notice that the spectral slope of the spectrum of the components for Helios 1 is slightly higher than that of Ulysses, suggesting a slower radial evolution of turbulence in the polar wind (Bruno, 1992; Bruno and Bavassano, 1992). However, the faster spectral evolution at low latitudes does not lead to strong differences between the spectra.
Power spectra of magnetic field components (solid circles) and magnitude (open squares) from Ulysses (solid line) and Helios 1 (dashed line). Spectra have been extrapolated to 1 AU using radial trends in power scalings estimated from Ulysses between 1.4 and 4.1 AU and Helios between 0.3 and 1 AU. Image reproduced by permission from Horbury and Balogh (2001), copyright by AGU.
Polar turbulence studied via Elsässer variables
Goldstein et al. (1995a) showed for the first time a spectral analysis of Ulysses observations based on Elsässer variables during two different time intervals, at 4 AU and close to −40°, and at 2 AU and around the maximum southern pass, as shown in Figure 61. Comparing the two Ulysses observations, it clearly appears that the spectrum closer to the Sun is less evolved than the spectrum measured farther out, as confirmed by Figure 62, where these authors reported the normalized cross-helicity and the Alfvén ratio for the two intervals. Moreover, following these authors, the comparison between Helios spectra at 0.3 AU and Ulysses spectra at 2 and 4 AU suggests that the radial scaling of e+ at the low frequency end of the spectrum follows the WKB prediction of a 1/r decrease (Heinemann and Olbert, 1980). However, the selected time interval for the Helios s/c was characterized by rather slow wind taken during the rising phase of the solar cycle, two conditions which greatly differ from those of the Ulysses data. As a consequence, comparing Helios results with Ulysses results obtained within the fast polar wind might be misleading. It would be better to choose Helios observations within high speed co-rotating streams, which resemble much better the solar wind conditions at high latitude.
Anyhow, results relative to the normalized cross-helicity σc (see Figure 62) clearly show high values of σc, around 0.8, which we normally observe in the ecliptic at much shorter heliocentric distances (Tu and Marsch, 1995a). A possible radial effect would be responsible for the depleted level of σc at 4 AU. Moreover, a strong anisotropy can also be seen for frequencies between 10−6 − 10−5 Hz, with the transverse σc much larger than the radial one. This anisotropy is somewhat lost during the expansion to 4 AU.
The Alfvén ratio (bottom panels of Figure 62) has values around 0.5 for frequencies higher than roughly 10−5 Hz, with not much evolution between 2 and 4 AU, a result similar to what was originally obtained in the ecliptic at about 1 AU (Martin et al., 1973; Belcher and Solodyna, 1975; Solodyna et al., 1977; Neugebauer et al., 1984; Bruno et al., 1985; Marsch and Tu, 1990a; Roberts et al., 1990). The low frequency extension of rA⊥ together with σc⊥, where the subscript ⊥ indicates that these quantities are calculated from the transverse components only, was interpreted by the authors as due to the sampling of Alfvénic features in longitude rather than to a real presence of Alfvénic fluctuations. However, by the time Ulysses reaches 4 AU, σc⊥ has strongly decreased, as expected, while rA⊥ gets closer to 1, making the situation less clear. Anyhow, these results suggest that the situation at 2 AU and, even more, at 4 AU can be considered as an evolution of what Helios 2 recorded in the ecliptic at shorter heliocentric distances. Ulysses observations at 2 AU resemble the turbulence conditions observed by Helios at 0.9 AU more than those at 0.3 AU.
Trace of e+ (solid line) and e− (dash-dotted line) power spectra. The central and right panels refer to Ulysses observations at 2 and 4 AU, respectively, when Ulysses was embedded in the fast southern polar wind during 1993 – 1994. The leftmost panel refers to Helios observations during 1978 at 0.3 AU. Image reproduced by permission from Goldstein et al. (1995a), copyright by AGU.
Normalized cross-helicity and Alfvén ratio at 2 and 4 AU, as observed by Ulysses at −80° and −40° latitude, respectively. Image reproduced by permission from Goldstein et al. (1995a), copyright by AGU.
Bavassano et al. (2000a) studied in detail the evolution of the power e+ and e− associated with outward δz+ and inward δz− Alfvénic fluctuations, respectively. The study referred to the polar regions, during the wind expansion between 1.4 and 4.3 AU. These authors analyzed 1 h variances of δz± and found two different regimes, as shown in Figure 63. Inside 2.5 AU outward modes e+ decrease faster than inward modes e−, in agreement with previous ecliptic observations performed within the trailing edge of co-rotating fast streams (Bruno and Bavassano, 1991; Tu and Marsch, 1990b; Grappin et al., 1989). Beyond this distance, the radial gradient of e− becomes steeper and steeper while that of e+ remains approximately unchanged. This change in e− is rather fast and both species keep declining at the same rate beyond 2.5 AU. The radial dependence of e+ between r−1.39 and r−1.48, reported by Bavassano et al. (2000a), indicates a radial decay faster than the r−1 predicted by the WKB approximation. This is in agreement with the analysis performed by Forsyth et al. (1996) using magnetic field observations only.
Left panel: values of the hourly variance of δz± (i.e., e±) vs. heliocentric distance, as observed by Ulysses. Helios observations are shown for comparison and appear to be in good agreement. Right panel: Elsässer ratio (top) and Alfvén ratio (bottom) plotted vs. radial distance while Ulysses is embedded in the polar wind. Image reproduced by permission from Bavassano et al. (2000a), copyright by AGU.
This different radial behavior is readily seen in the radial plot of the Elsässer ratio rE shown in the top panel of the right column of Figure 63. Inside 2.5 AU this ratio continuously grows, reaching about 0.5 near 2.5 AU. Beyond this region, since the radial gradient of the inward and outward components is approximately the same, rE stabilizes around 0.5.
On the other hand, the Alfvén ratio rA also shows a clear radial dependence that stops at about the same limit distance of 2.5 AU. In this case, rA constantly decreases from ~ 0.4 at 1.4 AU to ~ 0.25 at 2.5 AU, slightly fluctuating around this value at larger distances. A different interpretation of these results was offered by Grappin (2002). For this author, since Ulysses has not explored the whole three-dimensional heliosphere, solar wind parameters may have different dependencies on latitude and distance that would nevertheless produce the same variation with radial distance along the Ulysses trajectory, as claimed in Bavassano's works. Another interesting feature of polar turbulence is unraveled by Figure 64 from Bavassano et al. (1998, 2000b). The plot shows 2D histograms of normalized cross-helicity and normalized residual energy (see Appendix B.3.1 for definitions) for different heliospheric regions (ecliptic wind, mid-latitude wind with strong velocity gradients, polar wind). A predominance of outward fluctuations (positive values of σc) and of magnetic fluctuations (negative values of σr) appears to be a general feature. It turns out that the most Alfvénic region is the one at high latitude and at shorter heliocentric distances. However, in all the panels there is always a relative peak at σc ≃ 0 and σr ≃ −1, which might well be due to magnetic structures like the MFDT found by Tu and Marsch (1991) in the ecliptic.
In a subsequent paper, Bavassano et al. (2002a) tested whether the radial dependence observed in e± was to be completely ascribed to the radial expansion of the wind, or whether latitudinal dependencies also contributed to the turbulence evolution in the polar wind.
As already discussed in the previous section, Horbury and Balogh (2001), using Ulysses data from the northern polar pass, evaluated the dependence of magnetic field power levels on solar distance and latitude using a multiple regression analysis based on Equation (60). In the Alfvénic range, the latitudinal coefficient "C" for power in the field components was appreciably different from 0 (around 0.3). However, this analysis was limited to magnetic field fluctuations alone and cannot be transferred sic et simpliciter to Alfvénic turbulence. In their analysis, Bavassano et al. (2002b) used the first southern and northern polar passes and removed from their dataset all intervals with large gradients in plasma velocity, and/or plasma density, and/or magnetic field magnitude, as already done in Bavassano et al. (2000a). As a matter of fact, the use of Elsässer variables (see Appendix B.3.1) instead of the magnetic field, and of selected data samples, leads to very small values of the latitudinal coefficient, as shown in Figure 65, where different contributions are plotted with different colors and where the top panel refers to the same dataset used by Horbury and Balogh (2001), while the bottom panel refers to a dataset comprising both south and north passages, free of strong compressive events (Bavassano et al., 2000a). Moreover, the latitudinal effect appears to be very weak also for the data sample used by Horbury and Balogh (2001), although this is the sample with the largest value of the "C" coefficient.
Results from the multiple regression analysis showing the radial and latitudinal dependence of the power e+ associated with outward modes (see Appendix B.3.1). The top panel refers to the same dataset used by Horbury and Balogh (2001). The bottom panel refers to a dataset comprising both south and north passages, free of strong compressive events (Bavassano et al., 2000a). Values of e+ have been normalized to the value e+0 assumed by this parameter at 1.4 AU, the closest approach to the Sun. The black line is the total regression, the blue line is the latitudinal contribution and the red line is the radial contribution. Image reproduced by permission from Bavassano et al. (2002a), copyright by AGU.
A further argument in favor of a radial vs. a latitudinal dependence is represented by the comparison of the radial gradient of e+ in different regions, in the ecliptic and in the polar wind. These results, shown in Figure 66, provide the radial slopes for e+ (red squares) and e− (blue diamonds) in different regions. The first three columns (labeled EQ) summarize ecliptic results obtained with different values of an upper limit (TBN) for the relative fluctuations of density and magnetic intensity. The last two columns (labeled POL) refer to the results for polar turbulence (north and south passes) outside and inside 2.6 AU, respectively. A general agreement exists between the slopes in the ecliptic and in the polar wind, with no significant role left for latitude, the only exception being e− in the region inside 2.6 AU. The behavior of the inward component cannot be explained by a simple power law over the range of distances explored by Ulysses. Moreover, a possible latitudinal effect has been clearly rejected by the results from a multiple regression analysis performed by Bavassano et al. (2002a), similar to that reported above for e+.
e+ (red) and e− (blue) radial gradient for different latitudinal regions of the solar wind. The first three columns, labeled EQ, refer to ecliptic observations obtained with different values of the upper limit of TBN defined as the relative fluctuations of density and magnetic intensity. The last two columns, labeled POL, refer to observations of polar turbulence outside and inside 2.6 AU, respectively. Image reproduced by permission from Bavassano et al. (2001), copyright by AGU.
Numerical simulations currently represent one of the main sources of information about the non-linear evolution of fluid flows. Present-day supercomputers are now powerful enough to simulate equations (NS or MHD) that describe turbulent flows with Reynolds numbers of the order of 10^4 in two-dimensional configurations, or 10^3 in three-dimensional ones. Of course, we are far from achieving realistic values, but we are now able to investigate turbulence with an inertial range extending for more than one decade. The main source of difficulty in getting results from numerical simulations is rather the fact that they are made under some obvious constraints (say boundary conditions, equations to be simulated, etc.), mainly dictated by the limited physical description that we are able to use when numerical simulations are made, compared with the extreme richness of the phenomena involved: numerical simulations, even in standard conditions, are used tout court as models for the solar wind behavior. Perhaps the only exception, to our knowledge, is the attempt to describe the effects of the solar wind expansion on turbulence evolution like, for example, in the papers by Velli et al. (1989, 1990) and Hellinger and Trávníček (2008). Even with this far too pessimistic point of view, used here solely as a few words of caution, simulations in some cases were able to reproduce some phenomena observed in the solar wind.
Nevertheless, numerical simulations have been playing a key role, and will continue to do so, in our search for an understanding of turbulent flows. Numerical simulations allow us to get information that cannot be obtained in the laboratory. For example, high resolution numerical simulations provide information at every point on a grid and, at successive times, about basic vector quantities and their derivatives. The number of degrees of freedom required to resolve the smaller scales is proportional to a power of the Reynolds number, say to Re^9/4, although the dynamically relevant number of modes may be much less. One of the main remaining challenges is then how to handle and analyze the huge data files produced by large simulations (of the order of Terabytes). Many papers on computer simulations related to MHD turbulence have appeared in the literature. The interested reader can look at the book by Biskamp (1993) and the reviews by Pouquet (1993, 1996).
Local production of Alfvénic turbulence in the ecliptic
The discovery of the strong correlation between velocity and magnetic field fluctuations has represented the motivation for some MHD numerical simulations aimed at confirming the conjecture by Dobrowolny et al. (1980b). The high level of correlation seems to be due to a kind of self-organization (dynamical alignment) of MHD turbulence, generated by the natural evolution of MHD towards the strongest attractive fixed point of the equations (Ting et al., 1986; Carbone and Veltri, 1987, 1992). Numerical simulations (Carbone and Veltri, 1992; Ting et al., 1986) confirmed this conjecture, namely that MHD turbulence can spontaneously tend towards a state where the correlation increases, that is, where the quantity σc = 2Hc/E, with Hc the cross-helicity and E the total energy of the flow (see Appendix B.1), tends to be maximal.
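With δb expressed in Alfvén units these quantities read (see Appendix B.1), and are directly related to the Elsässer energies introduced earlier:

$$H_c = \frac{1}{2}\left\langle {\delta \mathbf{v} \cdot \delta \mathbf{b}} \right\rangle ,\qquad E = \frac{1}{2}\left\langle {|\delta \mathbf{v}|^2 + |\delta \mathbf{b}|^2 } \right\rangle ,\qquad \sigma _c = \frac{2H_c }{E} = \frac{e^ + - e^ - }{e^ + + e^ - }.$$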
The picture of the evolution of incompressible MHD turbulence which comes out is rather nice, but solar wind turbulence displays a more complicated behavior. In particular, as we have reported above, observations seem to point out that the solar wind evolves in the opposite way. The correlation is high near the Sun and, at larger radial distances, from 1 to 10 AU, the correlation becomes progressively lower, while the level of fluctuations in mass density and magnetic field intensity increases. What is more difficult to understand is why the correlation is progressively destroyed in the solar wind, while the natural evolution of MHD is towards a state of maximal normalized cross-helicity. A possible solution can be found in the fact that the solar wind is neither incompressible nor statistically homogeneous, and some efforts to tentatively take into account more sophisticated effects have been made.
A mechanism responsible for the radial evolution of turbulence was suggested by Roberts and Goldstein (1988), Goldstein et al. (1989), and Roberts et al. (1991, 1992), and was based on velocity-shear generation. The suggestion to adopt such a mechanism came from a detailed analysis made by Roberts et al. (1987a,b) of Helios and Voyager interplanetary observations of the radial evolution of the normalized cross-helicity σc at different time scales. Moreover, Voyager's observations showed that plasma regions, which had not experienced dynamical interactions with neighboring plasma, kept the Alfvénic character of the fluctuations at distances as far as 8 AU (Roberts et al., 1987b). In particular, the vicinity of the Helios trajectory to the interplanetary current sheet, characterized by low velocity flow, led Roberts et al. (1991) to include in their simulations a narrow low-speed flow surrounded by two high-speed flows. The idea was to mimic the slow, equatorial solar wind between the north and south fast polar wind. The magnetic field profile and velocity shear were reconstructed using the six lowest Z± Fourier modes, as shown in Figure 67. An initial population of purely outward propagating Alfvénic fluctuations (z+) was added at large k and was characterized by a spectral slope of k^{−1}. No inward modes were present in the same range. Results in Figure 67 show that the time evolution of the z+ spectrum is quite rapid at the beginning, towards a steeper spectrum, and slows down successively. At the same time, z− modes are created by the generation mechanism at higher and higher k, but along a Kolmogorov-type slope k^{−5/3}.
These results, although obtained from simulations performed using 2D incompressible spectral and pseudo-spectral codes, with a fairly small Reynolds number of Re ≃ 200, were similar to the spectral evolution observed in the solar wind (Marsch and Tu, 1990a). Moreover, spatial averages across the simulation box revealed a strong cross-helicity depletion right across the slow wind, representing the heliospheric current sheet. Magnetic field inversions and even relatively small velocity shears would largely affect an initially highly Alfvénic flow (Roberts et al., 1992). However, Bavassano and Bruno (1992) studied an interaction region, repeatedly observed between 0.3 and 0.9 AU, characterized by a large velocity shear and previously thought to be a good candidate for shear generation (Bavassano and Bruno, 1989). They concluded that, even in the hypothesis of a very fast growth of the instability, inward modes would not have had enough time to fill up the whole region, as observed by Helios 2.
The above simulations by Roberts et al. (1991) were successively implemented with a compressive pseudo-spectral code (Ghosh and Matthaeus, 1990) which provided evidence that, during this turbulence evolution, clear correlations between magnetic field magnitude and density fluctuations, and between z− and density fluctuations, should arise. However, such a clear correlation, a by-product of the non-linear evolution, was not found in solar wind data (Marsch and Tu, 1993b; Bruno et al., 1996). Moreover, their results did not show the flattening of the e− spectrum at higher frequency, as observed by Helios (Tu et al., 1989b). As a consequence, velocity shear alone cannot explain the whole phenomenon; other mechanisms must also play a relevant role in the evolution of interplanetary turbulence.
Time evolution of the power density spectra of z+ and z− showing the turbulent evolution of the spectra due to velocity shear generation (from Roberts et al., 1991).
Compressible numerical simulations have been performed by Veltri et al. (1992) and Malara et al. (1996, 2000), which invoked the interactions between small scale waves and large scale magnetic field gradients, and the parametric instability, as characteristic effects able to reduce correlations. In a compressible, statistically inhomogeneous medium such as the heliosphere, there are many processes which tend to destroy the natural evolution toward a maximal correlation, typical of standard MHD. In such a medium an Alfvén wave is subject to the parametric decay instability (Viñas and Goldstein, 1991; Del Zanna et al., 2001; Del Zanna, 2001), which means that the mother wave decays into two modes: i) a compressive mode that dissipates energy because of the steepening effect, and ii) a backscattered Alfvénic mode with lower amplitude and frequency. Malara et al. (1996) showed that in a compressible medium, the correlation between the velocity and the magnetic field fluctuations is reduced because of the generation of the backward propagating Alfvénic fluctuations, and of a compressive component of turbulence, characterized by density fluctuations δρ ≠ 0 and magnetic intensity fluctuations δ|B| ≠ 0.
From a technical point of view it is worthwhile to remark that, when a large scale field which varies on a narrow region is introduced (typically a tanh-like field), periodic boundary conditions should be used with some care. Roberts et al. (1991, 1992) used a double shear layer, while Malara et al. (1992) introduced an interesting numerical technique, based on both the gluing of two simulation boxes and a Chebyshev expansion, to maintain a single shear layer, i.e., non-periodic boundary conditions, with increased resolution where the shear layer exists.
Grappin et al. (1992) observed that the solar wind expansion increases the lengths normal to the radial direction, thus producing an effect similar to a kind of inverse energy cascade. This effect might be able to compete with the turbulent cascade which transfers energy to small scales, thus stopping the non-linear interactions. In the absence of non-linear interactions, the natural tendency towards an increase of σc is stopped. These inferences have been corroborated by further studies like those by Grappin and Velli (1996) and Goldstein and Roberts (1999). A numerical model treating the evolution of e+ and e−, including parametric decay of e+, was presented by Marsch and Tu (1993a). The parametric decay source term was added in order to reproduce the decreasing cross-helicity observed during the wind expansion. As a matter of fact, the cascade process, when spectral equations for both e+ and e− are included and solved self-consistently, can only steepen the spectra at high frequency. Results from this model, shown in Figure 68, partially reproduce the observed evolution of the normalized cross-helicity. While the radial evolution of e+ is correctly reproduced, the behavior of e− shows an over-production of inward modes between 0.6 and 0.8 AU, probably due to an overestimation of the strength of the pump-wave. However, the model is applied to the situation observed by Helios at 0.3 AU, where a rather flat e− spectrum already exists.
Radial evolution of e+ and e− spectra obtained from the Marsch and Tu (1993a) model, in which a parametric decay source term was added to Tu's model (Tu et al., 1984), which was, in turn, extended by including spectral equations for both e+ and e− and solving them self-consistently. Image reproduced by permission from Marsch and Tu (1993a), copyright by AGU.
Local production of Alfvénic turbulence at high latitude
An interesting solution to the radial behavior of the minority modes might be represented by local generation mechanisms, like parametric decay (Malara et al., 2001a; Del Zanna et al., 2001), which might saturate and be inhibited beyond 2.5 AU.
Parametric instability has been studied in a variety of situations, depending on the value of the plasma β (among others Sagdeev and Galeev, 1969; Goldstein, 1978; Hoshino and Goldstein, 1989; Malara and Velli, 1996). Malara et al. (2000) and Del Zanna et al. (2001) studied the non-linear growth of parametric decay of a broadband Alfvén wave, and showed that the final state strongly depends on the value of the plasma β (thermal to magnetic pressure ratio). For β < 1 the instability completely destroys the initial Alfvénic correlation. For β ~ 1 (a value close to solar wind conditions) the instability is not able to go beyond some limit in the disruption of the initial correlation between velocity and magnetic field fluctuations, and the final state is σc ~ 0.5, as observed in the solar wind (see Section 4.2).
These authors solved numerically the fully compressible, non-linear MHD equations in a one-dimensional configuration using a pseudo-spectral numerical code. The simulation starts with a non-monochromatic, large amplitude Alfvén wave polarized on the yz plane, propagating in a uniform background magnetic field. Successively, the instability was triggered by adding noise of the order of 10^{−6} to the initial density level.
During the first part of the evolution of the instability the amplitude of unstable modes is small and, consequently, non-linear couplings are negligible. A subsequent exponential growth, predicted by the linear theory, increases the level of both e− and density compressive fluctuations. During the second part of the development of the instability, non-linear couplings are no longer negligible, and their effect is first to slow down the exponential growth of unstable modes and then to saturate the instability to a level that depends on the value of the plasma β.
Spectra of e± are shown in Figure 69 for different times during the development of the instability. At the beginning the spectrum of the mother-wave is peaked at k = 10 and, before the instability saturation (t ≤ 35), the back-scattered e− and the density fluctuations eρ are peaked at k = 1 and k = 11, respectively. After saturation, as the run goes on, the spectrum of e− approaches that of e+, towards a common final state characterized by a Kolmogorov-like spectrum with e+ slightly larger than e−.
The behavior of outward and inward modes, density and magnetic magnitude variances and the normalized cross-helicity σc is summarized in the left column of Figure 70. The evolution of σc, when the instability reaches saturation, can be qualitatively compared with Ulysses observations (courtesy of B. Bavassano) in the right panel of the same figure, which shows a similar trend.
Obviously, in making this comparison, one has to take into account that this model has strong limitations, like the presence of a peak in e+ not observed in real polar turbulence. Another limitation, partly due to the fact that dissipation has yet to be included in the model, is that the spectra obtained at the end of the instability growth are steeper than those observed in the solar wind. Finally, a further limitation is represented by the fact that this code is 1D. However, although for an incompressible 1-D simulation we do not expect to have turbulence development, in this case, since parametric decay is based on compressive phenomena, an energy transfer along the spectrum might be at work.
In addition, Umeki and Terasawa (1992), studying the non-linear evolution of a large-amplitude incoherent Alfvén wave via 1D magnetohydrodynamic simulations, reported that while in a low-beta plasma (β ≈ 0.2) the growth of backscattered Alfvén waves, which are opposite in helicity and propagation direction to the original Alfvén waves, could be clearly detected, in a high-beta plasma (β ≈ 2) there was no production of backscattered Alfvén waves. Consequently, although the numerical results obtained by Malara et al. (2001b) are very encouraging, the high beta plasma (β ≈ 2), characteristic of fast polar wind at solar minimum, argues against a relevant role of parametric instability in developing solar wind turbulence as observed by Ulysses. However, these simulations do remain an important step forward towards the understanding of turbulent evolution in the polar wind, until other mechanisms are found to be active enough to justify the observations shown in Figure 63.
Spectra of e+ (thick line), e− (dashed line), and eρ (thin line) are shown for 6 different times during the development of the instability. For t ≥ 50 a typical Kolmogorov slope appears. These results refer to β = 1. Image reproduced by permission from Malara et al. (2001b), copyright by EGU.
Top left panel: time evolution of e+ (solid line) and e− (dashed line). Middle left panel: density (solid line) and magnetic magnitude (dashed line) variances. Bottom left panel: normalized cross helicity σc. Right panel: Ulysses observations of σc radial evolution within the polar wind (left column is from Malara et al., 2001b; right panel is courtesy of B. Bavassano).
Compressive Turbulence
The interplanetary medium is slightly compressive: magnetic field intensity and proton number density experience fluctuations over all scales, and the compression depends on both the scale and the nature of the wind. As a matter of fact, slow wind is generally more compressive than fast wind, as shown in Figure 71 where, following Bavassano et al. (1982a) and Bruno and Bavassano (1991), we report the ratio between the power density associated with magnetic field intensity fluctuations and that associated with the fluctuations of the three components. In addition, as already shown by Bavassano et al. (1982a), this parameter increases with heliocentric distance for both fast and slow wind, as shown in the bottom panel, where the ratio between the compression at 0.9 AU and that at 0.3 AU is generally greater than 1. It is also interesting to notice that within the Alfvénic fast wind the lowest compression is observed in the middle frequency range, roughly between 10^{−4} and 10^{−3} Hz. On the other hand, this frequency range has already been recognized as the most Alfvénic one within the inner heliosphere (Bruno et al., 1996).
As a matter of fact, it seems that high Alfvénicity is correlated with low compressibility of the medium (Bruno and Bavassano, 1991; Klein et al., 1993; Bruno and Bavassano, 1993), although compressibility is not the only cause for a low Alfvénicity (Roberts et al., 1991, 1992; Roberts, 1992).
The radial dependence of the normalized number density fluctuations δn/n for the inner and outer heliosphere was studied by Grappin et al. (1990) and Roberts et al. (1987b) for the hourly frequency range, but no clear radial trend emerged from these studies. However, interestingly enough, Grappin et al. (1990) found that values of e− were closely associated with enhancements of δn/n on scales longer than 1 h.
On the other hand, a spectral analysis of proton number density, magnetic field intensity, and proton temperature performed by Marsch and Tu (1990b) and Tu et al. (1991) in the inner heliosphere, separately for fast and slow wind (see Figure 72), showed that normalized spectra of the above parameters within slow wind were only marginally dependent on the radial distance. On the contrary, within fast wind, magnetic field and proton density normalized spectra showed not only a clear radial dependence but also similar levels of power for k < 4×10^{−4} km^{−1}. For larger k these spectra show a flattening that becomes steeper with increasing distance, as was already found by Bavassano et al. (1982b) for magnetic field intensity. Normalized temperature spectra do not show any radial dependence, either in slow wind or in fast wind.
The spectral index is around −5/3 for all the spectra in slow wind while, in fast wind, the spectral index is around −5/3 for k < 4 × 10^{−4} km^{−1} and slightly less steep for larger wave numbers.
On the nature of compressive turbulence
Considerable efforts, both theoretical and observational, have been made in order to disclose the nature of compressive fluctuations. It has been proposed (Montgomery et al., 1987; Matthaeus and Brown, 1988; Zank et al., 1990; Zank and Matthaeus, 1990; Matthaeus et al., 1991; Zank and Matthaeus, 1992) that most of the compressive fluctuations observed in the solar wind could be accounted for by the Nearly Incompressible (NI) model. Within the framework of this model, Montgomery et al. (1987) showed that a spectrum of small scale density fluctuations follows a k^{−5/3} scaling when the spectrum of magnetic field fluctuations follows the same scaling. Moreover, it was shown (Matthaeus and Brown, 1988; Zank and Matthaeus, 1992) that if compressible MHD equations are expanded in terms of small turbulent sonic Mach numbers, pressure balanced structures, Alfvénic and magnetosonic fluctuations naturally arise as solutions and, in particular, the RMS of small density fluctuations would scale like M^2, M = δu/Cs being the turbulent sonic Mach number, δu the RMS of velocity fluctuations and Cs the sound speed. In addition, if heat conduction is allowed in the approximation, temperature fluctuations dominate over magnetic and density fluctuations, temperature and density are anticorrelated, and the density fluctuations would scale like M. However, in spite of some examples supporting this theory (Matthaeus et al., 1991, reported that 13% of cases satisfied the requirements of the NI theory), wider statistical studies, conducted by Tu and Marsch (1994), Bavassano et al. (1995) and Bavassano and Bruno (1995), showed that the NI theory is not applicable sic et simpliciter to the solar wind. The reason might lie in the fact that the interplanetary medium is highly inhomogeneous because of the presence of an underlying structure convected by the wind. As a matter of fact, Thieme et al. (1989) showed evidence for the presence of time intervals characterized by a clear anti-correlation between kinetic pressure and magnetic pressure, while the total pressure remained fairly constant. These pressure balance structures were observed for the first time by Burlaga and Ogilvie (1970) for a time scale of roughly one to two hours. Later on, Vellante and Lazarus (1987) reported strong evidence for anti-correlation between field intensity and proton density, and between plasma and field pressure, on time scales up to 10 h. The anti-correlation between kinetic and magnetic pressure is usually interpreted as indicative of the presence of a pressure balance structure, since slow magnetosonic modes are readily damped (Barnes, 1979).
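The M versus M^2 discrimination invoked by the NI theory can be tested on any interval of plasma data with a short script. The sketch below is a hypothetical illustration (the windowing strategy and variable names are ours): it bins radial velocity and density series into sub-intervals, and returns the turbulent sonic Mach number and the relative density fluctuation amplitude of each, to be compared against the two predicted scalings.

```python
import numpy as np

def ni_scaling_samples(u, rho, c_s, window):
    """For each sub-interval, compute the turbulent sonic Mach number
    M = delta_u / Cs and the relative density fluctuation delta_rho/rho.
    u, rho : 1D arrays (radial velocity and density); c_s : sound speed."""
    m, drho = [], []
    for i in range(0, len(u) - window + 1, window):
        s = slice(i, i + window)
        m.append(np.std(u[s]) / c_s)                   # M = delta_u / Cs
        drho.append(np.std(rho[s]) / np.mean(rho[s]))  # RMS density contrast
    return np.array(m), np.array(drho)

# A log-log plot of drho vs. m, compared with reference slopes 1 (M) and
# 2 (M^2), reproduces the kind of test shown later in Figure 78.
```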
The first two rows show magnetic field compression (see text for definition) for fast (left column) and slow (right column) wind at 0.3 AU (upper row) and 0.9 AU (middle row). The bottom panels show the ratio between compression at 0.9 AU and compression at 0.3 AU. This ratio is generally greater than 1 for both fast and slow wind.
From left to right: normalized spectra of number density, magnetic field intensity fluctuations (adapted from Marsch and Tu, 1990b), and proton temperature (adapted from Tu et al., 1991). Different lines refer to different heliocentric distances for both slow and fast wind.
These features, observed also in their dataset, were taken by Thieme et al. (1989) as evidence of stationary spatial structures which were supposed to be remnants of coronal structures convected by the wind. Different values assumed by plasma and field parameters within each structure were interpreted as a signature characterizing that particular structure and not destroyed during the expansion. These intervals, identifiable in Figure 73 by vertical dashed lines, were characterized by pressure balance and a clear anti-correlation between magnetic field intensity and temperature.
These structures were finally related to the fine ray-like structures or plumes associated with the underlying chromospheric network and interpreted as the signature of interplanetary flow tubes. The estimated dimension of these structures, back projected onto the Sun, suggested that they over-expand in the solar wind. In addition, Grappin et al. (2000) simulated the evolution of Alfvén waves propagating within such pressure equilibrium ray structures in the framework of a global Eulerian solar wind approach and found that the compressive modes in these simulations are very much reduced within the ray structures, which indeed corresponds to the observational findings (Buttighoffer et al., 1995, 1999).
From top to bottom: field intensity |B|; proton and alpha particle velocity up and uα; corrected proton velocity upc = up − δuA, where uA is the Alfvén speed; proton and alpha number density np and nα; proton and alpha temperature Tp and Tα; kinetic and magnetic pressure Pk and Pm, which the authors call Pgas and Pmag; total pressure Ptot and β = Pgas/Pmag (from Tu and Marsch, 1995a).
The idea of filamentary structures in the solar wind dates back to Parker (1964), followed by other authors like McCracken and Ness (1966), Siscoe et al. (1968), and more recently has been considered again in the literature with new results (see Section 10). These interplanetary flow tubes would be of different sizes, ranging from minutes to several hours and would be separated from each other by tangential discontinuities and characterized by different values of plasma parameters and a different magnetic field orientation and intensity. This kind of scenario, because of some similarity to a bunch of tangled, smoking "spaghetti" lifted by a fork, was then named "spaghetti-model".
A spectral analysis performed by Marsch and Tu (1993a) in the frequency range 6×10^{−6} – 6×10^{−3} Hz showed that the nature and intensity of compressive fluctuations systematically vary with the stream structure. They concluded that compressive fluctuations are a complex superposition of magnetoacoustic fluctuations and pressure balance structures, whose origin might be local, due to stream dynamical interaction, or coronal, related to the flow tube structure. These results are shown in Figure 74, where the correlation coefficient between number density n and total pressure Ptot (indicated with the symbol pT in the figure), and between kinetic pressure Pk and magnetic pressure Pm (indicated with the symbols pk and pb, respectively), is plotted for both Helios s/c relative to fast wind. Positive values of both C(n, pT) and C(pk, pb) identify magnetosonic waves, while positive values of C(n, pT) and negative values of C(pk, pb) identify pressure balance structures. The purest examples of each category are located at the upper left and right corners.
Correlation coefficient between number density n and total pressure pT plotted vs. the correlation coefficient between kinetic pressure and magnetic pressure, for both Helios s/c relative to fast wind. Image reproduced by permission from Marsch and Tu (1993b).
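The sign-based classification just described translates directly into a sliding-window analysis. The following sketch is our own construction, with hypothetical variable names; it computes C(n, pT) and C(pk, pb) over consecutive windows, so that, following the criteria above, positive-positive pairs would be labeled magnetosonic-like and positive-negative pairs pressure-balance-like.

```python
import numpy as np

def compressive_correlations(n, p_kin, p_mag, window):
    """Window-by-window correlation coefficients C(n, p_tot) and
    C(p_kin, p_mag), used to separate magnetosonic fluctuations from
    pressure balance structures."""
    p_tot = p_kin + p_mag
    coeffs = []
    for i in range(0, len(n) - window + 1, window):
        s = slice(i, i + window)
        c_npt = np.corrcoef(n[s], p_tot[s])[0, 1]     # C(n, pT)
        c_kb = np.corrcoef(p_kin[s], p_mag[s])[0, 1]  # C(pk, pb)
        coeffs.append((c_npt, c_kb))
    return np.array(coeffs)
```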
Following these observations, Tu and Marsch (1994) proposed a model in which fluctuations in temperature, density, and field directly derive from an ensemble of small amplitude pressure balanced structures and small amplitude fast perpendicular magnetosonic waves. The latter should be generated by the dynamical interaction between adjacent flow tubes due to the expansion and, eventually, they would also experience a non-linear cascade process to smaller scales. This model was able to reproduce most of the correlations described by Marsch and Tu (1993a) for fast wind.
Later on, Bavassano et al. (1996a) tried to characterize compressive fluctuations in terms of their polytropic index, which turned out to be a useful tool to study small scale variations in the solar wind. These authors followed the definition of a polytropic fluid given by Chandrasekhar (1967): "a polytropic change is a quasi-static change of state carried out in such a way that the specific heat remains constant (at some prescribed value) during the entire process". For such a variation of state the adiabatic laws are still valid, provided that the adiabatic index γ is replaced by a new adiabatic index γ' = (cp − c)/(cv − c), where c is the specific heat of the polytropic variation, and cp and cv are the specific heats at constant pressure and constant volume, respectively. This similarity is lost if we adopt the definition given by Courant and Friedrichs (1976), for whom a fluid is polytropic if its internal energy is proportional to the temperature. Since no restriction applies to the specific heats, relations between temperature, density, and pressure do not have a simple form as in the Chandrasekhar approach (Zank and Matthaeus, 1991). Bavassano et al. (1996a) recovered the polytropic index from the relation between density n and temperature T changes for the selected scale, Tn^{1−γ'} = const., and used it to determine whether changes in density and temperature were isobaric (γ' = 0), isothermal (γ' = 1), adiabatic (γ' = γ), or isochoric (γ' = ∞). Although the role of the magnetic field was neglected, reliable conclusions could be obtained whenever the above relations between temperature and density were strikingly clear. These authors found intervals characterized by variations at constant thermal pressure P. They interpreted these intervals as a subset of total-pressure balanced structures where the equilibrium was assured by the thermal component only, perhaps tiny flow tubes like those described by Thieme et al. (1989) and Tu and Marsch (1994). Adiabatic changes were probably related to magnetosonic waves excited by contiguous flow tubes (Tu and Marsch, 1994). Proton temperature changes at almost constant density were preferentially found in fast wind, close to the Sun. These regions were characterized by remarkably stable values of B and N and by strong Alfvénic fluctuations (Bruno et al., 1985). Thus, they suggested that these temperature changes could be remnants of thermal features already established at the base of the corona.
Thus, the polytropic index offers a very simple way to identify basic properties of solar wind fluctuations, provided that the magnetic field does not play a major role.
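Recovering γ' from data reduces to a linear fit in logarithmic variables, since Tn^{1−γ'} = const. implies log T = (γ' − 1) log n + const. The sketch below is a minimal illustration of this procedure, not the original code of Bavassano et al. (1996a); it also ignores the isochoric limit γ' → ∞, where the fit degenerates because n is nearly constant.

```python
import numpy as np

def polytropic_index(n, T):
    """Estimate gamma' from T * n**(1 - gamma') = const., i.e. from the
    slope of log T vs. log n.  gamma' = 0: isobaric; 1: isothermal;
    gamma' = gamma: adiabatic (meaningless when n is nearly constant)."""
    slope, _ = np.polyfit(np.log(n), np.log(T), 1)  # log T = (gamma'-1) log n + c
    return slope + 1.0
```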
Compressive turbulence in the polar wind
Compressive fluctuations in high latitude solar wind have been extensively studied by Bavassano et al. (2004) looking at the relationship between different parameters of the solar wind and comparing these results with predictions by existing models.
These authors indicated with N, Pm, Pk, and Pt the proton number density n, the magnetic pressure, the kinetic pressure and the total pressure (Ptot = Pm + Pk), respectively, and computed correlation coefficients ρ between these parameters. Figure 75 clearly shows that a pronounced positive correlation for N − Pt and a pronounced negative correlation for Pm − Pk are a constant feature of the observed compressive fluctuations. In particular, the correlation for N − Pt is especially strong within polar regions at small heliocentric distance. In mid-latitude regions the correlation weakens, while it almost disappears at low latitudes. In the case of Pm − Pk, the anticorrelation remains strong throughout the whole latitudinal excursion. For polar wind the anticorrelation appears to be less strong at small distances, just where the N − Pt correlation is highest.
The role played by density and temperature in the anticorrelation between magnetic and thermal pressures is investigated in Figure 76, where the magnetic field magnitude is directly compared with proton density and temperature. As regards the polar regions, a strong B-T anticorrelation is clearly apparent at all distances (right panel). For B-N an anticorrelation tends to emerge when solar distance increases. This means that the magnetic-thermal pressure anticorrelation is mostly due to an anticorrelation of the magnetic field fluctuations with respect to temperature fluctuations, rather than density (see, e.g., Bavassano et al., 1996a,b). Outside polar regions the situation appears in part reversed, with a stronger role for the B-N anticorrelation.
In Figure 77 scatter plots of total pressure vs. density fluctuations are used to test a model by Tu and Marsch (1994), based on the hypothesis that the compressive fluctuations observed in the solar wind are mainly due to a mixture of pressure-balanced structures (PBS) and fast magnetosonic waves (W). Waves can only contribute to total pressure fluctuations, while both waves and pressure-balanced structures may contribute to density fluctuations. A tunable parameter in the model is the relative PBS/W contribution to density fluctuations α. Straight lines in Figure 77 indicate the model predictions for different values of α. It is easily seen that for all polar wind samples the great majority of experimental data fall in the α > 1 region. Thus, pressure-balanced structures appear to play a major role with respect to magnetosonic waves. This is a feature already observed by Helios in the ecliptic wind (Tu and Marsch, 1994), although in a less pronounced way. Different panels of Figure 77 refer to different heliocentric distances within the polar wind. Namely, going from P1 to P4 is equivalent to moving from 1.4 to 4 AU. A comparison between these panels indicates that the observed distribution tends to shift towards higher values of α (i.e., pressure-balanced structures become increasingly important), which is probably a radial distance effect.
Histograms of ρ(N − Pt) and ρ(Pm − Pk) per solar rotation. The color bar on the left side indicates polar (red), mid-latitude (blue), and low latitude (green) phases. Moreover, universal time UT, heliocentric distance, and heliographic latitude are also indicated on the left side of the plot. Occurrence frequency is indicated by the color bar shown on the right hand side of the figure. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.
Finally, the dependence of the relative density fluctuations on the turbulent Mach number M (the ratio between velocity fluctuation amplitude and sound speed) is shown in Figure 78. The aim is to look for the presence, in the observed fluctuations, of nearly incompressible MHD behaviors. In the framework of the NI theory (Zank and Matthaeus, 1991, 1993) two different scalings for the relative density fluctuations are possible, as M or as M^2, depending on the role that thermal conduction effects may play in the plasma under study (namely a heat-fluctuation-dominated or a heat-fluctuation-modified behavior, respectively). These scalings are shown in Figure 78 as solid (for M) and dashed (for M^2) lines.
It is clearly seen that for all the polar wind samples no clear trend emerges in the data. Thus, NI-MHD effects do not seem to play a relevant role in driving the polar wind fluctuations. This confirms previous results in the ecliptic by Helios in the inner heliosphere (Bavassano et al., 1995; Bavassano and Bruno, 1995) and by the Voyagers in the outer heliosphere (Matthaeus et al., 1991). It is worthy of note that, apart from the lack of NI trends, the experimental data from the Ulysses, Voyager, and Helios missions in all cases exhibit quite similar distributions. In other words, for different heliospheric regions, solar wind regimes, and solar activity conditions, the behavior of the compressive fluctuations in terms of relative density fluctuations and turbulent Mach numbers seems to be an almost invariant feature.
Solar rotation histograms of B-N and B-T in the same format as Figure 75. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.
Scatter plots of the relative amplitudes of total pressure vs. density fluctuations for polar wind samples P1 to P4. Straight lines indicate the Tu and Marsch (1994) model predictions for different values of α, the relative PBS/W contribution to density fluctuations. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.
Relative amplitude of density fluctuations vs. turbulent Mach number for polar wind. Solid and dashed lines indicate the M and M2 scalings, respectively. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.
The above observations fully support the view that compressive fluctuations in the high latitude solar wind are a mixture of MHD modes and pressure balanced structures. It should be recalled that previous studies (McComas et al., 1995, 1996; Reisenfeld et al., 1999) indicated a relevant presence of pressure balanced structures at hourly scales. Moreover, nearly-incompressible effects (see Section 6.1) do not seem to play any relevant role. Thus, polar observations do not show major differences when compared with ecliptic observations in fast wind, the only possible difference being a major role of pressure balanced structures.
The effect of compressive phenomena on Alfvénic correlations
A lack of δV − δB correlation does not strictly indicate a lack of Alfvénic fluctuations, since a superposition of both outward and inward oriented fluctuations of the same amplitude would produce a very low correlation as well. In addition, the rather complicated scenario at the base of the corona, where both kinetic and magnetic phenomena contribute to the birth of the wind, suggests that the imprint of such a structured corona is carried away by the wind during its expansion. At this point, we would expect that solar wind fluctuations are not solely due to the ubiquitous Alfvénic and other MHD propagating modes, but also to an underlying structure convected by the wind, not necessarily characterized by Alfvén-like correlations. Moreover, dynamical interactions between fast and slow wind, built up during the expansion, contribute to increase the compressibility of the medium.
It has been suggested that disturbances of the mean magnetic field intensity and plasma density act destructively on the δV − δB correlation. Bruno and Bavassano (1993) analyzed the loss of the Alfvénic character of interplanetary fluctuations in the inner heliosphere within the low frequency part of the Alfvénic range, i.e., between 2 and 10 h. Figure 79, from their work, shows the wind speed profile, σc, the correlation coefficients, phase and coherence for the three components (see Appendix B.2.1), the angle between the magnetic field and velocity minimum variance directions, and the heliocentric distance. Magnetic field sectors were rectified (see Appendix B.3) and magnetic field and velocity components were rotated into the magnetic field minimum variance reference system (see Appendix D). Although the three components behave in a similar way, the most Alfvénic ones are the two components Y and Z transverse to the minimum variance component X. As a matter of fact, for an Alfvén mode we would expect a high δV − δB correlation, a phase close to zero for outward waves, and a high coherence. Moreover, it is rather clear that the most Alfvénic intervals are located within the trailing edges of high velocity streams. However, as the radial distance increases, the Alfvénic character of the fluctuations decreases and the angle Θbu increases. The same authors found that high values of Θbu are associated with low values of σc and correspond to the most compressive intervals. They concluded that the depletion of the Alfvénic character of the fluctuations, within the hourly frequency range, might be driven by the interaction with static structures or magnetosonic perturbations able to modify the homogeneity of the background medium on spatial scales comparable to the wavelength of the Alfvénic fluctuations. A subsequent paper by Klein et al. (1993) showed that the δV − δB decoupling increases with the plasma β, suggesting that in regions where the local magnetic field is less relevant, compressive events play a major role in this phenomenon.
Wind speed profile V and |σc|V are shown in the top panel. The lower three panels refer to the correlation coefficient, phase angle, and coherence for the three components of the δV and δB fluctuations, respectively. The successive panel indicates the value of the angle between the magnetic field and velocity fluctuations minimum variance directions. The bottom panel refers to the heliocentric distance (from Bruno and Bavassano, 1993).
A Natural Wind Tunnel
The solar wind has been used as a wind tunnel by Burlaga who, at the beginning of the 1990s, started to investigate anomalous fluctuations (Burlaga, 1991a,b,c, 1995) as observed by measurements in the outer heliosphere by the Voyager spacecraft. In 1991, Marsch, in a review on solar wind turbulence given at the Solar Wind Seven conference, underlined the importance of investigating scaling laws in the solar wind, and we like to report his sentence: "The recent work by Burlaga (1991a,b) opens in my mind a very promising avenue to analyze and understand solar wind turbulence from a new theoretical vantage point. ... This approach may also be useful for MHD turbulence. Possible connections between intermittent turbulence and deterministic chaos have recently been investigated ... We are still waiting for applications of these modern concepts of chaos theory to solar wind MHD fluctuations." (cf. Marsch, 1992, p. 503). A few years later Carbone (1993) and, independently, Biskamp (1993) faced the question of anomalous scaling from a theoretical point of view. More than ten years later the investigation of the statistical mechanics of MHD turbulence on one side, and of low-frequency solar wind turbulence on the other, has produced a lot of papers, and is now mature enough to be tentatively presented in a more organic way.
Scaling exponents of structure functions
The phenomenology of turbulence developed by Kolmogorov (1941) deals with some statistical hypotheses for fluctuations. The famous footnote remark by Landau (Landau and Lifshitz, 1971) pointed out a defect in the Kolmogorov theory, namely the fact that the theory does not take proper account of spatial fluctuations of the local dissipation rate (Frisch, 1995). This led different authors to investigate the features related to scaling laws of fluctuations and, in particular, to investigate the departure from Kolmogorov's linear scaling of the structure functions (cf. Section 2.8). An up-to-date comprehensive review of these theoretical efforts can be found in the book by Frisch (1995).
Here we are interested in understanding what we can learn from solar wind turbulence about the basic features of scaling laws for fluctuations. We use velocity and magnetic field time series, and we investigate the scaling behavior of the high-order moments of stochastic variables defined as variations of fields separated by a time interval τ. First of all, it is worthwhile to remark that scaling laws and, in particular, the exact relation (41) which defines the inertial range in fluid flows, are valid for longitudinal (streamwise) fluctuations. In common fluid flows the Kolmogorov linear scaling law is compared with the moments of longitudinal velocity differences. In the same way, for solar wind turbulence we investigate the scaling behavior of Δuτ = u(t+τ)−u(t), where u(t) represents the component of the velocity field along the radial direction. As far as the magnetic differences Δbτ = B(t+τ) − B(t) are concerned, we are free to make different choices and, in some sense, this is more interesting from an experimental point of view. We can use the reference system where B(t) represents the magnetic field projected along the radial direction, or the system where B(t) represents the magnetic field along the local background magnetic field, or the one where B(t) represents the field along the minimum variance direction. As a different case we can simply investigate the scaling behavior of the fluctuations of the magnetic field intensity.
Let us consider the p-th moments of the absolute values of velocity fluctuations, Rp(τ) = 〈|Δuτ|p〉, and of magnetic fluctuations, Sp(τ) = 〈|Δbτ|p〉, also called p-th order structure functions in the literature (brackets denoting time averages). Here we use magnetic fluctuations across structures at intervals τ calculated by using the magnetic field intensity. Typical structure functions of magnetic field fluctuations, for two different values of p, for both a slow wind and a fast wind at 0.9 AU, are shown in Figure 80. The magnetic field we used is that measured by the Helios 2 spacecraft. Structure functions calculated for the velocity fields have roughly the same shape. Looking at these figures, the typical scaling features of turbulence can be observed. Starting from low values at small scales, the structure functions increase towards a region where Sp → const. at the largest scales. This means that at these scales the field fluctuations are uncorrelated. A kind of "inertial range", that is, a region of intermediate scales τ where a power law can be recognized for both
$$R_p (\tau ) = \left\langle {\left| {\Delta u_\tau } \right|^p } \right\rangle \sim \tau ^{\zeta _p }, \qquad S_p (\tau ) = \left\langle {\left| {\Delta b_\tau } \right|^p } \right\rangle \sim \tau ^{\xi _p },$$
is more or less visible only for the slow wind. In this range correlations exist, and we can obtain the scaling exponents ζp and ξp through a simple linear fit.
Structure functions for the magnetic field intensity Sn(r) for two different orders, n = 3 and n = 5, for both slow wind and fast wind, as a function of the time scale r. Data come from Helios 2 spacecraft at 0.9 AU.
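In practice, computing Sp(τ) from a regularly sampled series and extracting the exponent by a linear fit in log-log space takes only a few lines. The sketch below is a generic implementation of this procedure (the lag values and the fitting range are left to the user, since the inertial range must be identified separately).

```python
import numpy as np

def structure_function(x, lags, p):
    """p-th order structure function S_p(tau) = <|x(t+tau) - x(t)|^p>
    for a regularly sampled 1D series x; lags are integers in samples."""
    return np.array([np.mean(np.abs(x[lag:] - x[:-lag])**p) for lag in lags])

def scaling_exponent(x, lags, p):
    """Slope of log S_p vs. log tau over the chosen range of lags,
    i.e., the scaling exponent zeta_p (or xi_p for magnetic data)."""
    sp = structure_function(x, np.asarray(lags), p)
    slope, _ = np.polyfit(np.log(lags), np.log(sp), 1)
    return slope
```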
Since, as we have seen, Yaglom's law is observed only in a few samples, the inertial range in the whole solar wind is not well defined. A look at Figure 80 clearly shows that we are in a situation similar to a low-Reynolds number fluid flow. In order to compare the scaling exponents of the solar wind turbulent fluctuations with other experiments, it is perhaps better to try to recover exponents using the Extended Self-Similarity (ESS), introduced some time ago by Benzi et al. (1993) and used here as a tool to determine relative scaling exponents. In the fluid-like case, the third-order structure function can be regarded as a generalized scaling variable, using the inverse of Equation (42) or of Equation (41) (Politano et al., 1998). Then, we can plot the p-th order structure function vs. the third-order one to recover at least the relative scaling exponents ζp/ζ3 and ξp/ξ3 (61). Quite surprisingly (see Figure 81), we find that the range where a power law can be recovered extends well beyond the inertial range, covering almost all the experimental range. In the fluid case the scaling exponents which can be obtained through ESS at low or moderate Reynolds numbers coincide with the scaling exponents obtained for high Reynolds numbers, where the inertial range is very well defined (Benzi et al., 1993). This is due to the fact that, by definition, ζ3 = 1 in the inertial range (Frisch, 1995), whatever its extension might be. In our case the scaling exponents obtained through ESS can be used as a surrogate, since we cannot be sure that an inertial range exists.
Structure functions Sn(r) for two different orders, n = 3 and n = 5, for both slow wind and fast wind, as a function of the fourth-order structure function S4(r). Data come from the Helios 2 spacecraft at 0.9 AU.
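The ESS procedure amounts to replacing the abscissa τ with a reference-order structure function. A minimal, self-contained sketch (our own, under the assumption of a regularly sampled series):

```python
import numpy as np

def ess_relative_exponent(x, lags, p, ref=3):
    """Extended Self-Similarity: slope of log S_p vs. log S_ref, giving
    the relative exponent zeta_p / zeta_ref even when S_p(tau) itself
    shows no clean power law in tau."""
    lags = np.asarray(lags)
    sp = np.array([np.mean(np.abs(x[lag:] - x[:-lag])**p) for lag in lags])
    sref = np.array([np.mean(np.abs(x[lag:] - x[:-lag])**ref) for lag in lags])
    slope, _ = np.polyfit(np.log(sref), np.log(sp), 1)
    return slope
```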
It is worthwhile to remark (as shown in Figure 81) that we can introduce a general scaling relation between the q-th order structure function and the p-th order structure function, with a relative scaling exponent αp(q). It has been found that this relation becomes an exact relation
$$S_q (r) = [S_p (r)]^{\alpha _p (q)} ,$$
when the velocity structure functions are normalized to the average velocity within each period used to calculate the structure function (Carbone et al., 1996a). This is very interesting because it implies (Carbone et al., 1996a) that the above relationship is satisfied by the following probability distribution function, if we assume that odd moments are much smaller than the even ones:
$$PDF(\Delta u_\tau ) = \int_{ - \infty }^\infty {dk e^{ik\Delta u_\tau } } \sum\limits_{q = 0}^\infty {\frac{{(ik)^{2q} }} {{2\pi (2q)!}}[S_p (\tau )]^{\alpha _p (2q)} .}$$
That is, for each scale τ the knowledge of the relative scaling exponents αp(q) completely determines the probability distribution of velocity differences as a function of a single parameter Sp(τ).
Relative scaling exponents, calculated by using data coming from Helios 2 at 0.9 AU, are reported in Table 1. As can be seen, two main features can be noted:
Table 1: Scaling exponents for velocity ζp and magnetic ξp variables calculated through ESS. Errors represent the standard deviations of the linear fitting. The data used come from a turbulent sample of slow wind at 0.9 AU from the Helios 2 spacecraft. As a comparison we show the normalized scaling exponents of structure functions calculated in a wind tunnel on Earth (Ruíz-Chavarría et al., 1995) for velocity and temperature. The temperature is a passive scalar in this experiment.
There is a significant departure from the Kolmogorov linear scaling, that is, the real scaling exponents are anomalous and seem to be non-linear functions of p: ζp/ζ3 > p/3 for p < 3, while ζp/ζ3 < p/3 for p > 3. The same behavior can be observed for ξp/ξ3. In Table 1 we also report the scaling exponents obtained in usual fluid flows for velocity and temperature, the latter being a passive scalar. The scaling exponents for the velocity field are similar to those obtained in turbulent flows on Earth, showing a kind of universality in the anomaly. This effect is commonly attributed to the phenomenon of intermittency in fully developed turbulence (Frisch, 1995). Turbulence in the solar wind is intermittent, just like its fluid counterpart on Earth.
The degree of intermittency is measured through the distance between the curve ζp/ζ3 and the linear scaling p/3. It can be seen that the magnetic field is more intermittent than the velocity field. The same difference is observed between the velocity field and a passive scalar (in our case the temperature) in ordinary fluid flows (Ruíz-Chavarría et al., 1995). That is, as far as intermittency properties are concerned, the magnetic field has the same scaling laws as a passive field. Of course this does not mean that the magnetic field plays the same role as a passive field; statistical properties are in general different from dynamical properties.
In Table 1 we show scaling exponents up to the sixth order. Actually, a question concerns the validation of high-order moment estimates, that is, the maximum value of the order p which can be determined with a finite number of points in our dataset. As the value of p increases, we need an increasing number of points for an optimal determination of the structure function (Tennekes and Wyngaard, 1972). Anomalous scaling laws are generated by rare and intense events due to singularities in the gradients: the higher their intensity, the more rare these events are. Of course, when the data set has a finite extent, the probability of getting singularities stronger than a certain value approaches zero. In that case, scaling exponents ζp of order higher than a certain value become linear functions of p. Actually, the structure function Sp(τ) depends on the probability distribution function PDF(Δuτ) through
$$S_p (\tau ) = \int {\Delta u_\tau ^p \, PDF(\Delta u_\tau )\, d\Delta u_\tau }$$
and the function Sp is determined only when the integral converges. As p increases, the function Fp(Δuτ) = (Δuτ)^p PDF(Δuτ) becomes more and more disturbed, with some spikes, so that the integral becomes more and more undefined, as can be seen for example in Figure 1 of the paper by Dudok de Wit (2004). A simple calculation (Dudok de Wit, 2004) of the maximum order pm which can reliably be estimated with a given number N of points in the dataset gives the empirical criterion
$$p_m \simeq \log N .$$
Table 2: Normalized scaling exponents ξp/ξ3 for radial magnetic fluctuations in a laboratory plasma, as measured at different distances a/R (R ≃ 0.45 cm being the minor radius of the torus in the experiment) from the external wall. Errors represent the standard deviations of the linear fitting. Scaling exponents have been obtained using the ESS.
Structure functions of order p > pm cannot be determined accurately.
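This criterion is easy to apply before trusting any high-order exponent. A small sketch (reading log as the natural logarithm, which is one common interpretation of the criterion):

```python
import numpy as np

def max_reliable_order(n_points):
    """Empirical criterion p_m ~ log N for the highest structure-function
    order that N samples can constrain (Dudok de Wit, 2004)."""
    return int(np.floor(np.log(n_points)))

# Examples: N = 3e4 samples gives p_m ~ 10, while a 2000-point interval
# only supports p_m ~ 7.
```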
Only a few large structures are enough to generate the anomalous scaling laws. In fact, as shown by Salem et al. (2009), by suppressing through wavelet analysis just a small percentage of large structures on all scales, the scaling exponents become linear functions of p, namely p/4 and p/3 for the kinetic and magnetic fields, respectively.
As far as a comparison between different plasmas is concerned, the scaling exponents of magnetic structure functions, obtained from laboratory plasma experiments in a Reversed-Field Pinch at different distances from the external wall (Carbone et al., 2000), are shown in Table 2. In laboratory plasmas it is difficult to measure all the components of the vector field at the same time; thus, here we show only the scaling exponents obtained using magnetic field differences Br(t+τ)−Br(t) calculated from the radial component in a toroidal device where the z-axis is directed along the axis of the torus. As can be seen, intermittency in magnetic turbulence is not as strong as it appears to be in the solar wind; actually, the degree of intermittency increases when going toward the external wall. This last feature appears to be similar to what is currently observed in channel flows, where intermittency also increases when going towards the external wall (Pope, 2000).
Scaling exponents of structure functions for Alfvén variables, velocity, and magnetic variables have also been calculated for high resolution 2D incompressible MHD numerical simulations (Politano et al., 1998). In this case, we are freed from the constraint of the Taylor hypothesis when calculating the fluctuations at a given scale. From 2D simulations we recover the fields u(r, t) and b(r, t) at some fixed times. We calculate the longitudinal fluctuations directly in space at a fixed time, namely Δuℓ = [u(r+ℓ, t)− u(r, t)] · ℓ/ℓ (and the same is done for the other fields, namely the magnetic field or the Elsässer fields). Finally, averaging both in space and time, we calculate the scaling exponents through the structure functions. These scaling exponents are reported in Table 3. Note that, even in numerical simulations, intermittency for magnetic variables is stronger than for the velocity field.
Probability distribution functions and self-similarity of fluctuations
The presence of scaling laws for fluctuations is a signature of the presence of self-similarity in the phenomenon. A given observable u(ℓ), which depends on a scaling variable ℓ, is invariant with respect to the scaling relation ℓ → λℓ when there exists a parameter μ(λ) such that u(ℓ) = μ(λ)u(λℓ). The solution of this last relation is a power law u(ℓ) = Cℓ^h, where the scaling exponent is h = −log_λ μ.
Since, as we have just seen, turbulence is characterized by scaling laws, this must be a signature of self-similarity for fluctuations. Let us see what this means. Let us consider fluctuations at two different scales, namely Δz^±_ℓ and Δz^±_{λℓ}. Their ratio Δz^±_{λℓ}/Δz^±_ℓ depends only on the value of h, and this should imply that fluctuations are self-similar. This means that the PDFs are related through
$$P(\Delta z_{\lambda \ell }^ \pm ) = PDF(\lambda ^h \Delta z_\ell ^ \pm ).$$
Table 3: Normalized scaling exponents ξp/ξ3 for Alfvénic, velocity, and magnetic fluctuations obtained from data of high resolution 2D MHD numerical simulations. Scaling exponents have been calculated from spatial fluctuations; different times, in the statistically stationary state, have been used to improve statistics. The scaling exponents have been calculated by ESS using Equation (41) as characteristic scale rather than the third-order structure function (cf. Politano et al., 1998, for details).
Let us consider the normalized variables
$$y_\ell ^ \pm = \frac{{\Delta z_\ell ^ \pm }} {{\left\langle {(\Delta z_\ell ^ \pm )^2 } \right\rangle ^{1/2} }}.$$
When h is unique, i.e., in a pure self-similar situation, the PDFs are related through P(y^±_ℓ) = PDF(y^±_{λℓ}), that is, by changing scale the PDFs coincide.
The PDFs of the normalized magnetic fluctuations δbτ = Δbτ/〈Δbτ^2〉^{1/2}, at three different scales τ, are shown in Figure 82. It appears evident that the global self-similarity in real turbulence is broken. The PDFs do not coincide at different scales; rather, their shape seems to depend on the scale τ. In particular, at large scales the PDFs seem to be almost Gaussian, but they become more and more stretched as τ decreases. At the smallest scale the PDFs are stretched exponentials. This scale dependence of the PDFs is a different way of saying that the scaling exponents of the fluctuations are anomalous, or can be taken as a different definition of intermittency. Note that the wings of the PDFs are higher than those of a Gaussian function. This implies that intense fluctuations have a probability of occurrence higher than they would have if they were Gaussianly distributed. Said differently, intense stochastic fluctuations are less rare than we would expect from the point of view of a Gaussian approach to the statistics. These fluctuations play a key role in the statistics of turbulence. The same statistical behavior can be found in different experiments related to the study of the atmosphere (see Figure 83) and of laboratory plasmas (see Figure 84).
Left panel: normalized PDFs for the magnetic fluctuations observed in solar wind turbulence by using Helios data. Right panel: distribution function of waiting times Δt between structures at the smallest scale. The parameter β is the scaling exponent of the scaling relation PDF(Δt) ~ Δt^{−β} for the distribution function of waiting times.
Left panel: normalized PDFs of velocity fluctuations in atmospheric turbulence. Right panel: distribution function of waiting times Δt between structures at the smallest scale. The parameter β is the scaling exponent of the scaling relation PDF(Δt) ~ Δt^{−β} for the distribution function of waiting times. The turbulent samples have been collected above a grass-covered forest clearing at 5 m above the ground surface and at a sampling rate of 56 Hz (Katul et al., 1997).
Left panel: normalized PDFs of the radial magnetic field collected in RFX magnetic turbulence (Carbone et al., 2000). Right panel: distribution function of waiting times Δt between structures at the smallest scale. The parameter β is the scaling exponent of the scaling relation PDF(Δt) ~ Δt^{−β} for the distribution function of waiting times.
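The scale-by-scale comparison of normalized PDFs shown in these figures can be reproduced with a simple histogram of standardized increments. A generic sketch (the bin count and lag values are arbitrary choices, not taken from the cited analyses):

```python
import numpy as np

def normalized_increment_pdf(x, lag, bins=50):
    """Empirical PDF of increments normalized to unit variance,
    i.e., of delta_x / <delta_x^2>^(1/2) at the given lag."""
    dx = x[lag:] - x[:-lag]
    y = dx / np.sqrt(np.mean(dx**2))
    pdf, edges = np.histogram(y, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, pdf

# Overplotting the PDFs for a set of increasing lags makes the growth of
# the non-Gaussian wings at small scales directly visible.
```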
What is intermittent in the solar wind turbulence? The multifractal approach
The time dependence of Δuτ and Δbτ for three different scales τ is shown in Figures 85 and 86, respectively. These plots show that, as τ becomes small, intense fluctuations become more and more important, and they dominate the statistics. Fluctuations at large scales appear to be smooth while, as the scale becomes smaller, intense fluctuations become visible. These dominating fluctuations represent relatively rare events. Actually, at the smallest scales, the time behavior of both Δuτ and Δbτ is dominated by regions where fluctuations are low, in between regions where fluctuations are intense and turbulent activity is very high. Of course, this behavior cannot be described by a global self-similar approach. It appears more convincing to allow the scaling laws to vary with the region of turbulence we are investigating.
The behavior we have just described is at the heart of the multifractal approach to turbulence (Frisch, 1995). In that description of turbulence, even if the small scales of fluid flows cannot be globally self-similar, self-similarity can be reintroduced as a local property. In the multifractal description it is conjectured that turbulent flows can be made of an infinite set of points S_h(r), each set being characterized by a scaling law Δz^±_ℓ ~ ℓ^{h(r)}, that is, the scaling exponent can depend on the position r. The dimension of each set is then not constant, but depends on the local value of h, and is denoted by D(h) in the literature. Then, the probability of occurrence of a given fluctuation can be calculated through the weight the fluctuation assumes within the whole flow, i.e.,
$$P(\Delta z_\ell ^ \pm ) \sim (\Delta z_\ell ^ \pm )^h \times (\text{volume occupied by fluctuations}),$$
and the p-th order structure function is immediately written through the integral over all (continuous) values of h, weighted by a smooth function μ(h) ∼ O(1), i.e.,
$$S_p (\ell ) = \int {\mu (h)\, \ell^{ph}\, \ell^{3 - D(h)}\, dh} .$$
Differences for the longitudinal velocity Δuτ = u(t + τ) − u(t) at three different scales τ, as shown in the figure.
Differences for the magnetic intensity Δbτ = B(t + τ) − B(t) at three different scales τ, as shown in the figure.
A moment of reflection allows us to realize that in the limit ℓ → 0 the integral is dominated by the minimum value (over h) of the exponent and, as shown by Frisch (1995), the integral can be formally solved using the usual saddle-point method. The scaling exponents of the structure function can then be written as
$$\zeta _p = \mathop {\min }\limits_h [ph + 3 - D(h)].$$
(62f)
In this way, the departure of ζp from the linear Kolmogorov scaling, and thus intermittency, can be characterized by the continuous changing of D(h) as h varies. That is, as p varies we are probing regions of the fluid where ever more rare and intense events exist. These regions are characterized by small values of h, that is, by stronger singularities of the gradient of the field.
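Numerically, the saddle-point result is just a Legendre-type minimization over h on a grid. The sketch below evaluates ζp for an illustrative, hypothetical parabolic D(h); a realistic multifractal spectrum would be further constrained, e.g., so that ζ3 = 1.

```python
import numpy as np

def zeta_p(p, h_grid, D):
    """zeta_p = min_h [p*h + 3 - D(h)], evaluated on a grid of h values;
    D is a callable returning the dimension of the set with exponent h."""
    return np.min(p * h_grid + 3.0 - D(h_grid))

# Hypothetical parabolic spectrum centered on the Kolmogorov value h = 1/3:
h = np.linspace(0.0, 1.0, 1001)
D_model = lambda hh: 3.0 - 20.0 * (hh - 1.0 / 3.0)**2
exponents = [zeta_p(p, h, D_model) for p in range(1, 7)]
# The resulting curve bends increasingly below the linear scaling p/3 as
# p grows, the signature of intermittency discussed in the text.
```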
Owing to the famous Landau footnote on the fact that fluctuations of the energy transfer rate must be taken into account in determining the statistics of turbulence, people tried to interpret the non-linear energy cascade typical of turbulence theory within a geometrical framework. Richardson's old picture of turbulent behavior as the result of a hierarchy of eddies at different scales has been modified and, as realized by Kraichnan (1974), once we leave the idea of a constant energy cascade rate we open a "Pandora's box" of possibilities for modeling the energy cascade. By looking at scaling laws for Δz_ℓ^± and introducing the scaling exponents for the energy transfer rate ⟨ε_r^p⟩ ∼ r^{τ_p}, it can be found that ζ_p = p/m + τ_{p/m} (being m = 3 when the Kolmogorov-like phenomenology is taken into account, or m = 4 when the Iroshnikov-Kraichnan phenomenology holds). In this way the intermittency corrections are determined by a cascade model for the energy transfer rate. When τ_p is a non-linear function of p, the energy transfer rate can be described within the multifractal geometry (see, e.g., Meneveau, 1991, and references therein) characterized by the generalized dimensions D_p = 1 − τ_p/(p − 1) (Hentschel and Procaccia, 1983). The scaling exponents of the structure functions are then related to D_p by
$$\zeta _p = \left( {\frac{p} {m} - 1} \right)D_{p/m} + 1.$$
(62g)
The correction to the linear scaling p/m is positive for p < m, negative for p > m, and zero for p = m. A fractal behavior where D_p = const. < 1 gives a linear correction with a slope different from 1/m.
Fragmentation models for the energy transfer rate
Cascade models view turbulence as a collection of fragments at a given scale ℓ, which result from the fragmentation of structures at the scale ℓ' > ℓ, down to the dissipative scale (Novikov, 1969). Sophisticated statistics are applied to obtain scaling exponents ζ_p for the p-th order structure function.
The starting point of fragmentation models is the old β-model, a "pedagogical" fractal model introduced by Frisch et al. (1978) to account for the modification of the cascade in a simple way. In this model, the cascade is realized through the conjecture that active eddies and non-active eddies are present at each scale, the space-filling factor for the fragments being fixed for each scale. Since it is a fractal model, the β-model gives a linear modification to ζ_p. This can account for a fit on the data, as far as small values of p are concerned. However, the whole curve ζ_p is clearly nonlinear, and a multifractal approach is needed.
The random-β model (Benzi et al., 1984), a multifractal modification of the β-model, can be derived by assuming that the space-filling factor for the fragments at a given scale in the energy cascade is not fixed, but is given by a random variable β. The probability of occurrence of a given β is assumed to be a bimodal distribution where the eddy fragmentation process generates either space-filling eddies with probability ξ or planar sheets with probability (1 − ξ) (for conservation 0 ≤ ξ ≤ 1). It can be found that
$$\zeta _p = \frac{p} {m} - \log _2 [1 - \xi + \xi 2^{p/m - 1} ],$$
where the free parameter ξ can be fixed through a fit on the data.
The p-model (Meneveau, 1991; Carbone, 1993) consists in an eddy fragmentation process described by a two-scale Cantor set with equal partition intervals. An eddy at the scale ℓ, with an energy derived from the transfer rate ε_r, breaks down into two eddies at the scale ℓ/2, with energies με_r and (1 − μ)ε_r. The parameter 0.5 ≤ μ ≤ 1 is not defined by the model, but is fixed from the experimental data. The model gives
$$\zeta _p = 1 - \log _2 [\mu ^{p/m} + (1 - \mu )^{p/m} ].$$
In the model by She and Leveque (see, e.g., She and Leveque, 1994; Politano and Pouquet, 1998) one assumes an infinite hierarchy for the moments of the energy transfer rates, leading to ε_r^{(p+1)} ∼ [ε_r^{(p)}]^β [ε_r^{(∞)}]^{1−β}, and a divergent scaling law for the infinite-order moment ε_r^{(∞)} ∼ r^{−x}, which describes the most singular structures within the flow. The model reads
$$\zeta _p = \frac{p} {m}(1 - x) + C\left[ {1 - \left( {1 - \frac{x} {C}} \right)^{p/m} } \right].$$
The parameter C = x/(1 − β) is identified as the codimension of the most singular structures. In the standard MHD case (Politano and Pouquet, 1995) x = β = 1/2, so that C = 1, that is, the most singular dissipative structures are planar sheets. On the contrary, in fluid flows C = 2 and the most dissipative structures are filaments. The large-p behavior of the p-model is given by ζ_p ∼ (p/m) log₂(1/μ) + 1, so that Equations (64, 65) give the same results provided μ ≃ 2^{−x}. As shown by Carbone et al. (1996b), all models are able to capture intermittency of fluctuations in the solar wind. The agreement between the curves ζ_p and the normalized scaling exponents is excellent, and this means that we realistically cannot discriminate between the models reported above. The main problem is that all models are based on a conjecture which gives a curve ζ_p as a function of a single free parameter, and that curve is able to fit the smooth observed behavior of ζ_p. Statistics cannot prove, just disprove. We can distinguish between the fractal model and multifractal models, but we cannot realistically distinguish among the various multifractal models.
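Since the three parameterized curves are elementary, their near-degeneracy over the range of p accessible to data is easy to reproduce. The sketch below evaluates the random-β, p-model, and She-Leveque formulas above; the parameter values ξ, μ, x, and C are illustrative placeholders, not fitted values:

```python
import numpy as np

m = 3  # Kolmogorov-like phenomenology; use m = 4 for Iroshnikov-Kraichnan

def zeta_random_beta(p, xi=0.7):
    # random-beta model; xi is the space-filling probability (free parameter)
    return p / m - np.log2(1 - xi + xi * 2.0 ** (p / m - 1))

def zeta_p_model(p, mu=0.87):
    # p-model, Equation (64); mu in [0.5, 1] is fixed from experimental data
    return 1 - np.log2(mu ** (p / m) + (1 - mu) ** (p / m))

def zeta_she_leveque(p, x=2.0 / 3.0, C=2.0):
    # She-Leveque model, Equation (65); x = 2/3, C = 2 is the fluid case
    return p / m * (1 - x) + C * (1 - (1 - x / C) ** (p / m))

for p in range(1, 7):
    print(f"p = {p}: random-beta {zeta_random_beta(p):.3f}, "
          f"p-model {zeta_p_model(p):.3f}, She-Leveque {zeta_she_leveque(p):.3f}")
```

Note that all three curves pass through ζ_3 = 1 for m = 3, as required by the exact Kolmogorov relation, whatever the free parameter.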
A model for the departure from self-similarity
Besides the idea of self-similarity underlying the process of energy cascade in turbulence, a different point of view can be introduced. The idea is to characterize the behavior of the PDFs through the scaling laws of the parameters which describe how the shape of the PDFs changes when going towards small scales. The model, originally introduced by Castaing et al. (2001), is based on a multiplicative process describing the cascade. In its simplest form the model can be introduced by saying that PDFs of increments δZ_ℓ^±, at a given scale, are made as a sum of Gaussian distributions with different widths σ = ⟨(δZ_ℓ^±)²⟩^{1/2}. The distribution of widths is given by G_λ(σ), namely
$$P(\delta Z_\ell ^ \pm ) = \frac{1} {{2\pi }}\int_0^\infty {G_\lambda (\sigma )\exp \left( { - \frac{{\left( {\delta Z_\ell ^ \pm } \right)^2 }} {{2\sigma ^2 }}} \right)\frac{{d\sigma }} {\sigma }} .$$
In a purely self-similar situation, where the energy cascade generates only a trivial variation of σ with scales, the width of the distribution G_λ(σ) is zero and, invariably, we recover a Gaussian distribution for P(δZ_ℓ^±). On the contrary, when the cascade is not strictly self-similar, the width of G_λ(σ) is different from zero and the scaling behavior of the width λ² of G_λ(σ) can be used to characterize intermittency.
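A minimal numerical sketch of this construction follows; a log-normal shape for G_λ(σ) is assumed here purely for illustration (the model itself does not fix G_λ), and the resulting PDF is renormalized numerically. Sending λ → 0 recovers a single Gaussian, while increasing λ produces the fat tails characteristic of intermittency:

```python
import numpy as np

def castaing_pdf(dz, lam, sigma0=1.0, n_sigma=500):
    # Superpose Gaussians of width sigma with a log-normal weight G_lambda(sigma)
    # of width lam around sigma0 (an assumed, illustrative choice of G_lambda).
    sigmas = np.logspace(-2, 2, n_sigma)
    G = np.exp(-np.log(sigmas / sigma0) ** 2 / (2 * lam ** 2)) \
        / (np.sqrt(2 * np.pi) * lam)
    integrand = G * np.exp(-dz[:, None] ** 2 / (2 * sigmas ** 2)) / sigmas
    pdf = np.trapz(integrand, sigmas, axis=1) / (2 * np.pi)
    return pdf / np.trapz(pdf, dz)   # renormalize to unit area

dz = np.linspace(-8, 8, 401)
for lam in (0.05, 0.5, 1.0):   # lam -> 0: a single Gaussian; larger lam: fat tails
    p = castaing_pdf(dz, lam)
    print(f"lambda = {lam:4.2f}: P(0) = {p[len(dz) // 2]:.4f}")
```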
Intermittency properties recovered via a shell model
Shell models have remarkable properties which closely resemble those typical of MHD phenomena (Gloaguen et al., 1985; Biskamp, 1994; Giuliani and Carbone, 1998; Plunian et al., 2012). However, the presence of a constant forcing term always induces a dynamical alignment, unless the model is forced appropriately, which invariably brings the system towards a state in which velocity and magnetic fields are strongly correlated, that is, where Z_n^± ≠ 0 and Z_n^∓ = 0. When we want to compare statistical properties of turbulence described by MHD shell models with solar wind observations, this term should be avoided. It is possible to replace the constant forcing term by an exponentially time-correlated Gaussian random forcing which is able to destabilize the Alfvénic fixed point of the model (Giuliani and Carbone, 1998), thus assuring the energy cascade. The forcing is obtained by solving the following Langevin equation:
$$\frac{{dF_n }} {{dt}} = - \frac{{F_n }} {\tau } + \mu (t),$$
where μ(t) is a Gaussian stochastic process δ-correlated in time, ⟨μ(t)μ(t')⟩ = 2Dδ(t' − t). This kind of forcing will be used to investigate statistical properties.
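The Langevin equation above defines an Ornstein-Uhlenbeck process, which admits an exact discrete-time update. The sketch below integrates it with illustrative values of τ, D, and the time step (not those of any specific shell-model run); the stationary variance Dτ provides a quick consistency check:

```python
import numpy as np

# Exact discrete update of the Ornstein-Uhlenbeck process defined by the
# Langevin equation above; tau, D and dt are illustrative values only.
rng = np.random.default_rng(0)
tau, D, dt, n_steps = 1.0, 0.5, 0.01, 100_000

a = np.exp(-dt / tau)                    # deterministic decay per step
s = np.sqrt(D * tau * (1.0 - a ** 2))    # noise amplitude per step
F = np.zeros(n_steps)
for i in range(1, n_steps):
    F[i] = a * F[i - 1] + s * rng.standard_normal()

# Consistency check: the stationary variance of the forcing is D * tau
print("var(F) =", F.var(), "  theory:", D * tau)
```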
We show the kinetic energy spectrum |u_n(t)|² as a function of log₂ k_n for the MHD shell model. The full line refers to the Kolmogorov spectrum k_n^{−2/3}.
A statistically stationary state is reached by the system (Gloaguen et al., 1985; Biskamp, 1994; Giuliani and Carbone, 1998; Plunian et al., 2012), with a well defined inertial range, say a region where Equation (49) is verified. Spectra for both the velocity |u_n(t)|² and magnetic |b_n(t)|² variables, as a function of k_n, obtained in the stationary state using the GOY MHD shell model, are shown in Figures 87 and 88. Fluctuations are averaged over time. The Kolmogorov spectrum is also reported as a solid line. It is worthwhile to remark that, by adding a random term like ik_n B₀(t) Z_n^± to a slightly modified version of the MHD shell models (B₀ is a random function with some statistical characteristics), a Kraichnan spectrum, say E(k_n) ∼ k_n^{−3/2}, where E(k_n) is the total energy, can be recovered (Biskamp, 1994; Hattori and Ishizawa, 2001). The term added to the model could represent the effect of the occurrence of a large-scale magnetic field.
Intermittency in the shell model is due to the time behavior of shell variables. It has been shown (Okkels, 1997) that the evolution of the GOY model consists of short bursts traveling through the shells and long periods of oscillations before the next burst arises. In Figures 89 and 90 we report the time evolution of the real part of both velocity variables u_n(t) and magnetic variables b_n(t) at three different shells. It can be seen that, while at smaller k_n the variables seem to be Gaussian, at larger k_n the variables present very sharp fluctuations in between very low fluctuations.
We show the magnetic energy spectrum |b_n(t)|² as a function of log₂ k_n for the MHD shell model. The full line refers to the Kolmogorov spectrum k_n^{−2/3}.
The time behavior of variables at different shells changes the statistics of fluctuations. In Figure 91 we report the probability distribution functions P(δu_n) and P(δB_n), for different shells n, of the normalized variables
$$\delta u_n = \frac{\Re e(u_n)}{\sqrt{\left\langle |u_n|^2 \right\rangle}} \quad \text{and} \quad \delta B_n = \frac{\Re e(b_n)}{\sqrt{\left\langle |b_n|^2 \right\rangle}},$$
where Re indicates that we take the real part of u_n and b_n. Typically we see that PDFs look different at different shells: at small k_n fluctuations are quite Gaussian distributed, while at large k_n they tend to become increasingly non-Gaussian, developing fat tails. Rare fluctuations have a probability of occurrence larger than that of a Gaussian distribution. This is the typical behavior of intermittency as observed in usual fluid flows and described in previous sections.
The same phenomenon gives rise to the departure of scaling laws of structure functions from a Kolmogorov scaling. Within the framework of the shell model the analogous of structure functions are defined as
$$\left\langle |u_n|^p \right\rangle \sim k_n^{-\xi_p}; \quad \left\langle |b_n|^p \right\rangle \sim k_n^{-\eta_p}; \quad \left\langle |Z_n^\pm|^p \right\rangle \sim k_n^{-\xi_p^\pm}.$$
For MHD turbulence it is also useful to report mixed correlators of the flux variables, i.e.,
$$\left\langle |T_n^\pm|^{p/3} \right\rangle \sim k_n^{-\beta_p^\pm}.$$
Scaling exponents have been determined from a least-squares fit in the inertial range 3 ≤ n ≤ 12. The values of these exponents are reported in Table 4. It is interesting to notice that, while the scaling exponents for velocity are the same as those found in the solar wind, the scaling exponents for the magnetic field found in the solar wind reveal a more intermittent character. Moreover, we notice that velocity, magnetic, and Elsässer variables are more intermittent than the mixed correlators, and we think that this could be due to cancellation effects among the different terms defining the mixed correlators.
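For concreteness, the exponent extraction just described amounts to a least-squares fit of log⟨|u_n|^p⟩ against log k_n over 3 ≤ n ≤ 12. Since shell-model output is not available here, the sketch below uses synthetic structure functions with known (made-up) slopes to illustrate the procedure:

```python
import numpy as np

# Least-squares extraction of scaling exponents over the inertial range
# 3 <= n <= 12; synthetic structure functions with made-up slopes stand in
# for shell-model output, which is not available here.
k0, lam = 1.0, 2.0
n = np.arange(1, 20)
kn = k0 * lam ** n                      # shell wavenumbers k_n

true_xi = {1: 0.36, 2: 0.71, 3: 1.03}   # arbitrary test exponents
inertial = (n >= 3) & (n <= 12)
for p, xi in true_xi.items():
    noise = 0.02 * np.random.default_rng(p).standard_normal(len(kn))
    Sp = kn ** (-xi) * (1.0 + noise)    # <|u_n|^p> ~ k_n^{-xi_p}, plus scatter
    slope, _ = np.polyfit(np.log(kn[inertial]), np.log(Sp[inertial]), 1)
    print(f"p = {p}: fitted xi_p = {-slope:.3f} (true {xi})")
```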
Time intermittency in the shell model generates rare and intense events. These events are the result of the chaotic dynamics in the phase space typical of the shell model (Okkels, 1997). That dynamics is characterized by a certain amount of memory, as can be seen through the statistics of waiting times between these events. The distributions P(δt) of waiting times are reported in the bottom panels of Figure 91, at a given shell n = 12. The same statistical law is observed for the bursts of total dissipation (Boffetta et al., 1999).
Time behavior of the real part of the velocity variable u_n(t) at three different shells n, as indicated in the different panels.
Time behavior of the real part of the magnetic variable b_n(t) at three different shells n, as indicated in the different panels.
In the first three panels we report PDFs of both velocity (left column) and magnetic (right column) shell variables, at three different shells ℓn. The bottom panels refer to probability distribution functions of waiting times between intermittent structures at the shell n = 12 for the corresponding velocity and magnetic variables.
Table 4: Scaling exponents for velocity and magnetic variables, Elsässer variables, and fluxes. Errors on β_p^± are about one order of magnitude smaller than the errors shown.
Observations of Yaglom's Law in Solar Wind Turbulence
To avoid the risk of misunderstanding, let us start by recalling that Yaglom's law (40) has been derived from a set of equations (MHD) and under assumptions which are far from representing an exact mathematical model for the solar wind plasma. Yaglom's law is valid in MHD under the hypotheses of incompressibility, stationarity, homogeneity, and isotropy. Also, the form used for the dissipative terms of the MHD equations is only valid for collisional plasmas, characterized by quasi-Maxwellian distribution functions, and in the case of equal kinematic viscosity and magnetic diffusivity coefficients (Biskamp, 2003). In solar wind plasmas the above hypotheses are only rough approximations, and MHD dissipative coefficients are not even defined (Tu and Marsch, 1995a). At frequencies higher than the ion cyclotron frequency, kinetic processes are indeed present, and a number of possible dissipation mechanisms can be discussed. When looking for Yaglom's law in the solar wind, the strong conjecture that the law remains valid for any form of the dissipative term is needed.
Despite the above considerations, Yaglom's law turns out to be surprisingly well verified in some solar wind samples. Results on the occurrence of Yaglom's law in the ecliptic plane have been reported by MacBride et al. (2008, 2010) and Smith et al. (2009) and, independently, in the polar wind by Sorriso-Valvo et al. (2007). It is worthwhile to note that the occurrence of Yaglom's law in the polar wind, where fluctuations are Alfvénic, represents a doubly surprising feature because, according to the usual phenomenology of MHD turbulence, a nonlinear energy cascade should be absent for Alfvénic turbulence.
In a first attempt to evaluate phenomenologically the value of the energy dissipation rate, MacBride et al. (2008) analyzed data from ACE to evaluate the occurrence of both the Kolmogorov 4/5-law and its MHD analog (40). Despite some words of caution related to spikes in wind speed and magnetic field strength caused by shocks and other imposed heliospheric structures, which constitute inhomogeneities in the data, the authors found that both relations are more or less verified in solar wind turbulence. They found a distribution for the energy dissipation rate, defined in that paper as ε = (ε_{ii}^+ + ε_{ii}^−)/2, with an average of about ε ≃ 1.22 × 10^4 J/kg s.
In order to avoid variations of the solar activity and ecliptic disturbances (like slow wind sources, coronal mass ejections, the ecliptic current sheet, and so on), and mainly mixing between fast and slow wind, Sorriso-Valvo et al. (2007) used high-speed polar wind data measured by the Ulysses spacecraft. In particular, the authors analyzed the first seven months of 1996, when the heliocentric distance slowly increased from 3 AU to 4 AU, while the heliolatitude decreased from about 55° to 30°. The third-order mixed structure functions have been obtained using 10-day moving averages, during which the fields can be considered as stationary. A linear scaling law, like the one shown in Figure 92, has been observed in a significant fraction of samples in the examined period, with a linear range spanning more than two decades. The linear law generally extends from a few minutes up to 1 day or more, and is present in about 20 periods of a few days in the 7 months considered. This probably reflects different regimes of driving of the turbulence by the Sun itself, and it is certainly an indication of the nonstationarity of the energy injection process. According to the formal definition of the inertial range in usual fluid flows, the authors attribute to the range where Yaglom's law appears the role of the inertial range in solar wind turbulence (Sorriso-Valvo et al., 2007). This range extends on scales larger than the usual range of scales where a Kolmogorov relation has been observed, say up to about a few hours (cf. Figure 25).
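For reference, the quantity being tested is the mixed third-order structure function Y^±(τ) = ⟨δz^∓(τ)[δz^±(τ)]²⟩ evaluated over time lags. The sketch below implements a one-component version of this estimator; the input series are random-walk placeholders standing in for the Elsässer variables (with real data, a range where Y^± grows linearly in τ identifies the inertial range):

```python
import numpy as np

# One-component estimator of the mixed third-order structure function
# Y^pm(tau) = < dz^mp (dz^pm)^2 >; random walks stand in for the Elsasser
# variables, so the output here only illustrates the mechanics.
def yaglom_Y(z_plus, z_minus, lags):
    Y = []
    for L in lags:
        dzp = z_plus[L:] - z_plus[:-L]
        dzm = z_minus[L:] - z_minus[:-L]
        Y.append(np.mean(dzm * dzp ** 2))
    return np.array(Y)

rng = np.random.default_rng(1)
z_plus = np.cumsum(rng.standard_normal(200_000))
z_minus = np.cumsum(rng.standard_normal(200_000))
lags = np.unique(np.logspace(0, 3, 20).astype(int))
print(np.round(yaglom_Y(z_plus, z_minus, lags), 2))
```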
An example of the linear scaling for the third-order mixed structure functions Y^±, obtained in the polar wind using Ulysses measurements. A linear scaling law represents a range of scales where Yaglom's law is satisfied. Image reproduced by permission from Sorriso-Valvo et al. (2007), copyright by APS.
Several other periods are found where the linear scaling range is reduced and, in particular, the sign of Y_ℓ^± is observed to be either positive or negative. In some other periods the linear scaling law is observed either for Y_ℓ^+ or Y_ℓ^− rather than for both quantities. It is worth noting that in a large fraction of cases the sign switches from negative to positive (or vice versa) at scales of about 1 day, roughly indicating the scale where the small-scale Alfvénic correlations between velocity and magnetic fields are lost. This should indicate that the nature of fluctuations changes across the break. The values of the pseudo-energy dissipation rates ε^± have been found to be of the order of a few hundred J/kg s, higher than those found in usual fluid flows, which are of the order of 1–50 J/kg s.
The occurrence of Yaglom's law in solar wind turbulence has been evidenced by a systematic study by MacBride et al. (2010), which, using ACE data, found a reasonable linear scaling for the mixed third-order structure functions, from about 64 s to several hours at 1 AU in the ecliptic plane. Assuming that the third-order mixed structure function is perpendicular to the mean field, or assuming that this function varies only with the component of the scale ℓ that is perpendicular to the mean field, and is cylindrically symmetric, Yaglom's law would reduce to a 2D state. On the other hand, if the third-order function is parallel to the mean field or varies only with the component of the scale that is parallel to the mean field, Yaglom's law would reduce to a 1D-like case. In both cases the result will depend on the angle between the average magnetic field and the flow direction, and the energy cascade rate varies in the range 10^3–10^4 J/kg s (see MacBride et al., 2010, for further details).
Quite interestingly, Smith et al. (2009) found that the pseudo-energy cascade rates derived from Yaglom's scaling law reveal a strong dependence on the amount of cross-helicity. In particular, they showed that when the correlation between velocity and magnetic fluctuations is higher than about 0.75, the third-order moment of the outward-propagating component, as well as those of the total energy and cross-helicity, are negative. As already suggested by Sorriso-Valvo et al. (2007), they attribute this phenomenon to a kind of inverse cascade, namely a back-transfer of energy from small to large scales within the inertial range of the dominant component. We should point out that experimental values of the energy transfer rate in the incompressive case, estimated with different techniques from different data sets (Vasquez et al., 2007; MacBride et al., 2010), are only partially in agreement with those obtained by Sorriso-Valvo et al. (2007). However, the different nature of the wind (ecliptic vs. polar, fast vs. slow, at different radial distances from the Sun) makes such a comparison only indicative.
As far as the scaling law (47) is concerned, Carbone et al. (2009a) found that a linear scaling for W_ℓ^±, as defined in (47), appears in almost all of the Ulysses dataset. In particular, the linear scaling for W_ℓ^± is verified even when there is no scaling at all for Y_ℓ^± (40). It has been observed (Carbone et al., 2009a) that a linear scaling for W_ℓ^+ appears in about half the whole signal, while W_ℓ^− displays scaling on about a quarter of the sample. The linear scaling law generally extends over about two decades, from a few minutes up to one day or more, as shown in Figure 93. At variance with the incompressible case, the two fluxes W_ℓ^± coexist in a large number of cases. The pseudo-energy dissipation rates so obtained are considerably larger than the corresponding values obtained in the incompressible case. In fact it has been found that on average ε^+ ≃ 3 × 10^3 J/kg s. This result shows that the nonlinear energy cascade in solar wind turbulence is considerably enhanced by density fluctuations, despite their small amplitude within the Alfvénic polar turbulence. Note that the new variables Δw_i^± are built by coupling the Elsässer fields with the density, before computing the scale-dependent increments. Moreover, the third-order moments are very sensitive to intense field fluctuations, which could arise when density fluctuations are correlated with velocity and magnetic field. Similar results, but with a considerably smaller effect, were found in numerical simulations of compressive MHD (Mac Low and Klessen, 2004).
The linear scaling relation is reported for both the usual third-order structure function Y_ℓ^+ and the same quantity built with the density-mediated variables W_ℓ^+. A linear relation (full line) is clearly observed. Data refer to the Ulysses spacecraft. Image reproduced by permission from Carbone et al. (2009a), copyright by APS.
Finally, it is worth reporting that the presence of Yaglom's law in solar wind turbulence is an interesting theoretical topic, because this is the first real experimental evidence that solar wind turbulence, at least at large scales, can be described within the magnetohydrodynamic model. In fact, Yaglom's law is an exact law derived from the MHD equations and, let us say once more, its occurrence in a medium like the solar wind is a welcome surprise. By the way, the presence of the law in the polar wind solves the paradox of the presence of Alfvénic turbulence, as first pointed out by Dobrowolny et al. (1980a). Of course, the presence of Yaglom's law generates some controversial questions about data selection and reliability, and a brief discussion on the extension of the inertial range. The interested reader can find some questions and relative answers in Physical Review Letters (Forman et al., 2010; Sorriso-Valvo et al., 2010).
closure of a set is closed
In this section, we define a "closed set" and introduce several traditional topological concepts, such as limit points and closure. If {A_i} is a finite collection of closed sets, then ∪_i A_i is closed. Note that the empty set and the whole space X are both closed and open, a property we call clopen. The closure of a set A is Ā = A ∪ Bdy(A), equivalently
$$\bar A=\bigcap_{\{C\text{ closed}\,\mid\, C\supseteq A\}}C,$$
so Ā is the smallest closed set containing A; i.e., Ā is closed and lies inside any closed set containing A. In fact, A is closed if and only if A = Ā. To prove that the closure itself is closed, one shows that the complement of the closure is open: by assumption the sets C are closed, so the sets X∖C are open, and the complement of the intersection above is the union of these open sets, hence open. Intuitively, an open set is a set that does not include its "boundary"; note that not every set is either open or closed, and in fact most subsets are neither. For a subspace Y, the closure in Y is taken with respect to the subspace topology.
EURASIP Journal on Advances in Signal Processing
Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values
Jianfeng Wang1,
Zhiyong Zhou2 &
Jun Yu1
EURASIP Journal on Advances in Signal Processing, volume 2019, Article number: 57 (2019)
In this paper, we introduce the q-ratio block constrained minimal singular values (BCMSV) as a new measure of the measurement matrix in compressive sensing of block sparse/compressible signals and present an algorithm for computing this new measure. Both the mixed ℓ2/ℓq and the mixed ℓ2/ℓ1 norms of the reconstruction errors for stable and robust recovery using block basis pursuit (BBP), the block Dantzig selector (BDS), and the group lasso in terms of the q-ratio BCMSV are investigated. We establish a sufficient condition based on the q-ratio block sparsity for the exact recovery from the noise-free BBP and develop a convex-concave procedure to solve the corresponding non-convex problem in the condition. Furthermore, we prove that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero with high probability when the number of measurements is reasonably large. Numerical experiments are implemented to illustrate the theoretical results. In addition, we demonstrate that the q-ratio BCMSV-based error bounds are tighter than the block restricted isometry constant (RIC)-based bounds.
Compressive sensing (CS) [1, 2] aims to recover an unknown sparse signal \(\mathbf {x}\in \mathbb {R}^{N}\) from m noisy measurements \(\mathbf {y} \in \mathbb {R}^{m}\):
$$\begin{array}{*{20}l} \mathbf{y}=A\mathbf{x}+\boldsymbol{\epsilon}, \end{array} $$
where \(A\in \mathbb {R}^{m\times N}\) is a measurement matrix with m≪N, and \(\boldsymbol {\epsilon }\in \mathbb {R}^{m}\) is additive noise such that ∥ε∥2≤ζ for some ζ≥0. It has been proven that if A satisfies the (stable/robust) null space property (NSP) or restricted isometry property (RIP), (stable/robust) recovery can be achieved [3, Chapter 4 and 6]. However, it is computationally hard to verify NSP and compute the restricted isometry constant (RIC) for an arbitrarily chosen A [4, 5]. To overcome the drawback, a new class of measures for the measurement matrix has been developed during the last decade. To be specific, [6] introduced a new measure called ℓ1-constrained minimal singular value (CMSV): \(\rho _{s}(A)=\min \limits _{\mathbf {z}\neq 0, \lVert \mathbf {z}\rVert _{1}^{2}/\lVert \mathbf {z}\rVert _{2}^{2}\leq s}\frac {\lVert A\mathbf {z}\rVert _{2}}{\lVert \mathbf {z}\rVert _{2}}\) and obtained the ℓ2 recovery error bounds in terms of the proposed measure for the basis pursuit (BP) [7], the Dantzig selector (DS) [8], and the lasso estimator [9]. Afterwards, [10] brought in a variant of the CMSV: \(\omega _{\lozenge }(A,s)=\min \limits _{\mathbf {z}\neq 0,\lVert \mathbf {z}\rVert _{1}/\lVert \mathbf {z}\rVert _{\infty }\leq s}\frac {\lVert A\mathbf {z}\rVert _{\lozenge }}{\lVert \mathbf {z}\rVert _{\infty }}\) with \(\lVert \cdot \rVert _{\lozenge }\) denoting a general norm and expressed the ℓ∞ recovery error bounds using this quantity. The latest progress concerning the CMSV can be found in [11, 12]. Zhou and Yu [11] generalized these two measures to a new measure called q-ratio CMSV: \(\rho _{q,s}(A)=\min \limits _{\mathbf {z}\neq 0, (\lVert \mathbf {z}\rVert _{1}/\lVert \mathbf {z}\rVert _{q})^{q/(q-1)}\leq s}\frac {\lVert A\mathbf {z}\rVert _{2}}{\lVert \mathbf {z}\rVert _{q}}\) with q∈(1,∞] and established both ℓq and ℓ1 bounds of recovery errors. Zhou and Yu [12] investigated geometrical property of the q-ratio CMSV, which can be used to derive sufficient conditions and error bounds of signal recovery.
In addition to simple sparsity, a signal x can also possess a structure called block sparsity, where the non-zero elements occur in clusters. It has been shown that using block information in CS can lead to better signal recovery [13–15]. Analogously to simple sparsity, there are block NSP and block RIP conditions that characterize the measurement matrix in order to guarantee a successful recovery through (1) [16]. Nevertheless, they are still computationally hard to verify for a given A. Thus, it is desirable to develop a computable measure like the CMSV for recovery of simple (non-block) sparse signals. Tang and Nehorai [17] proposed a new measure of the measurement matrix based on the CMSV for block sparse signal recovery and derived the mixed ℓ2/ℓ∞ and ℓ2 bounds of recovery errors. In this paper, we extend the q-ratio CMSV in [11] to the q-ratio block CMSV (BCMSV) and generalize the error bounds from the mixed ℓ2/ℓ∞ and ℓ2 norms in [17] to the mixed ℓ2/ℓq with q∈(1,∞] and mixed ℓ2/ℓ1 norms.
This work includes four main contributions to block sparse signal recovery in compressive sensing: (i) we establish a sufficient condition based on the q-ratio block sparsity for the exact recovery from the noise-free block BP (BBP) and develop a convex-concave procedure to solve the corresponding non-convex problem in the condition; (ii) we introduce the q-ratio BCMSV and derive both the mixed ℓ2/ℓq and the mixed ℓ2/ℓ1 norms of the reconstruction errors for stable and robust recovery using the BBP, the block DS (BDS), and the group lasso in terms of the q-ratio BCMSV; (iii) we prove that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero with high probability when the number of measurements is reasonably large; and (iv) we present an algorithm to compute the q-ratio BCMSV for an arbitrary measurement matrix and investigate its properties.
The paper is organized as follows. Section 2 presents our theoretical contributions, including properties of the q-ratio block sparsity and the q-ratio BCMSV, the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm reconstruction errors for the BBP, the BDS and the group lasso, and the probabilistic result of the q-ratio BCMSV for sub-Gaussian random matrices. Numerical experiments and algorithms are described in Section 3. Section 4 is devoted to conclusion and discussion. All the proofs are left in the Appendix.
Theoretical methodology
q-ratio block sparsity and q-ratio BCMSV—definition and property
In this section, we introduce the definitions of the q-ratio block sparsity and the q-ratio BCMSV and present their fundamental properties. A sufficient condition for block sparse signal recovery via the noise-free BBP using the q-ratio block sparsity and an inequality for the q-ratio BCMSV are established.
Throughout the paper, we denote vectors by bold lower case letters or bold numbers and matrices by upper case letters. xT denotes the transpose of a column vector x. For any vector \(\mathbf {x}\in \mathbb {R}^{N}\), we partition it into p blocks, each of length n, so we have \(\mathbf {x}=\left [\mathbf {x}_{1}^{T}, \mathbf {x}_{2}^{T}, \cdots, \mathbf {x}_{p}^{T}\right ]^{T}\) and \(\mathbf {x}_{i}\in \mathbb {R}^{n}\) denotes the ith block of x. We define the mixed ℓ2/ℓ0 norm \(\lVert \mathbf {x}\rVert _{2,0}=\sum _{i=1}^{p} 1\{\mathbf {x}_{i}\neq \mathbf {0}\}\), the mixed ℓ2/ℓ∞ norm ∥x∥2,∞= max1≤i≤p∥xi∥2, and the mixed ℓ2/ℓq norm \(\lVert \mathbf {x}\rVert _{2,q}=\left (\sum _{i=1}^{p} \lVert \mathbf {x}_{i}\rVert _{2}^{q}\right)^{1/q}\) for 0<q<∞. A signal x is block k-sparse if ∥x∥2,0≤k. [p] denotes the set {1,2,⋯,p} and |S| denotes the cardinality of a set S. Furthermore, we use Sc for the complement [p]∖S of a set S in [p]. The block support is defined by bsupp(x):={i∈[p]:∥xi∥2≠0}. If S⊂[p], then xS is the vector coincides with x on the block indices in S and is extended to zero outside S. For any matrix \(A\in \mathbb {R}^{m\times N}, \text {ker} A:=\{\mathbf {x}\in \mathbb {R}^{N}: A\mathbf {x}=\mathbf {0}\}, A^{T}\) is the transpose. 〈·,·〉 is the inner product function.
We first introduce the definition of the q-ratio block sparsity and its properties.
Definition 1
([18]) For any non-zero \(\mathbf {x}\in \mathbb {R}^{N}\) and non-negative q∉{0,1,∞}, the q-ratio block sparsity of x is defined as
$$\begin{array}{*{20}l} k_{q}(\mathbf{x})=\left(\frac{\lVert \mathbf{x}\rVert_{2,1}}{\lVert \mathbf{x}\rVert_{2,q}}\right)^{\frac{q}{q-1}}. \end{array} $$
The cases of q∈{0,1,∞} are evaluated by limits:
$$\begin{array}{*{20}l} k_{0}(\mathbf{x})&=\lim\limits_{q\rightarrow 0} k_{q}(\mathbf{x})=\lVert \mathbf{x}\rVert_{2,0} \end{array} $$
$$\begin{array}{*{20}l} k_{1}(\mathbf{x})&=\lim\limits_{q\rightarrow 1} k_{q}(\mathbf{x})=\exp(H_{1}(\pi(\mathbf{x}))) \end{array} $$
$$\begin{array}{*{20}l} k_{\infty}(\mathbf{x})&=\lim\limits_{q\rightarrow \infty} k_{q}(\mathbf{x})=\frac{\lVert \mathbf{x}\rVert_{2,1}}{\lVert \mathbf{x} \rVert_{2,\infty}}. \end{array} $$
Here, \(\pi (\mathbf {x})\in \mathbb {R}^{p}\) with entries πi(x)=∥xi∥2/∥x∥2,1 and H1 is the ordinary Shannon entropy \(H_{1}(\pi (\mathbf {x}))=-\sum _{i=1}^{p} \pi _{i}(\mathbf {x})\log \pi _{i}(\mathbf {x})\).
This is an extension of the sparsity measures proposed in [19, 20], where estimation and statistical inference via α-stable random projection method were investigated. In fact, this kind of sparsity measure is based on entropy, which measures energy of blocks of x via πi(x). Formally, we can express the q-ratio block sparsity by
$$\begin{array}{*{20}l} k_{q}(\mathbf{x})=\left\{\begin{array}{ll} \exp(H_{q}(\pi(\mathbf{x}))) &\text{if}\ \mathbf{x}\neq \mathbf{0}\\ 0 &\text{if}\ \mathbf{x}=\mathbf{0}, \end{array}\right. \end{array} $$
where Hq is the Rényi entropy of order q∈[0,∞] [21, 22]. When q∉{0,1,∞}, the Rényi entropy is given by \(H_{q}(\pi (\mathbf {x}))=\frac {1}{1-q}\log \left (\sum _{i=1}^{p} \pi _{i}(\mathbf {x})^{q}\right)\), and for the cases of q∈{0,1,∞}, the Rényi entropy is evaluated by limits and results in (3), (4), and (5), respectively.
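The quantities in Definition 1 are simple to compute. The following sketch evaluates k_q(x) for a vector partitioned into consecutive blocks of length n, handling the limit cases (3)-(5) explicitly; the test vector is an arbitrary toy example:

```python
import numpy as np

def q_ratio_block_sparsity(x, n, q):
    """k_q(x) of Definition 1 for x split into consecutive blocks of length n;
    the limit cases q in {0, 1, inf} follow (3)-(5)."""
    b = np.linalg.norm(x.reshape(-1, n), axis=1)   # block norms ||x_i||_2
    if not b.any():
        return 0.0
    l1 = b.sum()
    if q == 0:
        return float(np.count_nonzero(b))          # ||x||_{2,0}
    if q == 1:
        pi = b[b > 0] / l1                         # block energies pi_i(x)
        return float(np.exp(-(pi * np.log(pi)).sum()))
    if np.isinf(q):
        return float(l1 / b.max())
    return float((l1 / (b ** q).sum() ** (1.0 / q)) ** (q / (q - 1.0)))

x = np.zeros(16)
x[0:4] = [3.0, 1.0, 0.0, 2.0]    # first block active
x[9] = 0.5                       # third block weakly active
print([round(q_ratio_block_sparsity(x, 4, q), 3) for q in (0, 1, 2, np.inf)])
```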
Next, we present a sufficient condition for the exact recovery via the noise-free BBP in terms of the q-ratio block sparsity. Recall that when the true signal x is block k-sparse, the sufficient and necessary condition for the exact recovery via the noise-free BBP:
$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}\in\mathbb{R}^{N}}\,\,\lVert \mathbf{z}\rVert_{2,1}\,\,\,\text{s.t.}\,\,\,A\mathbf{z}=A\mathbf{x} \end{array} $$
in terms of the block NSP of order k was given by [16, 23]
$$\begin{array}{*{20}l} \lVert \mathbf{z}_{S}\rVert_{2,1}<\lVert \mathbf{z}_{S^{c}}\rVert_{2,1}, \forall \mathbf{z}\in\text{ker} A\setminus \{\mathbf{0}\}, S\subset [p]\,\text{and}\,|S|\leq k. \end{array} $$
If x is block k-sparse and there exists at least one q∈(1,∞] such that k is strictly less than
$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}\in\text{ker} A\setminus\{\mathbf{0}\}}\,\,2^{\frac{q}{1-q}}k_{q}(\mathbf{z}), \end{array} $$
then the unique solution to problem (7) is the true signal x.
The proof can be found in A.1 in Appendix. This proposition is an extension of Proposition 1 in [11] from simple sparse signals to block sparse signals. In Section 3.1, we adopt a convex-concave procedure algorithm to solve (8) approximately.
Now, we are ready to present the definition of the q-ratio BCMSV, which is developed based on the q-ratio block sparsity.
For any real number s∈[1,p],q∈(1,∞] and matrix \(A\in \mathbb {R}^{m\times N}\), the q-ratio block constrained minimal singular value (BCMSV) of A is defined as
$$\begin{array}{*{20}l} \beta_{q,s}(A)=\min\limits_{\mathbf{z}\neq \mathbf{0},k_{q}(\mathbf{z})\leq s}\,\,\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q}}. \end{array} $$
For a measurement matrix A with unit norm columns, it is obvious that βq,s(A)≤1 since ∥Aei∥2=1, ∥ei∥2,q=1, and kq(ei)=1, where ei is the ith canonical basis vector for \(\mathbb {R}^{N}\). Moreover, when q and A are fixed, βq,s(A) is non-increasing with respect to s. Besides, it is worth noticing that the q-ratio BCMSV also depends on the block size n; we choose not to show this parameter for the sake of simplicity. Another interesting finding is that for any \(\alpha \in \mathbb {R}\), we have βq,s(αA)=|α|βq,s(A). This fact together with Theorem 1 in Section 2.2 implies that in the case of adopting a measurement matrix αA, increasing the measurement energy through |α| will proportionally reduce the mixed ℓ2/ℓq norm of the reconstruction errors. Compared to the block RIP [16], there are three main advantages of using the q-ratio BCMSV:
It is computable (see the algorithm in Section 3.2).
The proof procedures and results of recovery error bounds are more concise (details in Section 2.2).
The q-ratio BCMSV-based recovery bounds are smaller (better) than the block RIC-based bounds as shown in Section 3.3 (see also [11, 17], for another two specific examples).
As for different q, we have the following important inequality, which plays a crucial role in deriving the probabilistic behavior of βq,s(A) via the existing results established in [17].
If 1<q2≤q1≤∞, then for any real number \(1\leq s\leq p^{1/\tilde {q}}\) with \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}\), we have
$$\begin{array}{*{20}l} \beta_{q_{1},s}(A)\geq \beta_{q_{2},s^{\tilde{q}}}(A)\geq s^{-\tilde{q}} \beta_{q_{1}, s^{\tilde{q}}}(A). \end{array} $$
The proof can be found in A.2 in the Appendix. Letting q1=∞ and q2=2 (thus \(\tilde {q}=2\)), we have \(\beta _{\infty,s}(A)\geq \beta _{2,s^{2}}(A)\geq \frac {1}{s^{2}}\beta _{\infty,s^{2}}(A)\). If q1≥q2>1, then \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}=1+\frac {q_{1}-q_{2}}{q_{1}(q_{2}-1)}\geq 1\), so \(\beta _{q_{2},s^{\tilde {q}}}(A)\leq \beta _{q_{2},s}(A)\). Similarly, we have \(\beta _{q_{2},t}(A)\geq \frac {1}{t}\beta _{q_{1},t}(A)\) for any \(t\in [1,p]\), by letting \(t=s^{\tilde {q}}\) in (10). Based on these facts, we cannot obtain monotonicity with respect to q when s and A are fixed. However, since for any \(\mathbf {z}\in \mathbb {R}^{N}\) with p blocks, kq(z)≤p, it holds trivially that βq,p(A) is non-decreasing with respect to q by using the non-increasing property of the mixed ℓ2/ℓq norm.
Recovery error bounds
In this section, we derive the recovery error bounds in terms of the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm via the q-ratio BCMSV of the measurement matrix. We focus on three renowned convex relaxation algorithms for block sparse signal recovery from (1): the BBP, the BDS, and the group lasso.
BBP: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\,\,\lVert \mathbf {z}\rVert _{2,1}\,\,\,\text {s.t.}\,\,\,\lVert \mathbf {y}-A\mathbf {z}\rVert _{2}\leq \zeta \).
BDS: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\,\,\lVert \mathbf {z}\rVert _{2,1}\,\,\,\text {s.t.}\,\,\,\lVert A^{T}(\mathbf {y}-A\mathbf {z})\rVert _{2,\infty }\leq \mu \).
Group lasso: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\frac {1}{2}\lVert \mathbf {y}-A\mathbf {z}\rVert _{2}^{2}+\mu \lVert \mathbf {z}\rVert _{2,1}\).
Here, ζ and μ are parameters used in the constraints to control the noise level. We first present the following main results of recovery error bounds for the case when the true signal x is block k-sparse.
Theorem 1
Suppose x is block k-sparse. For any q∈(1,∞], we have 1) If ∥ε∥2≤ζ, then the solution \(\hat {\mathbf {x}}\) to the BBP obeys
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{2\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}, \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{4k^{1-1/q}\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}. \end{array} $$
2) If the noise ε in the BDS satisfies ∥ATε∥2,∞≤μ, then the solution \(\hat {\mathbf {x}}\) to the BDS obeys
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}\leq \frac{4k^{1-1/q}}{\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)}\mu, \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}\leq \frac{8k^{2-2/q}}{\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)}\mu. \end{array} $$
3) If the noise εin the group lasso satisfies ∥ATε∥2,∞≤κμ for some κ∈(0,1), then the solution \(\hat {\mathbf {x}}\) to the group lasso obeys
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{1+\kappa}{1-\kappa}\cdot\frac{2k^{1-1/q}}{\beta_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu, \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{1+\kappa}{(1-\kappa)^{2}}\cdot\frac{4k^{2-2/q}}{\beta_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu. \end{array} $$
The proof can be found in A.3 in Appendix. Obviously, if \(\beta _{q,2^{\frac {q}{q-1}}k}(A)\neq 0\) in (11) and (12), then the noise free BBP (7) can uniquely recover any block k-sparse signal by letting ζ=0.
The mixed ℓ2/ℓq norm error bounds are generalized from the existing results in [17] (q=2 and ∞) to any 1<q≤∞ and from [11] (simple sparse signal recovery) to block sparse signal recovery. The mixed ℓ2/ℓq norm error bounds depend on the q-ratio BCMSV of the measurement matrix A, which is bounded away from zero for sub-Gaussian random matrix and can be computed approximately by using a specific algorithm, which are discussed later.
As shown in literature, the block RIC-based recovery error bounds for the BBP [16], the BDS [24], and the group lasso [25] are complicated. In contrast, as presented in this theorem, the q-ratio BCMSV-based bounds are much more concise and corresponding derivations are much less complicated, which are given in the Appendix.
Next, we extend Theorem 1 to the case when the signal is block compressible, in the sense that it can be approximated by a block k-sparse signal. Given a block compressible signal x, let the mixed ℓ2/ℓ1 error of the best block k-sparse approximation of x be \(\phi _{k}(\mathbf {x})=\underset {\mathbf {z}\in \mathbb {R}^{N},\lVert \mathbf {z}\rVert _{2,0}=k}{\inf } \lVert \mathbf {x}-\mathbf {z}\rVert _{2,1}\), which measures how close x is to the block k-sparse signal.
Suppose that x is block compressible. For any 1<q≤∞, we have 1) If ∥ε∥2≤ζ, then the solution \(\hat {\mathbf {x}}\) to the BBP obeys
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{2\zeta}{\beta_{q,4^{\frac{q}{q-1}}k}(A)}+k^{1/q-1}\phi_{k}(\mathbf{x}), \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{4k^{1-1/q}\zeta}{\beta_{q,4^{\frac{q}{q-1}}k}(A)}+4\phi_{k}(\mathbf{x}). \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{8k^{1-1/q}}{\beta_{q,4^{\frac{q}{q-1}}k}^{2}(A)}\mu+k^{1/q-1}\phi_{k}(\mathbf{x}), \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{16k^{2-2/q}}{\beta_{q,4^{\frac{q}{q-1}}k}^{2}(A)}\mu+4\phi_{k}(\mathbf{x}). \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{1+\kappa}{1-\kappa}\cdot\frac{4k^{1-1/q}}{\beta_{q,\left(\frac{4}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu+k^{1/q-1}\phi_{k}(\mathbf{x}), \end{array} $$
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{1+\kappa}{(1-\kappa)^{2}}\cdot\frac{8k^{2-2/q}}{\beta_{q,\left(\frac{4}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu+\frac{4}{1-\kappa}\phi_{k}(\mathbf{x}). \end{array} $$
The proof can be found in A.4 in Appendix. All the error bounds consist of two components, one is caused by the measurement error, and another one is due to the sparsity defect.
Comparing to Theorem 1, we need stronger conditions to achieve the valid error bounds. Concisely, we require \(\beta _{q,4^{\frac {q}{q-1}}k}(A)>0, \beta _{q,4^{\frac {q}{q-1}}k}(A)>0\) and \(\beta _{q,\left (\frac {4}{1-\kappa }\right)^{\frac {q}{q-1}}k}(A)>0\) for the BBP, BDS, and group lasso in the block compressible case, while \(\beta _{q,2^{\frac {q}{q-1}}k}(A)>0, \beta _{q,2^{\frac {q}{q-1}}k}(A)>0\) and \(\beta _{q,\left (\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k}(A)>0\) in the block sparse case, respectively.
Random matrices
In this section, we study the properties of the q-ratio BCMSV of sub-Gaussian random matrices. A random vector \(\mathbf {x}\in \mathbb {R}^{N}\) is called isotropic and sub-Gaussian with constant L if it holds for all \(\mathbf {u}\in \mathbb {R}^{N}\) that \(E|\langle \mathbf {x},\mathbf {u}\rangle |^{2}=\lVert \mathbf {u}\rVert _{2}^{2}\) and \(P(|\langle \mathbf {x}, \mathbf {u}\rangle |\geq t)\leq 2\exp \left (-\frac {t^{2}}{L\lVert \mathbf {u}\rVert _{2}^{2}}\right)\). Then, as shown in Theorem 2 of [17], we have the following lemma.
Lemma 1
([17]) Suppose the rows of the scaled measurement matrix \(\sqrt {m}A\) are i.i.d. isotropic and sub-Gaussian random vectors with constant L. Then, there exist constants c1 and c2 such that for any η>0 and m≥1 satisfying
$m\geq c_{1}\frac{L^{2}s(n+\log p)}{\eta^{2}}, $
$$\mathbb{E}|1-\beta_{2,s}(A)|\leq \eta $$
$$\mathbb{P}(\beta_{2,s}(A)\geq 1-\eta)\geq 1-\exp\left(-c_{2}\eta^{2}\frac{m}{L^{4}}\right).$$
Then, as a direct consequence of Proposition 2 (i.e., if 1<q<2, βq,s(A)≥s^{−1}β2,s(A); if \(2\leq q\leq \infty \), \(\beta _{q,s}(A)\geq \beta _{2,s^{\frac {2(q-1)}{q}}}(A)\)) and Lemma 1, we have the following probabilistic statements for βq,s(A).
Under the assumptions and notations of Lemma 1, it holds that
1) When 1<q<2, there exist constants c1 and c2 such that for any η>0 and m≥1 satisfying
$$m\geq c_{1}\frac{L^{2}{s}(n+\log p)}{\eta^{2}}, $$
$$\begin{array}{*{20}l} \mathbb{E}[\beta_{q,s}(A)]&\geq s^{-1}(1-\eta), \end{array} $$
$$\begin{array}{*{20}l} \mathbb{P}\big(\beta_{q,s}(A)&\geq s^{-1}(1-\eta)\big)\geq 1-\exp\left(-c_{2}\eta^2 \frac{m}{L^{4}}\right). \end{array} $$
2) When 2≤q≤∞, there exist constants c1 and c2 such that for any η>0 and m≥1 satisfying
$$m\geq c_{1}\frac{L^{2} s^{\frac{2(q-1)}{q}}(n+\log p)}{\eta^{2}}, $$
$$\begin{array}{*{20}l} \mathbb{E}[\beta_{q,s}(A)]&\geq 1-\eta, \end{array} $$
$$\begin{array}{*{20}l} \mathbb{P}\big(\beta_{q,s}(A)&\geq 1-\eta\big)\geq 1-\exp\left(-c_{2}\eta^2 \frac{m}{L^{4}}\right). \end{array} $$
Theorem 3 shows that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero as long as the number of measurements is large enough. Sub-Gaussian random matrices include the Gaussian and Bernoulli ensembles.
Numerical experiments and results
In this section, we introduce a convex-concave method to solve the sufficient condition (8) so as to achieve the maximal block sparsity k and present an algorithm to compute the q-ratio BCMSV. We also conduct comparisons between the q-ratio BCMSV-based bounds and block RIC-based bounds through the BBP.
Solving the optimization problem (8)
According to Proposition 1, given a q∈(1,∞], we need to solve the optimization problem (8) to obtain the maximal block sparsity k which guarantees that all block k-sparse signals can be uniquely recovered by (7). Solving (8) is equivalent to solving the problem:
$$\begin{array}{*{20}l} \max\limits_{\mathbf{z}\in\mathbb{R}^{N}}\,\lVert \mathbf{z}\rVert_{2,q}\,\,\,\text{s.t.}\ A\mathbf{z}=0\ \text{and}\ \lVert \mathbf{z}\rVert_{2,1}\leq 1. \end{array} $$
However, maximizing the mixed ℓ2/ℓq norm over a polyhedron is a non-convex problem. Here, we adopt the convex-concave procedure (CCP) (see [26] for details) to solve the problem (27) for any q∈(1,∞]. A sketch of the procedure is presented below:
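The following Python sketch reconstructs the CCP iteration from its description above; the initialization, stopping rule, and solver choice are our assumptions and may differ from the authors' implementation. At each step the convex objective ∥z∥_{2,q} is replaced by its linearization g^T z at the current iterate (g a subgradient of the mixed norm), and the resulting convex subproblem is solved with cvxpy:

```python
import numpy as np
import cvxpy as cp

def grad_l2q(z, n, q):
    # (sub)gradient of the mixed l2/lq norm at z, computed blockwise
    blocks = z.reshape(-1, n)
    b = np.linalg.norm(blocks, axis=1)
    scale = (b ** q).sum() ** (1.0 / q - 1.0)
    g = scale * (b[:, None] ** (q - 2)) * blocks
    g[b == 0] = 0.0
    return g.ravel()

def ccp_max_l2q(A, n, q, iters=20, seed=0):
    # CCP for problem (27): maximize ||z||_{2,q} s.t. Az = 0, ||z||_{2,1} <= 1,
    # via successive linearizations of the convex objective.
    N = A.shape[1]
    p = N // n
    z = np.random.default_rng(seed).standard_normal(N)
    for _ in range(iters):
        g = grad_l2q(z, n, q)
        zv = cp.Variable(N)
        l21 = cp.sum(cp.hstack([cp.norm(zv[i*n:(i+1)*n], 2) for i in range(p)]))
        cp.Problem(cp.Maximize(g @ zv), [A @ zv == 0, l21 <= 1]).solve()
        z = zv.value
    return z

A = np.random.default_rng(1).standard_normal((32, 64))
z = ccp_max_l2q(A, n=4, q=2)
blocks = np.linalg.norm(z.reshape(-1, 4), axis=1)
k_q = (blocks.sum() / (blocks ** 2).sum() ** 0.5) ** 2        # k_2(z)
print("sufficient block sparsity bound:", 2.0 ** (2 / (1 - 2)) * k_q)  # Prop. 1
```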
We implement the algorithm to solve (27) under the following settings. Let A be either a Bernoulli or a Gaussian random matrix with N=256, varying m, block size n, and q. Specifically, m=64,128,192, n=1,2,4,8, and q=2,4,16,128, respectively. The results are summarized in Table 1. Note that when n=1, the algorithm is identical to the one in [11]. The main findings are as follows: (i) comparing the results between Bernoulli and Gaussian random matrices under the same settings, there is no substantial difference, so we can focus on the left part of the table, i.e., the Bernoulli random matrix part; (ii) it can be seen that the results are not monotone with respect to q (see the row with n=4, m=192), which verifies the conclusion in Remark 3; (iii) when m is the only variable, it is easy to notice that the maximal block sparsity increases as m increases; and (iv) conversely, when n is the only variable, the maximal block sparsity decreases as n increases, which is in line with the main result in ([27], Theorem 3.1).
Table 1 Maximal sparsity levels from the CCP algorithm for both Bernoulli and Gaussian random matrices with N=256 and different combinations of n,m, and q
Computing the q-ratio BCMSVs
Computing the q-ratio BCMSV (9) is equivalent to solve
$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}\in\mathbb{R}^{N}}\,\lVert A\mathbf{z}\rVert_{2}\,\,\,\text{s.t.}\,\,\,\lVert \mathbf{z}\rVert_{2,1}\leq s^{\frac{q-1}{q}}, \lVert \mathbf{z}\rVert_{2,q}=1. \end{array} $$
Since the constraint set is not convex, this is a non-convex optimization problem. In order to solve (28), we use the Matlab function fmincon as in [11] and define z=z+−z− with z+= max(z,0) and z−= max(−z,0). Consequently, (28) can be reformulated as:
$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}^{+},\mathbf{z}^{-}\in\mathbb{R}^{N}}&\,(\mathbf{z}^{+}-\mathbf{z}^{-})^{T} A^{T} A(\mathbf{z}^{+}-\mathbf{z}^{-}) \\ &\text{s.t.}\,\,\,\lVert \mathbf{z}^{+}-\mathbf{z}^{-}\rVert_{2,1}-s^{\frac{q-1}{q}}\leq 0, \\ &\lVert \mathbf{z}^{+}-\mathbf{z}^{-}\rVert_{2,q}=1, \\ &\mathbf{z}^{+}\geq 0, \mathbf{z}^{-}\geq 0. \end{array} $$
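An analogous computation can be sketched in Python with scipy.optimize.minimize (SLSQP standing in for fmincon), working directly on the form (28) with multiple random restarts; the restart count and solver options below are illustrative choices, not those of the original experiments:

```python
import numpy as np
from scipy.optimize import minimize

def bcmsv(A, n, q, s, restarts=40, seed=0):
    # Minimize ||Az||_2 s.t. ||z||_{2,1} <= s^{(q-1)/q} and ||z||_{2,q} = 1,
    # keeping the smallest value over random restarts (problem (28) is
    # non-convex, so each run may only find a local minimum).
    N = A.shape[1]
    p = N // n
    rng = np.random.default_rng(seed)

    def block_norms(z):
        return np.linalg.norm(z.reshape(p, n), axis=1)

    obj = lambda z: np.linalg.norm(A @ z)
    cons = [{'type': 'ineq',
             'fun': lambda z: s ** ((q - 1) / q) - block_norms(z).sum()},
            {'type': 'eq',
             'fun': lambda z: np.linalg.norm(block_norms(z), q) - 1.0}]
    best = np.inf
    for _ in range(restarts):
        z0 = rng.standard_normal(N)
        z0 /= np.linalg.norm(block_norms(z0), q)   # start on ||z||_{2,q} = 1
        res = minimize(obj, z0, method='SLSQP', constraints=cons,
                       options={'maxiter': 500, 'ftol': 1e-10})
        if res.success and res.fun < best:
            best = res.fun
    return best

A = np.random.default_rng(1).standard_normal((40, 64)) / np.sqrt(40)
print(bcmsv(A, n=4, q=2, s=4, restarts=5))   # few restarts, for speed only
```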
Due to the existence of local minima, we perform an experiment to decide a reasonable number of experiments needed to achieve the "global" minimum, shown in Fig. 1. In the experiment, we calculate the q-ratio BCMSV of a fixed unit-norm-column Bernoulli random matrix of size 40×64, with n=s=4 and varying q=2,4,8, respectively. Fifty experiments are carried out for each q. The figure shows that after about 30 experiments the estimate of βq,s, \(\hat {\beta }_{q,s}\), becomes convergent, so in the following experiments we repeat the algorithm 40 times and choose the smallest value \(\hat {\beta }_{q,s}\) as the "global" minimum. We also tested varying m, s, and n, respectively; all results indicate that 40 is a reasonable number to choose (not shown).
q-ratio BCMSVs calculated for a Bernoulli random matrix of size 40×64 with n=4,s=4, and q=2,4,8 as a function of number of experiments
Next, we illustrate the properties of βq,s, which have been pointed out in Remarks 2 and 3, through experiments. We set N=64 with three different block sizes n=1,4,8 (i.e., number of blocks p=64,16,8), three different m=40,50,60, three different q=2,4,8, and three different s=2,4,8. Bernoulli random matrices with unit norm columns are used. Results are listed in Table 2. They are in line with the theoretical results:
βq,s increases as m increases for all cases given that other parameters are fixed.
Table 2 The q-ratio BCMSVs with varying m,n,p,q, and s
βq,s decreases as s increases for most of cases given that other parameters are fixed. There are exceptions when m=40,n=8 with s=4, and s=8 under q=4,8, respectively. However, the difference is about 0.0002, which is possibly caused by numerical approximation.
Monotonicity of βq,s does not hold with respect to q even given that other parameters are fixed.
Comparing error bounds
Here, we compare the q-ratio BCMSV-based bounds against the block RIC-based bounds from the BBP under different settings. The block RIC-based bound is
$$\begin{array}{*{20}l} \lVert \hat{x}-x\rVert_{2}\leq \frac{4\sqrt{1+\delta_{2k}(A)}}{1-(1+\sqrt{2})\delta_{2k}(A)}\zeta, \end{array} $$
if A satisfies the block RIP of order 2k, i.e., the block RIC \(\delta _{2k}(A)<\sqrt {2}-1\) [14, 17]. By using Hölder's inequality, one can obtain the mixed ℓ2/ℓq norm bound
$$\begin{array}{*{20}l} \lVert \hat{x}-x\rVert_{2,q}\leq \frac{4\sqrt{1+\delta_{2k}(A)}}{1-(1+\sqrt{2})\delta_{2k}(A)}k^{1/q-1/2}\zeta, \end{array} $$
for 0<q≤2.
We compare the two bounds (31) and (12). Without loss of generality, let ζ=1. δ2k(A) is approximated using Monte Carlo simulations. Specifically, we randomly choose 1000 sub-matrices of \(A\in \mathbb {R}^{m\times N}\) of size m×2nk to compute δ2k(A) using the maximum of \(\max \left (\sigma _{\text {max}}^{2}-1,1-\sigma _{\text {min}}^{2}\right)\) among all sampled sub-matrices. It turns out that this approximated block RIC is always smaller than or equal to the exact block RIC; thus, the error bounds based on the exact block RIC are always larger than those based on the approximated block RIC. Therefore, it would be enough to show that the q-ratio BCMSV gives a sharper error bound than the approximated block RIC.
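A sketch of this Monte Carlo approximation: sample random sets of 2k blocks of n columns each, and keep the largest value of max(σ_max² − 1, 1 − σ_min²) over the sampled sub-matrices (the matrix and sizes below are illustrative):

```python
import numpy as np

def approx_block_ric(A, n, k, trials=1000, seed=0):
    # Sample 'trials' random choices of 2k blocks (n columns each) and record
    # the largest max(sigma_max^2 - 1, 1 - sigma_min^2); this lower-bounds the
    # exact block RIC delta_2k(A), as noted in the text.
    p = A.shape[1] // n
    rng = np.random.default_rng(seed)
    delta = 0.0
    for _ in range(trials):
        blocks = rng.choice(p, size=2 * k, replace=False)
        cols = np.concatenate([np.arange(b * n, (b + 1) * n) for b in blocks])
        sv = np.linalg.svd(A[:, cols], compute_uv=False)
        delta = max(delta, sv[0] ** 2 - 1.0, 1.0 - sv[-1] ** 2)
    return delta

m, N, n, k = 40, 64, 2, 2                     # illustrative sizes
A = np.random.default_rng(2).standard_normal((m, N)) / np.sqrt(m)
print("approx delta_2k(A) =", approx_block_ric(A, n, k))
```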
We use unit norm columns sub-matrices of a row-randomly-permuted Hadamard matrix (an orthogonal Bernoulli matrix) with N=64, k=1,2,4, n=1,2, q=1.8, and a variety of m≤64 to approximate the q-ratio BCMSV and the block RIC. Besides the Hadamard matrix, we also test Bernoulli random matrices and Gaussian random matrices with different configurations, which return very few qualified block RICs. In the simulation results of [17], the authors showed that under all considered cases for Gaussian random matrices \(\delta _{2k}(A)>\sqrt {2}-1\), which is consistent with our finding. Figure 2 shows that the q-ratio BCMSV-based bounds are smaller than those based on the approximated block RIC. Note that when m approaches N, βq,s(A)→1 and δ2k(A)→0; as a result, the q-ratio BCMSV-based bounds are smaller than 2.2, while the block RIC-based bounds are larger than or equal to 4.
Fig. 2 The q-ratio BCMSV-based bounds and the block RIC-based bounds for Hadamard sub-matrices with N=64, k=1,2,4, n=1,2, and q=1.8
Conclusion and discussion
In this study, we introduced the q-ratio block sparsity measure and the q-ratio BCMSV. Theoretically, through the q-ratio block sparsity measure and the q-ratio BCMSV, we (i) established the sufficient condition for unique noise-free BBP recovery; (ii) derived both the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm bounds of recovery errors for the BBP, the BDS, and the group lasso estimator; and (iii) proved that the q-ratio BCMSV is bounded away from zero if the number of measurements is relatively large for sub-Gaussian random matrices. Afterwards, we used numerical experiments via two algorithms to illustrate the theoretical results. In addition, we demonstrated through simulations that the q-ratio BCMSV-based error bounds are much tighter than those based on the block RIP.
There are still some issues left for future work. For example, analogous to the case of the q-ratio CMSV, the geometrical properties of the q-ratio BCMSV can be investigated to derive sufficient conditions and error bounds for block sparse signal recovery.
Appendix - Proofs
The main steps of the proofs follow those in [11], with extensions to block sparse signals. We list all the details here for the sake of completeness.
(Proof of Proposition 1) Suppose there exist z∈kerA∖{0} and S with |S|≤k such that \(\lVert \mathbf {z}_{S}\rVert _{2,1}\geq \lVert \mathbf {z}_{S^{c}}\rVert _{2,1}\); then we have
$$\begin{array}{*{20}l} &\lVert \mathbf{z}\rVert_{2,1}=\lVert \mathbf{z}_{S}\rVert_{2,1}+\lVert \mathbf{z}_{S^{c}}\rVert_{2,1}\leq 2\lVert \mathbf{z}_{S}\rVert_{2,1} \\&\leq 2k^{1-1/q}\lVert \mathbf{z}_{S}\rVert_{2,q} \leq 2k^{1-1/q}\lVert \mathbf{z}\rVert_{2,q}, ~\forall q\in (1, \infty], \end{array} $$
which is identical to \(k\geq 2^{\frac {q}{1-q}} k_{q}(\mathbf {z}),\quad \forall q\in (1, \infty ]\).
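Written out, raising both sides of the last inequality to the power \(\frac{q}{q-1}\) gives

$$k_{q}(\mathbf{z})=\left(\frac{\lVert \mathbf{z}\rVert_{2,1}}{\lVert \mathbf{z}\rVert_{2,q}}\right)^{\frac{q}{q-1}}\leq \left(2k^{1-1/q}\right)^{\frac{q}{q-1}}=2^{\frac{q}{q-1}}k \quad\Longleftrightarrow\quad k\geq 2^{\frac{q}{1-q}}k_{q}(\mathbf{z}). $$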
Conversely, suppose there exists q∈(1,∞] such that \(k<\min \limits _{\mathbf {z}\in \text {ker} A\setminus \{\mathbf {0}\}}\,\,2^{\frac {q}{1-q}}k_{q}(\mathbf {z})\); then \(\lVert \mathbf {z}_{S}\rVert _{2,1}<\lVert \mathbf {z}_{S^{c}}\rVert _{2,1}\) holds for all z∈kerA∖{0} and |S|≤k, which implies that the block null space property of order k is fulfilled; thus, any block k-sparse signal x can be recovered via (7). □
(Proof of Proposition 2.)
(i) Prove the left hand side of (10):
For any \(\mathbf {z}\in \mathbb {R}^{N}\setminus \{\mathbf {0}\}\) and 1<q2≤q1≤∞, suppose \(k_{q_{1}}(\mathbf {z})\leq s\), then we can get \(\left (\frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{{2,q_{1}}}}\right)^{\frac {q_{1}}{q_{1}-1}}\leq s\Rightarrow \lVert \mathbf {z}\rVert _{2,1}\leq s^{\frac {q_{1}-1}{q_{1}}}\lVert \mathbf {z}\rVert _{2,q_{1}}\leq s^{\frac {q_{1}-1}{q_{1}}}\lVert \mathbf {z}\rVert _{2,q_{2}}\). Since \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}\) and \( \frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,q_{2}}}\leq s^{\frac {q_{1}-1}{q_{1}}}\), we have
$$k_{q_{2}}(\mathbf{z})=\left(\frac{\lVert \mathbf{z}\rVert_{2,1}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}\right)^{\frac{q_{2}}{q_{2}-1}}\leq s^{\frac{q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}}=s^{\tilde{q}}, $$
from which we can infer
$$\{\mathbf{z}: k_{q_{1}}(\mathbf{z})\leq s\}\subseteq \{\mathbf{z}: k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}\}. $$
Therefore, we can get the left hand side of (10) through
$$\begin{array}{*{20}l} \beta_{q_{1},s}(A)&=\min\limits_{\mathbf{z}\neq \mathbf{0},k_{q_{1}}(\mathbf{z})\leq s}\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{1}}}\geq \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}}\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{1}}} \\ &= \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}\cdot\frac{\lVert \mathbf{z}\rVert_{2,q_{2}}}{\lVert \mathbf{z}\rVert_{2,q_{1}}} \\ &\geq \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}=\beta_{q_{2},s^{\tilde{q}}}(A). \end{array} $$
(ii) Verify the right hand side of (10):
Suppose \(k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\), for any \(\mathbf {z}\in \mathbb {R}^{N}\setminus \{\mathbf {0}\}\); by using the non-increasing property of the q-ratio block sparsity with respect to q and q2≤q1≤∞, we have the following two inequalities: \(\frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,\infty }}=k_{\infty }(\mathbf {z})\leq k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\) and \(k_{q_{1}}(\mathbf {z})\leq k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\). Since 1<q2≤q1≤∞, the former inequality implies that \(\frac {\lVert \mathbf {z}\rVert _{2,q_{2}}}{\lVert \mathbf {z}\rVert _{2,q_{1}}}\leq \frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,\infty }}\leq s^{\tilde {q}}\Rightarrow \frac {\lVert \mathbf {z}\rVert _{2,q_{1}}}{\lVert \mathbf {z}\rVert _{2,q_{2}}}\geq s^{-\tilde {q}}\). The latter inequality implies that
$$\{\mathbf{z}: k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}\}\subseteq \{\mathbf{z}: k_{q_{1}}(\mathbf{z})\leq s^{\tilde{q}} \}. $$
Therefore, we can obtain the right hand side of (10) through
$$\begin{array}{*{20}l} \beta_{q_{2},s^{\tilde{q}}}(A)&=\min\limits_{\mathbf{z}\neq \mathbf{0},k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}}\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}} \\ &\geq \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{1}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}} \\ &= \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{1}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{1}}}\cdot\frac{\lVert \mathbf{z}\rVert_{2,q_{1}}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}\\ &\geq \beta_{q_{1}, s^{\tilde{q}}}(A)\cdot s^{-\tilde{q}}. \end{array} $$
(Proof of Theorem 1.) The proof follows similar arguments to those in [6, 10] and can be divided into two main steps.
Step 1: We first derive upper bounds on the q-ratio block sparsity of the residual \(\mathbf {h}=\hat {\mathbf {x}}-\mathbf {x}\) for all algorithms. As x is block k-sparse, we assume that bsupp(x)=S and |S|≤k.
For the BBP and the BDS, since \(\lVert \hat {\mathbf {x}}\rVert _{2,1}=\lVert \mathbf {x}+\mathbf {h}\rVert _{2,1}\) is the minimum among all z satisfying the constraints of BBP and BDS (including the true signal x), we have
$$\begin{array}{*{20}l} {}\lVert \mathbf{x}\rVert_{2,1}&\!\geq\! \lVert \hat{\mathbf{x}}\rVert_{2,1}\,=\,\lVert \mathbf{x}\,+\,\mathbf{h}\rVert_{2,1}\,=\,\lVert \mathbf{x}_{S}\,+\,\mathbf{h}_{S}\rVert_{2,1}\,+\,\lVert \mathbf{x}_{S^{c}}\,+\,\mathbf{h}_{S^{c}}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}_{S}\rVert_{2,1}-\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \\ &=\lVert \mathbf{x}\rVert_{2,1}-\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}, \end{array} $$
which can be simplified to \(\lVert \mathbf {h}_{S^{c}}\rVert _{2,1}\leq \lVert \mathbf {h}_{S}\rVert _{2,1}\). Thereby, we can obtain the following inequality:
$$\begin{array}{*{20}l} &\lVert \mathbf{h}\rVert_{2,1}=\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \leq 2\lVert \mathbf{h}_{S}\rVert_{2,1}\\&\leq 2k^{1-1/q}\lVert \mathbf{h}_{S}\rVert_{2,q}\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}, \quad \forall q\in (1,\infty], \end{array} $$
which is equivalent to
$$k_{q}(\mathbf{h})=\left(\frac{\lVert \mathbf{h}\rVert_{2,1}}{\lVert \mathbf{h}\rVert_{2,q}}\right)^{\frac{q}{q-1}}\leq 2^{\frac{q}{q-1}} k.$$
For the group lasso, since the noise ε satisfies ∥ATε∥2,∞≤κμ for κ∈(0,1) and \(\hat {\mathbf {x}}\) is a solution of the group lasso, we have
$$\frac{1}{2}\lVert A\hat{\mathbf{x}}-\mathbf{y}\rVert_{2}^{2}+\mu\lVert \hat{\mathbf{x}}\rVert_{2,1}\leq \frac{1}{2}\lVert A\mathbf{x}-\mathbf{y}\rVert_{2}^{2}+\mu\lVert \mathbf{x}\rVert_{2,1}. $$
Substituting y by Ax+ε leads to
$$\begin{array}{*{20}l} \mu\lVert\hat{\mathbf{x}}\rVert_{2,1}&\leq \frac{1}{2}\lVert \boldsymbol{\epsilon}\rVert_{2}^{2}-\frac{1}{2}\lVert A(\hat{\mathbf{x}}-\mathbf{x})-\boldsymbol{\epsilon}\rVert_{2}^{2}+\mu\lVert \mathbf{x}\rVert_{2,1}\\ &=\frac{1}{2}\lVert \boldsymbol{\epsilon}\rVert_{2}^{2}-\frac{1}{2}\lVert A(\hat{\mathbf{x}}-\mathbf{x})\rVert_{2}^{2}+\langle A(\hat{\mathbf{x}}-\mathbf{x}),\boldsymbol{\epsilon}\rangle\\&-\frac{1}{2}\lVert \boldsymbol{\epsilon}\rVert_{2}^{2}+\mu\lVert \mathbf{x}\rVert_{2,1}\\ &\leq \langle A(\hat{\mathbf{x}}-\mathbf{x}),\boldsymbol{\epsilon}\rangle+\mu\lVert \mathbf{x}\rVert_{2,1} \\ &=\langle \hat{\mathbf{x}}-\mathbf{x}, A^{T}\boldsymbol{\epsilon}\rangle+\mu\lVert \mathbf{x}\rVert_{2,1} \\ &\leq \lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}\lVert A^{T} \boldsymbol{\epsilon}\rVert_{2,\infty}+\mu\lVert \mathbf{x}\rVert_{2,1} \\ &\leq \kappa \mu\lVert \mathbf{h}\rVert_{2,1}+\mu\lVert \mathbf{x}\rVert_{2,1}. \end{array} $$
The second-to-last inequality follows by applying the Cauchy-Schwarz inequality block-wise, and the last inequality can be rewritten as
$$\begin{array}{*{20}l} \lVert \hat{\mathbf{x}}\rVert_{2,1}\leq \kappa\lVert \mathbf{h}\rVert_{2,1}+\lVert \mathbf{x}\rVert_{2,1}. \end{array} $$
Therefore, it holds that
$$\begin{array}{*{20}l} \lVert \mathbf{x}\rVert_{2,1}&\geq \lVert \hat{\mathbf{x}}\rVert_{2,1}-\kappa \lVert \mathbf{h}\rVert_{2,1}\\ &=\lVert \mathbf{x}+\mathbf{h}_{S^{c}}+\mathbf{h}_{S}\rVert_{2,1}-\kappa\lVert \mathbf{h}_{S^{c}}+\mathbf{h}_{S}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}+\mathbf{h}_{S^{c}}\rVert_{2,1}\,-\,\lVert \mathbf{h}_{S}\rVert_{2,1}\,-\,\kappa(\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}+\lVert \mathbf{h}_{S}\rVert_{2,1})\\ &=\lVert \mathbf{x}\rVert_{2,1}+(1-\kappa)\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}-(1+\kappa)\lVert \mathbf{h}_{S}\rVert_{2,1}, \end{array} $$
which can be simplified to
$$\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\leq \frac{1+\kappa}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}. $$
Thus, we can obtain
$$\begin{array}{*{20}l} \lVert \mathbf{h}\rVert_{2,1}&=\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}+\lVert \mathbf{h}_{S}\rVert_{2,1}\\ &\leq \frac{2}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}\\ &\leq \frac{2}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}_{S}\rVert_{2,q} \\ &\leq \frac{2}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}, \end{array} $$
which can be reformulated as
$$k_{q}(\mathbf{h})=\left(\frac{\lVert \mathbf{h}\rVert_{2,1}}{\lVert \mathbf{h}\rVert_{2,q}}\right)^{\frac{q}{q-1}}\leq \left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k. $$
Step 2: Obtain an upper bound on ∥Ah∥2 and then bound the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm of the recovery error vector h via the q-ratio BCMSV for each algorithm.
(i) For the BBP, since both x and \(\hat {\mathbf {x}}\) satisfy the constraint ∥y−Az∥2≤ζ, by using the triangle inequality, we can get
$$\begin{array}{*{20}l} \lVert A\mathbf{h}\rVert_{2}\,=\,\lVert A(\hat{\mathbf{x}}-\mathbf{x})\rVert_{2}&\!\leq\! \lVert A\hat{\mathbf{x}}-\mathbf{y}\rVert_{2}+\lVert \mathbf{y}-A\mathbf{x}\rVert_{2}\leq 2\zeta. \end{array} $$
Following from the definition of the q-ratio BCMSV and \(k_{q}(\mathbf {h})\leq 2^{\frac {q}{q-1}}k\), we have
$${{}\begin{aligned} \beta_{q,2^{\frac{q}{q-1}}k}(A)\lVert \mathbf{h}\rVert_{2,q}\!\leq\! \lVert A\mathbf{h}\rVert_{2}\!\leq\! 2\zeta\Rightarrow \lVert \mathbf{h}\rVert_{2,q}\leq \frac{2\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}. \end{aligned}} $$
Furthermore, we can obtain \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {4k^{1-1/q}\zeta }{\beta _{q,2^{\frac {q}{q-1}}k}(A)}\) by using the property ∥h∥2,1≤2k1−1/q∥h∥2,q.
(ii) Similarly for the BDS, since both x and \(\hat {\mathbf {x}}\) satisfy the constraint ∥AT(y−Az)∥2,∞≤μ, we have
$$\begin{array}{*{20}l} {}\lVert A^{T} A\mathbf{h}\rVert_{2,\infty}\!\leq\! \lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty}\,+\,\lVert A^{T}(\mathbf{y}\!-A\mathbf{x})\rVert_{2,\infty} \leq 2\mu. \end{array} $$
By applying the Cauchy-Schwarz inequality again as in Step 1, we obtain
$$\begin{array}{*{20}l} {}&\lVert A\mathbf{h}\rVert_{2}^{2}=\langle A\mathbf{h},A\mathbf{h}\rangle=\langle \mathbf{h},A^{T}A\mathbf{h}\rangle\\&\leq \lVert \mathbf{h}\rVert_{2,1}\lVert A^{T}A\mathbf{h}\rVert_{2,\infty}\leq 2\mu\lVert \mathbf{h}\rVert_{2,1}. \end{array} $$
At last, with the definition of the q-ratio BCMSV, \(k_{q}(\mathbf {h})\leq 2^{\frac {q}{q-1}}k\) and ∥h∥2,1≤2k1−1/q∥h∥2,q, we get the upper bounds of the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm for h:
$$\begin{array}{*{20}l} &\!\!\!\!\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)\lVert \mathbf{h}\rVert_{2,q}^{2}\!\leq\! \lVert A\mathbf{h}\rVert_{2}^{2}\!\leq\! 2\mu\lVert \mathbf{h}\rVert_{2,1}\!\leq\! 4\mu k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q} \\ &\Rightarrow \lVert \mathbf{h}\rVert_{2,q}\leq \frac{4k^{1-1/q}}{\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)}\mu \end{array} $$
and \(\lVert \mathbf {h}\rVert _{2,1}\leq 2k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\leq \frac {8k^{2-2/q}}{\beta _{q,2^{\frac {q}{q-1}}k}^{2}(A)}\mu \).
(iii) For the group lasso, with ∥ATε∥2,∞≤κμ, we have
$$\begin{array}{*{20}l} \lVert A^{T}A\mathbf{h}\rVert_{2,\infty}&\leq \lVert A^{T}(\mathbf{y}-A\mathbf{x})\rVert_{2,\infty}+\lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty} \\ &\leq \lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty} +\lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty} \\ &\leq \kappa\mu+\lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty}. \end{array} $$
Moreover, since \(\hat {\mathbf {x}}\) is the solution of the group lasso, the optimality condition yields that
$$A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\in\mu\partial \lVert \hat{\mathbf{x}}\rVert_{2,1}, $$
where the sub-gradient in \(\partial \lVert \hat {\mathbf {x}}\rVert _{2,1}\) for the ith block is \(\hat {\mathbf {x}}_{i}/\lVert \hat {\mathbf {x}}_{i}\rVert _{2}\) if \(\hat {\mathbf {x}}_{i}\neq 0\) and is some vector g satisfying ∥g∥2≤1 if \(\hat {\mathbf {x}}_{i}= 0\) (which follows from the definition of the sub-gradient). Thus, we have \(\lVert A^{T}(\mathbf {y}-A\hat {\mathbf {x}})\rVert _{2,\infty }\leq \mu \), which leads to
$$\lVert A^{T}A\mathbf{h}\rVert_{2,\infty}\leq (\kappa+1)\mu. $$
Following the inequality (34), we get
$$\begin{array}{*{20}l} \lVert A\mathbf{h}\rVert_{2}^{2}\leq (\kappa+1)\mu\lVert \mathbf{h}\rVert_{2,1}. \end{array} $$
As a result, since \(k_{q}(\mathbf {h})\leq \left (\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k\) and \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {2}{1-\kappa }k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\), we can obtain
$$\begin{array}{*{20}l} \beta_{q,(\frac{2}{1-\kappa})^{\frac{q}{q-1}}k}^{2}(A)\lVert \mathbf{h}\rVert_{2,q}^{2}&\leq \lVert A\mathbf{h}\rVert_{2}^{2}\leq (\kappa+1)\mu\lVert \mathbf{h}\rVert_{2,1} \\ &\leq \mu\frac{2(\kappa+1)}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}, \end{array} $$
$$\lVert \mathbf{h}\rVert_{2,q}\leq \frac{k^{1-1/q}}{\beta_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\cdot \frac{2(\kappa+1)}{1-\kappa}\mu $$
and \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {1+\kappa }{(1-\kappa)^{2}}\cdot \frac {4k^{2-2/q}}{\beta _{q,(\frac {2}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\mu \). □
Since the infimum of ϕk(x) is achieved by a block k-sparse signal z whose non-zero blocks equal the largest k blocks of x, indexed by S, we have \(\phi _{k}(\mathbf {x})=\lVert \mathbf {x}_{S^{c}}\rVert _{2,1}\). Let \(\mathbf {h}=\hat {\mathbf {x}}-\mathbf {x}\). As in the proof of Theorem 1, the derivation has two steps.
Step 1: For all algorithms, bound ∥h∥2,1 via ∥h∥2,q and ϕk(x).
First for the BBP and the BDS, since \(\lVert \hat {\mathbf {x}}\rVert _{2,1}=\lVert \mathbf {x}+\mathbf {h}\rVert _{2,1}\) is the minimum among all z satisfying the constraints of the BBP and the BDS, we have
$$\begin{array}{*{20}l} \lVert \mathbf{x}_{S}\rVert_{2,1}+\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}&=\lVert \mathbf{x}\rVert_{2,1}\geq \lVert \hat{\mathbf{x}}\rVert_{2,1}=\lVert \mathbf{x}+\mathbf{h}\rVert_{2,1} \\ &=\lVert \mathbf{x}_{S}+\mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{x}_{S^{c}}+\mathbf{h}_{S^{c}}\rVert_{2,1}\\ &\geq \lVert \mathbf{x}_{S}\rVert_{2,1}-\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}-\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}, \end{array} $$

which implies
$$\begin{array}{*{20}l} \lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\leq \lVert \mathbf{h}_{S}\rVert_{2,1}+2\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}=\lVert \mathbf{h}_{S}\rVert_{2,1}+2\phi_{k}(\mathbf{x}). \end{array} $$
In consequence, we can get
$$\begin{array}{*{20}l} \lVert \mathbf{h}\rVert_{2,1}&=\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\\ &\leq 2\lVert \mathbf{h}_{S}\rVert_{2,1}+2\phi_{k}(\mathbf{x}) \end{array} $$
$$\begin{array}{*{20}l} &\leq 2k^{1-1/q}\lVert \mathbf{h}_{S}\rVert_{2,q}+2\phi_{k}(\mathbf{x}) \\ &\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}+2\phi_{k}(\mathbf{x}). \end{array} $$
As for the group lasso, by using (32), we can obtain
$${\begin{aligned} \lVert \mathbf{x}_{S}\rVert_{2,1}+\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}&=\lVert \mathbf{x}\rVert_{2,1} \geq \lVert \hat{\mathbf{x}}\rVert_{2,1}-\kappa\lVert \mathbf{h}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}_{S}+\mathbf{x}_{S^{c}}+\mathbf{h}_{S}+\mathbf{h}_{S^{c}}\rVert_{2,1}\\&-\kappa\lVert \mathbf{h}_{S}+\mathbf{h}_{S^{c}}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}_{S}+\mathbf{h}_{S^{c}}\rVert_{2,1}-\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}\\&-\lVert \mathbf{h}_{S}\rVert_{2,1}-\kappa\lVert \mathbf{h}_{S}\rVert_{2,1}-\kappa\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \\ &=\lVert \mathbf{x}_{S}\rVert_{2,1}+(1-\kappa)\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\\&-\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}-(1+\kappa)\lVert \mathbf{h}_{S}\rVert_{2,1}, \end{aligned}} $$
which implies that
$$\begin{array}{*{20}l} \lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\leq \frac{1+\kappa}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}+\frac{2}{1-\kappa}\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}. \end{array} $$
Therefore, we have
$$\begin{array}{*{20}l} \lVert \mathbf{h}\rVert_{2,1}&\leq \lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \\ &\leq \frac{2}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}+\frac{2}{1-\kappa}\lVert \mathbf{x}_{S^{c}}\rVert_{2,1} \\ &\leq \frac{2}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}+\frac{2}{1-\kappa}\phi_{k}(\mathbf{x}). \end{array} $$
Step 2: For each algorithm, show that the q-ratio block sparsity of h is bounded from below in terms of ∥h∥2,q whenever ∥h∥2,q is larger than the part of the recovery bound caused by the measurement error.
(i) For the BBP, we assume that h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {2\zeta }{\beta _{q,4^{\frac {q}{q-1}}k}(A)}\); otherwise, (17) holds trivially. Since ∥Ah∥2≤2ζ (see (33)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {\lVert A\mathbf {h}\rVert _{2}}{\beta _{q,4^{\frac {q}{q-1}}k}(A)}\). Then, it holds that
$$\frac{\lVert A\mathbf{h}\rVert_{2}}{\lVert \mathbf{h}\rVert_{2,q}}<{\beta_{q,4^{\frac{q}{q-1}}k}(A)}=\min\limits_{\mathbf{h}\neq \mathbf{0}, k_{q}(\mathbf{h})\leq 4^{\frac{q}{q-1}}k}\frac{\lVert A\mathbf{h}\rVert_{2}}{\lVert \mathbf{h}\rVert_{2,q}}, $$
which implies that
$$\begin{array}{*{20}l} k_{q}(\mathbf{h})>4^{\frac{q}{q-1}}k\Rightarrow \lVert \mathbf{h}\rVert_{2,1}>4k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}. \end{array} $$
Combining (39), we have ∥h∥2,q<k1/q−1ϕk(x), which completes the proof for (17). The error bound of the mixed ℓ2/ℓ1 norm (18) follows immediately from (17) and (39).
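For clarity, the combination in the last step reads

$$4k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}<\lVert \mathbf{h}\rVert_{2,1}\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}+2\phi_{k}(\mathbf{x})\Rightarrow \lVert \mathbf{h}\rVert_{2,q}<k^{1/q-1}\phi_{k}(\mathbf{x}). $$

The same pattern is used for the BDS and the group lasso below.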
(ii) As for the BDS, similarly we assume h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {8k^{1-1/q}}{\beta _{q,4^{\frac {q}{q-1}}k}^{2}(A)}\mu \); otherwise, (19) holds trivially. As \(\lVert A\mathbf {h}\rVert _{2}^{2}\leq 2\mu \lVert \mathbf {h}\rVert _{2,1}\) (see (34)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {4k^{1-1/q}}{\beta _{q,4^{\frac {q}{q-1}}k}^{2}(A)}\cdot \frac {\lVert A\mathbf {h}\rVert _{2}^{2}}{\lVert \mathbf {h}\rVert _{2,1}}\). Then, we can get
$${{}\begin{aligned} \beta_{q,4^{\frac{q}{q-1}}k}^{2}(A)\,=\,\min\limits_{\mathbf{h}\neq \mathbf{0}, k_{q}(\mathbf{h})\leq 4^{\frac{q}{q-1}}k}\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}} \!>\!\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}}\left(\frac{4^{\frac{q}{q-1}}k}{k_{q}(\mathbf{h})}\right)^{1-1/q}, \end{aligned}} $$
which implies that \(k_{q}(\mathbf {h})>4^{\frac {q}{q-1}}k\) and hence \(\lVert \mathbf {h}\rVert _{2,1}>4k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\). Combining (39), we have ∥h∥2,q<k1/q−1ϕk(x), which completes the proof for (19). (20) holds as a result of (19) and (39).
(iii) For the group lasso, we assume that h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {1+\kappa }{1-\kappa }\cdot \frac {4k^{1-1/q}}{\beta _{q,(\frac {4}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\mu \); otherwise, (21) holds trivially. Since in this case \(\lVert A\mathbf {h}\rVert _{2}^{2}\leq (1+\kappa)\mu \lVert \mathbf {h}\rVert _{2,1}\) (see (35)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {4k^{1-1/q}}{(1-\kappa)\beta _{q,(\frac {4}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\cdot \frac {\lVert A\mathbf {h}\rVert _{2}^{2}}{\lVert \mathbf {h}\rVert _{2,1}}\), which leads to
$$\begin{array}{*{20}l} \beta_{q,(\frac{4}{1-\kappa})^{\frac{q}{q-1}}k}^{2}(A)&=\min\limits_{\mathbf{h}\neq \mathbf{0}, k_{q}(\mathbf{h})\leq (\frac{4}{1-\kappa})^{\frac{q}{q-1}}k}\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}} \\ &>\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}}\left(\frac{(\frac{4}{1-\kappa})^{\frac{q}{q-1}}k}{k_{q}(\mathbf{h})}\right)^{1-\frac{1}{q}} \\ &\Rightarrow k_{q}(\mathbf{h})>(\frac{4}{1-\kappa})^{\frac{q}{q-1}}k \\ &\Rightarrow \lVert \mathbf{h}\rVert_{2,1}>\frac{4}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}. \end{array} $$
Combining (41), we have ∥h∥2,q<k1/q−1ϕk(x), which completes the proof for (21). Consequently, (22) is obtained via (21) and (41). □
Please contact the author for data requests.
BBP:
Block BP
BCMSV:
q-ratio block constrained minimal singular values
BDS:
Block DS
CMSV:
ℓ1-constrained minimal singular value
CS:
Compressed sensing
DS:
Dantzig selector
NSP:
Null space property
RIC:
Restricted isometry constant
RIP:
Restricted isometry property
D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).
E. J. Candes, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006).
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, 2013). https://doi.org/10.1007/978-0-8176-4948-7_1.
A. S. Bandeira, E. Dobriban, D. G. Mixon, W. F. Sawin, Certifying the restricted isometry property is hard. IEEE Trans. Inf. Theory. 59(6), 3448–3450 (2013).
A. M. Tillmann, M. E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory. 60(2), 1248–1259 (2014).
G. Tang, A. Nehorai, Performance analysis of sparse recovery based on constrained minimal singular values. IEEE Trans. Sig. Process. 59(12), 5734–5745 (2011).
S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20, 33–61 (1998).
E. J. Candes, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35(6), 2313–2351 (2007). https://doi.org/10.1214/009053606000001523.
R. Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58(1), 267–288 (1996). https://doi.org/10.1111/j.1467-9868.2011.00771.x.
G. Tang, A. Nehorai, Computable performance bounds on sparse recovery. IEEE Trans. Sig. Process. 63(1), 132–141 (2015).
Z. Zhou, J. Yu, Sparse recovery based on q-ratio constrained minimal singular values. Sig. Process. 155, 247–258 (2019).
Z. Zhou, J. Yu, On q-ratio CMSV for sparse recovery. Sig. Process. (2019). https://doi.org/10.1016/j.sigpro.2019.07.003.
R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory. 56(4), 1982–2001 (2010).
Y. C. Eldar, M. Mishali, Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory. 55(11), 5302–5316 (2009).
H. Zamani, H. Bahrami, P. Mohseni, in Proc. IEEE Biomedical Circuits and Systems Conf. (BioCAS). On the use of compressive sensing (CS) exploiting block sparsity for neural spike recording, (2016), pp. 228–231. https://doi.org/10.1109/biocas.2016.7833773.
Y. Gao, M. Ma, A new bound on the block restricted isometry constant in compressed sensing. J. Inequalities Appl. 2017(1), 174 (2017).
G. Tang, A. Nehorai, Semidefinite programming for computable performance bounds on block-sparsity recovery. IEEE Trans. Sig. Process. 64(17), 4455–4468 (2016).
Z. Zhou, J. Yu, Estimation of block sparsity in compressive sensing (2017). arXiv preprint arXiv:1701.01055.
M. E. Lopes, in International Conference on Machine Learning. Estimating unknown sparsity in compressed sensing, (2013), pp. 217–225. http://proceedings.mlr.press/v28/lopes13.pdf.
M. E. Lopes, Unknown sparsity in compressed sensing: denoising and inference. IEEE Trans. Inf. Theory. 62(9), 5145–5166 (2016).
Y. Plan, R. Vershynin, One-bit compressed sensing by linear programming. Commun. Pure Appl. Math. 66(8), 1275–1297 (2013).
R. Vershynin, in Sampling Theory, a Renaissance. Estimation in high dimensions: a geometric perspective (Springer, Cham, 2015), pp. 3–66.
M. Stojnic, F. Parvaresh, B. Hassibi, On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Sig. Process. 57, 3075–3085 (2009).
H. Liu, J. Zhang, X. Jiang, J. Liu, in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 9, ed. by Y. W. Teh, M. Titterington. The group Dantzig selector (PMLR, Chia Laguna Resort, Sardinia, 2010), pp. 461–468. http://proceedings.mlr.press/v9/liu10a.html.
R. Garg, R. Khandekar, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 15, ed. by G. Gordon, D. Dunson, and M. Dudík. Block-sparse solutions using kernel block RIP and its application to group lasso (PMLR, Fort Lauderdale, 2011), pp. 296–304. http://proceedings.mlr.press/v15/garg11a.html.
T. Lipp, S. Boyd, Variations and extension of the convex–concave procedure. Optim. Eng.17(2), 263–287 (2016).
N. Rao, B. Recht, R. Nowak, in Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 22, ed. by N. D. Lawrence, M. Girolami. Universal measurement bounds for structured sparse signal recovery (PMLR, La Palma, 2012), pp. 942–950. http://proceedings.mlr.press/v22/rao12.html.
This work is supported by the Swedish Research Council grant (Reg.No. 340-2013-5342).
Department of Mathematics and Mathematical Statistics, Umeå University, Umeå, Sweden
Jianfeng Wang & Jun Yu
Department of Statistics, Zhejiang University City College, Hangzhou, China
Zhiyong Zhou
The authors read and approved the final manuscript.
Correspondence to Jianfeng Wang.
Wang, J., Zhou, Z. & Yu, J. Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values. EURASIP J. Adv. Signal Process. 2019, 57 (2019) doi:10.1186/s13634-019-0653-1
q-ratio block sparsity
q-ratio block constrained minimal singular value
Convex-concave procedure
19934 - Connecting Supertrees
Time limit: 1 second
Memory limit: 1024 MB
Submissions: 64
Accepted: 28
Solvers: 25
Acceptance ratio: 45.455%
Gardens by the Bay is a large nature park in Singapore. In the park there are $n$ towers, known as supertrees. These towers are labelled $0$ to $n-1$. We would like to construct a set of zero or more bridges. Each bridge connects a pair of distinct towers and may be traversed in either direction. No two bridges should connect the same pair of towers.
A path from tower $x$ to tower $y$ is a sequence of one or more towers such that:
the first element of the sequence is $x$,
the last element of the sequence is $y$,
all elements of the sequence are distinct, and
each two consecutive elements (towers) in the sequence are connected by a bridge.
Note that by definition there is exactly one path from a tower to itself and the number of different paths from tower $i$ to tower $j$ is the same as the number of different paths from tower $j$ to tower $i$.
The lead architect in charge of the design wishes for the bridges to be built such that for all $0 \leq i, j \leq n-1$ there are exactly $p[i][j]$ different paths from tower $i$ to tower $j$, where $0 \leq p[i][j] \leq 3$.
Construct a set of bridges that satisfy the architect's requirements, or determine that it is impossible.
Implementation
You should implement the following procedure:
int construct(int[][] p)
$p$: an $n \times n$ array representing the architect's requirements.
If a construction is possible, this procedure should make exactly one call to build (see below) to report the construction, following which it should return $1$.
Otherwise, the procedure should return $0$ without making any calls to build.
This procedure is called exactly once.
The procedure build is defined as follows:
void build(int[][] b)
$b$: an $n \times n$ array, with $b[i][j]=1$ if there is a bridge connecting tower $i$ and tower $j$, or $b[i][j]=0$ otherwise.
Note that the array must satisfy $b[i][j]=b[j][i]$ for all $0 \leq i,j \leq n-1$ and $b[i][i] = 0$ for all $0 \leq i \leq n-1$.
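A minimal C++ sketch of the expected interface is given below; it handles only the subtask with $p[i][j] = 0$ or $1$, where each component must form a tree. It assumes the grader header supertrees.h declares build, as in the IOI 2020 C++ interface; the union-find helper and the path construction are illustrative and do not constitute a full solution.

```cpp
#include "supertrees.h"
#include <vector>

static std::vector<int> parent;

static int findRoot(int x) {
    return parent[x] == x ? x : parent[x] = findRoot(parent[x]);
}

int construct(std::vector<std::vector<int>> p) {
    const int n = static_cast<int>(p.size());
    parent.assign(n, 0);
    for (int i = 0; i < n; ++i) parent[i] = i;

    // Union towers that must be connected by some path (p[i][j] >= 1).
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (p[i][j] >= 1) parent[findRoot(i)] = findRoot(j);

    // Consistency check for the 0/1 subtask: within a component every pair
    // needs exactly one path; across components, zero paths.
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            if (p[i][j] > 1) return 0;  // outside this subtask's scope
            const bool same = (findRoot(i) == findRoot(j));
            if (same && i != j && p[i][j] != 1) return 0;
            if (!same && p[i][j] != 0) return 0;
        }

    // Connect each component as a simple path, which yields exactly one
    // path between any two towers of the component and no cross bridges.
    std::vector<std::vector<int>> b(n, std::vector<int>(n, 0));
    std::vector<int> last(n, -1);  // last tower seen for each component root
    for (int i = 0; i < n; ++i) {
        const int r = findRoot(i);
        if (last[r] != -1) b[i][last[r]] = b[last[r]][i] = 1;
        last[r] = i;
    }
    build(b);
    return 1;
}
```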
Constraints
$1 \leq n \leq 1000$
$p[i][i] = 1$ (for all $0 \leq i \leq n-1$)
$p[i][j] = p[j][i]$ (for all $0 \leq i, j \leq n-1$)
$0 \leq p[i][j] \leq 3$ (for all $0 \leq i, j \leq n-1$)
Example 1
Consider the following call:
construct([[1, 1, 2, 2], [1, 1, 2, 2], [2, 2, 1, 2], [2, 2, 2, 1]])
This means that there should be exactly one path from tower $0$ to tower $1$. For all other pairs of towers $(x, y)$, such that $0 \leq x < y \leq 3$, there should be exactly two paths from tower $x$ to tower $y$.
This can be achieved with $4$ bridges, connecting pairs of towers $(0, 1)$, $(1, 2)$, $(1, 3)$ and $(2, 3)$.
To report this solution, the construct procedure should make the following call:
build([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]])
It should then return $1$.
In this case, there are multiple constructions that fit the requirements, all of which would be considered correct.
Example 2

Consider the following call:

construct([[1, 0], [0, 1]])
This means that there should be no way to travel between the two towers. This can only be satisfied by having no bridges.
Therefore, the construct procedure should make the following call:
build([[0, 0], [0, 0]])
After which, the construct procedure should return $1$.
Example 3

Consider the following call:

construct([[1, 3], [3, 1]])

This means that there should be exactly $3$ paths from tower $0$ to tower $1$. This set of requirements cannot be satisfied. As such, the construct procedure should return $0$ without making any call to build.
Subtasks
$p[i][j] = 1$ (for all $0 \leq i, j \leq n-1$)
$p[i][j] = 0$ or $1$ (for all $0 \leq i, j \leq n-1$)
$p[i][j] = 0$ or $2$ (for all $i\neq j$, $0 \leq i, j \leq n-1$)
$0 \leq p[i][j] \leq 2$ (for all $0 \leq i, j \leq n-1$) and there is at least one construction satisfying the requirements.
Hint
supertrees.zip
Source: Olympiad > International Olympiad in Informatics > IOI 2020 Problem 2
Languages available for submission
C++17, C++14, C++20, C++14 (Clang), C++17 (Clang), C++20 (Clang)
Judging and other information
The sample cases are not judged.
Explain the phenomena of interference.
Define constructive interference for a double slit and destructive interference for a double slit.
Although Christiaan Huygens thought that light was a wave, Isaac Newton did not. Newton felt that there were other explanations for color, and for the interference and diffraction effects that were observable at the time. Owing to Newton's tremendous stature, his view generally prevailed. The fact that Huygens's principle worked was not considered evidence that was direct enough to prove that light is a wave. The acceptance of the wave character of light came many years later when, in 1801, the English physicist and physician Thomas Young (1773–1829) did his now-classic double slit experiment (see Figure 1).
Figure 1. Young's double slit experiment. Here pure-wavelength light sent through a pair of vertical slits is diffracted into a pattern on the screen of numerous vertical lines spread out horizontally. Without diffraction and interference, the light would simply make two lines on the screen.
Why do we not ordinarily observe wave behavior for light, such as observed in Young's double slit experiment? First, light must interact with something small, such as the closely spaced slits used by Young, to show pronounced wave effects. Furthermore, Young first passed light from a single source (the Sun) through a single slit to make the light somewhat coherent. By coherent, we mean waves are in phase or have a definite phase relationship. Incoherent means the waves have random phase relationships. Why did Young then pass the light through a double slit? The answer to this question is that two slits provide two coherent light sources that then interfere constructively or destructively. Young used sunlight, where each wavelength forms its own pattern, making the effect more difficult to see. We illustrate the double slit experiment with monochromatic (single λ) light to clarify the effect. Figure 2 shows the pure constructive and destructive interference of two waves having the same wavelength and amplitude.
Figure 2. The amplitudes of waves add. (a) Pure constructive interference is obtained when identical waves are in phase. (b) Pure destructive interference occurs when identical waves are exactly out of phase, or shifted by half a wavelength.
When light passes through narrow slits, it is diffracted into semicircular waves, as shown in Figure 3a. Pure constructive interference occurs where the waves are crest to crest or trough to trough. Pure destructive interference occurs where they are crest to trough. The light must fall on a screen and be scattered into our eyes for us to see the pattern. An analogous pattern for water waves is shown in Figure 3b. Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. These angles depend on wavelength and the distance between the slits, as we shall see below.
Figure 3. Double slits produce two coherent sources of waves that interfere. (a) Light spreads out (diffracts) from each slit, because the slits are narrow. These waves overlap and interfere constructively (bright lines) and destructively (dark regions). We can only see this if the light falls onto a screen and is scattered into our eyes. (b) Double slit interference pattern for water waves are nearly identical to that for light. Wave action is greatest in regions of constructive interference and least in regions of destructive interference. (c) When light that has passed through double slits falls on a screen, we see a pattern such as this. (credit: PASCO)
To understand the double slit interference pattern, we consider how two waves travel from the slits to the screen, as illustrated in Figure 4. Each slit is a different distance from a given point on the screen. Thus different numbers of wavelengths fit into each path. Waves start out from the slits in phase (crest to crest), but they may end up out of phase (crest to trough) at the screen if the paths differ in length by half a wavelength, interfering destructively as shown in Figure 4a. If the paths differ by a whole wavelength, then the waves arrive in phase (crest to crest) at the screen, interfering constructively as shown in Figure 4b. More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths [(1/2)λ, (3/2)λ, (5/2)λ, etc.], then destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths (λ, 2λ, 3λ, etc.), then constructive interference occurs.
Figure 4. Waves follow different paths from the slits to a common point on a screen. (a) Destructive interference occurs here, because one path is a half wavelength longer than the other. The waves start in phase but arrive out of phase. (b) Constructive interference occurs here because one path is a whole wavelength longer than the other. The waves start out and arrive in phase.
Take-Home Experiment: Using Fingers as Slits
Look at a light, such as a street lamp or incandescent bulb, through the narrow gap between two fingers held close together. What type of pattern do you see? How does it change when you allow the fingers to move a little farther apart? Is it more distinct for a monochromatic source, such as the yellow light from a sodium vapor lamp, than for an incandescent bulb?
Figure 5. The paths from each slit to a common point on the screen differ by an amount dsinθ, assuming the distance to the screen is much greater than the distance between slits (not to scale here).
Figure 5 shows how to determine the path length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between the slits, then the angle θ between the path and a line from the slits to the screen (see the figure) is nearly the same for each path. The difference between the paths is shown in the figure; simple trigonometry shows it to be d sin θ, where d is the distance between the slits. To obtain constructive interference for a double slit, the path length difference must be an integral multiple of the wavelength, or d sin θ = mλ, for m = 0, 1, −1, 2, −2, . . . (constructive).
Similarly, to obtain destructive interference for a double slit, the path length difference must be a half-integral multiple of the wavelength, or d sin θ = (m + 1/2)λ, for m = 0, 1, −1, 2, −2, . . . (destructive),
where λ is the wavelength of the light, d is the distance between slits, and θ is the angle from the original direction of the beam as discussed above. We call m the order of the interference. For example, m = 4 is fourth-order interference.
The equations for double slit interference imply that a series of bright and dark lines are formed. For vertical slits, the light spreads out horizontally on either side of the incident beam into a pattern called interference fringes, illustrated in Figure 6. The intensity of the bright fringes falls off on either side, being brightest at the center. The closer the slits are, the more the bright fringes spread out. We can see this by examining the equation d sin θ = mλ, for m = 0, 1, −1, 2, −2, . . . .
For fixed λ and m, the smaller d is, the larger θ must be, since sin θ = mλ/d. This is consistent with our contention that wave effects are most noticeable when the object the wave encounters (here, slits a distance d apart) is small. Small d gives large θ, hence a large effect.
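As a quick numerical illustration of this scaling (values chosen only for illustration): for λ = 633 nm and m = 1, sin θ = λ/d gives θ ≈ 0.36º when d = 0.100 mm, but θ ≈ 3.6º when d = 0.0100 mm, a tenfold larger angle for a tenfold smaller slit separation.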
Figure 6. The interference pattern for a double slit has an intensity that falls off with angle. The photograph shows multiple bright and dark lines, or fringes, formed by light passing through a double slit.
Example 1. Finding a Wavelength from an Interference Pattern
Suppose you pass light from a He-Ne laser through two slits separated by 0.0100 mm and find that the third bright line on a screen is formed at an angle of 10.95º relative to the incident beam. What is the wavelength of the light?
The third bright line is due to third-order constructive interference, which means that m = 3. We are given d = 0.0100 mm and θ = 10.95º. The wavelength can thus be found using the equation d sin θ = mλ for constructive interference.
The equation is d sin θ = mλ. Solving for the wavelength λ gives λ = (d sin θ)/m.
Substituting known values yields λ = (0.0100 mm)(sin 10.95º)/3 = 6.33 × 10−4 mm = 633 nm.
To three digits, this is the wavelength of light emitted by the common He-Ne laser. Not by coincidence, this red color is similar to that emitted by neon lights. More important, however, is the fact that interference patterns can be used to measure wavelength. Young did this for visible wavelengths. This analytical technique is still widely used to measure electromagnetic spectra. For a given order, the angle for constructive interference increases with λ, so that spectra (measurements of intensity versus wavelength) can be obtained.
Example 2. Calculating Highest Order Possible
Interference patterns do not have an infinite number of lines, since there is a limit to how big m can be. What is the highest-order constructive interference possible with the system described in the preceding example?
Strategy and Concept
The equation d sin θ = mλ (for m = 0, 1, −1, 2, −2, . . . ) describes constructive interference. For fixed values of d and λ, the larger m is, the larger sin θ is. However, the maximum value that sin θ can have is 1, for an angle of 90º. (Larger angles imply that light goes backward and does not reach the screen at all.) Let us find which m corresponds to this maximum diffraction angle.
Solving the equation d sin θ = mλ for m gives m = (d sin θ)/λ.
Taking sin θ = 1 and substituting the values of d and λ from the preceding example gives m = (0.0100 × 10−3 m)/(633 × 10−9 m) ≈ 15.8.
Therefore, the largest integer m can be is 15, or m = 15.
The number of fringes depends on the wavelength and slit separation. The number of fringes will be very large for large slit separations. However, if the slit separation becomes much greater than the wavelength, the intensity of the interference pattern changes so that the screen has two bright lines cast by the slits, as expected when light behaves like a ray. We also note that the fringes get fainter further away from the center. Consequently, not all 15 fringes may be observable.
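The following short sketch reproduces the arithmetic of Examples 1 and 2; the constants simply restate the worked values above.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double PI = std::acos(-1.0);
    const double d = 0.0100e-3;               // slit separation, meters
    const double theta = 10.95 * PI / 180.0;  // third-order angle, radians
    const int m = 3;                          // order of the bright line
    // From d sin(theta) = m * lambda:
    const double lambda = d * std::sin(theta) / m;
    // Highest order: largest integer m with sin(theta) = m*lambda/d <= 1.
    const int mMax = static_cast<int>(d / lambda);
    std::printf("wavelength = %.0f nm, highest order m = %d\n",
                lambda * 1e9, mMax);  // prints 633 nm and 15
    return 0;
}
```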
Young's double slit experiment gave definitive proof of the wave character of light.
An interference pattern is obtained by the superposition of light from two slits.
There is constructive interference when d sin θ = mλ (for m = 0, 1, −1, 2, −2, . . . ), where d is the distance between the slits, θ is the angle relative to the incident direction, and m is the order of the interference.
There is destructive interference when d sin θ = (m + 1/2)λ (for m = 0, 1, −1, 2, −2, . . . ).
Young's double slit experiment breaks a single light beam into two sources. Would the same pattern be obtained for two independent sources of light, such as the headlights of a distant car? Explain.
Suppose you use the same double slit to perform Young's double slit experiment in air and then repeat the experiment in water. Do the angles to the same parts of the interference pattern get larger or smaller? Does the color of the light change? Explain.
Is it possible to create a situation in which there is only destructive interference? Explain.
Figure 7 shows the central part of the interference pattern for a pure wavelength of red light projected onto a double slit. The pattern is actually a combination of single slit and double slit interference. Note that the bright spots are evenly spaced. Is this a double slit or single slit characteristic? Note that some of the bright spots are dim on either side of the center. Is this a single slit or double slit characteristic? Which is smaller, the slit width or the separation between slits? Explain your responses.
Figure 7. This double slit interference pattern also shows signs of single slit interference. (credit: PASCO)
At what angle is the first-order maximum for 450-nm wavelength blue light falling on double slits separated by 0.0500 mm?
Calculate the angle for the third-order maximum of 580-nm wavelength yellow light falling on double slits separated by 0.100 mm.
What is the separation between two slits for which 610-nm orange light has its first maximum at an angle of 30.0º?
Find the distance between two slits that produces the first minimum for 410-nm violet light at an angle of 45.0º.
Calculate the wavelength of light that has its third minimum at an angle of 30.0º when falling on double slits separated by 3.00 μm.
What is the wavelength of light falling on double slits separated by 2.00 μm if the third-order maximum is at an angle of 60.0º?
At what angle is the fourth-order maximum for the situation in Question 1?
What is the highest-order maximum for 400-nm light falling on double slits separated by 25.0 μm?
Find the largest wavelength of light falling on double slits separated by 1.20 μm for which there is a first-order maximum. Is this in the visible part of the spectrum?
What is the smallest separation between two slits that will produce a second-order maximum for 720-nm red light?
(a) What is the smallest separation between two slits that will produce a second-order maximum for any visible light? (b) For all visible light?
(a) If the first-order maximum for pure-wavelength light falling on a double slit is at an angle of 10.0º, at what angle is the second-order maximum? (b) What is the angle of the first minimum? (c) What is the highest-order maximum possible here?
Figure 8 shows a double slit located a distance x from a screen, with the distance from the center of the screen given by y. When the distance d between the slits is relatively large, there will be numerous bright spots, called fringes. Show that, for small angles (where sin θ ≈ θ, with θ in radians), the distance between fringes is given by Δy = xλ/d.
Figure 8. The distance between adjacent fringes is Δy = xλ/d, assuming the slit separation d is large compared with λ.
Using the result of the problem above, calculate the distance between fringes for 633-nm light falling on double slits separated by 0.0800 mm, located 3.00 m from a screen as in Figure 8.
Using the result of the problem two problems prior, find the wavelength of light that produces fringes 7.50 mm apart on a screen 2.00 m from double slits separated by 0.120 mm (see Figure 8).
coherent: waves are in phase or have a definite phase relationship
constructive interference for a double slit: the path length difference must be an integral multiple of the wavelength
destructive interference for a double slit: the path length difference must be a half-integral multiple of the wavelength
incoherent: waves have random phase relationships
order: the integer m used in the equations for constructive and destructive interference for a double slit
1. 0.516º
3. 1.22 × 10−6 m
7. 2.06º
9. 1200 nm (not visible)
11. (a) 760 nm; (b) 1520 nm
13. For small angles sin θ ≈ tan θ ≈ θ (in radians).
For two adjacent fringes, we have d sin θm = mλ and d sin θm+1 = (m + 1)λ.
Subtracting these equations gives
$$\begin{array}{l} d\left(\sin\theta_{m+1}-\sin\theta_{m}\right)=\left[\left(m+1\right)-m\right]\lambda \\ d\left(\theta_{m+1}-\theta_{m}\right)=\lambda \\ \tan\theta_{m}=\frac{y_{m}}{x}\approx \theta_{m}\Rightarrow d\left(\frac{y_{m+1}}{x}-\frac{y_{m}}{x}\right)=\lambda \\ d\frac{\Delta y}{x}=\lambda \Rightarrow \Delta y=\frac{x\lambda}{d}\end{array}$$
15. 450 nm
Young's Double Slit Experiment by Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
Importance of Position 170 in the Inhibition of GES-Type β-Lactamases by Clavulanic Acid
Hilary Frase, Marta Toth, Matthew M. Champion, Nuno T. Antunes, Sergei B. Vakulenko
Hilary Frase
Department of Chemistry and Biochemistry, University of Notre Dame, Notre Dame, Indiana 46556
Marta Toth
Matthew M. Champion
Nuno T. Antunes
Sergei B. Vakulenko
For correspondence: [email protected]
DOI: 10.1128/AAC.01292-10
Bacterial resistance to β-lactam antibiotics (penicillins, cephalosporins, carbapenems, etc.) is commonly the result of the production of β-lactamases. The emergence of β-lactamases capable of turning over carbapenem antibiotics is of great concern, since these are often considered the last-resort antibiotics in the treatment of life-threatening infections. β-Lactamases of the GES family are extended-spectrum enzymes that include members that have acquired carbapenemase activity through a single amino acid substitution at position 170. We investigated inhibition of the GES-1, -2, and -5 β-lactamases by the clinically important β-lactamase inhibitor clavulanic acid. While GES-1 and -5 are susceptible to inhibition by clavulanic acid, GES-2 shows the greatest susceptibility. This is the only variant to possess the canonical asparagine at position 170. Asparagine at this position, as opposed to glycine (GES-1) or serine (GES-5), leads to a higher affinity for clavulanic acid (Ki = 5 μM), a higher rate constant for inhibition, and a lower partition ratio (r ≈ 20). Asparagine at position 170 also results in the formation of stable complexes, such as a cross-linked species and a hydrated aldehyde. In contrast, serine at position 170 leads to formation of a long-lived trans-enamine species. These studies provide new insight into the importance of the residue at position 170 in determining the susceptibility of GES enzymes to clavulanic acid.
Gram-negative bacteria are responsible for more than 30% of all hospital-acquired infections and up to 70% of infections within intensive care units (14, 18). β-Lactam antibiotics, including penicillins, cephalosporins, and carbapenems, are commonly used in the treatment of these infections (30). This class of antibiotics functions as mechanism-based inhibitors of cell wall biosynthesis by forming an irreversible, covalent adduct with penicillin-binding proteins (PBPs), the enzymes responsible for cross-linking the cell wall (12). When unable to maintain the integrity of their cell wall, the bacteria are unable to reproduce (a bacteriostatic effect) or survive (a bactericidal effect), or both (12). Gram-negative bacteria use multiple mechanisms to become resistant to these antibiotics, including decreased permeability of the outer cell membrane to antibiotics, mutation of PBPs to decrease their affinity for the antibiotic, and/or expression of β-lactamases, enzymes which degrade the antibiotic (41). β-Lactamases are the means most commonly used by Gram-negative bacteria in achieving resistance.
β-Lactamases are classified as belonging to one of four molecular classes (A to D) based on their amino acid sequence (12). Members of class A, C, and D possess an active-site serine residue which catalyzes hydrolysis of the β-lactam bond. Members of class B are metalloenzymes that hydrolyze the β-lactam bond using an active-site zinc ion. Most clinical isolates resistant to β-lactam antibiotics harbor enzymes belonging to class A. These enzymes were initially narrow spectrum, capable of hydrolyzing penicillins and early generation cephalosporins, but selective pressure has resulted in the appearance of hundreds of mutants with the ability to hydrolyze β-lactam antibiotics of every known class (4, 26).
Clavulanic acid, tazobactam, and sulbactam are β-lactamase inhibitors developed to extend the utility of β-lactam antibiotics for which resistance has previously been developed (11). These inhibitors function as mechanism-based inactivators of β-lactamases. When coadministered with a β-lactam antibiotic, β-lactamase inhibitors inactivate the β-lactamase, thus preventing the antibiotic from being hydrolyzed by the enzyme. Examples of such combinations in clinical use include amoxicillin-clavulanic acid, ampicillin-sulbactam, piperacillin-tazobactam, and ticarcillin-clavulanic acid.
Carbapenem antibiotics are often considered the last-resort β-lactams due to their broad spectrum of antimicrobial activity, including both Gram-positive and -negative pathogens, and their high potency (19). Furthermore, unlike with penicillins and cephalosporins, clinical use of carbapenems has not resulted in the selection of carbapenem-resistant variants within the most abundant types of class A β-lactamases, such as TEM, CTX, and SHV (11). Instead, new class A β-lactamases, the carbapenemases, have appeared which are able to produce resistance to carbapenems alone or in conjunction with other resistance mechanisms. These enzymes are classified into six families (GES, KPC, IMI/NMC, SME, BIC, and SFC), which share 32 to 70% amino acid sequence identity (15, 39). Most of these enzymes are rare in clinical isolates, but members of the GES and KPC families are commonly found and now pose a serious threat to our ability to treat life-threatening infections (38).
The GES family of class A β-lactamases are found in 4 of the 10 most common pathogens causing hospital-acquired infections (18). The first variant described, GES-1, was found in Klebsiella pneumoniae in 1998 and classified as an extended-spectrum β-lactamase (ESBL) (27). Since that time, 14 additional variants have been identified (GES-2 to -15) in Pseudomonas aeruginosa, Serratia marcescens, Escherichia coli, Enterobacter cloacae, and Acinetobacter baumannii from clinical isolates originating in France, Portugal, Spain, Brazil, Argentina, Netherlands, South Africa, Japan, Korea, Greece, China, the United Kingdom, Belgium, and Poland (http://www.lahey.org/studies/other.asp#table1). GES-3, -7 to -9, -11, and -13 are also classified as ESBLs, but GES-2, -4, -5, and -6 have gained the ability to hydrolyze carbapenems (20, 22, 40).
Carbapenemase activity by GES enzymes is attributed to a single amino acid substitution at position 170 (Ambler numbering used) (13). The canonical residue at this position in most class A β-lactamases is an asparagine; however, exceptions are known. For example, glycine can be found in certain Streptomyces species, serine can be found in some Bacteroides and Streptomyces species, and histidine can be found in VEB-1, PER-type β-lactamases, and various Bacteroides species (1, 28). GES-1 has a glycine at position 170 and shows negligible carbapenemase activity (27). GES-2 and -5 contain a single amino acid substitution to an asparagine and serine, respectively, which confers carbapenemase activity (2, 29, 37) and makes them clinically significant. Biochemical characterization of GES-1, -2, and -5 has also revealed that they show some susceptibility to β-lactamase inhibitors, with GES-1 and -2 having 50% inhibitory concentrations (IC50s) of 5 and 1 μM, respectively (27, 29). In order to further investigate the importance of this residue in inhibition by the clinically used inhibitor clavulanic acid, we performed kinetic inhibition studies, UV difference (UVD) spectroscopy, and mass spectrometry (MS) of GES-clavulanic acid complexes of the GES-1, -2, and -5 β-lactamases.
Plasmids. A constitutive expression vector, pHF016, containing the gene for the GES-1 (pHF:GES-1), GES-2 (pHF:GES-2), or GES-5 (pHF:GES-5) β-lactamases, cloned between the unique NdeI and HindIII sites, was used for MIC determinations as previously described (13). For protein expression, pET24a(+) containing the gene for GES-1 (pET:GES-1), GES-2 (pET:GES-2), or GES-5 (pET:GES-5), cloned between the unique NdeI and HindIII sites, was used as previously described (13).
MIC determinations. The MICs of β-lactam antibiotics were determined by the broth microdilution method as recommended by the Clinical and Laboratory Standards Institute (9). GES-1, -2, or -5 was expressed in E. coli JM83 using the plasmid pHF:GES-1, -2, or -5, respectively. E. coli JM83 harboring pHF016 was used as a control. The MICs were determined in Mueller-Hinton II broth (Difco) using a bacterial inoculum of 5 × 105 CFU/ml. All plates were incubated at 37°C for 16 to 20 h before the results were interpreted.
Expression and purification of GES enzymes. To express GES-1, E. coli BL21(DE3) was transformed with pET:GES-1, and cells containing the construct were selected on LB agar supplemented with 60 μg of kanamycin/ml. Selected cells were grown in LB medium supplemented with 60 μg of kanamycin/ml at 37°C and 220 rpm until the optical density at 600 nm (OD600) reached 0.4. IPTG (isopropyl-β-d-thiogalactopyranoside) was added to a final concentration of 1 mM, and the cells were further grown at 25°C and 220 rpm for 24 h. The cells were pelleted by centrifugation at 20,000 × g and 4°C, and the medium was concentrated by centrifugal filtration at 3,000 × g and 4°C using a Centricon Plus 70 (Millipore) concentrator with a 10-kDa molecular mass cutoff filter. The concentrated medium was then dialyzed against buffer A (20 mM Tris [pH 7.5]) and fractionated on a DEAE (Bio-Rad) column (2.5 by 22 cm) using a linear gradient of NaCl (0 to 0.3 M) in buffer A. The fractions containing GES-1 were pooled and dialyzed against 20 mM HEPES (pH 7.6) and stored at 4°C. The enzyme concentration was determined spectrophotometrically using a BCA (bicinchoninic acid) protein assay kit (Pierce), using bovine serum albumin as a standard. SDS-PAGE showed the enzyme purity to be >95%. GES-2 and -5 were expressed and purified in the same manner, using the plasmids pET:GES-2 and pET:GES-5, respectively.
Data collection and analysis. All UV/Vis spectrophotometric data were collected on a Cary 50 spectrophotometer (Varian) at 22°C. Analyses were performed using the nonlinear regression program Prism 5 (GraphPad Software, Inc.) with data obtained from experiments performed at least in triplicate.
Determination of dissociation constants. The inhibitor dissociation constant (Ki) for clavulanic acid with GES-1, -2, and -5 was determined using nitrocefin (Δε500 = +15,900 M−1 cm−1) as a reporter substrate. Reactions containing 50 mM NaPi (pH 7.0), 100 mM NaCl, 200 and 300 μM (GES-1 and -5) or 10 and 20 μM (GES-2) nitrocefin, and various concentrations of the inhibitor were initiated by the addition of the enzyme (10 nM final for GES-1, 20 nM final for GES-2, and 760 pM final for GES-5). The absorbance was monitored at 500 nm, and the steady-state velocities were determined from the linear phase of the reaction time courses. The initial steady-state velocity data were plotted as a function of inhibitor concentration, and the dissociation constant was determined using the method of Dixon (10).
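For readers who prefer to script the analysis, the sketch below illustrates the Dixon construction: for a competitive inhibitor, plots of 1/v versus [I] at two substrate concentrations are linear and intersect at [I] = −Ki. The velocities shown are invented placeholders, not the values measured here; the paper's actual fits were done in Prism.

```python
# Minimal sketch of a Dixon analysis (hypothetical velocities, not the paper's
# data). For a competitive inhibitor, 1/v versus [I] at two substrate
# concentrations gives two lines intersecting at [I] = -Ki.
import numpy as np

def dixon_ki(I, v_low_S, v_high_S):
    """Return Ki from the intersection of the two Dixon lines."""
    m1, b1 = np.polyfit(I, 1.0 / v_low_S, 1)   # slope/intercept, lower [S]
    m2, b2 = np.polyfit(I, 1.0 / v_high_S, 1)  # slope/intercept, higher [S]
    x_int = (b2 - b1) / (m1 - m2)              # solve m1*x + b1 = m2*x + b2
    return -x_int

I = np.array([0.0, 2.0, 5.0, 10.0, 20.0])          # clavulanic acid, μM
v_200 = np.array([1.00, 0.78, 0.58, 0.41, 0.26])   # rates at 200 μM nitrocefin (a.u.)
v_300 = np.array([1.20, 1.00, 0.80, 0.60, 0.41])   # rates at 300 μM nitrocefin (a.u.)
print(f"Ki ≈ {dixon_ki(I, v_200, v_300):.1f} μM")
```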
Determination of the rate constant for inhibitor inactivation. The rate constant describing inactivation by clavulanic acid (kclav) with GES-1, -2, and -5 was determined using nitrocefin as a reporter substrate. This rate constant is different from kinact, which is often reported in the literature, because it represents formation of both transiently and irreversibly inhibited species and not only irreversible inactivation. Reactions containing 50 mM NaPi (pH 7.0), 100 mM NaCl, nitrocefin (100 μM for GES-1, 20 μM for GES-2, and 400 μM for GES-5), and various concentrations of the inhibitor were initiated by the addition of the enzyme (10 nM final for GES-1, 20 nM final for GES-2, and 760 pM final for GES-5). The absorbance was monitored at 500 nm, and the time courses were fit with equation 1,

$$A_t = A_0 + v_{ss}t + \frac{v_i - v_{ss}}{k_{\mathrm{inter}}}\left(1 - e^{-k_{\mathrm{inter}}t}\right) \qquad (1)$$

where At is the absorbance at time t, A0 is the initial absorbance, vss is the steady-state velocity, vi is the initial velocity, and kinter is the rate constant for the interconversion between vi and vss. The values for kinter were plotted as a function of inhibitor concentration and fit with equation 2,

$$k_{\mathrm{inter}} = \frac{k_{\mathrm{clav}}[I]}{K_I + [I]} \qquad (2)$$

where kinter is as described above, kclav is the rate constant describing inactivation, [I] is the concentration of inhibitor, and KI is the apparent concentration of inhibitor required to reach kinter = kclav/2.
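The same two-step fit can be reproduced outside Prism. The sketch below uses scipy.optimize.curve_fit on synthetic data generated from equation 1 and then extracts kclav and KI from equation 2; all numerical values are placeholders chosen only to exercise the fits.

```python
# Minimal sketch of fitting equations 1 and 2 (synthetic data; the study used
# Prism 5 for the actual nonlinear regression).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def eq1(t, A0, vss, vi, kinter):
    # A_t = A0 + vss*t + (vi - vss)/kinter * (1 - exp(-kinter*t))
    return A0 + vss * t + (vi - vss) / kinter * (1.0 - np.exp(-kinter * t))

def eq2(I, kclav, KI):
    # kinter = kclav*[I] / (KI + [I])
    return kclav * I / (KI + I)

# Fit a single progress curve to recover kinter:
t = np.linspace(0.0, 300.0, 200)                                # s
A = eq1(t, 0.02, 1e-4, 2e-3, 0.03) + rng.normal(0, 2e-4, t.size)
popt, _ = curve_fit(eq1, t, A, p0=[0.0, 1e-4, 1e-3, 0.01])
print(f"fitted kinter ≈ {popt[3]:.3f} s⁻¹")

# Repeat at several inhibitor concentrations, then fit kinter vs [I] with eq 2:
I = np.array([5.0, 10.0, 25.0, 50.0, 100.0])                    # μM
kinter_obs = eq2(I, 0.08, 20.0) + rng.normal(0, 1e-3, I.size)
(kclav, KI), _ = curve_fit(eq2, I, kinter_obs, p0=[0.05, 10.0])
print(f"kclav ≈ {kclav:.3f} s⁻¹, KI ≈ {KI:.1f} μM")
```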
Determination of the partition ratio. The partition ratio, r (kcat/kinact), was determined by the titration method (34). Various molar ratios of inhibitor and enzyme (100 nM), up to 1,500:1, were incubated overnight at 4°C in 50 mM NaPi (pH 7.0) and 100 mM NaCl. The remaining activity was measured after a 100-fold dilution of the enzyme (1 nM final) into 50 mM NaPi (pH 7.0) and 100 mM NaCl containing excess nitrocefin (600 μM final). The absorbance was monitored at 500 nm, and the steady-state velocities were determined from the linear phase of the reaction time courses. The inhibitor/enzyme ratio resulting in ≥90% inactivation was designated the partition ratio, as previously defined (6, 24).
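Reading out the titration reduces to finding the inhibitor:enzyme ratio at which residual activity first drops to ≤10%. The sketch below shows that bookkeeping on invented residual-activity data; the 1,500:1 upper bound mirrors the experiment, but the intermediate numbers are placeholders.

```python
# Minimal sketch of reading the partition ratio off a titration (fake data).
import numpy as np

ratios   = np.array([0, 50, 100, 200, 400, 800, 1500])            # [I]:[E]
residual = np.array([1.00, 0.85, 0.64, 0.38, 0.15, 0.06, 0.02])   # fraction active

idx = np.argmax(residual <= 0.10)      # first titrated ratio with >=90% inactivation
print("partition ratio r ≈", ratios[idx])

# Optional: interpolate between the bracketing points for a finer estimate
# (np.interp needs ascending x, hence the reversed arrays):
r_interp = np.interp(0.10, residual[::-1], ratios[::-1])
print(f"interpolated r ≈ {r_interp:.0f}")
```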
Detection and kinetic characterization of enamine species by UV spectroscopy. The cis- and trans-enamine reaction intermediates, formed during inhibition by β-lactamase inhibitors, are characterized by chromophores absorbing at wavelengths greater than 250 nm (3, 8, 32, 35). Reactions containing 50 mM NaPi (pH 7.0), 100 mM NaCl, and 500 μM clavulanic acid were initiated by the addition of 5 μM GES-1, -2, or -5. The absorbance from 200 to 350 nm was monitored every 0.1 min over 30 min. Subtracting the spectrum of the enzyme alone from the enzyme-clavulanic acid spectra generated the desired difference spectra.
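Generating the difference spectra is a simple array subtraction once the raw spectra are exported. The sketch below assumes a (time × wavelength) absorbance matrix and uses random placeholder arrays in place of real spectrometer output.

```python
# Minimal sketch of computing UVD spectra and tracking the 264-nm band
# (placeholder arrays stand in for exported spectrometer data).
import numpy as np

wavelengths = np.arange(200, 351)                      # nm, 200-350 as in the text
spectra_mix = np.random.rand(300, wavelengths.size)    # enzyme + clavulanate, (time x λ)
spectrum_enz = np.random.rand(wavelengths.size)        # enzyme alone, (λ,)

uvd = spectra_mix - spectrum_enz                       # difference spectra (broadcast)
a264 = uvd[:, wavelengths == 264].ravel()              # trans-enamine band vs time
t = np.arange(a264.size) * 0.1                         # min (one spectrum per 0.1 min)
print(f"peak trans-enamine signal at t ≈ {t[np.argmax(a264)]:.1f} min")
```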
In order to compare the initial formation of the trans-enamine, we used a SFA-20 stopped-flow apparatus (Hi-Tech Scientific, Salisbury, United Kingdom). Reaction mixtures contained 50 mM NaPi (pH 7.0), 100 mM NaCl, 500 μM clavulanic acid, and 5 μM GES-1, -2, or -5. The trans-enamine species was detected at 264 nm. All time courses were normalized to an absorbance of zero at t = 0 s.
Detection of GES-clavulanic acid reaction intermediates by electrospray ionization (ESI) MS. Reactions containing 50 mM NaPi (pH 7.0), 100 mM NaCl, and 70 μM GES-1, -2, or -5 were preincubated in the presence or absence of 1 mM clavulanic acid. A 15-μl aliquot of each reaction was diluted 1:4 into 0.5% formic acid in H2O prior to liquid chromatography-MS (LC-MS) analysis. A 25-μl aliquot of the quenched reaction, containing 12.5 μg of enzyme, was injected onto a Poroshell C3 column (2.1 by 75 mm; Agilent) at 440 μl/min using a Dionex RSLC high-pressure liquid chromatography system (A = H2O with 0.1% formic acid, B = acetonitrile with 0.1% formic acid). The gradient was held at 15% B for 4.1 min, followed by a linear gradient from 15 to 90% B over 5.4 min. The gradient was held constant at 90% B for 3 min and then re-equilibrated in 15% B for 6.5 min. The sample was diverted from the MS source during the first 4 min postinjection.
LC-MS was performed on a Bruker MicroQTOF QqTOF instrument. Single MS spectra from m/z 300 to 3,000 were acquired at a spectral sum rate of 1 Hz, and two spectra were averaged per displayed/saved spectrum. Calibration was provided externally through infused Agilent ESI-low tune mix from m/z 322.0481 to 2,721.8948, using a six-point parameterized fit (MicroTOFControl). Instrument parameters were as follows: positive mode with a bias of 4,500 V on the capillary, a −500-V end plate offset, 3.5 bar of nebulizer gas, dry gas at 8 liters/min, and a dry gas temperature of 180°C.
The charge/mass deconvolution was performed using Bayesian reconstruction (ABSciex) from exported ASCII data (FlexAnalysis). The following mass reconstruction parameters were used: mass range considered, 20,000 to 35,000 Da; signal-to-noise ratio, 5:1; step size, 0.1 Da; and number of iterations, 20. To confirm the findings, the data were also charge deconvolved within FlexAnalysis; the masses obtained by the two methods were identical (data not shown). The average mass error for the GES-1, -2, and -5 measurements was 0.008% (predicted average masses were 29,216.9, 29,273.9, and 29,246.9 Da for the GES-1, -2, and -5 polypeptides, respectively).
MICs of β-lactam-clavulanic acid combinations against E. coli JM83 producing GES β-lactamases. Previous studies have shown that in the presence of clavulanic acid, the MICs of amoxicillin and/or ticarcillin are reduced in strains expressing GES enzymes (27, 29, 37). We evaluated the MIC values for β-lactam-inhibitor combinations used in clinical practice, i.e., amoxicillin-clavulanic acid and ticarcillin-clavulanic acid, against E. coli JM83 expressing GES-1, -2, and -5. We used our constitutive expression vector, which allowed direct comparison of the three enzymes under an identical promoter (Table 1). The MICs of amoxicillin against strains expressing the GES variants are all ≥2,048 μg/ml, but the MICs are each reduced in the presence of clavulanic acid. GES-2 is the most susceptible to inhibition by clavulanic acid, with the MIC value for amoxicillin-clavulanic acid approaching background levels. The MICs for ticarcillin also decrease in the presence of clavulanic acid. Strains expressing GES-1 and -5 show 32- and 16-fold reductions, respectively, in the MIC for ticarcillin, whereas GES-2 expression results in a 256-fold decrease in MIC. The identity of the amino acid at position 170 in GES-1, -2, and -5 thus influences their resistance to inhibition by clavulanic acid, with the canonical asparagine resulting in the highest susceptibility.
Table 1. MICs of β-lactams for E. coli JM83 producing various GES enzymes
Kinetic characterization of clavulanic acid inhibition of GES enzymes. The IC50 values for clavulanic acid with GES-1 and -2 were previously shown to be in the low micromolar range (27, 29). We evaluated the true dissociation constants (Ki) for clavulanic acid from GES-1, -2, and -5 (Table 2). The affinity of clavulanic acid for GES-1 and -5 is lower than for GES-2. This increased affinity of the GES-2 β-lactamase for clavulanic acid may, in part, account for the increased susceptibility of GES-2 to inhibition. Compared to other class A β-lactamases, the Ki for GES-2 (5.0 μM) is 4- and 12-fold higher than those for the TEM-1 (1.4 μM) and SHV-1 (0.43 μM) β-lactamases, respectively (16, 36). Compared to other carbapenemases, the Ki for GES-2 is comparable to that for NMC-A (3.2 μM) and 3- and 26-fold higher than the Ki values for KPC-2 (1.5 μM) and SME-1 (0.19 μM), respectively (21, 31, 43). Unlike typical class A β-lactamases, GES-1 and -5 do not contain the canonical Asn at position 170, which functions in anchoring the deacylation water molecule in the active site. This residue is also known to be important in substrate binding, as reflected in the lower Km values for penicillin and cephalosporin substrates for GES-2 (2, 27, 29). The lower Ki value for clavulanic acid with GES-2 implies that this residue is important in the binding of β-lactamase inhibitors as well.
Table 2. Kinetic parameters for inhibition of GES enzymes by clavulanic acid
In order to evaluate the efficacy of inhibition of GES enzymes by clavulanic acid, we monitored the loss of enzyme activity over time using a continuous assay. This method measures the loss of enzyme activity due to formation of both transiently and irreversibly inactive species. We have defined the rate constant describing this inactivation as kclav. This is different from the kinetic parameter kinact, often reported in the literature, which represents the formation of only irreversibly inactivated species (34) and is determined by a discontinuous method. We chose to use kclav in comparing the ability of clavulanic acid to inhibit GES enzymes, since it is more physiologically relevant. The values of kclav for GES-1 (0.018 s−1) and GES-5 (0.015 s−1) are roughly 4- to 5-fold lower than that for GES-2 (0.082 s−1) (Table 2). Despite this difference, the half-life for inactivation would still be less than 1 min for all of the enzymes. This enhanced rate constant would also contribute to the increased susceptibility of GES-2 to clavulanic acid, in addition to the increased affinity. The only other class A β-lactamase for which, to our knowledge, this parameter has been measured is the carbapenemase KPC-2 (0.027 s−1), which has a value of kclav more similar to those of GES-1 and -5 than to that of GES-2 (25). Not surprisingly, like GES-1 and -5, KPC-2 is also clinically resistant to amoxicillin in the presence of clavulanic acid (42).
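The sub-minute half-life claim follows directly from the fitted rate constants, assuming simple first-order loss of activity at saturating inhibitor:

$$t_{1/2} = \frac{\ln 2}{k_{\mathrm{clav}}} \approx \frac{0.693}{0.015\ \mathrm{s^{-1}}} \approx 46\ \mathrm{s}$$

for GES-5, the slowest of the three; the same calculation gives approximately 39 s for GES-1 and 8 s for GES-2.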
The partition ratio (r) is defined as the number of molecules of inhibitor hydrolyzed by the enzyme prior to irreversible inactivation and is represented by the ratio kcat/kinact (34). The better an enzyme is at evading irreversible inactivation, by hydrolyzing the β-lactam bond of the inhibitor and rendering it inactive, the higher the value of the partition ratio. The partition ratio for GES-2 (r = 20) is at least 10-fold lower than those for GES-1 (r = 300) and GES-5 (r = 450) (Table 2). Other non-carbapenemase class A β-lactamases, such as TEM-1 (r = 120) and SHV-1 (r = 40), also have relatively low partition ratios for clavulanic acid (7, 17). Carbapenemases typically have higher partition ratios for clavulanic acid, with NMC-A and KPC-2 having values of 428 and 2,500, respectively (21, 25). Of the three GES variants studied, GES-5 has the highest carbapenemase activity and GES-1 the lowest, yet the two have similar partition ratios. This implies that carbapenemase activity may not always occur in conjunction with higher partition ratios for clavulanic acid.
Low values for the partition ratio can result from the β-lactamase inhibitor being a poor substrate for the enzyme (i.e., a low kcat), from an increased ability of the inhibitor to irreversibly inactivate the enzyme (i.e., a high kinact), or from both. Previous studies have shown that having an Asn at position 170 in GES enzymes leads to a lower turnover of penicillins and cephalosporins (2, 27, 29). Therefore, it is likely that the decreased partition ratio for clavulanic acid with GES-2 is due to a decrease in the value of kcat.
Detection of enamine intermediates by UV/Vis spectroscopy. Inhibition of class A enzymes by β-lactamase inhibitors proceeds through a variety of species, both transient and irreversible (Fig. 1). The importance of transient species during inhibition has been well established (11). Since both the cis-enamine (compound 4) and trans-enamine (compound 5) species contain a chromophore (8, 23, 32), we studied the UVD spectra of GES β-lactamases in the presence of clavulanic acid to evaluate whether either of these transient species was important during inhibition. When monitored over 30 min, the UVD spectrum for each enzyme revealed a single peak (λmax = 264 nm) that varied in intensity over time, first increasing and then decreasing (Fig. 2). We have assigned this peak to the trans-enamine species (compound 5), since the cis isomer would be predicted to absorb closer to 300 nm; only minor changes in the spectrum could be detected at 300 nm. We monitored the change in absorbance at 264 nm over time in order to compare the relative rate and amount of the trans-enamine formed by each variant (Fig. 2D). The trans-enamine species is formed rapidly in all three enzymes upon addition of clavulanic acid. GES-5 forms 2.5-fold more of the trans-enamine species than GES-1 or -2. The formation and disappearance of the trans-enamine are similar in GES-1 and -2, but this species is longer lived in GES-5. These kinetics are quite different from those seen in the class A β-lactamase SHV-1 when inhibited by clavulanic acid (35). In that enzyme, formation of the trans-enamine is slow, not reaching a maximum until 15 min, and the species was stable over 60 min, unlike with GES enzymes. In GES-1 and -2, the trans-enamine species would not be the predominant inhibitory complex in vivo, since it rapidly disappears. However, in GES-5 the trans-enamine species survives for at least 10 min. Since this corresponds to approximately half the doubling time of the bacterium, it is reasonable to assume that the trans-enamine species contributes to physiologically relevant inhibition of GES-5.
Fig. 1. Proposed mechanism for the inhibition of GES enzymes by clavulanic acid.
Fig. 2. UVD spectra of GES-1 (A), GES-2 (B), and GES-5 (C) in the presence of clavulanic acid at time t = 0 min (pink), 0.5 min (orange), 1 min (red), 5 min (green), 20 min (blue), and 30 min (purple). (D) Change in absorbance over time at 264 nm, representing the trans-enamine species, for GES-1 (pink), GES-2 (orange), and GES-5 (blue) in the presence of clavulanic acid.
Since formation of the trans-enamine species was rapid, we used a stopped-flow apparatus to monitor the first 60 s of the reaction in more detail (Fig. 3). There is a small, but reproducible, lag in the time course for all GES enzymes. This indicates that a step prior to formation of the trans-enamine, such as substrate binding, hydrolysis of the β-lactam ring, or tautomerization of the imine to the trans-enamine, limits the steady-state reaction. The formation of the trans-enamine is fastest with GES-2 and slowest with GES-5. The disappearance of this species is also fastest in GES-2. This implies that the species is short-lived in GES-2, again consistent with the idea that it is not a physiologically relevant complex during inhibition in vivo.
Fig. 3. Representative time course at 264 nm for the production of the trans-enamine species with clavulanic acid for GES-1 (solid black line), GES-2 (solid light gray line), and GES-5 (solid dark gray line) as measured by stopped-flow analysis.
Detection of GES-clavulanic acid intermediates by ESI-MS. In order to detect other species important in the inhibition of GES enzymes by clavulanic acid, we used ESI-MS. The deconvoluted mass spectra reveal that immediately upon mixing, GES-1, -2, and -5 form three major species (Fig. 4). These species are consistent with those previously seen with TEM-2 and SHV-1 (5, 33). Thus, we propose that they represent a cross-linked species between Ser70 and Ser130 (Δ+52, compound 10), either an aldehyde (Δ+70, compound 8) or an irreversibly inactivated complex (Δ+70, compound 11), and a hydrated aldehyde (Δ+88, compound 9). In addition to the main peaks observed, there are also a few minor peaks, which we ascribe to hydration products of the main peaks because of their spacing of Δ+18. There is little free enzyme remaining upon mixing of the enzyme with clavulanic acid, which is consistent with rapid inhibition.
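Assigning the deconvoluted peaks amounts to matching observed mass shifts against the proposed adducts. The sketch below encodes the Δ values quoted above; the ±3-Da tolerance and the example mass are assumptions chosen to be consistent with the ~0.008% mass error reported for these ~29-kDa proteins.

```python
# Minimal sketch of matching deconvoluted mass shifts to proposed clavulanate
# adducts (Δ values from the text; tolerance and example mass are assumptions).
proposed = {
    52: "Ser70-Ser130 cross-linked species (compound 10)",
    70: "aldehyde (compound 8) or irreversibly inactivated complex (compound 11)",
    88: "hydrated aldehyde (compound 9)",
}

def assign(observed_mass, native_mass, tol=3.0):
    """Match an observed deconvoluted mass to a proposed adduct within tol Da."""
    delta = observed_mass - native_mass
    for shift, species in proposed.items():       # main adducts first
        if abs(delta - shift) <= tol:
            return f"Δ+{shift}: {species}"
    for shift in proposed:                        # then +18-Da hydration satellites
        if abs(delta - (shift + 18)) <= tol:
            return f"Δ+{shift + 18}: hydration product of the Δ+{shift} species"
    return f"unassigned (Δ{delta:+.1f} Da)"

native_ges1 = 29216.9    # predicted GES-1 mass from the text, Da
print(assign(29286.5, native_ges1))    # hypothetical peak -> matches the Δ+70 species
```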
Fig. 4. Deconvoluted mass spectra of GES-1 (A), GES-2 (B), and GES-5 (C) immediately upon mixing (left) and after 20 min (right) with clavulanic acid. The arrow indicates the position of the native enzyme, and all mass changes are relative to this peak.
After 20 min of incubation with clavulanic acid, GES-1 and -5 have begun to regenerate the free enzyme, whereas the GES-2 spectrum has remained essentially unchanged. This implies that the GES-2-clavulanic acid complexes are more stable than those with GES-1 and -5. This is in agreement with the MIC and kinetic data, which show GES-2 to be more susceptible to inhibition by clavulanic acid. In addition, since the GES-2/clavulanic acid complexes 8 to 11 rapidly form and remain stable for over 20 min, these are the species which would be physiologically relevant to inhibition in vivo. This is in contrast to GES-5, where the trans-enamine is more physiologically relevant.
Conclusion. GES β-lactamases are important targets in the development of novel therapeutic agents. This family of enzymes, through a single amino acid substitution, has expanded its resistance profile to include carbapenem antibiotics. We have investigated the susceptibility of E. coli harboring the GES β-lactamases (GES-1, -2, and -5) to β-lactams in the presence of clavulanic acid. Bacteria producing GES-2, an enzyme containing the canonical Asn at position 170, were more susceptible to β-lactams in the presence of this inhibitor than those producing GES-1 and -5. This is due to both an increased affinity of the enzyme for clavulanic acid and a low partition ratio. Mass spectral data support the idea that the physiologically relevant species in vivo are the cross-linked species, the aldehyde or inactive complex, and the hydrated aldehyde. This is in contrast to GES-5, a carbapenemase possessing a Ser at position 170, in which the trans-enamine is the physiological species. GES-1, which is not a carbapenemase and contains a Gly at position 170, is similar to GES-5 both in its ability to confer resistance to β-lactams in the presence of clavulanic acid and in its kinetics of inhibition. These data provide new insights into the importance of position 170 in determining the susceptibility of GES enzymes to inhibition by clavulanic acid.
The MS data are based upon work supported by NSF CHE-0741793.
Received 21 September 2010.
Returned for modification 19 November 2010.
Accepted 1 January 2011.
Accepted manuscript posted online 10 January 2011.
Copyright © 2011, American Society for Microbiology
Ambler, R. P., et al. 1991. A standard numbering scheme for the class A β-lactamases. Biochem. J. 276(Pt. 1):269-270.
Bae, I. K., et al. 2007. Genetic and biochemical characterization of GES-5, an extended-spectrum class A β-lactamase from Klebsiella pneumoniae. Diagn. Microbiol. Infect. Dis. 58:465-468.
Bonomo, R. A., et al. 2001. Inactivation of CMY-2 β-lactamase by tazobactam: initial mass spectroscopic characterization. Biochim. Biophys. Acta 1547:196-205.
Bradford, P. A. 2001. Extended-spectrum β-lactamases in the 21st century: characterization, epidemiology, and detection of this important resistance threat. Clin. Microbiol. Rev. 14:933-951.
Brown, R. P., R. T. Aplin, and C. J. Schofield. 1996. Inhibition of TEM-2 β-lactamase from Escherichia coli by clavulanic acid: observation of intermediates by electrospray ionization mass spectrometry. Biochemistry 35:12421-12432.
Bush, K., C. Macalintal, B. A. Rasmussen, V. J. Lee, and Y. Yang. 1993. Kinetic interactions of tazobactam with β-lactamases from all major structural classes. Antimicrob. Agents Chemother. 37:851-858.
Canica, M. M., et al. 1998. Phenotypic study of resistance of β-lactamase-inhibitor-resistant TEM enzymes which differ by naturally occurring variations and by site-directed substitution at Asp276. Antimicrob. Agents Chemother. 42:1323-1328.
Cartwright, S. J., and A. F. Coulson. 1979. A semi-synthetic penicillinase inactivator. Nature 278:360-361.
Clinical and Laboratory Standards Institute. 2006. Methods for dilution antimicrobial susceptibility tests for bacteria that grow aerobically: approved standard, 7th ed. Clinical and Laboratory Standards Institute, Wayne, PA.
Dixon, M. 1953. The determination of enzyme inhibitor constants. Biochem. J. 55:170-171.
Drawz, S. M., and R. A. Bonomo. 2010. Three decades of β-lactamase inhibitors. Clin. Microbiol. Rev. 23:160-201.
Fisher, J. F., S. O. Meroueh, and S. Mobashery. 2005. Bacterial resistance to β-lactam antibiotics: compelling opportunism, compelling opportunity. Chem. Rev. 105:395-424.
Frase, H., Q. Shi, S. A. Testero, S. Mobashery, and S. B. Vakulenko. 2009. Mechanistic basis for the emergence of catalytic competence against carbapenem antibiotics by the GES family of β-lactamases. J. Biol. Chem. 284:29509-29513.
Gaynes, R., and J. R. Edwards. 2005. Overview of nosocomial infections caused by gram-negative bacilli. Clin. Infect. Dis. 41:848-854.
Girlich, D., L. Poirel, and P. Nordmann. 2010. Novel Ambler class A carbapenem-hydrolyzing β-lactamase from a Pseudomonas fluorescens isolate from the Seine River, Paris, France. Antimicrob. Agents Chemother. 54:328-332.
Grace, M. E., K. P. Fu, F. J. Gregory, and P. P. Hung. 1987. Interaction of clavulanic acid, sulbactam and cephamycin antibiotics with β-lactamases. Drugs Exp. Clin. Res. 13:145-148.
Helfand, M. S., et al. 2003. Understanding resistance to β-lactams and β-lactamase inhibitors in the SHV β-lactamase: lessons from the mutagenesis of SER-130. J. Biol. Chem. 278:52724-52729.
Hidron, A. I., et al. 2008. NHSN annual update: antimicrobial-resistant pathogens associated with healthcare-associated infections: annual summary of data reported to the National Healthcare Safety Network at the Centers for Disease Control and Prevention, 2006-2007. Infect. Control Hosp. Epidemiol. 29:996-1011.
Kesado, T., T. Hashizume, and Y. Asahi. 1980. Anti-bacterial activities of a new stabilized thienamycin, N-formimidoyl thienamycin, in comparison with other antibiotics. Antimicrob. Agents Chemother. 17:912-917.
Kotsakis, S. D., et al. 2010. GES-13, a β-lactamase variant possessing Lys-104 and Asn-170 in Pseudomonas aeruginosa. Antimicrob. Agents Chemother. 54:1331-1333.
Mariotte-Boyer, S., M. H. Nicolas-Chanoine, and R. Labia. 1996. A kinetic study of NMC-A β-lactamase, an Ambler class A carbapenemase also hydrolyzing cephamycins. FEMS Microbiol. Lett. 143:29-33.
Moubareck, C., S. Bremont, M. C. Conroy, P. Courvalin, and T. Lambert. 2009. GES-11, a novel integron-associated GES variant in Acinetobacter baumannii. Antimicrob. Agents Chemother. 53:3579-3581.
Ostercamp, D. L. 1970. Vinylogous imides. II. Ultraviolet spectra and the application of Woodward's rules. J. Org. Chem. 35:1632-1641.
Padayatti, P. S., et al. 2006. Rational design of a β-lactamase inhibitor achieved via stabilization of the trans-enamine intermediate: 1.28 Å crystal structure of wt SHV-1 complex with a penam sulfone. J. Am. Chem. Soc. 128:13235-13242.
Papp-Wallace, K. M., et al. 2010. Inhibitor resistance in the KPC-2 β-lactamase, a preeminent property of this class A β-lactamase. Antimicrob. Agents Chemother. 54:890-897.
Perez, F., A. Endimiani, K. M. Hujer, and R. A. Bonomo. 2007. The continuing challenge of ESBLs. Curr. Opin. Pharmacol. 7:459-469.
Poirel, L., I. Le Thomas, T. Naas, A. Karim, and P. Nordmann. 2000. Biochemical sequence analyses of GES-1, a novel class A extended-spectrum β-lactamase, and the class 1 integron In52 from Klebsiella pneumoniae. Antimicrob. Agents Chemother. 44:622-632.
Poirel, L., et al. 1999. Molecular and biochemical characterization of VEB-1, a novel class A extended-spectrum β-lactamase encoded by an Escherichia coli integron gene. Antimicrob. Agents Chemother. 43:573-581.
Poirel, L., et al. 2001. GES-2, a class A β-lactamase from Pseudomonas aeruginosa with increased hydrolysis of imipenem. Antimicrob. Agents Chemother. 45:2598-2603.
Poole, K. 2004. Resistance to β-lactam antibiotics. Cell. Mol. Life Sci. 61:2200-2223.
Queenan, A. M., et al. 2000. SME-type carbapenem-hydrolyzing class A β-lactamases from geographically diverse Serratia marcescens strains. Antimicrob. Agents Chemother. 44:3035-3039.
Rizwi, I., A. K. Tan, A. L. Fink, and R. Virden. 1989. Clavulanate inactivation of Staphylococcus aureus β-lactamase. Biochem. J. 258:205-209.
Saves, I., et al. 1995. The asparagine to aspartic acid substitution at position 276 of TEM-35 and TEM-36 is involved in the β-lactamase resistance to clavulanic acid. J. Biol. Chem. 270:18240-18245.
Silverman, R. B. 1988. Mechanism-based enzyme inactivation: chemistry and enzymology. CRC Press, Boca Raton, FL.
Sulton, D., et al. 2005. Clavulanic acid inactivation of SHV-1 and the inhibitor-resistant S130G SHV-1 β-lactamase. Insights into the mechanism of inhibition. J. Biol. Chem. 280:35528-35536.
Vakulenko, S., and D. Golemi. 2002. Mutant TEM β-lactamase producing resistance to ceftazidime, ampicillins, and β-lactamase inhibitors. Antimicrob. Agents Chemother. 46:646-653.
Vourli, S., et al. 2004. Novel GES/IBC extended-spectrum β-lactamase variants with carbapenemase activity in clinical enterobacteria. FEMS Microbiol. Lett. 234:209-213.
Walsh, T. R. 2008. Clinically significant carbapenemases: an update. Curr. Opin. Infect. Dis. 21:367-371.
Walther-Rasmussen, J., and N. Hoiby. 2007. Class A carbapenemases. J. Antimicrob. Chemother. 60:470-482.
Weldhagen, G. F. 2006. GES: an emerging family of extended spectrum β-lactamases. Clin. Microbiol. Newsl. 28:145-149.
Wilke, M. S., A. L. Lovering, and N. C. Strynadka. 2005. β-Lactam antibiotic resistance: a current structural perspective. Curr. Opin. Microbiol. 8:525-533.
Yigit, H., et al. 2001. Novel carbapenem-hydrolyzing β-lactamase, KPC-1, from a carbapenem-resistant strain of Klebsiella pneumoniae. Antimicrob. Agents Chemother. 45:1151-1161.
Yigit, H., et al. 2003. Carbapenem-resistant strain of Klebsiella oxytoca harboring carbapenem-hydrolyzing β-lactamase KPC-2. Antimicrob. Agents Chemother. 47:3881-3889.
Importance of Position 170 in the Inhibition of GES-Type β-Lactamases by Clavulanic Acid. Antimicrobial Agents and Chemotherapy, March 2011, 55(4):1556-1562; DOI: 10.1128/AAC.01292-10